- Python 3.13+
- Ollama installed and running (for future integration; currently mocked)
- pip (Python package installer)
- Git
```bash
git clone https://github.com/Conwenu/AskTemoc_Backend.git
cd path/to/project-root
```

Make sure you're in a virtual environment:
```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

Install the required packages:
```bash
pip install -r requirements.txt
```

Ensure Ollama is installed and running locally:
```bash
ollama run llama3  # Or any other model you plan to use
```

You can start the server using:
```bash
uvicorn app.main:app --reload
```

- Visit the Swagger UI at http://localhost:8000/docs
You can test the API using curl, the Thunder Client extension, Postman, or directly in the Swagger UI.
Endpoint: `POST /api/query/`
Request body field: `query`
Using curl:
```bash
curl -X POST "http://127.0.0.1:8000/api/query/" \
  -H "Content-Type: application/json" \
  -d '{"query": "What is FastAPI?"}'
```

Example response (mocked):

```json
{
  "answer": "Answer: What is FastAPI?"
}
```

Or, if you have successfully installed Ollama with the llama3.1:8b model:
```json
{
  "answer": "FastAPI is a modern, high-performance Python web framework for building APIs."
}
```

If you haven't already installed the Crawl4AI library (it should be listed in requirements.txt), run:
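The same request can also be made from Python using only the standard library. This is a minimal sketch; the host, port, and JSON shapes are taken from the curl example above.

```python
import json
import urllib.request

API_URL = "http://127.0.0.1:8000/api/query/"  # default uvicorn host/port

def build_request(query: str, url: str = API_URL) -> urllib.request.Request:
    """Build the same POST request the curl example sends."""
    payload = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask(query: str) -> str:
    """Send the question and return the `answer` field of the JSON response."""
    with urllib.request.urlopen(build_request(query)) as resp:
        return json.loads(resp.read())["answer"]

if __name__ == "__main__":
    # Requires the server from the previous section to be running.
    print(ask("What is FastAPI?"))
```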
```bash
pip install crawl4ai
```

After installing dependencies, run:
```bash
crawl4ai-setup
```

Then verify the installation with:
```bash
crawl4ai-doctor
```