A Flask-based web application that provides an interactive network troubleshooting assistant, powered by Ollama for LLM capabilities and ChromaDB for document retrieval.
- Interactive chat interface for network troubleshooting
- Intelligent problem type detection
- Guided information gathering system
- Document indexing and management for knowledge retrieval
- Context-aware responses leveraging your network documentation
- Environment-based configuration system
- Built-in document viewer and editor for your network documentation
Note: Sample documents are available in the `network_doc_samples` directory. All of the example documents were generated by claude.ai and are not based on any real network.
- Python 3.8 or higher
- Ollama running locally or on a remote server
- Network documentation (optional, but recommended)
The application uses a custom Ollama model specifically tailored for network troubleshooting:
1. Install Ollama from the official website (https://ollama.com/)

2. Create the custom model using the included Modelfile:

   ```bash
   ollama create network-assistant -f Modelfile
   ```

3. Verify the model is created:

   ```bash
   ollama list
   ```
The model is based on Llama 3.2 or phi4 and configured with a specialized system prompt to provide expert-level network troubleshooting assistance. It's designed to help IT professionals diagnose and resolve networking issues across various domains. You can of course use any model you like, but this has only been tested on Llama 3.2 and phi4.
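For reference, an Ollama Modelfile of this kind pairs a `FROM` line with a `SYSTEM` prompt. The sketch below is illustrative only; the repository's actual Modelfile (base model choice, parameters, and prompt wording) will differ:

```
FROM llama3.2

# Sampling parameters are optional; the value here is a placeholder.
PARAMETER temperature 0.7

SYSTEM """
You are an expert network troubleshooting assistant. Help IT professionals
diagnose and resolve networking issues, explain concepts clearly, and adapt
your answers to the user's technical skill level.
"""
```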
1. Clone the repository:

   ```bash
   git clone https://github.com/jtbwatson/network-assistant.git
   cd network-assistant
   ```

2. Create a virtual environment and activate it:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install the required packages:

   ```bash
   pip install -r requirements.txt
   ```

4. Create a `.env` file by copying the sample:

   ```bash
   cp .env.sample .env
   ```

5. Edit the `.env` file to match your environment:

   ```
   # Primary configuration
   OLLAMA_HOST=http://localhost:11434
   OLLAMA_MODEL=network-assistant
   ```

6. Start the application:

   ```bash
   python app.py
   ```

7. Open a web browser and go to `http://localhost:5000`
- Utilizes Llama 3.2 or phi4 as the foundational language model
- Customized with a specialized system prompt for network troubleshooting
- Configured to provide technical, contextually relevant assistance
The model is engineered to:
- Diagnose complex network issues
- Provide step-by-step technical guidance
- Explain networking concepts clearly
- Offer both immediate and long-term solutions
- Adapt responses to the user's technical skill level
The application uses environment variables for configuration, which can be set in a .env file:
| Variable | Description | Default Value |
|---|---|---|
| `DOCS_DIR` | Directory for storing network documentation | `./network_docs` |
| `DB_DIR` | Directory for ChromaDB storage | `./chroma_db` |
| `CHUNK_SIZE` | Size of text chunks for indexing | `512` |
| `CHUNK_OVERLAP` | Overlap between chunks | `50` |
| `SEARCH_RESULTS` | Number of search results to retrieve | `5` |
| `OLLAMA_HOST` | URL of your Ollama instance | `http://localhost:11434` |
| `OLLAMA_MODEL` | Name of the Ollama model to use | `network-assistant` |
| `USE_OLLAMA_EMBEDDINGS` | Whether to use Ollama for generating embeddings | `true` |
| `OLLAMA_EMBEDDING_MODEL` | Model to use for embeddings when using Ollama | `nomic-embed-text` |
| `OLLAMA_EMBEDDING_BATCH_SIZE` | Batch size for embedding generation | `10` |
| `PORT` | Port to run the application on | `5000` |
| `DEBUG_MODE` | Enable Flask debug mode | `False` |
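For reference, here is a minimal sketch of how settings like these are typically loaded with python-dotenv (variable names follow the table above; the application's actual loading code may differ):

```python
import os
from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # read values from .env into the process environment

DOCS_DIR = os.getenv("DOCS_DIR", "./network_docs")
DB_DIR = os.getenv("DB_DIR", "./chroma_db")
CHUNK_SIZE = int(os.getenv("CHUNK_SIZE", "512"))
CHUNK_OVERLAP = int(os.getenv("CHUNK_OVERLAP", "50"))
OLLAMA_HOST = os.getenv("OLLAMA_HOST", "http://localhost:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "network-assistant")
USE_OLLAMA_EMBEDDINGS = os.getenv("USE_OLLAMA_EMBEDDINGS", "true").lower() == "true"
DEBUG_MODE = os.getenv("DEBUG_MODE", "False").lower() == "true"
```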
For best results, place your network documentation in the `network_docs` directory (or your custom configured directory). Supported formats:

- Markdown (`.md`)
- Text files (`.txt`)
- YAML configuration files (`.yaml`, `.yml`)
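To illustrate, discovering documents in these formats can be as simple as the following sketch (the `find_documents` helper is illustrative, not part of the app's actual API):

```python
from pathlib import Path
from typing import List

SUPPORTED_EXTENSIONS = {".md", ".txt", ".yaml", ".yml"}

def find_documents(docs_dir: str = "./network_docs") -> List[Path]:
    """Recursively collect files whose extension is a supported format."""
    return [
        p for p in Path(docs_dir).rglob("*")
        if p.is_file() and p.suffix.lower() in SUPPORTED_EXTENSIONS
    ]
```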
- Indexing: Click the "Index Documents" button in the UI to make your documents searchable (see the sketch after this list)
- Viewing: Browse and view documents directly in the application
- Editing: Edit and update documents through the built-in document viewer/editor
- Reindexing: Use "Force Reindex All" when you want to refresh the entire document database
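Behind the scenes, indexing along these lines splits each file into overlapping chunks (see `CHUNK_SIZE` and `CHUNK_OVERLAP` above) and stores them in ChromaDB. A simplified sketch, where the collection name and helper names are assumptions:

```python
from typing import List

import chromadb

def chunk_text(text: str, size: int = 512, overlap: int = 50) -> List[str]:
    """Split text into fixed-size chunks that overlap to preserve context."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text), 1), step)]

# DB_DIR path and collection name are assumptions for this sketch.
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("network_docs")

def index_document(doc_id: str, text: str) -> None:
    """Store one document's chunks so they can be retrieved during chat."""
    chunks = chunk_text(text)
    collection.add(
        ids=[f"{doc_id}-{i}" for i in range(len(chunks))],
        documents=chunks,
        metadatas=[{"source": doc_id}] * len(chunks),
    )
```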
The application supports two methods for generating embeddings:
1. **Ollama Embeddings (Default)**: Uses the specified Ollama model to generate embeddings
   - Provides consistent embedding quality across different environments
   - Set `USE_OLLAMA_EMBEDDINGS=true` and specify `OLLAMA_EMBEDDING_MODEL`

2. **Local Embeddings**: Uses SentenceTransformer locally
   - Works without additional Ollama configuration
   - Set `USE_OLLAMA_EMBEDDINGS=false` to use this method
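In practice, the toggle between the two methods might look like the sketch below (it assumes the `requests` and `sentence-transformers` packages; the local model name is an assumption, since the app's choice isn't documented here):

```python
from typing import List

import requests
from sentence_transformers import SentenceTransformer

OLLAMA_HOST = "http://localhost:11434"

def embed_with_ollama(texts: List[str], model: str = "nomic-embed-text") -> List[List[float]]:
    """Request one embedding per text from Ollama's /api/embeddings endpoint."""
    embeddings = []
    for text in texts:
        resp = requests.post(
            f"{OLLAMA_HOST}/api/embeddings",
            json={"model": model, "prompt": text},
        )
        resp.raise_for_status()
        embeddings.append(resp.json()["embedding"])
    return embeddings

def embed_locally(texts: List[str]) -> List[List[float]]:
    """Embed texts with a local SentenceTransformer model (model name assumed)."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    return model.encode(texts).tolist()
```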
The application maintains conversation history within each session to provide context-aware responses. This helps the assistant remember previous questions and build on prior interactions.
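For example, per-session history can be kept as a list of chat messages and replayed on every request, as in this sketch against Ollama's `/api/chat` endpoint (the route and structure are illustrative, not the app's actual code):

```python
import requests
from flask import Flask, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for Flask's session support

OLLAMA_HOST = "http://localhost:11434"
OLLAMA_MODEL = "network-assistant"

@app.post("/chat")
def chat():
    history = session.get("history", [])
    history.append({"role": "user", "content": request.json["message"]})

    # Send the full history so the model sees prior questions and answers.
    resp = requests.post(
        f"{OLLAMA_HOST}/api/chat",
        json={"model": OLLAMA_MODEL, "messages": history, "stream": False},
    )
    resp.raise_for_status()
    answer = resp.json()["message"]["content"]

    history.append({"role": "assistant", "content": answer})
    session["history"] = history
    return {"reply": answer}
```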
If you see "ChromaDB not available" messages:
- Make sure you've installed the required packages
- Check if there are compatibility issues with your Python version
- Consider manually installing ChromaDB: `pip install chromadb`
If you see "Could not connect to Ollama":
- Verify that Ollama is running
- Check that the `OLLAMA_HOST` setting points to the correct URL
- Make sure your network allows connections to the Ollama server
- Verify that the model specified in `OLLAMA_MODEL` is available on your Ollama instance (the snippet below checks this)
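A quick way to check reachability and model availability from a Python shell (assuming the `requests` package; `/api/tags` lists the models installed on an Ollama instance):

```python
import requests

OLLAMA_HOST = "http://localhost:11434"  # match your OLLAMA_HOST setting

resp = requests.get(f"{OLLAMA_HOST}/api/tags", timeout=5)
resp.raise_for_status()  # a connection error here means Ollama isn't reachable
print([m["name"] for m in resp.json().get("models", [])])
```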
If documents aren't being indexed properly:
- Check file permissions on the `DOCS_DIR` directory
- Ensure documents are in supported formats (`.md`, `.txt`, `.yaml`, `.yml`)
- Try using "Force Reindex All" to rebuild the index
- Check the application logs for specific error messages
- This application is designed for internal network use only
- There is no authentication built into the application
- Do not expose this service to the public internet without adding proper security measures
MIT License
Contributions are welcome! Please feel free to submit a Pull Request.