An advanced backend service that provides intelligent, AI-driven analysis and optimization suggestions for Dockerfiles. The service uses a distributed task queue to handle long-running image builds asynchronously, ensuring the API remains responsive and scalable.
- AI-Powered Suggestions: Integrates the Google Gemini 2.5 Pro API to deliver high-level, context-aware advice on Dockerfile security, efficiency, and best practices.
- Dynamic Image Inspection: Builds the Docker image to perform a technical analysis, providing a detailed breakdown of layer sizes.
- Asynchronous Processing: Utilizes a Celery and Redis task queue to run slow processes like `docker build` in the background, preventing API timeouts.
- RESTful API: Exposes simple endpoints to submit analysis jobs and retrieve the results, allowing for easy integration with other tools and CI/CD pipelines.
- Python: The core programming language for the entire application.
- Flask: A lightweight web framework used to build and serve the REST API.
- Celery: A distributed task queue that manages and executes background jobs (e.g., AI analysis, image building).
- Redis: An in-memory data store that serves as both the message broker and the result backend for Celery.
- Docker SDK for Python: The library used to programmatically interact with the Docker daemon to build and inspect images (a short build-and-inspect sketch follows this list).
- Google Gemini API: The AI model used to provide intelligent analysis and optimization recommendations.
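To make the Docker SDK item concrete, the build-and-inspect step can be done in a few lines. This is a minimal sketch assuming a Dockerfile in the current directory; the tag name and output format are illustrative, not the service's actual code:

```python
import docker

# Connect to the local Docker daemon (honours DOCKER_HOST, else the default socket).
client = docker.from_env()

# Build an image from the Dockerfile in the current directory.
image, build_logs = client.images.build(path=".", tag="analysis-target:latest")

# image.history() returns one entry per layer, newest first, with byte sizes.
for layer in image.history():
    size_mb = layer["Size"] / (1024 * 1024)
    instruction = (layer.get("CreatedBy") or "")[:60]
    print(f"{size_mb:9.2f} MB  {instruction}")
```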
The application is composed of three main services that run concurrently:
- Flask API Server: The public-facing entry point. It receives requests, creates a new analysis job, and immediately returns a `task_id` (see the sketch after this list).
- Redis Server: The message broker. It holds the queue of jobs waiting to be processed and stores the final results.
- Celery Worker: The background process. It constantly monitors the Redis queue, picks up new jobs, performs the AI analysis and Docker build, and saves the result back to Redis.
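In code, this flow typically looks like the sketch below. The task name `run_analysis`, the single-file layout, and the Redis URL are illustrative assumptions, not the project's actual identifiers:

```python
from celery import Celery
from flask import Flask, jsonify

app = Flask(__name__)

# Redis serves as both the message broker and the result backend.
celery_app = Celery(
    "optimizer",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/0",
)

@celery_app.task
def run_analysis():
    # Stand-in for the real work: AI analysis plus the Docker build.
    return {"ai_suggestions": "...", "size_analysis": {}, "build_error": None}

@app.route("/inspect", methods=["POST"])
def inspect():
    task = run_analysis.delay()           # enqueue the job in Redis
    return jsonify({"task_id": task.id})  # respond immediately

@app.route("/results/<task_id>")
def results(task_id):
    res = celery_app.AsyncResult(task_id)  # look up the job's state in Redis
    payload = {"status": res.state, "state": res.state}
    if res.ready():
        payload["result"] = res.result
    return jsonify(payload)
```

Because `delay()` only enqueues a message, the POST handler returns in milliseconds regardless of how long the build takes.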
Triggers a new analysis for the Dockerfile located in the project's root directory.
- Endpoint: `/inspect`
- Method: `POST`
- Example Request:
  ```bash
  curl -X POST http://127.0.0.1:5000/inspect
  ```
- Success Response:
  ```json
  { "task_id": "some-unique-task-id" }
  ```
Fetches the status and result of a previously submitted job.
- Endpoint: `/results/<task_id>`
- Method: `GET`
- Example Request:
  ```bash
  curl http://127.0.0.1:5000/results/some-unique-task-id
  ```
- Success Response:
  ```json
  {
    "status": "SUCCESS",
    "state": "SUCCESS",
    "result": {
      "ai_suggestions": "...",
      "size_analysis": { "...": "..." },
      "build_error": null
    }
  }
  ```
- Python 3.9+
- Docker Desktop (or Docker Engine)
- Redis
- Clone the repository:
  ```bash
  git clone https://github.com/Nihalrt/docker-optimizer.git
  cd docker-optimizer
  ```
- Create and activate a virtual environment:
  ```bash
  python3 -m venv venv
  source venv/bin/activate
  ```
- Install dependencies:
  ```bash
  pip install -r requirements.txt
  ```
- Set up your Gemini API Key:
  - Get an API key from Google AI Studio.
  - Create a file named `.env` in the project root.
  - Add your API key to the `.env` file:
    ```
    GEMINI_API_KEY="YOUR_API_KEY_HERE"
    ```
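At startup the application presumably reads this key from the environment. A minimal sketch of that step, assuming the `python-dotenv` and `google-generativeai` packages (the prompt text is illustrative):

```python
import os

import google.generativeai as genai
from dotenv import load_dotenv

# Pull GEMINI_API_KEY from the .env file into the process environment.
load_dotenv()
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Ask Gemini 2.5 Pro for Dockerfile advice.
model = genai.GenerativeModel("gemini-2.5-pro")
with open("Dockerfile") as f:
    response = model.generate_content(
        "Suggest security and image-size improvements for this Dockerfile:\n" + f.read()
    )
print(response.text)
```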
You must run three separate services in three different terminal windows.
- Terminal 1: Start Redis. If you installed Redis via Homebrew, you can start it as a service:
  ```bash
  brew services start redis
  ```
  If not, run the Redis server directly:
  ```bash
  redis-server
  ```
  (A quick Python connectivity check is sketched after this list.)
- Terminal 2: Start the Flask API Server. Navigate to the project directory, activate the virtual environment, and run:
  ```bash
  # (Inside the project directory)
  source venv/bin/activate
  python -m api.app
  ```
- Terminal 3: Start the Celery Worker. Navigate to the project directory, activate the virtual environment, and run:
  ```bash
  # (Inside the project directory)
  source venv/bin/activate
  celery -A workers.worker_config:celery_app worker --loglevel=info
  ```
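Before submitting jobs, you can confirm the Redis server from Terminal 1 is reachable. A quick check using `redis-py`, the client library Celery's Redis transport relies on:

```python
import redis

# ping() returns True when the server at localhost:6379 answers.
r = redis.Redis(host="localhost", port=6379)
print(r.ping())
```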
Important for macOS users: If you experience Docker connection issues from within VS Code's integrated terminal, run the Celery worker in a separate, standard macOS Terminal application.
Once all three services are running, you can use the API endpoints as described above.