An ML model that infers the Iris species from sepal and petal measurements, served via an API. It uses the famous Iris dataset.
The purpose of this project is to practice building and serving a scalable ML app as a lightweight, async, Dockerized API service. The containers could in principle be orchestrated with a tool like Kubernetes, and the model is light enough for each instance to keep its own copy in memory.
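A minimal Dockerfile for such a service might look like the sketch below (the file layout and the `app.main:app` module path are assumptions for illustration):

```dockerfile
# Lightweight Python base image keeps each container small
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and the serialized model
COPY . .

# One async server per container; scale by running more containers
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```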
The model was built using CatBoost, a fast and performant gradient-boosting library with a rich Python API, which makes it a good fit for this project.
The backend is a FastAPI app.
Just is a handy way to save and run project-specific commands.
Install it to your $PATH by following the instructions here.
Once installed, any recipe in the justfile can be run from the project root directory with
`just <recipe name>`

Project roadmap:

- create a hello world FastAPI app with a test
- dockerize it
- load the model on startup
- create a model endpoint that validates its arguments but does nothing
- connect the model to the endpoint
- dockerize the tests and make them stateful so they can instantiate the model
- write tests for the predict route
- speed-test the dockerized app
- make a database
- create a dummy register endpoint
- create an empty database to store things
- add API key support
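The recipes mentioned above might look something like this in a justfile (recipe names and commands are illustrative, not the project's actual justfile):

```just
# Run the test suite
test:
    pytest -v

# Build the Docker image
build:
    docker build -t iris-api .

# Run the containerized service on port 8000
run: build
    docker run --rm -p 8000:8000 iris-api

# Start a local dev server with auto-reload
dev:
    uvicorn app.main:app --reload
```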