iris_ml_api

An ML model, served through an API, that infers Iris species from sepal and petal measurements. It is trained on the famous Iris dataset.

The purpose of this project is to practice building and serving a scalable ML app as a lightweight, async, dockerized API service. The containers could in principle be orchestrated with a tool like Kubernetes, and the model is small enough that each instance keeps its own copy in memory.
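As a sketch, the container image for such a service might look like the following. The file names and the `uvicorn` entrypoint are assumptions for illustration, not taken from the repository:

```dockerfile
# Hypothetical Dockerfile for the API service
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# One uvicorn worker per container; scale horizontally by adding containers,
# each holding its own in-memory copy of the model
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```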

Model

The model was built with CatBoost, a fast gradient-boosting library with a rich Python API, which makes it a good fit for this project.

Backend

The backend is a FastAPI app.

Install Just to execute helper script recipes

Just is a handy way to save and run project-specific commands.

Install it to your $PATH by following the instructions here.

Once installed, any recipe found in the justfile can be run from the project root directory with:

```shell
just <recipe name>
```
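For illustration, a justfile for this kind of project might contain recipes like the following. The recipe names and commands are hypothetical, not taken from the repository:

```just
# Hypothetical recipes; the real justfile may differ

# Run the API locally with auto-reload
dev:
    uvicorn app.main:app --reload

# Run the test suite
test:
    pytest

# Build the container image
docker-build:
    docker build -t iris_ml_api .
```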

Backend TODO

  • create hello world fastapi app with test
  • dockerize it
  • load model on startup
  • create model endpoint that validates args but does nothing
  • connect model and endpoint
  • dockerize the tests and make them stateful so they can instantiate the model
  • write tests for predict route
  • speed tests for dockerized app

More pie-in-the-sky goals

  • make a database
    • create a dummy register endpoint
    • create an empty database to store things
    • add API key support
