n8n sample project created for the I2A2 Autonomous Agents course
Requires:
- docker
- docker-compose
Use `make init` to install Docker over Colima.
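The Colima VM can be sized with the `cpu`, `memory`, and `disk` options listed under `make help`; the values below are only illustrative, not project defaults.

```sh
# Hypothetical sizing for the Colima VM; adjust to your machine
make init cpu=4 memory=8 disk=60
```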
- Set `gpu_enabled=true` in the `.env` to have GPU support (see the `.env` sketch below) - only works for Linux and Windows.
- For macOS, set `native_mac=true` in the `.env` to have GPU support.
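For reference, a minimal `.env` sketch for GPU support might look like the following; set only the variable that matches your platform and keep any other entries the project's `.env` already defines.

```sh
# Linux / Windows only
gpu_enabled=true

# macOS only (use instead of gpu_enabled)
# native_mac=true
```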
You can also try `make bootstrap` to install dependencies, start the containers, and load the default model. Use the same variables for GPU support, as shown below.
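For example, picking the flag that matches your platform:

```sh
# Linux / Windows
make bootstrap gpu_enabled=true

# macOS
make bootstrap native_mac=true
```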
Start the project with `make start` or `docker-compose up -d`.
Access http://localhost:5678 and log in with your credentials.
The first time, it will require you to sign up for a free license.
- You can select "I'll not use n8n for work" to simplify the options.
- Then select to receive the license on the next page.
- Go to Settings > Usage and plan and enter the activation key you received by email.
Next, you will need to configure some connections in the n8n credentials (connection sanity checks are sketched after this list):
- Add a connection for Ollama:
  - Base URL: http://ollama:11434
- Add a connection for Postgres:
  - Host: pgvector
  - Username: n8n
  - Password: n8n-pgvector
  - Database: db
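If a credential test fails, it can help to confirm both services are reachable; the checks below are a rough sketch and assume the compose services are named `ollama` and `pgvector` (as in the hosts above) and that Ollama's port is published to the host.

```sh
# Ollama API (the same endpoint the n8n Ollama credential points at)
curl http://localhost:11434/api/tags

# Postgres/pgvector, using the credentials listed above
docker-compose exec pgvector psql -U n8n -d db -c "SELECT 1;"
```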
You can also play around with the Ollama WebUI at http://localhost:3000.
- If not done yet, load a model using `make load-model`.
- We default to `gemma3`, but you can load different models using `make load-model model=<model_name>` (example below).
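For instance, to pull one of the alternative models listed in the Makefile options (the direct `ollama pull` form is an assumed equivalent, shown only as a fallback):

```sh
# Via the Makefile target
make load-model model=mistral

# Roughly equivalent direct command, assuming the compose service is named "ollama"
docker-compose exec ollama ollama pull mistral
```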
Run make help to see all available targets and options.
Targets:
- `help`: This help.
- `init`: Install dependencies and init the container provider.
- `bootstrap`: Install dependencies and start it with the default model.
- `start`: Start the local compose with essential services.
- `status`: Show the status of the local compose.
- `stop`: Stop the local compose.
- `logs`: Show the logs of the local compose.
- `load-model`: Load a model into the local compose using the `model=<model>` option; check https://ollama.com/library.
- `open-ollama`: Open Ollama in the browser.

Options:
- `model`: Model to use for Ollama; default: `gemma3` (light); others: `mistral`, `deepseek-r1` (big), `llama4` (huge).
- `cpu`: Number of CPUs to allocate for Colima.
- `memory`: Memory to allocate for Colima.
- `disk`: Disk space to allocate for Colima.
- `gpu_enabled`: Enable Ollama GPU support on Linux and Windows.
- `native_mac`: Enable Ollama GPU support on macOS.
You can also use the GPU for better LLM performance:
- `make start gpu_enabled=true` for Windows and Linux
- `make start native_mac=true` for macOS
Note: the Makefile commands are not mandatory; keep them as a living reference for the commands you need to use. Feel free to use the direct docker/ollama/etc. commands present in the project.
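As a rough sketch of what those direct equivalents might look like (the `ollama` service name and default `gemma3` model are taken from this document; the project's actual compose files may differ):

```sh
docker-compose up -d                             # ~ make start
docker-compose ps                                # ~ make status
docker-compose logs -f                           # ~ make logs
docker-compose exec ollama ollama pull gemma3    # ~ make load-model
docker-compose stop                              # ~ make stop
```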
You can also use make commands to open the two available interfaces:
- `make open-n8n`: open the n8n UI
- `make open-ollama`: open the Ollama UI
To load models, check the Makefile command `make load-model model=<model_name>`. You can use any model from https://ollama.com/library, or create your own custom models (see the sketch below).
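One way to create a custom model is to write an Ollama Modelfile inside the running container and register it with `ollama create`; the model name, system prompt, and path below are purely illustrative assumptions.

```sh
# Hypothetical custom model layered on top of the default gemma3
docker-compose exec ollama sh -c 'cat > /tmp/Modelfile <<EOF
FROM gemma3
SYSTEM """You are a concise assistant for this sample project."""
EOF
ollama create my-custom-gemma -f /tmp/Modelfile'
```

Once created, the model lives inside the Ollama container like any pulled model, so you should be able to reference it by name (here `my-custom-gemma`) from the n8n Ollama nodes.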
For the n8n workflows, please check workflows/readme.md.
TBD - we still need to define a provider, but we can refer to the official server-setups documentation.
Feel free to ask for access to the project. You can also fork and open a pull request.