PRODYNA/nextjs-observability
# Todo Application – Observable Full-Stack Demo

A full-stack TODO application demonstrating end-to-end observability with OpenTelemetry. The stack includes a Next.js frontend, an Express backend, Redis for persistence, and a complete telemetry pipeline.


## Business Logic

The application offers a simple TODO manager where users can:

  • View all current todos (each is either checked or unchecked)
  • Check or uncheck a todo
  • Create a new todo
  • Delete a todo
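The four operations above map onto a very small data model. The sketch below is illustrative only; the field names (`id`, `title`, `checked`) and function names are assumptions, not the backend's actual schema or code:

```typescript
// Hypothetical shape of a TODO item (field names assumed, not taken
// from the real backend).
interface Todo {
  id: string;
  title: string;
  checked: boolean;
}

// Minimal in-memory version of the four operations the UI exposes.
const todos = new Map<string, Todo>();

function createTodo(id: string, title: string): Todo {
  const todo: Todo = { id, title, checked: false };
  todos.set(id, todo);
  return todo;
}

// Flips a todo between checked and unchecked.
function toggleTodo(id: string): Todo | undefined {
  const todo = todos.get(id);
  if (todo) todo.checked = !todo.checked;
  return todo;
}

function deleteTodo(id: string): boolean {
  return todos.delete(id);
}

function listTodos(): Todo[] {
  return [...todos.values()];
}
```

In the real application these operations are REST calls against the Express backend, with Redis as the store.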

## Architecture

```mermaid
graph TD
    Browser[Browser] -->|HTTP :80| Traefik[Traefik Reverse Proxy]
    Traefik -->|"PathPrefix(/)"| Frontend[Next.js Frontend :3000]
    Traefik -->|"PathPrefix(/api)"| Backend[Express Backend :4000]
    Frontend -->|Server-side fetch| Traefik
    Backend --> Redis[(Redis :6379)]

    Traefik -->|Traces / Metrics / Logs| Collector[OTel Collector :4318]
    Frontend -->|Traces / Metrics / Logs| Collector
    Backend -->|Traces / Metrics / Logs| Collector
    Collector -->|OTLP HTTP| Observability[Observability Platform]
```
| Component | Technology | Purpose |
| --- | --- | --- |
| Frontend | Next.js 16, React 19, TypeScript | Server-rendered UI with client-side interactivity |
| Backend | Express 5, TypeScript | REST API for TODO CRUD operations |
| Redis | Redis 7 Alpine | Persistent storage for TODO items |
| Traefik | Traefik v3 | Reverse proxy, path-based routing, access logging |
| OTel Collector | OpenTelemetry Collector Contrib | Receives, processes, and exports telemetry data |

For detailed documentation on each service see:

  • Backend README – routes, configuration, metrics, and OpenTelemetry integration
  • Frontend README – backend connection, architecture, configuration, and OpenTelemetry integration

## Docker Compose

All services are orchestrated with `docker-compose.yml`.

### Services

```mermaid
graph LR
    subgraph docker-compose
        Traefik[traefik :80]
        Frontend[frontend :3000]
        Backend[backend :4000]
        Redis[redis :6379]
        OTel[otel-collector :4317/:4318]
    end
    Traefik --> Frontend
    Traefik --> Backend
    Backend --> Redis
    Traefik --> OTel
    Frontend --> OTel
    Backend --> OTel
```
| Service | Image / Build | Exposed Ports | Description |
| --- | --- | --- | --- |
| traefik | `traefik:v3.6.8` | 80 | Reverse proxy – routes `/` → frontend, `/api` → backend |
| frontend | Built from `./frontend` | 3000 (internal) | Next.js application |
| backend | Built from `./backend` | 4000 (internal) | Express REST API |
| redis | `redis:7-alpine` | 6379 | Persistent storage |
| otel-collector | `otel/opentelemetry-collector-contrib:0.143.0` | 4317 (gRPC), 4318 (HTTP) | Telemetry pipeline |

### Starting the Application

```bash
# Start all services (build images if needed)
docker compose up --build -d

# View logs
docker compose logs -f
```

The application is available at http://localhost (port 80 via Traefik).

### Rebuilding and Restarting

To ensure all images are rebuilt from scratch:

```bash
# Stop all services, rebuild all images from scratch, and start
docker compose down
docker compose build --no-cache
docker compose up -d
```

Or as a single command:

```bash
docker compose up --build --force-recreate -d
```

### Stopping the Application

```bash
docker compose down
```

## Configurable Base Path

By default, the backend serves its routes at the root (`/todos`, `/health`). You can mount all backend routes under a sub-path by setting environment variables:

| Service | Variable | Example | Effect |
| --- | --- | --- | --- |
| backend | `BASE_PATH` | `/api` | Routes become `/api/todos`, `/api/health` |
| frontend | `BACKEND_URL` | `http://traefik/api` | Required. The frontend calls `${BACKEND_URL}/todos`; include the base path directly in the URL. |

`BACKEND_URL` is a required environment variable for the frontend; if it is not set, the frontend will crash on startup.
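The fail-fast behavior around `BACKEND_URL` can be pictured with a small sketch; the function name and error message here are illustrative, not the frontend's actual code:

```typescript
// Validates the required BACKEND_URL variable at startup and builds the
// todos endpoint from it. The base path is part of the URL itself, so
// callers only append the route.
function resolveTodosUrl(env: Record<string, string | undefined>): string {
  const url = env.BACKEND_URL;
  if (!url) {
    // Mirrors the documented behavior: missing BACKEND_URL crashes
    // the frontend on startup rather than failing later per-request.
    throw new Error("BACKEND_URL environment variable is required");
  }
  // Strip a trailing slash so the joined URL never contains "//todos".
  return `${url.replace(/\/$/, "")}/todos`;
}
```

With `BACKEND_URL=http://traefik/api`, this yields `http://traefik/api/todos`, matching the table above.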

## Observability

```mermaid
flowchart LR
    subgraph Sources
        FE[Frontend]
        BE[Backend]
        TR[Traefik]
    end
    subgraph Collector["OTel Collector"]
        R[OTLP Receiver]
        P[Batch + Memory Limiter]
        E[OTLP HTTP Exporter]
    end
    FE -->|traces, metrics, logs| R
    BE -->|traces, metrics, logs| R
    TR -->|traces, metrics, logs| R
    R --> P --> E
    E -->|OTLP HTTP| Platform[Observability Platform]
```

All components send traces, metrics, and logs to the OpenTelemetry Collector via OTLP HTTP (`http://otel-collector:4318`). The collector batches the data and forwards it to the configured observability platform.

  • Traces – Distributed tracing across Traefik → Frontend → Backend → Redis
  • Metrics – Custom business metrics (todo create/delete/check/uncheck/view counts, Redis operation duration)
  • Logs – JSON-structured logs (backend) with trace_id and span_id for correlation
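The log correlation described in the last bullet can be sketched as follows. The `trace_id` and `span_id` field names come from the README; the function name and the other fields are assumptions, not the backend's actual logger:

```typescript
// Builds a JSON-structured log line carrying the active trace context,
// so a log search on trace_id finds all lines for one request.
function logWithTrace(
  message: string,
  traceId: string,
  spanId: string
): string {
  return JSON.stringify({
    timestamp: new Date().toISOString(),
    level: "info",
    message,
    trace_id: traceId,
    span_id: spanId,
  });
}
```

In the real backend, the trace and span IDs would come from the active OpenTelemetry context rather than being passed in by hand.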

### OTel Collector Configuration

The collector config is located at `otel/otel-collector-config.yaml`:

  • Receivers: OTLP (gRPC on :4317, HTTP on :4318)
  • Processors: Batch (5s timeout, 100–1000 batch size), Memory limiter (75% limit)
  • Exporters: OTLP HTTP to the configured observability platform
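Based on the bullets above, the pipeline could look roughly like the following sketch. The exporter endpoint is a placeholder and the exact values live in `otel/otel-collector-config.yaml`:

```yaml
# Illustrative collector config matching the description above;
# not the repository's actual file.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  memory_limiter:
    check_interval: 1s
    limit_percentage: 75
  batch:
    timeout: 5s
    send_batch_size: 100
    send_batch_max_size: 1000

exporters:
  otlphttp:
    endpoint: https://observability.example.com  # placeholder

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp]
```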
