Automated build system for creating Discourse Docker images optimized for Kubernetes deployment.
This repository:
- Tracks upstream `discourse/discourse_docker` releases without forking
- Builds custom Docker images for Kubernetes deployment
- Automates builds when upstream releases new versions
- Produces deterministically-tagged images with version manifests
- **Git Submodule**: Uses `discourse_docker` as a submodule (no fork required)
- **Base Configuration**: `config/basecontainer.yaml` defines the base image configuration
- **Plugin Sets**: `config/plugins/*.yaml` define optional plugin configurations
- **Version Tracking**: `versions.yaml` tracks the last built version
- **Automated Builds**: GitHub Actions workflows detect and build new releases
Discourse's standard bootstrap requires a running PostgreSQL and Redis, which aren't available in CI. This project solves that by splitting the process:
- **Build time**: `k8s-bootstrap` runs `pups --stdin --skip-tags migrate,precompile` inside a `discourse/base` container, installing everything except DB-dependent operations (migrations) and asset precompilation
- **Runtime**: Database migrations and asset precompilation are handled either by a Kubernetes Job (recommended for multi-replica deployments) or by setting the `MIGRATE_ON_BOOT=1` and `PRECOMPILE_ON_BOOT=1` environment variables (suitable for single-pod deployments)
Both variables default to 0 in the image, so nothing runs on boot unless explicitly enabled.
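The gating can be pictured as a small entrypoint sketch. This is hypothetical: the image's real boot script may differ, and the commands here are only echoed so the sketch runs anywhere.

```bash
#!/bin/sh
# Hypothetical sketch of the boot-time gating; not the image's actual script.
MIGRATE_ON_BOOT="${MIGRATE_ON_BOOT:-0}"        # image default: 0
PRECOMPILE_ON_BOOT="${PRECOMPILE_ON_BOOT:-0}"  # image default: 0

run_boot_tasks() {
  if [ "$MIGRATE_ON_BOOT" = "1" ]; then
    echo "would run: rake db:migrate"
  fi
  if [ "$PRECOMPILE_ON_BOOT" = "1" ]; then
    echo "would run: rake assets:precompile"
  fi
}

run_boot_tasks   # with both flags at 0, this prints nothing
```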
The `docker_manager` plugin is not included and not needed for Kubernetes deployments. It's only used for in-place upgrades via `./launcher rebuild` in traditional Docker setups. In K8s, upgrades happen by deploying new image versions through CI/CD.
```
discourse-k8s-image/
├── .github/
│   └── workflows/
│       ├── check-upstream.yml          # Cron: detect new Discourse releases
│       ├── build-image.yml             # Build and push Docker image
│       └── test.yml                    # Run validation tests on push/PR
│
├── discourse_docker/                   # Git submodule (upstream)
│
├── config/
│   ├── basecontainer.yaml              # Base container config (no plugins)
│   └── plugins/
│       ├── default.yaml                # Default: no plugins
│       └── example.yaml                # Example plugin configuration
│
├── scripts/
│   ├── k8s-bootstrap                   # Core build script
│   ├── build.sh                        # Local build helper
│   ├── generate-manifest.sh            # Create version manifest
│   ├── list-versions                   # Query available Discourse versions
│   ├── test-k8s-bootstrap              # Full integration test (requires Docker)
│   └── test-k8s-bootstrap-validation   # Quick validation test (no Docker)
│
├── kubernetes/
│   ├── base/                           # Kustomize base manifests
│   └── overlays/
│       ├── single-pod/                 # Single replica, migrations on boot
│       └── production/                 # Multi-replica, HPA, PDB, Ingress
│
├── ARCHITECTURE.md                     # Detailed architecture and K8s patterns
├── LICENSE
├── versions.yaml                       # Tracks last-built versions
└── README.md
```
| Tag Format | Example | Purpose |
|---|---|---|
| Full version + config hash | `v2026.1.0-abc123def456` | Immutable, specific build |
| Major.minor + latest | `2026.1-latest` | Rolling tag for minor version |
The config hash is a 12-character SHA256 of the full merged configuration (base + plugins), ensuring different plugin sets produce different image tags.
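The tag derivation can be sketched in a few lines of shell. The stand-in YAML content below is illustrative; the build script hashes the real merged configuration.

```bash
# Sketch of the tag scheme: SHA-256 over the merged config, truncated to 12 hex chars.
merged='base: config
plugins: []'                                     # stand-in for the merged YAML
hash=$(printf '%s' "$merged" | sha256sum | cut -c1-12)
echo "discourse-k8s:v2026.1.0-${hash}"
```

Any change to the base config or the plugin set changes the hash, so each combination gets its own immutable tag.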
Plugins are configured separately from the base container definition. This allows you to:
- Build images with different plugin sets without modifying the base config
- Keep the base configuration clean and versioned
- Create custom plugin combinations for different deployments
Create a new file in `config/plugins/` (e.g., `config/plugins/acme.yaml`):

```yaml
# Custom plugin configuration for ACME deployment
plugins:
  - git clone https://github.com/discourse/discourse-solved.git
  - git clone --branch v1.2.3 https://github.com/discourse/discourse-voting.git
```

**Important**: Pin plugins to specific branches or tags for reproducible builds.
Trigger a build with your plugin configuration:

```bash
# Using the GitHub CLI
gh workflow run build-image.yml -f plugins=acme

# Or specify version and plugins
gh workflow run build-image.yml -f discourse_version=v2026.1.0 -f plugins=acme
```

**Note**: Different plugin sets generate different config hashes, ensuring unique image tags for each combination.
1. Go to the Actions tab in GitHub
2. Select the "Build Discourse Image" workflow
3. Click "Run workflow"
4. Enter the Discourse version (e.g., `v2026.1.0`) or leave empty for latest
5. Enter the plugin config name (e.g., `acme`) or leave as `default` for no plugins
6. Click "Run workflow"
Build an image locally using `build.sh`:

```bash
# Build with default plugins (none)
./scripts/build.sh v2026.1.0

# Build with a specific plugin set
./scripts/build.sh v2026.1.0 acme
```

This creates an image tagged as `discourse-k8s:v2026.1.0-<config-hash>`.
The `check-upstream.yml` workflow runs daily at 6 AM UTC:

1. Updates the `discourse_docker` submodule to the latest upstream `main` (pushes directly to main if changed)
2. Queries the GitHub API for the latest Discourse release
3. Compares it against the last-built version in `versions.yaml`
4. If a new version is detected, triggers the build workflow against the freshly updated main branch
5. The build workflow creates an image with the default plugin set (no plugins)

**Note**: Automated builds use the default plugin configuration (empty). Custom plugin builds must be triggered manually.
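The detection step boils down to a version comparison, roughly like this. The version strings and the `versions.yaml` line format here are illustrative; the workflow reads the real tag from the GitHub API.

```bash
# Sketch of the new-release check: latest upstream tag vs. last-built version.
latest="v2026.1.0"                     # stand-in for the GitHub API result
last_built=$(printf 'last_built: v2025.12.1\n' | sed -n 's/^last_built: //p')
if [ "$latest" != "$last_built" ]; then
  echo "new release detected: $latest (last built: $last_built)"
fi
```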
| Workflow | Trigger | Purpose |
|---|---|---|
| `test.yml` | Push/PR to main | Runs `test-k8s-bootstrap-validation` |
| `build-image.yml` | Manual, or called by `check-upstream` | Builds and pushes the image to ghcr.io |
| `check-upstream.yml` | Daily 6 AM UTC cron, or manual | Updates the submodule, detects new releases, triggers builds |
Images are published to the GitHub Container Registry (GHCR):

```
ghcr.io/ginsys/discourse:v2026.1.0-abc123def456
ghcr.io/ginsys/discourse:2026.1-latest
```

```bash
# Authenticate with GHCR
echo $GITHUB_TOKEN | docker login ghcr.io -u <username> --password-stdin

# Pull a specific version
docker pull ghcr.io/ginsys/discourse:v2026.1.0-abc123def456

# Pull the latest build for a major.minor series
docker pull ghcr.io/ginsys/discourse:2026.1-latest
```

Each image includes `/version-manifest.yaml` with build details:
```yaml
discourse:
  version: "v2026.1.0"
  plugins_hash: "abc123def456"
  plugins: []
dependencies:
  postgresql: "15"
  redis: "7.4.7"
  ruby: "3.3.8"
build:
  timestamp: "2026-01-15T10:30:00Z"
  builder: "github-actions"
  workflow_run: "1234567890"
  commit: "abc123..."
```

Retrieve it from a running container:
```bash
docker run --rm ghcr.io/ginsys/discourse:v2026.1.0-abc123def456 cat /version-manifest.yaml
```

Query dependency versions via OCI labels (no container needed):

```bash
docker inspect --format '{{index .Config.Labels "org.discourse.postgresql-version"}}' <image>
docker inspect --format '{{index .Config.Labels "org.discourse.redis-version"}}' <image>
docker inspect --format '{{index .Config.Labels "org.discourse.ruby-version"}}' <image>
```

The image uses the following environment variables at runtime. Both boot-time variables default to `0` in the image:
| Variable | Description | Default |
|---|---|---|
| `DISCOURSE_DB_HOST` | PostgreSQL hostname | placeholder |
| `DISCOURSE_REDIS_HOST` | Redis/Valkey hostname | placeholder |
| `DISCOURSE_HOSTNAME` | Public hostname for Discourse | placeholder |
| `MIGRATE_ON_BOOT` | Run `db:migrate` on container start | `0` |
| `PRECOMPILE_ON_BOOT` | Run `assets:precompile` on container start | `0` |
For single-pod deployments, set `MIGRATE_ON_BOOT=1` and `PRECOMPILE_ON_BOOT=1`. For multi-replica deployments, leave both at `0` and use a Kubernetes Job (see below).
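As a sketch, the single-pod setup amounts to a container `env` fragment like the following. The bundled `single-pod` overlay already wires this up; the fragment only shows the shape.

```yaml
# Illustrative container env for a single-pod deployment.
env:
  - name: MIGRATE_ON_BOOT
    value: "1"
  - name: PRECOMPILE_ON_BOOT
    value: "1"
```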
**Single-pod (simplest):**

```bash
kubectl kustomize kubernetes/overlays/single-pod/ | kubectl apply -f -
```

Migrations and precompilation run on boot. No separate Job needed.

**Multi-replica (production):**

```bash
# 1. Delete the previous migration Job (Jobs are immutable; the image can't be updated in place)
kubectl delete job discourse-migrate -n discourse --ignore-not-found

# 2. Run migrations as a Job (update the image tag in migration-job.yaml first)
kubectl apply -f kubernetes/base/migration-job.yaml
kubectl wait --for=condition=complete job/discourse-migrate -n discourse --timeout=600s

# 3. Then deploy/update the application
kubectl kustomize kubernetes/overlays/production/ | kubectl apply -f -
```

If you use a GitOps tool, Job annotations handle the delete-and-recreate step automatically: Flux (`kustomize.toolkit.fluxcd.io/force`), ArgoCD (`BeforeHookCreation`), Helm (`before-hook-creation`).
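For Flux and ArgoCD, the corresponding Job annotations look roughly like this. The annotation keys are each tool's documented ones; the surrounding metadata is illustrative, not copied from this repo's manifests.

```yaml
# Illustrative Job metadata; keep only the annotation for your GitOps tool.
metadata:
  name: discourse-migrate
  annotations:
    # Flux: force-recreate the immutable Job when its spec changes
    kustomize.toolkit.fluxcd.io/force: enabled
    # ArgoCD: run as a sync hook, deleting the old Job before creating the new one
    argocd.argoproj.io/hook: Sync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
```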
See `kubernetes/` for full Kustomize manifests and `ARCHITECTURE.md` for detailed deployment patterns, probe tuning, HPA, PDB, and operational runbooks.
```bash
./scripts/test-k8s-bootstrap-validation
```

Validates error detection and config path correctness. Runs in CI on every push/PR.

```bash
./scripts/test-k8s-bootstrap
```

Builds an actual image and verifies the installed Discourse version matches.

```bash
./scripts/list-versions
```

Lists recent stable Discourse releases from the GitHub API.
- **Network Isolation**: The build workflow never has production DB credentials
- **No Runtime Secrets**: Only `GITHUB_TOKEN` is used, for the registry push
- **Reproducible**: Same inputs produce the same image (pinned submodule, pinned plugin refs)
- **Rollback Ready**: Previous images are retained by registry policy

Only one secret is required:

- `GITHUB_TOKEN` - Automatically provided by GitHub Actions
Ensure the Discourse version exists as a git tag in the upstream repository:

```bash
./scripts/list-versions 20
```

- Verify the plugin repository URL is correct
- Ensure the `ref` (branch/tag/commit) exists
- Check plugin compatibility with the Discourse version
Ensure Docker has sufficient resources:
- Memory: At least 4GB
- Disk space: At least 10GB free
The submodule is updated automatically each day by the `check-upstream.yml` workflow. To update manually:

```bash
cd discourse_docker
git fetch origin
git checkout <commit-sha>
cd ..
git add discourse_docker
git commit -m "Update discourse_docker submodule to <commit-sha>"
```

1. Make changes to `config/basecontainer.yaml`
2. Run `./scripts/test-k8s-bootstrap-validation` to validate
3. For full verification: `./scripts/build.sh v2026.1.0`
4. Commit and push the changes
1. Create or modify a plugin config in `config/plugins/`
2. Trigger the workflow with the plugin name: `gh workflow run build-image.yml -f plugins=<name>`
3. Test the resulting image
4. Commit and push the plugin configuration
- Discourse Official Repository
- discourse_docker Repository
- Discourse Meta Forum
- GitHub Container Registry Documentation
This build infrastructure is licensed under the MIT License. See LICENSE for details.
Discourse itself is licensed under GPL v2.