# Compose in Production
Docker Compose can be used to deploy applications in production environments. This guide covers best practices, configuration strategies, and operational patterns for running Compose-based applications in production.
## Production Readiness Checklist
Before deploying to production, ensure your Compose application meets these criteria:
| Category | Requirement | Status |
|---|---|---|
| Images | Use specific image tags (not latest) | Required |
| Images | Images scanned for vulnerabilities | Required |
| Security | Containers run as non-root users | Required |
| Security | Secrets managed externally (not in Compose file) | Required |
| Security | Read-only root filesystems where possible | Recommended |
| Resources | CPU and memory limits defined | Required |
| Health | Health checks configured for all services | Required |
| Logging | Centralized logging configured | Required |
| Restart | Restart policies set (unless-stopped) | Required |
| Data | Persistent data on named volumes | Required |
| Network | Internal networks for backend services | Recommended |
| Backup | Volume backup strategy in place | Required |
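One checklist item that is easy to automate is the "no `latest` tags" rule. A quick sketch (grepping the Compose files directly; running the same pattern over `docker compose config` output also catches tags pulled in via variables):

```shell
# Flag any image pinned to :latest before deploying
grep -nE 'image:.*:latest' docker-compose*.yml && echo "Found :latest tags - pin them!" || echo "No :latest tags found"
```

This makes a useful pre-deploy step in CI, failing the pipeline when unpinned images slip in.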
## Production Compose File

### Base Configuration

`docker-compose.yml` — base configuration:
```yaml
services:
  frontend:
    image: myregistry/frontend:${VERSION}
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 256M
        reservations:
          cpus: '0.25'
          memory: 128M
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - frontend-net
    read_only: true
    tmpfs:
      - /tmp
      - /var/run
      - /var/cache/nginx

  api:
    image: myregistry/api:${VERSION}
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 256M
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - frontend-net
      - backend-net
    security_opt:
      - no-new-privileges:true

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    volumes:
      - db-data:/var/lib/postgresql/data
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 1G
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend-net

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: >
      redis-server
      --maxmemory 256mb
      --maxmemory-policy allkeys-lru
      --appendonly yes
    volumes:
      - redis-data:/data
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend-net

networks:
  frontend-net:
    driver: bridge
  backend-net:
    driver: bridge
    internal: true

volumes:
  db-data:
  redis-data:
```

### Production Override
`docker-compose.prod.yml`:

```yaml
services:
  frontend:
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./ssl:/etc/nginx/ssl:ro
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"

  api:
    environment:
      NODE_ENV: production
      LOG_LEVEL: info
    env_file:
      - .env.prod
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"

  db:
    environment:
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"
```

### Deployment Command
```bash
# Deploy to production
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

# With a specific version
VERSION=1.2.3 docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```

## Secrets Management
### Using Environment Files

```bash
# .env.prod (NOT committed to git)
DB_USER=produser
DB_PASSWORD=super-secure-password-123
DB_NAME=production_db
API_SECRET_KEY=long-random-secret-key
REDIS_PASSWORD=redis-secure-pass
```

```yaml
services:
  api:
    env_file:
      - .env.prod
```
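Since `.env.prod` holds credentials in plain text, lock down its file permissions and confirm git ignores it — a minimal precaution, not a substitute for a real secret manager:

```shell
# Readable and writable only by the deploying user
chmod 600 .env.prod

# Prints the path if git ignores the file; warns otherwise
git check-ignore .env.prod || echo "WARNING: .env.prod is not gitignored"
```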
### External Secret Management

For production, use an external secret manager:

```bash
# Using Docker secrets (Swarm mode)
echo "my-password" | docker secret create db_password -

# Using environment variables from a secret manager:

# AWS Secrets Manager
DB_PASSWORD=$(aws secretsmanager get-secret-value --secret-id prod/db/password --query SecretString --output text)

# HashiCorp Vault
DB_PASSWORD=$(vault kv get -field=password secret/prod/db)

# Export for Compose
export DB_PASSWORD
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```

## Zero-Downtime Deployments
### Rolling Update Strategy

```bash
#!/bin/bash
# deploy.sh — zero-downtime deployment script
set -e

# Variables
VERSION=$1
if [ -z "$VERSION" ]; then
  echo "Usage: $0 <version>" >&2
  exit 1
fi
export VERSION
COMPOSE_FILES="-f docker-compose.yml -f docker-compose.prod.yml"

echo "Deploying version: $VERSION"

# Pull new images
docker compose $COMPOSE_FILES pull

# Update services one at a time
for SERVICE in api frontend; do
  echo "Updating $SERVICE..."
  docker compose $COMPOSE_FILES up -d --no-deps "$SERVICE"

  # Wait for the health check to pass
  echo "Waiting for $SERVICE to be healthy..."
  timeout 120 bash -c "
    until docker compose $COMPOSE_FILES ps $SERVICE | grep -q 'healthy'; do
      sleep 5
    done
  "
  echo "$SERVICE is healthy!"
done

echo "Deployment complete!"

# Clean up old images
docker image prune -f
```

### Blue-Green Deployment
```yaml
# docker-compose.blue-green.yml
services:
  api-blue:
    image: myregistry/api:${BLUE_VERSION}
    networks:
      - backend-net
    profiles:
      - blue

  api-green:
    image: myregistry/api:${GREEN_VERSION}
    networks:
      - backend-net
    profiles:
      - green

  nginx:
    image: nginx:1.25-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/${ACTIVE_COLOR}.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - api-${ACTIVE_COLOR}
    networks:
      - frontend-net
      - backend-net
```

```bash
# Deploy green (while blue is running)
ACTIVE_COLOR=blue GREEN_VERSION=2.0 docker compose --profile green up -d api-green

# Switch traffic to green
ACTIVE_COLOR=green docker compose up -d nginx

# Remove blue
docker compose --profile blue down api-blue
```
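The per-color nginx configs mounted above are not shown in this guide. A minimal sketch of what `nginx/green.conf` might contain, assuming the API listens on port 3000 (the upstream port is a guess based on the earlier health check):

```nginx
# nginx/green.conf — route all traffic to the green API (port is an assumption)
server {
    listen 80;

    location / {
        proxy_pass http://api-green:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

A matching `nginx/blue.conf` is identical except that `proxy_pass` points at `api-blue`; switching colors is then just re-rendering the mount via `ACTIVE_COLOR` and recreating nginx.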
## Monitoring and Logging

### Centralized Logging with ELK

```yaml
services:
  # Application services use the json-file driver
  api:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"
        labels: "service"
    labels:
      service: api

  # Filebeat ships logs to Elasticsearch
  filebeat:
    image: docker.elastic.co/beats/filebeat:8.12.0
    user: root
    volumes:
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - elasticsearch

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.12.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    volumes:
      - es-data:/usr/share/elasticsearch/data

  kibana:
    image: docker.elastic.co/kibana/kibana:8.12.0
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

volumes:
  es-data:
```
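The `filebeat.yml` mounted into the Filebeat container above is not shown. A minimal sketch using the Docker autodiscover provider (hostnames and the hints setting are assumptions to adapt for your cluster):

```yaml
# filebeat.yml — minimal sketch, not a production-hardened config
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
  # Enrich each event with container name, image, and labels
  - add_docker_metadata: ~

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
```

With hints enabled, the `service` labels set on the application containers flow through as metadata, so logs can be filtered per service in Kibana.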
### Monitoring with Prometheus and Grafana

```yaml
services:
  prometheus:
    image: prom/prometheus:latest  # pin specific tags in production (see checklist)
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus-data:/prometheus
    ports:
      - "9090:9090"
    restart: unless-stopped

  grafana:
    image: grafana/grafana:latest
    volumes:
      - grafana-data:/var/lib/grafana
    ports:
      - "3001:3000"
    environment:
      GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD}
    restart: unless-stopped

  node-exporter:
    image: prom/node-exporter:latest
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
    restart: unless-stopped

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
    restart: unless-stopped

volumes:
  prometheus-data:
  grafana-data:
```
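The `prometheus.yml` mounted above is not shown either. A minimal scrape configuration that targets the two exporters in this stack (job names are arbitrary; the ports are the exporters' defaults):

```yaml
# prometheus.yml — minimal scrape config sketch
global:
  scrape_interval: 15s

scrape_configs:
  # Host-level metrics from node-exporter (default port 9100)
  - job_name: node
    static_configs:
      - targets: ['node-exporter:9100']

  # Per-container metrics from cAdvisor (default port 8080)
  - job_name: cadvisor
    static_configs:
      - targets: ['cadvisor:8080']
```

Because all four services share the default Compose network, Prometheus can reach the exporters by service name.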
## Backup and Recovery

### Automated Volume Backup

```yaml
services:
  backup:
    image: alpine:3.19
    volumes:
      - db-data:/source:ro
      - ./backups:/backup
    command: >
      sh -c "
      TIMESTAMP=$$(date +%Y%m%d_%H%M%S) &&
      tar czf /backup/db-data-$$TIMESTAMP.tar.gz -C /source . &&
      find /backup -name 'db-data-*.tar.gz' -mtime +7 -delete &&
      echo \"Backup completed: db-data-$$TIMESTAMP.tar.gz\"
      "
    profiles:
      - backup
```
```bash
# Run backup manually
docker compose --profile backup run --rm backup

# Schedule with cron
# 0 2 * * * cd /app && docker compose --profile backup run --rm backup
```
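A backup strategy is only complete if restores are tested. A matching restore service is sketched below under the same volume layout; `ARCHIVE` is a hypothetical variable you supply at run time, and the database service should be stopped before restoring:

```yaml
services:
  restore:
    image: alpine:3.19
    volumes:
      - db-data:/target
      - ./backups:/backup:ro
    # ${ARCHIVE} is interpolated by Compose from the host environment
    command: sh -c "rm -rf /target/* && tar xzf /backup/${ARCHIVE} -C /target"
    profiles:
      - restore
```

Invoked as, for example, `ARCHIVE=db-data-20240101_020000.tar.gz docker compose --profile restore run --rm restore` (the archive name is a placeholder), followed by restarting the `db` service.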
## Operational Commands

```bash
# Check service health
docker compose ps
docker compose ps --format json

# View resource usage
docker compose top
docker stats $(docker compose ps -q)

# View logs across services
docker compose logs --tail=100 -f

# Restart a specific service
docker compose restart api

# Scale a service
docker compose up -d --scale api=3

# Execute a command in production
docker compose exec api node scripts/migrate.js

# Database backup
docker compose exec db pg_dump -U $DB_USER $DB_NAME > backup.sql

# View the fully resolved Compose configuration
docker compose config
```

## Production Deployment Checklist
| Step | Command | Purpose |
|---|---|---|
| 1. Pull images | `docker compose pull` | Get latest images |
| 2. Validate config | `docker compose config` | Check for errors |
| 3. Deploy | `docker compose up -d` | Start/update services |
| 4. Verify health | `docker compose ps` | Check all services healthy |
| 5. Check logs | `docker compose logs --tail=50` | Look for errors |
| 6. Test endpoints | `curl http://localhost/health` | Verify functionality |
| 7. Clean up | `docker image prune -f` | Remove old images |
## Next Steps
- Docker Compose Quick Start — Getting started with Compose
- Compose File Reference — Complete Compose file syntax
- Container Orchestration — Scale beyond single host
- Security Best Practices — Secure your production deployment
- Storage Management — Production storage strategies