A Complete Guide to Containerization
Table of Contents
- Introduction to Docker
- Getting Started
- Docker Images
- Docker Containers
- Docker Compose
- Docker Registries
- Docker Swarm
- Advanced Docker Commands
- Best Practices
- Production Deployment Strategies
- Troubleshooting Guide
Introduction to Docker
What is Docker?
Docker is a platform that enables developers to package applications and their dependencies into lightweight, portable containers. Containers provide a consistent environment for applications to run across different computing environments.
graph TD
A[Application Code] --> B[Docker Container]
C[Dependencies] --> B
D[Runtime] --> B
E[System Tools] --> B
F[Libraries] --> B
B --> G[Any Environment]
G --> H[Development]
G --> I[Testing]
G --> J[Production]
Containerization vs Virtualization
graph LR
subgraph "Virtual Machines"
A1[App A] --> B1[Guest OS]
A2[App B] --> B2[Guest OS]
B1 --> C1[Hypervisor]
B2 --> C1
C1 --> D1[Host OS]
D1 --> E1[Hardware]
end
subgraph "Containers"
F1[App A] --> G1[Container Runtime]
F2[App B] --> G1
G1 --> H1[Host OS]
H1 --> I1[Hardware]
end
Key Features of Docker
- Portability: Run anywhere Docker is installed
- Lightweight: Share host OS kernel
- Scalability: Easy horizontal scaling
- Isolation: Process-level separation
- Version Control: Image versioning and rollbacks
- Resource Efficiency: Optimal resource utilization
Getting Started
Installation
Windows Installation
- Download Docker Desktop from docker.com
- Run the installer with administrator privileges
- Enable WSL 2 if prompted
- Restart your computer
- Verify installation:
docker --version
docker-compose --version
macOS Installation
- Download Docker Desktop for Mac
- Drag Docker.app to Applications folder
- Launch Docker Desktop
- Verify installation:
docker --version
Linux Installation (Ubuntu/Debian)
# Update package index
sudo apt-get update
# Install dependencies
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Set up repository
echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
# Add user to docker group
sudo usermod -aG docker $USER
Your First Container
Let’s run your first container:
# Pull and run a simple container
docker run hello-world
# Run an interactive Ubuntu container
docker run -it ubuntu bash
# Run a web server
docker run -d -p 8080:80 nginx
Docker Architecture
graph TB
subgraph "Docker Client"
A[docker build]
B[docker pull]
C[docker run]
end
subgraph "Docker Host"
D[Docker Daemon]
E[Images]
F[Containers]
end
subgraph "Registry"
G[Docker Hub]
H[Private Registry]
end
A --> D
B --> D
C --> D
D --> E
D --> F
D <--> G
D <--> H
Docker Images
Understanding Docker Images
Docker images are read-only templates used to create containers. They contain the application code, runtime, dependencies, and configuration needed to run an application.
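Because the image itself never changes, every container started from it gets its own thin writable layer on top. A quick sketch of this in practice (container names are illustrative):
# Start two containers from the same image
docker run -d --name c1 nginx
docker run -d --name c2 nginx
# Change a file inside c1 only
docker exec c1 touch /tmp/marker
# docker diff shows the change in c1's writable layer (alongside nginx's own runtime files)...
docker diff c1
# ...while c2 and the nginx image itself remain untouched
docker diff c2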
graph TD
A[Base Image] --> B[Add Dependencies]
B --> C[Add Application Code]
C --> D[Configure Environment]
D --> E[Final Image]
E --> F[Container Instance 1]
E --> G[Container Instance 2]
E --> H[Container Instance N]
Image Management Commands
# List images
docker images
# Pull an image
docker pull nginx:latest
# Remove an image
docker rmi nginx:latest
# Image history
docker history nginx
# Inspect image details
docker inspect nginx
# Search for images
docker search python
Creating Docker Images
Method 1: Using Dockerfile
Create a Dockerfile:
# Use official Python runtime as base image
FROM python:3.9-slim
# Set working directory
WORKDIR /app
# Copy requirements file
COPY requirements.txt .
# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Expose port
EXPOSE 5000
# Set environment variable
ENV FLASK_APP=app.py
# Define command to run application
CMD ["flask", "run", "--host=0.0.0.0"]DockerfileBuild the image:
docker build -t my-python-app:v1.0 .
Method 2: From Container
# Run a container
docker run -it ubuntu bash
# Make changes inside container
apt-get update
apt-get install -y python3
# Exit container
exit
# Commit changes to new image
docker commit <container_id> my-custom-ubuntu:v1.0
Multi-Stage Builds
Optimize image size with multi-stage builds:
# Build stage
FROM node:16 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
# Production stage
FROM node:16-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]DockerfileBest Practices for Images
- Use official base images
- Minimize layers
- Use .dockerignore
- Don’t run as root
- Use specific tags
Example .dockerignore:
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
coverage
.nyc_output
Docker Containers
Container Lifecycle
stateDiagram-v2
[*] --> Created: docker create
Created --> Running: docker start
Running --> Paused: docker pause
Paused --> Running: docker unpause
Running --> Stopped: docker stop
Stopped --> Running: docker start
Stopped --> [*]: docker rm
Running --> [*]: docker rm -f
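The state diagram maps one-to-one onto CLI commands; a quick walkthrough (the container name is illustrative):
# Create a container without starting it (-> Created)
docker create --name demo nginx
# Start it (Created -> Running)
docker start demo
# Pause and resume the process tree (Running <-> Paused)
docker pause demo
docker unpause demo
# Stop it (Running -> Stopped), then remove it
docker stop demo
docker rm demo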
Container Management
Basic Container Operations
# Run a container
docker run -d --name web-server -p 8080:80 nginx
# List running containers
docker ps
# List all containers
docker ps -a
# Stop a container
docker stop web-server
# Start a stopped container
docker start web-server
# Restart a container
docker restart web-server
# Remove a container
docker rm web-server
# Remove running container forcefully
docker rm -f web-server
Interactive Containers
# Run interactive container
docker run -it ubuntu bash
# Execute command in running container
docker exec -it web-server bash
# Execute single command
docker exec web-server ls /etc/
# Attach to running container
docker attach web-server
Container Monitoring
# View container logs
docker logs web-server
# Follow logs in real-time
docker logs -f web-server
# View resource usage
docker stats web-server
# Inspect container details
docker inspect web-server
# View processes in container
docker top web-server
Container Networking
Docker provides several network drivers:
graph TD
A[Container Networking] --> B[Bridge Network]
A --> C[Host Network]
A --> D[Overlay Network]
A --> E[Macvlan Network]
A --> F[None Network]
B --> G[Default for single host]
C --> H[Share host network stack]
D --> I[Multi-host communication]
E --> J[Assign MAC addresses]
F --> K[No networking]
Network Management
# List networks
docker network ls
# Create network
docker network create my-network
# Create network with specific driver
docker network create --driver bridge my-bridge-network
# Inspect network
docker network inspect my-network
# Connect container to network
docker network connect my-network web-server
# Disconnect container from network
docker network disconnect my-network web-server
# Remove network
docker network rm my-network
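Unlike the default bridge, user-defined networks provide automatic DNS resolution between containers, so they can reach each other by name. A brief sketch (network and container names are illustrative):
# Run two containers on the same custom network
docker network create app-net
docker run -d --name api --network app-net nginx
# The curl container resolves "api" by container name
docker run --rm --network app-net curlimages/curl http://api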
Port Mapping
# Map single port
docker run -d -p 8080:80 nginx
# Map multiple ports
docker run -d -p 8080:80 -p 8443:443 nginx
# Map to specific interface
docker run -d -p 127.0.0.1:8080:80 nginx
# Random port mapping
docker run -d -P nginx
Container Storage
Volume Management
# Create volume
docker volume create my-volume
# List volumes
docker volume ls
# Inspect volume
docker volume inspect my-volume
# Remove volume
docker volume rm my-volume
# Remove unused volumes
docker volume prune
Volume Types
graph TD
A[Docker Storage] --> B[Volumes]
A --> C[Bind Mounts]
A --> D[tmpfs Mounts]
B --> E[Managed by Docker]
B --> F[Persistent]
B --> G[Shareable]
C --> H[Host filesystem]
C --> I[Direct access]
D --> J[Memory storage]
D --> K[Temporary]
Using Volumes
# Named volume
docker run -d -v my-volume:/var/lib/mysql mysql
# Bind mount
docker run -d -v /host/path:/container/path nginx
# tmpfs mount
docker run -d --tmpfs /tmp nginx
# Read-only mount
docker run -d -v /host/path:/container/path:ro nginx
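Named volumes outlive the containers that use them, which is what makes them suitable for databases. A small demonstration (volume name and file contents are illustrative):
# Write data through one container, remove it, read from another
docker volume create data-vol
docker run --rm -v data-vol:/data alpine sh -c 'echo hello > /data/greeting'
docker run --rm -v data-vol:/data alpine cat /data/greeting
# Prints "hello": the data survived the first container's removal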
Container Security
Security Best Practices
- Use non-root user:
FROM ubuntu
RUN useradd -m appuser
USER appuser
- Limit resources:
docker run -d --memory=512m --cpus=1 nginx
- Read-only filesystem:
docker run -d --read-only nginx
- Security options:
docker run -d --security-opt=no-new-privileges nginx
Docker Compose
Introduction to Docker Compose
Docker Compose is a tool for defining and running multi-container Docker applications using a YAML file.
Installation
graph LR
A[compose.yaml] --> B[Docker Compose]
B --> C[Container 1]
B --> D[Container 2]
B --> E[Container N]
B --> F[Networks]
B --> G[Volumes]
Linux Installation
# Download Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# Make executable
sudo chmod +x /usr/local/bin/docker-compose
# Verify installation
docker-compose --version
Basic Compose File
Create compose.yaml:
version: '3.8'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html
    networks:
      - frontend
  app:
    build: ./app
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgresql://user:password@db:5432/myapp
    networks:
      - frontend
      - backend
  db:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - backend
networks:
  frontend:
  backend:
volumes:
  db_data:
Compose Commands
# Start services
docker-compose up
# Start in detached mode
docker-compose up -d
# Start specific service
docker-compose up web
# Stop services
docker-compose down
# Stop and remove volumes
docker-compose down -v
# View logs
docker-compose logs
# Follow logs
docker-compose logs -f
# Scale services
docker-compose up --scale web=3
# Execute commands
docker-compose exec web bash
Advanced Compose Features
Environment Variables
Create .env file:
POSTGRES_VERSION=13
APP_PORT=8080
DATABASE_NAME=myapp
Use in compose.yaml:
version: '3.8'
services:
  db:
    image: postgres:${POSTGRES_VERSION}
    environment:
      POSTGRES_DB: ${DATABASE_NAME}
  web:
    build: .
    ports:
      - "${APP_PORT}:80"
Health Checks
services:
  web:
    image: nginx
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
Resource Limits
services:
  web:
    image: nginx
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
Production Compose Example
version: '3.8'
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - web
    networks:
      - frontend
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    environment:
      - NODE_ENV=production
      - DATABASE_URL=${DATABASE_URL}
    depends_on:
      - db
      - redis
    networks:
      - frontend
      - backend
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure
  db:
    image: postgres:13-alpine
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - backend
  redis:
    image: redis:alpine
    volumes:
      - redis_data:/data
    networks:
      - backend
volumes:
  postgres_data:
  redis_data:
networks:
  frontend:
  backend:
Docker Registries
Overview of Container Registries
graph TD
A[Developer] --> B[Build Image]
B --> C[Tag Image]
C --> D[Push to Registry]
D --> E[Docker Hub]
D --> F[AWS ECR]
D --> G[Google GCR]
D --> H[Azure ACR]
D --> I[Private Registry]
E --> J[Pull Image]
F --> J
G --> J
H --> J
I --> J
J --> K[Deploy Container]
Docker Hub
Setting Up Docker Hub
- Create account at hub.docker.com
- Login via CLI:
docker login
- Tag and push image:
# Tag image
docker tag my-app:latest username/my-app:latest
# Push to Docker Hub
docker push username/my-app:latest
# Pull from Docker Hub
docker pull username/my-app:latest
Automated Builds
Set up automated builds with GitHub:
# .github/workflows/docker.yml
name: Docker Build and Push
on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: username/my-app:latest
Amazon ECR
Setup ECR Repository
# Create repository
aws ecr create-repository --repository-name my-app
# Get login token
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com
# Tag and push
docker tag my-app:latest <account-id>.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
Google Container Registry
Setup GCR
# Configure Docker for GCR
gcloud auth configure-docker
# Tag and push
docker tag my-app:latest gcr.io/project-id/my-app:latest
docker push gcr.io/project-id/my-app:latest
Azure Container Registry
Setup ACR
# Login to ACR
az acr login --name myregistry
# Tag and push
docker tag my-app:latest myregistry.azurecr.io/my-app:latest
docker push myregistry.azurecr.io/my-app:latest
Private Docker Registry
Setup Private Registry
# Run registry container
docker run -d -p 5000:5000 --restart=always --name registry \
-v /opt/registry:/var/lib/registry \
registry:2
# Tag and push to private registry
docker tag my-app:latest localhost:5000/my-app:latest
docker push localhost:5000/my-app:latest
Secure Private Registry
# docker-compose.yml for secure registry
version: '3.8'
services:
  registry:
    image: registry:2
    ports:
      - "5000:5000"
    environment:
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
      REGISTRY_HTTP_TLS_KEY: /certs/domain.key
    volumes:
      - ./data:/var/lib/registry
      - ./certs:/certs
      - ./auth:/auth
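The compose file above expects an htpasswd file in ./auth, which it never creates. One way to generate it is with the htpasswd tool from the httpd image (username and password here are illustrative):
# Create the credentials file the registry will authenticate against
mkdir -p auth
docker run --rm --entrypoint htpasswd httpd:2 -Bbn myuser mypassword > auth/htpasswd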
Docker Swarm
Introduction to Docker Swarm
Docker Swarm is Docker’s native clustering and orchestration tool that allows you to manage a cluster of Docker nodes as a single virtual system.
graph TB
subgraph "Docker Swarm Cluster"
A[Manager Node 1] --> B[Worker Node 1]
A --> C[Worker Node 2]
A --> D[Worker Node 3]
E[Manager Node 2] --> B
E --> C
E --> D
F[Manager Node 3] --> B
F --> C
F --> D
end
G[Client] --> A
G --> E
G --> F
Setting Up a Swarm Cluster
Initialize Swarm
# Initialize swarm on manager node
docker swarm init --advertise-addr <manager-ip>
# Example
docker swarm init --advertise-addr 192.168.1.100
Join Nodes to Swarm
# Get join token for workers
docker swarm join-token worker
# Join worker node
docker swarm join --token <token> <manager-ip>:2377
# Get join token for managers
docker swarm join-token manager
# Join manager node
docker swarm join --token <token> <manager-ip>:2377
Manage Nodes
# List nodes
docker node ls
# Inspect node
docker node inspect <node-id>
# Promote worker to manager
docker node promote <node-id>
# Demote manager to worker
docker node demote <node-id>
# Remove node from swarm
docker node rm <node-id>
Swarm Services
Creating Services
# Create simple service
docker service create --name web-service --replicas 3 nginx
# Create service with port mapping
docker service create --name web-service \
--replicas 3 \
--publish 8080:80 \
nginx
# Create service with resource constraints
docker service create --name web-service \
--replicas 3 \
--limit-cpu 0.5 \
--limit-memory 512m \
nginx
Service Management
# List services
docker service ls
# Inspect service
docker service inspect web-service
# View service logs
docker service logs web-service
# Scale service
docker service scale web-service=5
# Update service
docker service update --image nginx:1.20 web-service
# Remove service
docker service rm web-service
Stack Deployment
Deploy Stack with Compose
Create stack.yml:
version: '3.8'
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints:
          - node.role == manager
    networks:
      - webnet
networks:
  webnet:
    driver: overlay
Deploy stack:
# Deploy stack
docker stack deploy -c stack.yml mystack
# List stacks
docker stack ls
# List stack services
docker stack services mystack
# Remove stack
docker stack rm mystack
Swarm Networks
graph TD
A[Overlay Network] --> B[Container 1 - Node 1]
A --> C[Container 2 - Node 2]
A --> D[Container 3 - Node 3]
E[Ingress Network] --> F[Load Balancer]
F --> B
F --> C
F --> D
Network Management
# Create overlay network
docker network create --driver overlay my-overlay
# Create encrypted overlay network
docker network create --driver overlay --opt encrypted my-secure-overlay
# List networks
docker network ls
# Inspect network
docker network inspect my-overlay
Swarm Security
Secrets Management
# Create secret from stdin
echo "mypassword" | docker secret create db_password -
# Create secret from file
docker secret create db_password /path/to/password.txt
# List secrets
docker secret ls
# Inspect secret
docker secret inspect db_password
# Use secret in service
docker service create --name web \
--secret db_password \
nginx
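Inside the service's containers, each secret is mounted as a plain file under /run/secrets/. A quick check, run on a node hosting one of the tasks (and assuming the name filter matches a single container):
# Read the secret from inside a running task of the service
docker exec $(docker ps -q -f name=web) cat /run/secrets/db_password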
Using Secrets in Stack
version: '3.8'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
secrets:
  db_password:
    external: true
Load Balancing and Routing
graph LR
A[External Request] --> B[Swarm Load Balancer]
B --> C[Service Replica 1]
B --> D[Service Replica 2]
B --> E[Service Replica 3]
subgraph "Node 1"
C
end
subgraph "Node 2"
D
end
subgraph "Node 3"
E
end
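Every node in the swarm listens on a published port, and the ingress network spreads incoming requests across the service's replicas. A small way to see this, using the traefik/whoami demo image (service name and port are illustrative):
# Publish a service on the routing mesh, then watch requests rotate across tasks
docker service create --name whoami --replicas 3 --publish 8080:80 traefik/whoami
for i in 1 2 3; do curl -s localhost:8080 | grep Hostname; done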
Rolling Updates
# Update service with rolling update
docker service update \
--image nginx:1.20 \
--update-parallelism 1 \
--update-delay 10s \
web-service
# Rollback service
docker service rollback web-service
Advanced Docker Commands
Container Inspection and Debugging
# Detailed container information
docker inspect <container_id>
# Container resource usage
docker stats <container_id>
# Running processes in container
docker top <container_id>
# Container filesystem changes
docker diff <container_id>
# Container events
docker events --filter container=<container_id>
# System-wide information
docker system info
# Disk usage
docker system df
# Cleanup unused resources
docker system prune
# Cleanup with specific filters
docker system prune --filter "until=24h"
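docker inspect dumps a large JSON document; the --format flag takes a Go template so you can pull out a single field. For example (container name is illustrative):
# Extract just the container's IP address from the inspect output
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web-server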
Image Operations
# Image history and layers
docker history <image_name>
# Export container filesystem
docker export <container_id> > container.tar
# Import filesystem as image
cat container.tar | docker import - <image_name>:<tag>
# Save image to tar file
docker save -o image.tar <image_name>
# Load image from tar file
docker load -i image.tar
# Copy files between container and host
docker cp <container_id>:/path/to/file /host/path
docker cp /host/path <container_id>:/path/to/file
Advanced Networking
# Create custom bridge network
docker network create --driver bridge \
--subnet=172.20.0.0/16 \
--ip-range=172.20.240.0/20 \
custom-bridge
# Create macvlan network
docker network create -d macvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
-o parent=eth0 \
macvlan-net
# Connect container to multiple networks
docker run -d --name multi-net-container \
--network network1 \
nginx
docker network connect network2 multi-net-container
Resource Management
# CPU and memory limits
docker run -d --name limited-container \
--cpus="1.5" \
--memory="1g" \
--memory-swap="2g" \
nginx
# Set CPU shares (relative weight)
docker run -d --cpu-shares=512 nginx
# Set memory swappiness
docker run -d --memory-swappiness=0 nginx
# Block I/O constraints
docker run -d --device-read-bps /dev/sda:1mb nginx
Security Hardening
# Run with security options
docker run -d \
--security-opt=no-new-privileges \
--security-opt=seccomp=seccomp-profile.json \
--cap-drop=ALL \
--cap-add=NET_BIND_SERVICE \
--read-only \
--user 1000:1000 \
nginx
# Use custom AppArmor profile
docker run -d --security-opt apparmor=docker-nginx nginx
# Run with SELinux labels
docker run -d --security-opt label=level:s0:c100,c200 nginx
Multi-Architecture Builds
# Create and use buildx builder
docker buildx create --name mybuilder --use
# Build for multiple platforms
docker buildx build --platform linux/amd64,linux/arm64 \
-t myapp:latest --push .
# Build with cache mount
docker buildx build \
--cache-from type=local,src=/tmp/.buildx-cache \
--cache-to type=local,dest=/tmp/.buildx-cache \
-t myapp:latest .
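After a multi-platform push, you can confirm which architectures the manifest list actually contains:
# List the platforms included in the pushed image
docker buildx imagetools inspect myapp:latest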
Health Checks and Monitoring
# Dockerfile with health check
FROM nginx
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost/ || exit 1
# Check container health
docker inspect --format='{{.State.Health.Status}}' <container_id>
# Custom health check command
docker run -d --health-cmd="curl -f http://localhost:8080/health" \
--health-interval=30s \
--health-timeout=10s \
--health-retries=3 \
myapp
Best Practices
Dockerfile Best Practices
Optimize Layer Caching
# Bad: Installs dependencies every time code changes
FROM node:16
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "start"]
# Good: Install dependencies first, then copy code
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
CMD ["npm", "start"]DockerfileMulti-Stage Builds for Size Optimization
# Build stage
FROM node:16 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:16-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package*.json ./
EXPOSE 3000
USER node
CMD ["node", "dist/server.js"]DockerfileSecurity Best Practices
# Use official, minimal base images
FROM node:16-alpine
# Create non-root user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
# Set working directory
WORKDIR /app
# Copy application code with correct ownership
COPY --chown=nextjs:nodejs . .
# Install dependencies
RUN npm ci --only=production && npm cache clean --force
# Switch to non-root user
USER nextjs
# Use specific port
EXPOSE 3000
# Use exec form for CMD
CMD ["node", "server.js"]Dockerfile.dockerignore Best Practices
# Version control
.git
.gitignore
# Dependencies
node_modules
npm-debug.log*
# Environment files
.env
.env.local
.env.production
# Build outputs
dist
build
coverage
# OS generated files
.DS_Store
Thumbs.db
# IDE files
.vscode
.idea
# Documentation
README.md
docs/
# Test files
test/
spec/
*.test.js
Container Runtime Best Practices
Resource Management
# docker-compose.yml with resource limits
version: '3.8'
services:
  web:
    image: nginx
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 30s
      timeout: 10s
      retries: 3
Environment Configuration
# Use environment variables for configuration
docker run -d \
-e DATABASE_URL=postgresql://user:pass@db:5432/myapp \
-e REDIS_URL=redis://redis:6379 \
-e LOG_LEVEL=info \
myapp:latest
Logging Configuration
# docker-compose.yml with logging
version: '3.8'
services:
  web:
    image: nginx
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
Security Best Practices
Image Security
# Scan images for vulnerabilities
docker scan myapp:latest
# Use minimal base images
FROM scratch
FROM alpine
FROM gcr.io/distroless/nodejs
# Keep base images updated
FROM node:16-alpine
RUN apk update && apk upgrade
Runtime Security
# Use read-only containers when possible
docker run -d --read-only --tmpfs /tmp nginx
# Drop unnecessary capabilities
docker run -d --cap-drop ALL --cap-add NET_BIND_SERVICE nginx
# Use user namespaces
dockerd --userns-remap=default
# Limit container privileges
docker run -d --security-opt=no-new-privileges nginx
Secrets Management
# Use Docker secrets in production
version: '3.8'
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
secrets:
  db_password:
    file: ./db_password.txt
Production Deployment Patterns
Blue-Green Deployments
#!/bin/bash
# Blue-green deployment script: stand up "green", verify it, then retire "blue"
# Deploy new version
docker service create --name app-green \
--replicas 3 \
--network frontend \
myapp:v2
# Health check
while ! curl -f http://app-green/health; do
sleep 5
done
# Switch traffic
docker service update --label-add version=active app-green
docker service update --label-rm version=active app-blue
# Remove old version
docker service rm app-blue
Monitoring and Observability
# Monitoring stack
version: '3.8'
services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
  cadvisor:
    image: gcr.io/cadvisor/cadvisor
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
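The stack above mounts a ./prometheus.yml that is not shown. A minimal sketch that scrapes the cAdvisor service over the compose network (the scrape interval and job name are illustrative):
# Generate a minimal Prometheus config for the monitoring stack
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: cadvisor
    static_configs:
      - targets: ['cadvisor:8080']
EOF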
Production Deployment Strategies
Container Orchestration Comparison
graph TD
A[Container Orchestration] --> B[Docker Swarm]
A --> C[Kubernetes]
A --> D[Amazon ECS]
A --> E[Azure Container Instances]
B --> F[Simple setup, Built-in Docker]
C --> G[Feature-rich, Large ecosystem]
D --> H[AWS native, Serverless options]
E --> I[Azure native, Pay-per-use]
High Availability Architecture
graph TB
subgraph "Load Balancer"
A[NGINX/HAProxy]
end
subgraph "Application Tier"
B[App Container 1]
C[App Container 2]
D[App Container 3]
end
subgraph "Database Tier"
E[Primary DB]
F[Secondary DB]
end
subgraph "Cache Tier"
G[Redis Cluster]
end
A --> B
A --> C
A --> D
B --> E
C --> E
D --> E
E --> F
B --> G
C --> G
D --> G
CI/CD Pipeline
# .github/workflows/deploy.yml
name: Deploy to Production
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run tests
        run: |
          docker build -t myapp:test .
          docker run --rm myapp:test npm test
  build:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and push
        run: |
          docker build -t myapp:${{ github.sha }} .
          docker tag myapp:${{ github.sha }} myapp:latest
          docker push myapp:${{ github.sha }}
          docker push myapp:latest
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to production
        run: |
          docker service update --image myapp:${{ github.sha }} production_app
Backup and Disaster Recovery
#!/bin/bash
# Backup script for containerized applications
# Backup database
docker exec postgres_container pg_dump -U user database > backup_$(date +%Y%m%d).sql
# Backup volumes
docker run --rm -v postgres_data:/source -v $(pwd):/backup ubuntu tar czf /backup/postgres_data_backup_$(date +%Y%m%d).tar.gz -C /source .
# Upload to S3
aws s3 cp backup_$(date +%Y%m%d).sql s3://my-backups/
aws s3 cp postgres_data_backup_$(date +%Y%m%d).tar.gz s3://my-backups/
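A backup is only useful if you can restore it. The inverse operations, sketched under the same naming assumptions as the script above (the date suffix is an illustrative example):
# Restore the SQL dump into the running postgres container
docker exec -i postgres_container psql -U user database < backup_20240101.sql
# Restore the volume contents from the tarball
docker run --rm -v postgres_data:/target -v $(pwd):/backup ubuntu tar xzf /backup/postgres_data_backup_20240101.tar.gz -C /target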
Performance Optimization
Resource Optimization
# Optimized production compose file
version: '3.8'
services:
  web:
    image: myapp:latest
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: '1'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 512M
      restart_policy:
        condition: on-failure
        max_attempts: 3
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
Caching Strategies
# Multi-stage build with dependency caching
FROM node:16-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:16-alpine AS runner
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY package*.json ./
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]DockerfileScaling Strategies
Horizontal Scaling
# Auto-scaling with Docker Swarm
docker service create \
--name web-service \
--replicas 3 \
--limit-cpu 0.5 \
--limit-memory 512m \
--update-parallelism 1 \
--update-delay 10s \
myapp:latest
# Scale based on metrics
while true; do
CPU_USAGE=$(docker stats --no-stream --format "table {{.CPUPerc}}" | tail -n +2 | sed 's/%//' | awk '{sum+=$1} END {print sum/NR}')
if (( $(echo "$CPU_USAGE > 80" | bc -l) )); then
docker service scale web-service=$(($(docker service ls --filter name=web-service --format "{{.Replicas}}" | cut -d'/' -f1) + 1))
fi
sleep 30
done
Database Scaling
# Database cluster setup (replication via environment variables requires the
# Bitnami PostgreSQL image; the official postgres image does not read these)
version: '3.8'
services:
  postgres-primary:
    image: bitnami/postgresql:13
    environment:
      POSTGRESQL_REPLICATION_MODE: master
      POSTGRESQL_REPLICATION_USER: replicator
      POSTGRESQL_REPLICATION_PASSWORD: replicator_password
      POSTGRESQL_PASSWORD: app_password
    volumes:
      - postgres_primary_data:/bitnami/postgresql
  postgres-secondary:
    image: bitnami/postgresql:13
    environment:
      POSTGRESQL_REPLICATION_MODE: slave
      POSTGRESQL_REPLICATION_USER: replicator
      POSTGRESQL_REPLICATION_PASSWORD: replicator_password
      POSTGRESQL_MASTER_HOST: postgres-primary
    depends_on:
      - postgres-primary
volumes:
  postgres_primary_data:
Troubleshooting Guide
Common Issues and Solutions
- Container Won’t Start
# Check logs
docker logs <container_id>
# Check exit code
docker ps -a
# Debug with shell
docker run -it --entrypoint /bin/sh <image_name>
- Network Connectivity Issues
# Test network connectivity
docker exec <container_id> ping <target>
# Check network configuration
docker network inspect <network_name>
# List network interfaces in container
docker exec <container_id> ip addr show
- Performance Issues
# Monitor resource usage
docker stats
# Check system resources
docker system df
# Analyze container processes
docker exec <container_id> top
- Storage Issues
# Check disk usage
docker system df
# Clean up unused resources
docker system prune -a
# Check volume usage
docker volume ls
This comprehensive Docker guide covers everything from basic concepts to advanced production deployment strategies. Each section includes practical examples and real-world scenarios that you can implement in your projects.
Key Takeaways:
- Start Simple: Begin with basic containers and gradually move to complex orchestration
- Security First: Always follow security best practices from development to production
- Monitor Everything: Implement comprehensive monitoring and logging
- Automate Deployments: Use CI/CD pipelines for consistent deployments
- Plan for Scale: Design your architecture to handle growth
- Document Everything: Maintain clear documentation for your containerized applications
Remember to practice these concepts in a development environment before implementing them in production. Docker’s ecosystem is vast and constantly evolving, so stay updated with the latest features and best practices.