Updated for Docker 24.x and latest features
📚 Table of Contents
- 🚀 Getting Started
- 🔧 Docker Basics
- 📦 Container Management
- 🖼️ Image Management
- 📝 Dockerfile Mastery
- 🎼 Docker Compose
- 🌐 Networking
- 💾 Volumes & Storage
- 🏪 Registry & Hub
- ⚙️ System Management
- 🔍 Debugging & Troubleshooting
- ⭐ Best Practices
- 🆕 Latest Features (2024-2025)
- 🎓 Learning Path for Beginners
- 🚀 Advanced Patterns
🚀 Getting Started
🎯 What is Docker?
Docker is a containerization platform that packages applications and their dependencies into lightweight, portable containers.
Key Concepts for Beginners:
- Container: A running instance of an image
- Image: A template/blueprint for containers
- Dockerfile: Instructions to build an image
- Registry: Storage for Docker images (like Docker Hub)
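The four concepts map directly onto the everyday workflow. A minimal sketch (the image name `my-hello` is just an example):

```dockerfile
# Dockerfile — the build instructions (concept: Dockerfile)
FROM alpine:3.18                      # base image pulled from a registry (concept: Registry)
CMD ["echo", "Hello from my image"]

# docker build -t my-hello .          → produces an image (concept: Image)
# docker run my-hello                 → starts a container from it (concept: Container)
```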
🛠️ Installation & Setup
# Check Docker version
docker --version
docker version
# Check Docker info
docker info
# Test Docker installation
docker run hello-world
# Enable Docker BuildKit explicitly (default build engine since Docker 23.0)
export DOCKER_BUILDKIT=1
🏁 Your First Container
# Run your first container
docker run ubuntu echo "Hello Docker!"
# Run interactive container
docker run -it ubuntu /bin/bash
# Run web server
docker run -d -p 8080:80 nginx
# Visit http://localhost:8080 in browser
🔧 Docker Basics
Essential Commands Every Beginner Should Know
# Show help
docker --help
docker <command> --help
# Login to Docker Hub
docker login
# Logout from Docker Hub
docker logout
# Show Docker system information
docker system info
# Show disk usage
docker system df
Understanding Docker Architecture
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Docker CLI │───▶│ Docker Daemon │───▶│ Containers │
│ (Client) │ │ (dockerd) │ │ (Runtime) │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│
▼
┌─────────────────┐
│ Images │
│ (Storage) │
└─────────────────┘
📦 Container Management
🚀 Running Containers (Enhanced)
# Basic container run
docker run <image>
# Run in background (detached mode)
docker run -d <image>
# Run with custom name
docker run --name my-container <image>
# Run with port mapping
docker run -p 8080:80 nginx
# Run with environment variables
docker run -e NODE_ENV=production -e PORT=3000 <image>
# Run with volume mount
docker run -v $(pwd):/app <image>
# Run interactively with pseudo-TTY
docker run -it ubuntu /bin/bash
# Run and auto-remove when stopped
docker run --rm <image>
# Run with resource limits (NEW: improved syntax)
docker run --memory=512m --cpus=1.5 --memory-swap=1g <image>
# Run with restart policy
docker run --restart=unless-stopped <image>
# Run with custom hostname
docker run --hostname my-host <image>
# Run with specific user
docker run --user 1001:1001 <image>
# Run with working directory
docker run -w /app <image>
# Run with security options (NEW)
docker run --security-opt no-new-privileges:true <image>
🔄 Container Lifecycle Management
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# List containers with custom format
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
# Start/stop containers
docker start <container>
docker stop <container>
docker restart <container>
# Pause/unpause containers (freeze processes)
docker pause <container>
docker unpause <container>
# Kill container (force stop)
docker kill <container>
# Remove container
docker rm <container>
# Remove running container (force)
docker rm -f <container>
# Remove all stopped containers
docker container prune
# Remove containers by filter
docker container prune --filter "until=24h"
🔍 Container Interaction & Monitoring
# Execute commands in running container
docker exec <container> ls -la
docker exec -it <container> /bin/bash
# View container logs
docker logs <container>
docker logs -f <container> # Follow logs
docker logs --tail 100 <container> # Last 100 lines
docker logs --since="2025-01-01" <container> # Since date
docker logs --until="2025-01-02" <container> # Until date
# Monitor container resources (ENHANCED)
docker stats # All containers
docker stats <container> # Specific container
docker stats --no-stream # Snapshot
# Container processes
docker top <container>
# Container details
docker inspect <container>
docker inspect <container> | jq '.NetworkSettings.IPAddress'
# Copy files (bidirectional)
docker cp <container>:/path/file ./local-file
docker cp ./local-file <container>:/path/file
# Container filesystem changes
docker diff <container>
# Wait for container to stop
docker wait <container>
# Get container exit code (docker wait blocks until the container stops, then prints its exit code)
docker wait <container>
🖼️ Image Management
📥 Working with Images
# List images
docker images
docker image ls
# List images with filters
docker images --filter "dangling=true" # Unused images
docker images --filter "before=nginx:latest" # Images before nginx
docker images --filter "since=ubuntu:20.04" # Images after ubuntu
# Pull images
docker pull ubuntu:22.04
docker pull --platform linux/amd64 nginx # Specific platform
# Build images
docker build -t my-app:1.0 .
docker build -t my-app:latest . --no-cache # Without cache
# Build with BuildKit/buildx (multi-platform builds require buildx)
docker buildx build \
--build-arg VERSION=1.0 \
--target production \
--platform linux/amd64,linux/arm64 \
-t my-app:1.0 .
# Tag images
docker tag my-app:1.0 my-app:latest
docker tag my-app:1.0 registry.com/my-app:1.0
# Push images
docker push my-app:1.0
# Remove images
docker rmi my-app:1.0
docker rmi $(docker images -q) # Remove all images
# Remove unused images
docker image prune
docker image prune -a # Remove all unused images
# Image information
docker history my-app:1.0 # Layer history
docker inspect my-app:1.0 # Detailed info
# Search Docker Hub
docker search nginx --limit 10
💿 Image Export/Import
# Save image to tar file
docker save -o my-app.tar my-app:1.0
# Load image from tar file
docker load -i my-app.tar
# Export container filesystem
docker export my-container > container.tar
# Import container as image
docker import container.tar my-imported-image:latest
# Multi-platform image save (NEW)
docker buildx build --platform linux/amd64,linux/arm64 \
-t my-app:multi --output type=docker .
📝 Dockerfile Mastery
🏗️ Basic Dockerfile Instructions
# Syntax version (NEW: recommended)
# syntax=docker/dockerfile:1.6
# Base image with specific version
FROM node:18.17-alpine AS base
# Metadata labels (ENHANCED)
LABEL org.opencontainers.image.title="My App"
LABEL org.opencontainers.image.description="A sample application"
LABEL org.opencontainers.image.version="1.0.0"
LABEL org.opencontainers.image.authors="your-email@example.com"
LABEL org.opencontainers.image.source="https://github.com/user/repo"
# Set working directory
WORKDIR /app
# Copy dependency files first (for better caching)
COPY package*.json ./
# Install dependencies
RUN npm ci --only=production && npm cache clean --force
# Copy application code
COPY . .
# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
adduser -S nextjs -u 1001 -G nodejs
# Change ownership
RUN chown -R nextjs:nodejs /app
USER nextjs
# Set environment variables
ENV NODE_ENV=production
ENV PORT=3000
# Expose port
EXPOSE 3000
# Health check (ENHANCED)
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
CMD node healthcheck.js || exit 1
# Default command
CMD ["node", "server.js"]Bash🚀 Advanced Dockerfile Techniques
# syntax=docker/dockerfile:1.6
# Multi-stage build with BuildKit features
FROM node:18-alpine AS dependencies
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
npm ci --only=production
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
npm ci
COPY . .
RUN npm run build
FROM node:18-alpine AS runtime
WORKDIR /app
# Copy only necessary files
COPY --from=dependencies /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
COPY package*.json ./
# Security: run as non-root
RUN addgroup -g 1001 -S nodejs && \
adduser -S nextjs -u 1001 -G nodejs
USER nextjs
# Use exec form for signals
CMD ["node", "dist/server.js"]Bash🛡️ Dockerfile Security Best Practices
# Use minimal base images
FROM alpine:3.18
# Use specific versions
FROM node:18.17.1-alpine3.18
# Multi-stage with distroless (NEW PATTERN)
FROM node:18-alpine AS builder
WORKDIR /app
COPY . .
RUN npm ci && npm run build
FROM gcr.io/distroless/nodejs18-debian11
COPY --from=builder /app/dist /app
WORKDIR /app
CMD ["server.js"]
# Security scanning with dive (add to CI/CD)
# dive my-app:latest
# Use .dockerignore effectively
# .dockerignore example:
node_modules
npm-debug.log
.git
.env
*.md
coverage/
.nyc_output
🎯 BuildKit Advanced Features (NEW)
# syntax=docker/dockerfile:1.6
FROM alpine AS base
# Cache mounts for package managers
RUN --mount=type=cache,target=/var/cache/apk \
apk add --update curl
# Secret mounts for sensitive data
RUN --mount=type=secret,id=mypassword \
curl -u user:$(cat /run/secrets/mypassword) https://api.example.com
# SSH mounts for private repositories
RUN --mount=type=ssh \
git clone git@github.com:private/repo.git
# Build-time cache invalidation
ARG CACHEBUST=1
RUN echo "Cache bust: $CACHEBUST"
# Heredoc syntax (NEW)
RUN <<EOF
#!/bin/bash
set -e
echo "Installing dependencies..."
apk add --no-cache curl wget
echo "Setup complete"
EOF
🎼 Docker Compose
🚀 Basic Compose Commands (Enhanced)
# Start services
docker compose up # NEW: 'docker compose' (v2)
docker compose up -d # Background
docker compose up --build # Force rebuild
# Stop services
docker compose down # Stop and remove
docker compose down -v # Also remove volumes
docker compose down --rmi all # Also remove images
# Service management
docker compose start <service> # Start specific service
docker compose stop <service> # Stop specific service
docker compose restart <service> # Restart specific service
# Scaling (ENHANCED)
docker compose up --scale web=3 # Scale web service to 3 instances
# Logs
docker compose logs # All services
docker compose logs -f web # Follow web service logs
docker compose logs --tail=50 web # Last 50 lines
# Execute commands
docker compose exec web bash # Interactive shell
docker compose run web npm test # Run one-off command
# Configuration
docker compose config # Validate and view config
docker compose config --services # List services
docker compose ps # Show running services
📋 Modern Docker Compose File (v3.8+)
# docker-compose.yml
# Global settings
x-common-variables: &common-variables
  POSTGRES_DB: myapp
  POSTGRES_USER: user

x-logging: &default-logging
  driver: "json-file"
  options:
    max-size: "10m"
    max-file: "3"

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
      target: production
      args:
        - NODE_ENV=production
    image: my-app:latest
    container_name: my-app-web
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      <<: *common-variables
      NODE_ENV: production
      DATABASE_URL: postgresql://user:${POSTGRES_PASSWORD}@db:5432/myapp
    volumes:
      - ./uploads:/app/uploads:rw
    networks:
      - app-network
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    logging: *default-logging
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '0.5'
        reservations:
          memory: 256M

  db:
    image: postgres:15.4-alpine
    container_name: my-app-db
    restart: unless-stopped
    environment:
      <<: *common-variables
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    networks:
      - app-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
    logging: *default-logging

  redis:
    image: redis:7.2-alpine
    container_name: my-app-redis
    restart: unless-stopped
    command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis_data:/data
    networks:
      - app-network
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 3
    logging: *default-logging

  # NEW: Development overrides
  web-dev:
    extends:
      service: web
    build:
      target: development
    environment:
      NODE_ENV: development
    volumes:
      - .:/app:delegated
      - /app/node_modules
    ports:
      - "3000:3000"
      - "9229:9229" # Debug port
    profiles:
      - dev

volumes:
  postgres_data:
    driver: local
  redis_data:
    driver: local

networks:
  app-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16

# Environment file (.env)
# POSTGRES_PASSWORD=securepassword
# REDIS_PASSWORD=redispassword
🔧 Advanced Compose Features
# docker-compose.override.yml (automatically loaded)
version: '3.8'
services:
  web:
    volumes:
      - .:/app:delegated # Development volume mount
    environment:
      - DEBUG=true

# Production compose file
# docker-compose.prod.yml
version: '3.8'
services:
  web:
    image: my-registry.com/my-app:${VERSION}
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3

# Usage with multiple files
# docker compose -f docker-compose.yml -f docker-compose.prod.yml up
🌐 Networking
🔌 Network Management (Enhanced)
# List networks
docker network ls
# Create networks
docker network create my-network # Default bridge
docker network create --driver bridge my-bridge # Bridge network
docker network create --driver overlay my-overlay # Overlay (Swarm)
# Advanced network creation
docker network create \
--driver bridge \
--subnet=172.20.0.0/16 \
--ip-range=172.20.240.0/20 \
--gateway=172.20.0.1 \
my-custom-network
# Network operations
docker network connect my-network my-container # Connect container
docker network disconnect my-network my-container # Disconnect container
docker network inspect my-network # Inspect network
docker network rm my-network # Remove network
docker network prune # Remove unused networks
🏗️ Network Types & Use Cases
# 1. Bridge Network (Default)
docker run --network bridge nginx
# Use case: Single host communication
# 2. Host Network
docker run --network host nginx
# Use case: Maximum performance, direct host access
# 3. None Network
docker run --network none alpine
# Use case: Isolated containers, security
# 4. Custom Bridge
docker network create my-app-network
docker run --network my-app-network --name web nginx
docker run --network my-app-network --name db postgres
# Use case: Service isolation, custom DNS
# 5. Overlay Network (Docker Swarm)
docker network create --driver overlay --attachable my-overlay
# Use case: Multi-host communication
🔀 Port Mapping & DNS
# Port mapping variations
docker run -p 8080:80 nginx # Host port 8080 → Container port 80
docker run -p 127.0.0.1:8080:80 nginx # Bind to localhost only
docker run -p 8080:80/tcp -p 8081:80/udp nginx # TCP and UDP ports
docker run -P nginx # Publish all EXPOSEd ports to random host ports
# Service discovery (automatic DNS)
# Containers can reach each other by name within same network
docker network create my-net
docker run -d --name web --network my-net nginx
docker run -d --name api --network my-net node:alpine
# 'web' can reach 'api' using hostname 'api'
# Custom hostname and aliases
docker run --name web --hostname my-web-server \
--network-alias webapp --network my-net nginx
💾 Volumes & Storage
📁 Volume Management (Enhanced)
# Volume operations
docker volume create my-volume # Create named volume
docker volume ls # List volumes
docker volume inspect my-volume # Inspect volume details
docker volume rm my-volume # Remove volume
docker volume prune # Remove unused volumes
# Advanced volume creation
docker volume create \
--driver local \
--opt type=nfs \
--opt o=addr=192.168.1.1,rw \
--opt device=:/path/to/dir \
nfs-volume
💿 Volume Types & Patterns
# 1. Named Volumes (Recommended for data persistence)
docker run -v my-data:/app/data postgres
# Managed by Docker, survives container removal
# 2. Bind Mounts (Development & configuration)
docker run -v $(pwd):/app node:alpine # Current directory
docker run -v /host/config:/app/config:ro nginx # Read-only mount
# Direct host filesystem access
# 3. Anonymous Volumes
docker run -v /app/data postgres
# Docker creates random name, cleaned up with container
# 4. tmpfs Mounts (In-memory, temporary data)
docker run --tmpfs /app/temp:rw,noexec,nosuid,size=100m alpine
# Fast, secure, limited size
# 5. Volume with specific driver
docker run -v my-volume:/data \
--volume-driver=rexray/ebs postgres
🔄 Volume Operations & Backup
# Share volumes between containers
docker volume create shared-data
docker run -v shared-data:/data --name app1 alpine
docker run -v shared-data:/data --name app2 alpine
# Backup volume data
docker run --rm \
-v my-volume:/source:ro \
-v $(pwd):/backup \
alpine tar czf /backup/backup-$(date +%Y%m%d).tar.gz -C /source .
# Restore volume data
docker run --rm \
-v my-volume:/target \
-v $(pwd):/backup \
alpine tar xzf /backup/backup-20250101.tar.gz -C /target
# Copy data between volumes
docker run --rm \
-v source-volume:/source:ro \
-v target-volume:/target \
alpine cp -av /source/. /target/
# Inspect volume usage
docker system df -v # Detailed volume usage
🚀 Advanced Storage Patterns
# Development with node_modules optimization
docker run -v $(pwd):/app -v /app/node_modules node:alpine
# Prevents overwriting node_modules with empty host directory
# Multi-stage volume optimization
# Dockerfile:
FROM node:alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci
FROM node:alpine AS runtime
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Volume with specific mount options
docker run \
--mount type=bind,source=$(pwd),target=/app,readonly \
--mount type=volume,source=my-volume,target=/data \
alpine
🏪 Registry & Hub
🌐 Docker Hub Operations (Enhanced)
# Authentication
docker login # Interactive login
docker login --username myuser # Specify username
echo $PASSWORD | docker login --username myuser --password-stdin # Secure password input
# Repository operations
docker tag my-app:latest myuser/my-app:latest
docker tag my-app:latest myuser/my-app:1.0.0
docker push myuser/my-app:latest
docker push myuser/my-app:1.0.0
# Multi-architecture images (NEW)
docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 \
-t myuser/my-app:latest --push .
# Repository management
docker search nginx --limit 25 # Search with limit
docker pull myuser/my-app:latest
docker pull --platform linux/arm64 myuser/my-app:latest
🏢 Private Registry Setup
# Run local registry
docker run -d -p 5000:5000 --name registry registry:2.8
# Secure registry with TLS
docker run -d -p 5000:5000 --name registry \
-v $(pwd)/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_PRIVATE_KEY=/certs/domain.key \
registry:2.8
# Registry with authentication
docker run -d -p 5000:5000 --name registry \
-v $(pwd)/auth:/auth \
-e REGISTRY_AUTH=htpasswd \
-e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
registry:2.8
# Work with private registry
docker tag my-app:latest localhost:5000/my-app:latest
docker push localhost:5000/my-app:latest
docker pull localhost:5000/my-app:latest
# Registry API operations
curl -X GET http://localhost:5000/v2/_catalog # List repositories
curl -X GET http://localhost:5000/v2/my-app/tags/list # List tags
☁️ Cloud Registry Integration
# AWS ECR
aws ecr get-login-password --region us-west-2 | \
docker login --username AWS --password-stdin \
123456789012.dkr.ecr.us-west-2.amazonaws.com
# Google Container Registry
gcloud auth configure-docker
docker push gcr.io/project-id/my-app:latest
# Azure Container Registry
az acr login --name myregistry
docker push myregistry.azurecr.io/my-app:latest
# GitHub Container Registry
echo $GITHUB_TOKEN | docker login ghcr.io -u username --password-stdin
docker push ghcr.io/username/my-app:latestBash⚙️ System Management
📊 System Information & Monitoring
# System information
docker system info # Detailed system info
docker version # Client and daemon versions
# Resource usage
docker system df # Disk usage summary
docker system df -v # Detailed disk usage
docker system events # Real-time events
docker system events --since="2025-01-01" # Events since date
# Process monitoring
docker stats # All containers
docker stats --no-stream # Snapshot
docker stats --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}"
🧹 Cleanup & Maintenance
# Comprehensive cleanup
docker system prune # Remove unused objects
docker system prune -a # Remove all unused objects
docker system prune -a --volumes # Include volumes
docker system prune --filter "until=24h" # Cleanup by time
# Specific cleanups
docker container prune # Remove stopped containers
docker image prune # Remove dangling images
docker image prune -a # Remove unused images
docker network prune # Remove unused networks
docker volume prune # Remove unused volumes
# Force cleanup (DANGEROUS)
docker rm -f $(docker ps -aq) # Remove all containers
docker rmi -f $(docker images -q) # Remove all images
# Cleanup by filters
docker container prune --filter "until=24h" # Containers older than 24h
docker image prune --filter "until=168h" # Images older than 1 week
docker system prune --filter "label=temporary" # By labelBash🎛️ Resource Management & Limits
# Memory limits
docker run -m 512m nginx # 512MB limit
docker run --memory=1g --memory-swap=2g nginx # Memory + swap limits
docker run --oom-kill-disable nginx # Disable OOM killer
# CPU limits
docker run --cpus=1.5 nginx # 1.5 CPU cores
docker run --cpu-shares=512 nginx # Relative CPU weight
docker run --cpuset-cpus="0,1" nginx # Specific CPU cores
# I/O limits
docker run --blkio-weight=300 nginx # Block I/O weight
docker run --device-read-bps /dev/sda:1mb nginx # Read bandwidth limit
# PID limits
docker run --pids-limit=100 nginx # Maximum processes
# Ulimit settings
docker run --ulimit nofile=1024:2048 nginx # File descriptor limits
🔧 Docker Daemon Configuration
# Daemon configuration file (/etc/docker/daemon.json)
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "default-runtime": "runc",
  "features": {
    "buildkit": true
  },
  "experimental": true
}
# Restart Docker daemon (Linux)
sudo systemctl restart docker
# Enable BuildKit by default (caution: this overwrites any existing daemon.json)
echo '{"features": {"buildkit": true}}' | sudo tee /etc/docker/daemon.json
🔍 Debugging & Troubleshooting
🛠️ Debugging Commands (Enhanced)
# Container inspection
docker inspect <container> # Full container details
docker inspect <container> --format '{{.State.Status}}' # Specific field
docker inspect <container> | jq '.NetworkSettings.IPAddress' # With jq
# Process debugging
docker top <container> # Container processes
docker exec <container> ps aux # Processes inside container
docker exec <container> top # Real-time processes
# Network debugging
docker exec <container> ping google.com # Network connectivity
docker exec <container> nslookup google.com # DNS resolution
docker exec <container> netstat -tulpn # Network connections
docker port <container> # Port mappings
# File system debugging
docker exec <container> ls -la / # File system structure
docker exec <container> df -h # Disk usage
docker exec <container> mount # Mount points
docker diff <container> # File changes
# Log analysis
docker logs <container> 2>&1 | grep ERROR # Filter logs
docker logs <container> --since="1h" # Last hour logs
docker logs <container> --until="2025-01-01T12:00:00" # Until timestamp
🚨 Common Issues & Solutions
# Issue: Port already in use
# Solution: Find and kill process
sudo lsof -i :8080
sudo kill -9 <PID>
# Issue: Permission denied
# Solution: Fix file permissions
docker exec <container> chown -R user:group /path
docker exec <container> chmod -R 755 /path
# Issue: Out of disk space
# Solution: Clean up Docker
docker system prune -a --volumes
docker rmi $(docker images -f "dangling=true" -q)
# Issue: Container won't start
# Solution: Check logs and inspect
docker logs <container>
docker inspect <container> | jq '.State'
# Issue: Cannot connect to Docker daemon
# Solution: Start Docker service
sudo systemctl start docker
sudo systemctl status docker
# Issue: DNS resolution fails
# Solution: Configure DNS
docker run --dns=8.8.8.8 <image>
# Or edit /etc/docker/daemon.json:
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
🔬 Advanced Debugging Techniques
# Debug with temporary containers
docker run --rm -it --network container:<target> nicolaka/netshoot
# Network debugging container with tools
# Attach to running container
docker attach <container> # Attach to main process
# Note: Ctrl+C will stop the container
# Enter the container's namespaces (run nsenter on the host, targeting the container's PID)
sudo nsenter -t $(docker inspect -f '{{.State.Pid}}' <container>) -m -u -n -p sh
# Copy core dumps
docker exec <container> gcore <PID> # Generate core dump
docker cp <container>:/core.dump ./ # Copy to host
# Trace system calls
docker exec <container> strace -p <PID> # Trace process
docker run --cap-add SYS_PTRACE <image> # Enable tracing
# Memory debugging
docker exec <container> cat /proc/meminfo # Memory information
docker exec <container> free -h # Memory usage
docker stats --no-stream <container> # Resource snapshot
⭐ Best Practices
🛡️ Security Best Practices (2025 Edition)
# 1. Use minimal, security-focused base images
FROM gcr.io/distroless/java:11
# OR
FROM alpine:3.18
# 2. Multi-stage builds for smaller attack surface
FROM node:18-alpine AS builder
WORKDIR /app
COPY . .
RUN npm ci && npm run build
FROM gcr.io/distroless/nodejs18
COPY --from=builder /app/dist /app
WORKDIR /app
CMD ["server.js"]
# 3. Run as non-root user
RUN addgroup -g 1001 -S appgroup && \
adduser -S -u 1001 -G appgroup appuser
USER appuser
# 4. Use specific versions
FROM node:18.17.1-alpine3.18
# 5. Security scanning in CI/CD
# Add to GitHub Actions:
- name: Run Trivy vulnerability scanner
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: 'my-app:latest'
    format: 'sarif'
    output: 'trivy-results.sarif'
🚀 Performance Optimization (Latest)
# 1. Optimize layer caching
COPY package*.json ./
RUN npm ci --only=production
COPY . .
# 2. Use BuildKit cache mounts
RUN --mount=type=cache,target=/root/.npm \
npm ci --only=production
# 3. Multi-stage with cache
FROM node:18-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm npm ci
# 4. Minimize image size
RUN apk add --no-cache curl && \
rm -rf /var/cache/apk/*
# 5. Use .dockerignore effectively
node_modules/
.git/
*.log
coverage/
.env
README.md
📊 Production Deployment Best Practices
# docker-compose.prod.yml
version: '3.8'
services:
  app:
    image: my-registry.com/my-app:${VERSION}
    restart: unless-stopped
    # Resource limits
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: '1.0'
        reservations:
          memory: 512M
          cpus: '0.5'
    # Health checks
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    # Logging
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    # Security
    security_opt:
      - no-new-privileges:true
    read_only: true
    tmpfs:
      - /tmp
      - /var/run
    # Environment
    environment:
      NODE_ENV: production
    # Secrets (Docker Swarm)
    secrets:
      - db_password
      - api_key

secrets:
  db_password:
    external: true
  api_key:
    external: true
🔄 CI/CD Integration Patterns
# GitHub Actions example
name: Docker Build and Deploy
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: |
            ghcr.io/${{ github.repository }}:latest
            ghcr.io/${{ github.repository }}:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
🆕 Latest Features (2024-2025)
🚀 Docker BuildKit Enhancements
# Enable BuildKit (default since Docker 23.0; the export is only needed on older versions)
export DOCKER_BUILDKIT=1
# Multi-platform builds
docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 -t my-app .
# Build with remote cache
docker buildx build \
--cache-from type=registry,ref=my-registry.com/my-app:cache \
--cache-to type=registry,ref=my-registry.com/my-app:cache,mode=max \
-t my-app .
# Dockerfile heredoc syntax (NEW)
FROM alpine
RUN <<EOF
apk add --no-cache curl
curl -Lo /usr/local/bin/kubectl "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x /usr/local/bin/kubectl
EOF
🔐 Docker Scout (Security Analysis)
# Enable Docker Scout (NEW in 2024)
docker scout quickview # Quick security overview
docker scout cves my-app:latest # CVE analysis
docker scout recommendations my-app:latest # Security recommendations
# Integrate with CI/CD
docker scout cves --exit-code my-app:latest # Fail build on critical CVEs
🐝 Docker Swarm Mode Updates
# Initialize swarm with new features
docker swarm init --advertise-addr 192.168.1.100
# Deploy stack with configs and secrets
docker stack deploy -c docker-compose.yml my-stack
# Rolling updates with health checks
docker service update --image my-app:v2 \
--update-delay 10s \
--update-parallelism 1 \
--update-failure-action rollback \
my-stack_web
📦 Compose V2 Features
# docker-compose.yml with new features
version: '3.8'
services:
  web:
    image: nginx
    profiles:
      - web
      - production
    # Dependency conditions (ENHANCED)
    depends_on:
      db:
        condition: service_healthy
        restart: true
    # Resource limits (UPDATED)
    deploy:
      resources:
        limits:
          memory: 512M
          pids: 100
        reservations:
          memory: 256M

# Extension fields (NEW)
x-extra-config:
  monitoring: true
  backup: daily

# Usage with profiles
# docker compose --profile web up
🔧 Docker Extensions (NEW)
# Install Docker Desktop Extensions
docker extension install docker/logs-explorer-extension
docker extension install docker/sbom-cli-plugin
# List installed extensions
docker extension ls
# Use extensions
docker sbom my-app:latest # Software Bill of Materials
🎓 Learning Path for Beginners
📚 Step 1: Understanding Containers
# Start with a simple container
docker run hello-world
# Run interactive container
docker run -it ubuntu /bin/bash
# Inside container: ls, cat /etc/os-release, exit
# Run web server
docker run -d -p 8080:80 --name my-nginx nginx
# Visit http://localhost:8080
docker stop my-nginx
docker rm my-nginx
🏗️ Step 2: Building Your First Image
# Create Dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y cowsay
CMD ["/usr/games/cowsay", "Hello Docker!"]Dockerfile# Build and run
docker build -t my-cowsay .
docker run my-cowsay
📦 Step 3: Real Application Example
# Node.js app Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]Dockerfile🎼 Step 4: Multi-Container App
# docker-compose.yml
version: '3.8'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: password
🚀 Step 5: Production Deployment
- Use specific image tags
- Add health checks
- Configure logging
- Set resource limits
- Use secrets for sensitive data
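The checklist above can be sketched as a compose fragment; the registry, image tag, and health endpoint are placeholders to adapt to your app:

```yaml
services:
  app:
    image: my-registry.example.com/my-app:1.4.2   # specific tag, never :latest
    restart: unless-stopped
    healthcheck:                                   # health check
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
    logging:                                       # bounded logging
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    deploy:
      resources:                                   # resource limits
        limits:
          memory: 512M
          cpus: '0.5'
    secrets:                                       # secrets, not env vars
      - db_password
secrets:
  db_password:
    external: true
```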
🚀 Advanced Patterns
🏭 Microservices Architecture
# Advanced microservices setup
version: '3.8'
services:
  api-gateway:
    image: nginx:alpine
    ports:
      - "80:80"
    configs:
      - source: nginx_config
        target: /etc/nginx/nginx.conf
    depends_on:
      - user-service
      - product-service
  user-service:
    image: my-registry.com/user-service:latest
    networks:
      - backend
    environment:
      DATABASE_URL: postgresql://user:pass@user-db:5432/users
    deploy:
      replicas: 3
  product-service:
    image: my-registry.com/product-service:latest
    networks:
      - backend
    environment:
      DATABASE_URL: postgresql://user:pass@product-db:5432/products
  user-db:
    image: postgres:15
    networks:
      - backend
    environment:
      POSTGRES_DB: users
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

networks:
  backend:
    driver: overlay

configs:
  nginx_config:
    external: true

secrets:
  db_password:
    external: true
🔄 Blue-Green Deployment
# Blue-Green deployment pattern
# Deploy to blue environment
docker service create --name app-blue \
--replicas 3 \
--network app-network \
my-app:v1
# Blue now serves live traffic
# Deploy the new version to the green environment
docker service create --name app-green \
--replicas 3 \
--network app-network \
my-app:v2
# Switch traffic to green
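```shell
# Hypothetical helper for scripting the switch: compute the idle slot,
# deploy there, then flip the load balancer (a sketch, not a full deploy tool).
next_slot() {
  case "$1" in
    blue)  echo green ;;
    green) echo blue ;;
    *)     echo "unknown slot: $1" >&2; return 1 ;;
  esac
}
LIVE=blue
TARGET=$(next_slot "$LIVE")
echo "deploying new version to app-$TARGET"
# docker service create --name "app-$TARGET" ... my-app:v2  (as above)
```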
# Remove blue when confident
docker service rm app-blueBash📊 Monitoring & Observability
# monitoring/docker-compose.yml
version: '3.8'
services:
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml
- prometheus_data:/prometheus
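# The mounted ./prometheus.yml is not shown here; a minimal scrape config
# pointing at cAdvisor might contain (job names are illustrative):
#   global:
#     scrape_interval: 15s
#   scrape_configs:
#     - job_name: cadvisor
#       static_configs:
#         - targets: ['cadvisor:8080']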
grafana:
image: grafana/grafana:latest
ports:
- "3000:3000"
environment:
GF_SECURITY_ADMIN_PASSWORD: admin
volumes:
- grafana_data:/var/lib/grafana
cadvisor:
image: gcr.io/cadvisor/cadvisor:latest
ports:
- "8080:8080"
volumes:
- /:/rootfs:ro
- /var/run:/var/run:rw
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
volumes:
prometheus_data:
grafana_data:YAML🎯 Quick Reference
⚡ Essential Commands Cheat Sheet
# Container Lifecycle
docker run -d --name app -p 8080:80 nginx # Run container
docker ps # List running
docker stop app && docker rm app # Stop and remove
# Images
docker build -t my-app . # Build image
docker images # List images
docker rmi my-app # Remove image
# Compose
docker compose up -d # Start services
docker compose down # Stop services
docker compose logs -f # Follow logs
# Cleanup
docker system prune -a # Remove everything unused
docker container prune # Remove stopped containers
docker image prune # Remove unused images
# Debug
docker logs -f container_name # Follow logs
docker exec -it container_name /bin/bash # Interactive shell
docker inspect container_name # Container detailsBash🔧 Environment Variables
# Docker environment variables
export DOCKER_BUILDKIT=1 # Enable BuildKit
export COMPOSE_FILE=docker-compose.yml # Default compose file
export COMPOSE_PROJECT_NAME=myproject # Project name
export DOCKER_HOST=tcp://remote-host:2376 # Remote Docker host
export DOCKER_TLS_VERIFY=1 # Verify TLS when talking to a remote daemon
export DOCKER_CERT_PATH=/path/to/certs # Certificates for DOCKER_HOSTBash📝 Useful Aliases
# Add to ~/.bashrc or ~/.zshrc
alias dps='docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"'
alias dpa='docker ps -a'
alias di='docker images'
alias drmi='docker rmi'
alias dex='docker exec -it'
alias dlogs='docker logs -f'
alias dclean='docker system prune -a'
alias dcup='docker compose up -d'
alias dcdown='docker compose down'
alias dclogs='docker compose logs -f'
alias dcps='docker compose ps'
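```shell
# Shell functions can take arguments where plain aliases cannot;
# a hypothetical helper to shell into a container by name:
dsh() { docker exec -it "$1" /bin/sh; }
# usage: dsh my-nginx
```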
alias dcexec='docker compose exec'Bash🎯 Common Use Cases & Examples
🌐 Web Development Stack
# LAMP Stack (note: --link is deprecated; prefer a user-defined network)
docker run -d --name mysql \
-e MYSQL_ROOT_PASSWORD=root \
-e MYSQL_DATABASE=webapp \
-p 3306:3306 mysql:8.0
docker run -d --name webapp \
-p 80:80 \
-v $(pwd):/var/www/html \
--link mysql:mysql \
php:apacheBash📊 Data Science Environment
# Jupyter with scientific libraries
docker run -d --name jupyter \
-p 8888:8888 \
-v $(pwd):/home/jovyan/work \
jupyter/scipy-notebook
# TensorFlow environment
docker run -d --name tensorflow \
-p 8888:8888 \
-v $(pwd):/tf/notebooks \
tensorflow/tensorflow:latest-jupyterBash🚀 DevOps Tools
# Jenkins
docker run -d --name jenkins \
-p 8080:8080 -p 50000:50000 \
-v jenkins_home:/var/jenkins_home \
jenkins/jenkins:lts
# SonarQube
docker run -d --name sonarqube \
-p 9000:9000 \
-e SONAR_ES_BOOTSTRAP_CHECKS_DISABLE=true \
sonarqube:communityBashThis comprehensive Docker cheatsheet covers everything from beginner basics to advanced enterprise patterns. Keep it handy for your Docker journey! 🐳
Last Updated: October 2025 | Docker Version: 24.x+ | Compose Version: 2.x+
Running Containers
# Run a container
docker run <image>
# Run container in background (detached)
docker run -d <image>
# Run container with custom name
docker run --name <container_name> <image>
# Run container with port mapping
docker run -p <host_port>:<container_port> <image>
# Run container with environment variables
docker run -e VAR_NAME=value <image>
# Run container with volume mount
docker run -v <host_path>:<container_path> <image>
# Run container interactively
docker run -it <image> /bin/bash
# Run container and remove when stopped
docker run --rm <image>
# Run container with resource limits
docker run --memory=512m --cpus=1 <image>BashContainer Lifecycle
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# Start a stopped container
docker start <container_name/id>
# Stop a running container
docker stop <container_name/id>
# Restart a container
docker restart <container_name/id>
# Pause a container
docker pause <container_name/id>
# Unpause a container
docker unpause <container_name/id>
# Kill a container (force stop)
docker kill <container_name/id>
# Remove a container
docker rm <container_name/id>
# Remove all stopped containers
docker container pruneBashContainer Interaction
# Execute command in running container
docker exec <container_name/id> <command>
# Open interactive shell in container
docker exec -it <container_name/id> /bin/bash
# View container logs
docker logs <container_name/id>
# Follow container logs (real-time)
docker logs -f <container_name/id>
# View container processes
docker top <container_name/id>
# View container resource usage
docker stats <container_name/id>
# Inspect container details
docker inspect <container_name/id>
# Copy files between host and container
docker cp <container_name/id>:<src_path> <dest_path>
docker cp <src_path> <container_name/id>:<dest_path>BashImage Management
Working with Images
# List local images
docker images
docker image ls
# Pull image from registry
docker pull <image>:<tag>
# Build image from Dockerfile
docker build -t <image_name>:<tag> <path>
# Build with build arguments
docker build --build-arg ARG_NAME=value -t <image_name> .
# Tag an image
docker tag <source_image> <target_image>:<tag>
# Push image to registry
docker push <image_name>:<tag>
# Remove image
docker rmi <image_name/id>
# Remove unused images
docker image prune
# Remove all images
docker rmi $(docker images -q)
# Search for images on Docker Hub
docker search <term>
# View image history
docker history <image_name>
# Inspect image details
docker inspect <image_name>BashImage Export/Import
# Save image to tar file (preserves layers, tags, and history)
docker save -o <filename>.tar <image_name>
# Load image from tar file
docker load -i <filename>.tar
# Export container filesystem as tar file (flattened, no image history)
docker export <container_name> > <filename>.tar
# Import container from tar file
docker import <filename>.tar <image_name>:<tag>BashDockerfile
Basic Dockerfile Instructions
# Base image
FROM ubuntu:20.04
# Set maintainer
LABEL maintainer="your-email@example.com"
# Set working directory
WORKDIR /app
# Copy files (ADD also extracts local tar archives)
COPY . /app
ADD archive.tar.gz /app
# Run commands
RUN apt-get update && apt-get install -y python3
# Set environment variables
ENV NODE_ENV=production
ENV PATH="/app:${PATH}"
# Expose ports
EXPOSE 8080
# Set user
USER nobody
# Volume mount points
VOLUME ["/data"]
# Entry point (fixed executable; CMD supplies its default arguments)
ENTRYPOINT ["python3"]
# Default command (can be overridden)
CMD ["app.py"]
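# Combined effect: `docker run image` executes `python3 app.py`,
# while `docker run image other.py` overrides only the CMD arguments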
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8080/health || exit 1DockerfileAdvanced Dockerfile Techniques
# Multi-stage build
FROM node:16 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
# Build arguments
ARG VERSION=latest
ENV APP_VERSION=$VERSION
# Conditional instructions
ARG ENVIRONMENT=production
RUN if [ "$ENVIRONMENT" = "development" ]; then \
apt-get install -y debug-tools; \
fiDockerfileDockerfile Best Practices
# Use specific tags, not 'latest'
FROM node:16.14-alpine
# Combine RUN instructions to reduce layers
RUN apt-get update && \
apt-get install -y \
curl \
vim && \
rm -rf /var/lib/apt/lists/*
# Use .dockerignore to exclude unnecessary files
# Create a .dockerignore file containing:
#   node_modules
#   *.log
#   .git
#   README.md
# Don't run as root
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
USER nextjs
# Use COPY instead of ADD when possible
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .DockerfileDocker Compose
Basic Docker Compose Commands
Note: Compose v2 ships as a Docker plugin, so `docker compose` (no hyphen) replaces the legacy `docker-compose` binary; the subcommands below work with either form.
# Start services
docker-compose up
# Start services in background
docker-compose up -d
# Stop services
docker-compose down
# Stop and remove volumes
docker-compose down -v
# Build services
docker-compose build
# Pull latest images
docker-compose pull
# View service logs
docker-compose logs <service_name>
# Follow logs
docker-compose logs -f <service_name>
# Scale services
docker-compose up --scale <service_name>=3
# Execute command in service
docker-compose exec <service_name> <command>
# List services
docker-compose ps
# Restart services
docker-compose restart <service_name>BashDocker Compose File (docker-compose.yml)
version: '3.8'
services:
web:
build: .
ports:
- "8000:8000"
volumes:
- .:/app
environment:
- DEBUG=1
depends_on:
- db
networks:
- app-network
restart: unless-stopped
db:
image: postgres:13
environment:
POSTGRES_DB: myapp
POSTGRES_USER: user
POSTGRES_PASSWORD: password
volumes:
- postgres_data:/var/lib/postgresql/data
networks:
- app-network
redis:
image: redis:alpine
networks:
- app-network
volumes:
postgres_data:
networks:
app-network:
driver: bridgeYAMLAdvanced Compose Features
version: '3.8'
services:
app:
build:
context: .
dockerfile: Dockerfile.prod
args:
- ENV=production
ports:
- "80:8000"
environment:
- DATABASE_URL=postgresql://user:pass@db:5432/mydb
depends_on:
db:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
interval: 30s
timeout: 10s
retries: 3
deploy:
replicas: 3
resources:
limits:
memory: 512M
reservations:
memory: 256M
db:
image: postgres:13
healthcheck:
test: ["CMD-SHELL", "pg_isready -U user"]
interval: 10s
timeout: 5s
retries: 5YAMLNetworking
Network Commands
# List networks
docker network ls
# Create network
docker network create <network_name>
# Create custom bridge network
docker network create --driver bridge <network_name>
# Connect container to network
docker network connect <network_name> <container_name>
# Disconnect container from network
docker network disconnect <network_name> <container_name>
# Inspect network
docker network inspect <network_name>
# Remove network
docker network rm <network_name>
# Remove unused networks
docker network pruneBashNetwork Types
# Bridge (default)
docker run --network bridge <image>
# Host (use host networking)
docker run --network host <image>
# None (no networking)
docker run --network none <image>
# Custom network
docker network create mynetwork
docker run --network mynetwork <image>BashPort Mapping
# Map specific port
docker run -p 8080:80 nginx
# Map all exposed ports to random host ports
docker run -P nginx
# Map multiple ports
docker run -p 8080:80 -p 4443:443 nginx
# Map to specific interface
docker run -p 127.0.0.1:8080:80 nginx
# UDP port mapping
docker run -p 53:53/udp dns-serverBashVolumes & Storage
Volume Commands
# List volumes
docker volume ls
# Create volume
docker volume create <volume_name>
# Inspect volume
docker volume inspect <volume_name>
# Remove volume
docker volume rm <volume_name>
# Remove unused volumes
docker volume prune
# Remove all volumes
docker volume rm $(docker volume ls -q)BashVolume Types
# Named volume
docker run -v myvolume:/app/data <image>
# Bind mount
docker run -v /host/path:/container/path <image>
# Anonymous volume
docker run -v /app/data <image>
# Read-only mount
docker run -v /host/path:/container/path:ro <image>
# tmpfs mount (in-memory)
docker run --tmpfs /app/temp <image>BashVolume Examples
# Share volume between containers
docker volume create shared-data
docker run -v shared-data:/data --name container1 alpine
docker run -v shared-data:/data --name container2 alpine
# Backup volume
docker run --rm -v myvolume:/data -v $(pwd):/backup alpine tar czf /backup/backup.tar.gz -C /data .
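```shell
# Hedged sketch: date-stamped archive names keep successive backups apart
# (the backup itself still runs via the docker command above).
backup_file() { printf 'backup-%s-%s.tar.gz' "$1" "$(date +%Y%m%d)"; }
echo "would write: $(backup_file myvolume)"
```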
# Restore volume
docker run --rm -v myvolume:/data -v $(pwd):/backup alpine tar xzf /backup/backup.tar.gz -C /dataBashRegistry & Hub
Docker Hub Operations
# Login to Docker Hub
docker login
# Push to Docker Hub
docker tag myapp:latest username/myapp:latest
docker push username/myapp:latest
# Pull from Docker Hub
docker pull username/myapp:latest
# Search Docker Hub
docker search nginxBashPrivate Registry
# Run local registry
docker run -d -p 5000:5000 --name registry registry:2
# Tag for private registry
docker tag myapp:latest localhost:5000/myapp:latest
# Push to private registry
docker push localhost:5000/myapp:latest
# Pull from private registry
docker pull localhost:5000/myapp:latestBashRegistry Authentication
# Login to private registry
docker login myregistry.com
# Login with credentials (-p leaves the password in shell history)
docker login -u username -p password myregistry.com
# Safer: read the password from a file via stdin
docker login -u username --password-stdin myregistry.com < password.txtBashSystem Management
System Information
# Show system information
docker system info
# Show disk usage
docker system df
# Show detailed disk usage
docker system df -v
# Show system events
docker system events
# Show real-time events
docker system events --followBashCleanup Commands
# Remove all unused objects
docker system prune
# Remove all unused objects including volumes
docker system prune -a --volumes
# Remove stopped containers
docker container prune
# Remove unused images
docker image prune
# Remove unused networks
docker network prune
# Remove unused volumes
docker volume prune
# Remove everything (DANGEROUS!)
docker system prune -a --volumes --forceBashResource Management
# Set memory limit
docker run -m 512m <image>
# Set CPU limit
docker run --cpus=1.5 <image>
# Set CPU shares
docker run --cpu-shares=512 <image>
# Set block IO weight
docker run --blkio-weight=300 <image>
# Monitor resource usage
docker stats
# Monitor specific containers
docker stats container1 container2BashDebugging & Troubleshooting
Debugging Commands
# Inspect container/image
docker inspect <container/image>
# View logs
docker logs <container>
docker logs --tail 50 <container>
docker logs --since="2023-01-01T00:00:00" <container>
# Follow logs in real-time
docker logs -f <container>
# Debug running container
docker exec -it <container> /bin/bash
# Check container processes
docker top <container>
# View container changes
docker diff <container>
# Copy files for debugging
docker cp <container>:/path/file ./file
# Export container filesystem
docker export <container> > container.tarBashCommon Troubleshooting
# Check if Docker daemon is running
docker info
# Restart Docker service (Linux)
sudo systemctl restart docker
# Check Docker daemon logs (Linux)
sudo journalctl -u docker.service
# Debug network connectivity
docker exec -it <container> ping <target>
docker exec -it <container> nslookup <domain>
# Check port binding
docker port <container>
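```shell
# Hedged sketch: retry a probe until it succeeds, e.g. a health endpoint
# behind a mapped port (the probed command below is a placeholder).
wait_for() {
  retries=$1; delay=$2; shift 2
  i=0
  while [ "$i" -lt "$retries" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}
wait_for 5 1 true && echo "probe succeeded"
# e.g.: wait_for 10 2 curl -fs http://localhost:8080/health
```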
# Verify volume mounts
docker inspect <container> | grep -A 10 "Mounts"
# Test image build without cache
docker build --no-cache -t <image> .BashPerformance Debugging
# Monitor resource usage
docker stats --no-stream
# Check container resource limits (stored under HostConfig)
docker inspect <container> --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}'
# Profile container startup time
time docker run --rm <image> <command>
# Analyze image layers
docker history <image>
# Check for memory leaks
docker exec <container> top
docker exec <container> ps auxBashBest Practices
Security Best Practices
# 1. Use minimal base images (and pin them; see point 3)
FROM alpine:3.19
# 2. Don't run as root
RUN addgroup -g 1001 -S appgroup && \
adduser -S -u 1001 -G appgroup appuser
USER appuser
# 3. Use specific versions
FROM node:16.14-alpine
# 4. Minimize attack surface
RUN apk add --no-cache curl && \
rm -rf /var/cache/apk/*
# 5. Use multi-stage builds
FROM node:16 AS builder
# ... build steps
FROM node:16-alpine
COPY --from=builder /app/dist ./distDockerfilePerformance Best Practices
# 1. Optimize layer caching
COPY package*.json ./
RUN npm install
COPY . .
# 2. Minimize image size
RUN apt-get update && \
apt-get install -y --no-install-recommends curl && \
rm -rf /var/lib/apt/lists/*
# 3. Use .dockerignore
# Add to .dockerignore:
#   node_modules
#   .git
#   *.log
#   README.md
# 4. Combine RUN instructions
RUN apt-get update && \
apt-get install -y package1 package2 && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*DockerfileProduction Best Practices
# docker-compose.prod.yml
version: '3.8'
services:
app:
image: myapp:${VERSION}
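# ${VERSION} is interpolated from the shell environment or from an .env
# file next to this compose file, e.g. a single line: VERSION=1.4.2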
restart: unless-stopped
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
interval: 30s
timeout: 10s
retries: 3
deploy:
resources:
limits:
memory: 512M
cpus: '0.5'
reservations:
memory: 256M
cpus: '0.25'YAMLDevelopment Workflow
# 1. Use docker-compose for development
docker-compose -f docker-compose.dev.yml up
# 2. Use volume mounts for hot reloading
volumes:
- .:/app
- /app/node_modules
# 3. Use environment-specific configs
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
# 4. Use health checks
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3BashMonitoring & Logging
# Use structured logging
docker run --log-driver=json-file --log-opt max-size=10m <image>
# Monitor with Prometheus
docker run -d -p 9090:9090 prom/prometheus
# Use centralized logging
docker run --log-driver=syslog --log-opt syslog-address=udp://logs.example.com:514 <image>
# Health checks in production
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8080/health || exit 1BashUseful Docker Images
Development Tools
# Database
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=password postgres:13
docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password mysql:8
docker run -d -p 6379:6379 redis:alpine
docker run -d -p 27017:27017 mongo:latest
# Web servers
docker run -d -p 80:80 nginx:alpine
docker run -d -p 80:80 httpd:alpine
docker run -d -p 8080:8080 tomcat:9
# Tools
docker run --rm -v $(pwd):/workspace node:16 npm install
docker run --rm -v $(pwd):/app python:3.9 pip install -r requirements.txtBashMonitoring & Observability
# Prometheus
docker run -d -p 9090:9090 prom/prometheus
# Grafana
docker run -d -p 3000:3000 grafana/grafana
# Elasticsearch
docker run -d -p 9200:9200 -e "discovery.type=single-node" elasticsearch:7.17.0
# Kibana
docker run -d -p 5601:5601 kibana:7.17.0BashThis cheatsheet covers the most essential Docker commands and concepts. For more detailed information, refer to the official Docker documentation.