Docker changed how we ship software. Instead of "it works on my machine," you package your app with everything it needs into a container that runs identically everywhere — your laptop, a test server, or production. This guide takes you from zero Docker knowledge to deploying a containerized application, with real code at every step.
Docker is a platform for building, running, and shipping applications inside containers — lightweight, isolated environments that bundle your code with its exact dependencies. Unlike virtual machines, containers share the host operating system's kernel, which makes them fast to start (seconds, not minutes) and efficient with resources.
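You can see the shared kernel for yourself once Docker is installed: a container reports the host's kernel version, not its own.

```bash
# Containers share the host kernel: this prints the host's kernel version
docker run --rm alpine uname -r
```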
Why developers reach for Docker in 2026: identical environments from laptop to production, startup in seconds rather than minutes, lighter resource usage than virtual machines, and images that any CI pipeline or cloud host can run.
Docker installation varies by operating system. Here's the fastest path for each.
On Linux, the official install script handles everything:
```bash
# Install Docker Engine
curl -fsSL https://get.docker.com | sudo sh

# Add your user to the docker group (avoids needing sudo)
sudo usermod -aG docker $USER

# Log out and back in, then verify
docker --version
docker run hello-world
```
On macOS, download Docker Desktop for Mac from the official site. It includes Docker Engine, the Docker CLI, Docker Compose, and a GUI dashboard, and it works on both Intel and Apple Silicon.
On Windows, download Docker Desktop for Windows. It requires WSL 2 (Windows Subsystem for Linux), which the installer will set up if needed. After installation, open a terminal and run `docker --version` to verify.
Before writing any code, understand these five building blocks. Everything in Docker revolves around them.
An image is a read-only template containing your application, its runtime, libraries, and configuration. Images are built in layers — each instruction in a Dockerfile creates a new layer, and Docker caches layers to speed up rebuilds. You pull pre-built images from Docker Hub (like python:3.12-slim or nginx:alpine) and build your own on top of them.
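To make the layering concrete, pull an image and inspect it; `docker history` lists the instruction that produced each layer and its size.

```bash
# Download a pre-built image from Docker Hub
docker pull python:3.12-slim

# Show the layers the image is built from, one per Dockerfile instruction
docker history python:3.12-slim
```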
A container is a running instance of an image. It has its own isolated filesystem, network interface, and process tree. You can run multiple containers from the same image, each with its own state. Containers are ephemeral by default — when you stop and remove one, any data written inside it is gone.
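A quick experiment shows this ephemerality: data written inside a container disappears when the container is removed.

```bash
# Write a file inside a container, then remove the container
docker run --name scratch alpine sh -c 'echo hello > /data.txt'
docker rm scratch

# A fresh container from the same image starts clean: no data.txt
docker run --rm alpine ls /data.txt
```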
A Dockerfile is a text file with instructions for building an image. Each line is a step: start from a base image, copy files, install dependencies, set environment variables, define the startup command. Dockerfiles are checked into version control alongside your application code.
Volumes persist data beyond the container's lifecycle. When a container is removed, data stored in a volume survives. Use volumes for databases, uploaded files, logs — anything that should outlast individual containers.
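A minimal demonstration: write to a named volume from one throwaway container, then read the data back from another.

```bash
# Create a named volume and write to it from a short-lived container
docker volume create app_data
docker run --rm -v app_data:/data alpine sh -c 'echo persisted > /data/note.txt'

# The data survives: a brand-new container sees the same file
docker run --rm -v app_data:/data alpine cat /data/note.txt
```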
Docker networks let containers communicate with each other. By default, containers on the same Docker network can reach each other by container name (Docker provides built-in DNS). This is how your web app container talks to your database container without exposing the database to the outside world.
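Here is the built-in DNS in action (the network and container names are illustrative): two containers on the same user-defined network reach each other by name.

```bash
# Create a network and start a database on it
docker network create app_net
docker run -d --name db --network app_net -e POSTGRES_PASSWORD=secret postgres:16-alpine

# Another container on the same network resolves "db" by name
docker run --rm --network app_net alpine ping -c 1 db
```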
Let's containerize a simple Flask web application. Start with this project structure:
```
my-flask-app/
├── app.py
├── requirements.txt
└── Dockerfile
```
The application code:
```python
# app.py
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def home():
    return jsonify({"message": "Hello from Docker!", "status": "running"})

@app.route("/health")
def health():
    return jsonify({"status": "healthy"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```
```
# requirements.txt
flask==3.1.0
gunicorn==23.0.0
```
Now the Dockerfile — this is where the magic happens:
```dockerfile
# Dockerfile
FROM python:3.12-slim

# Set working directory inside the container
WORKDIR /app

# Copy dependency file first (for better layer caching)
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY app.py .

# Expose port 5000 (documentation — doesn't actually publish it)
EXPOSE 5000

# Run with gunicorn in production
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "2", "app:app"]
```
Notice the order of instructions: we copy `requirements.txt` and install dependencies before copying the application code. Docker caches each layer, so if your dependencies haven't changed, Docker skips the `pip install` step entirely on rebuilds. This can save minutes on large projects.
Build the image from your Dockerfile:
```bash
# Build the image and tag it "my-flask-app"
docker build -t my-flask-app .

# Verify it was created
docker images
```
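The layer caching described above shows up here: rebuilding after a change to `app.py` alone reuses the cached dependency layer, while `--no-cache` forces a full rebuild.

```bash
# Rebuild after editing app.py only: the pip install layer comes from cache
docker build -t my-flask-app .

# Force every layer to rebuild from scratch
docker build --no-cache -t my-flask-app .
```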
Run a container from the image:
```bash
# Run in detached mode (-d), map port 5000
docker run -d --name flask-server -p 5000:5000 my-flask-app

# Check it's running
docker ps

# View logs
docker logs flask-server

# Test it
curl http://localhost:5000/health
```
Once your container is running, you can test the API endpoints directly in your browser with our free API Tester — enter http://localhost:5000, pick GET, and hit send. No installation required.
Managing the container lifecycle:
```bash
# Stop the container
docker stop flask-server

# Start it again
docker start flask-server

# Remove it (must be stopped first)
docker rm flask-server

# Or force-remove a running container
docker rm -f flask-server
```
Real applications rarely run as a single container. A typical web app needs at least a web server and a database. Docker Compose lets you define and run multi-container applications with a single YAML file.
Here's a docker-compose.yml that adds PostgreSQL and Redis alongside our Flask app:
```yaml
# docker-compose.yml
services:
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      - DATABASE_URL=postgresql://appuser:secret@db:5432/myapp
      - REDIS_URL=redis://cache:6379/0
    depends_on:
      - db
      - cache
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  postgres_data:
```
Notice that the web service connects to the database using db as the hostname — Docker Compose creates a network automatically and each service is reachable by its service name.
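As a sketch of what this looks like from inside the `web` container (this snippet is illustrative, not part of the app above), the service names behave like any other hostname; here the Python standard library resolves and connects to both services.

```python
# connectivity_check.py: hypothetical snippet run inside the "web" container.
# "db" and "cache" are plain hostnames thanks to Compose's built-in DNS.
import os
import socket

# Compose injects these via the "environment" section of docker-compose.yml
database_url = os.environ.get("DATABASE_URL", "")
redis_url = os.environ.get("REDIS_URL", "")

# Cheap reachability check: open a TCP connection to each service by name
for host, port in [("db", 5432), ("cache", 6379)]:
    with socket.create_connection((host, port), timeout=3):
        print(f"reached {host}:{port}")
```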
Run everything with one command:
```bash
# Start all services in detached mode
docker compose up -d

# View logs for all services
docker compose logs -f

# View logs for just the web service
docker compose logs -f web

# Stop everything
docker compose down

# Stop and remove volumes (deletes database data)
docker compose down -v
```
You'll also want to know when the app goes down: hit the `/health` endpoint on a schedule and alert if it fails. Our Cron Expression Generator can build the schedule in crontab, GitHub Actions, Kubernetes CronJob, or systemd format — just describe "every 5 minutes" and get the exact syntax.
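For example, a plain crontab entry for that check might look like this (the failure action is a placeholder; wire it to whatever alerting you use):

```bash
# Hypothetical crontab line: curl /health every 5 minutes, log on failure
*/5 * * * * curl -fsS http://localhost:5000/health > /dev/null || logger -t flask-app "health check failed"
```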
| Command | What It Does |
|---|---|
| `docker build -t name .` | Build an image from a Dockerfile in the current directory |
| `docker run -d -p 8080:80 name` | Run a container in the background, mapping host port 8080 to container port 80 |
| `docker ps` | List running containers |
| `docker ps -a` | List all containers (including stopped) |
| `docker logs container` | View container logs |
| `docker exec -it container bash` | Open a shell inside a running container |
| `docker images` | List all local images |
| `docker rmi image` | Remove an image |
| `docker volume ls` | List all volumes |
| `docker system prune` | Remove unused images, containers, and networks |
| `docker system df` | Show disk usage by Docker components |
| `docker compose up -d` | Start all Compose services in the background |
| `docker compose down` | Stop and remove all Compose services |
Getting Docker running locally is one thing. Deploying to production requires a few additional steps to make your setup reliable and secure.
Docker Hub is the default public registry, but you can also use GitHub Container Registry (ghcr.io), AWS ECR, or Google Artifact Registry.
```bash
# Tag your image for Docker Hub
docker tag my-flask-app yourusername/my-flask-app:1.0.0

# Log in and push
docker login
docker push yourusername/my-flask-app:1.0.0
```
On your production server, pull the image and run it:
```bash
# Pull the image from the registry
docker pull yourusername/my-flask-app:1.0.0

# Run with environment variables and a restart policy
docker run -d \
  --name flask-prod \
  --restart unless-stopped \
  -p 5000:5000 \
  -e DATABASE_URL="postgresql://user:pass@db-host:5432/prod" \
  yourusername/my-flask-app:1.0.0
```
For multi-container deployments, copy your docker-compose.yml to the server and run docker compose up -d. Use a reverse proxy like Nginx or Traefik in front of your application containers to handle TLS termination and load balancing.
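As an illustrative sketch (the service layout and the mounted config file are assumptions, and TLS setup is omitted), an nginx service could sit in front of the web service in the same Compose file:

```yaml
# Hypothetical addition to docker-compose.yml: nginx in front of the app
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      # Assumes a local nginx.conf that proxies requests to http://web:5000
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - web
    restart: unless-stopped
```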
The standard production workflow: push code to GitHub, a CI pipeline builds the Docker image, runs tests inside it, pushes the image to a registry, and deploys it. Here's a minimal GitHub Actions workflow:
```yaml
# .github/workflows/deploy.yml
name: Build and Deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .

      - name: Run tests
        run: docker run myapp:${{ github.sha }} pytest

      - name: Push to registry
        run: |
          echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
          docker tag myapp:${{ github.sha }} ${{ secrets.DOCKER_USERNAME }}/myapp:latest
          docker push ${{ secrets.DOCKER_USERNAME }}/myapp:latest
```
Multi-stage builds let you use one image for building (with compilers, dev tools) and a different, smaller image for running. This dramatically reduces final image size.
```dockerfile
# Build stage — includes build tools
FROM python:3.12 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt

# Production stage — minimal image
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /root/.local /root/.local
COPY app.py .
ENV PATH=/root/.local/bin:$PATH
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]
```
The final image only contains the slim Python runtime and your installed packages — no compilers, no build artifacts, no cached pip downloads.
A .dockerignore file prevents unnecessary files from being sent to the Docker daemon during builds. This speeds up builds and keeps images smaller.
```
# .dockerignore
.git
.gitignore
__pycache__
*.pyc
.env
.venv
node_modules
*.md
docker-compose.yml
.github
```
The `.dockerignore` syntax supports glob patterns like `*.pyc` and `**/__pycache__`. Need to construct a complex pattern? Our Regex Generator helps you build and test patterns quickly — describe what you want to match and get the expression back.
Before you ship, a few security and reliability best practices:

- **Run as a non-root user.** Add `RUN adduser --disabled-password appuser` and `USER appuser` in your Dockerfile. If an attacker breaks into your container, they get limited privileges.
- **Pin your base image version.** `python:3.12.4-slim` is better than `python:latest`. Pinning versions prevents surprise breakage from upstream changes.
- **Scan images for vulnerabilities.** Run `docker scout cves my-image` to check for known CVEs in your base image and dependencies.
- **Never bake secrets into images.** Don't `COPY .env` in your Dockerfile. Anyone who pulls your image can extract those files.
- **Prefer minimal base images.** Use `-slim` or `-alpine` variants. Fewer packages means fewer potential vulnerabilities.
- **Define a health check.** Add `HEALTHCHECK CMD curl -f http://localhost:5000/health || exit 1` to your Dockerfile so Docker knows when your app is actually ready.
- **Tag images with versions.** Use semantic versions (`1.2.3`) or git SHAs (`abc1234`) instead of just `latest`. This makes rollbacks straightforward.
- **Clean up regularly.** Run `docker system prune` periodically to reclaim disk space from unused images and stopped containers.

**What's the difference between an image and a container?** A Docker image is a read-only template containing your application code, dependencies, and configuration. A container is a running instance of that image. Think of the image as a class and the container as an object — you can run multiple containers from the same image, each with its own isolated state.
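To see that in practice, run two containers from the image built earlier (the container names here are illustrative). Each gets its own writable filesystem, so a file created in one never appears in the other.

```bash
# Two independent containers from the same image
docker run -d --name app1 my-flask-app
docker run -d --name app2 my-flask-app

# State is per-container: this file exists only in app1
docker exec app1 touch /app/only-in-app1
docker exec app1 ls /app
docker exec app2 ls /app
```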
**Is Docker free?** Docker Engine (the core runtime) is free and open source under the Apache 2.0 license. Docker Desktop is free for personal use, education, and small businesses (under 250 employees and under $10M revenue). Larger organizations need a paid Docker Business subscription for Docker Desktop, but can still use Docker Engine directly on Linux for free.
**Do I need Docker Compose?** Plain Docker works fine for single-container applications. Docker Compose becomes essential when your app needs multiple services — for example, a web server plus a database plus a cache. Compose lets you define all services in one YAML file and start everything with a single command. For anything beyond a toy project, Compose saves significant time.
**How much disk space does Docker use?** Docker itself uses minimal space, but images and containers add up. A minimal Alpine-based image is about 5MB, while a full Python image can be 900MB+. Using slim or Alpine base images, multi-stage builds, and regularly pruning unused images with `docker system prune` keeps disk usage manageable. Run `docker system df` to see exactly what's using space.
**How is Docker different from a virtual machine?** Virtual machines run a full guest operating system on top of a hypervisor, which is resource-heavy (each VM needs its own kernel, drivers, and OS). Docker containers share the host OS kernel and only package the application and its dependencies, making them far lighter — a container starts in seconds and uses a fraction of the memory. The tradeoff is that containers provide process-level isolation rather than full hardware-level isolation.
You now have working knowledge of Docker — from writing Dockerfiles and building images to running multi-container applications with Compose and deploying to production. The natural next steps are learning Docker networking in depth, setting up container orchestration with Kubernetes or Docker Swarm for high-availability deployments, and implementing CI/CD pipelines that automatically build, test, and deploy your containers.
The core workflow stays the same regardless of scale: write a Dockerfile, build an image, run containers. Whether you're shipping a side project or a distributed system with dozens of microservices, these fundamentals carry you through.
Building containerized APIs? Test your endpoints and automate your workflows with these free tools — no signup required.
API Tester · Cron Generator · Regex Generator