Tutorial · Docker · DevOps · March 14, 2026

Docker Tutorial for Beginners — From Install to Deploy (2026)

Docker changed how we ship software. Instead of "it works on my machine," you package your app with everything it needs into a container that runs identically everywhere — your laptop, a test server, or production. This guide takes you from zero Docker knowledge to deploying a containerized application, with real code at every step.

What Is Docker and Why Use It?

Docker is a platform for building, running, and shipping applications inside containers — lightweight, isolated environments that bundle your code with its exact dependencies. Unlike virtual machines, containers share the host operating system's kernel, which makes them fast to start (seconds, not minutes) and efficient with resources.

Why developers reach for Docker in 2026:

- Consistent environments: the same image runs on your laptop, in CI, and in production.
- Fast startup: containers boot in seconds, not the minutes a VM needs.
- Dependency isolation: each app ships its own libraries, so projects never conflict on the same host.
- Easy cleanup: remove a container and the host system is left untouched.

Installing Docker

Docker installation varies by operating system. Here's the fastest path for each.

Linux (Ubuntu/Debian)

The official install script handles everything:

# Install Docker Engine
curl -fsSL https://get.docker.com | sudo sh

# Add your user to the docker group (avoids needing sudo)
sudo usermod -aG docker $USER

# Log out and back in, then verify
docker --version
docker run hello-world

macOS

Download Docker Desktop for Mac from the official site. It includes Docker Engine, Docker CLI, Docker Compose, and a GUI dashboard. Works on both Intel and Apple Silicon.

Windows

Download Docker Desktop for Windows. It requires WSL 2 (Windows Subsystem for Linux), which the installer will set up if needed. After installation, open a terminal and run docker --version to verify.

Key Concepts You Need to Know

Before writing any code, understand these five building blocks. Everything in Docker revolves around them.

Images

An image is a read-only template containing your application, its runtime, libraries, and configuration. Images are built in layers — each instruction in a Dockerfile creates a new layer, and Docker caches layers to speed up rebuilds. You pull pre-built images from Docker Hub (like python:3.12-slim or nginx:alpine) and build your own on top of them.
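You can see this layered structure for yourself. A quick sketch using the standard CLI (requires a running Docker daemon):

```shell
# Pull a pre-built image from Docker Hub
docker pull python:3.12-slim

# List each layer that makes up the image, with the instruction that created it
docker history python:3.12-slim
```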

Containers

A container is a running instance of an image. It has its own isolated filesystem, network interface, and process tree. You can run multiple containers from the same image, each with its own state. Containers are ephemeral by default — when you stop and remove one, any data written inside it is gone.

Dockerfiles

A Dockerfile is a text file with instructions for building an image. Each line is a step: start from a base image, copy files, install dependencies, set environment variables, define the startup command. Dockerfiles are checked into version control alongside your application code.

Volumes

Volumes persist data beyond the container's lifecycle. When a container is removed, data stored in a volume survives. Use volumes for databases, uploaded files, logs — anything that should outlast individual containers.
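A minimal sketch of volume persistence (assumes a running Docker daemon; the volume name is illustrative):

```shell
# Create a named volume and write to it from a throwaway container
docker volume create demo-data
docker run --rm -v demo-data:/data alpine sh -c 'echo hello > /data/greeting'

# That container is gone, but the data survives in the volume
docker run --rm -v demo-data:/data alpine cat /data/greeting
```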

Networks

Docker networks let containers communicate with each other. By default, containers on the same Docker network can reach each other by container name (Docker provides built-in DNS). This is how your web app container talks to your database container without exposing the database to the outside world.
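Container-name DNS is easy to demonstrate (daemon required; the network and container names here are illustrative):

```shell
# Create a network and attach a container to it
docker network create app-net
docker run -d --name web-demo --network app-net nginx:alpine

# Another container on the same network reaches it by name, not IP
docker run --rm --network app-net alpine ping -c 1 web-demo
```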

Your First Dockerfile: A Python Flask App

Let's containerize a simple Flask web application. Start with this project structure:

my-flask-app/
├── app.py
├── requirements.txt
└── Dockerfile

The application code:

# app.py
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def home():
    return jsonify({"message": "Hello from Docker!", "status": "running"})

@app.route("/health")
def health():
    return jsonify({"status": "healthy"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

The dependencies:

# requirements.txt
flask==3.1.0
gunicorn==23.0.0

Now the Dockerfile — this is where the magic happens:

# Dockerfile
FROM python:3.12-slim

# Set working directory inside the container
WORKDIR /app

# Copy dependency file first (for better layer caching)
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY app.py .

# Expose port 5000 (documentation — doesn't actually publish it)
EXPOSE 5000

# Run with gunicorn in production
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "2", "app:app"]

Tip — Layer caching: Notice that we copy requirements.txt and install dependencies before copying the application code. Docker caches each layer, so if your dependencies haven't changed, Docker skips the pip install step entirely on rebuilds. This can save minutes on large projects.

Building and Running Containers

Build the image from your Dockerfile:

# Build the image and tag it "my-flask-app"
docker build -t my-flask-app .

# Verify it was created
docker images

Run a container from the image:

# Run in detached mode (-d), map port 5000
docker run -d --name flask-server -p 5000:5000 my-flask-app

# Check it's running
docker ps

# View logs
docker logs flask-server

# Test it
curl http://localhost:5000/health

Once your container is running, you can test the API endpoints directly in your browser with our free API Tester — enter http://localhost:5000, pick GET, and hit send. No installation required.

Managing the container lifecycle:

# Stop the container
docker stop flask-server

# Start it again
docker start flask-server

# Remove it (must be stopped first)
docker rm flask-server

# Or force-remove a running container
docker rm -f flask-server

Docker Compose: Multi-Container Applications

Real applications rarely run as a single container. A typical web app needs at least a web server and a database. Docker Compose lets you define and run multi-container applications with a single YAML file.

Here's a docker-compose.yml that adds PostgreSQL and Redis alongside our Flask app:

# docker-compose.yml
services:
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      - DATABASE_URL=postgresql://appuser:secret@db:5432/myapp
      - REDIS_URL=redis://cache:6379/0
    depends_on:
      - db
      - cache
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  postgres_data:

Notice that the web service connects to the database using db as the hostname — Docker Compose creates a network automatically and each service is reachable by its service name.
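Inside the web container, the application reads those connection strings from the environment. A minimal sketch using only the standard library (the tutorial's app.py doesn't do this yet, so treat it as an illustration):

```python
import os
from urllib.parse import urlparse

# Compose injects DATABASE_URL via the `environment:` block;
# fall back to the same value for local runs outside Compose
db_url = os.environ.get("DATABASE_URL", "postgresql://appuser:secret@db:5432/myapp")

parts = urlparse(db_url)
print(parts.hostname)  # "db" — the Compose service name, resolved by Docker's built-in DNS
print(parts.port)      # 5432
```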

Run everything with one command:

# Start all services in detached mode
docker compose up -d

# View logs for all services
docker compose logs -f

# View logs for just the web service
docker compose logs -f web

# Stop everything
docker compose down

# Stop and remove volumes (deletes database data)
docker compose down -v

Tip — Scheduling health checks: In production, you'll want to hit the /health endpoint on a schedule and alert if it fails. Our Cron Expression Generator can build the schedule in crontab, GitHub Actions, Kubernetes CronJob, or systemd format — just describe "every 5 minutes" and get the exact syntax.
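Compose can also run the health check itself, so docker ps reports the service as healthy or unhealthy and dependent services can wait on it. A sketch of a healthcheck block for the web service (it uses Python's standard library because the slim image ships without curl):

```yaml
  web:
    build: .
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:5000/health')"]
      interval: 30s
      timeout: 5s
      retries: 3
    restart: unless-stopped
```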

Docker Commands Cheat Sheet

Command | What It Does
docker build -t name . | Build an image from a Dockerfile in the current directory
docker run -d -p 8080:80 name | Run a container in the background, mapping host port 8080 to container port 80
docker ps | List running containers
docker ps -a | List all containers (including stopped ones)
docker logs container | View a container's logs
docker exec -it container bash | Open a shell inside a running container
docker images | List all local images
docker rmi image | Remove an image
docker volume ls | List all volumes
docker system prune | Remove unused images, containers, and networks
docker system df | Show disk usage by Docker component
docker compose up -d | Start all Compose services in the background
docker compose down | Stop and remove all Compose services

Deploying to Production

Getting Docker running locally is one thing. Deploying to production requires a few additional steps to make your setup reliable and secure.

Push Your Image to a Registry

Docker Hub is the default public registry, but you can also use GitHub Container Registry (ghcr.io), AWS ECR, or Google Artifact Registry.

# Tag your image for Docker Hub
docker tag my-flask-app yourusername/my-flask-app:1.0.0

# Log in and push
docker login
docker push yourusername/my-flask-app:1.0.0

Deploy on Your Server

On your production server, pull the image and run it:

# Pull the latest image
docker pull yourusername/my-flask-app:1.0.0

# Run with environment variables and restart policy
docker run -d \
  --name flask-prod \
  --restart unless-stopped \
  -p 5000:5000 \
  -e DATABASE_URL="postgresql://user:pass@db-host:5432/prod" \
  yourusername/my-flask-app:1.0.0

For multi-container deployments, copy your docker-compose.yml to the server and run docker compose up -d. Use a reverse proxy like Nginx or Traefik in front of your application containers to handle TLS termination and load balancing.
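A minimal Nginx reverse-proxy sketch for the setup above (the server name and certificate paths are illustrative, and the TLS certificates are assumed to exist already):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        # Forward requests to the Flask container published on port 5000
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```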

Automate with CI/CD

The standard production workflow: push code to GitHub, a CI pipeline builds the Docker image, runs tests inside it, pushes the image to a registry, and deploys it. Here's a minimal GitHub Actions workflow:

# .github/workflows/deploy.yml
name: Build and Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .

      - name: Run tests
        run: docker run myapp:${{ github.sha }} pytest

      - name: Push to registry
        run: |
          echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
          docker tag myapp:${{ github.sha }} ${{ secrets.DOCKER_USERNAME }}/myapp:latest
          docker push ${{ secrets.DOCKER_USERNAME }}/myapp:latest

Docker Best Practices

Multi-Stage Builds

Multi-stage builds let you use one image for building (with compilers, dev tools) and a different, smaller image for running. This dramatically reduces final image size.

# Build stage — includes build tools
FROM python:3.12 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt

# Production stage — minimal image
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /root/.local /root/.local
COPY app.py .
ENV PATH=/root/.local/bin:$PATH
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]

The final image only contains the slim Python runtime and your installed packages — no compilers, no build artifacts, no cached pip downloads.

Use .dockerignore

A .dockerignore file prevents unnecessary files from being sent to the Docker daemon during builds. This speeds up builds and keeps images smaller.

# .dockerignore
.git
.gitignore
__pycache__
*.pyc
.env
.venv
node_modules
*.md
docker-compose.yml
.github

Tip — Pattern matching: The .dockerignore syntax supports glob patterns like *.pyc and **/__pycache__. Need to construct a complex pattern? Our Regex Generator helps you build and test patterns quickly — describe what you want to match and get the expression back.

Security Essentials

A few habits close the most common gaps:

- Run as a non-root user: add a USER directive so a container compromise doesn't grant root inside the container.
- Keep secrets out of images: pass credentials via environment variables or a secret store; never COPY them into the image.
- Pin base image versions: python:3.12-slim, not python:latest, so builds are reproducible and reviewable.
- Scan images for known vulnerabilities with a tool such as Trivy or Docker Scout before shipping.

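Running as a non-root user is one of the highest-value hardening steps. A minimal sketch extending the tutorial's Dockerfile (the appuser name is an illustrative choice):

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# Create an unprivileged user and drop root before the app starts
RUN useradd --create-home appuser
USER appuser

EXPOSE 5000
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "2", "app:app"]
```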
More Production Tips

- Add a HEALTHCHECK (in the Dockerfile or Compose file) so the runtime can report unhealthy containers.
- Set memory and CPU limits so one runaway container can't starve the host.
- Tag images with explicit versions (1.0.0, a git SHA) rather than relying on latest.
- Configure log rotation (for example --log-opt max-size=10m) so container logs don't fill the disk.
- Back up named volumes — they hold the only state that outlives your containers.

Frequently Asked Questions

What is the difference between a Docker image and a container?

A Docker image is a read-only template containing your application code, dependencies, and configuration. A container is a running instance of that image. Think of the image as a class and the container as an object — you can run multiple containers from the same image, each with its own isolated state.

Is Docker free for personal and commercial use?

Docker Engine (the core runtime) is free and open source under the Apache 2.0 license. Docker Desktop is free for personal use, education, and small businesses (under 250 employees and under $10M revenue). Larger organizations need a paid Docker Business subscription for Docker Desktop, but can still use Docker Engine directly on Linux for free.

Do I need Docker Compose or can I use plain Docker?

Plain Docker works fine for single-container applications. Docker Compose becomes essential when your app needs multiple services — for example, a web server plus a database plus a cache. Compose lets you define all services in one YAML file and start everything with a single command. For anything beyond a toy project, Compose saves significant time.

How much disk space does Docker use?

Docker itself uses minimal space, but images and containers add up. A minimal Alpine-based image is about 5MB, while a full Python image can be 900MB+. Using slim or Alpine base images, multi-stage builds, and regularly pruning unused images with docker system prune keeps disk usage manageable. Run docker system df to see exactly what's using space.

What is the difference between Docker and a virtual machine?

Virtual machines run a full guest operating system on top of a hypervisor, which is resource-heavy (each VM needs its own kernel, drivers, and OS). Docker containers share the host OS kernel and only package the application and its dependencies, making them far lighter — a container starts in seconds and uses a fraction of the memory. The tradeoff is that containers provide process-level isolation rather than full hardware-level isolation.

What's Next

You now have working knowledge of Docker — from writing Dockerfiles and building images to running multi-container applications with Compose and deploying to production. The natural next steps are learning Docker networking in depth, setting up container orchestration with Kubernetes or Docker Swarm for high-availability deployments, and implementing CI/CD pipelines that automatically build, test, and deploy your containers.

The core workflow stays the same regardless of scale: write a Dockerfile, build an image, run containers. Whether you're shipping a side project or a distributed system with dozens of microservices, these fundamentals carry you through.

Building containerized APIs? Test your endpoints and automate your workflows with these free tools — no signup required.

API Tester · Cron Generator · Regex Generator