
Docker in Plain Language: What It Actually Does and Why You Should Care

Docker confused me for way too long. Here is the complete, no-jargon guide covering Dockerfiles, images, containers, Compose, common mistakes, and when Docker is actually worth using.

admin · Apr 11, 2026 · 12 min read

I spent way too long thinking Docker was unnecessarily complicated. Every tutorial started with container theory, moved into Linux namespaces and cgroups, and by page three I was already lost. Turns out, I was overthinking it. Docker is actually one of the most practical tools you can learn, and the core concepts fit in your head once you strip away the jargon.

Let me save you the confusion I went through.

The "It Works on My Machine" Problem

Here is a scenario every developer has lived through at least once: you build an app on your laptop and everything works perfectly. The tests pass, the API responds, the frontend renders beautifully. You deploy it to a server and... it breaks.

Different operating system. Different Python version. A library that was installed globally on your machine but missing on the server. A configuration file in a different location. An environment variable you forgot to set.

This is the fundamental problem Docker solves. It packages your application with everything it needs — the right version of Java or Node.js, the exact libraries with correct versions, the configuration files, the environment variables, all of it — into a self-contained unit called a container. That container runs the same way on your laptop, your teammate's laptop, and the production server.

No more "but it works on my machine." If it works in the container, it works everywhere.

Containers vs Virtual Machines: The Apartment Analogy

Before containers, we had virtual machines. Both solve the environment consistency problem, but in very different ways.

A virtual machine runs an entire operating system — its own kernel, its own file system, its own everything. It is like renting a whole apartment. You get complete isolation, but you are also paying for the kitchen, the bathroom, and the living room even if all you need is a desk to work at.

A container shares the host operating system's kernel and only packages the application-level stuff. It is like getting a private room in a co-working space — you have your own space with your own stuff, but you share the building's infrastructure (plumbing, electricity, security).

This makes containers:

  • Much smaller — A VM image might be 2-10 GB. A container image for a Java app is typically 200-400 MB.
  • Much faster to start — VMs take minutes to boot. Containers start in seconds (sometimes milliseconds).
  • Much lighter on resources — You can run maybe 3-5 VMs on a laptop. You can run 20-30 containers easily.

For most application deployment scenarios, containers give you enough isolation without the overhead of running a full OS.

    The Three Core Concepts

    Docker has a lot of features, but you only need to understand three things to be productive: Dockerfiles, images, and containers.

    1. Dockerfile: The Recipe

    A Dockerfile is a text file that describes how to build your application's environment. Think of it as a recipe — a series of steps that take you from a base ingredient (an operating system with Java installed) to a finished dish (your application, ready to run).

    Here is a Dockerfile for a Spring Boot application:

    Dockerfile
    # Start from a base image with Java 21
    FROM eclipse-temurin:21-jre-alpine

    # Set the working directory inside the container
    WORKDIR /app

    # Copy the built JAR file into the container
    COPY target/myapp.jar app.jar

    # Tell Docker which port your app listens on
    EXPOSE 8080

    # Define what happens when the container starts
    ENTRYPOINT ["java", "-jar", "app.jar"]

    Five instructions. That is a complete Dockerfile for a production application. Each line is an instruction:

  • FROM picks the starting point (a minimal Linux with Java pre-installed)
  • WORKDIR sets where we are working inside the container
  • COPY brings our application into the container
  • EXPOSE documents the port (this is mostly informational)
  • ENTRYPOINT defines the command that runs when the container starts

    For a Node.js app, it would look like this:

    Dockerfile
    FROM node:20-alpine
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci --omit=dev
    COPY . .
    EXPOSE 3000
    CMD ["node", "server.js"]
    

    Notice we copy package.json first and install dependencies before copying the source code. This is a Docker best practice called layer caching — since dependencies change less often than source code, Docker can reuse the cached dependency layer and only rebuild what changed.
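You can watch layer caching happen by building twice. This is a sketch assuming the Node.js Dockerfile above, a local Docker install, and an image tag of your choosing:

```shell
# First build: every instruction runs
docker build -t myapp:dev .

# Second build with nothing changed: BuildKit reports CACHED for each step
docker build -t myapp:dev .

# Edit a source file (but not package*.json) and rebuild:
# the npm ci layer stays cached; only COPY . . and later steps rerun
docker build -t myapp:dev .
```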

    2. Image: The Snapshot

    When you run docker build, Docker reads your Dockerfile and executes each instruction to create an image. An image is a read-only snapshot of your application and its complete environment — the OS, the runtime, the libraries, your code, everything.

    Bash
    docker build -t myapp:1.0 .
    

    Images are immutable. Once built, they never change. This is actually a feature — you know that the image you tested in staging is exactly the same one running in production. No surprises.
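Immutability pairs well with explicit tags. A common pattern, sketched here with a hypothetical registry name, is to tag the image you tested and push that exact artifact so every environment pulls the same bytes:

```shell
# Give the tested image an explicit, versioned tag
docker tag myapp:1.0 registry.example.com/myapp:1.0

# Push it; staging and production both pull this exact image
docker push registry.example.com/myapp:1.0
```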

    Images are also layered. Each instruction in your Dockerfile creates a layer, and layers are cached. If you change your source code but not your dependencies, Docker only rebuilds the layers that changed. This makes subsequent builds much faster.

    3. Container: The Running Instance

    A container is a running instance of an image. You can think of the image as a class and the container as an object — the image defines what the application looks like, and the container is an actual running copy.

    Bash
    docker run -d -p 8080:8080 --name myapp myapp:1.0
    

    This starts your app in the background (-d), maps port 8080 on your machine to port 8080 in the container (-p), and gives it a name (--name).

    You can run multiple containers from the same image, each isolated from the others. This is useful for scaling — run three copies of your API behind a load balancer, all from the same image.
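As a sketch of that scaling idea, recent Docker Compose versions can start several replicas of one service from a single definition (the service and image names here are assumptions):

```yaml
services:
  backend:
    image: myapp:1.0
    deploy:
      replicas: 3   # three containers from the same image
```

You can do the same ad hoc with `docker compose up -d --scale backend=3`; either way, all replicas come from one image.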

    Docker Compose: Orchestrating Multiple Services

    Real applications rarely consist of a single service. A typical web app needs an application server, a database, maybe a cache, and a reverse proxy. Docker Compose lets you define all of these in a single YAML file and manage them as a unit.

    YAML
    services:
      backend:
        build: .
        ports:
          - "8080:8080"
        environment:
          - DATABASE_URL=postgresql://postgres:secret@postgres:5432/myapp
          - REDIS_URL=redis://redis:6379
        depends_on:
          postgres:
            condition: service_healthy
          redis:
            condition: service_started

      postgres:
        image: postgres:16-alpine
        environment:
          POSTGRES_DB: myapp
          POSTGRES_PASSWORD: secret
        volumes:
          - pgdata:/var/lib/postgresql/data
        healthcheck:
          test: pg_isready -U postgres
          interval: 10s
          timeout: 5s
          retries: 5

      redis:
        image: redis:7-alpine

      nginx:
        image: nginx:1.27-alpine
        ports:
          - "80:80"
        volumes:
          - ./nginx.conf:/etc/nginx/nginx.conf:ro
        depends_on:
          - backend

    volumes:
      pgdata:

    One command brings everything up:

    Bash
    docker compose up -d
    

    And one command tears it all down:

    Bash
    docker compose down
    

    This is incredibly powerful for development. New team member joining? They clone the repo, run docker compose up, and they have the entire application stack running in minutes. No more spending a full day setting up a development environment.
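Day to day you will mostly poke at individual services rather than the whole stack. A few commands worth knowing, assuming the service names from the compose file above:

```shell
# Follow the logs of one service
docker compose logs -f backend

# Rebuild and restart just the backend after a code change
docker compose up -d --build backend

# Check what is running and which ports are mapped
docker compose ps
```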

    Common Mistakes I Made (So You Do Not Have To)

    1. Using Huge Base Images

    Dockerfile
    # Bad: ubuntu is ~75 MB, then you install everything manually
    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y openjdk-21-jre

    # Good: purpose-built, minimal image
    FROM eclipse-temurin:21-jre-alpine

    Alpine-based images are typically 5-10x smaller. Smaller images mean faster pulls, faster deployments, and a smaller attack surface.

    2. Forgetting .dockerignore

    Without a .dockerignore file, Docker copies your entire project directory into the build context — including node_modules (hundreds of megabytes), .git (could be gigabytes), test fixtures, and IDE configuration files.

    # .dockerignore
    node_modules
    .git
    *.md
    .env
    target/
    .idea/
    

    3. Running as Root

    By default, containers run as root. This is a security risk — if an attacker escapes the container, they have root access to the host.

    Dockerfile
    # Create a non-root user
    RUN addgroup -S appgroup && adduser -S appuser -G appgroup
    USER appuser
    

    4. Not Leveraging Layer Caching

    Put instructions that change rarely before ones that change often:

    Dockerfile
    # Dependencies change rarely - cached layer
    COPY package*.json ./
    RUN npm ci

    # Source code changes often - rebuilt each time
    COPY . .
    RUN npm run build

    5. Storing Data in Containers

    Containers are ephemeral. When a container is removed, any files written inside it disappear with it. Use volumes for anything that needs to persist:

    YAML
    volumes:
      - pgdata:/var/lib/postgresql/data  # Database files
      - ./uploads:/app/uploads            # User uploads
    

    Useful Docker Commands You Will Use Daily

    Bash
    # See running containers
    docker ps

    # See logs
    docker logs myapp --tail 50 -f

    # Execute a command inside a running container
    docker exec -it myapp /bin/sh

    # Stop and remove a container
    docker stop myapp && docker rm myapp

    # Remove unused images to free disk space
    docker image prune -f

    # Remove everything unused (careful!)
    docker system prune -a

    When Docker Is Overkill

    Honestly? For a simple script, a personal side project in early development, or a homework assignment, Docker adds complexity you do not need yet. There is real cognitive overhead in learning Dockerfiles, compose files, and debugging container networking issues.

    Start using Docker when:

  • You are working with a team and need consistent environments
  • You are deploying to production and need reproducible builds
  • Your app has multiple services (backend + database + cache)
  • You are tired of "it works on my machine" conversations

The Bottom Line

Docker is not magic, and it is not as complicated as it seems at first. It is just a really good way to package and run software consistently across different environments.

Once you understand Dockerfiles, images, containers, and compose files, you have about 90 percent of what you need for day-to-day development work. The remaining 10 percent — multi-stage builds, networking modes, orchestration with Kubernetes — you can pick up as you need them.
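As a small taste of that remaining 10 percent, here is a hedged multi-stage build sketch for the Node.js example. It assumes an npm build script that outputs to dist/; build tools exist only in the first stage, so the final image ships just the runtime and the output:

```dockerfile
# Stage 1: build with the full toolchain
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: the image you actually ship
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/server.js"]
```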

The best way to learn Docker is to containerize a project you already have. Take a working app, write a Dockerfile for it, get it running in a container, then add a compose file for the database. You will be surprised how quickly the concepts click.


Oğuzhan Berke Özdil

I have been connected to computers since childhood. On this website, I share what I learn and experience while trying to build a strong foundation in software. I completed my BSc in Computer Science at AGH University of Krakow and I am currently pursuing an MSc in Computer Science with a focus on AI & Data Analysis at the same university.