This guide is designed as a learning path. By the end, a beginner should understand what Docker is, why it exists, how to use it locally, and how it fits into real software delivery.
Understanding why Docker exists
Before Docker became common, teams regularly faced environment mismatch. The same application worked on one laptop and failed on another because runtime versions, system libraries, and local setup steps were different. Docker, released in 2013, was widely adopted because it solved this operational inconsistency by packaging applications together with the dependencies they need. The result is predictable execution across development, testing, CI, and production environments.
Installing Docker and validating the setup
On macOS and Windows, Docker Desktop is the standard installation path. On Linux, Docker Engine is typically installed from the distribution repository or official Docker packages. After installation, the runtime can be validated with a single command:
docker run hello-world
If this command prints the success message, the Docker daemon and CLI are working correctly.
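If hello-world fails, it helps to know whether the CLI or the daemon is the problem. The small POSIX-shell sketch below distinguishes the two cases (the status strings are just illustrative labels, not official Docker output):

```shell
#!/bin/sh
# Classify the local Docker setup: CLI missing, daemon down, or fully working.
if ! command -v docker >/dev/null 2>&1; then
  DOCKER_STATUS="cli-missing"          # the docker binary is not on PATH
elif docker info >/dev/null 2>&1; then
  DOCKER_STATUS="daemon-reachable"     # CLI and daemon both respond
else
  DOCKER_STATUS="daemon-unreachable"   # CLI exists but the daemon is not running
fi
echo "$DOCKER_STATUS"
```

A "daemon-unreachable" result usually means Docker Desktop is not started (macOS/Windows) or the service is stopped (Linux), not that the installation is broken.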
How images, containers, and registries relate
Docker works with a simple model. An image is the packaged blueprint, and a container is a running instance created from that blueprint. Images are usually stored in registries. Docker Hub is the most common public registry, while many companies use private registries for internal artifacts.
When an image is run for the first time, Docker pulls it from a registry, stores it in the local image cache, and starts the container on the local machine. This means a database such as PostgreSQL runs locally inside a container, not inside a remote Docker data center. The registry only stores image layers; runtime execution still happens on the local host or server where the container is started.
Running software without local installation conflicts
A key Docker advantage is runtime isolation. If a project uses PostgreSQL in a container, a local PostgreSQL installation is usually unnecessary: the project connects to the containerized database through its published port. The same idea applies to many other tools and services.
Docker also makes version management practical. Different versions of the same service can run side by side on one machine, as long as their port mappings do not collide. For example, postgres:14 and postgres:16 can run simultaneously in separate containers.
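As a sketch of the side-by-side idea, the script below maps both versions' container port 5432 to different host ports. The names, host ports, and password value are illustrative choices, and the docker calls only fire when RUN_DOCKER=1 is set, so the script is safe to read through first:

```shell
#!/bin/sh
# postgres:14 and postgres:16 side by side: same container port (5432),
# different host ports so the -p mappings do not collide.
PG14_PORT=5414
PG16_PORT=5416
if [ "${RUN_DOCKER:-0}" = "1" ]; then
  docker run -d --name pg14 -e POSTGRES_PASSWORD=example -p "$PG14_PORT":5432 postgres:14
  docker run -d --name pg16 -e POSTGRES_PASSWORD=example -p "$PG16_PORT":5432 postgres:16
fi
echo "postgres:14 -> localhost:$PG14_PORT"
echo "postgres:16 -> localhost:$PG16_PORT"
```

Each client then picks a version simply by choosing which host port to connect to.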
Basic container operations every beginner should know
The command below starts Nginx in detached mode and binds host port 9000 to container port 80:
docker run -d -p 9000:80 --name web nginx:1.23
In this mapping, requests to http://localhost:9000 are forwarded to port 80 inside the container. Detached mode keeps the process running in the background and releases the terminal.
A practical operational flow is:
docker pull nginx:1.23
docker ps
docker logs -f web
docker stop web
docker start web
docker rm -f web
docker images
These commands cover image retrieval, runtime inspection, log streaming, lifecycle control, and cleanup.
Why multiple containers can come from the same image
It is common to run multiple containers from one image in production-like environments. This supports horizontal scaling, traffic distribution, blue-green deployment patterns, isolated workload execution, and behavior testing under load. One image can therefore represent a standard deployable unit while multiple containers provide runtime capacity and resilience.
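The one-image, many-containers idea can be sketched as a loop: each container gets its own name and host port, but all run the same nginx:1.23 image. The names and ports are arbitrary examples, and the docker call only runs when RUN_DOCKER=1 is set:

```shell
#!/bin/sh
# Start three containers from one image, each on its own host port.
COUNT=0
for i in 1 2 3; do
  port=$((8080 + i))
  if [ "${RUN_DOCKER:-0}" = "1" ]; then
    docker run -d --name "web-$i" -p "$port":80 nginx:1.23
  fi
  COUNT=$((COUNT + 1))
  echo "web-$i -> localhost:$port"
done
echo "planned containers: $COUNT"
```

In production this fan-out is typically handled by an orchestrator or a load balancer rather than a hand-written loop, but the mechanism is the same: one image, many runtime instances.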
Moving from one service to multi-service applications
Most real projects include more than one service. A typical stack includes frontend, backend API, database, and sometimes cache. Docker Compose provides a declarative way to define these services and run them together on a shared network. In that network, services communicate by service name rather than localhost.
Compose therefore makes multi-service application topology easier to manage, even for purely local development.
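As an illustration, a minimal docker-compose.yml for an API plus database might look like the sketch below. The service names, image tag, and credentials are hypothetical; adjust them to the actual project:

```yaml
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      # Inside the Compose network the database is reached as "db",
      # not as localhost.
      DATABASE_URL: postgres://app:example@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: example
      POSTGRES_DB: app
```

Running docker compose up -d in the directory containing this file starts both services on a shared network where the hostname db resolves to the database container.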
Building a first custom image
A practical first exercise is a minimal Express application that returns Hello World.
server.js:
const express = require('express');
const app = express();
app.get('/', (_req, res) => res.send('Hello World'));
app.listen(3000, () => console.log('Running on 3000'));
package.json:
{
  "name": "docker-express-hello",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.19.2"
  }
}
Dockerfile:
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
The line FROM node:20-alpine defines the base image. A base image provides the operating-system and runtime foundation for everything added in later layers.
Build and run:
docker build -t express-hello:1.0 .
docker run -d -p 3000:3000 --name express-hello express-hello:1.0
At this stage, the full image lifecycle is visible: write app, define Dockerfile, build image, run container.
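The lifecycle can be closed with a verification step. The sketch below assumes the Dockerfile above sits in the current directory, and its docker and curl calls only run when RUN_DOCKER=1 is set:

```shell
#!/bin/sh
# Build the image, start a container, request the root route, then clean up.
IMAGE=express-hello:1.0
if [ "${RUN_DOCKER:-0}" = "1" ] && [ -f Dockerfile ]; then
  docker build -t "$IMAGE" .
  docker run -d -p 3000:3000 --name express-hello "$IMAGE"
  sleep 2
  curl -s http://localhost:3000   # prints Hello World once the app is up
  docker rm -f express-hello
fi
echo "image tag: $IMAGE"
```

Seeing Hello World come back over HTTP confirms the build, the port mapping, and the CMD instruction all work together.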
Public and private registries in practice
Docker Hub is suitable for public distribution and community images. Organizations often use private registries for security and access control. Common examples include AWS ECR, Google Artifact Registry, Azure Container Registry, and self-hosted Harbor.
Official images such as PostgreSQL can be used directly, but custom variants can also be built when additional configuration or hardening is required.
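A custom variant can be as small as a Dockerfile that layers configuration onto the official image. The sketch below assumes a local init.sql file exists in the build context; the official postgres image runs scripts placed in /docker-entrypoint-initdb.d the first time the data directory is created:

```dockerfile
# Start from the official image and add first-run initialization.
FROM postgres:16
# Scripts in this directory run once, when the database is first initialized.
COPY init.sql /docker-entrypoint-initdb.d/
```

Building and pushing this image to a private registry gives every team member the same pre-seeded database.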
Tooling options and daily workflow
Docker Desktop includes a graphical interface for viewing images, starting and stopping containers, and checking logs. The CLI remains the preferred option for reproducible scripts, CI pipelines, and infrastructure automation. In practice, many teams use both: GUI for quick inspection and CLI for repeatable operations.
Docker across the software development lifecycle
Docker is not only a local development tool. It contributes to the full lifecycle by reducing setup drift in development, standardizing runtime in CI testing, producing immutable deployable artifacts, supporting reliable deployment patterns, and simplifying operational rollback and scaling strategies.
A beginner who reaches this point should now understand Docker concepts, container lifecycle operations, image creation, service composition, and the role Docker plays in modern software delivery.
Visual flow (Mermaid)
flowchart LR
A[Developer writes Dockerfile] --> B[Build image]
B --> C[Push image to Docker Hub or private registry]
C --> D[Pull image on local machine or server]
D --> E[Run one or many containers]
E --> F[Expose app with port binding]
G[Docker Compose] --> E
H[Database service container] --> E
This diagram represents a common image lifecycle from development to deployment.
Compose networking example (Mermaid)
flowchart LR
U[Browser User] --> F[Frontend Container]
F --> B[Backend API Container]
B --> D[PostgreSQL Container]
C[Docker Compose Network] --- F
C --- B
C --- D
In Docker Compose, services communicate by service name over the internal network.
Fast command cheat sheet
docker pull nginx:1.23
docker run -d -p 9000:80 --name web nginx:1.23
docker ps
docker logs -f web
docker stop web
docker start web
docker rm -f web
docker images