A Beginner's Guide to Docker Compose

Recently we kicked off a beginner-friendly series about Docker. In part one, A Beginner's Guide to Docker, we covered the basics of Docker images and containers using the standalone Docker CLI.

In today's post, we're onto part two in our series: getting you up to speed with Docker Compose. Recall that in our first post, we used two Docker CLI commands to deploy our containerized services: docker build to build the image, followed by docker run to launch the container. These commands are great for one-off container deployments, but as soon as we want to deploy multiple containers, running them one at a time becomes cumbersome pretty quickly.

Enter Docker Compose. Docker Compose lets you define, build, and run multi-container applications with just a single configuration file and a single Docker command. It makes managing and maintaining complex containerized deployments much simpler. In fact, we deploy DoltLab (which runs seven services) with Docker Compose!

Let's dive into some examples you can follow to get you started. Before beginning, make sure you have the latest version of Docker installed. On macOS, Docker Desktop includes Compose. On Linux, install Docker Engine and the Compose V2 plugin (docker compose) — see the Compose install docs.
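
You can quickly confirm that both the Docker Engine and the Compose plugin are available by checking their versions from a terminal:

docker --version
docker compose version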

docker-compose.yaml

To start using Docker Compose, you declare your services in a docker-compose.yaml. This file specifies which container images to use, what build steps to perform, which ports to open, which volumes to mount, which Docker networks the services should run in, and which environment variables should be set. Once you have your docker-compose.yaml file defined, you simply run docker compose up to build and deploy all services! Docker Compose will automatically detect files named compose.yaml, compose.yml, docker-compose.yaml, or docker-compose.yml in the working directory.
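
If you prefer a different file name, you can point Compose at it explicitly with the -f flag (my-stack.yaml below is just a placeholder name):

docker compose -f my-stack.yaml up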

You can think of the Docker Compose configuration file as the “project file” for your containers or services. Once you've successfully defined your containerized project, you can build and deploy it reliably as often as you need.

Let's work through an example where we'll translate a standalone docker run command from our original blog post into a working docker-compose.yaml file. Then we'll expand our example to build a small two-service app (Flask + Redis) to show how compose services can talk to each other.

From Docker to Docker Compose

In part one, we ran an nginx application with a bind mount to serve an index.html file locally on port 80. Now, let’s express that same setup as a docker-compose.yaml file.

Start by creating a new folder and an index.html file:

➜  src git:(main) mkdir docker_compose_example
➜  src git:(main) cd docker_compose_example
➜  docker_compose_example git:(main) cat > index.html <<'HTML'
<!DOCTYPE html>
<html>
<head>
  <title>Hello World</title>
</head>
<body>
  <h1>Hello, World from Docker Compose + NGINX!</h1>
</body>
</html>
HTML

Then define the following docker-compose.yaml:

➜  docker_compose_example git:(main) cat > docker-compose.yaml <<'EOF'
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./index.html:/usr/share/nginx/html/index.html:ro
EOF

In our docker-compose.yaml above, we define the services block and name our example service "web". Then, we specify the Docker image to use in the image field: nginx:alpine, which is, of course, hosted on Docker Hub.

Just like with the standalone docker run command, Docker Compose will attempt to pull any defined image down from a remote container registry if it can't find the image locally.
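
If you'd rather fetch the images ahead of time without starting anything, you can also pull them explicitly:

docker compose pull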

In the ports field, we map local port 80 to the container's port 80 so that our service is available at http://localhost:80. And in the volumes section, we mount our index.html file to the path where nginx expects it inside the container, /usr/share/nginx/html/index.html. The appended :ro tells Docker to mount the file read-only.
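
As an aside, if port 80 is already in use on your machine, you could map a different host port instead. For example, this variation (host port 8080 is chosen arbitrarily) would serve the same page at http://localhost:8080:

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"   # host port 8080 -> container port 80
    volumes:
      - ./index.html:/usr/share/nginx/html/index.html:ro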

Now, we can bring up our web service:

➜  docker_compose_example git:(main) docker compose up
[+] Running 9/9
 ✔ web Pulled                                                                                                                                                                                                                                                  3.8s
   ✔ 9824c27679d3 Pull complete                                                                                                                                                                                                                                0.7s
   ✔ 6bc572a340ec Pull complete                                                                                                                                                                                                                                0.8s
   ✔ 403e3f251637 Pull complete                                                                                                                                                                                                                                0.8s
   ✔ 9adfbae99cb7 Pull complete                                                                                                                                                                                                                                0.9s
   ✔ 7a8a46741e18 Pull complete                                                                                                                                                                                                                                1.0s
   ✔ c9ebe2ff2d2c Pull complete                                                                                                                                                                                                                                1.1s
   ✔ a992fbc61ecc Pull complete                                                                                                                                                                                                                                1.4s
   ✔ cb1ff4086f82 Pull complete                                                                                                                                                                                                                                2.4s
[+] Running 2/2
 ✔ Network docker_compose_example_default  Created                                                                                                                                                                                                             0.0s
 ✔ Container docker_compose_example-web-1  Created                                                                                                                                                                                                             0.1s
Attaching to web-1
web-1  | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
web-1  | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
web-1  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
web-1  | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
web-1  | 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
web-1  | /docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
web-1  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
web-1  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
web-1  | /docker-entrypoint.sh: Configuration complete; ready for start up
web-1  | 2025/08/19 17:53:34 [notice] 1#1: using the "epoll" event method
web-1  | 2025/08/19 17:53:34 [notice] 1#1: nginx/1.29.1
web-1  | 2025/08/19 17:53:34 [notice] 1#1: built by gcc 14.2.0 (Alpine 14.2.0)
web-1  | 2025/08/19 17:53:34 [notice] 1#1: OS: Linux 6.15.9-arch1-1
web-1  | 2025/08/19 17:53:34 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1024:524288
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker processes
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker process 30
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker process 31
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker process 32
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker process 33
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker process 34
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker process 35
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker process 36
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker process 37
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker process 38
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker process 39
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker process 40
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker process 41
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker process 42
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker process 43
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker process 44
web-1  | 2025/08/19 17:53:34 [notice] 1#1: start worker process 45

With that single command, Docker Compose has deployed our service, and we can verify that our web page is being served at http://localhost:80:

➜  docker_compose_example git:(main) curl http://localhost:80
<!DOCTYPE html>
<html>
<head>
  <title>Hello World</title>
</head>
<body>
  <h1>Hello, World from Docker Compose + NGINX!</h1>
</body>
</html>

Success! We've just run our first Docker Compose deployment.
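
Since we ran docker compose up in the foreground, the terminal stays attached to the service. When you're done, press Ctrl+C to stop it; you can also run docker compose down to remove the container and network. (If you skip this, a later Compose run in the same directory may warn about "orphan containers", which is harmless.)

docker compose down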

Now let's make our example a bit less trivial by deploying a multi-service application with Docker Compose.

Composing services

For our expanded Docker Compose example, we're going to build a simple application that deploys two services:

  • api: Will be a Python Flask app that increments a counter in Redis.
  • redis: Will be the Redis key-value store.

Go ahead and delete index.html and the previous docker-compose.yaml file from the project directory, and create a new sub-directory called app. This is where we'll define our Python api application.

➜  docker_compose_example git:(main) rm docker-compose.yaml
➜  docker_compose_example git:(main) rm index.html
➜  docker_compose_example git:(main) mkdir app
➜  docker_compose_example git:(main) cd app

Now add the following requirements.txt and app.py files:

➜  app git:(main) cat > requirements.txt <<'REQ'
flask==3.0.3
redis==5.0.7
REQ

This file specifies our application's dependencies, which are flask and redis.

➜  app git:(main) cat > app.py <<'PY'
from flask import Flask
import os
import redis

redis_host = os.environ.get("REDIS_HOST", "redis")
redis_port = int(os.environ.get("REDIS_PORT", "6379"))
redis_client = redis.Redis(host=redis_host, port=redis_port, decode_responses=True)

app = Flask(__name__)

@app.route("/")
def index():
    count = redis_client.incr("hits")
    return f"Hello from Flask! This page has been viewed {count} times.\n"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
PY

In our api code, we grab the host and port from environment variables, and use those to instantiate a Redis client.

Then, we define an HTTP route handler at the homepage path "/", which increments the "hits" key in Redis every time the homepage is viewed, and displays the number of page views for the end user.

Finally, we serve our simple app on port 5000.

With these files in place, it's now time to create a Dockerfile to build our Python api image.

Just like we did in our first blog with standalone Docker, we can also use custom Dockerfiles in Docker Compose to create bespoke Docker images. Docker Compose will automatically perform the build step for us, equivalent to the docker build command, when we run docker compose up. We will place this Dockerfile at the root of our project, so before creating the file, be sure to cd into the parent directory.

➜  app git:(main) cd ..
➜  docker_compose_example git:(main) cat > Dockerfile <<'EOF'
# syntax=docker/dockerfile:1
FROM python:3.12-slim
WORKDIR /app
COPY app/requirements.txt /app/
RUN pip install --no-cache-dir -r requirements.txt
COPY app /app
EXPOSE 5000
CMD ["python", "app.py"]
EOF

Before creating our docker-compose.yaml file, let's break down what our Dockerfile is doing.

We start by declaring the base image of our custom image to be python:3.12-slim, which will be pulled at build time from Docker Hub.

Next, we create and cd into a directory called /app by using the WORKDIR keyword.

In this directory, we first copy our requirements.txt into /app so we can install our dependencies.

We do this in the subsequent RUN step which runs pip install to install flask and redis.

Then, we copy our local source code into the /app directory with the COPY app /app line, and ensure our container is exposing port 5000.

And finally, we use the CMD keyword to define what should be executed when we run our container. In this case, we want to run our api's Python file, app.py.
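
If you want to sanity-check this image on its own before wiring it into Compose, you can build it with the standalone docker build command from part one (compose-example-api is just an arbitrary tag; running the container by itself wouldn't be very useful yet, since it has no Redis to talk to):

docker build -t compose-example-api .
docker image ls compose-example-api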

Our final step is to create our multi-service docker-compose.yaml file. This file will also define the relationship each service has to the other.

➜  docker_compose_example git:(main) cat > docker-compose.yaml <<'EOF'
services:
  api:
    build: .
    ports:
      - "5000:5000"
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data
    command: ["redis-server", "--save", "60", "1", "--loglevel", "warning"]

volumes:
  redis-data:
EOF

In the docker-compose.yaml above, we define our named services in the services block, api and redis.

In the api block, we're using the build field to tell Docker Compose how to build our service image. Notice there is no image field this time: instead of pulling an existing image from Docker Hub, we want Docker Compose to build our custom image.

The . argument tells Docker Compose to use the current directory as the build context, and by default it will look for a file named Dockerfile in that directory.
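
If your Dockerfile lived somewhere else, or had a different name, you could use the longer form of the build field instead. Here's a sketch with the defaults spelled out:

services:
  api:
    build:
      context: .              # directory sent to the builder
      dockerfile: Dockerfile  # the default; change if your file is named differently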

Next, we set values for the environment variables our service expects using the environment block.

And lastly, we define a depends_on block that lists the redis service. This tells Docker Compose to start redis before api. Note that, by default, this only controls startup order; it does not wait for redis to actually be ready or healthy.
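
If you genuinely need api to wait until Redis is accepting connections, Compose supports the long form of depends_on combined with a healthcheck. A minimal sketch might look like this:

services:
  api:
    build: .
    depends_on:
      redis:
        condition: service_healthy
  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5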

Now let's look at the redis service.

It will use the redis:7-alpine image from Docker Hub, and mount a named volume at /data in the redis container to persist data.

Notice that the volume name redis-data is defined in the top-level volumes block. By specifying a named volume here, we're telling Docker to create and manage persistent storage that survives the shutdown of our application (docker compose down).
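
Once the stack is up (we'll deploy it in a moment), you can see this volume with the standard volume commands. Compose prefixes the name with the project name, which defaults to the directory name:

docker volume ls
docker volume inspect docker_compose_example_redis-data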

The command field contains the list of arguments passed to the redis:7-alpine container when it starts. In this example, the "--save 60 1" arguments tell Redis when to persist a snapshot of its data: every 60 seconds, as long as at least one write occurred in that window. We also set the log level to warning.

And now we're ready to deploy our multi-service application! This time, we'll run docker compose up with the --build flag, which forces Compose to (re)build our custom image, and the -d flag, which runs the services in the background (detached).

➜  docker_compose_example git:(main) docker compose up --build -d

[+] Running 9/9
 ✔ redis Pulled                                                                                                                                                                                                                                                3.0s
   ✔ 0368fd46e3c6 Already exists                                                                                                                                                                                                                               0.0s
   ✔ 4c55286bbede Pull complete                                                                                                                                                                                                                                0.5s
   ✔ 5e28347af205 Pull complete                                                                                                                                                                                                                                0.6s
   ✔ 311eca34042e Pull complete                                                                                                                                                                                                                                0.8s
   ✔ e6fe6f07e192 Pull complete                                                                                                                                                                                                                                1.5s
   ✔ a2cadbfeca72 Pull complete                                                                                                                                                                                                                                1.5s
   ✔ 4f4fb700ef54 Pull complete                                                                                                                                                                                                                                1.5s
   ✔ a976ed7e7808 Pull complete                                                                                                                                                                                                                                1.6s

...

#9 DONE 2.2s

#10 [5/5] COPY app /app
#10 DONE 0.1s

#11 exporting to image
#11 exporting layers
#11 exporting layers 0.2s done
#11 writing image sha256:e55828409c0cf3291e1ae009ac1cc9543a036c8400957eb6b83c9cb502fa611a done
#11 naming to docker.io/library/docker_compose_example-api done
#11 DONE 0.2s

#12 resolving provenance for metadata file
#12 DONE 0.0s
WARN[0009] Found orphan containers ([docker_compose_example-web-1]) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
[+] Running 4/4
 ✔ docker_compose_example-api                  Built                                                                                                                                                                                                           0.0s
 ✔ Volume "docker_compose_example_redis-data"  Created                                                                                                                                                                                                         0.0s
 ✔ Container docker_compose_example-redis-1    Started                                                                                                                                                                                                         0.2s
 ✔ Container docker_compose_example-api-1      Started

Now our services should be running. We can verify this with the docker ps command, which lists the running containers:

➜  docker_compose_example git:(main) docker ps
CONTAINER ID   IMAGE                        COMMAND                  CREATED              STATUS              PORTS                                         NAMES
e8197962a267   docker_compose_example-api   "python app.py"          About a minute ago   Up About a minute   0.0.0.0:5000->5000/tcp, [::]:5000->5000/tcp   docker_compose_example-api-1
2092a4cf35d5   redis:7-alpine
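
Alternatively, docker compose ps shows only the containers belonging to this Compose project:

docker compose ps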

Great! Now we can test our api service by hitting it a few times and incrementing the counter:

➜  docker_compose_example git:(main) curl http://localhost:5000
Hello from Flask! This page has been viewed 1 times.
➜  docker_compose_example git:(main) curl http://localhost:5000
Hello from Flask! This page has been viewed 2 times.
➜  docker_compose_example git:(main) curl http://localhost:5000
Hello from Flask! This page has been viewed 3 times.
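
If you want to double-check the counter from the Redis side, docker compose exec runs a command inside a running service container:

docker compose exec redis redis-cli get hits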

It works! For the deeply curious out there, if you're wondering how requests from the api service could reach the generic hostname redis: under the hood, Docker Compose creates a private network for the application, on which each service can resolve the others by their service names, redis in our case. Pretty cool, right?
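
You can see this name resolution in action by resolving redis from inside the api container (the printed address will be a private IP on the Compose network):

docker compose exec api python -c "import socket; print(socket.gethostbyname('redis'))"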

Once we're done running our application, we can bring it down with the docker compose down command.

➜  docker_compose_example git:(main) docker compose down
[+] Running 3/3
 ✔ Container docker_compose_example-api-1    Removed                                                                                                                                                                                                          10.2s
 ✔ Container docker_compose_example-redis-1  Removed                                                                                                                                                                                                           0.1s
 ✔ Network docker_compose_example_default    Removed

If we ever wanted to deploy this application again, we simply need to run docker compose up and it will all be back online, perfectly configured! And because docker compose down leaves the named redis-data volume in place, Redis's persisted data will still be there.
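
If you do want a completely clean slate, counter included, ask Compose to remove the named volume as well:

docker compose down --volumes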

Conclusion

Hopefully this foray into the world of Docker Compose encourages you to try deploying your own containerized applications. Of course, today's post is only a small example of what's possible, and there's much more for you to uncover on your own.

In our final post of the series we'll continue to build on our foundational Docker knowledge and get familiar with Docker Swarm, a powerful tool that helps you coordinate containerized services across multiple hosts. I hope you're looking forward to it.

See you then!
