Understanding Docker-Compose

Kirtfieldk
6 min read · Jan 27, 2021

If there is ever a need to start up multiple containers to support an application, docker-compose is in most cases the best tool for the job. Docker-compose allows one or more containers to be built and run at the same time, which greatly reduces the time spent changing into different directories and building images manually. In reality, most apps require persistent data, so a database will be at least a second needed Docker container. In the Docker ecosystem described here, our application will consist of a React client, a Node server, and an Nginx load balancer.

It may appear pointless to use docker-compose on a single container. Building a docker-compose.yaml file on top of the Dockerfile seems redundant, but a docker-compose.yaml reduces the number of command-line arguments needed to run a container. For example,

docker run -p 3000:3000 <image_name>

connects our local port to the port inside the container. There is no way to connect a local port to the container's port inside a Dockerfile; we can only EXPOSE 3000 from the container. Docker-compose, however, removes the -p flag entirely and sets up the port mapping with the ports key.

ports:
- '5000:5000'

In addition, docker-compose sets up volumes without the need for a command-line argument. To create a volume between the current directory and the working directory in the container, we would otherwise need the command below.

docker run --volume ${PWD}:/<path>/ <image_name>

Although it is true that the Dockerfile does have a VOLUME instruction, that instruction cannot bind the container's WORKDIR to the PWD or any other host directory.
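In a compose file, the same bind mount is declared under the service's volumes key. A minimal sketch (the service name and container path here are illustrative, not from the article):

```yaml
services:
  app:
    volumes:
      # bind the current host directory to the container's working directory
      - ./:/usr/src/app
```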

We can see how to leverage docker-compose to eliminate the many command line arguments needed to run a container. It allows us to set the values and parameters needed to launch a container with no risk of forgetting configuration.

In our root directory there are three subdirectories, /client, /server, and /nginx, which configure and construct the three containers. Each directory contains its own Dockerfile that successfully builds its image and runs its container. These Dockerfiles have already been dissected and explained in my previous articles.

Like the Kubernetes ingress-nginx controller, the local nginx image will route traffic between the client and server containers, sending requests whose URL begins with /api/ to the server. The Dockerfile for nginx is standard.

dockerfile
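The embedded gist is not reproduced here; a minimal sketch consistent with the steps described below might look like this (file paths follow the stock nginx image's defaults):

```dockerfile
# Pull the base image from Docker Hub and expose port 80.
FROM nginx:1.19
EXPOSE 80

# Delete the standard welcome page.
RUN rm /usr/share/nginx/html/index.html

# Overwrite the default configuration with our local one.
COPY default.conf /etc/nginx/conf.d/default.conf

# Start nginx in the foreground.
CMD ["nginx", "-g", "daemon off;"]
```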

We begin by pulling the nginx:1.19 image from Docker Hub and exposing our application on port 80. After that, we simply delete the standard welcome page, overwrite the default configuration with our local one, and start nginx. In default.conf, we first define the upstream networks backend and client and establish the ports we should direct traffic to. We then configure the server to listen on port 80 and route all traffic that starts with /api to http://backend. Before redirecting the traffic, we rewrite the incoming URL to strip /api; for example, /api/hello/ is rewritten to /hello/, where $1 captures the string after /api/. Finally, all other incoming traffic routes to the client.

default.conf
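The original config gist is not shown here; a default.conf matching the description above might look like the following (the upstream ports are assumptions, not taken from the article):

```nginx
# Name the two upstreams after the compose service names.
upstream client {
    server client:3000;
}

upstream backend {
    server backend:5000;
}

server {
    listen 80;

    location /api {
        # Strip the /api prefix before proxying: /api/hello/ -> /hello/
        rewrite /api/(.*) /$1 break;
        proxy_pass http://backend;
    }

    # All other traffic goes to the client.
    location / {
        proxy_pass http://client;
    }
}
```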

Make sure each container runs without error on its own. This ensures that docker-compose will not fail when building and starting up.

Now that we have three working containers, the client, server, and nginx load balancer, it is time to compose them all together. Below is the example docker-compose.yml file.

docker-compose.yml
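The original gist is not reproduced here; a sketch consistent with the description that follows might look like this (the version number, host ports, environment variable, and container paths are assumptions):

```yaml
version: '3.8'
services:
  nginx:
    # Fail if the client or backend fails, and always restart,
    # since this service routes outside traffic.
    depends_on:
      - client
      - backend
    restart: always
    build: ./nginx
    ports:
      - '8080:80'   # bind nginx to an available local port

  backend:
    build: ./server
    environment:
      # Illustrative variable; the actual name and value are not shown
      # in the article.
      - MONGO_URL=mongodb://db:27017/app
    # ports are intentionally commented out so the host cannot
    # reach the REST API directly:
    # ports:
    #   - '5000:5000'
    volumes:
      # /app stands in for the image's WORKDIR
      - ./server:/app

  client:
    build: ./client
```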

For all compose files, we must define the version; use the latest version available. After that standard step, we must define our services. A service is just a Docker image that needs to be built out. This includes images built on our local machine and images published on Docker Hub. In standard YAML fashion, all of our services are clearly delimited by indentation, and each service has a name. I decided to name my services backend, client, and nginx.

Looking at the nginx service, note that it depends on the client and backend containers: if either of those containers fails, our nginx container will fail too. Next, whenever the nginx container fails, it will restart immediately; this is needed because this service routes outside traffic to the appropriate service. After ensuring that the service will always restart, the build key defines where to find the Dockerfile to build, in this case the ./nginx directory. To finish up this service, we must port-bind nginx to an available port on our local machine.

The backend service also tells compose where to find its Dockerfile. In addition, this service defines an environment variable that is only accessible within the server image. Notice how ports is commented out; this is because we should not let our local machine access the container directly via the REST API. Finally, we establish a volume between the local ./server directory and the WORKDIR in the container for automatic code updates.

The client service is pretty simple; we just build it out.

To build this compose file, use the command

docker-compose build

To start up the containers use the command

docker-compose up
Test

As we can see, the frontend was able to make a connection with the server to fetch the data from the MongoDB database!

Tips

To start up a single service, use the commands below. Whenever changes are made, we must run build again, because up alone will rerun the old version of the containers.

docker-compose build <service name>
docker-compose up <service name>

This project used an Nginx load balancer, but our services are also able to reference each other in code just by using their service names.

axios.get('backend/v1/<route>')

This async call will successfully fetch data from our backend service. Docker-compose automatically creates a network between all of the containers, making it easy to reference other services in code. This is how we would do the same thing with plain docker commands.

docker network create <network_name>
docker run --network <network_name> <image_name>

In a compose file, we can easily make the server service known to the client by placing its name in an environment variable.

services:
  client:
    environment:
      - HOST_URL=backend
    ...
  backend:
    ...
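Assuming the client reads that variable at run time, a minimal sketch of how it might build the request URL (the fallback value and the /v1/users route are illustrative, not from the article):

```javascript
// Build the backend URL from the HOST_URL variable set in docker-compose.
// 'backend' is the compose service name used as a fallback; '/v1/users'
// is a placeholder route.
const host = process.env.HOST_URL || 'backend';
const url = `http://${host}/v1/users`;
console.log(url);
```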

In the core example, the client's async calls looked like

axios.get('api/v1/<path>')

This call passes through the load balancer, which routes it to the server container, instead of hitting the server container directly.
