Setting Up a Multi-Container Development Environment with Docker
In this post we will set up a development environment for an application that will be using multiple Docker containers.
The application is a Fibonacci number calculator, which consists of the following services:
- A React frontend (client).
- A Node.js backend (server).
- A Redis worker.
- An Nginx router.
- A Postgres database.
How the application works: a user enters an index in the browser, the backend saves the index in the database and in Redis, and this triggers a Redis insert event. The event is handled by the Redis worker, which calculates the Fibonacci value for that index and finally inserts the index and value into Redis as a key-value pair.
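The Fibonacci calculation itself is the easy part; the worker most likely uses a naive recursive function along these lines (a sketch, not the exact code from the sample repo):

```javascript
// Naive recursive Fibonacci. This demo app conventionally treats
// fib(0) and fib(1) as 1; the deliberately slow recursion is what
// makes offloading the work to a separate worker container worthwhile.
function fib(index) {
  if (index < 2) return 1;
  return fib(index - 1) + fib(index - 2);
}
```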
A user will then reload the application and see the following:
- The indexes posted so far (retrieved from the database).
- The indexes posted and their corresponding fibonacci values (retrieved from redis).
Prerequisites
This post assumes that you already have some basic knowledge of Docker and Git; however, I will explain all the Docker commands used in this post. The official Docker and Git documentation are good resources to get you up and running.
Project Setup
- Install `docker` using the official installation guide.
- Install `docker-compose` using the official installation guide.
- Confirm you have `docker` and `docker-compose` installed on your machine by checking their versions in your terminal.
```shell
# confirm docker is installed by checking the version
docker --version
Docker version 19.03.8, build afacb8b

# confirm docker compose is installed by checking the version
docker-compose --version
docker-compose version 1.25.5, build 8a1c60f6
```
- `git clone` the sample project (linked at the end of this post) and `cd` into the `fib-calculator` directory.
Creating a Docker file for the client container
- `cd` into the `client` directory and create a Docker file named `Dockerfile.dev`. We will be naming our Docker files with a `.dev` extension so that we can differentiate them from the production Dockerfiles in future.
- Add the following lines of code to the file:
```dockerfile
FROM node:alpine
WORKDIR '/app'
COPY ./package.json ./
RUN yarn install
COPY . .
CMD ["yarn", "start"]
```
The `FROM node:alpine` instruction pulls the Node.js base image from Docker Hub (the Docker registry). We are using the `alpine` variant of Node.js, which is lightweight and well suited for development purposes.

The `WORKDIR '/app'` instruction sets the working directory inside our container to `/app`, which is where all our project files and folders will live.

The `COPY ./package.json ./` instruction copies the `package.json` file from our machine to the current working directory inside the container.

The `RUN yarn install` instruction installs all the dependencies specified in the `package.json` file.

The `COPY . .` instruction copies all the files and folders inside the `client` directory into our current working directory.

Finally, `CMD ["yarn", "start"]` defines the startup command for the container. This command is also defined inside the `package.json` file.
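One caveat with `COPY . .`: if you have ever run `yarn install` locally, it will also copy your local `node_modules` directory into the image, shadowing the one built by `RUN yarn install`. A small `.dockerignore` file next to the Dockerfile (an optional addition, not part of the steps above) avoids this:

```
node_modules
```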
Creating a Docker file for the server container
This Docker file will be very similar to the `client` container's; the only differences will be the Node.js base image version and the container startup command.
- `cd` into the `server` directory and create a Docker file named `Dockerfile.dev`.
- Add the following lines of code to the file:
```dockerfile
FROM node:13
WORKDIR '/app'
COPY ./package.json ./
RUN yarn install
COPY . .
CMD ["yarn", "dev"]
```
We are using this particular version of Node.js (version 13) because an NPM package inside `package.json` depends on it.
Creating a Docker file for the worker container
This Docker file will be very similar to the `server` container's, except for the Node.js base image version, which in this case will be the `alpine` variant.
- `cd` into the `worker` directory and create a Docker file named `Dockerfile.dev`.
- Add the following lines of code to the file:
```dockerfile
FROM node:alpine
WORKDIR '/app'
COPY ./package.json ./
RUN yarn install
COPY . .
CMD ["yarn", "dev"]
```
Creating a Docker file for the nginx container
- Make a new directory named `nginx` on the same level as the `client` directory.
- `cd` into the `nginx` directory and create a Docker file named `Dockerfile.dev`.
- Add the following lines of code to the file:
```dockerfile
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf
```
The `FROM nginx` instruction gets the `nginx` base image from Docker Hub.

The `COPY ./default.conf /etc/nginx/conf.d/default.conf` instruction copies our `nginx` config file from our machine over the default `nginx` config file inside the container.
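The sample project ships with its own `default.conf`; for orientation, a routing configuration for this kind of setup typically looks roughly like the sketch below. The upstream names match the `client` and `api` service names in the compose file, while the internal ports are assumptions about the sample project's defaults:

```nginx
# Hypothetical sketch -- use the default.conf from the sample repository.
upstream client {
  server client:3000;   # the React dev server
}

upstream api {
  server api:5000;      # the Node.js backend
}

server {
  listen 80;

  location / {
    proxy_pass http://client;
  }

  location /api {
    rewrite /api/(.*) /$1 break;  # strip the /api prefix
    proxy_pass http://api;
  }
}
```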
Creating the Docker Compose file
Here we configure our containers using docker compose. The Compose tool is used to define and run multi-container Docker applications. The application’s services are configured in a YAML file. Then, with a single command, you create and start all the services from your configuration.
- `cd` into the parent directory of the repository and create a file named `docker-compose.yml`.
- Add the following lines of code to the file:
```yaml
version: "3"
services:
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - "3000:80"
  postgres:
    image: "postgres:10.5"
  redis:
    image: "redis:latest"
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    volumes:
      - /app/node_modules
      - ./server:/app
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - PGUSER=postgres
      - PGHOST=postgres
      - PGDATABASE=postgres
      - PGPASSWORD=postgres_password
      - PGPORT=5432
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
  worker:
    build:
      dockerfile: Dockerfile.dev
      context: ./worker
    volumes:
      - /app/node_modules
      - ./worker:/app
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
```
The `version: "3"` line specifies the Docker Compose file format version to use; we use version `3`, the newest version at the time of writing.

The `services` section contains the configuration for each container. Below is a description of each service's configuration options:
Nginx service
The `restart: always` option configures the `nginx` container to always restart when it exits.

The `build` section defines the configurations that are applied at build time:
- `dockerfile` specifies the container's Docker file name.
- `context` specifies the directory containing the Docker file.

The `ports` field defines the port mapping (`HOST:CONTAINER`). Here we are mapping port `3000` on our machine to port `80` in the container.
Postgres service
For this service, we only specify the image to use with `image: "postgres:10.5"`, which sets the container base image to `postgres` version `10.5`.
Redis service
Similar to the `postgres` service, we use `image: "redis:latest"` to set the base image for this container to the latest version of `redis`.
API service
Similar to the `nginx` service's build configuration, the `dockerfile` and `context` options specify the Docker file to use and the directory containing it, respectively.

The `volumes` option is used to map a host path to a container path:
- The `/app/node_modules` entry prevents this container path from being mapped to the host.
- The `./server:/app` entry maps the files and folders in the host's `server` directory to the `/app` directory inside the container. This enables the app to be restarted whenever a file is changed on the host.
The `environment` option is used to specify environment variables; in our case we set the `redis` and `postgres` connection variables used when running the `server`.
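On the server side these values arrive through `process.env`. A minimal sketch of how the backend might collect them (the actual sample project may organize this differently, e.g. in a separate `keys.js` module, which is a hypothetical name here):

```javascript
// Read the connection settings that docker-compose injects into the
// container. The fallbacks are only a convenience for running the
// server outside of Compose.
const config = {
  redisHost: process.env.REDIS_HOST || "localhost",
  redisPort: Number(process.env.REDIS_PORT || 6379),
  pgUser: process.env.PGUSER || "postgres",
  pgHost: process.env.PGHOST || "localhost",
  pgDatabase: process.env.PGDATABASE || "postgres",
  pgPassword: process.env.PGPASSWORD || "",
  pgPort: Number(process.env.PGPORT || 5432),
};
```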
For the `client` and `worker` services, refer to the `api` service configuration above.
Starting the containers
Run `docker-compose up --build` to build and start the containers. The first run takes a while because the base images are downloaded from Docker Hub.

The `--build` flag rebuilds the images; use it whenever you change a Docker file. Otherwise, run just `docker-compose up` to start the containers.
Open your browser on port `3000` and check out the application.
Stopping the containers
Run `docker-compose stop` to stop the containers.
Finally, the full code for this multi-container application can be found at https://github.com/JohnMwashuma/docker-multicontainer. Also check out the Compose file version 3 reference for the full list of options.

I hope this helps you simplify and accelerate your development workflow with Docker going forward.