Running compose is easy, but sometimes specific configs are hard
I have found that while running several services together using docker-compose is typically not difficult, when problems do come up they usually involve some of the configuration specifics. I am going to lay out some things I have learned from messing up plenty of configs in this regard.
Docker compose commands -
Docker-compose up will build images and bring up containers by default, according to your compose file configuration. It is also possible to just build, or to force a rebuild when bringing containers up.
Docker-compose down brings all the services from a compose file down, which is convenient when you have a number of services to set up and tear down. There are also commands for getting the logs of services run from a compose file.
# Bring up containers based on compose file
docker-compose -f excompose.yml up -d
# Stop them:
docker-compose -f excompose.yml down
# Get logs from a service in the compose file
docker-compose -f excompose.yml logs servicename            # the whole log
docker-compose -f excompose.yml logs --tail=500 servicename # just the tail
# build images based on the compose file, but don't run
docker-compose -f docker-compose.yml build
# Run after forcing a re-build
docker-compose -f docker-compose.yml up -d --build
Setting up service ports in compose
The docker cli takes this as the -p option, but compose puts it under the ports: key for a service. The format in docker compose is (port on host):(port on container)
ports:
- '81:80'
- '3001:3000'
You would then be able to access these services from the host at 127.0.0.1:81 or 127.0.0.1:3001
Services are reached using the service name
The network routing docker does internally makes it easy for services to reach one another; you just need the name given to the service.
services:
mongodb:
# configuration
redis:
# configuration
So in this case, mongodb and redis are the service names, and they also serve as the hostnames in the url.
urldb = 'mongodb:27018'
urlredis = 'redis:3000'
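As a sketch of how this plays out in a compose file (the service names and ports here follow the examples above, and the app service and environment variable names are hypothetical), an app can be handed these service-name URLs through its environment:

```yaml
services:
  app:
    image: node:12.10-alpine
    environment:
      # hostnames resolve to the mongodb and redis services on the compose network
      - MONGO_URL=mongodb://mongodb:27018/appdb
      - REDIS_URL=redis://redis:3000
  mongodb:
    image: mongo:4.2-bionic
  redis:
    image: redis:5-alpine
```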
Figuring out what port to use from other services
If services are communicating over the internal docker network, use the container ports for the service, not the host-mapped ports. So if you had redis mapped 3001:3000 and mongodb 27019:27018, other services would contact them directly at 3000 and 27018.
Front end code like web pages should use the host port -
If I am making requests to a backend from a web page, the requests should use the host-mapped port so that they get directed correctly. So if I had an nginx server with the mapping 81:80, an axios request to it should use port 81, because docker is going to direct those requests to nginx at :80.
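To make the two cases concrete, here is a sketch (the service name and image tag are assumptions; the ports match the examples above). The browser hits the host port; other containers on the compose network hit the container port:

```yaml
services:
  servenginx:
    image: nginx:1.17-alpine
    ports:
      - '81:80'
# From the host or browser code:           http://127.0.0.1:81
# From another service on the same network: http://servenginx:80
```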
Setting up custom networks
Docker compose sets up a default bridge network, but you can set up your own named networks.
services:
servedev:
container_name: servedev
image: node:12.10-alpine
networks:
- back
# The top-level networks block defines networks any service can join.
networks:
back:
driver: bridge
Getting isolation of networks
This arrangement would prevent the back network from being reached from the outside. The overlay driver seems to require stack/swarm mode, however, so it is not something that can be put to use at the docker-compose level.
networks:
front:
driver: bridge
back:
driver: overlay
internal: true
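One common arrangement (a sketch, with hypothetical service names, sticking to the bridge driver since overlay needs swarm) is a proxy attached to both networks, so only the proxy is reachable from outside while the backing services stay on the isolated network:

```yaml
services:
  proxy:
    image: nginx:1.17-alpine
    ports:
      - '80:80'
    networks:
      - front
      - back
  api:
    image: node:12.10-alpine
    networks:
      - back   # only reachable through the proxy
networks:
  front:
    driver: bridge
  back:
    driver: bridge
    internal: true   # no outside access
```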
Setting up bind volumes
Same as when using plain docker: share a location on the host with the container. My example links code into a development server container so that any code changes are reflected by the dev server, which can be used for rapid development.
servedev:
container_name: servedev
build:
context: path/app
dockerfile: Dockerfile-servedev
image: example/servedev:v2
volumes:
- type: bind
source: ./path/app
target: /app/dest
networks:
- app
depends_on:
- express
Using a named, new volume
This case is for when you want a volume to persist some sort of data for a service, but don’t have any initial data. You specify a named volume and link it with a service.
filebeat:
build:
context: .
dockerfile: Dockerfile-filebeat
image: example/filebeat:app1
user: root
volumes:
- type: volume
source: filebeat_data
target: /usr/share/filebeat/data
read_only: false
volumes:
filebeat_data:
Using named, existing volume
This assumes that the volume already exists. The use case for me here was a volume with a database I had already populated. If the volume specified is not found on the host, this will not work (it can be created ahead of time with docker volume create mongodata).
mongodb:
container_name: mongozilla
image: mongo:4.2-bionic
volumes:
- type: volume
source: mongodata
target: /data/db
volumes:
mongodata:
external: true
docker-compose can still build images with Dockerfiles
You need to point it to the Dockerfile and build context under the build: config. It builds images for all the services that have a build config.
If the context is wrong, the build will throw errors
The errors thrown do not necessarily point right at the issue, so it helps to configure the context for Dockerfiles correctly from the start.
servenginx:
container_name: servenginx
build:
context: frontend/
dockerfile: Dockerfile-buildserve
Using secrets to pass sensitive config information -
Secrets take data, either passed directly or from a file, and make it available to a container at /run/secrets/secretname. This is useful for sensitive information you don’t want exposed.
secrets:
data_user:
file: db_user.txt
data_pass:
file: db_pass.txt
redis_pass:
file: redis_pass.txt
Pass the secrets along to an express container -
secrets:
- data_user
- data_pass
- redis_pass
Customize secret location and permissions using target, mode -
It will still always be under /run/secrets, though, so the configuration is limited.
secrets:
- source: data_user
target: path/name
mode: 0644
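Inside the container a secret is just a file, so a service reads it at startup. A minimal sketch tying the pieces together (the service name and secret names follow the examples above):

```yaml
services:
  express:
    image: node:12.10-alpine
    secrets:
      - data_user
      - data_pass
    # the app reads the values from /run/secrets/data_user and /run/secrets/data_pass
secrets:
  data_user:
    file: db_user.txt
  data_pass:
    file: db_pass.txt
```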
Docker-compose adds network functionality, configs over docker
When compared to using plain docker, docker-compose facilitates running services together that can communicate with each other on a docker network. Most of the other configuration is possible with the regular docker cli, but since it is all saved in docker-compose.yml, the exact setup is kept track of.