Working with Docker containers and images

There are a number of aspects to working with Docker: building images, running containers from those images, looking through container logs, and composing containers together to model more complete deployments. I am going to give a brief overview of several of these topics to show how Docker can be used to effectively run services on a system without needing to install them, and how to use it to test multi-service deployments.

Docker containers run this website, for example

While running services locally is a great use of Docker, it is just one use; Docker remains a key part of my development/build/test/deploy pipeline throughout. You can see this in one of my Kubernetes writeups, where the deployments pull images from Docker Hub, Filebeat specifically, to create the pods. I pull either base or custom images from Docker Hub for all my deployments/daemonsets, which are running this website right now. With that range of uses in mind, back to getting going with Docker.

Quickest way to get a container running

Pull an image from Docker Hub and run it. When you use docker run and specify an image, it will be pulled automatically if you don’t already have it locally.

docker run --name mongodb-test -d mongo:4.2-bionic
docker run --name redis-test -d -p 6380:6379 redis:5.0-alpine

-d - detach; runs the container in the background.
--name - give the container a name, otherwise one is randomly generated.
-p - specifies a port mapping, host:container.
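
To check that the containers came up, and to talk to Redis through the mapped port (this assumes redis-cli is installed on the host; any Redis client would do):

docker ps --filter name=redis-test   # should list the running container
redis-cli -p 6380 ping               # expect: PONG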

Using exec to run a command/shell in a container

Both of the following commands start a shell inside the running container so you can interact with the service and explore the container.

docker exec -ti mongodb-test /bin/bash  
docker exec -ti redis-test /bin/ash  
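
exec is not limited to interactive shells; it can run a one-off command too. For example, pinging Redis with the redis-cli bundled in the image:

docker exec redis-test redis-cli ping   # expect: PONG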

Image building; using dockerfiles to configure images

A dockerfile takes a base image and modifies it with the instructions you give it. In most cases the base image alone is not going to be enough, so this is how you create a customized image.
Build context is important for dockerfiles
You can’t access files outside the build context, and by default Docker sends everything inside the build context to the build process.
Make sure the correct content is in the build context, and use a .dockerignore
All the files you need for the build have to be in the build context. You should also set up a .dockerignore to exclude anything you don’t need; otherwise it all gets sent to the build process.

docker build -f path/dockerfile ./
# -f configures path to dockerfile
# ./ is the build context 
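
As a sketch, a minimal .dockerignore for a node project might look like this; the entries are illustrative, not a definitive list:

# .dockerignore - keep these out of the build context
node_modules
.git
*.log
build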

As of Docker 18.03, the dockerfile should be storable outside the build context
This was an annoyance for some time; the dockerfile had to be inside the build context. That should no longer be the case.

Some of the main instructions in a Dockerfile -

FROM mongo:4-xenial - The image to base your image build on.
RUN cmd && cmd - Run commands in the image during the build.
COPY src /app/dest - Copy files from the build context into the image.
WORKDIR /path/to/dir - Set the working directory.
USER - Switch the user.
ENTRYPOINT - A command that is always executed on start.
CMD - The command to execute on start; if ENTRYPOINT is set, CMD supplies its default arguments. Can be overridden on the command line.
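
A quick sketch of how ENTRYPOINT and CMD interact; the executable and arguments here are illustrative. ENTRYPOINT fixes the executable, while CMD supplies default arguments that docker run image other-args would replace:

ENTRYPOINT ["redis-server"]
CMD ["--appendonly", "yes"]
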
Use RUN commands in groups -
Any operation with intermediate steps should be run from start to end in one RUN command. Each RUN instruction creates a new image layer, so chaining commands avoids bloating the image with intermediate layers (a more concrete sketch follows below).
RUN cmd1 && cmd2 && cmd3 && finalstep
What is required in the Dockerfile
The FROM statement is the only thing required. What else you add depends on what you are trying to do. I almost always end up using RUN and COPY, since I usually need to add my own code and configure it.
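
Here is the grouped-RUN pattern from above in a more concrete, illustrative form, assuming a Debian-based base image; the package install and the apt cache cleanup happen in the same layer, so the cache never ends up in the image:

RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*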

An example Dockerfile with a two-stage build; React app served by Nginx

This demonstrates a decent number of dockerfile facets. Multi-stage builds might sound like an advanced feature, but I find they are simple to use and can greatly reduce the resulting image size.
Two build stages:

  1. Build a react app using node as the base image.
  2. Set up an nginx server, copy over app built in stage one.

Stage 1; build node app -

Note that both stages are in the same dockerfile, but I have separated them for clarity.

# The image to base this stage of the build from.
FROM node:12.10-alpine as node-build  

# Make and set the working dir
RUN mkdir -p /app/example
WORKDIR  /app/example
 
# Copy the package.json and install all dependencies
COPY package.json /app/example/package.json  
RUN npm install --silent   

# Copy the app contents and build production version
COPY .  /app/example/  
RUN npm run build  

Stage 2 starts with another FROM statement:

The first stage is kept for use during the build, but anything not copied into the final stage is discarded when the build finishes.

# Use nginx image; node was only needed for the build.
FROM nginx:stable-alpine  

# Set up the app directory for nginx
RUN mkdir -p /srv/nginx/example/ &&\  
    rm /etc/nginx/conf.d/default.conf  

# Copy nginx server config
COPY /nginx/example.conf /etc/nginx/conf.d/  
# Get the built app from STAGE 1; note the COPY command uses 
# --from=node-build to indicate it is copying from stage 1.
COPY --from=node-build /app/example/build /srv/nginx/example/  

# Run nginx
CMD ["nginx", "-g", "daemon off;"]  

Then build an image from this Dockerfile

docker build -t example:reactnginx -f Dockerfile-nginxserve .

Run a container from the image

docker run -d -p 8000:80 --name react_test example:reactnginx  
# Then kill it later on using the --name given above
docker kill react_test 
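
To confirm the server is answering before killing it (assuming curl on the host; container port 80 was mapped to 8000 above):

curl -I http://localhost:8000/   # expect an HTTP 200 from nginx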

Looking at container process logs

The way this works is that Docker redirects stdout/stderr of the main container process (PID 1, the CMD in the dockerfile) to its own logging. That is why the logs are accessed directly via docker rather than by logging into the container; if you do shell in, you will find the log files there blank.

docker logs react_test  
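
docker logs also takes flags to follow output and limit how much history is printed:

docker logs -f --tail 50 react_test   # follow, starting from the last 50 lines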

If you don’t run detached (-d), logs print to console
If you want to get the log output directly, run the container this way:

docker run -p 8000:80 --name react_test example:reactnginx  

Everything is the same as before except the main process logs directly to the shell you are running docker from.

Live code updates; running a container with a bind mount

Using a bind mount allows a container to see live updates when files change on the host. This is useful any time you want a container to pick up code or file changes right away. The example below specifies the type (bind), the src/dst locations, and makes the mount read-only (ro).

docker run -it --mount "type=bind,src=/home/dev/app,dst=/app/dest/,ro" \
  -p 3001:3000 \
  --name react_test \
  example:react_dev
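
One way to confirm the mount took effect; the --format template just pulls the Mounts field out of the inspect output:

docker inspect --format '{{ json .Mounts }}' react_test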

A few other commands useful for checking out images/containers

docker inspect containername
Take a look at a container in more detail.
docker images
Shortcut for docker image ls; show images and their sizes.
docker image
Prefix for image commands; ls, pull, prune, rm, and tag are some of the subcommands available.
docker image pull name - download an image.
docker image ls -a - show all images, including intermediate ones.
docker ps
Show the currently running containers; shortcut for docker container ls.
docker container
Prefix for container commands; ls, stop, start, inspect, and rm are some of the subcommands for controlling containers.
docker container stop - stop a running container.
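
Two related prune commands are handy for cleanup; the -f flag just skips the confirmation prompt:

docker container prune -f   # remove all stopped containers
docker image prune -f       # remove dangling images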

Docker is great for running services without installing them

Taken to the extreme, container operating systems like CoreOS or RancherOS run everything in a container. At an everyday level, however, Docker lets you run various services on your machine without having to install them. This makes it easy to try many different configurations or setups without resorting to complex parallel installs that always seem to cause problems. It is also just a good way to use a service without adding any of it directly to your system, keeping the system clean while running the exact version/setup of the service you want.