Build dependencies can be problematic. When deploying software on Linux, we can use the libraries and tools provided by the Linux distribution. But in order to build our application, we need those libraries and build tools available, and for repeatable production builds their versions should be pinned. Keeping every workstation in sync with these requirements may be inconvenient or impossible (e.g. when the workstation isn't running Linux), so a separate build machine or a continuous integration server is often used. But that doesn't help when we want to compile incrementally while writing code, or run unit tests before committing changes to version control.
I'll show a simple way to set up a Docker environment for running builds and tests right from your source tree. Here's an example C program, hello.c:
#include <gperftools/malloc_extension_c.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    /* Print tcmalloc's allocator statistics (the output shown below). */
    char buf[4096];
    MallocExtension_GetStats(buf, sizeof buf);
    printf("%s\n", buf);
    return 0;
}
It will be built using this Makefile (recipe lines are indented with tabs):

hello: hello.c
	$(CC) -o $@ $^ -ltcmalloc

clean:
	rm -f hello
Let's record its build dependencies (Ubuntu package names) in deps.txt:

build-essential
libgoogle-perftools-dev
We can create the build environment (a Docker image) using this Dockerfile:
FROM ubuntu:22.04

# sudo is needed later by docker-setup.sh to switch to the build user
COPY deps.txt /tmp/
RUN apt-get update && \
    apt-get -y install sudo && \
    xargs apt-get -y install < /tmp/deps.txt && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
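The xargs line deserves a note: it turns each line of deps.txt into an argument of a single apt-get invocation. A quick way to see the expansion, with echo standing in for the real apt-get so nothing is installed (the demo file name is made up here):

```shell
# Each line of the package list becomes one argument to a single
# `apt-get -y install` command line; `echo` just prints that command.
printf 'build-essential\nlibgoogle-perftools-dev\n' > /tmp/deps-demo.txt
xargs echo apt-get -y install < /tmp/deps-demo.txt
# prints: apt-get -y install build-essential libgoogle-perftools-dev
```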
So let's create it:
$ docker build -t hello-env .
There you have a nice, unchanging system image that you can put in a Docker registry and share with your teammates.
Next we need two shell scripts. The first, in-docker, launches a Docker container with the working directory mounted inside it (make it executable with chmod +x in-docker):
#!/bin/sh
dir=`readlink -f .`
uid=`id -u`
name=`id -un`
exec docker run --rm --tty --volume="$dir:$dir" --workdir="$dir" \
    hello-env sh docker-setup.sh "$uid" "$name" "$HOME" "$@"
docker-setup.sh is run inside the container in order to create a user account before executing the actual build command:
#!/bin/sh
uid=$1
name=$2
home=$3
shift 3

mkdir -p $home
chown $uid:$uid $home
adduser --uid $uid --home $home --disabled-password --gecos "" --quiet $name

exec sudo --set-home -u $name "$@"
Finally, we can build the program:
$ ./in-docker make
cc -o hello hello.c -ltcmalloc
$ ./hello
./hello: error while loading shared libraries: libtcmalloc.so.4: cannot open shared object file: No such file or directory
The resulting binary appears in the working directory as usual, but since the google-perftools runtime library isn't installed on the host, we can't run it there... except inside the container:
$ ./in-docker ./hello
------------------------------------------------
MALLOC:       16768 (    0.0 MiB) Bytes in use by application
MALLOC: +    933888 (    0.9 MiB) Bytes in page heap freelist
MALLOC: +     97696 (    0.1 MiB) Bytes in central cache freelist
MALLOC: +         0 (    0.0 MiB) Bytes in transfer cache freelist
MALLOC: +       224 (    0.0 MiB) Bytes in thread cache freelists
MALLOC: +   1142936 (    1.1 MiB) Bytes in malloc metadata
MALLOC:   ------------
MALLOC: =   2191512 (    2.1 MiB) Actual memory used (physical + swap)
MALLOC: +         0 (    0.0 MiB) Bytes released to OS (aka unmapped)
MALLOC:   ------------
MALLOC: =   2191512 (    2.1 MiB) Virtual address space used
MALLOC:
MALLOC:          10              Spans in use
MALLOC:           1              Thread heaps in use
MALLOC:        8192              Tcmalloc page size
------------------------------------------------
Call ReleaseFreeMemory() to release freelist memory to the OS (via madvise()).
Bytes released to the OS take up virtual address space but no physical memory.
Now, imagine a test suite which requires a database to run. Instead of installing a database on your laptop and resetting it to a known state before each run, we can bundle it in our build environment: install and prepare it in the Dockerfile, and start it in the docker-setup.sh script. Couldn't be simpler.
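As a sketch of that idea (the package and the database name hello_test are assumptions for illustration; adapt them to your database of choice), the Dockerfile would gain something like:

```dockerfile
# Hypothetical addition to the build-environment Dockerfile: install
# PostgreSQL and pre-create a test database while building the image,
# so every container starts from the same known state.
RUN apt-get update && \
    apt-get -y install postgresql && \
    apt-get clean
RUN service postgresql start && \
    sudo -u postgres createdb hello_test && \
    service postgresql stop
```

and docker-setup.sh would gain a `service postgresql start` line before its final exec, so the database is up by the time the test suite runs.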
Finally, we probably also want to use Docker to deploy the built program. One approach is to create a base image containing only the runtime dependencies, and build the build-environment image on top of it (instead of directly on Ubuntu). The final deployment image can then be built on the same base image, without dragging in the unnecessary build dependencies.
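A minimal sketch of that layering (the image names hello-base and the directory layout are made up for illustration; libgoogle-perftools4 is the Ubuntu package that ships libtcmalloc.so.4):

```dockerfile
# base/Dockerfile -- runtime dependencies only
FROM ubuntu:22.04
RUN apt-get update && apt-get -y install libgoogle-perftools4 && apt-get clean

# build/Dockerfile -- build environment, layered on the base
FROM hello-base
COPY deps.txt /tmp/
RUN apt-get update && xargs apt-get -y install < /tmp/deps.txt && apt-get clean

# deploy/Dockerfile -- final image, also layered on the base
FROM hello-base
COPY hello /usr/local/bin/hello
CMD ["hello"]
```

With `docker build -t hello-base base/` run first, the build and deployment images share the runtime layer, so the deployed program sees exactly the same library versions it was built and tested against.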