If you are familiar with Docker, you may wish to skip this post and move directly to the next.
For years, the Linux kernel has contained the building blocks for containers – namespaces and control groups. Docker is the first technology to make effective, widespread use of these features.
When an application runs, it requires various resources – memory, disk and network. Typically, its view of the disk and network is that of the underlying host. This means that applications can conflict with one another: two applications might depend on different versions of the same system library, or require access to the same network port.
A container provides the application with its own view of the disk and the network. Thus, two containers can run on the same host, each with its own version of the same dependency.
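As a quick illustration of this isolation, one could run two containers side by side, each using a different version of the same runtime, or each listening on the same internal port. The image names here are just examples, and these commands are a sketch that assumes a working Docker installation:

```shell
# Two containers on the same host, each with its own Python version.
docker run --rm python:3.9 python --version
docker run --rm python:3.12 python --version

# Each container can listen on the same internal port (80), published
# to a different port on the host.
docker run -d -p 8080:80 nginx
docker run -d -p 8081:80 nginx
```

On the host itself this would be a conflict; inside the containers, each process simply sees "its" port 80.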
To start a container, we need an “image”. A Docker image essentially encapsulates the layout of the disk that the container process will see. It also defines the command that should be executed when the container is started.
Docker images are created using a Dockerfile, which is typically kept at the root of the application you are running. A container conventionally runs a single process, so it is quite likely that you will have more than one instance of the same image, each carrying out a slightly different role.
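A minimal Dockerfile for a hypothetical Node.js application might look like the following; the base image, file names and start command are all illustrative:

```dockerfile
# Base image providing the operating system and runtime.
FROM node:20

# Copy the application into the image's filesystem.
WORKDIR /app
COPY . .

# Install dependencies at build time, not at container start.
RUN npm install

# The command to execute when a container is started from this image.
CMD ["node", "server.js"]
```

Each instruction here contributes to the filesystem the container will see; the final `CMD` line is the command referred to above.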
A Docker image is layered. Each instruction in the Dockerfile produces a layer, and layers are cached: if an instruction and everything before it are unchanged, Docker reuses the cached layer rather than re-running the step. Thus, even though your Dockerfile may include ‘apt-get install’ commands that take three or four minutes to execute the first time, subsequent builds can potentially complete in under a second, if the only thing you have changed is code in your own application. This is fundamental to the speedy iterations required for effective software development.
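This caching behaviour is why the order of instructions matters. A common pattern, sketched below for a hypothetical Python application, is to install dependencies before copying the application code, so that a code change does not invalidate the slow installation layers:

```dockerfile
FROM python:3.12

# These layers are rebuilt only when requirements.txt itself changes,
# so the slow dependency install is normally served from cache...
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# ...while editing application code invalidates only the layers below.
COPY . /app
CMD ["python", "/app/main.py"]
```

Had the `COPY . /app` line come first, every code change would force the `pip install` step to run again.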
Once a Docker image has been built locally, it needs to be shared between hosts. This is done by pushing it to a Docker registry. The next step in this tutorial will be to create a private Docker registry for serving our own projects.
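Sharing an image is a matter of tagging it with the registry's address and pushing it. The image name and the registry hostname below (`registry.example.com`) are placeholders standing in for the private registry we will set up:

```shell
# Tag the locally built image with the private registry's address.
docker tag myapp:latest registry.example.com:5000/myapp:latest

# Push it to the registry; any other host with access can now pull it.
docker push registry.example.com:5000/myapp:latest
docker pull registry.example.com:5000/myapp:latest
```

These commands assume a Docker daemon and a reachable registry, so treat them as a sketch of the workflow rather than something to run as-is.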
A typical workflow, and the one we will be using for this project, is for a developer to create an image and test it on a local Docker install. Once they are happy with it, they commit their code to a version control system – in this case, Git. From there, a build server (we will be using Jenkins) will pick up the change, build the code, build an image, and push the image to the Docker registry. In this model, no one ever pushes an image to the registry manually – it must be done by Jenkins, otherwise we risk having images in the registry for which we do not have code in Git.
As we build out our infrastructure, we will see examples of how each of these Docker features comes to our aid.