Docker: the what, the why, the how


In a previous post, I briefly threw in a few “buzzwords”, namely Immutable Infrastructure and Docker. To start, Docker is more than just a new buzzword increasingly appearing on folks’ resumes. It is a tool built to address a particular problem: Immutable Infrastructure, also referred to as Immutable Servers. The approach it takes to achieve that is fairly different from that of automated configuration management tools, and it has opened up multiple possibilities for both devs and ops folks.

According to their website:

“Docker is a platform for developers and sysadmins to develop, ship, and run applications. Docker lets you quickly assemble applications from components and eliminates the friction that can come when shipping code. Docker lets you get your code tested and deployed into production as fast as possible.”

What is Docker?

Docker is a packaging tool and application runtime. With it you are able to assemble/package an image which you can then run in a container.

In Docker parlance, an image is a stack of immutable layers. You can think of them as the stages of a space rocket: each part sits on top of the other and depends on the one below it.

[Image: a multi-stage space rocket ("Sure took a while"), via Wikipedia]

Likewise, each layer represents a set of modifications to the filesystem, building up to the final image, i.e. the one we intend to ship/run/distribute. For example, let’s say we had a simple Dockerfile.
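A minimal sketch will do here, assuming an ubuntu base image, a MAINTAINER line, and a couple of purely illustrative setup commands:

FROM ubuntu
MAINTAINER Your Name <you@example.com>

# each command below produces a new intermediate layer on top of the previous one
RUN apt-get update
RUN apt-get install -y curl
ADD . /app
WORKDIR /app
CMD ["/app/start.sh"]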

In the above file, the source image is ubuntu. Every command executed past the “FROM” line (excluding MAINTAINER) results in an intermediate image, all the way to the last command. The final image (the tip of our rocket) can be tagged and versioned so as to give it a descriptive name.

docker build -t myuser/myimagename:latest .

The above command would result in the final image being tagged with the name ‘myuser/myimagename’ and the version ‘latest’. If I were to push it to Docker Hub, I could then refer to it from a separate Dockerfile as the source/parent image (FROM). If I were to run it, Docker would run the image in a container, i.e. an isolated environment in which the built image is executed, assuming an ENTRYPOINT or CMD has been defined.

docker run -d --name=some_name -t myuser/myimagename:latest

The container itself is like, well, a container. Let’s say you had a band, and you put them in a container for whatever reason.

[Image: a band, “The Loud Family”]

For them to play music, you may want to give them guitars, mics, and other necessary instruments. For you to listen to the music outside of the container walls and communicate with the players, you may want to expose some sort of communication channel. Then when you close the container and tell the band to play, they would be able to do so without interference, yet you would still be able to listen.

Likewise, with a Docker container, you can pass in the items your application needs via environment variables (note: you also have the option of defining environment variables in your Dockerfile). You can also choose to expose certain ports so as to communicate with the application within the container, although you do have the option to hook into the host’s network configuration instead. Other than that, your application runs isolated from its host and from other containers. And once the container starts running, it is immutable in the sense that you cannot change its configuration or update the image underneath it. The processes running inside it can still make changes, as they have write access. From outside the container, all we can do is attach to it, stop it, restart it, or start it. You also have the ability to commit the internal state of the container into a new image.
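A rough sketch of that lifecycle, using the container started above (the snapshot tag is just an example):

docker stop some_name                                   # stop the running container
docker start some_name                                  # start it again with the same configuration
docker attach some_name                                 # attach to its output
docker commit some_name myuser/myimagename:snapshot     # capture its current state as a new image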

Usages

The most obvious use, I would say, is Automated Deployment. Instead of baking AMIs, or using Chef or Puppet to build up your server and deploy the latest version of your software, you could build a Docker image and tag it with the appropriate version.

For example, let’s say I was building a Scalatra application that made use of the SBT Native Packager. I could set things up so as to build a Docker image after running my tests and running the staging task (‘universal:stage’).
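The build step itself could then be as simple as running the sbt tasks followed by a docker build, something like this (the image name and version are placeholders):

sbt clean test universal:stage
docker build -t myuser/yourproject:1.0.0 .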

Here’s a sample Dockerfile for the app:

FROM williamyeh/scala:2.11.2

# copy the output of 'universal:stage' into the image
ADD target/universal/stage /services
WORKDIR /services

# make sure the launcher script is executable
RUN chmod +x /services/bin/yourproject

# the app listens on 8000; run the launcher as the container's entrypoint
EXPOSE 8000
CMD []
ENTRYPOINT ["/services/bin/yourproject"]

Then, on the server, you would just need to run it with the needed environment variables passed along, as well as the mapped port. That’s it. Simple, isn’t it?
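Concretely, the run command might look something like this (the variable name and ports are placeholders, with the host port mapped to the EXPOSEd 8000):

docker run -d --name=yourproject \
  -e DATABASE_URL="jdbc:postgresql://dbhost/yourdb" \
  -p 80:8000 \
  myuser/yourproject:1.0.0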

Please refer to this previous post in which I discuss Terraform.io and give a snippet of how I do my deployment using both Terraform and Docker.

Another use case is setting up services that your application may rely upon, like MongoDB, RabbitMQ, MySQL, Postgres, and even Oracle. Please check out the registry for all that’s publicly available.
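For instance, pulling up a throwaway MongoDB instance from the public mongo image is a one-liner (the container name is arbitrary):

docker run -d --name some-mongo -p 27017:27017 mongo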

Another use case is that of testing your application. As part of your CI process, you could build and run the image in a detached container with little difficulty, making it easier to run integration tests on your system from hosted CI solutions like Codeship or CircleCI. At the time of this writing, CircleCI has added Docker support, whereas with Codeship I believe you have to install it yourself.

Another possible use: building your own Continuous Integration service. When using tools like TeamCity, you have to make sure that any changes to your environment get reverted and cleaned out. You also have to make sure not to run build configurations that could step on each other’s toes. With Docker, you could in theory set things up such that at the start of every build, a Dockerfile is generated (see the sketch after this list) containing:

  • all the environment values desired by the user. These could be read from a ci.yml file
  • the base image to be used (e.g. 'williamyeh/scala:2.11.2'), again read from the ci.yml file
  • an 'ADD' statement grabbing the deployment ssh keys
  • a 'RUN' statement cloning the repository and a 'WORKDIR' statement setting the location of the repository as the working directory
  • a collection of 'RUN' statements matching the set of steps the user wants executed, also read from the ci.yml file
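A generated Dockerfile might end up looking something like this (the base image, environment values, repository, and build steps would all come from the hypothetical ci.yml):

FROM williamyeh/scala:2.11.2

# environment values read from ci.yml
ENV DATABASE_URL jdbc:postgresql://localhost/testdb

# deployment ssh keys
ADD id_rsa /root/.ssh/id_rsa

# clone the repository and make it the working directory
RUN git clone git@github.com:someuser/someproject.git /build
WORKDIR /build

# build steps read from ci.yml
RUN sbt clean test
RUN sbt universal:stage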

To track the progress of the build, repeated invocations of the docker logs command could be triggered once the container is running (at the time of this writing, I do not believe there is a way to redirect output from the container outside of docker logs).

If the user wants build artifacts to be retrieved, you could grab them from the container by running docker cp.
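A sketch of both, assuming a container named build_1 with its artifacts staged under /build/target:

docker logs -f build_1                                    # follow the build output
docker cp build_1:/build/target/universal ./artifacts     # copy the artifacts out of the container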

One more possibility: you can also use Docker to set up dev environments. No more “it works on my environment and not on yours”. Each dev would have the exact same environment (more or less).

Warning!

I do maintain that the best way to learn about Docker is to check out the docs. Nevertheless, I hope this article has helped you understand what it is, what problem(s) it exists to solve, and how it can be leveraged.

Docker is a pretty amazing tool. Please give it a try. You won’t regret it.

