Update 4/22/14: James Turnbull has covered similar territory and reached similar conclusions. In fact, the more I look, the more I see this debate playing out and the first generation of solutions beginning to take form. In just two months I have become much more optimistic about the immediate applicability and viability of Docker for real-world problems.
When I first heard about Docker I knew it was something to watch. In a nutshell Docker is a mechanism on top of Linux Containers (LXC) that makes it easy to build, manage, and share containers. A container is a very lightweight form of virtualization, and Docker allows for quickly creating and destroying containers with very little concern for the base OS environment they are running on top of.
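To make that concrete, here is a rough sketch of the workflow using the stock Docker CLI; the image name and the toy command are purely illustrative:

```shell
# Fetch a minimal base image (name is illustrative).
docker pull ubuntu

# Start an isolated container in seconds; it shares the host kernel,
# so there is no hypervisor and no guest OS to boot.
docker run -d ubuntu /bin/sh -c "while true; do date; sleep 60; done"

# List running containers, then throw one away just as quickly.
docker ps
docker stop <container-id>
docker rm <container-id>
```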
Because Docker is based around the idea of running “just enough” OS to accomplish your goals, and because it is focused on applications rather than systems, there is a lot of power in this model. Imagine a base server that runs absolutely nothing but a process manager and the Docker daemon, with everything else isolated and managed within its own lightweight Docker container. Well, imagine no longer, because it is being built!
But with power always comes responsibility, and Docker has a caveat you can drive a truck through — the ephemeral, process-oriented nature of Docker strongly favors moving back to the old “Golden Master Image” approach to software deployment. That is to say, it’s great that you can easily distribute a completely isolated application environment that will run everywhere with no effort. But in doing so, it is very easy to ignore all of the myriad problems that modern configuration management (CM) systems such as Puppet were built to address.
In a nutshell, modern CM deals with converging a given operating system to a desired state. So Puppet takes whatever you have running and applies a variety of transformations to bring it into compliance with the profile you have built. Then it is an easy matter to, say, roll out a DNS change to all 500 of your servers, as each will pick up the change and converge on the next run of the Puppet agent (typically every 30 minutes).
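As a hedged illustration of that convergence model (the class name, file contents, and addresses below are hypothetical), the DNS change above might be expressed as a small Puppet resource; every node that includes this class converges to the new nameserver on its next agent run:

```puppet
# Hypothetical Puppet class managing /etc/resolv.conf. Changing the
# nameserver here rolls the change out to every node on its next run.
class profile::dns {
  file { '/etc/resolv.conf':
    ensure  => file,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    content => "nameserver 10.0.0.53\nsearch example.internal\n",
  }
}
```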
Docker images are much more ephemeral, and they are generally created by listing a series of shell commands in a recipe called a Dockerfile, such as “apt-get install apache” and “echo 0 >/selinux/enforce”. Anyone with modern Ops experience will shudder at the idea of trying to manage a complete OS — or hundreds of them — using this model.
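For reference, a minimal Dockerfile of this kind looks something like the sketch below (the base image, package, and commands are illustrative):

```dockerfile
# A bare-bones recipe: each RUN line is just a shell command baked
# into the image, with no notion of desired state or convergence.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y apache2
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
```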
The Docker defender’s response is that since Dockerfiles represent ephemeral machines that are fixed to an unchanging base (aha — a Gold Master!), this brittle approach is not a problem. But it is a problem, not least because real production applications are often long-running, and servers (or now, containers) require a variety of basic configuration for things like DNS, NTP, backups, logging, access control, kernel parameter tweaks, and on, and on.
Docker is a great idea for development systems, and I have no doubt that it will eventually work well in production too. But the Docker enthusiasm, while justified, must be tempered with an understanding that Docker is still early in its development, and a production Docker deployment will work better if it embraces CM as a vital component rather than spurning it.
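One hedged sketch of what embracing CM might look like (the paths, module, and class names here are hypothetical, not a prescription): drive the image build from the same Puppet manifests used on traditional servers, rather than from a pile of ad-hoc shell commands:

```dockerfile
# Hypothetical hybrid build: the container's contents are described by
# Puppet manifests shared with the rest of the fleet, and Docker only
# packages the result.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y puppet
COPY puppet/ /etc/puppet/
RUN puppet apply --modulepath=/etc/puppet/modules \
      -e "include profile::dns, profile::webserver"
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
```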
Thus I am very excited about the Deis project, which melds Docker images for applications, Chef for everything else, and a Heroku-style platform-as-a-service (PaaS) approach. Deis has the potential to be a clean, lightweight, but powerful alternative to more complicated PaaS offerings like OpenShift and CloudFoundry.
My advice? If you are looking for a PaaS solution today, consider OpenShift and CloudFoundry, but keep a very close eye on Docker — it is developing quickly and becoming very powerful.
It is an exciting (and bewildering) time to be involved in DevOps, and each of these various technologies will only become better and more mature with time.