A Brief History of Containers

In the beginning, there was MULTICS. Computers lived inside warehouse-sized buildings. A company would have a single computer, and all processes lived inside it. There was very little process isolation; processes had direct access to the physical memory and I/O mappings of the system.

And then UNIX was born. We built walls between our processes, creating a logical abstraction of physical systems. Each process lived in its own world. The principal idea was that programs should not be capable of influencing the state of others except through pre-defined communication pipes. However, all processes still observed the same CPU, memory, and disk. Linux largely inherited its process isolation policy from UNIX. Scheduling it all became the responsibility of a multiplexer: the kernel.

Computers got faster.

It's no secret that a commodity cell-phone processor can execute orders of magnitude more instructions per second than the largest MULTICS warehouse ever built. In fact, it takes a very well-designed and optimized program to effectively utilize the computing power of a modern CPU. Optimization is an expertise.

MULTICS put all people into the same house, which was one big room.

UNIX/Linux put families of people into houses which grew to the size of a hotel.

Virtualization put each family into its own apartment.

Containerization put each family member in its own room.

A Tale of Two Virtualizations

Most people are aware that there are two primary notions of virtualization today: hypervisors and containers. I'll spare you a discussion of the technical differences. The motivations of the two have become quite blurry. You can find hypervisors designed to abstract single-process embedded processors, and you can find containers running full-fledged init systems.

Hypervisors have a place in the world, but let's continue on to discuss what we are here for.


chroot

A command that has seen resurgences throughout its 40+ year life. chroot was the first program which sought to provide namespace isolation, even if that isolation covered only the filesystem. It's important to note that early UNIX processes interacted through named pipes, which presented as file paths on the disk. UNIX's spiritual successor, Plan9, went further and interacted with everything as if it were a file handle.

It meant that it was now possible to create a portable directory of binaries, accompanied by some named-pipe linkage, which could act without affecting any files outside of its namespaced scope.
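The portable-directory idea can be sketched in a few lines of Python. This is a toy: `os.chroot` and `os.mkfifo` are real standard-library calls, but the directory layout, function names, and pipe path are purely illustrative assumptions, and the `chroot` call itself requires root.

```python
import os

def build_jail(root):
    """Create a minimal, portable chroot directory tree.

    Illustrative layout: a bin/ for binaries and a tmp/ holding a
    named pipe that acts as the jail's communication channel.
    """
    for sub in ("bin", "tmp", "dev"):
        os.makedirs(os.path.join(root, sub), exist_ok=True)
    # Processes chrooted here can talk through this pipe,
    # yet see nothing of the filesystem outside `root`.
    os.mkfifo(os.path.join(root, "tmp", "ipc"))
    return root

def enter_jail(root):
    """Confine this process's filesystem view to `root`.

    os.chroot needs CAP_SYS_CHROOT (typically root), so this is
    the part an ordinary user cannot run.
    """
    os.chroot(root)
    os.chdir("/")
```

Everything the jailed process can name lives under `root`; the named pipe is its only pre-arranged channel to the outside.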

This was a good start, but why not isolate other resources?

FreeBSD Jail

FreeBSD was the first UNIX variant to strip itself of its AT&T roots and begin to innovate. It added the notion that different chroot namespaces should have their own IP addresses and process trees. Whereas the chroot was merely a closed door to the outer world, the Jail was a locked one.

And with this innovation in 2000, the majority of what we would call a 'container' today was invented.

Solaris Zones

Sun Microsystems brought quite a lot of good software into the world. Their vision was to see all processes become migratable across not just cores, but servers. That is, you should be able to pause a program, send it over the network, and resume it as if nothing had happened.

They never fully actualized this dream, but from it Solaris Zones was born. It had all of the features of FreeBSD jails, plus it allowed you to set CPU and memory shares.


OpenVZ

Solaris Zones became the envy of the Linux world. OpenVZ was the response: a heavily customized Linux kernel capable of performing the same resource isolation as Solaris Zones.

Unfortunately, Virtuozzo (the VZ in OpenVZ) was unable to mainline these changes, so they had to be re-written. It's not clear to me whether it was a philosophical disagreement between kernel developers, or whether the patches were not up to a proper standard.

Control Groups

Internal to Google, there was an effort to properly patch the Linux kernel with comparable changes. It's unclear whether they were designed primarily for inclusion in the Android OS, or intended to assist their Borg process scheduler. But what is clear is that they were mainlined.

Anyone who tried to use Control Groups (or later, cgroups) directly likely found them to be unusable. Google mainlined the core technology, but made little attempt to make it usable in the broader ecosystem.
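To give a flavor of why: the raw interface is nothing but a tree of plain-text files. Here is a minimal sketch of setting cgroup v2-style limits in Python. `cpu.max`, `memory.max`, and `cgroup.procs` are the real v2 control-file names, but the helper names are invented, and writing to an arbitrary directory (instead of the real `/sys/fs/cgroup`, which requires root) is an assumption made so the sketch can be shown unprivileged.

```python
import os

def set_limits(cgroup_dir, cpu_quota_us, cpu_period_us, memory_bytes):
    """Apply CPU and memory limits by writing cgroup v2 control files.

    cpu.max takes "<quota> <period>" in microseconds; memory.max
    takes a byte count. This is the entire 'API': plain text files.
    """
    os.makedirs(cgroup_dir, exist_ok=True)
    with open(os.path.join(cgroup_dir, "cpu.max"), "w") as f:
        f.write(f"{cpu_quota_us} {cpu_period_us}")
    with open(os.path.join(cgroup_dir, "memory.max"), "w") as f:
        f.write(str(memory_bytes))

def add_process(cgroup_dir, pid):
    """Move a process into the cgroup by writing its PID."""
    with open(os.path.join(cgroup_dir, "cgroup.procs"), "w") as f:
        f.write(str(pid))
```

Every knob is a file write, every query a file read. Powerful, but a long way from a usable container tool, which is exactly the gap the projects below tried to fill.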

Much later, Google did make a half-hearted attempt with its release of LMCTFY (Let Me Contain That For You). But it died, obviated by several superior technologies.


LXC

In a very real sense, LXC marks the beginning of proper containerization on Linux. It was actually simple to use and had most of the features people associate with current soft virtualization. It was a useful abstraction over control groups.

Unfortunately, LXC got bogged down in a lot of issues outside of its control. Different companies had differing interests and motivations in marketing their own version of cgroups. Simultaneously, the init wars were raging: a working container init system would need to support systemd, SysV, Upstart, and OpenRC.

What LXC did not have was an easy-to-use transport method for containers. So it lost to Docker.


Docker

It turns out that the primary market for containers does not care much about resource isolation. Companies apparently struggle to build container naming, organization, and deployment.

They struggle so much at this that they will make great compromises to get it for 'free'. Many companies would rather introduce a single point of failure into their system than build an indexed data archive.

The original Docker was literally a wrapper around LXC commands. It did add some unique LVM sparse-disk integration, AUFS integration, and later OverlayFS support. But beyond those things, it did no more than LXC, and in some ways quite a bit less.

If someone "won" the container wars, it was Docker. Its parent company, dotCloud, wasn't a huge winner. But finally, everyone could understand how to make their own container.

It doesn't matter that the single point of failure causes system-wide lockups. It doesn't matter that it has no init integration and isn't even capable of running a full OS inside of it. It doesn't matter that its user-space network routing introduces heavy latency. And it apparently doesn't matter that it circumvents decades of security development in various policy enforcement systems and SELinux. Docker does most things right, most of the time, and apparently that's good enough.

The Others

There are many options on the market these days. If you want something that will probably still work a decade from now, use systemd-nspawn. If you want something that thinks it has solved the packing problem, use Kubernetes. If you're brave enough to ship your own chroots, use LXC. If you're bored enough to build your own, use libcgroup.

Final Thoughts

The road to stable containers was at least a decade long, and the road to containers for everyone was probably twice that. Collectively, we have agreed that the value of a technology can be measured by the number of Stack Overflow questions it has. And in that regard, Docker is unlikely to be unseated for quite some time.