You've always been able to run containers on a variety of operating systems: Zones on Solaris; Jails on BSD; Docker on Linux and now Windows Server; OpenVZ on Linux, and so on. As Docker in particular and containers in general explode in popularity, operating system companies are taking a different tack. They're now arguing that to make the most of containers you need a skinny operating system to go with them.
Why? (Besides giving them a new revenue stream?)
How? Alex Polvi, CoreOS's CEO and co-founder, realized that since containers isolate applications from the base operating system, a change in the operating system doesn't necessarily affect the container or its application. Of course, to make certain that's true, you want the OS to supply only the minimum required services.
Then, taking a leaf from how Google updates Chrome OS (remember, CoreOS started as a Chrome OS fork), Polvi saw that, with containers, servers too could automatically update themselves, and this, in turn, would vastly speed up operating system patching.
So, Polvi continued, "if it’s all auto-updating and takes care of itself, you shouldn’t have to worry about it anymore. CoreOS as an organization is maintaining it for you and you just worry about your application side."
So, what CoreOS does, and a host of other operating systems do now or will soon, is update a small operating system that supplies only the necessary services as a single object. In this model, there is no package-by-package updating. Instead, you wait until a server can be taken out of rotation (on a cloud, there are always other servers to pick up the load) and then replace its entire OS with the new, updated version.
This way you can quickly provide the latest updates without any downtime that's perceptible to users. With this mechanism you can also provide a consistent operating system across your entire data center or cloud. There are no servers with one set of patches and another with an entirely different set of patches.
Another advantage of this approach is that if something does go wrong with the new version, you can always just roll back to an earlier, safe version. As Paul Cormier, Red Hat's president of Products and Technologies, said in a recent blog post, "Linux containers both augment and depend upon the consistency of the operating system."
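The update-and-rollback scheme described above is easiest to picture as two root partitions: the updater writes a complete new image to the inactive partition, boots into it, and falls back to the known-good partition if the new image fails. Here is a toy sketch of that idea; this is not CoreOS's actual code, and all names are illustrative:

```python
# Toy model of an A/B (dual-partition) image update scheme.
# Illustrative only -- not CoreOS's actual implementation.

class ImageUpdater:
    def __init__(self, initial_version):
        # Two root partitions: one active, one spare.
        self.partitions = {"A": initial_version, "B": None}
        self.active = "A"

    @property
    def spare(self):
        return "B" if self.active == "A" else "A"

    def stage(self, new_version):
        """Write the new OS image to the inactive partition."""
        self.partitions[self.spare] = new_version

    def reboot_into_update(self, healthy=True):
        """Boot the staged image; fall back if it fails a health check."""
        candidate = self.spare
        if healthy and self.partitions[candidate] is not None:
            self.active = candidate   # commit: new image becomes active
            return True
        return False                  # rollback: keep the known-good image

    def running_version(self):
        return self.partitions[self.active]


updater = ImageUpdater("os-1.0")
updater.stage("os-1.1")
updater.reboot_into_update(healthy=True)
print(updater.running_version())           # -> os-1.1

updater.stage("os-1.2")
updater.reboot_into_update(healthy=False)  # bad image: automatic rollback
print(updater.running_version())           # -> os-1.1
```

The key point of the design is that the running system is never patched in place: an update either fully replaces the OS or leaves the old, working image untouched.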
This idea has caught on like a house on fire. Now, besides CoreOS, there's Red Hat with Red Hat Enterprise Linux 7 Atomic Host (RHELAH), Canonical with Ubuntu Core, and, in a surprising move, VMware with its first Linux distribution, Photon.
In addition, people who just want to fool around with Docker containers can use boot2docker. This tiny Linux distribution weighs in at only 27MB. It is based on Tiny Core Linux and is made specifically to run Docker containers.
What these container-friendly operating systems have in common, according to Docker, is:
- Stability is enhanced through transactional upgrade/rollback semantics.
- Traditional package managers are absent and may be replaced by new packaging systems (Snappy) or custom image builds (Atomic).
- Security is enhanced through various isolation mechanisms.
- Systemd provides system startup and management.
So, how are they different from each other? Those differences are still emerging. Even the oldest of these, CoreOS, hasn't reached its second birthday yet. Here's what we know so far.
Polvi said in an interview that CoreOS was designed from the start to be "a server that can automatically update itself. That’s very different than the way people think about servers now. If this works, we thought we could unlock a lot of value, that value being around security, reliability, performance, really everything you get from running the latest version of software."
CoreOS manages this with FastPatch. With it, you update the entire OS as a single unit instead of package by package.
As for containers, CoreOS started as Docker's best buddy. But then, Polvi said, "Docker started to become a platform in and of itself so it will compete with existing platforms. And that’s fine. I understand if they want to build a platform as a company, that makes a lot of sense as a business. The issue is, we still need that simple component to exist for building platforms."
In December 2014, Polvi explained: "We thought Docker would become a simple unit that we can all agree on. Unfortunately, a simple re-usable component is not how things are playing out. Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server. … It is not becoming the simple composable building block we had envisioned." So, CoreOS introduced its own container format, Rocket.
CoreOS still supports Docker as well, but moving forward Rocket will be its primary container.
Red Hat also saw the technical advantages of a lean, mean Linux. It started working on one with Project Atomic. This open-source effort is now available as variations on Fedora, CentOS, and RHEL.
From this foundation, Red Hat built RHELAH. This operating system is based on RHEL 7 and features image-based atomic updating and rollback. Red Hat has committed to Docker for its container technology.
According to Red Hat, RHELAH has many advantages over its competitors. This includes being able to run "directly on hardware as well as virtualized infrastructure whether public or private." In addition, Red Hat brings its support and SELinux for improved security.
Canonical, Ubuntu's parent company, is taking a different approach from CoreOS and Red Hat. Parts of it are certainly familiar. Canonical claims "Ubuntu Core is the smallest, leanest Ubuntu ever, perfect for ultra-dense computing in cloud container farms, Docker app deployments or Platform as a Service (PaaS) environments. Core is designed for efficiency and has the smallest runtime footprint with the best security profile in the industry: it's an engine, chassis and wheels, no luxuries, just what you need for massively parallel systems."
While you can update Ubuntu Core and "Snappy" apps by images, Canonical's Snappy packaging system uses a metadata file along with build tools to create a new Snappy "app." According to Ubuntu founder Mark Shuttleworth, "The snappy system keeps each part of Ubuntu in a separate, read-only file, and does the same for each application. That way, developers can deliver everything they need to be confident their app will work exactly as they intend, and we can take steps to keep the various apps isolated from one another, and ensure that updates are always perfect."
In addition, Ubuntu uses the AppArmor kernel security module for security. Ideally, in snappy Ubuntu versions, applications are completely isolated from one another.
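To make the metadata-file idea concrete, here is a hypothetical example of the kind of small description file a snappy app might ship with. The field names follow the early `package.yaml` convention, but the exact format varies by release, so treat this as illustrative only:

```yaml
# Hypothetical snappy package metadata (illustrative only)
name: hello-web
version: 1.0.2
vendor: Example Corp <dev@example.com>
services:
  - name: hello-web
    description: "Tiny demo web service"
    start: bin/hello-web --port 8080
```

The build tools bundle this metadata with the app's files into a single read-only unit, which is what makes whole-image updates and isolation between apps possible.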
Remember when Mendel Rosenblum, VMware's co-founder, said back in 2007 that operating systems were obsolete? I do. Things have changed. Rosenblum was half right: virtualization did change the world (we wouldn't have clouds without it), but operating systems remain as important as ever. So, perhaps it's not surprising that, faced with the container tidal wave, VMware has both adopted container technology and released the first alpha of its own Linux operating system, Photon.
VMware, however, is not abandoning its virtual machine (VM) ways. Photon only runs, at this time, on VMware vSphere and VMware vCloud Air. In short, VMware believes that containers on VMs, rather than containers on a native operating system, is the way of the future. Well, considering its business model, of course VMware does.
The company is hedging its bets when it comes to containers. VMware is supporting Docker, CoreOS Rocket, and Pivotal's Garden container formats.
VMware is also releasing Lightwave, a container identity and access management program.
So, which one will win out? Where should you put your container dollars?
I don't know.
I really don't.
CoreOS clearly has the most experience with this model, even though it's by far the smallest and youngest company. Red Hat brings considerable resources to its offering, but Canonical is no slouch either. As for VMware, it's brand new to containers, but it certainly knows virtualization backwards and forwards.
These are all new programs in a new field. I would try them all out, look at my own IT needs, and then decide which of them is worth a pilot program. What's that? You want to deploy now? I don't think so! This is all too new to bet your company on.
This story, "Do you need a container-specific Linux distribution?" was originally published by ITworld.