Review: Container wars: Rocket vs. Odin vs. Docker

Page 2 of 3

We didn’t receive notice that a VM or container had misbehaved unless it was the VM or container itself that sent us the message. We were sad, but this problem isn’t unusual. One cannot set a minimum CPU threshold that would trigger an alert that the VM had crashed, only that it was pegging its CPU. IPMI messaging, or a similar API set that sent host conditions to the instrumentation, would be useful; it’s also possible to embed Nagios or other monitoring into VM/container payloads if desired. We found the same with disk as a resource: you get messages when you’re full, not when there’s been no activity for 24 hours (a possible indication that the host died).

While Virtuozzo is not VMware, XenServer, or Hyper-V, it might satisfy the needs of multi-tenant internal clouds, or of small/medium service providers as a cloud platform.

Containers can also be built and run with Docker, if Docker’s ecosystem is desired. Inasmuch as Parallels makes its own container system, we find Docker somewhat superfluous here, although we tested the host platform with Docker under CentOS. Using our primitive SciMark benchmark, execution time was about 18% longer than running the test in a native container, but only 15% longer than running an equivalent VM. Yes, you can run Docker; no, we don’t recommend it, because it’s nesting a machine inside of a machine inside of a machine.

Docker 1.6

Docker runs as a root process on a vast list of Linux distributions, MacOS versions, and pilot Microsoft platforms. We tested four Linux host versions and one MacOS host. This is both good and bad. Docker gets much of its popularity from its simplicity and from its power to orchestrate similar and diverse containers with egalitarian controls.

Most current Linux kernels, along with MacOS, have the ability to run Docker containers. One obtains Docker for the host platform, then runs an OS instance launched “inside” Docker’s control. Docker has many conveniences for those hosting OS instances, which, by another name, are simply isolated virtual machines whose resources have been orchestrated by Docker.

Part of the value of Docker is the enormous variety of instances that can be selected from Docker’s repository. The repository and container registry is brimming with famous app makers’ “official” images, some hosted at Docker’s site, others linking to the app maker’s site, still others connected to GitHub. A stunning variety of development platforms and canned app appliances (think WordPress variations, Hadoop cluster components, etc.) are available to use.

A typical Docker instance might be an Ubuntu 14.04 server image, already comparatively small in terms of host resources used. Perhaps the container runs a Linux/Apache/MySQL/PHP/Perl (LAMP) stack; generic instances of this sort are plentiful and varied. Many slimmed-down images are now available from Canonical and Red Hat, including images whose innards have been intentionally trimmed to prevent unused processes from robbing containers of CPU cycles.

Getting things going is as simple as using the docker run command. This command instantiates an instance of the container/VM, which uses a subdirectory for storage, not unlike what Linux chroot does in terms of limiting security exposure, and similar to OpenVZ/Virtuozzo.

Docker container images use the union file system (unionfs), which becomes the folder infrastructure for containers. Like OpenVZ/Virtuozzo, Docker creates layers devoted to each Docker-launched container process via unionfs. This permits similar images to share base files, requiring only one update and conserving space when image snapshots need to be made or when images need updates. Update OpenSSH, as an example, and you do it for 20 containers at once, as they rely on the updated image.
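Conceptually, the layering works like a stack of dictionaries: lookups fall through to a shared read-only base unless a container has written its own copy. A minimal Python sketch of that copy-on-write idea (an illustration of the concept only, not Docker’s actual unionfs code; the file names and versions are invented):

```python
from collections import ChainMap

# Shared read-only base layer (one copy, used by every container)
base_image = {"/usr/bin/sshd": "v6.6", "/etc/motd": "welcome"}

# Each container adds only a thin writable layer on top of the base
containers = [ChainMap({}, base_image) for _ in range(20)]

# A container writing a file touches only its own top layer (copy-on-write)
containers[0]["/etc/motd"] = "custom banner"

# Updating sshd once in the base layer updates all 20 containers at once
base_image["/usr/bin/sshd"] = "v6.7"

assert all(c["/usr/bin/sshd"] == "v6.7" for c in containers)
assert containers[1]["/etc/motd"] == "welcome"        # still the shared copy
assert containers[0]["/etc/motd"] == "custom banner"  # only its own layer changed
```

The same model explains the criticism that follows: every container inherits whatever is in the base layer, good or bad.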

The use of unionfs is convenient for Docker efficiency, but it’s also the crux of much criticism, as a single bad source image craters everything dependent upon it. Docker can also be controlled by the user initiating commands via RESTful puts/gets to container images. This user is as useful, or as dangerous, as the root user, so user controls related to access, passwords, secure repositories of SSH keys, and other standard security mechanisms are strongly recommended. It’s somewhat fast and loose, if really fun.

ISO images can be built in an automated way if an organization wishes to maintain its own image hierarchy and control its own image security vetting.

Access to Docker internals can be had through SSH communications or other APIs, including communications constructs such as Puppet. Storage is demoted to ordinary host files and can be controlled further via user security (chroot, chmod, or other imposed file limitations/metadata controls). In some cases, you can use ploop or other filesystems to achieve some of the benefits we described with OpenVZ/Virtuozzo.

Docker as container ship

Docker does not mindfully conserve common container space the way OpenVZ/Virtuozzo does, in terms of deduplication of stored common files. In theory, OpenVZ/Virtuozzo can pack containers more tightly as a result. That said, on the same hardware platform used to test OpenVZ/Virtuozzo, we could put a huge number of Docker container instances in place, perhaps more easily.

But Docker has no control-plane instrumentation that compares to Virtuozzo’s. Docker requires studious management of control scripts and keys, something Virtuozzo handles conveniently. A rush of third parties is starting to fill this gap with app offerings, but those weren’t tested.


Managing a large fleet of container hosts isn’t difficult, although it requires a slightly elevated set of skills. Docker Swarm is an API that allows a group of Docker containers to behave as an object, resulting in a cluster of containers with a single point of control. This permits rapid instance scale-out, and perhaps instance bloat.

This is where we had the most fun, making containers march in lockstep to our commands. This piece isn’t perfected yet, and it’s also another place to warn that users able to address Docker Swarm can, like the Star Trek: The Next Generation episode, put the Borg of containers to sleep, or issue any other command desired.

Creating a container fleet that can be managed as an object carries great responsibility. That said, there are applications such as Apache Mesos that can control extremely large data sets as clustered containers. From an organizational security standpoint, it must be done with great care, as exposed resources are hijackable resources.

Rocket/rkt 0.5.4

Rocket was introduced and evolves somewhat in conjunction with CoreOS, a shaved-down OS designed as a low-attack-surface, high-efficiency substrate for Docker. Billing itself on GitHub as an App Container runtime, it’s an application platform designed for a small footprint but high stability.

Rocket lacks some key components and requires more construction savvy than either Docker or Virtuozzo. We were pleased to see that it focuses on security with source image provenance and payload control.

Based on the Linux kernel, CoreOS has been evolving at a slightly slower pace than Docker, as it was an OS for clusters rather than a control plane. CoreOS is one of several sponsors of rkt, which really isn’t ready for production. Normally, it’s our policy not to include such products in a review comparison without very high demand, as this industry is rife with beta expectations not meeting production reality.

As all three container methodologies use the same somewhat mature components of Linux, we include rkt more for its ideology than its practice. Rkt isn’t simple, but we feel it’s less risky because of its methodology of stricter security at many levels.

Rocket enforces a discipline that starts with container image building, as images must be built in a specific manner before rkt can launch them. This varies from OpenVZ and Docker management, as both of those container controllers can use off-the-shelf ISO images, images evolved from working machines, the controller’s repositories/registries, or images from your friend in the next cubicle. Not necessarily so with Rocket.

In testing we found rkt works very similarly to Docker and OpenVZ in the basics of container runtime control: it uses a daemon to control populations of containers, and it manages container instances through their life cycle. How it does this is vastly more regimented, and potentially much safer, than Docker.

Historically, some of the security/reliability deficiencies perceived in Docker were what inspired the development of Rocket/rkt. A manifesto was generated, designed to embody different core values around containers. App Container (appc) is the resulting specification announced to address perceptions that Docker security is weak, and that inherent systems reliability must be placed on authoritative chains, no matter how big or small the ship of containers becomes. Even rkt allows overrides, however.

To these ends, the resulting appc spec seeks to ensure that downloaded images have signatures of provenance and integrity of assembly method. We quote the GitHub appc spec:

The core goals of the specification include:

  • Designing for fast downloads and starts of App Containers
  • Ensuring images are cryptographically verifiable and highly cacheable
  • Designing for composability and independent implementations
  • Using common technologies for cryptography, archiving, compression and transport
  • Using the DNS namespace to name and discover images

Rocket considers itself an implementation of this spec, and others have signed on to the tenets of this spec, including Red Hat, Google, VMware, and Apcera. And while it appears that CoreOS+rkt is synonymous with the appc, rkt is an implementation, as are implementations from VMware (Lightwave/Photon) and Apcera (Continuum), neither of which were available for comparison at review time.

Rkt uses the appc methodology to source its images as tar (Tape ARchive) files rather than ISOs; the archive is hashed and signed with GPG keys to make the container base image authoritative. This makes the source files tough to disturb (imagine the file substitutions, inadvertent patch levels, malware, and broken packages that can be inserted into ISOs). The result is an ImageID that’s unique to the image.
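The idea behind the ImageID can be sketched in a few lines (Python stands in for rkt’s internals here; the tiny in-memory tar file and manifest bytes are invented for illustration, and the sha512 prefix follows our reading of the appc spec):

```python
import hashlib
import io
import tarfile

def image_id(aci_bytes: bytes) -> str:
    # Hash the image archive; any byte-level tampering (a swapped file,
    # an injected binary) changes the digest, so the ID doubles as an
    # integrity check.
    return "sha512-" + hashlib.sha512(aci_bytes).hexdigest()

# Build a tiny in-memory tar standing in for an image archive
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b'{"acKind": "ImageManifest"}'
    info = tarfile.TarInfo(name="manifest")
    info.size = len(data)
    info.mtime = 0  # fixed timestamp keeps the archive bytes deterministic
    tar.addfile(info, io.BytesIO(data))

original = image_id(buf.getvalue())
tampered = image_id(buf.getvalue() + b"malware")

assert original != tampered          # any modification yields a new ImageID
assert original.startswith("sha512-")
```

A GPG signature over the same bytes then ties the digest to a publisher, which is the provenance chain the appc manifesto asks for.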

We created a folder to use as rkt’s rootfs, or top-level filesystem. We uncompressed images, which carry a pre-inserted, JSON-format image manifest describing what should be inside the un-tar’d (uncompressed) image. Additional encryption steps can be taken so that a key is required (AES-256 was our favorite) at image decompression/decryption time. We await the control panel that can do this without the CLI, but it’s not tough.

Once the encryption’s done and the JSON manifest for the image is satisfied, the image can be executed. It becomes a pod, essentially a container, but the word “pod” evokes other mental images, too.

It’s a set of executables with its own ID, which becomes an object-handle UUID (covered by IETF RFC 4122) used to manipulate the characteristics of the pod’s execution. The UUID gets its own namespace, created and subsequently managed by rkt; this provides instance control. At initial execution, rkt creates a rootfs for this UUID with a whitelisted set of folders associated with the JSON manifest. It creates the filesystem anew, once, each time the container object is launched, ensuring no leftovers.
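Generating and checking an RFC 4122 handle is a one-liner in most languages; a small Python sketch of the pod-handle idea (illustrative only, not rkt’s code):

```python
import uuid

# Each pod launch gets a fresh RFC 4122 UUID as its object handle
pod_uuid = uuid.uuid4()

assert pod_uuid.version == 4             # random-based UUID
assert pod_uuid.variant == uuid.RFC_4122 # conforms to the RFC layout
assert pod_uuid != uuid.uuid4()          # every launch is a distinct handle

# Addressing the pod by its string form, as a CLI would
handle = str(pod_uuid)
assert len(handle) == 36                 # 8-4-4-4-12 hex form
```

Because the handle is minted at launch and the rootfs is rebuilt under it each time, a stale filesystem can never be addressed by a new pod.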

The manifest is critical, we found. Nothing works if it’s malformed. Once built, however, its reuse vets the authenticity of the source. This most recent edition will also obtain Docker images. Overall, we feel its regimentation is worth it, and instrumentation will be key to its successful evolution.

Testing Rkt

We built a testbed from a CentOS 6 host on the same Lenovo platform used for OpenVZ and Virtuozzo, launched in a minimal host environment instead of using CoreOS, for expediency. We crafted two images: one a WordPress image from TurnKeyLinux that we’d updated to WordPress 4.2.2, and the other a generic Ubuntu 14.04 server image that had been updated once and built for minimal services. Later, we discovered repository images for both and used those.

Resources, such as the memory allocation demanded by the pod, bandwidth, and so forth, can be set with a CLI-based rkt command. Here, instrumentation is missing and scripting is necessary, for now. We relearned JSON syntax.
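A hedged sketch of what that JSON looks like, using appc-style resource isolators (the field names and values follow our reading of the appc spec and are not authoritative; verify against the current spec before relying on them):

```python
import json

# appc-style resource isolators for a pod's app (hedged sketch; the
# request/limit values here are arbitrary examples)
isolators = [
    {"name": "resource/cpu",    "value": {"request": "250m", "limit": "500m"}},
    {"name": "resource/memory", "value": {"request": "128M", "limit": "1G"}},
]

manifest_fragment = json.dumps({"isolators": isolators}, indent=2)

# Round-trip to confirm the JSON is well formed before handing it to rkt
parsed = json.loads(manifest_fragment)
assert parsed["isolators"][1]["value"]["limit"] == "1G"
```

Scripting these fragments, rather than clicking through a console, is the extra skill rkt currently demands.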

The release we tested also permits downloading images, via simple HTTP username/password, from the Docker (or a Docker-like) registry. This is also a harbinger: while repositories can be used, security overrides can break the chain of authorities, potentially breaking audit/compliance needs unless logs are strictly monitored for this behavior and corrections to mistakes are enforced.

The scripts we made then became the basis for rapid replication into our Lenovo host server. We slipped up in our instance generation script, and when we started the machines under embedded puppet commands, we cratered the instance, leaving a smoking hole in the Lenovo as it went into instant thrash, and then locked tighter than an inflated tire bead.

We’d forgotten to set CPU shares, and starting all instances at once had created an enormous startup demand on the host cores. No server would have likely survived it. Fortunately, a reboot allowed us to correct the error of our ways (our script errors).
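The lesson generalizes: cap launch concurrency in fleet scripts so a mistake can’t stampede every core at once. A minimal Python sketch, with `start_container` as a hypothetical stand-in for the real rkt/Puppet launch step:

```python
from concurrent.futures import ThreadPoolExecutor

def start_container(name: str) -> str:
    # Placeholder for the real launch command; the name is illustrative
    return f"{name}: started"

names = [f"pod-{i}" for i in range(40)]

# At most 4 launches in flight at a time, instead of all 40 at once;
# the pool drains the rest as earlier starts complete
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(start_container, names))

assert len(results) == 40
assert results[0] == "pod-0: started"
```

Setting CPU shares on each pod, as we neglected to do, is the complementary fix on the host side.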

In practice, we obtained two more images from Docker, and after much finagling, were able to get them to execute with the rkt runtime. We scripted together a number of container launches, and were able to do a pilot scale-out of executables, but not without a lot of work. Third-party tools will help rkt, and failing that, it’ll become an unmaintained altruism.


All three apps run as root, as they need to pick up speed from the kernel. AppArmor and SELinux get only mild treatment in the docs for all three, but all three can use these sandboxes to keep containers from corrupting or DoSing system resources.
