Docker exploded onto the scene a couple of years ago, and it's been causing excitement in IT circles ever since.
The application container technology provided by Docker promises to change the way that IT operations are carried out just as virtualization technology did a few years previously.
Here are answers to the 10 most common questions about the technology.
What are containers and why do you need them?
Containers are a solution to the problem of how to get software to run reliably when moved from one computing environment to another. This could be from a developer's laptop to a test environment, from a staging environment into production and perhaps from a physical machine in a data center to a virtual machine in a private or public cloud.
Problems arise when the supporting software environment is not identical, says Solomon Hykes, the creator of Docker. "You're going to test using Python 2.7, and then it's going to run on Python 3 in production and something weird will happen. Or you'll rely on the behavior of a certain version of an SSL library and another one will be installed. You'll run your tests on Debian and production is on Red Hat, and all sorts of weird things happen."
And it's not just different software that can make a difference, he added. "The network topology might be different, or the security policies and storage might be different, but the software has to run on it."
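Hykes's Python example is easy to reproduce. The snippet below is a minimal illustration (not from the original article): the same expression yields a different result under Python 2.7 and Python 3, because `/` changed from floor division to true division between the two versions.

```python
import sys

# Under Python 2.7, 1 / 2 is integer (floor) division and yields 0;
# under Python 3, / is true division and yields 0.5.
# Same code, different environment, different behavior -- exactly the
# kind of surprise that containers are meant to eliminate.
result = 1 / 2
print("Python %d: 1 / 2 == %s" % (sys.version_info.major, result))
```

Pinning the interpreter version inside a container image removes this class of surprise entirely.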
How do containers try to solve this problem?
Put simply, a container consists of an entire runtime environment: an application, plus all its dependencies, libraries and other binaries, and configuration files needed to run it, bundled into one package. By containerizing the application platform and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away.
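As a concrete sketch of what "bundled into one package" means, here is a minimal, hypothetical Dockerfile (the file names and application are illustrative assumptions, not from the article) that pins the runtime and dependencies so the image behaves the same wherever it runs:

```dockerfile
# Illustrative only: pin the exact interpreter version so the app
# tested under Python 2.7 never silently runs on Python 3 elsewhere.
FROM python:2.7-slim

WORKDIR /app

# Library versions are pinned in requirements.txt, so the same
# dependencies are installed in every environment the image runs in.
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

Building this once produces an image that carries the interpreter, libraries and configuration with it, which is the abstraction the paragraph above describes.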
This sounds like virtualization. What's the difference?
With virtualization technology, the package that can be passed around is a virtual machine and it includes an entire operating system as well as the application. A physical server running three virtual machines would have a hypervisor and three separate operating systems running on top of it.
By contrast, a server running three containerized applications with Docker runs a single operating system, and each container shares the operating system kernel with the other containers. Shared parts of the operating system are read only, while each container has its own mount (i.e., a way to access the container) for writing. That means containers are much more lightweight and use far fewer resources than virtual machines.
What difference does this make in practice?
A container may be only tens of megabytes in size, whereas a virtual machine with its own entire operating system may be several gigabytes in size.
Because of this, a single server can host far more containers than virtual machines. And while virtual machines may take several minutes to boot their operating systems and begin running the applications they host, containerized applications can be started almost instantly.
If containers share an operating system, how secure can they be?
The consensus is that containers are not as secure as virtual machines. The reason: if there's a vulnerability in the shared kernel, it could provide a way into the containers that share it (although SELinux can help). The same is true of a hypervisor, but since a hypervisor provides far less functionality than a typical Linux kernel (which implements file systems, networking, application process controls and so on), it presents a much smaller attack surface.
To summarize, containers cannot generally provide the same level of isolation as hardware virtualization.
What's the difference between Docker and containers?
Docker has become synonymous with container technology because it has been the most successful at popularizing it. But container technology is not new: it has been built into Linux in the form of LXC for almost 10 years, and similar operating-system-level virtualization has also been offered by FreeBSD jails, AIX Workload Partitions and Solaris Containers.
And today Docker is not the only game in town for Linux. One notable alternative is rkt, a command line tool for running app containers produced by CoreOS. Rkt is able to handle Docker containers as well as ones that comply with its App Container Image specification.
One reason for launching rkt is that Docker has become too big and has lost its simplicity, according to Alex Polvi, CEO of CoreOS. "Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server," he said.
Kelsey Hightower, CoreOS's chief advocate, adds that App Container images are intended to be more secure than Docker images because they are signed by their creators. "I think users want signing, the way Apple signs apps in the AppStore," he says. "When you use rkt and you pull an App Container image you can decide if you trust the developer before running it. Rkt can also run Docker images, but they won't always be signed."
What operating systems should Docker and rkt be run on?
Both LXC and libcontainer (Docker's own container library, which replaced LXC) are Linux based, so any Linux distribution with a fairly modern kernel (version 3.8 or newer) can run them on x64 hardware.
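A quick preflight sketch of that requirement, assuming a Linux host whose `platform.release()` string starts with a numeric `major.minor` version (e.g. "5.15.0-91-generic"), could look like this:

```python
import platform

# Minimum kernel version cited above for running modern containers.
MIN_KERNEL = (3, 8)

def kernel_version():
    # platform.release() returns a string such as "5.15.0-91-generic";
    # take the numeric major.minor prefix and compare it as a tuple.
    release = platform.release()
    parts = release.split("-")[0].split(".")
    return tuple(int(p) for p in parts[:2])

if __name__ == "__main__":
    version = kernel_version()
    ok = version >= MIN_KERNEL
    print("kernel %s: %s" % (".".join(map(str, version)),
                             "OK for containers" if ok else "too old"))
```

Tuple comparison handles the version check correctly (e.g. (3, 10) > (3, 8)), where naive string comparison would not.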
Most general-purpose Linux distributions are unnecessarily feature-heavy if their only intended use is to run containers, so it is no surprise that a number of distributions designed specifically for that purpose have cropped up. Examples include CoreOS, Red Hat's Project Atomic, Canonical's Snappy Ubuntu, and VMware's Project Photon.
Are there any commercial Linux container solutions?
Yes. They include the following:
- Docker Subscription for Enterprise - A bundled solution that includes Docker Hub Enterprise, Docker Engine, and a commercial support subscription.
- CoreOS Tectonic - An integrated stack of CoreOS software that includes a management console for workflows and dashboards, an integrated registry to build and share Linux containers, and additional tools to automate deployment and customize rolling updates, along with Google's Kubernetes container management platform.
What happens if I am a Windows shop? Can I still use containers?
Microsoft has announced that it will bring Docker container technology to Windows Server, as well as introduce Windows Server Containers which will run on Windows Server.
A "thin" version of Windows Server called Nano Server, which is specifically designed to run containers, will also be introduced. Similar in concept to Windows Server 2008's Server Core, it will be about 5 percent of the size of a typical Windows Server installation.
Will containers eventually replace full blown server virtualization?
That's unlikely in the short term, if for no other reason than that virtual machines offer better security than containers.
The management tools that are available to orchestrate large numbers of containers are also nowhere near as comprehensive as software like VMware vCenter or Microsoft's System Center which can be used to manage virtualized infrastructure.
It's also likely that virtualization and containers will come to be seen as complementary rather than competing technologies. Containers can be run inside lightweight virtual machines to increase isolation and hence security, and hardware virtualization makes it far easier to manage the hardware infrastructure (networks, servers and storage) needed to support containers.
"Most people have no desire to manage hardware, so they put it on to VMware and manage it in software," says Hightower. "Containers change nothing. You can use containers, and if you don't want to manage the hardware, then you use virtualization as well."
This story, "What are containers and why do you need them?" was originally published by CIO.