Putting the cloud on a diet, for security's sake

The cloud is suffering from bloat that makes it more prone to security issues. It's time to trim some of the fat.

The basic denominator of a systems workload instance, the VM, is shrinking, and quickly. In some ways this bodes well for security, but it may ultimately fork how we think about operating systems.

The operating system instance has been a basic metric since the beginning of modern computing, and it grew to consume the bounty of Moore’s law: shrinking CPUs and falling costs. Not so long ago, another huge technology barrier was lifted by the address space of the 64-bit microprocessor.

Today, Intel has no lock on 64-bit technology, and others are rapidly catching up, whether it’s Apple, Samsung, Nvidia, or other ARM licensees.

Historically, until virtual machine technology matured, the model was one CPU to one OS instance (and one license). VM refinements allowed CPU sharing and ceilings on CPU allocation, along with other enforced gates. There are now a half-dozen virtual machine ecosystems that are plenty powerful.
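
Those gates are ordinary configuration now. As a hedged sketch (a fragment with illustrative values, not a recommendation), this is roughly how a CPU ceiling looks in libvirt’s domain XML:

    <!-- fragment of a KVM domain definition; values are illustrative -->
    <domain type='kvm'>
      <vcpu>2</vcpu>
      <cputune>
        <shares>1024</shares>    <!-- relative weight against other guests -->
        <period>100000</period>  <!-- scheduling period, in microseconds -->
        <quota>50000</quota>     <!-- hard ceiling: half of each period -->
      </cputune>
    </domain>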

But operating systems have been eating carbs for a very long time, little sugary donuts of this package and that. It doesn’t matter if you’re Microsoft, Canonical, Linux Mint, or Apple. When you have a terabyte drive as a commonly sold substrate, you can be luxurious with the cookies, cakes, bagels, and potato chips.

It’s much like having size-60 pants that you can grow into with no penalty.

But there are risks. Call it systems diabetes. First, the bundled extras may be what attracts people to your particular OS distribution; adding the kitchen sink is a well-respected Linux pastime. Every terminal session of apt-get, yum, or whatever piles on additional bloat.

Each app also increases the potential attack surface for nefarious exploitation, or for unforeseen-bug-took-down-the-system syndrome. Even at rest, applications can become the nexus for infection just because they’re there and have known behaviors. Perhaps you can’t infect an operating system’s kernel, but another app might one day gain sufficient privilege to introduce a problem.
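
That surface can be measured, at least crudely. One rough proxy, sketched below in Python (a sketch, not a complete audit), is counting setuid binaries, each of which widens the privilege-escalation surface just by sitting on disk:

    # rough sketch: count setuid binaries, one crude proxy for attack surface
    import os
    import stat

    found = []
    for root, _dirs, files in os.walk("/usr"):
        for name in files:
            path = os.path.join(root, name)
            try:
                st = os.lstat(path)  # lstat: don't follow symlinks
            except OSError:
                continue
            if stat.S_ISREG(st.st_mode) and st.st_mode & stat.S_ISUID:
                found.append(path)

    print("\n".join(found))
    print(f"{len(found)} setuid binaries under /usr")

Every package removed is one fewer entry in that list.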

Enter the skinny OS

The sandbox provided by hypervisors has been mightily exploited to build atomic units of workloads, each workload “walled” to prevent systems resource hijacking. It’s an ugly model, but very efficient.

Why ugly? In a more usable world, workloads could be tailored to be very slim and interchangeable at the OS layer, rather than the hypervisor layer. But we distrust the OS layer, having been burned too often by crappy OS reliability and empty promises of kernel immunity. I won’t point fingers.

This means that the least common denominator workload instance, code and data, is the VM, and shrinking it to reduce the attack surface is desirable. The problem with this concept: it cuts out all of those desirable apps, although what’s desirable seems to be in the eye of the beholder.

From CoreOS to truly anorexic offerings like Iron.io’s Docker images, images are shrinking at an alarming rate. Microsoft is readying container images to cut its bloat, too, although in recent times Microsoft has paid real attention to constraining its attack surfaces.
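
The pattern behind those anorexic images is easy to sketch. Assuming a hypothetical static Go service in main.go, a multi-stage Dockerfile can produce a final image holding the binary and nothing else, no shell, no package manager, no kitchen sink:

    # build stage: full toolchain, discarded after the build
    FROM golang AS build
    WORKDIR /src
    COPY main.go .
    RUN CGO_ENABLED=0 go build -o /app main.go

    # final stage: the binary alone, with nothing else to attack
    FROM scratch
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]

The resulting image is measured in megabytes, and the list of things an attacker can lean on inside it is nearly empty.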

You might observe that user images and server instances are two very distinct things, and that user images are going to continue to flourish. Some will, in my estimation, but the kernels beneath them may take on a new shape, one that amusingly seems Apple-ish. Let me explain.

Alongside the RISC vs. CISC processor split (think ARM vs. Intel), there is the thick kernel vs. the microkernel (almost everyone else in the world vs. Apple’s Darwin). As the common denominator image falls in size like a rock, the need for kernels to support a long list of possible user-space/client-side behaviors will diminish as well.

Does this mean that a fork between common kernels is in the offing? It may well be the case. Custom kernels are already the crux of IoT devices, and even of the Android OS. It’s not a stretch to believe that OS kernels will be cut to ribbons to reduce space, increase efficiency and performance, and become highly tailored to specific workload combinations: desktops driving big, fat 4K monitors on one side, fleets of web or database servers on the other.
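
The trimming tools already exist. As a rough sketch (what gets re-enabled depends entirely on the workload), the mainline Linux build system can start from a near-minimal configuration and grow only what a given fleet needs:

    # start from a near-minimal kernel config (Linux 3.17 and later)
    make tinyconfig
    # re-enable only what this workload actually needs
    make menuconfig
    # build it
    make -j"$(nproc)"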

I like the diet. It’s been a long time coming from the days of bloatware and horrific trialware and uselessware.
