Virtual-machine evolution

From IBM’s MVS to DESQview to NetWare to Sun’s containers

The concept of virtual machines is not a new one. Here's some history.

The goal here has always been obvious: use expensive hardware assets more efficiently by running multiple concurrent operating-system and application instances.

Microcomputers (a term once used to describe PCs and other microprocessor-based systems), on the other hand, have historically had a strict, dogmatic one-computer-to-one-operating-system relationship. Multitasking and multithreading on microcomputers sat on the IT wish list for decades.

Early PC applications such as DESQview (popular in the late 1980s and early 1990s) spawned interest in multitasking: the ability to run several client applications concurrently and switch rapidly among them. Microsoft and Apple eventually adopted, altered and integrated multitasking into Windows and Mac OS, additionally taking up the gauntlet of multithreading, the ability of a single application to run multiple concurrent threads of execution on the client side.

Novell’s early NetWare servers upped the ante by allowing applications to run alongside kernel resources in a "kernelish" mode (ring 0, rather than ring 3, of the Intel x86 protection model). This allowed Novell’s NetWare-hosted applications to run very quickly.

Multitasking FUD

Much multitasking/multithreading FUD for microprocessor-based platforms ensued.

Novell and Microsoft traded technology barbs over how well an operating system could manage independent jobs and allocate hardware resources, given a CPU’s inherent capability to keep track of memory and I/O resources. In effect, Microsoft and Novell were arguing about CPU capabilities and inherent system stability, and 16- and 32-bit CPUs are indeed limited in their memory- and device-addressing capabilities.

The outcome of these multitasking efforts was that operating systems became bloated with components that had tight file dependencies, along with numerous system-configuration options that required careful management.

Major (formerly called "mission-critical") applications had to be isolated from one another, because software and hardware-driver synchronization issues had a direct effect on a server’s overall stability. Change one item incorrectly, and failures could cascade, domino-style, through the system.

The ability to manage multiple, concurrent operating-system instances was limited by a 32-bit CPU’s ability to address enough memory (without latency-expensive and proprietary memory-paging techniques), as well as by the reality that state-of-the-art servers of the day had only single-core CPUs. Taking this on meant buying proprietary, multi-CPU concoctions that remained expensive and unpopular.

Some solutions to creating availability and application-resource separation came through "PC mainframes" made in the late 1980s and early 1990s by Unisys and Televideo, among others, that amounted to multiple discrete PCs connected to a single backplane.

Around that same time, multi-CPU platforms emerged that let an operating system address several CPUs, but the CPUs usually shared common peripheral devices, and performance didn’t double by simply adding a CPU. CPUs and hardware got faster and faster, but software often wasn’t written to take advantage of a second (or additional) CPU even when one was present.

OS containers help with stability

The concept of containers, championed by Sun, was then proposed: environments in which applications are installed into what amount to virtualized operating-system resource areas that seemingly give each application "ownership" of a platform. The chief benefit of containers was increased availability.

If an application crashed or misbehaved, it would be contained and not affect other applications, services and/or processes running concurrently on a system. Container resources could be controlled in terms of CPU use, network and disk I/O, and shares of memory and other resources.
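Sun’s implementation took shape as Solaris Containers (zones). As an illustrative sketch only, a zonecfg session could cap a zone’s memory and dedicate a CPU to it; the zone name and path here are hypothetical:

```
# zonecfg input for a hypothetical zone named "app1" (Solaris 10 8/07 or later)
create
set zonepath=/zones/app1
# cap the physical memory the zone may consume
add capped-memory
set physical=2g
end
# dedicate a single CPU to the zone
add dedicated-cpu
set ncpus=1
end
commit
```

Fed to `zonecfg -z app1`, limits like these keep a runaway application in one zone from starving its neighbors.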

If the question is reliability, then the answer for stability is to find ways to keep applications running while containing their ability to harm other applications through undesired resource domination. Among the things that can kill an application within a server are problems caused by other applications: one application crashing the operating system outright, or monopolizing resources so that other applications can’t use them and are effectively denied service.

Application partitioning, sandboxing and containerizing are all ways of granting applications autonomy while keeping them under control, and that control is the goal. It might mean demoting an application’s security privileges, or placing ceilings on its resource use, such as networking, disk I/O, even memory allocations and interapplication communication.

The advent of multicore CPUs with 64-bit memory addressing provides an exponential increase in internal hardware computational resources. The sheer computational power of 64-bit, multicore platforms provides room for many concurrent processes, provided those processes don’t destabilize one another.
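The arithmetic behind that jump is worth spelling out: moving from 32- to 64-bit pointers grows the addressable space exponentially in the pointer width, not linearly.

```python
# Addressable memory as a function of pointer width: 2**bits bytes.
addressable_32 = 2**32   # reachable with 32-bit addresses
addressable_64 = 2**64   # reachable with 64-bit addresses

print(addressable_32 // 2**30, "GiB")    # 4 GiB
print(addressable_64 // 2**60, "EiB")    # 16 EiB
# A 64-bit address space holds over four billion complete 32-bit spaces:
print(addressable_64 // addressable_32)  # 4294967296
```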

Virtual machines on everyday computing hardware are now far easier to use, made easier still by Intel’s VT (known as IVT) and AMD’s AMD-V CPU extensions, which permit VM products to be even more efficient in managing systems infrastructure. Hence the wild proliferation, which in turn necessitates new management measures.
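On Linux, whether a CPU offers these extensions shows up as flags in /proc/cpuinfo ("vmx" for Intel VT-x, "svm" for AMD-V). A minimal sketch of the check, run here against a sample flags line rather than live hardware:

```python
# "vmx" in a CPU's flag list indicates Intel VT-x; "svm" indicates AMD-V.
def hw_virt(flags_line: str) -> str:
    flags = flags_line.split(":", 1)[1].split()
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return "none"

# A sample line in the format /proc/cpuinfo uses (hypothetical CPU):
sample = "flags : fpu vme de pse tsc msr pae mce cx8 apic sep vmx ssse3"
print(hw_virt(sample))  # Intel VT-x
```

Against a real machine, each "flags :" line of /proc/cpuinfo could be fed through the same function.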



Copyright © 2007 IDG Communications, Inc.