Network World - While virtual servers have proven a boon in the data center, they don't address the challenge of incrementally adding server capacity and automatically distributing load across those servers. As a result, the responsiveness and availability of a heavily used Web application, such as Microsoft SharePoint, can deteriorate when the virtual machine it runs on runs out of capacity. Next-generation application delivery controllers (ADCs) not only address this challenge, they also interoperate with virtualization tools to provide greater control, even making it possible to deploy server resources automatically based on real-time demand.
Virtualization cannot escape the reality that a given physical server has a fixed performance capacity. Because virtual machines (VMs) share that hardware, a spike in any one virtual server's utilization can adversely affect every other virtual server on the same host. For example, if a virtual server running a database application sees a surge of queries, the resulting processor load can leave the other virtual servers on that hardware unable to deliver adequate performance.
Perhaps the most frequently misunderstood aspect of virtualization, with respect to quality-of-service management, is the hypervisor's lack of application awareness. Virtualization management tools can monitor and control the guest operating systems they host, but not the applications running on those guests. Virtualization environments are blind to failures or bottlenecks at the application layer: the infrastructure may consider a guest machine healthy by operating-system metrics while the applications on that server are unresponsive.
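The gap between OS-level and application-level health can be illustrated with a minimal sketch (function names and the port/URL parameters are illustrative, not from any particular ADC product): an OS-level probe only verifies that the machine accepts a TCP connection, while an application-aware probe verifies that the application actually answers a request.

```python
import socket
import urllib.request


def os_level_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Crude OS-level liveness check: can we open a TCP connection at all?
    This can succeed even when the application behind the port is hung."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def app_level_check(url: str, timeout: float = 2.0) -> bool:
    """Application-aware health check: does the app answer an HTTP request
    with a non-error status? This is closer to what an advanced ADC probes."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:  # covers urllib.error.URLError and socket errors
        return False
```

A guest can pass `os_level_check` (the kernel is up, the socket opens) while failing `app_level_check` (the application never responds), which is precisely the blind spot described above.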
Scaling an application without modifying it requires server load balancing, in which advanced ADCs intelligently distribute end-user requests across multiple servers; from the end user's perspective, there is only one server.
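The core idea can be sketched with the simplest distribution policy, round-robin (real ADCs offer richer algorithms such as least-connections and weighted scheduling; the class and server addresses below are hypothetical):

```python
import itertools


class RoundRobinBalancer:
    """Minimal round-robin load distribution: clients address a single
    virtual endpoint, and each request is handed to the next real server
    in the pool in rotation."""

    def __init__(self, servers):
        self._servers = list(servers)
        self._pool = itertools.cycle(self._servers)

    def pick(self):
        """Return the real server that should handle the next request."""
        return next(self._pool)


# Three back-end servers hidden behind one logical endpoint.
lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
```

Four successive calls to `lb.pick()` cycle through the pool and wrap around, so no single server absorbs all the traffic.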
Advanced ADCs with virtualization-aware management can spin up and shut down virtual machines automatically. If load increases, additional servers are brought online; when it subsides, those servers are turned off, freeing resources for other workloads. The virtualization-aware ADC communicates with the server virtualization platform, such as VMware's vSphere, to monitor VM resource utilization, power up VMs when application load requires additional resources, power down unneeded VM instances during periods of low utilization, and even power physical machines on and off to save energy.
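The scale-up/scale-down decision at the heart of that loop can be sketched as follows. This is a simplified model, not vSphere's API: the function name, the 80%/40% thresholds, and the capacity units are assumptions, and the hysteresis gap between the two thresholds is there to keep the ADC from flapping VMs on and off around a single cutoff.

```python
import math


def desired_vm_count(total_load: float, vm_capacity: float, current: int,
                     min_vms: int = 1,
                     scale_up_at: float = 0.8,
                     scale_down_at: float = 0.4) -> int:
    """Decide how many VM instances should be running.

    total_load  -- aggregate application demand (arbitrary capacity units)
    vm_capacity -- how much load one VM instance can serve
    current     -- number of VMs currently powered on

    Scale up when average utilization exceeds scale_up_at; scale down when
    it falls below scale_down_at. The gap between the thresholds provides
    hysteresis so the pool size doesn't oscillate on small load changes.
    """
    utilization = total_load / (current * vm_capacity)
    if utilization > scale_up_at:
        # Add enough VMs that utilization falls back to the upper threshold.
        return max(current, math.ceil(total_load / (vm_capacity * scale_up_at)))
    if utilization < scale_down_at:
        # Shrink the pool, but never below the configured floor.
        return max(min_vms, math.ceil(total_load / (vm_capacity * scale_up_at)))
    return current
```

An ADC would evaluate something like this on each monitoring interval and then issue power-on or power-off calls to the virtualization platform to converge on the returned count.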
By intelligently distributing traffic across multiple, diverse server resources, IT administrators can ensure optimum use of hardware. Managing the distribution of work across compute resources eliminates hot spots and removes the need to overprovision for load spikes. The fiscal impact shows up in capital expenses (fewer servers) and operational expenses (reduced power, cooling, management and administration).