Industry analysis by Beth Schultz, plus the latest news headlines.
How well IT understands and optimizes the use of virtualization management software -- embracing automation, for example -- distinguishes the best-in-class organizations from the also-rans.
"This doesn't get a lot of focus but it should as data centers get more virtualized," says Dick Csaplar, a senior research analyst who authored the Aberdeen Group report, "Managing Virtualized Applications: Optimizing Dynamic Infrastructures," released this week. For the report, Aberdeen surveyed 85 organizations about their virtualization deployments, including questions regarding strategies, challenges and business and operational benefits.
As part of the project, Aberdeen examined the business processes and IT management philosophy of leaders in virtualization, who, on average, reported a 38% reduction in application downtime since deploying server virtualization. The firm discovered that these leaders not only use virtualization management software but have also developed an environment in which the software can succeed in meeting the desired performance goals.
Csaplar delineates four steps to virtualization success.
"First, you have to surround the application with the right business processes, measurement tools and so forth to really know what you're doing. In other words, first define what success means -- using a 'ready, aim, fire' strategy" -- and then you can measure performance against it," he says.
Next, he says, you've got to collect the data. "Make sure you understand what's going on inside your infrastructure and that you're capturing the right level of data, so that if something happens you catch it -- it doesn't just go on without you being aware of it," he says.
The third step is to prepare for dynamic optimization. "You've got to have a plan for the eventuality of this server or that network segment going down or losing that storage device. Walk through the what-if scenarios of possible incidents in the data center," he says.
"Only then," he adds as the fourth step, "are you really able to empower software to do what you need because you know what it's supposed to do, when it's supposed to do it and it has the data to trigger corrective actions."
In other words, best-in-class IT organizations can allow management software to address performance issues or optimize the data center automatically because they've taken the first three steps, Csaplar says.
"We're at the point now, as the data center is becoming more virtualized than not, that you have the opportunity to do things that you've never done before, like consolidate applications at night -- you roll your applications that aren't being used much during those hours onto just a few servers and power down the others for energy savings," Csaplar says.
"That's pretty cool. But if you want to do that with a manual process, it'd be rife with error," he adds. "You need to be able to trust your management software to take actions automatically."
Read more about infrastructure management in Network World's Infrastructure Management section.