Network World - There is a disease in the data center: skyrocketing energy costs, inefficient infrastructure management tools and the unknown effects of looming regulatory action in the United States.
Luckily there is an antidote: a new group of solutions for data center infrastructure management. DCIM tools graphically display a complete inventory of the data center's physical and logical assets, showing rack and data center floor location and rack heat load. Using the software, a data center manager can model any move, add or change by creating sophisticated "what if" scenarios before implementing changes that can dramatically impact data center performance.
Managers can also look to the past and the future, using historical data to report on and track trends and to forecast future requirements for power, cooling and space. That makes it possible to drive down energy costs and run the data center more efficiently.
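The trend-forecasting idea above can be sketched as a simple least-squares projection over historical readings. This is an illustrative assumption about how such a forecast might work, not any DCIM vendor's actual method; the monthly kW figures are made up:

```python
def linear_forecast(history: list[float], steps_ahead: int = 1) -> float:
    """Fit a least-squares line y = a + b*x to one reading per period
    and extrapolate `steps_ahead` periods past the last reading."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + steps_ahead)

# Hypothetical monthly power draw in kW, trending upward.
monthly_kw = [410, 425, 440, 455, 470, 485]
print(round(linear_forecast(monthly_kw, steps_ahead=3)))  # → 530
```

A real DCIM product would use richer models (seasonality, rack-level granularity), but even a linear trend turns raw meter logs into a capacity-planning number.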
Gartner reports that DCIM has been shown to reduce operating expenses by as much as 20%. Other research shows that DCIM solutions can cut the time to deploy new servers by up to 50%, extend the life of a data center by up to five years, and help attain a power usage effectiveness (PUE) of 2.0 or less. In today's resource- and dollar-constrained world, this is a critical opportunity that organizations need to recognize and act on.
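For readers unfamiliar with the metric, power usage effectiveness is total facility power divided by the power that actually reaches IT equipment, so 1.0 is the theoretical ideal and 2.0 means half the power is spent on overhead like cooling. A minimal sketch; the kilowatt figures are hypothetical:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power.
    A PUE of 1.0 is ideal; 2.0 means half the power reaches IT gear."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical site: 1,200 kW at the utility meter, 600 kW at the servers.
print(pue(1200, 600))  # → 2.0
```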
Data centers have always used massive amounts of energy, but the rising cost of energy, combined with the expanding data center infrastructure, is forcing managers to think about how to run data centers more efficiently.
Forty percent of data center operational costs are attributable to power alone. By 2014, it will cost more to power a data center than to buy and manage the hardware inside it. In North America, operational costs have risen 100% since 2005. Rising costs will only get worse in the near future, and data center managers will have to find ways to drive down expenses.
That's hard when you don't have all the information you need to make decisions. Data centers are so complex that it is easy to lose track of assets, to say nothing of knowing what they are doing. How do you track how much power, cooling and space is needed, how long resources will last, or whether you have enough capacity for growth? These uncertainties and inefficiencies are keeping data center managers up at night. Cassatt's 2009 Data Center Survey reports:
* More than 75% of data center managers only have a general idea of the current dynamic usage profile of their servers.
* About 7% say they don't have a very good handle on what their servers are doing.
* At least 20% know what their servers were originally provisioned to do, but aren't certain that those machines are actually still involved in those tasks.
* And 20% of survey respondents think that 10% to 30% of their servers are "orphans" (powered on, but doing nothing).
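One way tooling could surface "orphan" servers like those in the survey is to flag hosts whose utilization stays near idle across a sampling window. A hedged sketch, not a description of any specific DCIM product; the host names, samples, and 2% threshold are all illustrative assumptions:

```python
from statistics import mean

def find_orphans(utilization: dict[str, list[float]],
                 cpu_threshold: float = 0.02) -> list[str]:
    """Flag hosts whose average CPU utilization (0.0-1.0) over the
    sampling window stays below a near-idle threshold."""
    return [host for host, samples in utilization.items()
            if samples and mean(samples) < cpu_threshold]

# Hypothetical hourly CPU samples per host.
samples = {
    "web-01":   [0.35, 0.40, 0.42],
    "batch-07": [0.01, 0.00, 0.02],
    "db-02":    [0.55, 0.60, 0.58],
}
print(find_orphans(samples))  # → ['batch-07']
```

In practice a threshold on CPU alone produces false positives (a standby node looks idle), so a real tool would combine it with network, disk, and provisioning-record signals before powering anything off.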
One way to "solve" these problems is over-provisioning: buying 50% more equipment than you need. But when pressure mounts to bring costs down, that approach leaves you without the visibility needed to address the problem.