The green IT stars of 2010

Page 2 of 5

Beyond enjoying the benefits of using an existing structure to house its supercomputer, the Université Laval and CLUMEQ estimate the silo design results in annual savings of more than 1.5 million kWh, compared with a traditional data center. Transforming the silo into a data center likely costs more than going the conventional square-build, raised-floor route, Parizeau said, "but this does not take into account the higher efficiency of the silo design, nor the fact that we recycled a building that was almost impossible to reuse for anything else. It may have cost a little more, but we got more for the money -- and there were no budget overruns."

Dell spurs efficiency by pulling the plug on unnecessary apps
Retiring 7,000 useless or redundant apps contributes to huge efficiency gains in Dell's data centers

Dell has taken its efficiency initiative a step beyond those of most other data center operators: Beyond consolidating hundreds of physical servers, it's pulled the plug on hundreds of others.

One of the key strategies in Dell's efforts to slash energy waste in its data center was to identify precisely which of the 10,000 support applications were necessary. "Our first real 'aha' moment was to challenge the assumption of the phrase 'keeping the lights on' itself, which by definition implies an untouchable set of applications that you must keep running at all costs," said Robin Johnson, CIO at Dell. "We decided instead to look at that part of the business as an opportunity to turn the lights off. Rather than viewing it as the 'must run' portion of IT, we instead became maniacally focused on what could be eliminated from the fixed-cost side of IT."

The first step was to change the way IT billed departments for computer resources. Previously, departments paid a proportionate share of the total IT budget based on their percentage of overall company revenue. Under that model, there was no incentive for departments to give much thought to whether they were running more applications than they needed. Under the new model, departments were charged for their actual usage "but with a twist," said Johnson. "Rather than charging for actual usage using some complex formula of compute capacity consumed, we simply took the entire cost of the data centers and application infrastructure and divided by the total number of applications."
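The flat-rate chargeback Johnson describes reduces to simple arithmetic. The sketch below illustrates the idea; the dollar figures are invented for illustration and are not Dell's actual numbers.

```python
# Hypothetical illustration of the flat per-app chargeback model:
# total data center + application infrastructure cost, divided by the
# total app count, then multiplied by each department's app count.

def per_app_charge(total_infrastructure_cost: float, total_apps: int) -> float:
    """Flat charge per application, regardless of compute consumed."""
    return total_infrastructure_cost / total_apps

def department_bill(total_infrastructure_cost: float, total_apps: int,
                    dept_apps: int) -> float:
    """A department's bill is simply its app count times the flat rate."""
    return dept_apps * per_app_charge(total_infrastructure_cost, total_apps)

# Illustrative numbers only -- not Dell's actual figures.
rate = per_app_charge(100_000_000, 10_000)        # $10,000 per app
bill = department_bill(100_000_000, 10_000, 250)  # $2.5 million
```

The design choice is deliberate: because every app carries the same price tag, the only way for a department to shrink its bill is to retire apps, which is exactly the behavior Dell wanted to encourage.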

This step helped prepare company leaders for the next stage of the project. Dell's IT department conducted a thorough analysis of the various apps it was supporting and discovered thousands that had no identified owner or appeared to have little or no utilization. The servers hosting those apps were simply unplugged from the network in controlled batches. Then IT waited for trouble tickets to arrive. "Not surprisingly, for each group of 500 servers that was taken off the network, at most two or three trouble tickets were raised," Johnson said.
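The batched unplug-and-listen process can be sketched in a few lines. This is a hedged illustration of the workflow described above, not Dell's tooling; the `tickets_for` callback is a hypothetical stand-in for whatever trouble-ticket system flags complaints.

```python
# Sketch of batched decommissioning: candidate servers (no owner,
# negligible utilization) are taken offline in controlled groups, and
# trouble tickets decide which few get reconnected.

from typing import Iterator

def batches(servers: list, size: int = 500) -> Iterator[list]:
    """Yield controlled batches of candidate servers."""
    for i in range(0, len(servers), size):
        yield servers[i:i + size]

def decommission(candidates: list, tickets_for) -> tuple[list, list]:
    """Unplug each batch; reconnect only servers that raise tickets.

    `tickets_for` is a hypothetical callback returning the servers in
    a batch that generated trouble tickets after going offline.
    """
    retired, reconnected = [], []
    for batch in batches(candidates):
        flagged = set(tickets_for(batch))
        reconnected += [s for s in batch if s in flagged]
        retired += [s for s in batch if s not in flagged]
    return retired, reconnected
```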

This entire process helped Dell eliminate around 60 percent of the 7,000 total apps it ended up removing. The remainder came from identifying niche apps that could be replaced by an enterprise-level solution, as well as weeding out and eliminating duplicate apps.

When these efforts were all done, Dell managed to reduce its number of supported apps from 10,000 to 3,000, which freed up a significant amount of data center resources. These efforts coupled with virtualization have allowed the company to remove 4,000 servers over the past year. Meanwhile, server utilization levels have doubled to 40 percent -- a number Dell is continuing to improve.

Moreover, the company has reaped even more energy savings by upgrading to high-efficiency servers and reorganizing the way it does power and cooling -- including using outside air 150 days of the year in sweltering Austin, Texas. All in all, Dell reports that through its array of data center efficiency efforts, it has increased overall computing capacity by 270 percent, reduced energy consumption by 30 percent, and saved over $50 million in assorted costs. Retiring and consolidating thousands of servers and apps has also simplified IT administration tasks, including management, accounting, and licensing.

EPA's Energy Star for servers and data centers illuminates sustainable paths
New specifications set a much-needed bar for energy efficiency in products and operations

Over the past couple of years, an increasing number of data center operators and hardware manufacturers have proudly proclaimed that the facilities they run or the hardware they produce are oh so much greener than the competition's. But such proclamations can leave observers wondering what that really means, given that standards for weighing such claims have been lacking.

That's changed in the past year as the Environmental Protection Agency rolled out not one but two brand-new Energy Star specifications, one for servers and one for data centers, that set a bar for assessing and comparing the energy efficiency of individual machines or entire facilities. While not perfect, these two specs reflect some heavy-duty data gathering and feedback solicitation from stakeholders. More important, these specs mark a couple of critical steps forward for IT sustainability in the United States and beyond.

Energy Star for Servers took well over a year to develop, with the EPA collecting comments from vendors, environmental groups, and other concerned parties. The end result was a standard applicable to machines with between one and four sockets and at least one hard drive. Servers that manage to burn the fewest watts while idling are eligible for the Energy Star designation. Power wasted in idle mode is indeed significant, particularly given that servers are notoriously underutilized. Additionally, compliant servers must be capable of measuring their own real-time power use, processor utilization, and air temperature -- all critical data for helping operators assess the overall efficiency of their facilities.

Devising the first edition of the Energy Star for Data Centers spec entailed gathering and analyzing a wealth of data center measurements, amassed over extended periods of time from an array of facilities. Through careful statistical analysis, and again drawing on feedback from stakeholders, the EPA determined what criteria do and do not account for differences in energy efficiency among data centers. The end result was an Energy Star standard based on PUE (Power Usage Effectiveness), which is the ratio of overall data center power consumption to the power consumption of IT equipment.
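The PUE ratio at the heart of the standard is straightforward to compute. A minimal sketch, with illustrative kilowatt figures that are not drawn from any particular facility:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power.

    A PUE of 1.0 would mean every watt reaches the IT equipment;
    real data centers run above that because cooling, lighting, and
    power-distribution losses also draw from the total.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Example: a facility drawing 1,800 kW overall with a 1,000 kW IT load.
print(pue(1800, 1000))  # 1.8
```

Lower is better: a drop from 1.8 toward 1.2 means proportionally less power is being spent on overhead for every watt of actual computing.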

Energy Star for Data Centers compares a facility's actual PUE against its predicted PUE, which is effectively what the average PUE would be among similar facilities. Data centers that achieve a PUE well below the predicted level (once verified by a third party) can claim Energy Star status. A finalized version of the spec will be released in Portfolio Manager, the EPA's online benchmarking tool, later this year.

Both sets of specs need fine-tuning. Energy Star for Servers, for example, doesn't consider a server's efficiency when it's doing actual work, nor does it take into account cores per processor. Energy Star for Data Centers is based heavily on PUE, which, though useful, hardly paints a complete picture of power usage. Further, the standard doesn't consider differences that can affect overall PUE, such as tier level or what sort of work a data center is doing. The EPA, however, readily recognizes that these standards (like other Energy Star standards) are a work in progress. The organization is already in the process of developing Version 2.0 of Energy Star for Servers and is seeking feedback from stakeholders.

In the meantime, server vendors and data center operators now have useful maps to guide them down the uncertain path toward sustainability.

Ericsson drives a greener supply chain
Web-based asset management system promotes increases in efficiency and reuse

Telecom equipment company Ericsson faced a problem not uncommon among manufacturing companies: Its supply chain was fraught with inefficiencies. The company had limited visibility into its own far-reaching inventory of products and parts, and for competitive reasons, repair providers were reluctant to share their inventory data. Thus, in order to ensure it could keep up with customer demands, the company had to maintain excess stock, which can prove both costly and wasteful. Moreover, the company determined that it was spending more time and resources than necessary to get inventory to customers -- not to mention the waste that came from disposing of excess wares that had become obsolete.

In an effort to make its supply chain more efficient and environmentally sound, Ericsson last year deployed a network asset management system from Trade Wings called Re:source Visibility. Among its feats, the system provides Ericsson and its 2,000 global partners with a consolidated, up-to-date view of the inventories at repair centers and service channel operation centers, as well as from new material order teams.

The greater visibility into inventory lets Ericsson and partners determine whether the products or parts a customer needs are available at a nearby repair shop, thus saving the time and expense of ordering and shipping the goods from afar. "An important piece in the success of our initiative is the ability to look beyond the normal boundaries of internal stock levels to first consider equipment that's part of our reuse and WEEE [the European Community's Waste Electrical and Electronic Equipment directive] material flows, and then if necessary, search the secondary market," said Mikael Thoren, global planning manager at Ericsson.

Additionally, Ericsson can better foresee potential shortages of in-demand wares, thus helping to reduce costlier small-production runs to fulfill customer requirements.

From a logistics perspective, Ericsson can use the system to devise efficient transportation routes, taking into account distance, fuel, and emissions when, for example, moving inventory from one location to another. "The system provides both availability of equipment and the distances from point of need, which has provided us with the ability to factor fuel consumption into the decision-making process," said Thoren. "As this initiative continues to evolve, we're working to broaden the number of variables (e.g., weight, transport type, CO2 emissions) available to us in order to maximize the environmental benefits of our reuse optimization strategy."
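The decision Thoren describes amounts to choosing a source of stock that minimizes distance (a proxy for fuel and emissions) among locations that can actually fill the request. A hypothetical sketch follows; the field names and selection rule are assumptions for illustration, not details of the Re:source Visibility system.

```python
# Hypothetical stock-sourcing sketch: prefer the nearest location that
# can fill the whole request, using distance as a proxy for fuel use
# and emissions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Source:
    name: str
    distance_km: float
    units_available: int

def best_source(sources: list[Source], units_needed: int) -> Optional[Source]:
    """Return the closest source with enough stock, or None."""
    viable = [s for s in sources if s.units_available >= units_needed]
    if not viable:
        return None
    return min(viable, key=lambda s: s.distance_km)
```

Extending this is a matter of swapping the `distance_km` key for a weighted score over more variables (weight, transport type, CO2 emissions), which matches the direction Thoren says the initiative is evolving.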

According to Thoren, the program also supports the company's material take-back service, a legal requirement under the WEEE directive, where customers can request that Ericsson pick up retired goods for end-of-life management. "In 2009, we received approximately 500 requests globally for WEEE collection, which amounted to about 7,045 tons," Thoren said. "Our recovery rate for treated equipment is more than 95 percent; the WEEE directive's requirement is 75 percent."

All of these benefits add up to faster, more efficient customer service, lower operating and energy costs, less electronic waste, and fewer carbon emissions. All told, the company has saved approximately $10 million and seen more than a 20 percent decrease in equipment purchases. Also, repair volumes in some repair centers have dropped by nearly 80 percent.

Intel pinpoints thousands of unproductive servers
Using a homegrown application for measuring server utilization, chipmaker is able to reassign or retire 5,000 machines

Imagine running a company with a staff of 2,000 full-time employees who spent around 80 percent of their time doing nothing beyond waiting for some work to do. Odds are, you'd make some staffing changes pretty darn quickly to address such egregious waste. Yet in data centers around the world, servers are permitted to run 24/7, wasting power and adding to organizations' carbon footprints while operating at average utilization levels of 20 percent, 10 percent, or even less.

There are several reasons data center operators tolerate this level of waste. One is that companies lack the necessary tools to gain full visibility into the hardware they're running, such as how much work a machine is doing or whether it's powering a business-critical application. Thus, it's generally easier (and safer) to simply add new racks of servers when computing demands increase, rather than performing a time-consuming inventory of all the machines and pulling the plug on systems that appear to be performing unnecessary work.

Intel last year developed an innovative application for determining which servers were earning their keep and which ones were slacking off. Called iSHARP (Interactive System Health and Resource Productivity), the application is capable of accurately measuring and tracking utilization on the company's large distributed pools of computers. These particular machines are part of an interactive environment, used to process design and development simulations and related tasks for microprocessors.

"This was in effect an effort to drive down the cost of capital expenditures within the batch and interactive services and the ever-growing operational expenses, including data center power, cooling, and space," said Richard Meneely, Interactive Computing Product Owner for Intel's Engineering Computing group. "We would prefer to not add the expense of building and operating any additional data centers."

In developing iSHARP, Intel first had to define algorithms to correctly identify underutilized machines. Specifically, the app measures CPU and memory utilization on a frequent basis for each system within the interactive computing environment. Those measurements are written to a back-end database for reporting and analysis. The algorithms take into account the individual system's architecture, hardware configuration, and category of application when determining thresholds for identifying underutilization.
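The core check described above can be sketched as a threshold test over utilization samples. This is a hedged illustration of the kind of logic the article attributes to iSHARP, not Intel's actual algorithm; the class names and threshold values are assumptions.

```python
# Sketch of underutilization detection: sample CPU and memory
# utilization frequently, then flag systems whose averages fall below
# thresholds chosen per hardware/application class.

from statistics import mean

# Hypothetical per-class thresholds (percent). A real system would
# derive these from the machine's architecture, hardware
# configuration, and application category, as the article notes.
THRESHOLDS = {
    "interactive": {"cpu": 10.0, "mem": 20.0},
    "batch":       {"cpu": 25.0, "mem": 30.0},
}

def is_underutilized(cpu_samples, mem_samples,
                     system_class: str = "interactive") -> bool:
    """True if both average CPU and memory sit below class thresholds."""
    t = THRESHOLDS[system_class]
    return mean(cpu_samples) < t["cpu"] and mean(mem_samples) < t["mem"]
```

In practice the samples would be read back from the reporting database the article mentions, and a machine would be flagged only after sustained low readings rather than a single snapshot.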
