IT is under greater pressure to address energy efficiency in the data center, particularly as global regulations aimed at sustainability come into play. A key area of focus is improving server efficiency.

Servers can consume more than half of the energy in modern data centers, which makes server efficiency attractive to companies looking to hit carbon-neutral sustainability targets. Plus, reducing energy usage can save money.

“Ever-larger data centers have mushroomed across the globe in line with an apparently insatiable demand for computing and storage capacity,” said Uptime Institute in its 2023 data center predictions report. “The associated energy use is not only expensive – and generating massive carbon emissions – but is also putting pressure on the grid.”

To help enterprises reach their energy-efficiency goals, Uptime Institute has identified five ways to boost server efficiency.

Strategies for more efficient server utilization

For its analysis, Uptime focused on servers that use AMD EPYC or Intel Xeon processors, and it examined server generations from 2017, 2019, and 2021 using data from The Green Grid’s SERT database (additional details on the SERT data can be found at the end of the article). Here’s more on the firm’s guidelines to help enterprises analyze and understand the potential for server efficiency improvements.

Jump two server generations for a major energy-efficiency boost

Older servers are less energy efficient than new ones, says Jay Dietrich, Uptime Institute’s research director of sustainability. For example, Intel servers’ efficiency improved by 34% between 2017 and 2019 for CPUs running at 50% utilization, according to a recent report he co-authored. AMD-based servers saw an even bigger improvement of 140%, he says.

Upgrading from 2019-generation to 2021-generation servers increases efficiency by a further 32% for Intel servers and 47% for AMD servers.
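Taken at face value, the per-generation gains compound. A quick back-of-the-envelope calculation, using only the 50%-utilization figures quoted above, gives a rough sense of the full two-generation jump:

```python
# Back-of-the-envelope: compound the per-generation efficiency gains
# quoted above (at 50% CPU utilization). Illustrative arithmetic only.
gains = {
    "Intel": [0.34, 0.32],  # 2017->2019, then 2019->2021
    "AMD":   [1.40, 0.47],
}

for vendor, steps in gains.items():
    factor = 1.0
    for g in steps:
        factor *= 1 + g  # efficiency gains multiply, not add
    print(f"{vendor}: two-generation jump ≈ {factor - 1:.0%} more work per unit of energy")
```

With those figures, the two-generation jump works out to roughly 77% for Intel and about 250% for AMD, which is why Uptime frames skipping a generation as the bigger win.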
The improved efficiency numbers cut across all levels of utilization.

When comparing AMD and Intel servers, Intel servers were more efficient in 2017 at all levels of CPU utilization, but since 2019 AMD has leapt ahead. Among 2021 servers running at 50% utilization, the average AMD server is 74% more efficient than its Intel counterpart.

Underused servers waste energy

Just like a car idling in traffic, servers that aren’t running at full capacity are wasting energy.

According to a 2022 Uptime Institute data-center survey, only 47% of companies achieved server utilization of 50% or better, up from 36% in 2020. Those numbers may be somewhat inflated because responding companies may have reported only their best-performing servers – for example, those running only batch jobs, which can push utilization as high as 80%, Dietrich says.

Utilization rates in general are likely lower because many applications don’t run consistently. Business and enterprise software, for example, is used heavily during working hours but much less after hours. Utilization can be increased by having the servers that host business apps run less time-sensitive workloads during off-peak hours.

The effort is worth it. Doubling low CPU utilization (20% to 30%) to higher levels (40% to 60%) can boost average efficiency dramatically, Uptime says.

For maximum impact, companies should look at increasing utilization while also upgrading servers to the latest models. According to Uptime, combining increased utilization with a server refresh can more than double efficiency. That means an increase of 100% or more in workload processed for the same amount of energy.
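The reason utilization matters so much is that a server’s power draw is partly fixed: it burns energy even at idle. A rough sketch with hypothetical numbers (not Uptime’s data) shows how a refresh plus higher utilization compounds into more work per watt:

```python
# Rough illustration with hypothetical numbers (not Uptime's data).
# A server's power draw has a fixed idle component plus a load-proportional
# component, which is why low utilization wastes energy.

def work_per_watt(utilization, idle_w, max_w, work_at_full_load):
    """Work delivered per watt at a given utilization (simple linear power model)."""
    power = idle_w + (max_w - idle_w) * utilization
    work = work_at_full_load * utilization
    return work / power

# Legacy server at 25% utilization vs. a newer, more capable server at 50%.
old = work_per_watt(0.25, idle_w=120, max_w=350, work_at_full_load=2_000_000)
new = work_per_watt(0.50, idle_w=90, max_w=400, work_at_full_load=4_000_000)

print(f"Combined gain: {new / old - 1:.0%} more work per watt")
```

With these made-up parameters the combined gain comes out near 190% – nearly a tripling – consistent with Uptime’s point that the refresh and the utilization increase together deliver far more than either alone.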
When done at scale, this can result in significant capital and operational savings, reduce energy requirements, and improve sustainability performance.

On the flip side, directly replacing a legacy server with a higher-capacity one without also increasing the workload actually reduces utilization rates, says Dietrich, undoing some of the benefits of the upgrade.

It takes additional planning to increase utilization while also doing a hardware upgrade, but the result is not just better efficiency but possibly fewer servers, because fewer of the new machines may be needed.

Load up more powerful servers

Buying more powerful hardware can also result in better energy efficiency, but that capacity needs to be put to work. Without a major workload consolidation, the business case for upgrading old servers remains dubious. “Newer generation servers tend to only deliver major efficiency gains when they carry larger workloads. A simple one-to-one machine migration may result in little to no efficiency improvement,” Uptime warned.

For AMD servers in particular, efficiency improves sharply as server work capacity increases. Upgrading from a low-end server that handles two million SSJ transactions to a high-end server that can do more than eight million can double server efficiency. For Intel servers, there are still efficiency benefits, though they are less dramatic, Uptime says.

Increase server cores to improve efficiency

Another way to improve efficiency dramatically is to increase the number of processor cores. In the case of 2021 AMD servers, as the number of cores increases from eight to 64, efficiency triples, Uptime found. For Intel, the increase was smaller but still significant for 2021 machines.

It’s important to note that not all workloads can use all available cores, says Dietrich. “Some workloads will work most efficiently on, say, a 12-core processor,” he says.
So it’s important to match processor capabilities with the needs of the applications running on the server in order to gain the most efficiency.

In some cases, hypervisors and virtual machines can be used to maximize usage, he says, but not all applications lend themselves to these environments.

IT power management is often overlooked

Power-management features of servers can improve the energy-efficiency equation, boosting server efficiency by at least 10%, according to Uptime’s research. These features scale CPU voltage and frequency up or down with demand and move unused cores into a low-power idle state. Many organizations don’t use them, however, because of performance or latency worries.

According to the Uptime Institute report, power management can increase latency by 20 to 80 microseconds, which is unacceptable for some types of workloads, such as financial trading. “And there are some applications where you might decide not to use it because it will cause performance or response time problems,” Dietrich says. But there are other applications where delays won’t have a business impact.

“The biggest mistake is that some operators are risk averse,” Dietrich says. “They think that if they’re going to save a couple of hundred bucks a server on their energy bill but are risking breaking their SLA, which will cost them a million dollars, they’re not going to turn [power management] on.”

Dietrich recommends that when companies buy new servers and run their performance tests, they check whether power management adversely affects their applications. “If it doesn’t bother them, then you can use power management,” he says.
“You can implement a set of power-management functions that will let you save energy and still provide response time and performance that your customers want.”

Andy Lawrence, executive director of research at Uptime, noted in a blog post that the efficiency benefits of IT power management are well established and understood, yet few operators use it. “IT power management has long been overlooked as a means of improving data center efficiency,” Lawrence wrote. “Uptime Intelligence’s data shows that in most cases, concerns about IT performance are far outweighed by the reduction in energy use. Managers from both IT and facilities will benefit from analyzing the data, applying it to their use cases and, unless there are significant technical and performance issues, using power management as a default.”

How Uptime measured server efficiency

Uptime analyzed the efficiency of 429 server platforms using The Green Grid’s Server Efficiency Rating Tool (SERT) database. The Green Grid is a consortium whose goal is to create tools, provide technical expertise, and advocate for energy and resource efficiency in data center environments.

The SERT suite is an industry standard for measuring server efficiency; mandatory server-efficiency requirements set by the EU’s Ecodesign Directive and the US Energy Star program specify that servers report the SERT overall efficiency metric.

Uptime analyzed AMD and Intel server data from the SERT database, noting that different processor types have advantages and disadvantages depending on the workload. Uptime focused on servers that use AMD EPYC or Intel Xeon processors, and analyzed server generations from 2017, 2019, and 2021.

The institute ran the servers through their paces with a simulated enterprise online transaction-processing application that stresses processors and memory. That simulation is the SERT worklet server-side Java (SSJ).
Uptime says it was chosen in part because SSJ data is available for eight levels of server utilization (12.5%, 25%, 37.5%, 50%, 62.5%, 75%, 87.5%, and 100%) rather than just four, which allows for a more granular analysis.
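Sampling eight load levels instead of four matters because efficiency is not flat across utilization. A simplified sketch (hypothetical efficiency curve, not SERT’s actual scoring formula) shows how the two samplings can summarize the same server differently:

```python
# Illustrative only: average work-per-watt across SERT's eight SSJ load
# levels, using a hypothetical efficiency curve that peaks mid-range.
# This is NOT the official SERT scoring formula.
load_levels = [i * 0.125 for i in range(1, 9)]  # 12.5% .. 100%

# Hypothetical measurements: utilization -> work per watt.
efficiency = {0.125: 9.0, 0.25: 14.0, 0.375: 17.5, 0.5: 19.0,
              0.625: 19.5, 0.75: 19.0, 0.875: 18.0, 1.0: 16.5}

eight_point = sum(efficiency[u] for u in load_levels) / len(load_levels)
four_point = sum(efficiency[u] for u in (0.25, 0.5, 0.75, 1.0)) / 4

print(f"8-level average: {eight_point:.2f}  4-level average: {four_point:.2f}")
```

With this made-up curve the two averages diverge, illustrating why the finer-grained SSJ data lets Uptime characterize efficiency across the utilization range more accurately.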