Artificial intelligence is set to play a bigger role in data-center operations as enterprises begin to adopt machine-learning technologies that have been tried and tested by larger data-center operators and colocation providers.

Today's hybrid computing environments often span on-premises data centers, cloud and colocation sites, and edge computing deployments. And enterprises are finding that a traditional approach to managing data centers isn't optimal. By using artificial intelligence, as played out through machine learning, there's enormous potential to streamline the management of complex computing facilities.

AI in the data center, for now, revolves around using machine learning to monitor and automate the management of facility components such as power and power-distribution elements, cooling infrastructure, rack systems and physical security.

Inside data-center facilities, growing numbers of sensors are collecting data from devices including uninterruptible power supplies (UPS), power distribution units, switchgear and chillers. Data about these devices and their environment is parsed by machine-learning algorithms, which cull insights about performance and capacity, for example, and determine appropriate responses, such as changing a setting or sending an alert. As conditions change, a machine-learning system learns from the changes – it's essentially trained to self-adjust rather than rely on specific programming instructions to perform its tasks.

The goal is to enable data-center operators to increase the reliability and efficiency of their facilities and, potentially, to run them more autonomously. However, getting the data isn't a trivial task.

A baseline requirement is real-time data from major components, says Steve Carlini, senior director of data-center global solutions at Schneider Electric. That means chillers, cooling towers, air handlers, fans and more.
On the IT equipment side, it means metrics such as server utilization rate, temperature and power consumption.

"Metering a data center is not an easy thing," Carlini says. "There are tons of connection points for power and cooling in data centers that you need to get data from if you want to try to do AI."

IT pros are accustomed to device monitoring and real-time alerting, but that's not the case on the facilities side of the house. "The expectation of notification in IT equipment is immediate. On your power systems, it's not immediate," Carlini says. "It's a different world."

It's only within the last decade or so that the first data centers were fully instrumented, with meters to monitor power and cooling. And where metering exists, standardization is elusive: Data-center operators rely on building-management systems that use multiple communication protocols – from Modbus and BACnet to LonWorks and Niagara – and have had to be content with devices that don't share data or can't be operated remotely. "TCP/IP, Ethernet connections – those kinds of connections were unheard of on the powertrain side and cooling side," Carlini says.

The good news is that data-center monitoring is advancing toward the depth that's required for advanced analytics and machine learning. "The service providers and colocation providers have always been pretty good at monitoring at the cage level or the rack level, and monitoring energy usage. Enterprises are starting to deploy it, depending on the size of the data center," Carlini says.

Machine learning keeps data centers cool

A Delta Air Lines data-center outage, attributed to electrical-system failure, grounded about 2,000 flights over a three-day period in 2016 and cost the airline a reported $150 million. That's exactly the sort of scenario that machine learning-based automation could potentially avert.
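The monitor-and-respond loop described earlier, in which a model learns what normal looks like for a sensor and flags deviations, can be sketched in a few lines. This is a minimal illustration only: the three-sigma rule and the chiller temperature readings are invented assumptions, not any vendor's actual method.

```python
from statistics import mean, stdev

def check_reading(history, new_value, k=3.0):
    """Flag a reading that deviates more than k standard deviations
    from recent history (a crude stand-in for a learned baseline)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) > k * sigma

# Hypothetical chiller supply-temperature readings in degrees C
history = [7.1, 7.0, 7.2, 6.9, 7.1, 7.0]
normal = check_reading(history, 7.2)    # small fluctuation, no action
alert = check_reading(history, 12.5)    # sudden jump worth an alert
```

In a real deployment the baseline would be multivariate and seasonal; the point is that the alert threshold is learned from the data rather than hard-coded.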
Thanks to advances in data-center metering and the advent of data pools in the cloud, smart systems have the potential to spot vulnerabilities and drive efficiencies in data-center operations in ways that manual processes can't.

A simple example of machine learning-driven intelligence is condition-based maintenance applied to consumable items in a data center – cooling filters, for example. By monitoring the airflow through multiple filters, a smart system could sense if some of the filters are more clogged than others, and then direct the air to the less clogged units until it's time to change all the filters, Carlini says.

Another example is monitoring the temperature and discharge of the batteries in UPS systems. A smart system can identify a UPS that has been running in a hotter environment and might have been discharged more often than others, and then designate it as a backup UPS rather than a primary. "It does a little bit of thinking for you. It's something that could be done manually, but the machines can also do it. That's the basic stuff," Carlini says.

Taking things up a level is dynamic cooling optimization, one of the more common examples of machine learning in the data center today, particularly among larger data-center operators and colocation providers.

With dynamic cooling optimization, data-center managers can monitor and control a facility's cooling infrastructure based on environmental conditions. When equipment is moved or computing traffic spikes, heat loads in the building can change, too. Dynamically adjusting cooling output to shifting heat loads can help eliminate unnecessary cooling capacity and reduce operating costs.

Colocation providers are big adopters of dynamic cooling optimization, says Rhonda Ascierto, research director for the datacenter technologies and eco-efficient IT channel at 451 Research.
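Carlini's filter example above can be expressed as a small balancing routine. This is a sketch under stated assumptions: the per-filter "clog fraction" metric and the replacement threshold are invented for illustration, not taken from a real product.

```python
def balance_airflow(filters, clog_threshold=0.8):
    """Given per-filter clog fractions (0 = clean, 1 = fully clogged),
    return airflow weights favoring the less clogged units, or signal
    that every filter is due for replacement."""
    if all(c >= clog_threshold for c in filters.values()):
        return "replace all filters"
    # Weight airflow by remaining capacity (1 - clog), normalized to sum to 1.
    capacity = {name: 1.0 - c for name, c in filters.items()}
    total = sum(capacity.values())
    return {name: cap / total for name, cap in capacity.items()}

# Hypothetical clog readings: F2 is dirtier, so it gets less of the air
weights = balance_airflow({"F1": 0.2, "F2": 0.6, "F3": 0.2})
```

The same shape of logic covers the UPS example: rank units by observed heat exposure and discharge history, then reassign roles accordingly.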
"Machine learning isn't new to the data center," Ascierto says. "Folks for a long time have tried to better right-size cooling based on capacity and demand, and machine learning enables you to do that in real time."

Vigilent is a leader in dynamic cooling optimization. Its technology works to optimize the airflow in a data-center facility, automatically finding and eliminating hot spots.

Data-center operators tend to run much more cooling equipment than they need to, says Cliff Federspiel, founder, president and CTO of Vigilent. "It usually produces a semi-acceptable temperature distribution, but at a really high cost."

If there's a hot spot, the typical reaction is to add more cooling capacity. In reality, higher air velocity can produce pressure differences that interfere with the flow of air through equipment or impede the return of hot air back to the cooling equipment. Even though it's counterintuitive, it might be more effective to decrease fan speeds, for example.

Vigilent's machine learning-based technology learns which airflow settings optimize each customer's thermal environment. Delivering the right amount of cooling, exactly where it's needed, typically results in up to a 40% reduction in cooling-energy bills, the company says.

Beyond automating cooling systems, Vigilent's software also provides analytics that customers use to make operational decisions about their facilities.

"Our customers are becoming more and more interested in using that data to help manage their capital expenditures, their capacity planning, their reliability programs," Federspiel says. "It's creating opportunities for lots of new kinds of data-dependent decision making in the data center."

AI makes existing processes better

Looking ahead, data-center operators are working to extend the success of dynamic cooling optimization to other areas.
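The dynamic cooling optimization discussed above can be caricatured as a feedback loop that raises or lowers cooling output as heat load shifts. This toy proportional controller is a sketch only: the setpoint, gain and speed limits are invented, and systems like Vigilent's use far richer learned models rather than a single fixed rule.

```python
def adjust_fan_speed(speed, rack_temp, setpoint=24.0, gain=2.0):
    """Nudge fan speed (percent) in proportion to the temperature error.
    Note the loop can lower speed as well as raise it: over-cooling wastes
    energy, and excess air velocity can itself create airflow problems."""
    error = rack_temp - setpoint
    return max(20.0, min(100.0, speed + gain * error))

# Hypothetical rack cooling toward the setpoint over five readings
speed = 70.0
for temp in [26.0, 25.0, 24.5, 24.0, 23.0]:
    speed = adjust_fan_speed(speed, temp)
```

Once the rack drops below the setpoint, the controller backs the fans off, which is the counterintuitive behavior Federspiel describes: less cooling can be the more effective response.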
Generally speaking, the areas ripe for injecting machine learning are familiar processes that involve repetitive tasks.

"New machine learning-based approaches to data centers will most likely be applied to existing business processes because machine learning works best when you understand the business problem and the rules thoroughly," Ascierto says.

Enterprises have existing monitoring tools, of course. There's a longstanding category of data-center infrastructure management (DCIM) software that provides visibility into data-center assets, interdependencies, performance and capacity. DCIM software tackles functions including remote equipment monitoring, power and environmental monitoring, IT asset management, data management and reporting. Enterprises use DCIM software to simplify capacity planning and resource allocation as well as to ensure that power, equipment and floor space are used as efficiently as possible.

"If you have basic monitoring and asset management in place, your ability to forecast capacity is vastly improved," Ascierto says. "Folks are doing that today, using their own data."

Next up: adding outside data to the DCIM mix. That's where machine learning plays a key role.

Data-center management as a service, or DMaaS, is a service based on DCIM software. But it's not simply a SaaS-delivered version of DCIM. DMaaS takes data collection a step further, aggregating equipment and device data from scores of data centers. That data is then anonymized, pooled and analyzed at scale using machine learning.

READ MORE: What does DMaaS deliver that DCIM doesn't?

Two early players in the DMaaS market are Schneider Electric and Eaton.
Both vendors mined a slew of data from their years of experience in the data-center world, which includes designing and building data centers, building management, electrical distribution, and power and cooling services.

"The big, significant change is what Schneider and Eaton are doing, which is having a data lake of many customers' data. That's really very interesting for the data-center sector," Ascierto says.

Access to that kind of data, harvested from a wide range of customers with a wide range of operating environments, enables an enterprise to compare its own data-center performance against global benchmarks. For example, Schneider's DMaaS offering, called EcoStruxure IT, is tied to a data lake containing benchmarking data from more than 500 customers and 2.2 million sensors.

"Not only are you able to understand and solve these issues using your own data, but you can also use data from thousands of other facilities, including many that are very similar to yours. That's the big difference," Ascierto says.

Predictive and preventive maintenance, for example, benefit from deeper intelligence. "Based on other machines operating in similar environments, with similar utilization levels, similar age, similar components, the AI predicts that something is going to go wrong," Ascierto says.

Scenario planning is another process that will get a boost from machine learning. Companies do scenario planning today, estimating the impact of an equipment move on power consumption, for example. "That's available without machine learning," Ascierto says. "But being able to apply machine-learning data, historic data, to specific configurations and different designs – the ability to determine the outcome of a particular configuration or design is much, much greater."

Risk analysis and risk-mitigation planning, too, stand to benefit from more in-depth analytics.
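The benchmarking Ascierto describes comes down to locating your own facility's metric within the pooled fleet's distribution. A minimal sketch, assuming the DMaaS provider supplies anonymized power-usage-effectiveness (PUE) figures; all numbers here are invented for illustration.

```python
def percentile_rank(fleet_values, own_value):
    """Fraction of the anonymized fleet whose metric is worse than ours.
    For PUE, lower is better, so 'worse' means a higher reading."""
    worse = sum(1 for v in fleet_values if v > own_value)
    return worse / len(fleet_values)

# Hypothetical anonymized fleet PUE readings from a pooled data lake
fleet_pue = [1.2, 1.4, 1.5, 1.6, 1.8, 2.0, 2.1, 2.4]
rank = percentile_rank(fleet_pue, own_value=1.65)
```

With a rank of 0.5, this facility runs more efficiently than half the fleet; the same comparison against only "facilities similar to yours" is what the pooled data makes possible.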
"Data centers are so complex, and the scale is so vast today, that it's really difficult for human beings to pick up patterns, yet it's quite trivial for machines," Ascierto says.

In the future, widespread application of machine learning in the data center will give enterprises more insight as they make decisions about where to run certain workloads. "That is tremendously valuable to organizations, particularly if they are making decisions around best execution venue," Ascierto says. "Should this application run in this data center? Or should we use a colocation data center?"

Looking further into the future, smart systems could take on even more sophisticated tasks, enabling data centers to dynamically adjust workloads based on where they will run most efficiently or most reliably. "Sophisticated AI is still a little off into the future," Carlini says.

In the meantime, for companies that are just getting started, he stresses the importance of getting facilities and IT teams to collaborate more.

"It's very important that you consider all the domains of the data center – the power, the cooling and the IT room," Carlini says. The industry is working hard to ensure interoperability among the different domains' technologies. Enterprises need to do the same on the staffing front.

"Technically it's getting easier, but organizationally you still have silos," he says.
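The "best execution venue" decision Ascierto describes could, in time, reduce to scoring candidate sites on predicted efficiency and reliability. This closing sketch is entirely hypothetical: the site names, scores and weights are made up, and a real system would derive them from the kind of pooled operational data discussed above.

```python
def best_venue(sites, w_efficiency=0.6, w_reliability=0.4):
    """Rank candidate sites by a weighted score of predicted energy
    efficiency and reliability (both normalized 0..1, higher is better)."""
    def score(site):
        return (w_efficiency * site["efficiency"]
                + w_reliability * site["reliability"])
    return max(sites, key=score)["name"]

# Hypothetical candidate venues for a workload placement decision
sites = [
    {"name": "on-prem",   "efficiency": 0.70, "reliability": 0.95},
    {"name": "colo-east", "efficiency": 0.85, "reliability": 0.90},
    {"name": "colo-west", "efficiency": 0.80, "reliability": 0.85},
]
choice = best_venue(sites)
```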