The reality of a self-managing data center is getting closer with HPE's announcement last week of what it claims is the first artificial intelligence (AI) predictive engine for trouble in the data center.
HPE says next year it will offer an AI recommendation engine add-on designed to predict and stop storage and general-infrastructure trouble before it starts. It's one of a number of autonomous data center components we should expect to see soon. Other AI and machine learning systems geared toward data centers will be available from companies such as Litbit (which I wrote about in the summer) and Oracle, among others.

"Infrastructure solutions should utilize data science and machine learning," HPE says in a white paper in which it attempts to explain why AI and machine learning are better at preventing downtime than humans.
Currently, IT workers must constantly carry out "intricate forensic work to unravel the maze of issues that impact data delivery to applications." That creates a bottleneck, HPE says.
However, the company says that through a form of machine learning, poorly performing components can be identified automatically, without traditional human guesswork, and early on, before users perceive any kind of problem. Essentially, this is accomplished by collecting massive amounts of data throughout the IT infrastructure stack and then analyzing it.
How HPE's self-managing solution works
The idea is to "detect and rapidly identify the root cause" and then to "resolve the problem through data collection." Signatures are then built to identify other users, elements or customers that might be affected.
Rules are then developed to instigate a solution, which can be automated.
Further, in the event that a user does experience a failure, the AI and machine learning solution, with its new signatures and rules, can quickly intervene across the entire system and stop others from inheriting the same issue. Future software updates are optimized based on what's learned through that AI.
HPE got to where it is with its AI offering partly through its purchase earlier this year of flash storage and predictive analytics company Nimble Storage, which HPE says has been collecting data science and telemetry for a decade. Nimble has, in fact, analyzed over 12,000 cases of app-gap. That's the moniker HPE uses for the productivity-reducing bottleneck between application and data: issues, in other words.
To reduce downtime, one needs full analysis of the entire IT stack, HPE says.
Through that, downtime can be predicted, the company claims. The causes of slowing infrastructure will be identified and then prevented with AI, rather than merely being monitored by humans and flagged as potential trouble.
And "prescriptive resolution" should be employed if the engine can't prevent a failure. That means the engine should be able to fix the problem when it occurs, by knowing the root cause predictively and analytically rather than through traditional manual troubleshooting with tools such as web-based forum lookups.
Self-managing systems reduce staffing levels
Finally, HPE is serious about autonomy. With this technology, staffing levels conceivably drop. The company says eliminating front-line tech support staff, who are often simply collecting information and documenting issues, brings the autonomous data center closer to becoming a reality.
(The AI engine knows there's a problem, so you don't need anyone fielding calls.)
"For the small percentage of problems that require the need to talk to an engineer, a customer can immediately reach a level three engineer," HPE says. Levels one and two are eliminated.
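HPE doesn't publish the internals of its engine, but the workflow it describes, detect an anomaly, build a signature of the failure, then match that signature against the rest of the fleet before applying an automated rule, can be sketched in miniature. Everything below (metric names, thresholds, the `Signature` schema) is hypothetical, chosen only to illustrate the loop; it is not HPE's implementation:

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Signature:
    """A fingerprint of a confirmed failure pattern (hypothetical schema)."""
    name: str
    metric: str
    threshold: float

def detect_anomaly(samples, metric, sigma=3.0):
    """Flag the latest sample if it deviates > sigma std-devs from history."""
    history = [s[metric] for s in samples[:-1]]
    mu, sd = mean(history), stdev(history)
    return abs(samples[-1][metric] - mu) > sigma * sd

def build_signature(name, metric, incident_samples):
    """Turn a confirmed incident's telemetry into a reusable signature."""
    return Signature(name, metric,
                     threshold=max(s[metric] for s in incident_samples))

def match_signature(sig, telemetry):
    """Check whether another system's telemetry matches a known signature."""
    return telemetry.get(sig.metric, 0.0) >= sig.threshold

# --- usage sketch: one slow-storage incident propagates a fleet-wide check ---
history = [{"io_latency_ms": v} for v in (5, 6, 5, 7, 6, 5, 48)]
at_risk = []
if detect_anomaly(history, "io_latency_ms"):
    sig = build_signature("slow-disk-path", "io_latency_ms", history[-2:])
    fleet = [{"host": "a", "io_latency_ms": 6.0},
             {"host": "b", "io_latency_ms": 51.0}]
    at_risk = [n["host"] for n in fleet if match_signature(sig, n)]
    # at_risk == ["b"]; an automated rule (e.g., reroute I/O) would apply here
```

The point of the sketch is the ordering HPE emphasizes: the signature is built from the first confirmed incident, so every other system can be screened before its users ever notice a problem.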