IT organizations that apply artificial intelligence and machine learning (AI/ML) technology to network management are finding that AI/ML can make mistakes, but most organizations believe that AI-driven network management will improve their network operations. To realize these benefits, network managers must find a way to trust these AI solutions despite their foibles. Explainable AI tools could hold the key.

A survey finds network engineers are skeptical

In an Enterprise Management Associates (EMA) survey of 250 IT professionals who use AI/ML technology for network management, 96% said those solutions have produced false or mistaken insights and recommendations. Nearly 65% described these mistakes as somewhat to very rare, according to the recent EMA report "AI-Driven Networks: Leveling Up Network Management." Overall, 44% of respondents said they have strong trust in their AI-driven network-management tools, and another 42% slightly trust these tools.

But members of network-engineering teams reported more skepticism than other groups (IT tool engineers, cloud engineers, or members of CIO suites), suggesting that the people with the deepest networking expertise were the least convinced. In fact, 20% of respondents said that cultural resistance and distrust from the network team was one of the biggest roadblocks to successful use of AI-driven networking. Respondents who work within a network-engineering team were twice as likely (40%) to cite this challenge.

Given the prevalence of errors and the lukewarm acceptance from high-level networking experts, how are organizations building trust in these solutions?

What is explainable AI, and how can it help?

Explainable AI is an academic concept embraced by a growing number of providers of commercial AI solutions. It is a subdiscipline of AI research that emphasizes the development of tools that spell out how AI/ML technology makes decisions and discovers insights.
Researchers argue that explainable AI tools pave the way for human acceptance of AI technology. These tools can also address concerns about ethics and compliance.

EMA's research validated this notion. More than 50% of research participants said explainable AI tools are very important to building trust in the AI/ML technology they apply to network management. Another 41% said they are somewhat important.

Majorities of participants pointed to three explainable AI tools and techniques that best help with building trust:

Visualizations of how insights were discovered (72%): Some vendors embed visual elements that guide humans through the paths AI/ML algorithms take to develop insights. These include decision trees, branching visual elements that display how the technology works with and interprets network data.

Natural language explanations (66%): These explanations can be static phrases pinned to outputs from an AI/ML tool, or they can take the form of a chatbot or virtual assistant that provides a conversational interface. Users with varying levels of technical expertise can understand these explanations.

Probability scores (57%): Some AI/ML solutions present insights without any context about how confident they are in their own conclusions. A probability score takes a different tack, pairing each insight or recommendation with a score that indicates how confident the system is in its output. This helps the user decide whether to act on the information, take a wait-and-see approach, or ignore it altogether.

Respondents who reported the most overall success with AI-driven networking solutions were more likely to see value in all three of these capabilities.

There may be other ways to build trust in AI-driven networking, but explainable AI may be one of the most effective and efficient. It offers some transparency into AI/ML systems that might otherwise be opaque.
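The probability-score idea can be made concrete with a short sketch. The thresholds, class names, and insight text below are illustrative assumptions for this article, not values or APIs from any vendor's product; the three responses mirror the act, wait-and-see, and ignore options described above.

```python
from dataclasses import dataclass

@dataclass
class Insight:
    """A hypothetical AI-generated network insight paired with a probability score."""
    summary: str
    confidence: float  # 0.0-1.0: the system's confidence in its own conclusion

def triage(insight: Insight, act_threshold: float = 0.9,
           watch_threshold: float = 0.6) -> str:
    """Map a confidence score to one of three operator responses.

    Thresholds are illustrative; a real team would tune them to its
    tolerance for false positives.
    """
    if insight.confidence >= act_threshold:
        return "act"           # confident enough to remediate now
    if insight.confidence >= watch_threshold:
        return "wait-and-see"  # plausible, but monitor before acting
    return "ignore"            # too uncertain to justify action

# A high-confidence insight is actionable; a low-confidence one is discarded.
print(triage(Insight("Link saturation on core uplink", 0.95)))  # act
print(triage(Insight("Possible DNS misconfiguration", 0.40)))   # ignore
```

The point of the score is that the decision policy lives with the human team, not inside the model: operators can see why a recommendation was surfaced and choose their own thresholds for action.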
When evaluating AI-driven networking solutions, IT buyers should ask vendors how they use explainable AI to help operators develop trust in these systems.