The enterprise IT landscape is littered with supposedly paradigm-shifting technologies that failed to live up to the hype, and until now, one could argue that AI fell into that category. But generative AI, which has taken the world by storm in the form of OpenAI's ChatGPT chatbot, just might be the real deal.
Chris Bedi, chief digital information officer at ServiceNow, says the release of ChatGPT last November was "an iPhone moment," an event that captured the public's attention in a way that "changed everything forever." He predicts that generative AI will become embedded into the fabric of every enterprise, and he recommends that CIOs and other IT leaders begin developing their generative AI strategies now.
Gartner is no less effusive, predicting that generative AI will become "a general-purpose technology with an impact similar to that of the steam engine, electricity and the internet." Although generative AI is still in its infancy and there are many pitfalls to navigate, Gartner says, "Generative AI provides new and disruptive opportunities to increase revenue, reduce costs, improve productivity and better manage risk. In the near future, it will become a competitive advantage and differentiator."
AI has been around for a long time, but generative AI takes machine learning to the next level with a neural network architecture called the transformer (the T in GPT), first described by Google researchers in 2017, so the underlying approach is genuinely new. Generative AI systems are built on pre-trained (the P in GPT) data sets (45 terabytes for ChatGPT) and are able to respond to queries in conversational language. Generative AI can produce text, images, and video, as well as software code and networking scripts.
We went right to the source and asked ChatGPT itself how it can make life easier for enterprise IT.
After a pause of no more than a couple of seconds, we got back a numbered list: 1) troubleshooting and issue resolution, 2) documentation and knowledge management, 3) automation and scripting, 4) training and onboarding, 5) security and compliance, 6) project management and planning, 7) staying updated on technology trends.
Without any prompting, the chatbot added, "It's important to note that while ChatGPT can provide valuable guidance and support, it should not be solely relied upon for critical decision-making. Human expertise and judgment should always be considered alongside AI-generated suggestions."
After getting the chatbot's perspective, we moved on to intelligent humans for their take on several key questions about generative AI: What exactly is it? What can it do for enterprise IT? What can't it do? How do I get it? What are some of the potential pitfalls that I need to be aware of?
What is generative AI and how is it different from 'traditional' AI?
For the most part, traditional AI/ML technology sits in the background, looking to identify patterns in large data sets. It makes predictions and provides recommendations based on those predictions.
Generative AI is fundamentally different. It is a large language model (LLM) trained with vast amounts of data, including samples of human conversation. It is able to digest and summarize data and can interact with a human using natural language. ChatGPT is a super Siri that surprised even its creators when it racked up a million users in its first week after launch and 100 million after two months. It currently draws 1.8 billion visits per month.
In general, when systems scale rapidly, they become more complex, harder to manage, less reliable, and less efficient.
With large language models, the opposite holds: the more data, the more queries, the more interactions, the smarter the system becomes, and the more it begins to resemble human intelligence.
But, at least at this stage, these models are not the same as human intelligence. Forrester analyst Rowan Curran says, "What they are not doing is creating net new information that has a contextual understanding of itself. These models predict the next word in a sequence based on the previous words in that sequence. It's important not to treat them as a source of authority, an oracle or anything that has a mind behind it."
What can generative AI do for enterprise IT?
At the networking layer, large language models can perform functions like generating network configurations, writing scripts for IT automation tools, and creating networking maps, says Shamus McGillicuddy, vice president of research at Enterprise Management Associates.
"It's very good for inspiration, imagination, anti-procrastination. One can use it to get started with a task or project. Ask it to give you something, like a piece of content or code. Then one can use his or her knowledge and skills to turn it into something good, whether it's a policy paper or a network configuration file," McGillicuddy says.
In software development, generative AI can spit out code snippets and can debug code. Large language models use the term "token" the way IT pros talk about bytes. With ChatGPT, one token represents about four characters, or roughly three-fourths of a word. This matters because each ChatGPT query/response has a limit of around 4,000 tokens, and the query wording itself counts toward that limit.
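That budget arithmetic can be sketched with the rough four-characters-per-token rule of thumb. Below is a minimal sketch in Python, assuming the approximate figures above; real applications would use an exact tokenizer such as OpenAI's tiktoken library rather than this heuristic:

```python
# Rough token-budget estimate based on the ~4-characters-per-token
# rule of thumb. A heuristic sketch, not a real tokenizer.

TOKEN_LIMIT = 4000       # approximate combined query/response budget
CHARS_PER_TOKEN = 4      # ~4 characters, or about 3/4 of a word, per token

def estimate_tokens(text: str) -> int:
    """Approximate the token count of a piece of text."""
    return max(1, round(len(text) / CHARS_PER_TOKEN))

def response_budget(prompt: str, limit: int = TOKEN_LIMIT) -> int:
    """Tokens left for the model's reply after the prompt is counted,
    since the query wording itself counts toward the limit."""
    return max(0, limit - estimate_tokens(prompt))

prompt = "Generate an access list configuration for a branch-office router."
print(estimate_tokens(prompt))   # rough size of the prompt in tokens
print(response_budget(prompt))   # rough room left for the reply
```

The point of the sketch is simply that the budget is shared: a long prompt leaves less room for the answer, which is why very large outputs get cut off.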
So, generative AI systems can write pieces of code in a variety of programming languages, but don't ask them to come up with a new version of an operating system, because when they hit that limit, they stop and reset.
At the strategic level, as IT leaders become more familiar and comfortable with generative AI, they will be able to roll it out across the enterprise to make employees more productive, streamline business processes, improve customer service, and drive digital transformation.
Bedi says generative AI's ability to take large pieces of disparate, complex information and summarize them for human consumption has applications for ITOps, analysis of security and event logs, customer support, call centers, help desks, finance, HR, sales, and marketing. "Everybody is awash in tons of content; generative AI has the ability to distill it into something useful and consumable. It can speed up every operation in the company," he adds.
Generative AI pitfalls.
If all this sounds too good to be true, that's because it probably is, at least for now. A McKinsey report cautions, "The outputs generative AI models produce may often sound extremely convincing. But sometimes the information they generate is just plain wrong. Worse, sometimes it's biased (because it's built on the gender, racial, and myriad other biases of the internet and society more generally) and can be manipulated to enable unethical or criminal activity."
Forrester's Curran uses the term "coherent nonsense" to describe this phenomenon. But the term gaining the most traction in the generative AI ecosystem is "hallucination."
Futurist Bernard Marr says, "Hallucination in AI refers to the generation of outputs that may sound plausible but are either factually incorrect or unrelated to the given context.
These outputs often emerge from the AI model's inherent biases, lack of real-world understanding, or training data limitations. In other words, the AI system 'hallucinates' information that it has not been explicitly trained on, leading to unreliable or misleading responses."
This means that enterprise IT should not put generative AI software code or networking scripts into production without a person double-checking it first, an approach dubbed "human in the loop." And organizations should have systems in place to catch instances in which the chatbot interacts with customers in a way that might be considered argumentative, offensive, or inappropriate.
The explosion of interest in ChatGPT has also triggered worries about data privacy and shadow generative AI, since it must be assumed that employees at all levels are asking ChatGPT questions.
"I am very concerned about what data people put into ChatGPT when they are putting queries into it," says McGillicuddy. "I'm concerned about how that data is used and stored and what rights OpenAI asserts to it."
Keatron Evans, principal cybersecurity advisor at the InfoSec Institute, cautions, "Don't use any protected data or personal information when utilizing or experimenting with AI. For instance, say you have a confidential sales report and want to generate a summary using AI. You upload the report, but now the data that you entered is stored on ChatGPT's servers, and it will use that data to answer queries from other people, possibly exposing your company's confidential information."
He adds that hackers could exploit ChatGPT code vulnerabilities to steal user information or find a way to steal that data directly from the app itself.
"Regardless, uploading sensitive data or information could violate privacy laws, which would result in your company possibly facing large fines," Evans points out.
Another, fuzzier issue concerns ownership of intellectual property. Let's say an employee uploads proprietary software code to ChatGPT, asking it to debug the code or add a piece of functionality. That code goes into the ChatGPT database. What happens when someone else at a different company queries ChatGPT and the output includes chunks of that original code?
Samsung recently banned the use of ChatGPT by employees after an engineer "accidentally leaked internal source code by uploading it to ChatGPT," according to an internal memo.
How should organizations acquire generative AI?
The vendor community is racing to provide generative AI for enterprises. The usual suspects (Google, AWS, Microsoft, IBM) are leading the way, since they have the resources to develop these large language models. But just about every vendor is figuring out a way to embed generative AI into its platforms.
IDC analyst Nancy Gohring says, "Vendors in ITSM and ITOps are already applying generative AI to a variety of use cases, predominantly with the aim of improving tool usability, speeding response times, and expanding use cases. While ensuring human oversight is critical, particularly given the immaturity of the technology, enterprises should seriously consider embracing new offerings as a way to improve efficiencies."
For IT leaders, a sensible approach would be to work with existing platform partners to determine how the vendor roadmap aligns with their enterprise's style of technology acquisition. Does the enterprise have the willingness and the readiness (from a skills, funding, or data-processing infrastructure perspective) to spin up its own generative AI capabilities?
This approach might deliver competitive advantage, but it also takes time and effort.
Or would it make more sense to leverage existing technology providers that are embedding generative AI into their platforms? For example, Salesforce has released Einstein GPT, which brings generative AI capabilities to the Salesforce CRM platform as well as to the Slack app.
Of course, similar to the way organizations have adopted hybrid-cloud architectures, it's likely that enterprises will adopt a mixed model that encompasses both cloud and on-prem deployments. One option would be to build new generative AI apps in the AWS cloud, using the AWS infrastructure, large language models, and toolsets. Another would be to build custom generative AI functionality on top of vendor CRM or ERP platforms.
What some of the leading vendors are offering.
Here are some examples of how key vendors are ramping up their generative AI capabilities:
Microsoft
Microsoft, the major investor in OpenAI and its technology partner, is embedding ChatGPT technology throughout its portfolio. Microsoft has introduced Microsoft 365 Copilot, which integrates generative AI into Office productivity apps like Word, Excel, Outlook, and Teams. A feature called Business Chat combines a user's calendar, emails, chats, documents, contacts, etc., into one knowledge base that can be queried in natural language. Microsoft has announced Dynamics 365 Copilot, which brings generative AI to CRM and ERP. And Microsoft has embedded ChatGPT functionality into its Bing search engine.
In a related development, OpenAI has partnered with GitHub to offer a commercial product called GitHub Copilot, a code-writing assistant that supports more than a dozen programming languages.
Google
Google has announced Duet AI for Google Workspace, which embeds its generative AI (Google's large language model is called PaLM) into the Google productivity suite (Gmail, Google Docs, Sheets, Slides, and Meet).
Google is also putting generative AI functionality into its Chrome browser. It has a platform called Vertex AI that enables enterprises and SaaS vendors to build their own applications; a service to help enterprises build AI-powered chat and search applications based on Google's foundation models; and an answer to GitHub's Copilot called Duet, designed to help developers write code.
Cisco
Cisco has built its own generative AI and recently announced plans to buy AI startup Armorblox. Cisco says it will embed generative AI capabilities across its entire portfolio, starting with its Security Cloud service and Webex collaboration tool.
Cisco says that by the end of the year a generative AI-based policy assistant will be able to interact with network admins to help them optimize policy management. Security and IT administrators will be able to describe granular security policies for tasks like firewall management, and the assistant will evaluate how best to implement policies across the security infrastructure.
Juniper
Juniper is integrating ChatGPT with its Marvis virtual network assistant. Marvis, driven by the AI technology Juniper got with its acquisition of Mist Systems, can detect and describe network problems in natural language. By adding ChatGPT capabilities, Juniper is expanding the role of Marvis, augmenting its documentation and support options to help IT administrators.
Vendors collaborate to provide on-prem generative AI.
Dell Technologies and Nvidia have launched an initiative called Project Helix to help enterprises build and manage generative AI models on-premises. The goal is to support the complete generative AI lifecycle, including infrastructure provisioning, modeling, training, fine-tuning, application development, and deployment.
Dell will contribute its PowerEdge servers, and Nvidia will provide the GPUs, networking, and software, including its NeMo large language model framework and NeMo Guardrails software for building secure chatbots.
And as the generative AI vendor ecosystem grows, we will undoubtedly see more collaborations like the one recently announced between Nvidia and ServiceNow, in which the companies will work together to build applications for specific business processes and workflows. ServiceNow says the first results of the collaboration will be aimed at building generative AI applications for enterprise IT departments, including trouble-ticket summarization, auto-routing and auto-resolution, incident severity prediction, intent detection, semantic search, and root cause analysis.
In addition, we can expect that the hyperscalers, as well as a new generation of SaaS startups, will deliver vertical-specific generative AI applications. For example, Google has launched a suite of generative AI-based tools for medical imaging. And Gartner predicts that in two years, "more than 30% of new drugs and materials will be systematically discovered using generative AI techniques, up from zero today."
McKinsey says, "CEOs should consider exploration of generative AI a must, not a maybe. Generative AI can create value in a wide range of use cases. The economics and technical requirements to start are not prohibitive, while the downside of inaction could be quickly falling behind competitors."
What enterprise IT leaders should be doing now.
Set policies.
According to Gartner, "Your workforce is likely already using generative AI, either on an experimental basis or to support their job-related tasks." To avoid "shadow" usage, Gartner recommends crafting a usage policy rather than enacting an outright ban.
The policy should simply state: Don't input any personally identifiable information, sensitive information, or intellectual property.
And the company should put monitoring tools in place. The vendor community is already stepping up, with companies like Zscaler, ExtraHop, and LayerX offering ways to monitor and control employee usage of ChatGPT.
Set guardrails.
Forrester analyst Mike Gualtieri says organizations need to set policies that enable developers to experiment with generative AI, but also to establish guardrails, such as requiring that the code go through a security scanning tool and having a human double-check it. If something goes wrong, "You can never blame GPT; it's your responsibility," cautions Gualtieri.
Educate and train.
The buzz around ChatGPT has generated excitement but also some fear, associated with practical concerns like cybersecurity attacks generated by these systems as well as more emotional worries that machines are coming to replace us.
We posed that very question to ChatGPT and got this: "My purpose is to assist and augment human capabilities, rather than replace them. While AI technologies like ChatGPT can automate certain tasks and provide support, they are not designed to replace the expertise and decision-making abilities of IT professionals."
It's important for enterprise IT leaders to educate employees across the company on the potential for generative AI to make their lives easier. It's also important to launch training programs for IT staffers.
Beef up security.
McGillicuddy predicts that "malicious actors will be the most prolific users of generative AI for the next year or so." The concern is that generative AI will be able to write convincing phishing emails and help hackers create deepfakes.
Enterprise security leaders need to up their game when it comes to anti-phishing defenses and security tools like data-leak protection.
Create interdisciplinary teams.
IT exists to support the business, so IT leaders need to create cross-functional teams that can identify and prioritize the business processes that stand to benefit most from generative AI.
Develop a long-range strategy.
Enterprise IT leaders need to develop an organization-wide generative AI strategy that answers several fundamental questions: How can it help cut costs? How can it make employees more productive? How can it create new business opportunities? What's the best way for us to acquire it? How can we implement it in a way that avoids the pitfalls?
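As a concrete starting point for the "set policies" advice above, the rule against putting personally identifiable information into prompts can be partially enforced with a pre-submission screen. Below is a minimal sketch in Python; the patterns and the screen_prompt helper are illustrative assumptions, not a complete data-leak-prevention solution, and real deployments would lean on the commercial monitoring tools mentioned earlier:

```python
import re

# Illustrative patterns only; a real DLP tool covers far more data types.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the kinds of likely PII found in a prompt.
    An empty list means the prompt passed this (basic) screen."""
    return [label for label, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

findings = screen_prompt("Summarize the ticket from jane.doe@example.com")
if findings:
    print("Blocked: prompt appears to contain", ", ".join(findings))
```

A screen like this catches only the obvious cases; it complements, rather than replaces, the usage policy, employee education, and human review the article recommends.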