In the rush to deploy generative AI, many organizations are sacrificing security in favor of innovation, IBM warns.

Among 200 executives surveyed by IBM, 94% said it's important to secure generative AI applications and services before deployment. Yet only 24% of respondents' generative AI projects will include a cybersecurity component within the next six months. In addition, 69% said innovation takes precedence over security for generative AI, according to the IBM Institute for Business Value's report, The CEO's guide to generative AI: Cybersecurity.

Business leaders appear to be prioritizing the development of new capabilities without addressing new security risks – even though 96% say adopting generative AI makes a security breach likely in their organization within the next three years, IBM stated.

"As generative AI proliferates over the next six to 12 months, experts expect new intrusion attacks to exploit scale, speed, sophistication, and precision, with constant new threats on the horizon," wrote Chris McCurdy, worldwide vice president and general manager of IBM Security, in a blog post about the study.

For network and security teams, the challenges could include battling the large volumes of spam and phishing emails generative AI can create; watching for denial-of-service attacks driven by those large traffic volumes; and looking out for new malware that is more difficult to detect and remove than traditional malware.

"When considering both likelihood and potential impact, autonomous attacks launched in mass volume stand out as the greatest risk. However, executives expect hackers faking or impersonating trusted users to have the greatest impact on the business, followed closely by the creation of malicious code," McCurdy stated.

There's a disconnect between organizations' understanding of generative AI cybersecurity needs and their implementation of cybersecurity measures, IBM found.
"To prevent expensive—and unnecessary—consequences, CEOs need to address data cybersecurity and data provenance issues head-on by investing in data protection measures, such as encryption and anonymization, as well as data tracking and provenance systems that can better protect the integrity of data used in generative AI models," McCurdy stated.

To that end, organizations anticipate significant growth in spending on AI-related security. By 2025, AI security budgets are expected to be 116% greater than in 2021, IBM found. Roughly 84% of respondents said they will prioritize generative AI security solutions over conventional ones.

On the skills front, 92% of surveyed executives said their security workforce is more likely to be augmented or elevated to focus on higher-value work than to be replaced.

Cybersecurity leaders need to act with urgency in responding to generative AI's immediate risks, IBM warned. Here are a few of its recommendations for corporate execs:

Convene cybersecurity, technology, data, and operations leaders for a board-level discussion on evolving risks, including how generative AI can be exploited to expose sensitive data and allow unauthorized access to systems. Get everyone up to speed on emerging "adversarial" AI – nearly imperceptible changes introduced to a core data set that cause malicious outcomes.

Focus on securing and encrypting the data used to train and tune AI models. Continuously scan for vulnerabilities, malware and corruption during model development, and monitor for AI-specific attacks after the model has been deployed.

Invest in new defenses specifically designed to secure AI.
While existing security controls and expertise can be extended to secure the infrastructure and data that support AI systems, detecting and stopping adversarial attacks on AI models requires new methods.

EMA: Security concerns dog AI/ML-driven network management

Security is also a key concern for enterprises considering AI/ML-driven network management solutions, according to a recent study by Enterprise Management Associates (EMA).

EMA surveyed 250 IT professionals about their experience with AI/ML-driven network management solutions and found that nearly 39% are struggling with the security risk associated with sharing network data with AI/ML systems.

"Many vendors offer AI-driven networking solutions as cloud-based offerings. IT teams must send their network data into the cloud for analysis. Some industries, like financial services, are averse to sending network data into the cloud. They'd rather keep it in-house with an on-premises tool. Unfortunately, many network vendors won't support an on-premises version of their AI data lake because they need cloud scalability to make it work," EMA stated in its report, AI-Driven Networks: Leveling up Network Management.
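The data protection measures both reports point to, such as encryption and anonymization, can start with something as simple as pseudonymizing sensitive identifiers before telemetry ever leaves the premises for a cloud analytics service. Below is a minimal Python sketch of that idea; the key, field names, and record format are hypothetical illustrations, not taken from the IBM or EMA reports.

```python
# Illustrative sketch only: pseudonymize IP addresses in network telemetry
# with a keyed hash before records are sent to a cloud AI/ML service,
# so the service never sees raw addresses.
import hmac
import hashlib

SECRET_KEY = b"rotate-me-regularly"  # hypothetical on-prem secret, never shared


def pseudonymize_ip(ip: str) -> str:
    """Map an IP to a stable pseudonym; the same input yields the same token."""
    digest = hmac.new(SECRET_KEY, ip.encode(), hashlib.sha256).hexdigest()
    return "ip-" + digest[:12]


def anonymize_record(record: dict) -> dict:
    """Replace sensitive fields, keeping analytics-relevant fields intact."""
    cleaned = dict(record)
    for field in ("src_ip", "dst_ip"):  # hypothetical field names
        if field in cleaned:
            cleaned[field] = pseudonymize_ip(cleaned[field])
    return cleaned


record = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "bytes": 4096}
cleaned = anonymize_record(record)
```

Because the pseudonyms are stable, traffic patterns remain analyzable by a cloud-hosted model, while the raw addresses stay in-house, addressing the reluctance EMA observed in industries like financial services.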