The heavy buzz around all things AI got louder in the financial reports of networking vendors this quarter, even though AI hasn't made a significant impact on most vendors' financial performance and supply chain challenges remain a more immediate concern.

Vendors such as Cisco, Arista, Juniper, Extreme and HPE's Aruba report that they are shipping more products, thanks to multi-month efforts that include significant product redesigns and relentless work by their supply-chain teams to address component shortages. But the situation is still challenging, and some enterprise customers still face order delays.

"Clearly backlog is coming down as we expected it to this year, but it still remains about 3X what we would normally expect," Rami Rahim, CEO of Juniper, told Wall Street analysts this quarter. Juniper reported about a $2 billion backlog at the start of the year and expects that to be cut in half by the end of its fiscal year in December.

Cisco, too, is still reporting sizable backlog levels but says the situation has improved dramatically.

"The aging of our backlog has continued to improve as the supply situation normalizes, and as expected, increased customer deliveries reduced our year-end backlog to roughly double historical levels as we enter fiscal '24," Cisco CFO Scott Herren told analysts at Cisco's most recent earnings presentation. "That excess backlog will work down in the first half of fiscal '24 with the majority of that being worked off in Q1, by the way," he said.

While backlog and supply chain issues remain a concern, AI development opportunities were the predominant topic for all vendors.

Cisco CEO Chuck Robbins said the company has taken some $500 million in orders for AI Ethernet fabrics, for example.

"The acceleration of AI will fundamentally change our world and create new growth drivers for us," Robbins said.
"Cisco's ASIC design and scalable fabric for AI position us very well to build out the infrastructure that hyperscalers and others need to build AI/ML clusters. We expect Ethernet will lead in connecting AI workloads over the next five years."

Cisco recently unwrapped new high-end programmable Silicon One processors aimed at underpinning large-scale AI/ML infrastructure for enterprises and hyperscalers. AI/ML models have grown from needing a few GPUs to needing tens of thousands linked in parallel and in series. The number of GPUs and the scale of the network are unheard of, Cisco said.

"The AI opportunity is exciting, and as our largest cloud customers review their classic cloud and AI networking plans, Arista is adapting to these changes and doubling down on our investments in AI," Jayshree Ullal, CEO of Arista, told analysts on Arista's recent financial call. "We expect larger clusters and production deployments in 2025 and beyond. In the decade ahead, AI networking will become an extension of cloud networking to form a cohesive and seamless front-end and back-end network."

"We are in the middle of trials for back-end AI networks, leading to pilots in 2024," Ullal added.

Arista and Cisco are betting big that Ethernet will be the tool of the AI networking trade in the future.
They are both part of a recently announced group – the Ultra Ethernet Consortium (UEC), hosted by the Linux Foundation – that's working to develop physical, link, transport and software layer Ethernet advances.

The group, which includes AMD, Broadcom, Eviden, HPE, Intel, Meta and Microsoft, aims to enhance today's Ethernet technology to handle the scale and speed required by AI.

"AI traffic and performance demands are different, as the traffic comprises a small number of synchronized high-bandwidth flows, making them prone to collisions that slow down the job completion time of AI clusters as they connect thousands of GPUs generating billions of parameters," Ullal said.

Arista has been developing features for its EOS networking software – such as intelligent load balancing and advanced analyzers – that can achieve predictable performance, and established Ethernet and IP technology will ultimately be the underpinning architecture to handle that scale, Ullal said.

Cisco's goal is to combine these enhanced Ethernet technologies and take them a step further to let customers set up what it calls a Scheduled Fabric. In a Scheduled Fabric, the physical components – chips, optics, switches – are tied together like one big modular chassis and communicate with each other to provide optimal scheduling behavior.

"As we get scheduled fabric out and these customers get more comfortable moving from InfiniBand to Ethernet, I think that's when we'll start to see the real impact of AI. And maybe it's late '24, but I would suspect into '25 for sure," Robbins said.

In the meantime, one networking competitor that says AI is already affecting its bottom line is Juniper.

"Customers are recognizing Juniper's leadership when it comes to AI-driven operations delivered via a modern microservices cloud," Juniper's Rahim said.
"Revenue from the Mist segment of our business, which comprises products driven by Mist AI [Juniper's core cloud-based management system], had a record quarter, growing by nearly 100% year over year in Q2, with orders growing by nearly 40% year over year."

Juniper recently integrated the ChatGPT AI-based large language model (LLM) with Mist's virtual network assistant, Marvis. Marvis can detect and describe myriad network problems, including persistently failing wired or wireless clients, bad cables, access-point coverage holes, problematic WAN links, and insufficient radio-frequency capacity.

By adding ChatGPT capabilities, Juniper is expanding the role of Marvis and augmenting its documentation and support options to help IT administrators quickly get assistance with problems or challenges, Juniper stated.
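Ullal's earlier point about a small number of high-bandwidth flows colliding can be illustrated with a toy model. Classic ECMP load balancing hashes each flow onto a single equal-cost link, so thousands of small flows average out across links, while a handful of AI "elephant" flows can land on the same link and congest it. The sketch below uses hypothetical flow and link counts and a random assignment standing in for a hash; it is not any vendor's implementation:

```python
import random

def ecmp_load(num_flows, flow_size, num_links, seed=42):
    """Assign each flow to one equal-cost link, as a static
    per-flow hash would, and return the resulting per-link load."""
    rng = random.Random(seed)
    load = [0] * num_links
    for _ in range(num_flows):
        # A whole flow is pinned to one link; it never splits.
        load[rng.randrange(num_links)] += flow_size
    return load

# Same total traffic (8,000 units) across 8 links:
many_small = ecmp_load(num_flows=8000, flow_size=1, num_links=8)
few_elephants = ecmp_load(num_flows=8, flow_size=1000, num_links=8)

# Thousands of small flows spread out nearly evenly; a handful of
# elephant flows leaves some links overloaded and others idle.
print("small-flow spread:", max(many_small) - min(many_small))
print("elephant spread:  ", max(few_elephants) - min(few_elephants))
```

With many small flows, the gap between the busiest and quietest link is a small fraction of the average load. With eight elephant flows, any two hashing to the same link double that link's load while another link carries nothing, which is the collision behavior that smarter load balancing and scheduled fabrics aim to eliminate.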