The recent Interop show in Las Vegas was awash with big ideas and the latest and greatest technologies, a healthy sign that IT is alive and well and that the industry is brimming with innovation.

Keynoter Mark Templeton, CEO of Citrix Systems, put it this way: we're facing change at all levels, at the core technology layer, in the business role IT is playing, and even where end users are concerned, given their penchant for bringing their own products to work.

His suggestion: accelerate the shift to service delivery. IT should no longer be about owning devices, Templeton says; the new IT is about aggregating and owning services. "We need an end-to-end model for stitching stuff together to deliver these services," he says. It shouldn't matter whether the services are delivered from your data center or a cloud provider's, or whether they are consumed at a company desktop or on an iPad in someone's home.

Of course, in this new dynamic IT world, data center machine assets are increasingly virtual, more specialized, tend to move around, and generate ever more server-to-server links across the data center network.

This so-called East-West traffic already accounts for some 75% of data center traffic, and adding more server-to-server links will increase latency in traditional three-tier data center networks, which have to route that traffic North-South, up and down the tiers, says HP's Michael Nielsen, director of network solutions. The answer much discussed at the show: network fabrics that reduce server-to-server paths to a single network hop (the first sketch below illustrates the hop-count difference).

Nielsen says HP acquired fabric technology when it bought 3Com. HP's Intelligent Resilient Framework (IRF) allows access-layer switches to exchange traffic directly without having to climb the tiers. And of course Juniper, Brocade and others were at the show touting new fabric capabilities. Brocade Vice President Ken Cheng says the fabrics his company is delivering today stand apart because they allow buyers to start small and grow.

There was also a lot of chatter at Interop about OpenFlow, an emerging standard that specifies how servers can manipulate the data plane (the forwarding tables) of switches and routers. OpenFlow is intended to simplify the management of traffic flows, particularly between data centers (the second sketch below shows the basic match/action idea).

NEC, which has been participating in OpenFlow research for three years, was at Interop with one of the first purpose-built OpenFlow-capable switches, the 48-port (10/100/1000Mbps) PF5240, which won a best of show award. The company also showed flow-control server software that makes it easy to create virtual networks across a range of devices.
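To make the fabric argument concrete, here is a back-of-the-envelope sketch (mine, not HP's or Brocade's) of the hop counts involved. The topology, the agg_of mapping, and both function names are illustrative assumptions; the point is simply that a flattened fabric turns a four-hop East-West path into a single hop.

```python
# Rough hop-count comparison: classic three-tier tree vs. a flattened
# fabric. Purely illustrative; real designs vary by vendor and topology.

def three_tier_hops(src_access: int, dst_access: int,
                    agg_of: dict[int, int]) -> int:
    """Switch-to-switch hops between two access switches in a tree."""
    if src_access == dst_access:
        return 0      # same top-of-rack switch
    if agg_of[src_access] == agg_of[dst_access]:
        return 2      # up to a shared aggregation switch, back down
    return 4          # up through the core tier and back down

def fabric_hops(src_access: int, dst_access: int) -> int:
    """In a one-hop fabric, any two access switches talk directly."""
    return 0 if src_access == dst_access else 1

# Four access switches, paired under two aggregation switches:
agg_of = {1: 1, 2: 1, 3: 2, 4: 2}
print(three_tier_hops(1, 4, agg_of))  # 4 hops via the core
print(fabric_hops(1, 4))              # 1 hop across the fabric
```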
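And here is a minimal, self-contained sketch of the OpenFlow idea itself: a controller pushes match/action rules into a switch's flow table, and the switch forwards packets by consulting that table. The class and field names (FlowEntry, dst_subnet, out_port and so on) are simplifications of mine, not the actual OpenFlow wire protocol.

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict        # header fields to match, e.g. {"dst_subnet": ...}
    out_port: int      # action: forward matching packets out this port
    priority: int = 0  # higher-priority entries win on overlap

class FlowTable:
    """Stands in for the data plane of one OpenFlow-capable switch."""
    def __init__(self) -> None:
        self.entries: list[FlowEntry] = []

    def add_flow(self, entry: FlowEntry) -> None:
        # In real OpenFlow this would arrive as a FLOW_MOD message
        # sent by the controller over a secure channel.
        self.entries.append(entry)
        self.entries.sort(key=lambda e: e.priority, reverse=True)

    def forward(self, packet: dict) -> int | None:
        # The first (highest-priority) matching entry decides the port;
        # a real switch might punt unmatched packets to the controller.
        for entry in self.entries:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.out_port
        return None

# A "controller" steering inter-data-center traffic onto port 5:
table = FlowTable()
table.add_flow(FlowEntry({"dst_subnet": "10.2.0.0/16"}, out_port=5, priority=10))
table.add_flow(FlowEntry({}, out_port=1))  # catch-all default

print(table.forward({"dst_subnet": "10.2.0.0/16"}))  # -> 5
print(table.forward({"dst_subnet": "10.9.0.0/16"}))  # -> 1
```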
And these were only some of the innovations at the show. Templeton is right: Change is rampant at all levels.