10G muscle at ILM

Opinion
Aug 27, 2003 | 7 mins
Networking

* Industrial Light & Magic to use as many as 200 links of 10 Gigabit Ethernet

In this summer’s box-office attraction “The Hulk,” mild-mannered scientist Bruce Banner morphs from an ordinary guy into a phosphorescent green, bulging super power – part hero, part monster who makes moviegoers cower in their seats. Terrified viewers have the visual-effects experts at Industrial Light & Magic (ILM) to thank for their racing pulses. And just as Banner has been supercharged with gamma radiation, those ILM artists have been empowered with a hulked-up network. 

In March, ILM brought two 10G Ethernet modules – for its Foundry Networks BigIron Layer 3 backbone switches – into its already bandwidth-intensive enterprise network architecture. A Foundry shop since 1999, ILM uses seven BigIron switches (five 8000s, one 4000 and one 15000), about 40 FastIron II – plus a handful of FastIron II+ – closet switches, and more than 80 FastIron 4802 stackable switches, says Raleigh Mann, manager of network operations at the San Rafael, Calif., post-production company.

* No time for massaging data

The BigIron modules are linked via a 10G trunk, though throughput tops out at 8G bit/sec because of limitations of Foundry’s current architecture, Mann says. (Foundry addresses the throughput limitation in the BigIron MG8, a terabit switching and routing platform introduced in April.) This 10G trunk serves as the conduit between ILM’s production network – home to all of the artists’ render processors and file servers – and the data center. Previously, Mann handled the bulkiest data transfers by trunking together multiple 1G BigIron ports.
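To illustrate the trunking approach Mann replaced, here is a minimal sketch – not Foundry’s actual implementation – of per-packet distribution across a trunk’s member ports, which lets one heavy flow use more than one link’s worth of bandwidth but, as Mann notes later, splits that flow across cards:

```python
from itertools import cycle

# Illustrative sketch (not Foundry's actual algorithm): successive
# packets rotate round-robin across the trunk's member ports. A
# single heavy flow can then exceed one port's speed, at the cost
# of being split across multiple line cards.

def distribute(packets, n_links):
    """Assign each packet, in sequence, to the next member link."""
    ports = cycle(range(n_links))
    return [(pkt, next(ports)) for pkt in packets]
```

With four member links, five packets of one flow land on ports 0, 1, 2, 3, 0 – the flow is spread across the whole trunk, which is exactly the split that made per-flow monitoring awkward.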

Of course, ILM artists know little of 10G or any other network technology. All they know is that they can create ever-bigger data sets that move swiftly across the network. They demand nothing less, Mann says.

“Every 18 months, our work follows Moore’s Law. As computers get cheaper, disk space gets cheaper and our productions can move faster,” he says. To make The Hulk and other movie creations realistic, ILM artists pour on complex textures in increasingly dense simulations. “More horsepower doesn’t mean we can just work faster; it means our work can get more complex. Fabric, hair, water, flame, smoke, sand – we can basically simulate more particles and render a much more complex 3-D image,” he says, emphasizing that the network has to keep up. “Our artists know the slightest difference in network performance.”

Before implementing the 10G routers, available bandwidth for any particular data flow topped out at the 4G bit/sec Mann achieved by trunking the 1G ports. That data is primarily Network File System (NFS) traffic moving back and forth between Linux file servers and client desktops. ILM carries the NFS traffic using User Datagram Protocol (UDP) rather than the more feature-rich, and resilient, TCP. “We don’t have the benefits of TCP for [flow-control] features such as backoff and sliding window . . . but we decided the best way to get performance out of the network is by using UDP,” Mann says. “We don’t have the luxury of dropping packets. In our industry, with our turnaround time, we don’t have the time to be figuring out problems or massaging data through the network.”
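The tradeoff Mann describes can be seen in a minimal localhost sketch: a UDP datagram is handed to the network with no handshake, backoff or sliding window, so the sender never stalls – but nothing in the protocol will retransmit a lost packet. (This is a generic illustration, not ILM’s NFS configuration.)

```python
import socket

# Minimal UDP exchange on loopback. Unlike TCP, there is no
# connection setup, flow control or retransmission: sendto() fires
# the datagram and returns immediately. On a lossy link, recvfrom()
# could wait forever for a packet that never arrives.

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # let the kernel pick a free port
recv.settimeout(5)                   # guard against indefinite blocking

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"render-chunk", recv.getsockname())

data, _ = recv.recvfrom(1024)        # reliable only because loopback
                                     # doesn't drop packets
send.close()
recv.close()
```

On a network engineered not to drop packets, as Mann describes, UDP’s lack of overhead becomes pure upside.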

Mann’s mind is eased considerably with 10G. “What I don’t have to think about anymore is whether any particular traffic flow that exceeds a gigabit is split up across cards. I can’t quantify a performance improvement with the 10G, but it does make monitoring the traffic between the data center and the production network a lot easier,” he says. “Having a single interface to watch for traffic flow rather than looking at four different interfaces and adding it all up makes it a lot easier to get a real-time feel and a quick snapshot of what’s going on.”
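The monitoring simplification amounts to this: with four trunked 1G ports, a real-time view means sampling four counters and summing them; with one 10G port, a single counter tells the story. A small sketch, using invented sample rates rather than real ILM data:

```python
# Hypothetical per-port byte rates (bytes/sec) for a four-port 1G
# trunk -- sample values for illustration, not measured ILM traffic.
trunk_counters = {"gi1/1": 9.1e8, "gi1/2": 8.7e8,
                  "gi1/3": 9.4e8, "gi1/4": 8.9e8}

def trunk_rate(counters):
    """Aggregate rate across all member ports of a trunk."""
    return sum(counters.values())

# With a single 10G interface, the same view is one counter read --
# no per-port sampling, no summing, no skew between samples.
ten_gig_counter = {"te1/1": 3.61e9}
```

The totals match, but the 10G case needs one snapshot instead of four taken at slightly different instants – which is the “real-time feel” Mann is after.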

* 10G at the core

Mann expects his current network architecture, including the one 10G link, to see ILM through 2004. The network becomes obsolete after that, coincident with the planned June 2005 move of ILM headquarters from San Rafael to the Presidio, a former military complex in San Francisco. The move gives Mann his third opportunity to build a new network for ILM.

Mann is well into a design for the new Presidio network, and he knows 10G will play the starring role. Having used the 10G-infused BigIrons as proof of concept, he determined that the technology is stable and that performance is on par with expectations. “The look and feel is still Ethernet. We’ve had no disappointments or issues with the migration,” he adds.

The Presidio network he envisions, and has begun talking to vendors about, will deliver 1G-bit/sec connections to 3,800 or so user machines, using multiple 10G links between each switching closet and a “very large mesh of 10G at the core for redundancy and performance,” Mann says. He anticipates close to 200 10G interconnects on the network.

As if that’s not mind-boggling enough, Mann says 40G is within reason, too. “I definitely see 40G within the core and to some of the higher-density closets,” he explains. “We’re looking at over a terabit capacity of core aggregate bandwidth potentially. Our data sets will just keep getting bigger as people continue to expect more realistic images.”
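A back-of-the-envelope check bears out the terabit claim: roughly 200 interconnects at 10 Gbit/sec each gives 2,000 Gbit/sec of raw aggregate capacity, before counting any 40G links.

```python
# Raw aggregate capacity of the planned Presidio core,
# using Mann's figure of "close to 200" 10G interconnects.
links = 200
gbit_per_link = 10
aggregate_tbit = links * gbit_per_link / 1000  # 2.0 Tbit/sec
```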

Mann proudly notes this high-water mark: In May, network traffic hit 96 terabytes per day as artists cranked out the visual effects for “The Hulk” and two other summer hits – “Terminator 3” and “Pirates of the Caribbean.” He has no doubt that this volume will more than double in the next 18 months, as ILM artists crank out the visual effects for “Star Wars Episode III,” due out in theaters in May 2005.
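For scale, 96 terabytes per day works out to a sustained average of roughly 8.9 Gbit/sec around the clock (assuming decimal terabytes, 1 TB = 10^12 bytes) – close to the 8G bit/sec ceiling of the current trunk:

```python
# Convert 96 TB/day of traffic into an average bit rate.
bytes_per_day = 96e12            # assuming 1 TB = 1e12 bytes
bits_per_day = bytes_per_day * 8
seconds_per_day = 86400
avg_gbit_per_sec = bits_per_day / seconds_per_day / 1e9
# roughly 8.9 Gbit/sec, averaged over the full day
```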

The ILM environment is a bit bizarre, says Mann, who joined the company a little more than five years ago from AOL. Comparing the amount of data carried on it – a single company’s network – to that carried on the largest ISP’s network, he says, “To be on parallel with that much data in such a dense environment is very strange.”

* The best of times

ILM will begin building the Presidio network and occupying the new data center in January 2005. That means ILM will need to buy the network gear before this time next year.

While Mann could not share budget info, he noted that he anticipates spending “a whole lot of money” on the 10G architecture planned for the Presidio site. “We already invited the players to play, and certainly price will be one of the deciding factors on who wins,” as it was when it came down to choosing his 1G vendor, Mann says. Cisco had been in contention for that network, but lost out because Foundry Networks delivered four times the throughput at approximately half the price, he says.

Meanwhile, Mann is having a great time designing this next-generation network. He calls the task “daunting but fascinating,” adding: “We’re all having fun just conceptualizing this.”