The network behind the new data center

Feature
Feb 16, 2004 | 8 mins
Data Center, MPLS, SAN

A smarter, more robust infrastructure will turn once disparate network, computing and storage resources into a unified system.

As the song goes, “you gotta have heart.” But a strong ticker is no good without healthy veins and arteries. None of that will do much good without some brains.

The same is true for the new corporate data center: Advanced computing power at the heart of a company can be wasted if network bandwidth, intelligence and traffic control are not optimized. A brainier infrastructure can make networks, data centers and storage act as a unified system, and allow information to travel more efficiently for on-demand applications.

Cisco CEO John Chambers recently painted a picture of this future: “Networking opens up many opportunities in the data center, where devices will tie together in ways they haven’t before,” he said at a December analyst conference. He described how networks would be tightly integrated with computing resources, with the goal of making it transparent where storage, servers and data applications reside. “That is tailor-made for networking,” he said.

Arriving at this networking transparency will take technologies such as Multiprotocol Label Switching (MPLS), intelligent traffic management and acceleration, and the integration of storage-area networks (SAN) and LANs. And the ever-growing need for bandwidth within data centers, coupled with falling Gigabit Ethernet prices, will drive an uptake in 10G Ethernet as the backbone technology of choice, says Jay Pultz, a research vice president at Gartner. Big bandwidth never goes out of style, he adds.

Certainly researchers at Lawrence Livermore National Laboratory (LLNL), a lab run by the University of California and the Department of Energy in Livermore, Calif., agree. As LLNL migrates from monolithic supercomputing platforms with large symmetric multiprocessing machines to clusters of commodity-based servers in its data center, it has found network upgrades necessary as well. Deploying large server clusters, each with 1G bit/sec network connections, has pushed the lab to use Cisco 10G Ethernet switches as the core backbone technology, says Dave Wiltzius, network division leader at LLNL.

Additionally, the lab is testing 10G server adapters and hopes to have some server clusters running at 10G soon, he says. (The hundreds of clusters of two- and four-processor Intel/Linux boxes are proving to be as powerful as, and less costly than, traditional supercomputing machines.)

Other network topology changes will come in the distribution layer, consisting of server connections and switches that aggregate LAN traffic at the network’s edge. The ability to plug desktop switches and servers directly into the 10G core will give LLNL cost and operational advantages, Wiltzius says. “It could help us optimize [the distribution layer] of the network and get rid of different types of bottlenecks,” he says.

Along with this new data center architecture, Wiltzius and his staff are looking to make the network a more virtually configurable asset. For this, LLNL has tapped MPLS, a label-switching standard that lets packets be tagged, routed and shaped as individual flows across an IP network. MPLS, which LLNL is turning on now in its core switches, will let the lab more easily slice up, prioritize and secure the torrents of traffic running across the 10G backbone. MPLS also will let LLNL create miniature labs and data centers virtually and on the fly, using the giant pool of bandwidth in the network core.

“We have this big bandwidth that is very useful, but we’d like to carve it up to address internal security and privacy needs with [service-level agreement]-type agreements between different users,” Wiltzius says, noting that MPLS deployment will take place throughout this year. Because MPLS already runs at 10G speeds in Internet core networks, the technology should be a good fit for the lab’s private 10G infrastructure, he adds.
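
How that carving might work is easiest to see in miniature. The sketch below is purely illustrative, assuming made-up flow classes, labels and bandwidth shares rather than LLNL’s actual MPLS configuration; it only shows the idea of classifying traffic into labeled flows that each get a slice of the 10G core.

    from dataclasses import dataclass

    @dataclass
    class LabelPolicy:
        label: int          # MPLS-style label assigned to a class of flows
        share_gbps: float   # slice of the 10G core reserved for this class
        priority: int       # queueing priority (lower = served first)

    # Hypothetical slices of a 10 Gbit/sec backbone for virtual "mini labs"
    POLICIES = {
        "physics-cluster": LabelPolicy(label=100, share_gbps=4.0, priority=1),
        "visualization":   LabelPolicy(label=200, share_gbps=3.0, priority=2),
        "general-lan":     LabelPolicy(label=300, share_gbps=3.0, priority=3),
    }

    # Made-up mapping of source subnets to flow classes
    SUBNET_CLASSES = {"10.1.": "physics-cluster", "10.2.": "visualization"}

    def classify(src_addr: str) -> LabelPolicy:
        """Assign a label policy to a packet based on its source address."""
        for prefix, name in SUBNET_CLASSES.items():
            if src_addr.startswith(prefix):
                return POLICIES[name]
        return POLICIES["general-lan"]

    if __name__ == "__main__":
        policy = classify("10.1.44.7")
        print(f"label {policy.label}, {policy.share_gbps} Gbit/sec, "
              f"priority {policy.priority}")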

From data center to data edge

While big bandwidth might never go out of style, sometimes it isn’t appropriate. For ShopNBC.com, another user pushing data center intelligence onto the network, Web acceleration and caching were optimum technology choices. In its data center, ShopNBC.com maintains dozens of Windows-based Web servers for selling merchandise tied to NBC programming such as the Olympics, popular shows like “Friends” and other broadcasts. “We’re interested in taking [applications and data] that once had to be fetched from a server [in the data center] and pushing them onto the network and closer to the edge,” says Steven Craig, vice president of interactive technology at the Minneapolis company.

Web acceleration and caching appliances from NetScaler let ShopNBC.com do this. “We can take assets that are highly static, like the navigation bar on ShopNBC.com that only changes once or twice a year, and put them on network platforms like NetScaler,” Craig says.

By having the cache/acceleration appliance deliver static content, ShopNBC.com doesn’t have to “throw more servers” into its data center to accommodate peak shopping seasons or NBC promotions that drive up traffic. “What you want to avoid are round trips to a database server” that focuses on delivering dynamic content, Craig says.
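
The mechanics are simple enough to sketch. The fragment below is a minimal illustration in Python rather than anything from NetScaler: highly static assets are fetched from the origin once and then served from a cache, so repeat requests never trigger a round trip to the servers behind it. The URL and time-to-live are hypothetical.

    import time

    CACHE = {}                      # url -> (expires_at, body)
    STATIC_TTL = 180 * 24 * 3600    # assets that change only once or twice a year

    def fetch_from_origin(url: str) -> str:
        """Stand-in for the expensive round trip to a data center server."""
        return f"<content of {url}>"

    def get(url: str) -> str:
        now = time.time()
        entry = CACHE.get(url)
        if entry and entry[0] > now:        # cache hit: no origin round trip
            return entry[1]
        body = fetch_from_origin(url)       # cache miss: fetch once, then reuse
        CACHE[url] = (now + STATIC_TTL, body)
        return body

    if __name__ == "__main__":
        get("/nav-bar.html")    # first request hits the origin
        get("/nav-bar.html")    # second request is served from the cache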

ShopNBC.com also uses the NetScaler appliance to accelerate Secure Sockets Layer (SSL) encryption. By handling SSL encryption of data center traffic in the network rather than at each server, ShopNBC.com reduces the strain on Web server processing in the data center and saves on SSL license fees. Those amount to $1,000 per data center node, Craig says. “Instead of buying several dozen SSL licenses, I now buy one a year and put it on the NetScaler,” he says.
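
The offload pattern itself is straightforward, and the sketch below illustrates it in Python: one front-end process terminates SSL/TLS with a single certificate and forwards plain HTTP to a back-end Web server, so the servers behind it carry neither the encryption load nor a per-server license. The addresses, port and certificate file names are assumptions for illustration, not ShopNBC.com’s setup.

    import socket, ssl, threading

    BACKEND = ("10.0.0.21", 80)       # plain-HTTP Web server inside the data center

    def pipe(src, dst):
        """Copy bytes one way until the sending side closes."""
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        finally:
            dst.close()

    def handle(client_tls):
        backend = socket.create_connection(BACKEND)
        threading.Thread(target=pipe, args=(client_tls, backend), daemon=True).start()
        pipe(backend, client_tls)

    def main():
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("site.crt", "site.key")   # the one licensed certificate
        with socket.create_server(("0.0.0.0", 8443)) as srv:
            with ctx.wrap_socket(srv, server_side=True) as tls_srv:
                while True:
                    conn, _ = tls_srv.accept()   # TLS handled here, not on the Web servers
                    threading.Thread(target=handle, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        main()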

While Web and traditional network technologies are coming together at ShopNBC.com, migrating to a new data center at Massachusetts General Hospital (MGH) has brought about the merger of storage and the network infrastructure.

The radiology department at MGH, part of Partners Healthcare Group in Boston, has seen storage needs balloon since it installed a filmless imaging system two years ago. Now all X-rays, magnetic resonance imaging and computerized axial tomography scans are produced and stored digitally. At 450 exams a year, without compression, this equates to 18T bytes of data storage a year, says Tom Schultz, chief engineer for medical imaging at the hospital. “And that’s not even including any reports and documentation associated with the image files,” he says.

The hospital uses a cluster of Digital Linear Tape (DLT) drives to archive its digital pictures of broken bones and body scans. But this makes retrieving and working with images hard for doctors. “If you have a doctor sitting in front of a monitor who wants to go offline [to view a tape-stored image], he or she has to wait 2 to 6 minutes for the image to be available,” Schultz says, explaining that tape cartridges cannot be mounted and read as quickly as files stored on hard disk. Over a day, this cuts into the time a doctor has for patients.

So MGH is working with its image system vendor to incorporate more traffic load balancing into the image archiving workflow, Schultz says. Scanned images enter the Philips Picture Archive Communication System through gateways on a first-come, first-served basis. The gateways are Sun servers running software that processes and routes images and their associated files, called “studies,” into a database and the tape storage archive. MGH expects to improve performance by load balancing among the nine gateways. “We’d like to have it so that all data doesn’t go down one single pipe,” Schultz says. The plan would let the gateways operate under one virtual IP address and spread jobs among the machines.
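
The dispatching logic MGH is after can be sketched in a few lines. The fragment below is only an illustration of the idea, assuming nine hypothetically named gateways and a least-busy policy; it is not how the Philips software or any particular load balancer implements it.

    GATEWAYS = [f"gateway-{n}" for n in range(1, 10)]   # the nine Sun gateways
    active_jobs = {gw: 0 for gw in GATEWAYS}            # studies in flight per gateway

    def pick_gateway() -> str:
        """Send the next study to the gateway with the fewest jobs in flight."""
        return min(active_jobs, key=active_jobs.get)

    def submit_study(study_id: str) -> str:
        gw = pick_gateway()
        active_jobs[gw] += 1        # a real system decrements this on completion
        print(f"study {study_id} -> {gw}")
        return gw

    if __name__ == "__main__":
        for i in range(12):
            submit_study(f"CT-{i:04d}")     # spread across gateways, not one pipe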

Additionally, MGH is evaluating a live “spinning-disk” archive system from start-up ExaGrid that would let it store images on an array of commodity disks that can sit anywhere on an IP network. The disks could be managed as logical storage volumes, movable and reconfigurable virtually. The ExaGrid system could keep studies in a semi-archived state so they could be recalled quickly, Schultz says. In addition, because the disks could be anywhere on the network, MGH could replicate data stores over a WAN.

“We’re hoping that with ExaGrid, we’ll get one place to dump images,” Schultz says. “Behind the scenes, it will allow us to swap in [network-attached storage] devices and let us grow with no headaches.” Schultz says the hospital will decide whether to install the ExaGrid system in the first quarter of this year.
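
The “semi-archived” behavior Schultz describes amounts to a two-tier fetch, sketched below under assumed thresholds; the function names and the 90-day window are illustrative, not ExaGrid’s design. Recently used studies stay on spinning disk for quick recall, while anything older falls back to the slow tape path.

    import time

    DISK_TIER = {}                   # study_id -> (last_access, image_data)
    RECALL_WINDOW = 90 * 24 * 3600   # keep studies on disk ~90 days after last use

    def retrieve_from_tape(study_id: str) -> bytes:
        """Stand-in for a slow DLT mount; in practice a 2- to 6-minute wait."""
        return f"<images for {study_id}>".encode()

    def fetch_study(study_id: str) -> bytes:
        now = time.time()
        if study_id in DISK_TIER:                 # fast path: spinning disk
            _, data = DISK_TIER[study_id]
        else:                                     # slow path: tape archive
            data = retrieve_from_tape(study_id)
        DISK_TIER[study_id] = (now, data)         # keep it semi-archived on disk
        return data

    def expire_old_studies():
        """Drop studies untouched for the recall window back to tape-only."""
        cutoff = time.time() - RECALL_WINDOW
        for sid in [s for s, (ts, _) in DISK_TIER.items() if ts < cutoff]:
            del DISK_TIER[sid]

    if __name__ == "__main__":
        fetch_study("MR-2004-0117")   # first access comes from tape
        fetch_study("MR-2004-0117")   # second access is served from disk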

Clearly, data center networking is way past the stage of simply connecting servers and hubs together. As companies change their views of the data center from “computer room” to “strategic corporate asset,” the importance of data center optimization will rise. And varied as they are, the decisions being made at companies such as LLNL, ShopNBC.com and MGH will become increasingly common. If strong new data centers are at the heart of corporations, a smart network infrastructure is a must.