By Pierre Baudet, business systems manager, New Balance Athletic Shoe

Sneaker company steps up to a unified SAN backbone

Feature
Nov 03, 2003 | 4 mins
Data Center

New Balance’s SAN backbone has to operate as an always-on utility, guaranteeing availability around the clock.

New Balance’s growth – annual sales up from $210 million to $1.3 billion in 10 years – has had profound implications for our IT infrastructure. This came sharply into focus recently when we had to build a new storage-area network to accommodate a new business-process tool.

Before we bought the eMatrix Collaboration Platform from MatrixOne, product teams at the Boston-based company batch-processed their CAD drawings between Massachusetts, the West Coast, Asia and Europe.

Today, with product life-cycle management (PLM) software riding on a unified SAN backbone, 250 users in Taiwan and Lawrence, Mass., are collaborating via virtual workspaces in global, cross-functional project teams, instead of bouncing design data back and forth piecemeal.

In the next phase of the project, another 250 users in Europe and elsewhere will connect to the PLM system. The goal is to cut product development cycles in half by integrating the engineering, business and manufacturing sides of the house into a single process.

This integrated approach, however, requires that global teams share updated data from the same pool of storage that marketing, sales, CRM and financial systems draw upon. New Balance’s former SAN simply wasn’t up to the task, so we decided to build a new one.

The new SAN backbone had to operate as an always-on utility, guaranteeing availability around the clock. It had to be flexible enough to let us prioritize applications at different times, and it had to scale so that the demands of any one application would not slow the flow of data to any other – especially a business-critical application.

Our strategy was a best-of-breed approach, designed for performance and cost-effectiveness.

We already had a strong Ethernet/IP LAN/WAN infrastructure in place, but our existing SAN had reached its limits: a Compaq RA8000 storage array with no available ports, an eight-port HP/Compaq Fibre Channel switch with no available ports, six application servers connected to the LAN via Ethernet, and one Compaq tape library with a SCSI-to-Fibre Channel controller.

Our first step was to replace the RA8000 storage array with a new HP-branded Hitachi disk array. We also incorporated a Sandial storage-network backbone switch, which connects multiple servers to individual ports.

In our old SAN, port limitations meant the application servers and databases were single-attached, which limited our flexibility: we couldn’t configure separate production, test and development environments without building separate SAN islands.

We designed the new production environment for high availability. It is dual-attached via Gigabit Ethernet on the front end and dual-attached via Fibre Channel where it connects to the SAN. Dual Cisco 6509 switches handle all the Ethernet connectivity. The Web tier is front-ended by a Web load balancer and Secure Sockets Layer accelerator from Array Networks, drawing on redundant Web servers that format data as HTML. Behind these sit two IBM WebSphere 4.x application servers clustered with Veritas Software, in front of two Oracle database servers, also clustered, this time with Oracle 9i RAC.
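To make the redundancy concrete, here is a minimal sketch, in Python, of the kind of health-check logic a front-end load balancer applies: probe each redundant back end and send traffic only to servers that answer. The host names and ports are hypothetical placeholders, not our actual configuration.

```python
import socket

# Hypothetical pool of redundant Web servers behind the load balancer;
# these names and ports are placeholders, not New Balance's actual hosts.
WEB_POOL = [("web1.example.internal", 443), ("web2.example.internal", 443)]

def is_healthy(host: str, port: int, timeout: float = 2.0) -> bool:
    """Probe a back end with a plain TCP connect, the simplest health check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_backend(pool: list[tuple[str, int]]) -> tuple[str, int]:
    """Return the first healthy server. With two redundant Web servers,
    losing either one still leaves the tier able to serve traffic."""
    for host, port in pool:
        if is_healthy(host, port):
            return (host, port)
    raise RuntimeError("no healthy Web servers in pool")
```

The same principle, two of everything with an automatic check in between, repeats at the application and database tiers via the Veritas and Oracle 9i RAC clustering.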

During the project, we also installed new fiber cabling, added patch panels to simplify troubleshooting, and identified the most efficient allocation of disk for each application’s requirements, along with the best way to provide access to that disk.

This architecture let us avoid the complexity of backup-management software, because the intelligence that guarantees performance and availability sits at the storage level. We also decided against the “snapshot” approach because it accesses production disks.

Consolidating the storage and setting policies on the backbone was a better solution. Though users in various parts of the world access our applications and databases around the clock, we have a 12-hour overnight window in Boston when the servers are not in high demand. We use this window to run the tape backup, setting a policy that provisions bandwidth so backups can run without starving other traffic. The policy guarantees the backup a minimum amount of bandwidth even while other users generate activity on the network. This minimizes the potential for backup errors and saves the wear and tear of starting and stopping the array.
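As a rough illustration of that policy, the sketch below models the backbone link split between the backup stream and all other traffic: other applications are capped so the backup never falls below its floor, and the backup absorbs whatever capacity they leave idle. The capacity and floor figures are assumptions for illustration, not actual Sandial settings.

```python
# Illustrative model of the bandwidth-guarantee policy; all figures are
# hypothetical placeholders, not our actual provisioning numbers.
LINK_CAPACITY_MBPS = 1000  # assumed backbone link capacity
BACKUP_MIN_MBPS = 200      # assumed guaranteed floor for the tape backup

def provision(other_demand_mbps: float) -> tuple[float, float]:
    """Split the link between the backup stream and everything else.

    Other traffic is capped so the backup never falls below its floor;
    the backup absorbs any capacity the other applications leave unused.
    """
    other = min(other_demand_mbps, LINK_CAPACITY_MBPS - BACKUP_MIN_MBPS)
    backup = LINK_CAPACITY_MBPS - other
    return backup, other

# Overnight, light traffic leaves the backup nearly the whole link;
# under heavy daytime load it still keeps its guaranteed minimum.
assert provision(100) == (900, 100)
assert provision(950) == (200, 800)
```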

We also created a fail-safe production, test and development architecture that lets us test and develop without eating into bandwidth on the production SAN.
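One common way to enforce that kind of separation on a shared fabric is Fibre Channel zoning, where a host can reach only the storage ports in its own zone. The sketch below illustrates the idea with hypothetical aliases; it is not our actual zone set.

```python
# Illustrative zone map keeping production, test and development apart on
# one shared fabric; all aliases are hypothetical placeholders.
ZONES = {
    "prod": {"servers": {"prod_app1", "prod_app2"}, "storage": {"array_port_0"}},
    "test": {"servers": {"test_app1"}, "storage": {"array_port_1"}},
    "dev":  {"servers": {"dev_app1"}, "storage": {"array_port_2"}},
}

def can_reach(server: str, storage_port: str) -> bool:
    """A server sees a storage port only if both sit in the same zone, so
    test and development I/O can never touch production disks."""
    return any(server in zone["servers"] and storage_port in zone["storage"]
               for zone in ZONES.values())

assert can_reach("prod_app1", "array_port_0")
assert not can_reach("dev_app1", "array_port_0")
```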

Today, our storage backbone is a unified, performance-driven environment with dedicated intelligent switching and connection control.

The benefits include less-intrusive backups; discrete production, test and development environments; improved application performance; and prioritized bandwidth, security and availability.