Outsourcers aim to aid new data center

To win business, companies offer a host of options for utility infrastructure.

Data center outsourcing is a different game from what it was earlier this decade. Contracts are shrinking from six to 10 years to three to five years, according to Deloitte Consulting. Single-provider megadeals are on the wane, Gartner reports. And while cost reduction is still a big reason for signing outsourcing deals, many corporations are no longer just interested in passing on "their mess for less," says Jeff Kaplan, president of Thinkstrategies. Increasingly, he says, IT executives look toward outsourcing providers for help migrating from legacy environments to the more flexible and lower-cost platforms of the new data center.

"Most people are feeling overwhelmed with the whole 'new data center' idea," Kaplan says. "It's pretty complicated, with dozens of technologies involved, and very few corporations have enough internal expertise to sort it out."

IBM, one of the leading outsourcers, sees a troika of concerns driving IT executives to consider outsourcing their new data center migration, says Mike Riegel, Big Blue's director of on-demand business. "Business leaders today are simultaneously interested in growing revenue, cutting costs and being more flexible - never before have we seen them do all three at the same time," he says.

Outsourcers are responding by incorporating more new-data-center technology into their service offerings. Here's a look inside five leading outsourcing operations.

CSC: Results-Driven Computing Grid

Computer Sciences Corp. (CSC), which does not make products for the new data center, plays up the benefits of vendor agnosticism.

In the storage arena, for example, CSC relies mainly on Hitachi Data Systems, whose TagmaStore Universal Storage Platform virtualizes heterogeneous storage systems into one pool, and EMC, which recently began offering a network-based storage virtualization system called Invista. It also works with a range of other vendors, including Fujitsu, HP, IBM and Sun. It tops off its storage offering with automated provisioning and management software from Creekpath Systems, says Chris Helme, CSC's vice president of global production operations.

In grid computing, CSC recognized that many users couldn't commit to the large capital investment often required. It developed the hardware-independent Results-Driven Computing Grid, which can run any x86 operating system and any software stack "in a defenselike security environment," Helme explains.

Other new-data-center-type technologies in use at CSC include high-availability server clusters from HP, IBM, Sun, Veritas and other vendors, and capacity on demand for storage and computing. Beyond such traditional methods as spare CPUs, dynamic workload management and spare capacity, CSC uses a proprietary method for expanding and contracting the computing environment to match business requirements, Helme says. CSC calls this Results-Driven Computing.

For its bandwidth-on-demand offerings, including MPLS, IP Security VPNs, virtual LANs, VoIP and QoS, CSC uses technology platforms from a variety of vendors, including Check Point Software, Cisco, Juniper Networks, Nortel and Packeteer, and various global carriers, including British Telecom and Global Crossing.

Thinkstrategies' Kaplan considers CSC's vendor-independence a big plus, but says the outsourcer could do a better job articulating its utility-computing strategies and success stories. "It hasn't been in the game as much" as IBM, HP and Sun, he says.

EDS: Agile Enterprise Architecture

Electronic Data Systems' (EDS) biggest challenge in new-data-center outsourcing is breaking out of its traditional "megadeal" approach and creating a cost structure that can accommodate smaller, more flexible engagements. "It's been struggling to develop a coherent, consistent and compelling utility computing story that competes against IBM and HP," Kaplan says.

In that regard, the company last year created the Agile Enterprise Architecture (AEA). EDS has built a standard technology infrastructure on which to run the bulk of its customers' IT operations. Technology partners include Cisco for routers, EMC for storage hardware and Sun for servers.

Other components of the AEA plan are:

  • A partnership with Sun for automatic provisioning of Windows, Linux or Unix on the vendor's AMD Opteron-based blade servers.

  • Twenty-nine best practices for tasks such as server consolidation, utility computing, storage virtualization and application renewal.

  • Use of the Microsoft .Net platform as the preferred operating environment.

"EDS now has a competitive list of, 'If we provide this function, there's a price for setting up each server and the ability to buy partial racks,' vs. 'Don't worry about how much you need, but here's a great big bill each month,' " says Dan Twing, a research vice president with Enterprise Management Associates. "It's more a la carte."

At the network level, EDS is building a global IP/MPLS backbone that will serve as the foundation for grid and utility computing when it becomes operational this quarter. EDS says the goal is to manage systems and applications from any point in the world. "We will be able to virtualize our computing capacity between data centers here and in Germany, as well as our call centers and application delivery centers," says Gordon Martin, vice president of EDS's communications services.

EDS also is adding more applications to the list of packaged applications that it hosts, as well as virtualizing these applications.

The company enables physical virtualization by combining Cisco InfiniBand Server Switches and Multifabric Server Switches to allow an entire fabric of servers to share virtualized pools of I/O and storage resources. Cisco VFrame Server Fabric Virtualization software provides the provisioning and orchestration of compute resources over this unified fabric. To enable logical virtualization, EDS primarily uses VMware software but has started to add Sun's Solaris 10 containers. It also intends to use Microsoft's server virtualization product eventually.

"There's an extreme amount of interest in commingling workloads . . . to take advantage of non-used cycles in the environment,"says Larry Lozon, vice president of hosting and storage services at EDS. Through the global network, application processing can be divided up among an EDS data center, the client site or a third-party environment.

EDS is extending its time-tested mainframe-metering model to the server-based world, and several clients are road-testing it. "What we'll be getting to is, 'Here's a particular application service, and it costs this much per hour to run, along with add-on services in terms of back-up/restore capabilities,'" Lozon says. He says some of that may roll out in 2006.

HP: Adaptive Enterprise Strategy

You can't discuss HP's new-data-center outsourcing approach without immediately talking about its Adaptive Enterprise Strategy, the name for its infrastructure scheme that automatically adjusts to support business needs. This strategy includes the following components of the new data center:

  • Grid computing: HP is developing technologies for intelligent enterprise grids that can process mission-critical applications while navigating corporate firewalls and networks.

  • Server clustering: HP's suite of server clustering technologies and services includes HP Serviceguard for Unix and Linux, HP Unified Cluster Portfolio for High Performance Computing, HP OpenVMS Cluster software and HP BladeSystem/Systems Insight.

  • Capacity on demand: HP offers a range of usage-based pricing capabilities, including Instant Capacity, Temporary Instant Capacity, pay per use, managed storage solution, an Exchange utility and a PC utility.

  • Server virtualization: HP can pool, share and allocate resources across its Integrity, BladeSystem, ProLiant and NonStop servers.

  • Storage virtualization: HP StorageWorks Enterprise Virtual Array Systems can adjust storage allocation size while applications are running.

  • Management software: HP OpenView helps manage IT and telecom resources in an autonomic fashion. This includes application management, business management, configuration management, governance, infrastructure management and more.

HP uses its new-data-center technologies internally, says Nick van der Zweep, HP's director of virtualization and utility computing. So when internal or external customers ask for a new service, the service can be carved out of an already-running pool of resources and be up and running in 24 hours, van der Zweep says. If a project gets canceled, those resources can be used for other applications.

HP also is strong in its capacity-on-demand capabilities. "It talks the right language of business vs. technical metrics and solutions," Kaplan says.

The HP Utility Meter monitors server and storage usage rates, then feeds those figures into a billing and mediation system so HP can charge based on active CPUs or gigabytes used on a daily or monthly basis. It can create custom measurements, too. For DreamWorks, for example, it charges per rendered animation frame. And for Amadeus, it charges per airline seat booked. When either infrastructure is not being used at peak capacity, HP can use it to run other applications.
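The metering model described here boils down to multiplying a measured usage quantity by a negotiated unit rate, whatever the unit happens to be - CPU-days, gigabytes, rendered frames or booked seats. A minimal sketch of that arithmetic (all rates and usage figures below are hypothetical examples, not HP's actual pricing):

```python
# Sketch of usage-based billing: charge = measured usage x unit rate.
# All unit names, rates and usage figures are hypothetical examples,
# not HP's actual metrics or pricing.

def monthly_charge(usage: float, rate_per_unit: float) -> float:
    """Charge for one metered resource over the billing period."""
    return usage * rate_per_unit

# Standard metrics: active CPU-days and gigabytes of storage used.
cpu_bill = monthly_charge(usage=120, rate_per_unit=3.50)      # 120 CPU-days
storage_bill = monthly_charge(usage=800, rate_per_unit=0.40)  # 800 GB

# Custom metrics work the same way: frames rendered, seats booked.
frames_bill = monthly_charge(usage=10_000, rate_per_unit=0.25)

total = cpu_bill + storage_bill + frames_bill
print(f"Total: ${total:,.2f}")
```

The point of the mediation layer is simply that the billing formula stays the same while the metered unit changes per customer.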

Although usage-based computing works mainly on HP's own hardware, van der Zweep emphasizes that its outsourced data centers contain "a tremendous amount of" non-HP hardware. "We have some capability to shift resources around on other vendors' equipment using processes and software development that we don't sell to customers," van der Zweep says.

IBM: Virtual hosting

IBM offers virtual servers, a top-selling storage-virtualization product, network virtualization services and management for the new data center.

The company's Virtual Hosting outsourcing strategy encompasses the following:

  • A multiplatform virtual server that lets corporations choose between 100% virtual hosting or a mix of virtualization and traditional hosting services. Customers can choose a pay-per-usage plan.

  • Virtual server services for xSeries, pSeries and iSeries IBM servers. With these servers, multiple applications that previously resided on separate physical servers are run in partitioned, secure and logically isolated areas of a single device. As demand escalates, so does the ability to add processing capacity.

  • Virtual server services for the eServer zSeries 990 running Linux.

  • Virtual networking services, including on-demand, usage-based firewall, load balancing and routing. These resources are pooled, and capacity is directed to applications or servers as needed. Router, firewall and load-balancing services are consolidated onto a single hardware platform, the virtual services switch, replacing more than 100 stand-alone appliances.

  • Virtual infrastructure services, such as online database backup, storage on demand, backup and restore, content caching and VPN connectivity.

The linchpin of this strategy is the Universal Management Infrastructure (UMI), a complex architecture that uses Tivoli management software, WebSphere and other code to enable IBM to provision and automate service delivery.

The architecture includes 41 automated and standardized processes, including server provisioning (which IBM says can happen in a matter of hours), problem management (which uses autonomic computing to route alerts from applications or business processes) and configuration management (which can, for instance, automatically add resources from another server farm if the external Web site is hitting 80% utilization).
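The 80%-utilization rule IBM describes for configuration management is a classic threshold policy: when measured utilization crosses a high-water mark, move servers from a spare pool into the farm until projected utilization falls back under the mark. A sketch of that logic under assumed names - nothing here is UMI's actual API:

```python
# Threshold-based capacity rule like the one described for UMI's
# configuration management. Function and variable names are
# hypothetical; only the 80% trigger comes from the article.
import math

SCALE_UP_THRESHOLD = 0.80  # add capacity when utilization hits 80%

def servers_to_add(utilization: float, active: int, spare_pool: int) -> int:
    """Return how many spare servers to move into the farm so that
    projected utilization drops back under the threshold."""
    if utilization < SCALE_UP_THRESHOLD or spare_pool == 0:
        return 0
    # Total load is active * utilization; we need enough servers so
    # that load / total_servers < SCALE_UP_THRESHOLD.
    needed_total = math.ceil(active * utilization / SCALE_UP_THRESHOLD)
    return min(needed_total - active, spare_pool)

# A 20-server farm at 85% utilization needs 2 more servers.
print(servers_to_add(0.85, active=20, spare_pool=5))  # -> 2
```

The autonomic part is simply running a rule like this continuously against live monitoring data instead of waiting for an operator.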

The benefit of this autonomic environment is a 15% to 20% reduction in infrastructure costs and a 30% reduction of application costs, Riegel says.

Users, too, expect UMI to lead to cost savings, says Rob de Haas, global head of data center services for ABN AMRO, the Dutch bank that recently signed a five-year, $2.2 billion global outsourcing contract with IBM and four other outsourcers to build the bank's on-demand IT infrastructure.

"UMI will enable ABN AMRO to pay only for the computing power we use," he says."It mitigates the risk of outages by applying IT resources where they are needed, raising service levels and improving application availability, which is critical to the bank."

Under the UMI umbrella, IBM also says it can support multivendor servers and the major operating systems, including Linux, Solaris, HP-UX, AIX and Windows.

Unisys: 3D Visible Enterprise

Phil Smith, vice president of outsourcing and infrastructure
