Where are we now?

We map the industry's progress on the journey to the new data center.

When you think about it, you have only two ways to travel. You can select a destination and make travel plans to get you there. Or you can focus on the journey and wander with no real endpoint in mind. When it comes to migrating today's IT infrastructure to its next-generation form, the new data center, network executives already on their way agree that the destination method is superior. Nevertheless, many say they feel forced into wandering.

"When I think of enterprise architecture, I think of our general direction on how we want to build and deploy systems. We plan about five years out, but we keep changing that plan about every month, it seems, to remain responsive to the business," says Dave Cogswell, director of technical services at Data-Tronics, the IT subsidiary of ABF Freight System, in Fort Smith, Ark. Being nimble lets IT be driven by business need but it makes the feasibility of accurate, long-range planning "very debatable," Cogswell contends.

Yet experts argue that, given the enormous changes required to build the new data center, a long-term architecture goal becomes a must. Only with a destination in mind can network executives make intelligent decisions about the technologies they need to build a new data center infrastructure.

"You can sit back, follow the herd and stay in the mainstream. [But] it's too hard to interpret the world in that kind of read-and-react approach. If you see a change that's dramatic coming down the pipe, it behooves you to have a plan and to have an architecture for where you are trying to get," says Geoffrey Moore, author and managing director of consulting firm TCG Advisors. Moore advocates an architectural plan that looks as far as 10 years out. (See story .)

The new data center has moved from the conceptual idea it was a year ago to a production infrastructure that today's early adopters are testing and deploying. As the new data center evolves, all agree that a long-range plan will be based on two ideas. First, the new data center relies on a new business model, the extended enterprise, which is in itself the basic building block for yet another emerging business model - the global ecosystem. Second, the basis for the extended enterprise's (and, eventually, the global ecosystem's) IT infrastructure will be change management.

The network executive's objective when building and supporting the extended enterprise business model is to create an environment that offers equally safe, reliable and productive IT systems for all user constituents, be they employees, customers, suppliers or what have you. The challenge in executing on this promise will be providing a consistent level of service while remaining extremely adaptable - the only constant will be change. Vendors and analysts across the board are evangelizing their own visions and terminology for this infrastructure: adaptive (HP), grid (Oracle), on demand and autonomic (IBM), dynamic IT (IDC).

All of these visions share a common core: the infrastructure as a system of componentized, automated and hot-swappable services glued together with best practices. This is a vastly different mindset from the traditional view of the infrastructure as a mix of wires, boxes and software.

With the traditional viewpoint, the technology stars as the basis of the architecture vision. But because technology changes more rapidly than a large company can examine, test and deploy it, such a viewpoint has become problematic.

"Rather than being an inhibitor of change, IT should be an enabler. The fact that it isn't is ironic since flexibility was a key expectation of IT from the beginning," says Russ Daniels, CTO of software and adaptive enterprise at HP. "We think it is important that you approach [adaptive enterprise architecture] holistically, not as a technology concern. We don't think it is about moving from this messaging system to that messaging system or rewriting code from this to that. You will become adaptive as a result of thousands of architecture decisions you make over a period of years."

Daniels learned this while re-engineering HP's IT architecture after the Compaq merger. "At the time of the merger, we were spending 72% of our IT budget on operations and 28% on what we identified as innovation - new capabilities available to the business. The objective of HP is to be at more of a 50/50 split," he says.

To get there, Daniels realized, HP had to identify and automate the manual tasks where human error was driving up maintenance costs. Automation then becomes an ideal lens through which to envision the entire new enterprise architecture, he says.

Users who have tried this approach agree. "The whole change management process - utility computing - is the way to view IT," says one IT executive from a Fortune 500 insurance company that recently completed an 18-month project to automate server and application management with data center automation software from Opsware.

"Our soft spot was systems, where IT head counts per server were high. We wanted to increase the number of servers being supported by one person, and we looked at automation tools to help us automate processes," says the user, who asked not to be identified.

"Tools like Opsware will help us get where we want to go without re-engineering the whole infrastructure." He now is looking at how change management can become the foundation of his company's next-generation, utility computing-type architecture.

A new model

As network executives build out their extended enterprises into global ecosystems, they are learning that their industry will dictate much of their architecture needs.

"A lot of what happens in on-demand only comes to life in an industry context . . . formed and inspired by industry thinking," says John Lutz, global vice president for on-demand at IBM. To that end, vendors such as IBM, HP and Oracle have developed detailed architectural maps for most of the major industries. These maps are excellent starting points for long-range planning, users say

Many companies in the utility, automotive and electronics manufacturing industries already are shifting their architectures to the new data center model and can serve as examples. Key indicators of how far along a particular industry is include the adoption rate of bellwether technologies such as virtualization, XML, Web services and service-oriented architectures, says Frank Gens, senior vice president of research at IDC. Other factors include how volatile the market is and how information-intensive the industry is.

While the particular industry might dictate the specific architectural map, several constants are emerging as the basis for the new data center. For instance, the infrastructure is becoming a series of services that can be built in-house, outsourced or both.

The generic model comprises five layers, plus two cross-layer elements:

  • The client layer. As always, the client includes computers, voice devices and combination voice/data devices. But a different kind of client emerges in the new data center - one that will end up generating most of the traffic. These will be machine-to-machine automated devices, such as anything sporting a radio frequency identification (RFID) chip (appliances, automobiles and pallets of goods).

With so many more clients on the average network, the infrastructure will be handling a "quantum leap" of data, Gens says.

This is one of the pressing reasons that a long-term move to the new data center will be necessary. With today's client/server-based architecture, "if you throw an order [of magnitude] or two more of data on top of it, the thing breaks. Let's build a more solid foundation," he says.

  • The network data access services layer. This includes LANs and other private enterprise networks; leased-line WAN services; public access networks such as the Internet and metropolitan-area networks; and, eventually, business-class public networks, such as infranets.
  • The federated identity management services layer. This layer provides the security backbone. Through a centralized service, users and machines gain access to specific business services. Automated roles-based provisioning will play a big part, quickly granting and revoking access based on job function or type of machine (see the sketch following this list).
  • The business process management layer. This is emerging as the heart of the infrastructure, composed of three pieces - management, analytics and the business processes.

In Daniels' view, network executives would be wise to distinguish between business processes offered as services (say, a funds-transfer application available to all financial applications) and processes that ensure a service's availability (such as performance monitoring tools and network/storage/CPU capacity). This distinction lets network executives understand the costs involved in providing specific business services. With that, they can conduct ROI analysis or build an adaptive, utility computing structure complete with charge-backs. Analytics should also be an element: IT systems that help the business side document performance and predict future needs.

  • The virtualized infrastructure services layer. This is the computational and data access foundation on which all else rests and is the layer that most companies currently are exploring or implementing.

Servers and storage are managed as a single pool of resources. Over time, grid computing will be added to many enterprise infrastructures, boosting CPU power for computation-intensive tasks.

In fact, the concept of virtualization is being applied across the entire model - making network capacity available on demand, and purchasing enterprise applications such as sales force automation or enterprise resource management in application service provider form.

Two architectural beams will support all the layers: industry-standard best practices and procedures, such as the IT Infrastructure Library (ITIL), and systems and security management, for monitoring and performing QoS functions.
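To make the roles-based provisioning idea in the federated identity layer concrete, here is a minimal Python sketch. The roles, resources and principals are hypothetical, and this is not any particular identity product; it simply shows the behavior the layer is meant to centralize - assigning or revoking a role grants and removes all of its entitlements in one step, for people and machines alike.

```python
# Minimal sketch of roles-based provisioning. Role names, resources and
# principals below are hypothetical examples, not a real directory schema.

ROLE_ENTITLEMENTS = {
    "claims-adjuster": {"claims-app", "document-store"},
    "supplier-portal-machine": {"edi-gateway"},
    "network-admin": {"monitoring-console", "config-repository"},
}

class IdentityService:
    """Centralized service that maps each principal to a single role."""

    def __init__(self):
        self.assignments = {}  # principal -> role

    def assign_role(self, principal, role):
        if role not in ROLE_ENTITLEMENTS:
            raise ValueError(f"unknown role: {role}")
        self.assignments[principal] = role

    def revoke(self, principal):
        self.assignments.pop(principal, None)

    def can_access(self, principal, resource):
        role = self.assignments.get(principal)
        return role is not None and resource in ROLE_ENTITLEMENTS[role]

if __name__ == "__main__":
    ids = IdentityService()
    ids.assign_role("jsmith", "claims-adjuster")
    print(ids.can_access("jsmith", "claims-app"))   # True
    ids.assign_role("jsmith", "network-admin")      # job change
    print(ids.can_access("jsmith", "claims-app"))   # False - revoked with the old role
```

The design choice worth noticing is that access is never granted to an individual directly; it always flows through the role, which is what makes automated provisioning and deprovisioning fast and auditable as job functions change.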

Getting from here to there

Of course, envisioning an ultimate architecture is the easy part; actually getting the IT organization there is another matter. The migration path requires testing out architecture planning over smaller time frames and in smaller chunks, says Scott Richert, director of network services for Sisters of Mercy Health System, in St. Louis.

Richert is restructuring IT to standardize and automate server and application management and to implement identity management, among other overhaul projects. He creates three-year technology road maps for individual portions of his infrastructure using what he calls "life-cycle phasing." Technologies are investigated and piloted during the "sunrise" phase, he says. Buying and implementation take place during the "mainstream" phase. During the "sunset" phase, IT supports existing systems but does not procure new ones. After sunset, the technologies are retired.
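As an illustration of how life-cycle phasing could be tracked, the short Python sketch below gives each technology sunrise, mainstream, sunset and retirement dates and reports which actions (pilot, purchase, support only, retire) are appropriate today. The technologies and dates are hypothetical examples, not Richert's actual road map.

```python
# Illustrative life-cycle phasing road map. Technologies and dates are
# hypothetical; each entry marks when a phase begins.
from datetime import date

ROADMAP = {
    "server-virtualization": {
        "sunrise": date(2024, 1, 1),
        "mainstream": date(2024, 9, 1),
        "sunset": date(2026, 6, 1),
        "retire": date(2027, 6, 1),
    },
    "tape-backup": {
        "sunrise": date(2019, 1, 1),
        "mainstream": date(2019, 6, 1),
        "sunset": date(2024, 1, 1),
        "retire": date(2025, 1, 1),
    },
}

def phase_of(tech: str, today: date) -> str:
    """Map today's date onto the technology's life-cycle phase."""
    w = ROADMAP[tech]
    if today < w["sunrise"]:
        return "not yet on the map"
    if today < w["mainstream"]:
        return "sunrise: investigate and pilot only"
    if today < w["sunset"]:
        return "mainstream: approved for purchase and rollout"
    if today < w["retire"]:
        return "sunset: support existing systems, no new procurement"
    return "retired"

if __name__ == "__main__":
    today = date(2025, 3, 1)
    for tech in ROADMAP:
        print(f"{tech}: {phase_of(tech, today)}")
```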

While Richert has yet to create a single, long-term life-cycle phasing plan for his entire network, he is inching toward one. "I'll do multiple parts to a map . . . and I encourage my team to take the time to map and to take it seriously. The data center infrastructure is being built around a business process framework and we need to plan for key metrics such as cost vs. benefit, redundancy and meaningful service-level metrics, tied to business processes. That's not easy to do, but it's important," he says.

Plus, as any traveler can tell you, the value of an accurate map cannot be overstated.

