Tooling up for the new data center

Research analyst Andreas Antonopoulos identifies best-of-breed tools for the next-generation data center.


By now we're all well versed in the attributes of the "new data center," characterized by service-oriented applications running over a virtualized, service-oriented infrastructure. This next-generation data center brings the benefits of agility, lower operational costs, better utilization and rapid application deployment.

Architecturally, a next-generation data center relies on commoditized pools of resources that can be combined to support a variety of applications. This architecture applies to the four critical pillars of data center infrastructure: management, storage, computing and networking. But how can organizations transform their data centers into the next-generation model? The trick lies in translating this vision into a series of discrete, incremental steps - a road map, in other words. The road map comprises four major steps: consolidation, standardization, virtualization and utility.

Consolidation brings multiple devices together into a single location. Standardization ensures that devices present consistent interfaces and protocols. Virtualization abstracts the physical infrastructure, creating one or more virtual (logical) instances running on a single physical resource; one physical server, for example, might be virtualized to appear as eight virtual servers, perhaps running different operating systems. And utility describes an infrastructure that appears as a service for purchase on demand, similar to a utility such as water, electricity or phone service.

These four steps apply across each of the critical infrastructure pillars. An IT organization can start with whichever pillar makes the most sense for it - or even all at once. The best part is that even an incremental step in one area can deliver tangible benefits.

After extensive research on the new data center, Nemertes Research has identified some of the most interesting products that "move the needle" in innovation. For each category, we looked at approximately 30 products - 120 in all - and selected those that best demonstrate customer-driven design that responds to the needs of IT executives implementing the new data center. Each highlighted product adds a key innovation or implements a novel approach to data center design. (Product descriptions and features are derived from vendor documentation. Nemertes has not tested the products highlighted in this story.)

Pillar 1: Management

Management has become an increasingly difficult data center discipline, primarily because real-time management and provisioning have replaced infrastructure design as the means for delivering application performance.

Specifically, in the old data center model, every application had a set of dedicated servers - an infrastructure designed to the tolerances required to deliver that application. In the new data center model, the infrastructure acts as a blank slate: Commoditized servers are loaded with operating system images and applications and provisioned in real time. Management tools are now the key to provisioning tailored infrastructures (composites of servers, storage, networking and security) in real time and in response to demand fluctuations. This new model is far more efficient in terms of utilization and can create cost savings by postponing purchases of servers and disks, but it dramatically increases the demands on management systems. An effective management solution must be able to translate application requirements into a set of configuration directives that can be applied as resources are provisioned. It also must monitor the individual elements, such as servers and disks, and relate an equipment failure to the business processes that resource supports.
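
How might such a tool work? As a rough illustration only - a toy Python sketch, with every name and value invented rather than taken from any vendor's product - the two core functions are translating requirements into directives and mapping element failures back to business processes:

    # Toy model of two core management functions; all names are hypothetical.
    APP_REQUIREMENTS = {
        "order-entry": {"cpus": 8, "storage_gb": 500, "vlan": 110},
        "reporting":   {"cpus": 4, "storage_gb": 2000, "vlan": 120},
    }

    # Which physical elements currently back which business applications.
    ELEMENT_MAP = {
        "server-17":    ["order-entry"],
        "disk-array-3": ["order-entry", "reporting"],
    }

    def provisioning_directives(app):
        """Translate an application's requirements into configuration directives."""
        req = APP_REQUIREMENTS[app]
        return [
            f"allocate {req['cpus']} CPUs from the server pool",
            f"carve {req['storage_gb']} GB from the storage pool",
            f"place interfaces on VLAN {req['vlan']}",
        ]

    def impacted_processes(failed_element):
        """Relate an equipment failure to the business processes it supports."""
        return ELEMENT_MAP.get(failed_element, [])

    print(provisioning_directives("order-entry"))
    print(impacted_processes("disk-array-3"))  # ['order-entry', 'reporting']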

Featured tools: IBM Tivoli Provisioning Manager and Intelligent Orchestrator

IBM Tivoli Provisioning Manager, through IT service management automation packages, automates the manual provisioning and deployment process. Prebuilt automation packages provide control and configuration, as well as allocation and reallocation, of major vendors' products, while user-customized workflows allow for implementation of a company's best practices and procedures. Provisioning Manager reduces the need for just-in-case provisioning and helps automate on-demand provisioning and configuration across an application environment - servers, operating systems, middleware, applications, storage and network devices.

The results are powerful: streamlined IT systems management, improved human and technology resource productivity, higher systems availability and fewer unnecessary infrastructure purchases.

The Intelligent Orchestrator tool extends the provisioning functionality, allowing automation and orchestration of IT resources on demand based on business priorities.

Intelligent orchestration can help an IT manager get better utilization out of existing resources, minimize implementation time and improve responsiveness. The tool monitors the servers, middleware and applications under its control, senses degrading performance and determines an action plan. It can determine where (for which application) a resource is needed and instruct the Provisioning Manager to deploy a server automatically, install the necessary software and configure the network. Using capacity management capabilities, Intelligent Orchestrator can predict resource availability or need and begin the provisioning process, on demand, to help match IT resources with fluctuating workloads.
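
The underlying pattern is a closed control loop: measure, compare against a service-level target, provision when the target is missed. The following Python sketch is purely illustrative - the thresholds, names and telemetry are invented, and it is not IBM's implementation:

    # Illustrative orchestration loop: sense degradation, then provision.
    import random

    POOL = ["spare-1", "spare-2"]   # idle commodity servers
    RESPONSE_LIMIT_MS = 250         # hypothetical service-level target

    def measure_response_ms(app):
        return random.uniform(100, 400)  # stand-in for real telemetry

    def orchestrate(apps):
        for app in apps:
            latency = measure_response_ms(app)
            if latency > RESPONSE_LIMIT_MS and POOL:
                server = POOL.pop()
                # A real orchestrator would hand off to the provisioning
                # tool here: image the server, install software, configure
                # the network, then add it to the application's tier.
                print(f"{app}: {latency:.0f} ms exceeds limit; provisioning {server}")

    orchestrate(["order-entry", "reporting"])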

Runner-up: HP OpenView Management Suite for Servers

HP OpenView Management Suite for Servers, using Radia, is policy-based change and configuration-management software that lets administrators inventory, provision and maintain software and content across heterogeneous server platforms.

Runner-up: BladeLogic Operations Manager

BladeLogic Operations Manager addresses the full life cycle of server management, change control, administration and compliance for a heterogeneous infrastructure.

Pillar 2: Storage

Data is the focus of any data center, and data storage, management and retrieval are critical disciplines. Data center storage encompasses "live" data, which is frequently accessed and processed, and various shades of "near-live" data, which is stored on slower media or offline archival media. Key technologies are storage-area network (SAN), network-attached storage (NAS), virtual SAN (VSAN) and Fibre Channel.

IT executives have had the most success using SANs to implement consolidation and virtualization, and that success offers broader insight into the power of the next-generation model.

Featured tool: Cisco MDS with IBM TotalStorage SAN Volume Controller software

The Cisco MDS 9000 family is an open platform for network-hosted storage applications. Cisco MDS 9000 multilayer directors and switches with IBM TotalStorage SAN Volume Controller software provide the ability to virtualize storage securely, anywhere in the storage network.

Cisco MDS 9000 Fibre Channel directors and switches house the Cisco Caching Services Module (CSM). Each CSM performs the storage virtualization functions of IBM TotalStorage SAN Volume Controller.

VSANs bring higher security and greater stability to Fibre Channel fabrics by providing isolation among devices that are physically connected to the same fabric. With VSANs, multiple logical SANs can be created over a common physical infrastructure, offering the following advantages (a brief sketch after this list illustrates the isolation model):

  • Security - Isolation of fabric services keeps traffic within a single VSAN.

  • Scalability - Ability to add or move individual ports to a VSAN, taking advantage of the physical infrastructure.

  • Role-based access - Role-based permissions for switch configuration or administration are assigned to users on a per-VSAN basis.

  • Host VSANs and disk VSANs - Disks that are put into a pool to be virtualized are contained in their own VSANs. Similarly, multiple VSANs can be created for managing tiered storage. The virtualized logical unit numbers are exposed to the hosts in host VSANs. This limits the scope for potential configuration errors when adding hosts or storage to an environment.
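
The isolation model is simple to state: even on shared hardware, fabric services forward traffic only between ports assigned to the same VSAN. A toy Python model - port names and VSAN numbers invented for the example - captures the rule:

    # Two logical SANs sharing one physical fabric.
    VSAN_MEMBERSHIP = {
        "fc1/1": 10,   # host port in VSAN 10
        "fc1/2": 10,
        "fc2/1": 20,   # storage port in VSAN 20 (virtualized disk pool)
        "fc2/2": 20,
    }

    def can_communicate(port_a, port_b):
        """Fabric services keep traffic within a single VSAN."""
        return VSAN_MEMBERSHIP[port_a] == VSAN_MEMBERSHIP[port_b]

    assert can_communicate("fc1/1", "fc1/2")      # same VSAN: allowed
    assert not can_communicate("fc1/1", "fc2/1")  # different VSANs: isolated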

Runner-up: NetApp V-Series

The network-based NetApp V-Series family virtualizes tiered, heterogeneous storage arrays, allowing companies to leverage the dynamic virtualization capabilities across existing Fibre Channel SANs.

Pillar 3: Computing

Computing is obviously the core data center discipline. This can be seen in the way data centers are often depicted in architecture diagrams: Servers are prominent and other resources such as storage, networking and management are drawn as background. The server-centric view of the data center is changing to a service-centric view. In the new data center model, computing resources (servers) are not dedicated to a single application. Instead, pools of commoditized servers or blade servers are sliced up and provisioned dynamically. Instead of designing a tailored infrastructure for each application, the infrastructure is created on the fly as a composite of different resources.

Virtualization has two faces. Partitioning slices a single server into multiple virtual servers running different applications or even different operating systems; thus, a single physical server can be fully utilized even though each application requires only a small slice of its capacity. Clustering is the opposite face of virtualization: Several servers are combined to deliver one powerful virtual computer for high-performance computing applications. The greatest benefit of server virtualization is the ability to reuse resources for different purposes and to maximize the utilization of each resource, thereby postponing purchases of new servers.
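
A back-of-the-envelope model makes the contrast concrete. The Python sketch below - all capacity figures invented - treats partitioning as slicing one server's CPUs among several virtual machines and clustering as pooling many servers into one aggregate:

    # Two faces of server virtualization, as a toy accounting model.
    PHYSICAL_SERVER_CPUS = 16

    # Partitioning: virtual machines each claim a slice of one physical box.
    vm_slices = {"web": 2, "mail": 2, "db": 8, "test": 4}
    print(f"partitioning: {sum(vm_slices.values())}/{PHYSICAL_SERVER_CPUS} CPUs utilized")

    # Clustering: many boxes present one aggregate virtual computer.
    cluster_nodes = [16, 16, 16, 16]
    print(f"clustering: one virtual computer with {sum(cluster_nodes)} CPUs")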

Featured tools: VMware's ESX Server, VirtualCenter and VMotion

VMware ESX Server transforms physical systems into a pool of logical computing resources. Operating systems and applications are isolated in multiple virtual machines that reside on a single physical server. System resources are dynamically allocated to virtual machines based on need and administrator-set guarantees, providing mainframe-class capacity utilization and control of server resources. Advanced resource management controls allow IT administrators to guarantee service levels across the enterprise.

VirtualCenter provides centralized management of VMware servers. This virtual infrastructure management software offers a central point of control for computing resources, letting users instantly provision servers, globally manage resources and eliminate scheduled downtime for hardware maintenance.

With VirtualCenter, IT organizations can benefit from server consolidation, allocation of resources based on business demand and better disaster recovery: Deployment of critical systems and applications to recovery sites is simplified, and alerts are generated in case of service interruptions.

VMotion, the third VMware technology, enables intelligent workload management so changes can be made dynamically without affecting users. VirtualCenter-managed ESX Server nodes with VMotion let IT executives respond to a variety of data center needs. For example, they can migrate a running virtual machine to a different physical server connected to the same SAN without service interruption or perform zero-downtime maintenance by moving virtual machines around so the underlying hardware and storage can be serviced without disrupting user sessions.
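
The zero-downtime maintenance pattern amounts to draining a host: move each running virtual machine to another host that sees the same SAN, then service the empty hardware. The sketch below is a hedged illustration in Python - host and VM names are invented, and it models the bookkeeping, not VMware's actual migration mechanics:

    # Toy "drain a host for maintenance" model; all names are hypothetical.
    HOSTS = {
        "esx-a": {"vms": ["crm", "wiki"], "san": "san-1"},
        "esx-b": {"vms": ["build"], "san": "san-1"},
    }

    def drain(host):
        """Move every VM off `host` to another host on the same SAN."""
        targets = [h for h in HOSTS
                   if h != host and HOSTS[h]["san"] == HOSTS[host]["san"]]
        for vm in list(HOSTS[host]["vms"]):
            dest = targets[0]
            HOSTS[host]["vms"].remove(vm)
            HOSTS[dest]["vms"].append(vm)
            print(f"migrated {vm}: {host} -> {dest} (user sessions uninterrupted)")

    drain("esx-a")  # esx-a is now empty and safe to service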

Runner-up: Egenera BladeFrame

The Egenera BladeFrame combines the utility of stateless servers with software that virtualizes processing, storage and networking resources into a "computing fabric." Companies can provision systems and allocate resources to optimize mission-critical applications in real time.

Pillar 4: Networking

Data center networking encompasses a much broader range of technologies than those found in campus-area networks or WANs, including:

  • Server-to-server high-performance interconnect networks. These can be based on InfiniBand or Gigabit Ethernet and provide high-speed, low-latency interconnects between servers. This type of interconnect is most often used in high-performance computing environments, where clusters of commoditized servers act as one large supercomputer.

  • Server-to-storage networks. This includes Fibre Channel and iSCSI SANs, as well as NAS.

  • Data center-to-data center interconnects for replication of data between data centers. To maintain high availability, many companies deploy a secondary data center. The primary and secondary data centers are connected using SONET or DWDM routers, which aggregate different network services on a single multi-gigabit optical link.

  • Data center-to-enterprise WAN and LAN networking. This final category includes acceleration products and Wide-Area File System products that provide data center services to the rest of the enterprise WAN and LANs.

Featured product: Juniper DX Application Acceleration (formerly from Redline Networks)
