
Automation know-how

Oct 25, 2004 | 15 mins
Data Center

Emerging automation tools are making the new data center more self-reliant than ever.

The new data center, with its rapid rate of change and growing complexity, demands software that integrates seamlessly to intelligently automate a range of IT management tasks.

True end-to-end automation in the new data center would also eliminate the chance that human errors could cause outages or performance problems. Without such overarching automation, collecting data from multiple sources, making sense of it, putting it into a common format and then knowing what action to take based on business policies will challenge many IT shops in the coming years.

“It will take time, money and know-how about the capabilities available from vendors and those that can be used in-house, but the technology will eventually be available and a culture shift will happen. Automation will mean we can make more services available, at a lower cost, with more accuracy – and that really matters,” says Janice Newell, CIO of Group Health Cooperative in Seattle.

Understanding automation resource by resource is the first step in that process.


Desktop clients

Automation works best where multiple, simple tasks must be performed repeatedly. More often than not, that means automating at the user client.

“Desktops are a good place to start if you want to get your feet wet automating,” Newell says.

She uses HP’s Radia software distribution tool (acquired by HP with its purchase of Novadigm in April) to push software updates and patches to more than 50,000 end users at 30 healthcare clinics across Idaho and Washington. Automated software distribution provides the nonprofit healthcare organization a way to get all the disparate sites the same technology, Newell says. “In our previous world, we had to have someone go out and address the problem manually. Now we just re-image the workstation from a central location” and send multiple updates on a one-to-many basis, she adds.

Besides HP, vendors such as Altiris, Computer Associates and Reflectent Software offer tools that automate software distribution based on pre-defined rules. These tools can scan machines for applications, usage, license compliance and patch information. Because they keep an updated repository of information pertaining to each desktop, many products also can detect changes to a machine. The tools verify that any change syncs up with policies and can take automated actions to remedy problems. For example, the software might deny network access to a client machine on which it has detected the removal of anti-virus software.
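The rule-based check described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual product logic: a hypothetical per-machine inventory record is compared against a required-software policy, and violations yield remediation actions such as denying network access.

```python
# Sketch of rule-based desktop policy enforcement. The inventory format,
# policy and action names are invented for illustration; real tools work
# from a central repository of per-desktop state.

REQUIRED_SOFTWARE = {"anti-virus"}

def check_policy(machine):
    """Return the remediation actions for one machine's inventory record."""
    actions = []
    installed = set(machine["installed"])
    missing = REQUIRED_SOFTWARE - installed
    if missing:
        # A change that violates policy triggers an automated remedy,
        # e.g. denying network access until the software is restored.
        actions.append(("deny_network_access", machine["name"]))
        for pkg in sorted(missing):
            actions.append(("push_install", pkg))
    return actions

fleet = [
    {"name": "desk-001", "installed": ["anti-virus", "office-suite"]},
    {"name": "desk-002", "installed": ["office-suite"]},  # AV was removed
]

for machine in fleet:
    for action in check_policy(machine):
        print(action)
```

A compliant machine produces no actions; the machine missing its anti-virus software is quarantined and queued for a reinstall.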

Some toolmakers provide best practices and process models for defining what automated steps are needed. But in many scenarios, IT managers must identify a process, define it in the software and update any changes to the process or the IT components. Don’t overlook how much work this entails, Newell says.

“The hurdles we encountered most were around our processes and policies. A lot of people liked to have control of their own systems and the ability to change things,” Newell says. “In the automated world, individuals can’t have that control and flexibility on a case-per-case basis. They need to follow enterprise-wide policies.”

Server infrastructure

Automation capabilities are more extensive at the server level than at the desktop. Server automation includes provisioning of new machines, doling out virtualized resources and perhaps providing a foundation on which to build more automation.

Server software vendors such as BladeLogic and Opsware promise to enable automation by combining automated tasks. This would help corporate IT managers roll out new servers, distribute software updates and patches, and get IT staff started on documenting their more frequent fixes in software.

For example, Opsware offers template software for helping IT managers build automation applications for customer processes and proprietary systems. The idea is to move IT shops away from manually executing custom-written scripts and turn such tasks over to the software, which would detect when it needs to take the pre-defined actions. The value comes when IT managers can automate scripting, provisioning and patching together and see the benefits of linking separate jobs with tools or policies, says Tim Howes, the vendor’s CTO.
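The shift from hand-run scripts to software-triggered actions can be sketched as a simple rule engine. This is a conceptual illustration only, not Opsware's actual API: each rule pairs a detection check with a pre-defined action, and the engine fires whichever rules match a server's current state.

```python
# Minimal sketch of detect-then-act server automation. Server fields,
# rule names and actions are invented for illustration.

def needs_patch(server):
    return server["patch_level"] < server["required_patch"]

def apply_patch(server, log):
    log.append(f"patching {server['name']} to level {server['required_patch']}")
    server["patch_level"] = server["required_patch"]

RULES = [(needs_patch, apply_patch)]

def run_automation(servers):
    """Replace manual script execution: for each server, detect and act."""
    log = []
    for server in servers:
        for detect, act in RULES:
            if detect(server):
                act(server, log)
    return log

servers = [
    {"name": "web-01", "patch_level": 3, "required_patch": 5},
    {"name": "web-02", "patch_level": 5, "required_patch": 5},
]
print(run_automation(servers))
```

Linking separate jobs then becomes a matter of adding more (detect, act) pairs to the rule list rather than writing and running another one-off script.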

Technologies from such vendors have attracted the attention of stalwarts such as IBM, HP, Sun and Veritas Software. Each company has acquired server automation start-ups with the aim of boosting their new data center offerings. They all want to provide what could be the “automation fabric” in future data centers, says Frank Gillett, a principal analyst with Forrester Research.

“Ultimately, customers will need to get a fabric operating system, an [operating system] class set of software that runs across the data center rather than on any particular server,” Gillett says. “And vendors must include in that fabric an element that responds to failures and quickly deploys new resources to add capacity. To some degree, these products also have to be able to manipulate network storage and configure routers and switches and create [virtual] LANs.”

This type of software could spur wider adoption of automation across the new data center, he says.

Storage and backup

With today’s practice of doling out storage across multiple machines based on application need, rather than keeping storage specific to a piece of hardware or database, IT managers need to pool resources, automate their provisioning and track them with software.

“Storage resources need to be pulled out and centrally managed as storage migrates from the database tier to the application tier and so on,” says Mike Karp, a senior analyst with Enterprise Management Associates. Besides resource management and provisioning, backup and recovery need to be automated too, he adds.

At PacifiCorp in Portland, Ore., automated backup and server clustering technology from Veritas helps with management of shared storage resources, says Steve White, manager of system engineering for the electric utility.

“The IT department couldn’t function without a centrally managed, automated back-up system” for storage, White says. “The idea of each device having its own tape drive just boggles my mind. It just doesn’t seem like something [a midsize or large enterprise] could do. They’re not going to want to be wasting time switching tapes on every server.”

In addition to Veritas, companies such as EMC, HP and IBM are working to automate resource management and storage provisioning. Products that support manual intervention remain available, but many storage automation efforts within enterprise companies employ a two-phased approach: the software notifies an administrator of a recommended action and proceeds only after it receives an official OK to do so. “In vendors’ visions of utility computing, the first step is eliminated, and the software would just act and keep a record of its actions for audit or review purposes,” Karp says.
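The two-phased approach can be sketched as an approval gate around an automated action. All names here are invented for illustration: the software recommends an action, executes it only after an OK, and logs everything for audit. In the fully automated utility-computing vision Karp describes, the approval callback would simply always say yes and only the audit trail would remain.

```python
# Sketch of approval-gated storage automation with an audit record.
# Volume names and the provisioning action are illustrative.

audit_log = []

def provision_storage(volume, gigabytes, approve):
    """Recommend growing a volume; act only if the approver says yes."""
    recommendation = f"grow {volume} by {gigabytes}GB"
    if not approve(recommendation):
        audit_log.append(("declined", recommendation))
        return False
    # Phase two: take the action and keep a record for audit or review.
    audit_log.append(("executed", recommendation))
    return True

# With a human in the loop, approve() would page an administrator and
# wait; here a stand-in callback approves everything.
provision_storage("vol-db01", 50, approve=lambda rec: True)
```

Eliminating the first phase is then a one-line change to the approver, while the audit log preserves accountability.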

At PacifiCorp, White is well on his way to realizing this ultimate utility computing vision. “Clustering server software brings up a database and associated services on another server without intervention from anybody,” he says. “If I didn’t use cluster server [software] for failover, every time I had a server die it would consume a huge amount of staff time. My group could not manage the amount of infrastructure they manage today if it weren’t for these automation tools.”

Offloading the mundane management tasks to automation tools will give IT staff more time to focus on the big storage picture, experts say. For example, IT managers could use the time previously devoted to loading tape drives for backups to investigate what data is being stored and how, and to eliminate inefficiencies, says Danny Milrad, senior product marketing manager at Veritas. One result might be the ability to reclaim storage space taken up by duplicate, stale files that haven’t been accessed in years. “A whole lot of storage isn’t being effectively utilized,” he says.

Network infrastructure

Network vendors have been building intelligence into their gear for some time. Today most enterprise switches can fail over to a back-up processor module if need be, and the majority of high-end routers can run side by side and back up each other using the IETF’s Virtual Router Redundancy Protocol. Lately, vendors have integrated security functions into their wares, to keep viruses and other breaches from taking down the network.

Financial Partners in Agawam, Mass., relies on a combination of automation capabilities on its network, says Jim Mileski, systems administrator for the firm. HP OpenView software, Nortel routers and homegrown applications work in concert to ensure the headquarters and 54 branches are connected and getting top network performance internally and over the WAN, he says. Financial Partners hasn’t had one outage since putting this combination into play as part of a network redesign last year, he says. This compares with one or two outages per week previously, he adds.

“The design automates the failover, and the software automates the monitoring and alerting,” Mileski says. The monitoring software helps him determine the cause of the failure and perhaps prevent future problems.

BMC Software, CA, IBM Tivoli and Micromuse also make software that automates network event notifications. But the tools often aren’t used to take corrective actions.

“Automated monitoring is there, but automated management is not,” says Deb Curtis, research vice president at Gartner. “Vendors are working to provide more management capabilities out of the box, but at this point, most automation is in the realm of notification and escalation.”
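The notification-and-escalation pattern Curtis describes can be sketched simply: an unacknowledged alert climbs a contact chain, one level per timeout interval. The contacts and the 15-minute threshold are invented for illustration.

```python
# Sketch of alert notification with time-based escalation.

ESCALATION_CHAIN = ["noc-operator", "network-admin", "it-manager"]
ACK_TIMEOUT = 15 * 60  # seconds unacknowledged before escalating a level

def escalation_level(event_age_seconds):
    """Which contact holds the alert, given how long it has gone unacknowledged."""
    level = int(event_age_seconds // ACK_TIMEOUT)
    return ESCALATION_CHAIN[min(level, len(ESCALATION_CHAIN) - 1)]

def notify(event, age_seconds):
    contact = escalation_level(age_seconds)
    return f"ALERT to {contact}: {event}"

print(notify("link down on Gi0/1", age_seconds=0))
print(notify("link down on Gi0/1", age_seconds=20 * 60))
```

Automated management, by contrast, would replace the human contact at some level with a corrective action, which is the step most tools had not yet taken.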

While management vendors fine-tune automated management software, specialist gear vendors such as Inkra Networks look for ways to virtualize and automate the distribution of network resources in the new data center.

“You can begin to automate your networking infrastructure by using virtualized instances of IP services rather than deploying new appliances for each firewall, VPN or [intrusion-detection and intrusion-prevention] device that your network requires,” says David Roberts, CTO at Inkra, in a white paper describing how the company’s Virtual Service Switch functions. “As old equipment is ready to be phased out, you can replace it with virtualized services as well.”

Switch and server vendors also are teaming in efforts to expand automation beyond the network. Cisco and IBM earlier this year integrated their respective data center switch and server management software to allow better communication between the products. Integrated products such as these are meant to let applications hosted in data centers run faster and better survive network or server failures. IBM also revamped the Tivoli SAN Manager software to work with Cisco’s MDS 9000 switch for management of virtual SAN deployments.


Applications

Automation efforts continue to get more complex as enterprise IT managers look to reduce the number of manual tasks needed to keep applications running smoothly. While applications use server, storage and network resources, automating the allocation of resources based on application demands isn’t enough, experts say.

“Automation to support applications aligned with business services requires mature configuration management practices. Unfortunately, that’s one of the least mature areas for IT organizations,” Gartner’s Curtis says. “An understanding of underlying configuration is an important [first] step, and then that knowledge needs to be mapped to business services.”

By configuration management practices, Curtis means keeping an inventory of all IT assets, including how they should be configured, how they touch each other and how a problem on one asset will affect another downstream and – ultimately – the business service. Products that track device configuration are available from vendors such as AlterPoint, Intelliden and Voyence. Such products will need to work with software tackling server, operating system and application configurations to build logical topology maps of applications and their dependencies.
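The downstream-impact idea Curtis outlines amounts to a walk over a dependency map: from a failing asset, follow the dependents until you reach the business services. The topology below is invented for illustration.

```python
# Sketch of a configuration-management dependency map: each asset points
# to the assets and business services that depend on it. Asset and
# service names are illustrative.

DEPENDENTS = {
    "core-router-1": ["app-server-1", "app-server-2"],
    "app-server-1": ["billing-service"],
    "app-server-2": ["billing-service", "crm-service"],
}

def downstream_impact(asset):
    """All assets and services affected by a problem on `asset`."""
    affected, stack = set(), [asset]
    while stack:
        current = stack.pop()
        for dep in DEPENDENTS.get(current, []):
            if dep not in affected:
                affected.add(dep)
                stack.append(dep)
    return affected

print(downstream_impact("core-router-1"))
```

The hard part in practice is not the traversal but keeping the map itself accurate, which is why configuration management maturity comes first.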

Technology from companies such as Appilog (which Mercury Interactive acquired this year), Cendura, Collation, mValent, nLayers, Relicore and Troux can build application dependency maps. These maps would help the automation software determine where to take corrective actions. For example, Appilog technology could work with Mercury software, such as Topaz application management software and Resolution Center products, to identify and automatically fix problems that arise with application performance.

Mercury software features pre-defined “run books” of problem fixes for popular applications. Run books also can be customized so that senior-level application and network administrators can put their own processes for fixing applications into the hands of lower-level staffers and ultimately be entirely automated. Appilog’s technology is designed to help companies adjust applications on the fly as the underlying infrastructure changes, such as when a virtualized server takes on a bigger load or a router is reconfigured.
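A run book is, at bottom, a lookup from known problem signatures to ordered fix steps. The sketch below uses invented problems and steps to show the structure: once the steps are captured in data, lower-level staff, or the software itself, can walk them in order, and unknown problems escalate.

```python
# Sketch of a customizable run book: problem signature -> ordered fix
# steps. Problems and steps are illustrative, not Mercury's content.

RUN_BOOK = {
    "db_connection_pool_exhausted": [
        "recycle idle connections",
        "restart application pool",
        "page DBA if errors persist",
    ],
    "response_time_degraded": [
        "check server CPU and memory",
        "shift load to standby server",
    ],
}

def execute_run_book(problem):
    """Walk the steps for a known problem; escalate anything unrecognized."""
    steps = RUN_BOOK.get(problem)
    if steps is None:
        return ["escalate: no run book entry"]
    return [f"done: {step}" for step in steps]

for line in execute_run_book("response_time_degraded"):
    print(line)
```

Customizing the run book means editing the data, not the code, which is how senior administrators hand their processes to junior staff and eventually to full automation.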

Start-up Vieo offers another approach – an application management appliance. Santa Clara County in California plans to improve on application problem detection, diagnosis and resolution practices by using the Vieo Adaptive Application Infrastructure Management (AAIM) appliance, says Satish Ajmani, CIO for the county.

AAIM will support 25 county applications for more than 13,000 WAN-connected desktops. The appliance will augment Tivoli and Micromuse software the county uses to automate problem detection on network devices and in the IP transport layer, respectively, Ajmani says. AAIM will alert him to problems that directly affect applications, thereby eliminating the task of hunting down the source of problems. Ultimately, he will let AAIM take automated corrective actions, Ajmani says.

“The first stage we have planned for Vieo will look across all our resources and alert us if something on a server or in the IP network will affect application performance,” he says. “The second stage would involve dynamically allocating resources, meaning servers, if there is an outage.”


Security

Security perhaps represents the biggest automation challenge within the new data center. Trust is the big issue. Enterprise security managers must balance the need to keep security systems open enough to integrate with other tools with the need to protect against external threats.

Today some security-related IT tasks – be they for access rights, compliance, intrusion detection or the like – can be automated. For example, Netegrity offers an engine for its eProvision software that lets administrators build workflow processes, such as creating a series of actions needed to change a user’s access to network resources, rather than writing a script to create new processes. Netegrity’s software and similar tools from NetIQ can automate the provisioning of user access rights and provide authentication and authorization. These tools also should remove user access rights and privileges automatically when, for example, an employee is fired.
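Workflow-driven access provisioning can be sketched as HR events driving role-based rights. The roles, resources and event names below are invented for illustration; the point is that hire, role-change and termination each trigger a fixed series of access actions instead of a hand-written script.

```python
# Sketch of event-driven user access provisioning and deprovisioning.
# Roles and resource names are illustrative.

ROLE_ACCESS = {
    "clinician": {"email", "patient-records"},
    "contractor": {"email"},
}

accounts = {}

def on_hire(user, role):
    accounts[user] = set(ROLE_ACCESS[role])

def on_role_change(user, new_role):
    # Rights not in the new role are revoked; new ones are granted.
    accounts[user] = set(ROLE_ACCESS[new_role])

def on_termination(user):
    # Remove all rights and privileges automatically when someone leaves.
    accounts.pop(user, None)
```

The automatic revocation on termination is the piece the article singles out: it closes the window in which a departed employee's credentials still work.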

On the automation track

The race toward end-to-end automation in the new data center begins by automating one technology layer at a time.
Distribute software updates to clients.
Scan machines for software licenses, usage and compliance.
Remotely re-image workstations.
Provision new servers.
Send application upgrades on a one-to-many basis.
Reallocate virtualized resources.
Allocate resources per application demands.
Perform automated backups and restores.
Failover to clustered and shared resources.
Failover to back-up switch processors and redundant routers.
Monitor gear to alert and escalate events.
Distribute virtualized network resources.
Build application infrastructure topology maps.
Compare ideal configurations against actual rollouts to optimize performance.
Detect network and server issues that could affect application performance.
Lock down network segments to isolate security breaches.
Provision and remove user privileges and access rights.
Scan systems for vulnerabilities and distribute patches.
Enforce pre-defined policies among IT segments.
Integrate data from multiple, disparate systems.
Orchestrate actions across data center hardware, software and tools.

Still, fully automated security will be a long time in coming. “You’ll see the processes involved in identity, patch or compliance management automated, but you most likely won’t see software automating how an organization secures its network,” says Mike West, senior program director at Saugatuck Technology.

Anti-virus protection and patch management need more automation, Santa Clara County’s Ajmani says. “We have firewalls between agencies and between departments. How do you get across those devices to apply patches without putting your network at risk?” he asks.

Companies such as Citadel Security Software and Symantec are looking to couple their vulnerability technologies with management software that could automate patch management. And don’t forget the desktop management software vendors such as Altiris, Configuresoft and CA, which promise their software distribution tools coupled with security intelligence can further automate those processes and automatically detect issues in a dynamic environment.

Management and integration

Enterprise IT managers should begin to approach IT automation through efforts in the respective IT segments, but in the end, they will need software that integrates the disparate systems and then automates actions across various segments of the data center.

Companies such as Singlestep, with its Unity platform, intend to ease the process of integrating management data, making sense out of the various data formats that different systems use and automating actions based on the findings. A management event collected by CA software would use a different format than syslog data collected from a Cisco router, which would use a different language than SNMP events pulled off a Sun server.
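The translation work described above can be sketched as per-source adapters that emit one common record. The input shapes and field names below are invented for illustration; real syslog, SNMP and management-console formats are richer, but the normalization step looks the same.

```python
# Sketch of normalizing events from disparate sources into one common
# record before automated actions can be taken on them.

def from_syslog(line):
    # e.g. "router1 LINK-DOWN interface Gi0/1"
    host, event, *detail = line.split()
    return {"source": host, "event": event, "detail": " ".join(detail)}

def from_snmp(trap):
    # e.g. {"agent": "sun-server-3", "oid": "1.3.6...", "value": "fan failure"}
    return {"source": trap["agent"], "event": "SNMP-TRAP", "detail": trap["value"]}

def from_mgmt_console(event):
    # e.g. {"node": "db-02", "severity": 3, "message": "disk 90% full"}
    return {"source": event["node"],
            "event": f"SEV{event['severity']}",
            "detail": event["message"]}
```

Once every source produces the same record shape, a single rules engine can correlate and act on events regardless of where they originated.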

Vendors such as Aprisma, Managed Objects and Micromuse also are working to better integrate and manage the information collected by their monitoring systems. CA, HP and IBM have started discussing more integration capabilities that would let their software tools more easily share data and integrate with data collected by third-party systems.

IT managers will need to implement processes and management systems across desktop, server, storage, network, application and security resources to ensure smooth handoffs in the race toward data center automation. As Gartner’s Curtis says: “IT managers have got to reduce the day-to-day decisions that software could automate into a fluid chain or decision tree that software can follow, and it needs to include human processes, workflow and business priorities mapped into IT components.”