By abednarz, Executive Editor

Autonomic authority

Feature
Mar 22, 2004 | 12 mins
Data Center, IBM, Microsoft

Six vendor execs tell us why autonomic computing is one of the new data center’s most powerful technologies.

Whether you believe all you are hearing these days about autonomic computing – or remain skeptical – you have to admit, the message is appealing. The major infrastructure players, HP, IBM and Sun, paint a picture of a new data center that is self-managing, self-healing and self-provisioning. Such autonomic computing capabilities will contribute to a broader utility infrastructure that can react on the fly to changes in demand, providing a constant level of service, much like a public utility. This, in turn, will allow IT executives to focus on strategic business issues rather than manual maintenance tasks.

Network World asked key strategists at these companies, and at Microsoft, storage management vendor Veritas Software and server virtualization specialist VMware (now an EMC business unit), to share their ideas about autonomic computing.

Why do we need autonomic computing?

Greg Papadopoulos, CTO, Sun: In a word, ‘complexity.’ Complexity has accumulated to a point where we no longer have an economy of scale in IT. What’s called for is real automation of service levels in the data center, not some organic-sounding marketing buzzword. We’re talking about designing in the capability for resource virtualization, application provisioning, service-level provisioning – tools to help make the most of existing IT resources. I’m a little skeptical of the idea that computers can act like people and heal themselves. True automation of the data center is not a soft, abstract principle; it requires a very strict, deep, engineering discipline. We have to do the hard work of making things simple.

Nora Denzel, senior vice president and general manager of HP’s Software Global Business Unit and Adaptive Enterprise Program: There was always a need, but the need couldn’t be fulfilled earlier. Up until the past few years, there weren’t industry standards. There wasn’t the bandwidth capacity to link disparate computers across long distances. Some of the technology we needed, such as virtualization – the ability to break up programs into small pieces – hadn’t been invented yet. What we’re seeing is a new cycle of computing that is finally possible.

Irving Wladawsky-Berger, vice president of technology and strategy, IBM: We need to find innovative ways to manage the increasing complexity of business and IT. Because technologies have been more readily available to us, most environments are now made up of heterogeneous components. More than 40% of all IT expenditures are invested in just getting these technologies to work together. At the end of the day, we need autonomic computing to make IT more self-managing – that is, self-configuring, self-optimizing, self-healing and self-protecting. People then will be shielded from IT’s complexity, the infrastructure will work around the clock and be as near-impervious to attacks and threats as possible, and the business can make the most efficient use of increasingly scarce support skills.
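
IBM has described this kind of self-management in terms of a control loop that monitors the environment, analyzes what it sees, plans a response and executes it. The sketch below is an illustrative, heavily simplified Python version of that idea; the metrics, thresholds and actions are invented for the example and are not drawn from any particular product.

```python
# Illustrative sketch of a monitor-analyze-plan-execute loop behind
# "self-managing" systems. All metrics, thresholds and actions here are
# invented for the example; a real autonomic manager would pull live
# telemetry and drive real provisioning or repair tooling.

def monitor():
    # Stand-in for live telemetry collection (agents, log scraping, etc.).
    return {"cpu_utilization": 0.92, "service_up": False}

def analyze(metrics):
    symptoms = []
    if metrics["cpu_utilization"] > 0.85:
        symptoms.append("overloaded")       # a self-optimizing concern
    if not metrics["service_up"]:
        symptoms.append("service_down")     # a self-healing concern
    return symptoms

def plan(symptoms):
    actions = []
    if "overloaded" in symptoms:
        actions.append("provision_extra_capacity")
    if "service_down" in symptoms:
        actions.append("restart_service")
    return actions

def execute(actions):
    for action in actions:
        # A real system would call out to management APIs here.
        print(f"executing: {action}")

if __name__ == "__main__":
    execute(plan(analyze(monitor())))
```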

Gary Bloom, CEO, Veritas: Many IT executives these days are caught between two conflicting imperatives. At one end are the users. They want more applications to automate their business, more data to make better decisions, and they want it all yesterday, if not sooner. At the other end are the CEO and CFO. They want the CIO to spend less on data centers, less on hardware and less on people. Of course, it all must be done with the existing technology – a complex and heterogeneous environment of Web servers, application servers, databases and hardware. At both ends of this CIO squeeze there is one common demand: perfection. What everyone is asking is that the IT environment be as dependable and predictable as water, gas, electricity or any other utility.

Diane Greene, president and CEO, VMware: Autonomic computing can be defined as a number of things, [but] the goal is to simplify management and improve the reliability of computer systems. The perfect autonomic computing system doesn’t need anyone to manage and maintain it once it’s been installed. If an application starts to perform badly, it can automatically get increased resources. If software or hardware fails, it is transparent to end users – no service interruption. The benefits are straightforward – lower costs and higher service levels.
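
To picture the "no service interruption" behavior Greene describes, here is a minimal sketch of a failover watchdog, assuming a pool of virtualization hosts with a simple health check. The host names and the is_alive stub are hypothetical; a real virtual-infrastructure layer would rely on live heartbeats and VM restart or migration facilities.

```python
# Hypothetical failover watchdog: if the host running a workload stops
# responding, move the workload to the next healthy host in the pool.
# Host names and health checks are invented for illustration only.

HOST_POOL = ["esx-a", "esx-b", "esx-c"]    # hypothetical host names

def is_alive(host: str) -> bool:
    # Stand-in for a heartbeat or ping; pretend only "esx-a" has failed.
    return host != "esx-a"

def place_workload(current_host: str) -> str:
    if is_alive(current_host):
        return current_host                # nothing to do
    for candidate in HOST_POOL:
        if candidate != current_host and is_alive(candidate):
            print(f"restarting workload on {candidate} (was on {current_host})")
            return candidate
    raise RuntimeError("no healthy host available")

if __name__ == "__main__":
    print("workload now running on:", place_workload("esx-a"))
```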

How will autonomic computing affect the work of skilled IT personnel?

Denzel: The best part is that the technology is going to be used to automate many of the manual things skilled IT personnel have to do today, such as rolling out patches, provisioning new users and setting up servers. Skilled IT professionals can look forward to understanding the needs of the business more, and to understanding and redefining business processes, rather than working on maintenance. They’ll get a much more satisfying job.

Papadopoulos: With the increase in complexity of the network computing environment, we’ve seen IT managers devoting as much as 60% or 70% of their budgets to maintaining the status quo – keeping the network up and running, deploying patches, service packs, and trying to document and maintain dozens of software versions, platforms and operating systems across highly complex environments. Autonomic computing automates repetitive tasks and monitors service levels and availability, freeing IT staff to focus on more strategic projects. This will call for many of the same skill sets, but will require a deeper understanding of the power of information, and the need to manage that information by developing and deploying applications and services that provide competitive advantage to the business.

When will autonomic systems be available, and what’s being done now?

Wladawsky-Berger: Autonomic function is being incorporated into systems today. But integrating it deep into a business is an evolutionary process. One doesn’t build an automated infrastructure overnight. We treat autonomic computing as a process that will let businesses introduce more advanced autonomic technologies into their infrastructure in a systematic, logical, business-like way. This way, customers get the benefits of autonomic computing as they naturally upgrade at their own pace.

Analysts say moving to an autonomic or utility environment is likely a seven- to 10-year effort. What can users do to get started?

Denzel: It all begins with an assessment. With autonomic computing or an adaptive enterprise, you don’t buy one, you build it. Each customer will build it in different ways, depending on where they are today and where they want to end up.

Bloom: To get started, the first thing customers need to do is make sure that all their hardware and software can work together. Additionally, customers need to get a handle on how the IT resources are being used and who is using them. The majority of customers that we talk to have heterogeneous systems that are underutilized and labor-intensive to manage. Customers should consider and implement the software building blocks that begin to enable the key benefits of utility computing – availability, performance and automation in a shared infrastructure of heterogeneous resources.
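
As a minimal illustration of getting a handle on how resources are being used and who is using them, the sketch below rolls up average utilization by owner from a fabricated server inventory; a real assessment would pull this data from asset and monitoring systems.

```python
# Sketch of a utilization roll-up: who owns which servers and how busy
# they are, to spot underused, labor-intensive silos. The server records
# below are fabricated for the example.

servers = [
    {"name": "web01", "owner": "marketing", "avg_cpu": 0.12},
    {"name": "db01",  "owner": "finance",   "avg_cpu": 0.58},
    {"name": "app01", "owner": "marketing", "avg_cpu": 0.09},
]

def utilization_by_owner(inventory):
    totals = {}
    for server in inventory:
        totals.setdefault(server["owner"], []).append(server["avg_cpu"])
    return {owner: sum(vals) / len(vals) for owner, vals in totals.items()}

if __name__ == "__main__":
    for owner, avg in utilization_by_owner(servers).items():
        flag = "  <- consolidation candidate" if avg < 0.20 else ""
        print(f"{owner}: average CPU {avg:.0%}{flag}")
```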

Papadopoulos: One of the first things to think about, particularly in large, multi-national companies, those that have grown by acquisition or those comprising numerous operating units, is server consolidation. The payoff can be fairly impressive, and in addition to reducing the cost and complexity of managing the data center, there are other quantifiable returns. British Telecom consolidated 100 servers down to six, cut failover time from one hour to just five minutes and has achieved 99.97% availability.
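
For a sense of scale, an availability figure can be converted into expected downtime with simple arithmetic; at 99.97%, that works out to roughly 2.6 hours of downtime per year.

```python
# Back-of-the-envelope: convert an availability percentage into expected
# downtime per year (365-day year assumed).

def downtime_hours_per_year(availability_pct: float) -> float:
    return (1 - availability_pct / 100) * 365 * 24

print(f"{downtime_hours_per_year(99.97):.1f} hours/year")   # ~2.6 hours
```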

Eric Rudder, senior vice president of servers and tools, Microsoft: Moving to a self-managing, self-healing environment is an enormous software effort and will take years to fully realize. But you can start today by embracing a service-oriented architecture using Web services. Get your data into XML and start to expose your new and existing systems as Web services so you can reuse them as your systems evolve. Make sure your applications are instrumented appropriately by supporting Microsoft’s Windows Management Instrumentation API. From a hardware standpoint, start planning your migration to x86 servers.
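
As a minimal, hypothetical illustration of exposing an existing system as a service that returns XML, the sketch below wraps a stand-in inventory lookup in an HTTP endpoint. The data, URL path and port are invented, and systems of the article's era would more likely use SOAP/WSDL toolkits than hand-rolled XML over HTTP.

```python
# Hypothetical example: wrap an internal lookup in an HTTP endpoint that
# returns XML, so other systems can reuse it. Data and paths are invented.

from http.server import BaseHTTPRequestHandler, HTTPServer
from xml.etree.ElementTree import Element, SubElement, tostring

INVENTORY = {"widget-42": 17}   # stand-in for a legacy system's data

class InventoryService(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.strip("/") or "widget-42"
        root = Element("stockLevel")
        SubElement(root, "sku").text = sku
        SubElement(root, "quantity").text = str(INVENTORY.get(sku, 0))
        body = tostring(root, encoding="utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # e.g. GET http://localhost:8080/widget-42 returns an XML stock level.
    HTTPServer(("localhost", 8080), InventoryService).serve_forever()
```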

How will the industry get users to invest in autonomic computing in this economy?

Denzel: Clearly, they will only take a little at a time. There won’t be a big bang. It will be more evolutionary, very slow and methodical. You’ll see 30-, 60-, 90-day programs and ‘how did it go?’ assessments. Although IT spending around the globe will pick up this year, everything companies do has to be tied to a specific business benefit that is demonstrable and measurable and delivers a significant return on IT investment. Dollars are available, but users will consider a lot of options and vendors will compete for every one.

Greene: When you start to see the return on investment associated with autonomic computing, there will be a strong push toward adoption. The return on investment from virtual infrastructure alone has already driven widespread adoption of server and storage virtualization, even without the full benefits of autonomic computing. Making the process of adoption incremental and fully compatible with existing systems is another facilitator.

Papadopoulos: Data center automation is about reducing operating costs. With technology like service-level provisioning, one systems administrator can manage 10 or 20 times the number of servers that could be managed in a traditional environment. That kind of efficiency goes straight to the bottom line.
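
One way to see why service-level provisioning changes the server-to-administrator ratio is to contrast per-box configuration with a declared service level that software enforces across a whole fleet. The policy fields, server list and thresholds below are invented for illustration.

```python
# Illustration of policy-driven provisioning: an administrator declares a
# service level once, and software reconciles the fleet against it instead
# of each box being configured by hand. All values here are made up.

POLICY = {"tier": "web", "min_instances": 4, "max_cpu_utilization": 0.70}

servers = [{"name": f"web{i:02d}", "tier": "web", "cpu": 0.80} for i in range(3)]

def reconcile(policy, fleet):
    matching = [s for s in fleet if s["tier"] == policy["tier"]]
    actions = []
    shortfall = policy["min_instances"] - len(matching)
    if shortfall > 0:
        actions.append(f"provision {shortfall} more '{policy['tier']}' server(s)")
    overloaded = [s["name"] for s in matching if s["cpu"] > policy["max_cpu_utilization"]]
    if overloaded:
        actions.append(f"rebalance load off {', '.join(overloaded)}")
    return actions

if __name__ == "__main__":
    for action in reconcile(POLICY, servers):
        print(action)
```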

What are the biggest technical hurdles to autonomic computing?

Bloom: The biggest challenge for many aspiring IT utilities is how to build in the necessary flexibility to accommodate multiple platforms and a variety of hardware devices from different vendors. Open, heterogeneous software can provide centralized visibility into these disparate resources, helping the IT utility bring the pieces together into a single holistic view.

Wladawsky-Berger: Generally speaking, autonomic computing requires extremely sophisticated software, and that software must be based on open standards. That’s the only way elements of a vast, distributed network of heterogeneous systems can communicate. Communication is the key to self-management. Developing those open standards and making them pervasive are challenges, but the industry is up to it.

Denzel: Technology-wise, commercialization of the grid needs to happen. We’re three to five years away from that because we need packaged software that can understand the grid, and we need the interfaces proposed by the standards bodies to mature a bit. We’ll also need to see more maturation of Web services. Today, Web services are predominantly deployed inside the firewall. The ultimate goal is Web services deployed across the unsecured Internet, where they come together and form a function. We’re many years away from that.

Greene: There’s the problem of how to recognize when your system is behaving in a way that you want to correct. Particularly in distributed systems, such as Web-services-based applications, systems behavior isn’t just black and white, good and bad. There’s a whole gray area in the middle, and the industry is going to have to develop pretty sophisticated diagnostic software in order to automate it.
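
A crude stand-in for the diagnostic software Greene has in mind is a check that compares recent response times against a longer baseline, flagging gradual degradation before it becomes a hard failure. The latency samples and thresholds below are fabricated for the example.

```python
# The "gray area": the service isn't down, but it is drifting toward
# trouble. Compare a recent window of response times against a longer
# baseline. Samples and thresholds are invented for illustration.

from statistics import mean

latency_ms = [110, 112, 109, 115, 118, 121, 130, 142, 155, 171]  # fabricated

def classify(samples, window=3, degraded_ratio=1.25, failed_ms=1000):
    recent = mean(samples[-window:])
    baseline = mean(samples[:-window])
    if recent >= failed_ms:
        return "failed"        # black and white: clearly down
    if recent > degraded_ratio * baseline:
        return "degrading"     # the gray area worth acting on early
    return "healthy"

print(classify(latency_ms))    # -> "degrading"
```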

Papadopoulos: There are two things going on here, and they both need to happen at the same time. First, you’ve got a wildly heterogeneous environment. The idea is to take costs out opportunistically, by creating a layer of abstraction to sufficiently contain these legacy systems. That’s kind of the ‘old school’ aspect of data center automation. Second, with the need to develop and deploy applications efficiently across all kinds of environments and devices, like smart cards and mobile phones, and the need to scale at Internet kinds of rates, there’s a new architecture that includes Web servers, app servers, database servers and messaging. The challenge is to design a data center automation strategy that maintains the heterogeneous legacy environment alongside the new architecture, allowing the two to exchange information and processes, and to intermingle freely and securely.
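
A minimal sketch of that layer of abstraction is a common interface with adapters for both legacy systems and the newer architecture, so automation tooling can treat them uniformly. Both back ends below are fakes invented for illustration.

```python
# Sketch of an abstraction layer: one interface, two adapters, so the
# same automation can span legacy and new systems. Both back ends are
# fabricated for the example.

from abc import ABC, abstractmethod

class ManagedResource(ABC):
    @abstractmethod
    def health(self) -> str: ...

class LegacyMainframeAdapter(ManagedResource):
    def health(self) -> str:
        # A real adapter might parse console output or batch job logs.
        return "ok"

class AppServerAdapter(ManagedResource):
    def health(self) -> str:
        # A real adapter might call a management or Web-services API.
        return "degraded"

def report(resources):
    for name, resource in resources.items():
        print(f"{name}: {resource.health()}")

if __name__ == "__main__":
    report({"billing-mainframe": LegacyMainframeAdapter(),
            "web-app-tier": AppServerAdapter()})
```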

What excites you most about the future of autonomic computing?

Wladawsky-Berger: What I find exciting is the challenge of putting autonomic computing to work for the benefit of our customers certainly, but also the challenge of working with the industry to develop the standards that will make autonomic networks a reality. It’s really more than a commercial challenge. The IT infrastructure is a strategic necessity for our nation and for society in general, and when something is that critical, it better be as easy to use and as resilient as you can make it.

Papadopoulos: The network of 1990 connected millions of workstations and computers. In the past 10 years we saw that number approach billions, with the addition of things [such as] PDAs, mobile phones and network gaming. We’re about to experience the next wave, which is really about trillions of things connected to the network, like [radio frequency identification] tags and sensors in the workplace and at home and in our cars. All of this is going to put our IT infrastructure to its most punishing test yet. The infrastructure is going to have to scale like crazy, and it’s going to have to do so painlessly and simply.

Greene: Autonomic computing is about bringing useful innovation to the market in a way that eliminates significant drudgery from people’s lives as well as reduces costs and increases the quality of service offerings. Autonomic computing will allow businesses to manage their business with fewer computing infrastructure concerns. I also believe that it is just another example of the progress that technology can afford – progress in the sense that people’s focus can move up a level and costs go down.

Bloom: Utility computing represents the maturation of information technology, [which] is gracefully transforming into a service model that is more centralized, better managed and most importantly, precisely aligned with business objectives. This transformation to utility computing is enabled by open, heterogeneous software. It is exciting to be at the forefront of driving and enabling this evolution in IT history.

Rudder: For decades our ability to innovate in software has been gated by innovation elsewhere. We waited much longer than we would have liked for hardware performance to make graphical user interfaces a reality. More recently, we’ve been dependent on networking innovations, including the Internet. Today, it is incredibly exciting to live in a world where software innovation is not limited by these other developments, but rather only our own imaginations and ability to translate ideas into software. In the next couple years, we’re going to see tremendous software innovation that harnesses the abundance of computing hardware and networking.