How to create a business-boosting virtualization plan

Tony Bishop, CEO of start-up IT consultancy Adaptivity and virtualization go-to guy, explains how to shift from the tactical to the strategic

As an IT executive at Wachovia, Tony Bishop earned kudos among his peers and the industry for his sophisticated views on virtualization. He oversaw the development of a services-oriented, virtualized next-generation IT infrastructure that allowed the company to reap huge savings while ushering in drastic improvements in application performance, processing times and overall efficiencies.

Today, Bishop is CEO of Adaptivity, a start-up IT consultancy. Here he shares his thoughts about what makes a great virtualization strategy, the best tools for use with a virtual infrastructure, and what the future holds in store.

Most enterprises -- 70% by some industry estimates -- either have completed or are engaged in a server virtualization project. While this means people are embracing a logical view of IT -- which is a good thing -- you suggest there's a lot of room for improvement. Explain.

The bad of it is this: People have approached virtualization as a bottom-up, not a top-down, strategy. They're looking at their servers, utilization rates and maybe trends, and they're basically saying, 'I'm going to split these servers up and partition them to get more efficiency.' [This view doesn't take into account that] applications and the information are the consumers of servers. If IT focuses only on the utilization efficiency and doesn't incorporate a top-down assessment of what resources applications and information are consuming, then it's not going to be able to drive the broadest impact for the business through virtualization. That's No. 1. No. 2, [a narrow approach] will cause performance issues that will either negate the value or slow down the adoption of virtualization.

We had seen this at Wachovia, and I've heard it from my peers. Think about it this way. For servers, we used to rely on symmetrical multiprocessing, or SMP, where your Sun and IBM boxes -- the Unix servers -- created logical partitions. They shared memory, processors, I/O and disk. As soon as we did that, we started seeing applications where performance and processes were constrained because of the partitioning. This led people to dedicate SMP boxes to single applications so the applications wouldn't be constrained. And then we moved to clusters, and then dedicated clusters to those applications. Why? We needed to be able to handle peak periods of performance and processing. The bottom-up approach didn't align to the demand side, and that's the same problem we see with virtualization and why a lot of virtualization strategies are failing.
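To make that mismatch concrete, here is a minimal, hypothetical sketch -- invented server and application numbers, not data from Wachovia or anywhere else -- of how a bottom-up, utilization-only view and a top-down, demand-side view can point in opposite directions:

```python
# Hypothetical numbers for illustration only.
servers = {
    "srv-a": {"avg_util": 0.15, "capacity_cores": 16},
    "srv-b": {"avg_util": 0.20, "capacity_cores": 16},
}

# Top-down view: what each application actually needs at its peak.
apps = {
    "trade-capture": {"peak_cores": 14},
    "risk-batch":    {"peak_cores": 12},
}

# Bottom-up logic: average consumption looks tiny, so stack everything
# onto one partitioned host.
avg_cores_used = sum(s["avg_util"] * s["capacity_cores"] for s in servers.values())

# Demand-side logic: overlapping peaks need more than any one host provides.
peak_cores_needed = sum(a["peak_cores"] for a in apps.values())

print(f"Average cores in use: {avg_cores_used:.1f}")  # ~5.6 -> 'consolidate!'
print(f"Cores needed at peak: {peak_cores_needed}")   # 26 -> a 16-core host falls short
```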


Link to our podcast series: More voices of virtualization 


So, in a sense, x86 server virtualization's simplicity is its curse as well?

Yes, partly. People do love to adopt technology that is easy to implement, works relatively well and has some good concepts to it. But this breaks down at organizations that are not looking at end-to-end service delivery and using that to drive how they build, design, implement and transform.

How does one go about changing a virtualization strategy from a bottom-up to a top-down approach?

Three things must occur. First, you need to virtualize at the demand layer. You have to understand what the user is asking for, what the application needs, where the destination is and where to do the processing, based on what's being asked.

Then you need to virtualize all types of supply: the network, storage, compute, memory and I/O. You need a strategy for ensuring that every single component is virtualized. You can't have your virtual servers over here, but leave everything else physical.

The third step is to incorporate life-cycle management of the virtualization platform, which to me means the virtualized demand and the virtualized supply combined, along with the service life cycle. I need to sustain the life of that business service, with the virtualized demand matched to the virtual resources, and understand how they're matched, managed and provisioned.

If I look at the user experience and my processes, I can ask, 'Where am I not meeting the needs of the business, in terms of performance, cost or efficiency?' If you think of those three factors, then decompose your most critical business processes and look at where you're not meeting those business needs, you can prioritize your road map.

Then, from the bottom up, you want to understand what you have. That's where you use a tool like Tideway Systems' [Foundation], which tells you the physical inventory and the dependencies of your existing infrastructure. You're able to understand how every system interconnects, from the network up to the application. That's powerful, because when you then virtualize and make changes, it can show you the new dependencies. Now, when you come bottom-up with 'what do I have' and 'where can I target opportunities to virtualize,' you can understand what that would do to the infrastructure. Then, if you add a tool like OpTier's [CoreFirst], which shows you user transactions as they move through the platform -- not a simulation -- you get that user-experience view correlated with what's actually happening and with what you have. That lets you say: now, if I virtualize both the demand and the supply, look at what I can optimize.

So, you map top-down -- where am I not meeting the needs of the business? Then you map bottom-up -- what do I physically have and how is it all working? Then you apply the real-time user experience, and guess what you have the ability to do? You can optimize on a broad basis, incrementally and quickly.
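As a rough illustration of that mapping exercise, the sketch below scores hypothetical business processes on performance, cost and efficiency gaps and pairs each with its bottom-up dependency list. The process names, weights and numbers are invented; this is the shape of the exercise, not Bishop's method or any vendor's tool:

```python
from dataclasses import dataclass

@dataclass
class ProcessGap:
    name: str
    performance_gap: float  # 0 = meets target, 1 = badly missing it
    cost_gap: float
    efficiency_gap: float
    systems: list           # bottom-up: infrastructure this process depends on

    def score(self) -> float:
        # Equal weighting here; a real assessment would weight by business impact.
        return (self.performance_gap + self.cost_gap + self.efficiency_gap) / 3

processes = [
    ProcessGap("loan-origination", 0.7, 0.4, 0.6, ["web-tier", "db-07"]),
    ProcessGap("statement-render", 0.2, 0.6, 0.3, ["batch-02"]),
    ProcessGap("payments-clearing", 0.9, 0.5, 0.8, ["mq-01", "db-03"]),
]

# Road map: tackle the worst-served processes first; the dependency list
# says which infrastructure to virtualize to close each gap.
for p in sorted(processes, key=lambda p: p.score(), reverse=True):
    print(f"{p.name:18} score={p.score():.2f} systems={p.systems}")
```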

How do you convince IT to move from the tactical approach to virtualization?

It can be tough. But what is IT in business to do? It's in business to deliver and constantly improve service. IT executives who are committed to doing more with less, and doing it better, can -- in a very simple, three-step process, using commonsense dialog and best-in-class tooling -- start to radically drive useful virtualization strategies and create the virtual data center.

And the other pieces, like the network and storage -- how do they fit in?

Network, server, storage, disk, application components, data components -- these all make up a service unit. [An enterprise application-virtualization platform like DataSynapse's] FabricServer understands what that service unit is and what's being asked of the demand side, and matches it.

Do you still have to create your virtual-LAN partitions for the network and [virtual logical-unit-numbers] for storage? Yes. Those capabilities are provided with your standard networking tools; you've got to turn them on so they're being consumed as a virtual service-unit. You can do that with the legacy stuff. That's the beauty of it. You can actually do more, faster because you're not causing the performance problems that traditional virtual-machine partitioning does.
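The service-unit idea can be sketched in a few lines. The code below is purely conceptual -- it is not DataSynapse's FabricServer API -- and assumes hypothetical units that bundle compute, memory, a VLAN and a virtual LUN, so that a demand request is matched against the whole bundle rather than CPU alone:

```python
from dataclasses import dataclass

@dataclass
class ServiceUnit:
    cores: int
    memory_gb: int
    vlan: str     # network partition the unit is attached to
    lun: str      # virtual LUN backing its storage
    iops: int

@dataclass
class Demand:
    cores: int
    memory_gb: int
    iops: int

def matches(unit: ServiceUnit, demand: Demand) -> bool:
    """True if this service unit can satisfy the demand on every dimension."""
    return (unit.cores >= demand.cores
            and unit.memory_gb >= demand.memory_gb
            and unit.iops >= demand.iops)

pool = [
    ServiceUnit(8, 32, "vlan-210", "lun-web-01", 5_000),
    ServiceUnit(16, 64, "vlan-220", "lun-db-04", 20_000),
]

request = Demand(cores=12, memory_gb=48, iops=15_000)
placement = next((u for u in pool if matches(u, request)), None)
print("Placed on:", placement)
```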

You mention several products -- DataSynapse's FabricServer, OpTier's CoreFirst and Tideway's Foundation. What other virtualization products, companies or trends are you watching?

When you get into the service unit, you have to figure out how to manage all of that, in a consistent, life-cycle way. So, you have tools like Scalent [Systems' Virtual Operating System software], Cisco's VFrame and FastScale [Technology's Composer and Virtual Manager] that are combining compute, storage, the operating system and the application build, and doing infrastructure repurposing. You've got to track this evolving area and think, 'Which one am I going to adopt as a second phase?' You've got to be really, really smart with your whole management and design approach when you start repurposing an entire infrastructure.

You'll also see network equipment providers like Cisco start to put more infrastructure into the switches themselves, because then creating a logical abstraction is even easier. If everything is in one place, then I have less that I have to connect physically to make logical. Remember, virtualization is a precursor to cloud computing: enhanced services anywhere in the network can be consumed and provisioned based on what's being asked.

That leads to what I call virtual appliances -- not just software types, but the actual hardware and software combined. This is where you get things like Cisco's [Application Control Engine] product, which is part of its application-oriented networking, and IBM's DataPower. You create a virtual address into it; and the appliance gives you routing, transformation and integration services. The more you use those, the better control, more abstraction and less of a footprint in the data center you get, too.
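As a conceptual sketch of that idea -- not Cisco's ACE or IBM's DataPower, just an invented stand-in -- a virtual appliance exposes one virtual address and applies routing and transformation on behalf of whatever sits behind it:

```python
class VirtualAppliance:
    """Toy model of a virtual appliance: one virtual address in front,
    routing and payload transformation applied to every request."""

    def __init__(self, virtual_address):
        self.virtual_address = virtual_address
        self.routes = {}      # path prefix -> backend service name
        self.transforms = []  # functions applied to each request payload

    def add_route(self, prefix, backend):
        self.routes[prefix] = backend

    def add_transform(self, fn):
        self.transforms.append(fn)

    def handle(self, path, payload):
        for fn in self.transforms:
            payload = fn(payload)
        backend = next((b for p, b in self.routes.items() if path.startswith(p)), None)
        return backend, payload

appliance = VirtualAppliance("vip-10.0.0.5")          # the virtual address callers see
appliance.add_route("/payments", "payments-service")
appliance.add_transform(lambda p: {**p, "normalized": True})
print(appliance.handle("/payments/settle", {"amount": 100}))
```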

The final thing to track is how you use virtualization to start creating information as a service.

If you're going to create a virtual information environment, where information is exposed as a service you can call upon at different points in time with different requirements, you're going to see the Oracles and IBMs of the world, plus leading companies like Composite Software, provide a way to abstract away from the physical data environment (from the databases themselves) and make that information available. Do you do that in memory as a single abstraction, or do you do that as a collection that's distributed over the environment?

This is where you see technologies like Oracle's Coherence and IBM's information-framework caching product. That's one layer. A second layer becomes something like the Composite Software [suite], which lets you do a federated join of information from multiple sources without having to hard-code it.

So, how would all of this come into play within an enterprise?

Let's say you're a bank, and you have information about customer "Beth" in 12 different places. Beth not only has a credit card and checking and savings accounts; she also has CDs, a home equity line, a personal line of credit and a mortgage. Those accounts cross maybe three or four business lines. So, how do you get a single view of a customer so you can do analysis? Say Beth is using her credit card and checking account more often than her line of credit. But if she used her line of credit, which carries only a 5% interest rate vs. the 18% she's paying on her credit card, I could get her to consume even more; and maybe she'd leave her deposits in the bank account longer, which we could then leverage.

This is a business scenario that says, 'How do I use virtualization of information without having to create a new data warehouse that would take me years to build at a cost of hundreds of millions of dollars?' If I could pull that information together in seconds and have it be as up-to-date as repositories are up-to-date, then wouldn't that allow me to make better decisions on how to market and provide service to that individual? Virtualization allows you to do powerful things that you couldn't do in the past.
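As a toy illustration of that scenario, the sketch below federates a single view of "Beth" from three invented line-of-business sources at query time, with no new warehouse in the middle. It is not the Oracle Coherence or Composite Software API -- only the shape of the idea:

```python
# Three hypothetical line-of-business systems, each holding a slice of Beth.
cards_system = [
    {"customer": "Beth", "product": "credit card", "rate": 0.18, "balance": 4200},
]
deposits_system = [
    {"customer": "Beth", "product": "checking", "balance": 3100},
    {"customer": "Beth", "product": "savings", "balance": 12000},
]
lending_system = [
    {"customer": "Beth", "product": "home equity line", "rate": 0.05, "balance": 0},
    {"customer": "Beth", "product": "mortgage", "rate": 0.06, "balance": 240000},
]

def single_view(customer):
    """Federated join: pull the customer's records from every source on demand."""
    sources = [cards_system, deposits_system, lending_system]
    return [rec for src in sources for rec in src if rec["customer"] == customer]

# The up-to-the-minute view enables the cross-sell in the scenario above:
# Beth pays 18% on her card while a 5% line of credit sits unused.
for record in single_view("Beth"):
    print(record)
```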

And this is what you did while at Wachovia?

Yes, we did that for our traders and our bankers. We created information-as-a-service, using the technologies I talked about from Oracle and Composite. We were able to expose information and provide a single view of a client's transactions in a real-time manner without creating a whole new data warehouse. So, that meant users could make better decisions and cross-sell more.


Read a case study about Wachovia's application-virtualization project 


How about at other firms? Are you seeing others embrace this type of strategy?

Yes, and we're beyond the early visionaries. We're seeing adoption [by IT executives] who are recognizing that they have to continue to drive better shareholder value, productivity and return on equity. I see this happening at manufacturers, Wall Street firms and service providers -- multiple industries.

But a lot of firms do say, 'Oh my God, oh my God, oh my God!' But all you're really saying is you're going to change the way you do it. You're going to improve service levels. You're going to do more with less. And you're going to make things more real-time. That really is the net of it. Guess what? The tools, the technology and the evolution of it are out there. Get started on your journey.

Editor's note: You can get a regular dose of Bishop's IT insight in his "Intelligent Network Computing" blog.

Copyright © 2008 IDG Communications, Inc.
