How to build a private cloud

Feature
May 10, 2010 | 10 mins
Cloud Computing | IBM | Linux

Expert advice on how to approach an on-premises cloud, from conception to implementation

If you’re nervous about running your business applications on a public cloud, many experts recommend that you take a spin around a private cloud first.

But building and managing a cloud within your data center is not just another infrastructure project, says Joe Tobolski, director of cloud computing at Accenture.

“A number of technology companies are portraying this as something you can go out and buy – sprinkle a little cloud-ulator powder on your data center and you have an internal cloud,” he says. “That couldn’t be further from the truth.”

An internal, on-premises private cloud is what leading IT organizations have been working toward for years. It begins with data center consolidation, rationalization of OS, hardware and software platforms, and virtualization up and down the stack – servers, storage and network, Tobolski says.

Elasticity and pay-as-you-go pricing are guiding principles, which imply standardization, automation and commoditization of IT, he adds.

And it goes way beyond infrastructure and provisioning resources, Tobolski adds. “It’s about the application build and the user’s experience with IT, too.”

Despite all the hype, we’re at a very early stage when it comes to internal clouds. According to Forrester Research, only 5% of large enterprises globally are even capable of running an internal cloud, with maybe half of those actually having one, says James Staten, principal analyst with the firm.

But if you’re interested in exploring private cloud computing, here’s what you need to know.

First steps: Standardization, automation, shared resources

Forrester’s three tenets for building an internal cloud are similar to Accenture’s precepts for next-generation IT.

To build an on-premises cloud, you must have standardized (and documented) procedures for operating, deploying and maintaining that cloud environment, Staten says.

Most enterprises are not nearly standardized enough, although companies moving down the IT Infrastructure Library (ITIL) path for IT service management are closer to this objective than others, he adds.

Standardized operating procedures that allow efficiency and consistency are critical for the next foundational layer, which is automation. “You have to be trusting of and a big-time user of automation technology,” Staten says. “That’s usually a big hurdle for most companies.”

Automating deployment is probably the best place to start because that enables self-service capabilities. And for a private cloud, this isn’t Amazon-style in which any developer can deploy virtual machines (VMs) at will. “That’s chaos in a corporation and completely unrealistic,” Staten says.

Rather, for a private cloud, self-service means that an enterprise has established an automated workflow whereby resource requests go through an approvals process.

Once approved, the cloud platform automatically deploys the specified environment. Most often, private cloud self-service is about developers asking for “three VMs of this size, a storage volume of this size and this much bandwidth,” Staten says. Self-service for end users seeking resources from the internal company cloud would be “I need a SharePoint volume or a file share.”
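To make that approval-gated flavor of self-service concrete, here is a minimal sketch in Python of how such a request-and-approve workflow might be modeled. The class names, method names and the stubbed deployment step are invented for illustration; they don’t describe any particular cloud platform or product.

# Hypothetical sketch of an approval-gated self-service request flow.
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    DEPLOYED = "deployed"


@dataclass
class ResourceRequest:
    requester: str
    vm_count: int
    vm_size: str          # e.g. "medium"
    storage_gb: int
    bandwidth_mbps: int
    status: Status = Status.PENDING


class SelfServicePortal:
    def __init__(self):
        self.requests = []

    def submit(self, request: ResourceRequest) -> ResourceRequest:
        """A developer files a request; it waits in the approval queue."""
        self.requests.append(request)
        return request

    def approve(self, request: ResourceRequest) -> None:
        """An approver signs off; deployment then happens automatically."""
        request.status = Status.APPROVED
        self._deploy(request)

    def _deploy(self, request: ResourceRequest) -> None:
        # Stand-in for the automation layer that actually provisions
        # the VMs, storage volume and network bandwidth.
        print(f"Provisioning {request.vm_count} x {request.vm_size} VMs, "
              f"{request.storage_gb} GB storage, {request.bandwidth_mbps} Mbps "
              f"for {request.requester}")
        request.status = Status.DEPLOYED


portal = SelfServicePortal()
req = portal.submit(ResourceRequest("dev-team-a", 3, "medium", 500, 100))
portal.approve(req)  # approval is what triggers the automated deployment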

Thirdly, building an internal cloud means sharing resources – “and that usually knocks the rest of the companies off the list,” he says.

This is not about technology. “It’s organizational — marketing doesn’t want to share servers with HR, and finance won’t share with anybody. When you’re of that mindset, it’s hard to operate a cloud. Clouds are highly inefficient when resources aren’t shared,” Staten says.

Faced with that challenge, IT Director Marcos Athanasoulis has come up with a creative way to get participants comfortable with the idea of sharing resources on the Linux-based cloud infrastructure he oversees at Harvard Medical School (HMS) in Boston. It’s a contributed hardware approach, he says.

At HMS, which Athanasoulis calls the land of 1,000 CIOs, IT faces a bit of a unique challenge. It doesn’t have the authority to tell a lab what technology to use. It has some constraints in place, but if a lab wants to deploy its own infrastructure, it can. So when HMS approached the cloud concept four years ago, it did so wanting “a model where we could have capacity available in a shared way that the school paid for and subsidized so that folks with small needs could come in and get what they needed to get their research done but also be attractive to those labs that would have wanted to build their own high-performance computing or cloud environments if we didn’t offer a suitable alternative.”

With this approach, if a lab bought 100 nodes in the cloud, it got guaranteed access to that capacity. But if that capacity was idle, others’ workloads could run on it, Athanasoulis says.

“We told them – you own this hardware but if you let us integrate into the cloud, we’ll manage it for you and keep it updated and patched. But if you don’t like how this cloud is working, you can take it away.” He adds, “That turned out to be a good selling point, and not once [in four years] has anybody left the cloud.”

To support the contributed hardware approach, HMS uses Platform Computing’s Platform LSF workload automation software, Athanasoulis says. “The tool gives us the ability to set up queues and suspend jobs that are on the contributed hardware nodes, so that the people who own the hardware get guaranteed access and that suspended jobs get restored.”
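The policy Athanasoulis describes (owner labs get guaranteed access to the nodes they contributed, while guest jobs are suspended and later restored) can be sketched in a few lines. The Python below models only that policy; it is not Platform LSF configuration or its API, and every name in it is hypothetical.

# Illustrative model of the "contributed hardware" policy described above.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Job:
    name: str
    lab: str
    state: str = "queued"   # queued | running | suspended | finished


@dataclass
class Node:
    name: str
    owner_lab: str
    running: Optional[Job] = None
    suspended: Optional[Job] = None


def submit(node: Node, job: Job) -> None:
    """Run a job on a contributed node, preempting guests for the owner."""
    if node.running is None:
        job.state = "running"
        node.running = job
    elif job.lab == node.owner_lab and node.running.lab != node.owner_lab:
        # Owner gets guaranteed access: park the guest job, run the owner's.
        node.suspended = node.running
        node.suspended.state = "suspended"
        job.state = "running"
        node.running = job
    else:
        job.state = "queued"  # wait for capacity elsewhere in the pool


def finish(node: Node) -> None:
    """When the running job ends, restore any suspended guest job."""
    node.running.state = "finished"
    node.running = node.suspended
    node.suspended = None
    if node.running:
        node.running.state = "running"


node = Node("node-042", owner_lab="genetics")
submit(node, Job("guest-analysis", lab="pathology"))   # idle node, guest runs
submit(node, Job("owner-simulation", lab="genetics"))  # guest is suspended
finish(node)                                           # guest resumes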

Don’t proceed until you understand your services

If clouds are inefficient when resources aren’t shared, they can be outright pointless if services aren’t considered before all else. IBM, for example, begins every potential cloud engagement with an assessment of the different types of workloads and the risk, benefit and cost of moving each to different cloud models, says Fausto Bernardini, director of IT strategy and architecture, cloud portfolio services, at IBM.

Whether a workload has affinity with a private, public or hybrid model depends on a number of attributes: key ones such as compliance and security, but also others such as latency and the interdependencies among application components, he says.
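As a purely hypothetical illustration of that kind of assessment, a first-pass screen might score each workload on a handful of attributes and suggest a model. The attributes, rules and names below are invented for the example and are not IBM’s methodology.

# Invented first-pass screen of workloads by the attributes named above.
def suggest_model(workload: dict) -> str:
    """Return a rough cloud-model suggestion for a workload description."""
    if workload["regulated_data"] or workload["security_sensitivity"] == "high":
        return "private"
    if workload["latency_sensitive"] or workload["tightly_coupled_components"]:
        # Chatty, interdependent tiers tend to suffer when split across sites.
        return "hybrid"
    return "public"


workloads = {
    "patient-records": {"regulated_data": True, "security_sensitivity": "high",
                        "latency_sensitive": True, "tightly_coupled_components": True},
    "marketing-site":  {"regulated_data": False, "security_sensitivity": "low",
                        "latency_sensitive": False, "tightly_coupled_components": False},
}

for name, attrs in workloads.items():
    print(name, "->", suggest_model(attrs))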

Many enterprises think about building a private cloud from a product perspective before they consider services and service requirements – and that’s the exact opposite of where to start, says Tom Bittman, vice president and distinguished analyst at Gartner.

“If you’re really going to build a private cloud, you need to know what your services are, and what the [service-level agreements], costs and road maps are for each of those. This is really about understanding whether the services are going toward the cloud computing style or not,” he says.

Common services with relatively static interfaces, even if your business is highly reliant on them, are those you should be considering for cloud-style computing, private or public, Bittman says. E-mail is one example.

“I may use it a lot, but it’s not intertwined with the inner workings of my company. It’s the kind of service moving in the direction of interface and independence – I don’t want it to be integrated tightly with the company. I want to make it as separate as possible, easy to use, available from self-service interface,” Bittman says. “And if I’ve customized this type of service over time, I’ve got to undo that and make it as standard as possible.”

Conversely, services that define a business and are constantly the focus of innovative initiatives are not cloud contenders, Bittman says. “The goal for these services is intimacy and integration, and they are never going to the cloud. They may use cloud functions at a low level, like for raw compute, but the interface to the company isn’t going to be a cloud model.”

Only once you understand which services are right for the cloud and how long it might take you to get them to a public-readiness state will you be ready to build a business case and start to look at building a private cloud from a technology perspective, he says.

The final tiers: Service management and access management

Toward that end, Gartner has defined four tiers of components for building a private cloud.

At the bottom sits the resource tier comprising infrastructure, platforms or software. Raw virtualization comes to mind immediately, but VMs aren’t the only option – as long as you’ve got a mechanism for turning resources into a pool, you’re on the way, Bittman says. Rapid re-provisioning technology is another option, for example.

Above the resource pool sits the resource management tier. “This is where I manage that pool in an automated manner,” says Bittman, noting that for VMware environments, this is about using VMware Distributed Resource Scheduler.

“These two levels are fairly mature,” Bittman says. “You can find products for these available in the market, although there’s not a lot of competition yet at the resource management tier.”

Next comes the service management tier. “This is where there’s more magic required,” he says. “I need something that lets me do service governance, something that lets me convert pools of resources into service levels. In the end, I need to be able to present to the user some kind of service-level interface that says ‘performance’ or ‘availability’ and have this services management tier for delivering on that.”

As you think about building your private cloud, understand that the gap between need and product availability is pretty big, Bittman says. “VMware, for example, does a really good job of allowing you to manage your virtualization pool, but it doesn’t know anything about services. VMware’s vCenter AppSpeed is one early attempt to get started on this,” he adds.

“What we really need is a good service governor, and that doesn’t exist yet,” says Bittman.

Sitting atop it all is the access management tier, which is all about the user self-service interface. “It presents a service catalog, and gives users all the knobs to turn and lets you manage subscribers,” Bittman says. “The interface has to be tied in some way to costing and chargeback, or at least metering – it ties to the service management tier at that level.”
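One way to picture how the four tiers hand off to one another is as layered code: the access tier takes catalog requests and logs usage, the service management tier translates a service level into a resource ask, the resource management tier places it against the pool, and the resource tier is the pool itself. The Python below is only a toy rendering of that layering; every class, size and name is hypothetical and describes no vendor’s product.

# Toy layering of the four tiers described above (all names invented).
class ResourceTier:
    """Bottom tier: the pooled infrastructure itself."""
    def __init__(self, vcpus: int, memory_gb: int):
        self.free_vcpus, self.free_memory_gb = vcpus, memory_gb

    def carve_out(self, vcpus: int, memory_gb: int) -> bool:
        if vcpus <= self.free_vcpus and memory_gb <= self.free_memory_gb:
            self.free_vcpus -= vcpus
            self.free_memory_gb -= memory_gb
            return True
        return False


class ResourceManager:
    """Second tier: automated placement against the pool."""
    def __init__(self, pool: ResourceTier):
        self.pool = pool

    def place(self, vcpus: int, memory_gb: int) -> bool:
        return self.pool.carve_out(vcpus, memory_gb)


class ServiceManager:
    """Third tier: turns a service level into concrete resource asks."""
    SIZES = {"gold": (8, 32), "silver": (4, 16), "bronze": (2, 8)}

    def __init__(self, manager: ResourceManager):
        self.manager = manager

    def deliver(self, service_level: str) -> bool:
        vcpus, memory_gb = self.SIZES[service_level]
        return self.manager.place(vcpus, memory_gb)


class AccessTier:
    """Top tier: the self-service catalog, with a hook for metering."""
    def __init__(self, services: ServiceManager):
        self.services = services
        self.usage_log = []   # feeds metering and chargeback

    def request(self, subscriber: str, service_level: str) -> bool:
        granted = self.services.deliver(service_level)
        if granted:
            self.usage_log.append((subscriber, service_level))
        return granted


portal = AccessTier(ServiceManager(ResourceManager(ResourceTier(64, 256))))
print(portal.request("finance", "gold"))   # True while the pool has capacity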

Chargeback is a particularly thorny challenge for private cloud builders, but one that they can’t ignore for long. “It’s tricky from a technology perspective — what do I charge based on? But also from political and cultural perspectives,” Bittman says. “But frankly, if I’m going to move to cloud computing I’m going to move to a chargeback model so that’s going to be one of the barriers that needs to be broken anyways.”
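Even rudimentary metering, recording who consumed what and for how long, gives IT something to charge (or at least show) back. A minimal sketch follows, assuming an invented hourly rate card and record format; real chargeback models vary widely.

# Minimal metering-to-chargeback sketch (rates and records are invented).
from collections import defaultdict

HOURLY_RATES = {"vm_medium": 0.12, "storage_gb": 0.0002, "bandwidth_mbps": 0.001}

# (department, resource_type, quantity, hours) -- what a metering layer records
usage_records = [
    ("marketing", "vm_medium", 3, 720),
    ("marketing", "storage_gb", 500, 720),
    ("finance",   "vm_medium", 10, 720),
]

def monthly_chargeback(records):
    """Roll metered usage up into a per-department bill."""
    bills = defaultdict(float)
    for dept, resource, quantity, hours in records:
        bills[dept] += HOURLY_RATES[resource] * quantity * hours
    return dict(bills)

print(monthly_chargeback(usage_records))
# e.g. {'marketing': 331.2, 'finance': 864.0}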

In the end, it’s about the business

And while cloud-builders need to think in terms of elasticity, automation, self-service and chargeback, they shouldn’t be too rigid about the distinctions at this stage of cloud’s evolution, Bittman says. “We will see a lot of organizations doing pure cloud and a lot doing pure non-cloud, and a whole lot of stuff somewhere in the middle. What it all really comes down to is, ‘Is there benefit?’”

Wentworth-Douglass Hospital, in Dover, N.H., for example, is building what it calls a private cloud using a Vblock system from Cisco, EMC and VMware. But it’s doing so more with an eye toward abstracting servers than toward self-provisioning or software-as-a-service (SaaS), says Scott Heffner, network operations manager for the hospital.

“Maybe we’ll get to SaaS eventually, and we are doing as much automation as we can, but I’m introducing concepts slowly to the organization because the cloud model is so advanced that to get the whole organization to conceive of and understand it right off the bat is too much,” he says. 

As HMS’ Athanasoulis says, “The reason why people use our cloud … is because it provides compelling value to them – and that’s not a bad place for IT to be.”

Schultz is a longtime IT writer and editor in Chicago. She can be reached at bschultz5824@gmail.com.