Blame it on the public cloud service providers. It was, after all, the Amazons of the world that raised the bar by making the provisioning of IT resources look so easy. Why should users have to wait? If I can get it quickly and easily there, the reasoning goes, why can't I get the same agility from my internal data center?
It's no consolation that the 1% of enterprises that built their business in the cloud aren't dragging decades of legacy infrastructure with them, says Zeus Kerravala, principal analyst at ZK Research. For the 99% — traditional enterprises such as banks and manufacturers — the existential challenge is how to catch up.
"Every big company now has to compete with startups that are trying to disrupt their business," says Mark Collier, chief operating officer at the OpenStack Foundation. "The No. 1 driver for SDDC is speed and the need to empower developers who are writing applications for their companies to move more quickly. Velocity, these days, is everything."
Building a software-defined data center (SDDC) is the first step toward a private cloud infrastructure that can achieve those goals, but technical limitations and cultural issues make it a challenging one.
SDDC is a catch-all term that covers, at a minimum, software-defined compute, networking and storage, plus an orchestration layer that configures the underlying infrastructure to meet the resource and service-level requirements of the applications hosted on it.
What's more, the single control point that an SDDC establishes through a series of APIs shouldn't stop at the four walls of the traditional data center. A well-designed architecture should serve as the foundation for a broader software-defined infrastructure that extends control over all IT resources: private clouds, public clouds and traditional data center gear, both on-premises and in colocation facilities.
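The orchestration idea described above can be sketched in a few lines of code. This is a toy illustration only, not any vendor's API: the class and field names (`AppRequirements`, `Orchestrator`, `provision`) are hypothetical, standing in for the controller APIs a real SDDC would call for compute, software-defined networking and software-defined storage.

```python
# Illustrative sketch: a single control point that translates an
# application's requirements into per-layer configuration.
# All names here are hypothetical, not a real SDDC product's API.
from dataclasses import dataclass


@dataclass
class AppRequirements:
    """Resource and service-level requirements set by an application."""
    name: str
    vcpus: int
    memory_gb: int
    storage_gb: int
    network_tier: str  # e.g. "gold" for a low-latency segment


class Orchestrator:
    """Single control point: turns app requirements into configuration
    for the software-defined compute, storage and network layers."""

    def __init__(self):
        self.provisioned = {}

    def provision(self, req: AppRequirements) -> dict:
        # In a real SDDC, each entry below would become an API call to
        # the corresponding controller (compute, storage, SDN).
        plan = {
            "compute": {"vcpus": req.vcpus, "memory_gb": req.memory_gb},
            "storage": {"volume_gb": req.storage_gb},
            "network": {"tier": req.network_tier},
        }
        self.provisioned[req.name] = plan
        return plan


orch = Orchestrator()
plan = orch.provision(AppRequirements("billing-app", 4, 16, 200, "gold"))
```

The point of the sketch is the shape, not the details: one request carries everything the application needs, and the orchestration layer fans it out to each software-defined layer, whether those resources live on-premises, in a colocation facility or in a public cloud.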