Nutanix founder and CEO Dheeraj Pandey doesn’t want you to get too excited by today’s hyperconverged infrastructure offerings because they’re just ‘a pit stop’ on the way to making all infrastructure invisible. Pandey, whose company is preparing for an initial public offering, talked with IDG Chief Content Officer John Gallant about the competitive landscape in hyperconvergence today, and he pulled no punches in assessing rivals like SimpliVity and VCE. In Pandey’s view, only VMware is on the same path of building, essentially, the operating system for hybrid cloud, but Nutanix is starting from a clean slate. Pandey also discussed Nutanix’s partnership with Dell Technologies and explained why Cisco has no love for his company these days.
I want to ask you the same question I recently asked the head of EMC’s VCE business unit. Can you explain to our readers what the most appropriate use cases for hyperconverged infrastructure are, and where it is not appropriate?
People tend to focus too much on the what, and when I say what, it doesn’t mean hyperconverged itself. We have to focus on the why. Why is any of this happening? Basically, people don’t want too many components and too many boxes. IT needs to abstract itself out of stitching together things from ground zero. In that sense, just like the word ‘smart’ in smartphone went away over time, the word hyperconverged in infrastructure will fade away over time as well.
What we’re seeing is the coming together of things very similar to the way they all came together with the iPhone, which eventually became a platform on which other devices became pure software. There are similar amalgamations happening in data center infrastructure because there are too many silos. You’ve seen the same phenomenon with the public cloud, where everything is invisible. You don’t care what boxes, what operating system, what hypervisor, what storage arrays are being used in the underbelly of Amazon AWS. Developers just use APIs. Infrastructure is becoming code and people just want to program the code itself.
Hyperconverged is a mere pit stop in the journey to making everything invisible. The only way to fully automate is when everything is pure software. Hyperconverged takes the baby step towards making storage into software but then everything else has to become software; security has to become pure software, networking has to become pure software, all these things become pure software and now the data center is eminently programmable.
I get that vision, but today companies have existing infrastructure and a set of applications that range from absolutely critical applications that have been built and tuned over the years to run the business, to newer and sometimes less critical applications. If I’m looking at this from the perspective of a CIO, am I ripping and replacing existing infrastructure or am I moving only the new to hyperconverged? I guess I’m trying to get a clear picture for them as to where this fits in.
Anything that is virtualizable is a candidate. We virtualized the application over the last 10 years, and now we will virtualize everything around it. That’s the real dream of virtualization, which is abstracting all these things and making them programmable. Whatever ran on VMware is very easy to run in this stack as well. Along the way we also figured out that we need to expand the scope of this platform because, at the end of the day, we’re talking about a new operating system, and this operating system is what Nutanix calls the Enterprise Cloud Operating System.
The idea that you need to make storage software-defined is one step in the journey, but we need to make many other things software-defined and, more than that, we need to start thinking about consumption models. Can we have a consumer-grade way of consuming infrastructure that doesn’t even involve filing tickets? A lot of what IT does manually today could be done in pure software.
I’m fascinated by the shape of this HCI market today, where you have standalone providers like you or SimpliVity, you have major players like Dell Technologies that have their own offerings, and then you have this semi-bewildering array of partnerships where the standalone companies are working with established companies. All of that seems like part of a transitional phase. I’m interested in understanding from you how you envision this market ultimately shaking out.
Two things: One is that there are only two operating system players. There is VMware and there is Nutanix. Everybody else depends on one of the two. SimpliVity depends on VMware, or will depend on Microsoft in the future. Cisco depends on VMware. HP depends on VMware, and then they add their little storage thing on the side. We, on the other hand, have built our own hypervisor because we realize the operating system is not complete without the hypervisor being your own in the stack. In that sense, there are two horses emerging here, VMware and Nutanix, in the on-prem cloud infrastructure landscape.
Everybody wants to put in their own value add: HP puts in its own storage value add, and Cisco is trying to do the same. But there is a lot more to this operating system than just storage. The management plane itself has to be rewritten, which is why the work we have done with Prism is so important. It has to be so simple, so elegant, so one-click that Oracle, SQL, Splunk and virtual desktop admins can actually consume infrastructure as well. One of the biggest reasons the public cloud is succeeding is that its definition of the operating system includes ecommerce metaphors. Computing and ecommerce are coming together.
Outsiders like AWS, who used to deal with toothpaste and mouthwash, are able to sell computing because that is the new definition of the cloud operating system. Not just boxes and storage and networking and security and compute and virtual machines, but how you end up consuming them is equally important. In that sense, it’s not just about being software-defined, it’s also about being consumer grade. VMware is also trying very hard to figure out how to rewrite some of its management stack to be able to deal with the public cloud onslaught. But we have done this with an empty canvas. We have to look at this from the next-decade point of view, as opposed to thinking about it the way VMware thought about vCenter 12 to 15 years ago.
In a year or two, what does the market look like? Are customers still primarily getting infrastructure from the folks they’re getting it from today, the HPs, the Dells, companies like that? Who are the big winners and losers in this transition?
First of all, the market is huge. If you look at the CapEx spend on infrastructure, that’s $215 billion between servers, storage, networking, security, operations management software and virtualization software. Then there is another $450 billion of OpEx, which is professional services and systems integrators and so on. There is a lot of money being spent on people as well that will become fully automated into software, which is one of the reasons Amazon is doing well: the OpEx piece is also being fully automated into pure software.
Given that observation, I think there will be a place for multiple players. It’s really a red ocean rather than a blue ocean, but there will be a lot of consolidation happening as well because there is too much dust here, just like the consolidation that happened in the VM space three years ago. There was too much noise around software-defined networking, and not much came out of it. Companies that have not been able to generate free cash flow over time cannot be independent companies. They will probably be acquired.
One dark horse in this is Microsoft, which is trying really hard to figure out how to stitch together the public cloud itself, given that they have on-prem assets with Hyper-V, System Center and so on, and they have Azure in the public cloud. That’s where the real battleground will be, because in the end the operating system has to virtualize the cloud itself, not just compute, networking and storage. The public cloud cannot be a separate silo. It has to be something that lets people drag and drop between principal workloads running in on-prem environments and elastic workloads running in off-prem, rentable or rented environments.