Red Hat nicely positioned for the turn to cloud

CEO James Whitehurst talks about cloud, containers, OpenStack and competition


Red Hat CEO James Whitehurst kicked off the company’s Summit meeting in Boston this week, which attracted more than 6,000 people, up 20% from last year. Network World Editor in Chief John Dix caught up with Whitehurst at the show for an update on the company’s position and prospects.

One of your keynote speakers said 84% of Red Hat customers have cloud deployment strategies. Is the shift to cloud accelerating your business?

I do think the shift to cloud is helping. We have data that shows our customers who use cloud actually grow faster in total with us than ones who don’t. The promise of cloud accelerates the Unix-to-Linux migration as people modernize applications to be able to move to cloud -- whether they move immediately or not -- because clouds primarily run Linux. In general, anything that makes people move to a new architecture is good for us because we have a high share of new architecture relative to old. I think that’s a big, big driver.

One of the benefits of Red Hat Enterprise Linux (RHEL) is that it is fully supported on bare metal, on VMware, on Hyper-V, and on Amazon, Google and Azure. We’ve architected it so customers can write applications for RHEL and they’ll run anywhere.

Conversely, all the cloud providers want to be RHEL-certified and work with us because they want the ability to attract those workloads. It works well for us to be the glue between the application and its ultimate deployment.


Speaking of hosting apps wherever they may be, how close are we to the nirvana vision of hybrid cloud -- being able to have on-premises apps spill over into the cloud when peak capacity is reached?

Hybrid cloud is a journey, not a destination. We’re there in some ways. We have banks and hedge funds today that have a RHEL estate and they run analytics on-premise and they will burst out on the public cloud, either at the end of the trading day or right before the trading day. That’s very possible.

The key is that it’s not every application and not every context. Scaling out data is still really hard. Applications built with scaling out in mind help a lot, but the idea that I’m going to take my SAP ERP system and burst it onto Amazon -- that’s not going to work. Where it does work, it’s for a subset of workloads. One of the things we’re trying to do over time is expand the share of workloads where that’s relevant and possible.

You made an announcement at the show about Amazon. Tell us more about that.

There are several components. One is better RHEL support. A lot of what we do is enable hardware underneath RHEL, and we have to do that for every hardware vendor. So we are building a tightly integrated joint engineering team so we can enable Amazon’s hardware more quickly.

But the most important thing is we will jointly expose all of Amazon’s services natively on OpenShift [Red Hat’s container platform]. So, if you’re running our container platform on-premise and you want to use any Amazon services -- they have all kinds of interesting things -- you can natively use those things from OpenShift, which we think is a big deal for developers.
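To make that concrete, here is a minimal, hypothetical sketch of what calling an Amazon service from an application running in an OpenShift container can look like at the code level: a Python process using the boto3 SDK to list objects in S3. The bucket name and the way credentials reach the container (environment variables or a mounted secret) are assumptions; the interview doesn’t spell out how the integration is surfaced to developers.

    # Minimal sketch: an app running in an OpenShift pod consuming an AWS service.
    # Assumes the boto3 SDK is available and that AWS credentials have been
    # injected into the container (e.g., environment variables or a mounted
    # secret). The bucket name is hypothetical.
    import os
    import boto3

    def list_recent_objects(bucket: str = "example-analytics-bucket") -> list:
        """Return the keys of up to 10 objects in an S3 bucket."""
        s3 = boto3.client(
            "s3",
            region_name=os.environ.get("AWS_REGION", "us-east-1"),
        )
        response = s3.list_objects_v2(Bucket=bucket, MaxKeys=10)
        return [obj["Key"] for obj in response.get("Contents", [])]

    if __name__ == "__main__":
        for key in list_recent_objects():
            print(key)

The point of the sketch is simply that the application code is ordinary AWS SDK code; the container platform’s job is to get that workload, its credentials and its configuration onto whatever footprint it runs on.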

Wow. Speaking of OpenShift, how do you support or compete with the other container tools out there?

We are the second largest contributor to the Docker project (renamed last week to the Moby Project) behind Docker itself, and in many ways we partner. As an open-source company, we recognize it’s hard to sell a framework or a specification; you have to have a runtime. Our container platform is a bundle. It includes RHEL Atomic Host, our lightweight Linux for running containers. It uses the Docker container format, but we don’t pay Docker; we use the open-source Docker specification. We use Kubernetes for orchestration -- we are the second largest contributor to the Kubernetes project behind Google. And then we use Ansible for automation and have management tools called CloudForms. Then there’s a whole DevOps toolchain that attaches to that.
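As an illustration of the orchestration piece of that bundle (generic Kubernetes, not a Red Hat-specific tool), here is a hedged sketch using the official Kubernetes Python client to ask a cluster to run three replicas of a Docker-format container image. The image name, the namespace and the presence of a working kubeconfig are assumptions; OpenShift layers its own tooling on top, but the underlying Kubernetes API it drives looks like this.

    # Hedged sketch: orchestrating a Docker-format container image with
    # Kubernetes via the official Python client. The image name and namespace
    # are hypothetical, and a working kubeconfig is assumed to be present.
    from kubernetes import client, config

    def deploy_hello(replicas: int = 3) -> None:
        """Create a Deployment that keeps `replicas` copies of a container running."""
        config.load_kube_config()  # reads ~/.kube/config

        container = client.V1Container(
            name="hello",
            image="registry.example.com/hello:1.0",  # hypothetical image
            ports=[client.V1ContainerPort(container_port=8080)],
        )
        deployment = client.V1Deployment(
            metadata=client.V1ObjectMeta(name="hello"),
            spec=client.V1DeploymentSpec(
                replicas=replicas,
                selector=client.V1LabelSelector(match_labels={"app": "hello"}),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels={"app": "hello"}),
                    spec=client.V1PodSpec(containers=[container]),
                ),
            ),
        )
        client.AppsV1Api().create_namespaced_deployment(
            namespace="default", body=deployment
        )

    if __name__ == "__main__":
        deploy_hello()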

If you think about the fundamental runtime, it does Docker containers, it does orchestration and it does automation. It does all that stuff. Docker, in order to build a runtime, started a project called Swarm, which competes with Kubernetes. So at that level, I guess we compete with Docker in the sense that they offer a runtime via Swarm and we offer a runtime via Kubernetes. But we have been, and continue to be, technology partners for a long time.

Red Hat’s role is not to start open-source projects. We hate starting open-source projects; it’s really hard to start one. Our role is to identify what we think are the most popular projects, get involved and create life-cycle versions. Again, OpenShift isn’t a single project; it’s Red Hat Enterprise Linux, Docker containers, Kubernetes and Ansible all bundled together -- what we think are the leading projects for their respective components -- brought together in a consumable container runtime.

Looking at it broadly, when we went from a physical data center to a virtual data center, a whole new management paradigm needed to develop. VMware is a virtualized data center management company, and the reason they went from zero to $6 billion and passed everybody else is that a new paradigm emerged and they did a great job executing in creating that platform.

When you think about the application level, going from monolithic applications to microservices running in containers, there’s going to be a whole new application platform and management paradigm required to run that. Think about when you have 400 enterprise applications that are instantiated in 1.2 million containers, all microservices talking to each other via APIs and messaging. What if you have a performance management issue there? How do you handle that?

All the things that have existed in the past in the traditional world will need to be re-implemented in a containerized world. Red Hat is not necessarily trying to do all of that, but OpenShift is a core platform that allows containers to run at scale, and we’ll work with ISVs to build functionality on top of that. It is a fundamentally different application architecture.

The other thing I would say about containers is that people want to jump immediately to the idea that containers are a form of virtualization. To some extent that’s true, in the sense that you’re running multiple applications on one physical or virtual server, but the difference is you’re not running just applications; you’re splitting the operating system. All of the operating system user space the application needs is in the container, and the reason that’s important is that more than 95% of the security vulnerabilities in the operating system have been in user space.

The last big bug hit probably 95% of the containers out there. What’s in the container has to be life-cycled. All the operating system components of that need to be life-cycled, so it’s not like “There’s application logic in the container and I don’t have to worry about the rest of the world.” Actually, the majority of the operating system is now in that container too.

One of the things we say about Linux containers is somebody’s got to life-cycle that and patch that. Even a “Hello, World” application running in a container over the course of a year will need 50 security updates.
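As a rough illustration of that life-cycle burden (one possible approach, not a Red Hat tool), here is a hypothetical Python sketch that counts the security advisories an already-built image is missing by running the package manager inside a throwaway container. It assumes a RHEL-family image with yum available, a Docker-compatible CLI on the host, and a made-up image name.

    # Hedged sketch: estimate how many security errata an existing container
    # image is missing by running the package manager inside a throwaway
    # container. Assumes a RHEL/CentOS-style image with yum available and a
    # Docker-compatible CLI on the host; the image name is hypothetical.
    import subprocess

    def pending_security_updates(image: str = "registry.example.com/hello-world:1.0") -> int:
        """Count advisories reported by 'yum updateinfo list security' in the image."""
        result = subprocess.run(
            ["docker", "run", "--rm", image,
             "yum", "-q", "updateinfo", "list", "security"],
            capture_output=True, text=True, check=True,
        )
        # Each non-empty line corresponds to one security advisory that applies
        # to a package installed in the image.
        return len([line for line in result.stdout.splitlines() if line.strip()])

    if __name__ == "__main__":
        print("Pending security updates:", pending_security_updates())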

Is it possible to quantify where we are in terms of container adoption?

I think it’s exploding like no other technology I’ve ever seen, even more so than virtualization. It’s a dramatically cheaper way to deploy and it’s a much more effective way to develop software. It’s a win/win/win all around. But to deploy at scale, we’re only at the bleeding edge now. Everybody is toying around with it. Some of the banks are starting to roll it out, but in terms of truly running production applications, it’s the very early adopters.

Public cloud is much further along in containers. Everybody rightly is exploring it and playing around with it, but we’re by far the leader around this and we deal with all these companies. I would say you’re talking about a few hundred companies that actually have it in production.

Another perspective question: OpenStack stumbled early on, but on a recent earnings call you apparently said a third of your recent wins were for OpenStack environments. Do I have that right?

Yeah, a third of our big deals. OpenStack, bluntly, in the early days was the worst of all worlds. Every OEM jumped on it fast thinking it was going to be their savior versus Amazon. They took really, really early versions and threw them out there to customers because nobody wanted to be late. But even if you could get it up and running, you couldn’t update it. It took PhDs to do it. I think we knew that and so we were a little slower in adoption, but honestly, we put it out as well. Over time it has matured. The last couple of revs are very stable and very good for a whole set of workloads. It’s by far the most economical way to run workloads relative to public cloud.

We don’t have a dog in the hunt because all these environments run RHEL. For the telcos, it’s virtually inevitable that their next-generation infrastructure is going to be OpenStack because, with 5G and the explosion in traffic, they have to move to open-source commodity hardware, and they’ve basically chosen OpenStack.

We’re a big player in OpenStack and we are by far the largest upstream contributor. But so many times we’ll go into a customer that called us about OpenStack and they end up buying Red Hat Enterprise Virtualization (RHEV), which is our VMware equivalent. What virtualization does versus what OpenStack or Amazon does is quite different. For us, RHEV is very much a scale-up platform, and it’s actually quite good at scale-up, fault tolerance and so on.
