I want to go back to the difference between virtual appliances and being in the kernel. Can you elaborate on that.
One very simple difference is, if you’re running something in the kernel it’s just faster. You don’t have the overhead of a virtual appliance. Now depending on what you’re doing, that may or may not matter. For something like a firewall, it really matters because you touch every packet and you have to do this at 10 or more gigs. That’s purely a performance issue.
But there’s another issue, which is more nuanced and not well understood. If I take a physical appliance and turn it into a virtual appliance I haven’t distributed it. If I deploy 10 of these it’s just like deploying 10 physical appliances. There’s no difference. My background is distributed programming. That’s what I did before all this stuff. To distribute you have to rewrite the code so it’s distributed, so you can have one view and it looks like one thing, which means you have to share all sorts of state, you have to rewrite the control plane, and you have to rewrite the way the application works.
A lot of companies do this sleight of hand where they’ll take a physical appliance and move it to a virtual appliance and deploy them and then put a management layer on top and say, “Oh, look, it’s distributed.” But the reality is there’s no global view. There’s only a management side.
There are many problems for which appliances are just fine. For example, on the North/South border you might use virtual and physical appliances, but if you want to scale a service with a global view to handle all of the traffic within the data center, which is terabytes, you need to distribute it.
What we do is create this notion of a distributed firewall. This is a purely logical notion. It’s a fully stateful firewall that has one port per VM. So if you have 10,000 VMs you have 10,000 ports in a distributed firewall. And then you take this distributed firewall and chop it into little, little pieces and you run those pieces in the hypervisor kernel, so there’s a logical view of this 10,000 port firewall but the reality is only a little piece is running in the kernel.
So every packet still goes at wire speed, but we can also synchronize state if we need to because we’re running it as a distributed application. For example, if a VM moves, the state moves with it, or you can share that state and so forth. It’s actually written as a distributed application within the kernel. So every kernel has a little piece of this.
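The architecture described above can be illustrated with a minimal sketch. This is not VMware’s implementation; the class and function names are hypothetical. It shows the core idea: each hypervisor runs only the slice of the logical firewall covering its local VMs, and when a VM migrates, its connection state is handed off so the new host’s slice can keep enforcing statefully.

```python
# Illustrative sketch of a distributed firewall: per-VM state lives only
# on the hypervisor currently hosting that VM, and moves with the VM.

class FirewallSlice:
    """The piece of the logical firewall running in one hypervisor kernel."""

    def __init__(self, host):
        self.host = host
        self.vm_state = {}  # vm_id -> set of permitted (proto, port) flows

    def attach_vm(self, vm_id, state=None):
        # A new VM starts with fresh state; a migrated VM brings its own.
        self.vm_state[vm_id] = state if state is not None else set()

    def allow(self, vm_id, flow):
        # Record an established flow for this VM (stateful tracking).
        self.vm_state[vm_id].add(flow)

    def permits(self, vm_id, flow):
        # Enforcement happens locally, at wire speed, with only local state.
        return flow in self.vm_state.get(vm_id, set())

    def detach_vm(self, vm_id):
        # Hand the VM's state back so it can follow the VM to its new host.
        return self.vm_state.pop(vm_id)


def migrate(vm_id, src: FirewallSlice, dst: FirewallSlice):
    """When a VM moves between hypervisors, its firewall state moves too."""
    dst.attach_vm(vm_id, src.detach_vm(vm_id))


host_a = FirewallSlice("host-a")
host_b = FirewallSlice("host-b")
host_a.attach_vm("vm1")
host_a.allow("vm1", ("tcp", 443))

migrate("vm1", host_a, host_b)
print(host_b.permits("vm1", ("tcp", 443)))  # the flow survives the move
```

The logical 10,000-port firewall is just the union of all the slices; no single node holds it, which is what distinguishes this from a management layer stitched over independent virtual appliances.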
Didn’t VMware have a distributed firewalling capability?
They had the stateless firewall capability before.
Did you leverage some of that?
Absolutely. When we came in there was an enormous team here with this set of assets. We came in with another set. That’s why it took us a year and a half to integrate these things.
I presume you add other security services in time?
I think you can do load balancing, you could probably do WAN optimization, I think you can do it for IPS, but there are some tradeoffs we’re going to have to make. Web application firewalling, I’m not sure. It would be interesting to see.
But we can also start getting into things like vulnerability assessment. Vulnerability assessment is normally a box that sits on the network and scans things and it’s like, “Oh, my database says this is vulnerable based on the responses given me from the network.” Instead, we can actually run a little bit of code that looks directly into the application, at the files and the memory, so it can’t be tricked, and then mitigate the problem so it can’t reach the network. Which is exciting because, wow, now we have an entirely new approach to address security concerns.
How much of this security work will you do internally versus with partners?
Ours is very much an ecosystem approach. We’re really good at building distributed services, but I’m not an expert in IDS, I’m not an expert in virus detection. So we want to provide a platform that will provide context that others can’t get, and even provide native distribution capabilities, but otherwise it’s very much an ecosystem play.
But the firewall was home built?
The firewall was home built. But again, it’s fully distributed. We’re going to have to lead with a few core products that demonstrate this capability in order to drive the ecosystem because nobody wants to invest money speculatively if they’re in a growing business.
Palo Alto Networks is a good example. They provide a next-generation firewall and are a huge partner. They run a virtual appliance with integration in the kernel, and we handle the operational side of distribution and provide additional context by allowing them to peer into the hypervisor. So there’s quid pro quo here. For us, our platform gets more attractive and we get to sell a layer that adds value, and for them, they get an insertion vehicle into a large market.
They’re not threatened by your own firewall?
We’re not a next-generation firewall. They’re a $600 million company, or something like that. We’re focused on kind of a minimum thing internally. It’s very difficult to have absolutely zero overlap in partnerships. But we’re not going after their core business at all. We’re partnering as much as we can as best as we can. The only time we’ve built up functionality is to kind of lead the space and to address our customer demand.
So how do you see customers adopting your firewall tool? They have 20 security tools already, so is this a bolt-on that complements what they already have or does it enable them to unplug something?
By and large this is a net add, meaning customers today are unprotected within the data center and we add a layer of protection.
Were you surprised to see this security functionality emerge given you started out looking to solve another problem?
When you start with a new technology you’re throwing it against the world and seeing how people find it useful. It’s very non-obvious, actually. As a technologist you’re always like, “I created this thing and the value is implicit and it carries its own destiny.”
That’s totally false. It’s the wrong way to think about it. What carries the destiny of the thing you created is the person that carries it to the customer. It’s the sales guy. You give the person carrying it to the customer a story to pitch, but so much of how people view what you have comes down to the person who carried it in there.
This has been probably my number one lesson from the business side in the last seven years: the person who’s presenting your technology is actually going to impact how it gets adopted and how it gets viewed.
Going back to the original mission of the company, speeding up provisioning, which you said still represents half of the business, has adoption happened as you would have expected?
The market matures at the rate the market matures, and now we’re starting to cross the chasm. We’re building out our sales force and growing with the market. The market was like one customer seven years ago, and then it was two, and then it was ten and it takes time.
But the operational stuff, yes, I think there’s huge value there.
My sense is we’re going to see the operational use case and the security use case move in parallel for a while and then bifurcate. They will both be healthy businesses. I feel like with security we’re selling to a more mature market because people know how to think about it and acquire it. The operational use case addresses a much less mature market because it’s a larger departure from the way we’re used to thinking of things.
How many sales folks do you have? And do some specialize on security and the others on the operations pitch?
My direct sales force is about 100 people and we only have one SKU, so it is up to them to position the product for the customer. But then, of course, we’ve got thousands of channel partners we sell through.