
ExtraHop mines the network to glean operations intelligence

By John Dix, Network World
October 04, 2013 03:12 PM ET

Network World - Jesse Rothstein, formerly the lead architect of F5's flagship product line, founded ExtraHop in 2007 to build products that derive IT operations intelligence from data gleaned from the network. Network World Editor in Chief John Dix recently caught up with Rothstein for an update on the company and what it has learned about things like virtual packet loss (hint: it can be the bane of highly virtualized environments).

How does your background at F5 help you at ExtraHop?

My co-founder Raja Mukerji and I were both at F5 for many years. And what we did at F5 was bring application awareness and application fluency to what was the load balancer, and that created a whole new product category called the application delivery controller. Over at ExtraHop, we leverage that same domain expertise in high-speed packet processing and application fluency, but we’ve brought it to a new space, much more on the IT operations side, and we’re starting to call this IT operations intelligence.  

Raja and I had conversations with IT organizations and people we'd worked with in the past, and it became apparent to us that megatrends like server virtualization, where VMs spin up, spin down and jump across the data center, and agile development, where new versions of applications roll out every two weeks or every two days, were producing an unprecedented level of scale, complexity and dynamism. The previous generation of tools and technologies that companies use to manage these environments is no longer tenable. And that's if they have those tools at all. More often than not, companies just throw smart people at the problem of figuring out what's going on.

So I would say, No.1, the situation has moved beyond the capability of just throwing smart people at the problem, pulling a few all-nighters and ordering pizza. And No.2, the previous generation of tools was built for much smaller environments that were not dynamic. Those tools basically start off as bricks: you parachute in teams of sales engineers, systems engineers and consultants to configure them to provide the visibility you need. Then if the environment changes, rather than automatically detecting the changes, you have to rinse and repeat that process.

So we started with the notion that these IT megatrends were occurring, that we had the domain expertise to solve some of the problems around scale and dynamism, and that we could provide visibility into these environments.

What are you lumping into the current generation of tools?

This is a taxonomy I’ve been thinking about for a while. In enterprise IT there are four or so sources of data that you can use to derive some intelligence about your environment.

So No.1, we have machine data, to use a term that Splunk popularized. Machine data includes log files, SNMP and WMI, and all of these data sources are largely unstructured. Splunk and others like them realized that enterprises produce a lot of this unstructured machine data without doing much with it, so they built a platform to index it, archive it and analyze it to derive some intelligence from it.
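To make that first category concrete, here is a minimal sketch, in Python, of the core idea behind indexing unstructured machine data: impose no schema up front, just make every token searchable after the fact. The log lines and query are hypothetical, and this illustrates the general technique, not Splunk's actual implementation.

    from collections import defaultdict

    # Hypothetical syslog-style lines for illustration; real machine data
    # is far messier and arrives as a stream, not a fixed list.
    LOG_LINES = [
        "2013-10-04T15:12:01 web01 ERROR checkout timeout after 30s",
        "2013-10-04T15:12:03 web02 INFO checkout completed in 420ms",
        "2013-10-04T15:12:07 web01 ERROR checkout timeout after 30s",
    ]

    def build_index(lines):
        """Build an inverted index mapping each token to the set of line
        numbers where it appears, so ad-hoc searches can be answered
        long after the data was collected."""
        index = defaultdict(set)
        for lineno, line in enumerate(lines):
            for token in line.lower().split():
                index[token].add(lineno)
        return index

    index = build_index(LOG_LINES)

    # Ad-hoc query, decided after the fact: which lines mention
    # both "error" and "checkout"?
    for lineno in sorted(index["error"] & index["checkout"]):
        print(LOG_LINES[lineno])

The point of the sketch is the order of operations: the data is collected and indexed first, with no fixed schema, and the questions are asked later, which is what makes the approach a fit for unstructured machine data.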
