Why Baron Capital uses Netuitive for performance management

The software stands apart from the rest, network director says

Network/Systems Management Alert By Beth Schultz, Network World
February 02, 2011 06:02 AM ET

As enterprises continue the inexorable move to the virtual data center and cloud computing, the performance management problems that have long hounded IT managers get all the more irksome.

In a recent interview, Henry Mayorga, director of network topology for Baron Capital, a New York investment firm, described the problem this way:

"You've got very rich data coming at you -- a ton of information from across your five layers: the physical network, the physical servers, the VMware operating system, the operating system of the guest machines and, on top of all that, the applications. And hopefully, just hopefully, you have enough wherewithal to understand that data and be able to figure out where the problem is coming in or, better yet, how well it all is performing."

The trouble is, as much as network and systems management vendors talk about providing end-to-end visibility and comprehensive physical/virtual/cloud management, most haven't come anywhere near the heart of the issue, Mayorga says.

"We've got all these disparate systems, each sending data in different formats, and then someone expects me, the network manager, to put it all together ... and -- here's the kicker -- decide what a good performance parameter is," he says.

Say, for example, Baron's servers are running at 65% CPU, and Mayorga decides he wants an alert if utilization hits 75%. "That sounds reasonable, right? But what if 75% CPU is OK at week's end when we're doing more processing or at month's end when we do all this reporting or at quarter's end when we have financial statements? What if 85% CPU is perfectly acceptable at those times? You'll get an alert, but it's a false positive because the servers are doing work they're supposed to be doing," he says. "OK, so maybe 95% CPU is the threshold -- but by that time you're already in big trouble because everything is slowing down."
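
Mayorga's point is easy to demonstrate in code. The sketch below is a hypothetical Python illustration, not anything Baron Capital or Netuitive actually runs: it contrasts a fixed 75% alert with a calendar-aware one that raises the bar during the expected end-of-week, end-of-month and end-of-quarter crunch. All dates, thresholds and helper names are invented for the example.

```python
# Hypothetical sketch: why one static CPU threshold misfires.
# A fixed 75% alert fires during legitimate end-of-period load,
# while a calendar-aware threshold tolerates the expected work.

from datetime import date
import calendar

STATIC_THRESHOLD = 75.0      # percent CPU
PERIOD_END_THRESHOLD = 85.0  # percent CPU during expected crunch

def is_period_end(day: date) -> bool:
    """True near the end of a week, month, or quarter."""
    days_in_month = calendar.monthrange(day.year, day.month)[1]
    month_end = day.day >= days_in_month - 2
    quarter_end = month_end and day.month in (3, 6, 9, 12)
    week_end = day.weekday() == 4  # Friday
    return week_end or month_end or quarter_end

def alert(cpu_pct: float, day: date) -> bool:
    # Expected heavy processing raises the bar instead of paging anyone.
    limit = PERIOD_END_THRESHOLD if is_period_end(day) else STATIC_THRESHOLD
    return cpu_pct > limit

# The same 80% reading is routine on quarter-end day, anomalous otherwise:
print(alert(80.0, date(2011, 3, 31)))  # False -- expected reporting load
print(alert(80.0, date(2011, 2, 2)))   # True  -- unusual on an ordinary day
```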

And CPU is just one of thousands of parameters that must be monitored in a system. Worse yet, those parameters don't arrive in a normalized form, so figuring out whether everything is working properly is nearly impossible, Mayorga adds.
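
To make that normalization chore concrete, here is a minimal, hypothetical sketch: samples from differently shaped sources get mapped onto one common record before any comparison or analysis can happen. The field names and unit conversions are illustrative assumptions, not real VMware or WMI schemas.

```python
# Hypothetical sketch of the normalization chore: samples from
# differently shaped sources get mapped onto one common record.
# Field names and unit conversions are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Metric:
    source: str       # e.g. "vmware", "wmi", "snmp"
    host: str
    name: str         # canonical metric name
    value: float      # canonical unit -- here, percent
    timestamp: float  # epoch seconds

def from_wmi(sample: dict) -> Metric:
    # Assumed WMI-style counter, already expressed as a percentage.
    return Metric("wmi", sample["Host"], "cpu_util_pct",
                  float(sample["PercentProcessorTime"]), sample["ts"])

def from_vmware(sample: dict) -> Metric:
    # Assumed VMware-style counter in hundredths of a percent.
    return Metric("vmware", sample["vm"], "cpu_util_pct",
                  sample["cpu.usage"] / 100.0, sample["ts"])

print(from_vmware({"vm": "ny-app01", "cpu.usage": 7850, "ts": 0.0}))
# Metric(source='vmware', host='ny-app01', name='cpu_util_pct',
#        value=78.5, timestamp=0.0)
```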

Enter Netuitive, which offers an analytics-based, self-learning approach to performance management. "It has the right idea," Mayorga says.

"I feed Netuitive my easy data, stuff coming from VMware, my WMI interfaces, my network. It takes this data and builds correlations for me. ... It's one of the few products that allows you some chance of understanding the normality of the performance of a very complex system," Mayorga says.

After all, he adds, "it's not for me to decide what the good performance characteristic is -- let the system define what the performance characteristics are, map those performance characteristics, store them over a period of time, look at them historically and now you have a chance of looking at how your systems should be performing over a period of time and if something deviates from that, now you know you have a problem."
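
The approach Mayorga describes, letting stored history define "normal" and alerting only on deviation from it, can be sketched in a few lines. The bucketing scheme, minimum sample count and three-sigma tolerance below are illustrative assumptions, not how Netuitive actually models behavior.

```python
# Hypothetical sketch of a self-learning baseline: store observed
# values per time bucket (say, hour-of-week), let the history define
# "normal," and alert only on deviation from that learned norm.

from collections import defaultdict
from statistics import mean, stdev

class Baseline:
    def __init__(self, tolerance: float = 3.0):
        self.history = defaultdict(list)  # bucket -> observed values
        self.tolerance = tolerance        # standard deviations allowed

    def observe(self, bucket: int, value: float) -> None:
        self.history[bucket].append(value)

    def is_anomaly(self, bucket: int, value: float) -> bool:
        past = self.history[bucket]
        if len(past) < 5:  # too little history to judge yet
            return False
        m, s = mean(past), stdev(past)
        return abs(value - m) > self.tolerance * max(s, 1e-9)

# The same 88% CPU reading is normal in a busy quarter-end bucket
# and anomalous in a quiet midweek bucket:
b = Baseline()
for v in [84, 86, 88, 85, 87, 89]:
    b.observe(167, v)  # bucket 167: quarter-end crunch hours
for v in [60, 62, 61, 63, 59, 64]:
    b.observe(42, v)   # bucket 42: ordinary midweek hours
print(b.is_anomaly(167, 88))  # False -- within the learned norm
print(b.is_anomaly(42, 88))   # True  -- far outside the learned norm
```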

Is Netuitive, now in version 5.0, completely there? "Not by a long shot," Mayorga says. "But at least Netuitive has the right idea and is doing the hard work."

Schultz is a longtime IT journalist.
