Predictive technologies and self-healing networks

Opinion
May 12, 2003
Data Center

Report from two panels at N+I

Like many in the industry – although, perhaps, not that many – I was at NetWorld+Interop in Las Vegas for the last week in April. Among the countless meetings and chances to peek across the show floor, two panels were sandwiched in – both of which I had the privilege of moderating. One was on predictive technologies, the other on self-healing networks.

Both panels faced a similar set of questions – with some good, solid discussion points. For this and next week’s column, I thought I’d share some of the highlights.

First, let me introduce the panelists, by company. For self-healing networks, they were:

* Aprisma Technologies

* Cisco

* NetQoS

* Packeteer

* Vieo

And for predictive technologies, they were:

* Entuity

* Micromuse

* NetScout

* Opnet

* SMARTS

The panels began with a definitional discussion – what do we (collectively and individually) believe “self-healing networks” and “predictive technologies” to be?

Virtually all of the panelists agreed on critical objectives for self-healing networks – such as supporting service levels more effectively, building redundancy into networks (or as Cisco would stress, more broadly, “resiliency”), and finding more ways to predict requirements and take automated actions. There was some real discussion around what constitutes “networks.” Generally, the panel felt strongly (Aprisma set the initial tone) that “networks” in this sense are no longer a Layer 1-3 discussion, but should ideally represent Layers 1-7 of the OSI stack, from physical transport through application. Why? Because in the end, a “self-healing” network without sensitivity to application performance issues can become a “who cares?” or even worse – a counterproductive overinvestment.

Someone in the audience suggested that this direction might be best called a “self-healing infrastructure.” This might suggest a new direction of growth for Interop overall – as the “networked infrastructure” redefines the meaning of “network.”
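To make the Layers 1-7 point concrete, here is a minimal sketch, entirely my own illustration rather than anything a panelist showed, of a self-healing check that keys off an application-level symptom (response time against a service-level target) instead of simple device up/down state. The helper names and the threshold are hypothetical placeholders, not a real product API.

```python
# My own illustration of an application-aware "self-healing" check:
# remediation is triggered by a Layer-7 symptom (response time vs. a
# service-level target), not just by link or device state. The helper
# names below are hypothetical placeholders.

RESPONSE_TIME_SLA_MS = 500.0   # assumed service-level target


def check_app_response_time(service: str) -> float:
    """Stand-in for a real application probe; returns response time in ms."""
    return 620.0  # placeholder measurement


def reroute_traffic(service: str) -> None:
    """Stand-in for an automated action (rerouting, QoS change, etc.)."""
    print(f"Taking automated action for {service}: rerouting traffic")


def heal_if_degraded(service: str) -> bool:
    """One pass of the loop: act on the application symptom if the SLA slips."""
    measured = check_app_response_time(service)
    if measured > RESPONSE_TIME_SLA_MS:
        reroute_traffic(service)
        return True
    return False


if __name__ == "__main__":
    heal_if_degraded("order-entry")  # would normally run on a schedule
```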

Within “Predictive Technologies,” the definitional discussion, not surprisingly, focused more on technologies per se. There was also more contention among the members of the second panel, as many tended to answer in terms of their own technological investments.

However, one answer, from Opnet, posited two overall categories for “Predictive.” In my own words, these are: pattern-recognition-related analyses, primarily for diagnosing problems before they occur; and multidimensional “what-if” analyses – for example, “Is my network ready for VoIP?” – to enable optimization, operational planning, business/service planning and business assessment. I believe these two categories hold up well as overall “submarkets,” each with a very distinct set of technologies and benefits. This is true even if some vendors, such as NetQoS, address both types of analysis within a single suite.
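Here is a rough sketch of those two categories in code, entirely my own illustration with made-up numbers and an assumed per-call bandwidth figure, not anything Opnet or the other panelists presented. The first function projects a utilization trend forward to flag a threshold crossing before it happens; the second is a crude “Is my network ready for VoIP?” what-if check.

```python
# Two flavors of "predictive," sketched as plain Python. All figures are
# illustrative assumptions (hourly utilization samples, an 80 kbps-per-call
# estimate, a 75% headroom rule), not vendor specifications.

def hours_until_threshold(samples: list[float], threshold: float) -> float | None:
    """Fit a straight line to hourly utilization samples and estimate
    how many hours remain before the threshold is crossed."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    if slope <= 0:
        return None  # utilization is flat or falling; no crossing predicted
    return (threshold - samples[-1]) / slope


def ready_for_voip(link_capacity_kbps: float, current_load_kbps: float,
                   calls: int, kbps_per_call: float = 80.0) -> bool:
    """What-if: does the link have headroom for the planned call volume?"""
    return current_load_kbps + calls * kbps_per_call <= 0.75 * link_capacity_kbps


# Example: utilization climbing about two points per hour toward an 80% ceiling.
print(hours_until_threshold([60, 62, 64, 66, 68], threshold=80))   # -> 6.0 hours
print(ready_for_voip(link_capacity_kbps=1544, current_load_kbps=900, calls=10))
```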

The next question for both panels was “What are the three key technologies for (self-healing/predictive) management?” The hands-down winner in both panels was some form of analytics. Enterprise Management Associates has already published reports stating that this is the “Age of Analytics” within the management marketplace (in which “analytics” is the single key differentiator across brands), and our panelists generally agreed.

In “Self-Healing Networks,” the requirement for interoperability and standards was a strong second. Vieo made an impassioned plea for customers to play a more insistent role in promoting “maturity” in an industry that so far has been all too casual about working together. Packeteer and other panelists also stressed the need for some form of automated action, whether that means rerouting traffic, adjusting QoS, accelerating applications, or simply resetting a threshold.

On the Predictive Technologies panel, a very clear technological pattern emerged in three phases. The first phase involved some form of intelligent interaction with the infrastructure to gather element-specific and other information (e.g., application traffic), which variously focused on intelligence (as with SMARTS) or on policy and breadth (as with Micromuse). The next phase generally involved some way of representing and storing information with strong contextual relevance to such parameters as connectivity and topology, configuration, and/or traffic flow and usage. Whether this was put forward in terms of CIM (the Distributed Management Task Force’s Common Information Model), NetScout’s Common Data Model (CDM) for service traffic across the network, or Entuity’s new StormWorks (including topology, performance and availability-related information), I was struck by the strong agreement that some form of open, contextually sensitive data store was required.

Upon this foundation, the third phase – analytics in all shapes and flavors – could unfold with renewed power and relevance. I’m not sure if we had a quorum, but for the moment, we had the beginnings of a compelling, industrywide architecture.
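As a thought experiment on phases two and three, here is a toy version of that kind of contextually sensitive store. It is loosely CIM-flavored but entirely my own sketch, not the DMTF CIM schema, NetScout’s CDM, or Entuity’s StormWorks; the point is simply that topology and performance data live together where an analytics layer can reason over both.

```python
# A toy "contextually sensitive data store": topology (connectivity) and
# performance data kept side by side. Class names are my own illustration.

from dataclasses import dataclass, field


@dataclass
class Interface:
    name: str
    utilization_pct: float = 0.0          # performance data kept in context


@dataclass
class Device:
    name: str
    interfaces: dict[str, Interface] = field(default_factory=dict)


@dataclass
class Link:
    a: tuple[str, str]                    # (device, interface) endpoint
    b: tuple[str, str]


@dataclass
class Topology:
    devices: dict[str, Device] = field(default_factory=dict)
    links: list[Link] = field(default_factory=list)

    def neighbors(self, device: str) -> set[str]:
        """Connectivity context an analytics layer can reason over."""
        out: set[str] = set()
        for link in self.links:
            if link.a[0] == device:
                out.add(link.b[0])
            if link.b[0] == device:
                out.add(link.a[0])
        return out


# Example: two routers joined by one link; analytics can ask "who is adjacent
# to r1?" and correlate that with the utilization stored alongside it.
topo = Topology()
topo.devices["r1"] = Device("r1", {"ge0": Interface("ge0", 72.0)})
topo.devices["r2"] = Device("r2", {"ge1": Interface("ge1", 35.0)})
topo.links.append(Link(("r1", "ge0"), ("r2", "ge1")))
print(topo.neighbors("r1"))  # {'r2'}
```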

Next week we’ll complete our “coverage” of these panel discussions, looking at such questions as buyers’ concerns – “Who buys and who cares?” – and a few views (including my own) about what’s real, what’s still in the future, and what landmarks to look for.