Like many in the industry - although, perhaps, not that many - I was at NetWorld+Interop in Las Vegas for the last week in April. Among the countless meetings and chances to peek across the show floor, two panels were sandwiched in - both of which I had the privilege of moderating. One was about predictive technologies, the other about self-healing networks.

Both panels faced a similar set of questions - with some good, solid discussion points. For this and next week's column, I thought I'd share some of the highlights.

First, let me introduce the panelists, by company. For self-healing networks, they were:

* Aprisma Technologies
* Cisco
* NetQoS
* Packeteer
* Vieo

And for predictive technologies, they were:

* Entuity
* Micromuse
* NetScout
* Opnet
* SMARTS

The panels began with a definitional discussion - what do we (collectively and individually) believe "self-healing networks" and "predictive technologies" to be?

Virtually all of the panelists agreed on critical objectives for self-healing networks - such as supporting service levels more effectively, building redundancy into networks (or, as Cisco would stress, more broadly, "resiliency"), and finding more ways to predict requirements and take automated actions. There was some real discussion around what constitutes "networks." Generally, the panel felt strongly (Aprisma set the initial tone) that "networks" in this sense are no longer a Layer 1-3 discussion but should ideally represent Layers 1-7 of the OSI stack, from physical transport through application. Why?
Because, in the end, a "self-healing" network without sensitivity to application performance issues can become a "who cares?" or, even worse, a counterproductive overinvestment.

Someone in the audience suggested that this direction might be best called a "self-healing infrastructure." This might suggest a new direction of growth for Interop overall - as the "networked infrastructure" redefines the meaning of "network."

Within predictive technologies, the definitional discussion, not surprisingly, focused more on technologies per se. There was also more contention among members of the second panel, as many tended to answer in terms of their own technological investments.

However, one answer, from Opnet, posited two overall categories for "predictive." Using my own words, these are: pattern-recognition-related analyses, primarily for diagnosing problems before they occur; and multidimensional "what/if" analyses - for example, "Is my network ready for VoIP?" - to enable optimization, operational planning, business/service planning and business assessment. I believe these two categories work well as overall "submarkets," each with a very distinct set of technologies and benefits. This is true even if some vendors, such as NetQoS, address both types of conditions within a single suite.

The next question for both panels was "What are the three key technologies for (self-healing/predictive) management?" The hands-down winner in both panels was some form of analytics. Enterprise Management Associates has already published reports stating that this is the "Age of Analytics" within the management marketplace (in which "analytics" is the single key differentiator across brands), and our panelists generally agreed.

In self-healing networks, the requirements for interoperability and standards were a strong second.
Vieo made an impassioned plea for customers to play a more insistent role in promoting "maturity" in an industry that so far has been all too casual about working together. Packeteer and other panelists also stressed the need for some form of automated action, whether it's rerouting traffic, QoS, application acceleration, or simply resetting a threshold.

On the predictive technologies panel, a very clear technological pattern emerged in three phases. The first phase involved some form of intelligent interaction with the infrastructure to gather element-specific and other information (e.g., application traffic), which variously focused on either intelligence (as with SMARTS) or policy and breadth (as it did for Micromuse). The next phase generally involved some way of representing and storing information with strong contextual relevance to such parameters as connectivity and topology, configuration, and/or traffic flow and usage. Whether this was put forward in terms of CIM (the Distributed Management Task Force's Common Information Model), NetScout's Common Data Model (CDM) for service traffic across the network, or Entuity's new StormWorks (including topology-, performance- and availability-related information), I was struck by the strong agreement that some form of open, contextually sensitive data store was required.

Upon this foundation, the third phase - analytics in all shapes and flavors - could unfold with renewed power and relevance. I'm not sure we had a quorum, but for the moment, we had the beginnings of a compelling, industrywide architecture.

Next week we'll complete our "coverage" of these panel discussions, looking at such questions as buyers' concerns - "Who buys and who cares?" - and a few views (including my own) about what's real, what's still in the future, and what landmarks to look for.