Flesh and blood metrics for IT success


Last week, I wrote about how and why network management is "cool again" based on a panel I moderated at Interop earlier this month. This week, I'm basing my column in part on another panel, as well as some active correspondence and dialog on multiple fronts.

The Interop panel was called "Is there a Single Metric for IT Success?" and was hosted by Alistair Croll from Coradiant. The panel included Eric Siegel from the Burton Group (as did my panel) and Peter Sevcik from NetForecast. The subsequent correspondence and dialog have been with multiple clients on quality of experience (QoE). The overarching theme connecting the two is, of course, how IT can most effectively and meaningfully set metrics to account for success - in satisfying its customers and in aligning with the business.

I should start by saying that Sevcik has developed a fairly compelling formula - Apdex - to capture performance values for enterprise applications in a single metric. I won't try to do full justice to it here, but it acknowledges the need for pervasive and observed insights into actual application response at the desktop or end station - which is of course where the user "experiences" the application service.
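To give a flavor of the idea (without claiming to reproduce Sevcik's work in full): Apdex buckets response times observed at the end station against a target threshold T - at or under T counts as "satisfied," up to 4T as "tolerating," anything slower as "frustrated" - and folds them into a single 0-to-1 score. A minimal Python sketch, with illustrative numbers of my own:

```python
def apdex(response_times, t):
    """Compute an Apdex score from desktop-observed response times (seconds).

    Samples <= t are 'satisfied', samples <= 4t are 'tolerating',
    and everything slower is 'frustrated'. The score runs from 0 to 1.
    """
    satisfied = sum(1 for r in response_times if r <= t)
    tolerating = sum(1 for r in response_times if t < r <= 4 * t)
    total = len(response_times)
    return (satisfied + tolerating / 2) / total if total else 0.0

# Illustrative samples: most requests are snappy, a few are painful.
samples = [0.8, 1.2, 2.5, 0.9, 6.0, 1.1, 9.5, 0.7]
print(f"Apdex(T=2s) = {apdex(samples, 2.0):.2f}")  # -> 0.75
```

The appeal is obvious: one number, anchored in what the user actually saw at the desktop.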

However, the panel's title promised a single metric for IT success as a whole. So I suggested that, just like any business, IT needs to assess itself in three overall areas of concern: quality, cost and demand.

The first two are perhaps self-evident to most in IT. The third - "demand" - may seem like a stretch. But in an era of accountability and business alignment, IT's ability to capture and anticipate demand for its service "products" not only helps account for costs but also helps plan more effective service offerings. It may also expose business or consumer behaviors that are unexpected or undesirable - or, conversely, desirable but not anticipated.

Having these insights positions IT as a truly proactive partner to the customers it serves. Not having insight into demand - beyond the raw assumption that if someone thinks they want a service, they probably do - leaves the door open to unused and wasteful service offerings, while neglecting trends and requirements for new services or extensions of existing ones.

I am happy to say that the panel recognized that there are other metrics for success along these lines, and so we redefined the focus around the implied topic - a single metric for success in assessing application service performance. And this brings me to some subsequent dialogs on QoE. More than one vendor has advocated the term "end-to-end QoE," and I've been inclined to challenge it.

Like Sevcik's Apdex, I would place QoE solidly at the intersection of the human being and the application service. While other parameters may reflect technical requirements that can impact QoE - and in some ways may even be more important in proactively managing the service so that experience is not degraded - they cannot be aligned in one-to-one fashion with QoE. Server or network latency, for instance, may well impact QoE, but in my opinion, only those metrics that "reside" where the end user actually experiences the application performance in all of its dimensions - response time, consistency of response time, look and feel, ease of use, etc. - specifically apply to QoE.
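If it helps to make the distinction concrete, here is a deliberately simple, hypothetical sketch of an experience-side summary: it consumes only response times observed at the end station and reports their average and their consistency, with no server or network counters in sight. The function name, fields and sample values are my own illustrations, not any vendor's product:

```python
import statistics

def experience_summary(desktop_samples):
    """Summarize response times as observed at the user's desktop.

    Returns the mean response time and a simple consistency measure
    (standard deviation) - both experience-side numbers, not
    server- or network-side counters.
    """
    mean_rt = statistics.mean(desktop_samples)
    consistency = statistics.pstdev(desktop_samples)  # lower is steadier
    return {"mean_response_s": round(mean_rt, 2),
            "consistency_s": round(consistency, 2)}

# Illustrative desktop-observed samples (seconds).
print(experience_summary([1.1, 1.3, 0.9, 4.2, 1.0]))
```

Whether such a summary is a "good" QoE metric is beside the point here; what matters is that every input is observed where the user sits.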

A failure to make this distinction opens the door to mixing up technical metrics and diagnostic detail with actual experienced service quality. From a service-level agreement planning perspective, it creates an environment where it's all too easy to confuse means with ends. The Mean Opinion Score (MOS) used for VoIP is an interesting case in point.

Many formulas for MOS are designed as vehicles for assessing, as accurately as possible, those technical metrics across the infrastructure that are most likely to impact end-user experience. As such, they are immensely valuable - but by my definition, they are not a pure-play example of QoE. These approaches to MOS are ways of controlling a service effectively - not open-ended assessments of actual user experience. MOS is also interesting in that its roots are in customer opinion - a five-point scale running from "imperceptible" quality issues to "very annoying" - which truly is about experience in all of its thorny subjectivity.
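To illustrate the means-versus-ends point: E-model-style MOS estimation starts from infrastructure measurements such as one-way delay and packet loss, rolls them into a transmission rating (R), and only then maps R onto the familiar 1-to-5 opinion scale. In the sketch below, the delay and loss impairment terms are deliberately crude placeholders of my own; only the final R-to-MOS mapping follows the published ITU-T G.107 curve:

```python
def estimate_mos(one_way_delay_ms, packet_loss_pct):
    """Estimate a VoIP MOS from network-side measurements.

    A simplified, illustrative take on E-model-style scoring: the delay
    and loss impairments are rough placeholders, not calibrated codec
    figures from the standard.
    """
    r = 93.2                                 # default transmission rating
    r -= one_way_delay_ms / 40.0             # placeholder delay impairment
    if one_way_delay_ms > 150:               # extra penalty past ~150 ms
        r -= (one_way_delay_ms - 150) / 10.0
    r -= 2.5 * packet_loss_pct               # placeholder loss impairment

    # ITU-T G.107 mapping from R to an estimated MOS (1.0 .. 4.5).
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

print(f"Estimated MOS: {estimate_mos(one_way_delay_ms=80, packet_loss_pct=1):.2f}")
```

The structure is the point: the inputs are infrastructure measurements, so the output is a controlled estimate of likely experience rather than an observation of it - which is exactly why I wouldn't call it pure QoE.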

All this is in no way to downgrade the importance of technical metrics, and certainly not of those metrics that enable IT to focus attention more effectively on the variables most likely to impact a service's performance. What I'm suggesting, though, is that there is a whole other world out there to account for - one vaster and more complex than even the largest network. And that world is the flesh-and-blood human being consuming IT services, with all of their idiosyncrasies, expectations and pressures. Good QoE metrics can only approximate concerns for that flesh-and-blood consumer, but insofar as they do so wisely, they are hugely valuable. And of course, there is that old-fashioned approach called "customer dialog." Proactive customer dialog targeted at capturing changing perceptions, requirements and needs is as much a part of QoE as observed response time at the desktop.
