For mission-critical SaaS applications to displace their locally accessed cousins, SaaS end-user response times must rival those of the on-premises incumbents. Achieving this goal is challenging and requires continuous end-user experience monitoring. SaaS vendors must use that performance information to continuously improve response times until they meet the same performance standards as on-premises software.
The performance of SaaS applications is subject to more variables than on-premises solutions. These variables include the performance of:
- the application itself,
- the data center infrastructure on which the application runs,
- all of the network segments along the path between application and user, and,
- the browser/client software on the user's computer or mobile device.
Users are likely to access a SaaS app through a variety of means, such as from a smartphone on a cellular data network, from a desktop computer with a broadband connection, or from a tablet connected to a public Wi-Fi hotspot.
Not only do these individual variables contribute to how a SaaS application performs, interactions among them also affect overall performance. For example, if an application has a high turn count (many request/response exchanges per transaction) and is located far from a user, lots of round trips over long distances will slow application response times.
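That interaction can be sketched with a back-of-the-envelope model: each turn costs at least one network round trip, so total delay grows with the product of the two. The turn counts and latencies below are illustrative figures, not measurements of any particular application.

```python
def network_delay_ms(turns: int, round_trip_ms: float) -> float:
    """Lower bound on network delay for one transaction: each
    request/response turn costs at least one round trip."""
    return turns * round_trip_ms

# A chatty screen needing 50 turns is barely noticeable on a 5 ms LAN...
local = network_delay_ms(50, 5)      # 250 ms
# ...but the same screen over a 100 ms cross-country path takes 5 seconds,
# before the server has done any work at all.
remote = network_delay_ms(50, 100)   # 5000 ms
```

The model ignores server processing time and bandwidth, which is exactly the point: turn count alone can dominate response time once the user is far from the data center.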
But how do you know if a SaaS application is really performing up to snuff? Many people assume that if the data center infrastructure is working, users must be happy. Maybe--but then again, maybe not. The data center infrastructure must be adequately sized, scale easily with demand, and be load-balanced to avoid bottlenecks. Hosting providers are generally quite good at monitoring whether servers are delivering what's expected of them. But although data center infrastructure is important, it is only one piece of a bigger picture.
To their chagrin, SaaS vendors often learn about poor performance from help desk complaints, flaming blog postings, negative tweets, and bad online reviews.
So what can a SaaS provider do?
We suggest continuous end-user experience monitoring from pre-production onward, so you can baseline your application's performance and gather information to improve it at every step. This can be done by simulating user behavior using active synthetic transactions--an approach taken by vendors like Compuware, Keynote Systems and CA, through its Nimsoft line--or by passively monitoring the actual user experience, an approach offered by the likes of New Relic and AppDynamics.
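The active approach can be sketched in miniature: a scheduler repeatedly runs a scripted user transaction and records how long it takes. The function names and the 2-second threshold below are illustrative assumptions, not any vendor's actual API.

```python
import time

def run_synthetic_transaction(transaction, threshold_s=2.0):
    """Execute one scripted user transaction and classify the result.

    `transaction` is any callable that performs a simulated user
    action (e.g. log in and load the dashboard) and raises on failure.
    The 2-second threshold is an illustrative service-level target.
    """
    start = time.monotonic()
    try:
        transaction()
    except Exception as exc:
        return {"status": "error", "detail": str(exc)}
    elapsed = time.monotonic() - start
    return {
        "status": "ok" if elapsed <= threshold_s else "slow",
        "elapsed_s": elapsed,
    }

# Stand-in for a real scripted transaction against the SaaS app.
def fake_login():
    time.sleep(0.01)

result = run_synthetic_transaction(fake_login)
```

Run on a schedule from several geographic locations, results like these build the performance baseline described above--without waiting for real users to hit a slowdown first.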
In our next posting, we'll describe how the Apdex application performance index can be a useful tool in understanding and improving SaaS performance.