Network World - This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
Despite rapid evolution in the application performance management (APM) market, few enterprise IT organizations would say they have sufficiently solved their application performance problems. If anything, the complexity challenges posed by virtualization, agile development practices, multi-tier application architectures and other IT mega-trends are outpacing the capabilities of legacy APM products.
In light of this, IT organizations must judiciously evaluate the effectiveness of technologies and management practices they use to manage application performance. One conventional APM practice that deserves some scrutiny is the derivation of application health and performance data from host-based instrumentation.
Most APM technologies rely on agents deployed on servers or within application components to gather diagnostic data. These agents typically perform byte-code instrumentation or call-stack sampling within the Java Virtual Machine (JVM) or the .NET Common Language Runtime (CLR) -- basically using profiling techniques common to software development tools.
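The call-stack sampling technique described above can be illustrated with a minimal sketch. This is not any vendor's agent; the class name `StackSampler` and the sampling parameters are hypothetical, but the underlying JDK API (`Thread.getAllStackTraces`) is the real, low-tech version of what in-process profilers do: periodically snapshot every thread's stack and tally which methods are executing.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of in-JVM call-stack sampling (hypothetical class, not a
// real agent): periodically snapshot every thread's stack and count how
// often each method appears on top, approximating where CPU time is spent.
public class StackSampler {
    public static Map<String, Integer> sample(int samples, long intervalMillis)
            throws InterruptedException {
        Map<String, Integer> hotMethods = new HashMap<>();
        for (int i = 0; i < samples; i++) {
            for (Map.Entry<Thread, StackTraceElement[]> entry
                    : Thread.getAllStackTraces().entrySet()) {
                StackTraceElement[] stack = entry.getValue();
                if (stack.length == 0) continue;  // thread with no frames yet
                String top = stack[0].getClassName() + "."
                           + stack[0].getMethodName();
                hotMethods.merge(top, 1, Integer::sum);  // tally top frame
            }
            Thread.sleep(intervalMillis);
        }
        return hotMethods;
    }

    public static void main(String[] args) throws InterruptedException {
        // Print the five most frequently sampled methods.
        Map<String, Integer> counts = sample(20, 5);
        counts.entrySet().stream()
              .sorted((a, b) -> b.getValue() - a.getValue())
              .limit(5)
              .forEach(e -> System.out.println(e.getValue() + "  " + e.getKey()));
    }
}
```

Commercial agents go further, rewriting byte code at class-load time (via `java.lang.instrument`) to insert timing probes, but the data they recover is of this same shape: method names, frequencies, and durations.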
Certainly, this practice can yield useful information for managing application performance, including memory usage and the frequency and duration of function calls. However, this legacy APM approach suffers from five inherent drawbacks that make it increasingly untenable in today's IT environments.
* Susceptibility to changes in application code, architecture and environment. During test and development, software engineers often use profilers to locate hot spots and remove bottlenecks in their code. While annotated source code and deep call stacks serve developers well, they are less useful to operations teams. In production, operations teams need to answer higher-level questions about application health and performance. To provide this view, agent-based APM tools require complicated configurations that are sensitive to changes in the application code, architecture or environment.
This limitation may not have been a serious problem in the static environments of the past, but today's applications undergo ongoing, iterative development, use loosely coupled multi-tier architectures, run on heterogeneous software and hardware platforms, and operate in virtualized environments where virtual machines are spun up, spun down and migrated across the data center. With such rapid change at the application tier, host-based data gathering requires continual recertification and redeployment to ensure that it is functioning properly.
* System and network overhead. APM vendors that rely on host-based data gathering claim that their approach imposes "minimal overhead" or "low overhead" on system performance, yet these vendors seldom offer guarantees. While the actual overhead incurred depends on the granularity of the data gathered and on the application itself, less than 5% performance overhead is an optimistic general estimate.
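To see where that overhead comes from, consider what injected instrumentation adds to every monitored call. The sketch below is a rough, hypothetical micro-benchmark (the class `OverheadDemo` and its probe logic are illustrative, not any vendor's implementation): it compares a trivial method against the same method with the kind of per-call bookkeeping, timestamping and counter updates, that byte-code instrumentation typically injects. Absolute numbers vary widely by JVM, JIT behavior and workload; the point is only that per-call cost is nonzero and grows with instrumentation granularity.

```java
// Hypothetical micro-benchmark: the same computation with and without
// injected per-call probes. Illustrates why instrumentation overhead
// depends on data granularity; numbers here are not representative of
// any particular APM product.
public class OverheadDemo {
    static long counter;        // injected call counter
    static long elapsedNanos;   // injected per-call timing accumulator

    static long plain(long x) { return x * 31 + 7; }

    static long instrumented(long x) {
        long start = System.nanoTime();            // injected entry probe
        long result = x * 31 + 7;                  // original method body
        elapsedNanos += System.nanoTime() - start; // injected exit probe
        counter++;
        return result;
    }

    static long run(boolean instr, int iters) {
        long acc = 0;
        long t0 = System.nanoTime();
        for (int i = 0; i < iters; i++) {
            acc += instr ? instrumented(i) : plain(i);
        }
        long t1 = System.nanoTime();
        System.out.println((instr ? "instrumented: " : "plain:       ")
                + (t1 - t0) / 1_000_000.0 + " ms (acc=" + acc + ")");
        return t1 - t0;
    }

    public static void main(String[] args) {
        int iters = 5_000_000;
        run(false, iters);  // warm-up passes so the JIT compiles both paths
        run(true, iters);
        long plainNs = run(false, iters);
        long instrNs = run(true, iters);
        System.out.printf("relative overhead ~ %.1f%%%n",
                100.0 * (instrNs - plainNs) / Math.max(plainNs, 1));
    }
}
```

Multiply that per-call cost across every monitored method in a busy application, plus the network traffic of shipping the collected metrics off the host, and the cumulative burden becomes workload-dependent rather than a fixed, guaranteeable percentage.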