This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
Capacity management was relatively easy when workloads grew incrementally, but those days are gone. Customer-facing Web and mobility services can spike unpredictably, and Big Data workloads can quickly overwhelm existing capacity. In this new world of wild workload fluctuations, IT has to do a better job of managing capacity.
Today many IT groups are manually collecting performance data from siloed tools and systems. A virtualization administrator will gather metrics from their virtualization vendor’s operations platform, a hardware administrator will leverage a hardware-monitoring platform, and so on. Capacity management tends to be done on an ad hoc, inconsistent basis. As a result, if a CIO is looking to make plans or report on capacity, he or she may have multiple teams delivering many different reports.
Ultimately, without comprehensive, effective capacity management, IT organizations are flying blind and working in reactive mode. This not only makes it difficult to manage current infrastructure and capacity demands, but significantly hinders the organization’s ability to support emerging requirements and initiatives.
Here are six key steps to help you advance your capacity management objectives:
* Step 1: Establish a Unified View of Component Capacity Management Data: To move past the siloed approaches of the past, it’s vital that IT organizations establish a central archive and management interface for all components and elements across the enterprise. Gathering complete data sets is critical: if gaps exist, they can mask major spikes in demand and introduce unnecessary risk into your decision-making processes.
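As a rough sketch of this step (the feed layout, component names and metric fields below are hypothetical, not tied to any particular monitoring product), a unified archive can start as simply as merging per-silo metric records by component and flagging the gaps that would otherwise mask demand spikes:

```python
from collections import defaultdict

def merge_silo_metrics(*silo_feeds):
    """Combine metric records from siloed tools into one archive keyed by component."""
    archive = defaultdict(dict)
    for feed in silo_feeds:
        for record in feed:
            component = record["component"]
            # Later feeds fill in missing metrics rather than overwrite existing ones.
            for key, value in record.items():
                if key != "component":
                    archive[component].setdefault(key, value)
    return dict(archive)

def find_gaps(archive, required=("cpu_pct", "mem_pct", "disk_pct")):
    """Flag components missing any required metric -- gaps can hide demand spikes."""
    return {c: [m for m in required if m not in metrics]
            for c, metrics in archive.items()
            if any(m not in metrics for m in required)}
```

For instance, merging a virtualization feed that reports CPU and memory with a hardware feed that reports disk yields one complete record per VM, and any host covered by only one tool shows up immediately in the gap report.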
* Step 2: Establish Application/Service Capacity Management Capabilities: Once you have established a central, enterprise-wide archive of component metrics, tap into configuration management databases (CMDBs) or equivalent systems and combine configuration, dependency and relationship data for workloads, applications and complete business services. Use this information to map the relationship of business services to their associated infrastructure components. By understanding business service workloads and how they correlate to the usage of specific resources, IT executives can move from component capacity management to understanding and evaluating capacity at an enterprise-wide business service level.
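A minimal illustration of that service-to-infrastructure mapping, assuming CMDB relationship data has been reduced to a simple "depends on" dictionary (the service and component names here are invented):

```python
def components_for_service(service, depends_on):
    """Walk CMDB-style dependency records to find every infrastructure
    component a business service ultimately relies on.

    depends_on maps each item to the items it directly depends on.
    """
    seen, stack = set(), [service]
    while stack:
        item = stack.pop()
        for dep in depends_on.get(item, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen
```

The transitive walk matters: a business service typically depends on applications, which depend on VMs, which depend on hosts, and capacity decisions at the service level need the whole chain.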
* Step 3: Leverage Scenario Planning Capabilities: “What-if” modeling capabilities can help you 1) understand the impact of hardware upgrades, virtualization efforts or cloud migration initiatives, and 2) assess the impact of planned growth or changes in service demand, identify trends, discover bottleneck components and evaluate remediation efforts. Leverage application performance management data to automate workload definitions and profiles, and feed real transactional data into your models. That lets you right-size infrastructure environments for workload peaks, taking into account throughput and response times while reducing the risk of capacity-related performance issues.
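A bare-bones what-if model, under the simplifying assumption that utilization scales linearly with demand (real models would also account for queuing effects and nonlinear saturation near capacity):

```python
def what_if(utilization_pct, growth_factor, threshold_pct=80.0):
    """Project component utilization under a demand-growth scenario and
    flag the components that would exceed a safe-capacity threshold."""
    projected = {c: u * growth_factor for c, u in utilization_pct.items()}
    bottlenecks = sorted(c for c, u in projected.items() if u > threshold_pct)
    return projected, bottlenecks
```

Running the model with a 50% growth scenario, for example, shows a database tier at 60% utilization crossing the 80% threshold while the web tier stays comfortable, pointing remediation efforts at the right component.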
* Step 4: Leverage Business Data: Continue to establish the visibility needed to do capacity management of business services. In addition to IT data, start factoring business data, such as sales forecasts and hiring plans, into capacity management. By comparing changing workloads against changes in the business environment, you can bring entirely new levels of intelligence to bear in understanding evolving capacity demands. For example, IT managers can more accurately forecast how a 20% growth in new customer wins will affect demand on the organization’s order tracking system, or how a huge increase in the number of Web portal users puts the user experience at risk as performance degrades.
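The 20% customer-growth example can be sketched as follows, assuming transaction demand scales roughly linearly with the business driver (a deliberately naive assumption; real demand curves rarely scale perfectly linearly):

```python
def forecast_demand(current_tps, current_customers, forecast_customers):
    """Project transaction load from a business forecast, assuming demand
    scales linearly with the business driver (e.g. customer count)."""
    return current_tps * (forecast_customers / current_customers)
```

With 10,000 customers driving 500 transactions per second today, a sales forecast of 12,000 customers projects to roughly 600 transactions per second, a number that can then feed the what-if models from Step 3.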
* Step 5: Leverage Data from Across the Technology Market: Do detailed what-if scenario analysis of how both current and new technologies will accommodate emerging service requirements, so organizations can plan technology migrations more effectively and manage their execution more efficiently. For example, if an IT organization needs to wring more value out of technology investments, executives may want to migrate to a new virtualization platform or IaaS vendor that purports to offer significant cost savings. Based on industry metrics for these virtualization and cloud alternatives, the IT team can build detailed models, including specific performance metrics and optimally sized VM template configurations, to run a detailed cost comparison, identify the most cost-effective alternative and ultimately give senior leadership ample financial justification for proposing the move.
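A toy version of that cost comparison, assuming a sizing model has already produced the VM counts each candidate platform would need and that per-VM monthly prices are known (all platform names and figures here are invented for illustration):

```python
def cheapest_platform(vm_counts, price_per_vm_month):
    """Rank candidate platforms by total monthly cost.

    vm_counts: platform -> number of VMs the sizing model calls for.
    price_per_vm_month: platform -> monthly unit cost per VM.
    """
    totals = {p: vm_counts[p] * price_per_vm_month[p] for p in vm_counts}
    best = min(totals, key=totals.get)
    return best, totals
```

Note that the VM counts can differ per platform: denser, better-sized VM templates on an alternative platform may need fewer instances, which is often where the real savings come from rather than the unit price alone.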
* Step 6: Implement Continuous Optimization and Improvement: It’s critical to institute ongoing processes not just for enhancing infrastructure capacity and application delivery, but also for optimizing modeling and forecasting efforts. For example, once a new deployment has been in production for some time, it’s important to compare its actual performance with the levels the models predicted, and determine where the predictions were on the mark, where they weren’t and why. The insights gathered can help managers recalibrate predictive models and foster more intelligent analysis and predictions across the organization’s spectrum of deployments and initiatives.
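A simple sketch of that calibration loop: compare predicted metrics against observed production values and derive a correction factor for the next forecasting cycle (the metric names and the plain-average correction scheme are illustrative assumptions, not a prescribed method):

```python
def calibration(predicted, observed):
    """Compare model predictions with production measurements.

    Returns per-metric error ratios (observed / predicted) plus a mean
    correction factor to apply in the next forecasting cycle.
    """
    ratios = {m: observed[m] / predicted[m] for m in predicted}
    correction = sum(ratios.values()) / len(ratios)
    return ratios, correction
```

A ratio near 1.0 means the model was on the mark; persistent ratios well above or below 1.0 signal where the model needs recalibrating before the next round of planning.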