This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
Overprovisioning has been the go-to approach for ensuring infrastructure and application performance. But when performance degradations and unplanned outages occur, even the most experienced teams move into “react-and-guess” mode.
Where to start? Every layer of the infrastructure stack comes with its own possible issues, and tracking down the culprit takes time. And with IT infrastructures growing at an exponential pace and workloads moving to the cloud, the typical approach of overprovisioning and reacting-and-guessing is no longer viable.
There are three steps IT professionals can take to prevent emerging issues from becoming recurring problems that impact performance and productivity:
* Understand the system’s history, in addition to its present. Understanding how an infrastructure arrived at its current state will provide a clearer picture of what has been integrated throughout the system and the purpose of each component. Each part was put in place for a specific reason. Every application, whether on-premises or hosted, comes with its own dependencies. Piecing together the history of the IT infrastructure will help you understand exactly what you are dealing with and why.
It will also give you an idea of the problems the system experienced in the past, which will help you detect issues more quickly. Auditing critical IT infrastructure is another process that helps teams benchmark systems and identify areas that may call for upgrades or more efficient processes. Knowing precisely which application workloads an infrastructure is supporting helps you detect wasteful assets and plan for the necessary size and scale of future deployments.
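As a concrete illustration of the auditing idea above, the sketch below flags assets whose average utilization stays below a threshold, marking them as consolidation candidates. The function name, thresholds, and inventory figures are all hypothetical, not drawn from any particular monitoring product.

```python
# Hypothetical utilization audit: flag assets whose mean utilization
# stays below a threshold, marking them as candidates for consolidation
# or reclamation. All names and figures are illustrative.

def find_underutilized(assets, threshold=0.20):
    """Return names of assets whose mean utilization is below `threshold`.

    `assets` maps an asset name to a list of utilization samples (0.0-1.0).
    """
    flagged = []
    for name, samples in assets.items():
        mean_util = sum(samples) / len(samples)
        if mean_util < threshold:
            flagged.append(name)
    return sorted(flagged)

inventory = {
    "db-primary":   [0.62, 0.71, 0.58, 0.66],  # busy database host
    "web-frontend": [0.35, 0.41, 0.38, 0.33],  # moderately loaded
    "legacy-batch": [0.04, 0.06, 0.05, 0.03],  # nearly idle: wasteful
}

print(find_underutilized(inventory))  # ['legacy-batch']
```

A real audit would pull these samples from the monitoring system over weeks, not four data points, but the triage logic is the same.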
* Focus on the end user, in both the near and long term. Guaranteed availability isn’t just about alleviating IT headaches. Frequent delays are frustrating for users, and in the end, user issues matter far more than internal frustrations. Overprovisioning is no longer tenable given explosive infrastructure growth, and there is a clear mandate to maximize existing assets.
What’s more, while overprovisioning does take into account workload fluctuations to ensure enough capacity to deliver a good end-user experience, it ties up resources that could be used for valuable new applications, products or services. Understanding traffic patterns, in terms of behavior during peak periods and the tasks that need to be completed during those high-demand times, will help you provision appropriately and ensure all critical workloads function properly.
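The provisioning approach described above can be sketched as sizing for a high percentile of observed peak demand plus headroom, rather than a blanket overprovisioned maximum. The percentile, headroom factor, and demand figures below are illustrative assumptions, not a recommendation for any specific values.

```python
# Illustrative capacity sizing: provision for a high percentile of
# observed demand plus a safety margin, instead of overprovisioning
# for a worst case that rarely occurs.

def percentile(samples, p):
    """Nearest-rank percentile of `samples` (p in 0..100)."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[rank]

def required_capacity(demand_samples, p=95, headroom=1.2):
    """Size capacity at the p-th percentile of demand, times headroom."""
    return percentile(demand_samples, p) * headroom

# Hypothetical hourly request rates across a week, with a midday peak.
hourly_demand = [120, 150, 400, 950, 1000, 620, 300] * 7

print(required_capacity(hourly_demand))  # 1200.0
```

The point is that capacity tied to measured peak-period behavior frees the slack that pure overprovisioning would have locked up.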
* Use performance monitoring solutions that integrate with disparate environments. Assessing performance requires a solution that analyzes system-wide health, utilization and performance to identify issues that may increase latency. There are a number of technologies available that attempt to solve this puzzle, such as enterprise systems management (ESM) and network performance management (NPM) tools. However, these monitoring platforms were developed before data centers became as virtualized and heterogeneous as they are today.
With disparate systems working together in enterprise environments, an understanding of the way these solutions and systems collaborate is critical. Vendor-neutral IT monitoring and management technologies enable teams to measure the outputs and activities of cloud, virtual and on-premises applications from different vendors.
This integration of performance standards should also be reflected in a company’s service-level agreements (SLAs); as each component in an IT infrastructure has come to overlap so heavily with the rest, isolating each element in siloed SLAs no longer makes sense. Rely on SLAs that look at your infrastructure as the holistic entity it is, and focus on performance, not just availability.
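A minimal sketch of the holistic, performance-focused SLA idea above: instead of tracking availability per silo, sum a worst-case latency per component along the request path and compare the end-to-end total against a single target. The component names, sample values and 250 ms target are invented for illustration.

```python
# Hedged sketch of a performance-focused SLA check across an entire
# request path, rather than siloed per-component availability SLAs.
# Component names, samples and the target are hypothetical.

def end_to_end_p99_ok(component_latencies_ms, target_ms=250.0):
    """Sum an approximate p99 latency per component along the request
    path and compare the end-to-end total against one SLA target."""
    def p99(samples):
        ordered = sorted(samples)
        return ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))]
    total = sum(p99(s) for s in component_latencies_ms.values())
    return total <= target_ms, total

path = {
    "load-balancer": [2, 3, 2, 4, 3],
    "app-tier":      [40, 55, 48, 60, 52],
    "database":      [80, 95, 110, 90, 100],
}

ok, total = end_to_end_p99_ok(path)
print(ok, total)  # True 174
```

Each component here could individually meet a siloed availability SLA while the user-visible total still breached the target, which is the argument for measuring the path as a whole.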
The pace of IT demands agility, accuracy and answers that drive optimal performance at all times, but guaranteeing performance isn’t easy. It requires a vendor-neutral, unbiased understanding of system-wide performance, and accurate analytics to support and inform immediate action. All of this starts with a better comprehension of IT infrastructure assets, which only becomes more crucial as additional investment of financial and personnel resources becomes necessary.
Answers are the silver bullet in the modern IT landscape, and they’re not only about the data stored in an application infrastructure, but how that data is correlated and analyzed to deliver value. All of the knowledge gained from an infrastructure’s operation is significant to the ultimate success of the business, and the companies that take proactive steps toward gaining those insights will be the ones that find themselves ahead of the curve.
Gentry is the vice president of marketing and alliances at Virtual Instruments. He has more than 18 years of experience in marketing, sales and sales engineering, and has established his expertise in the open systems and storage ecosystems.