IT has long known there is value in log data. However, the most significant value surfaces when organizations are able to make the shift from focusing on traditional forensic log analysis after an event to proactive baselining and alerting about troubling trends or unique activity uncovered in log data.
When organizations take a proactive approach, algorithms identify patterns of log activity and create a picture of what’s “normal” behavior. When log entries vary from that baseline, the opportunity exists to drill down to the relevant logs to see what changed and why. As a result, it’s possible to correlate polled performance metrics on networks, servers, applications, storage and more with the corresponding log details on device or application actions and changes in state.
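The baselining idea described above can be sketched in a few lines. This is a minimal illustration, not any particular product's algorithm: it assumes log volume has been aggregated into per-interval counts, keeps a rolling window as the "normal" baseline, and flags a new sample that deviates by more than a chosen number of standard deviations.

```python
from collections import deque
import statistics

def check_deviation(window, new_count, threshold=3.0):
    """Flag a log-volume sample that deviates from the rolling baseline.

    window: recent per-interval log counts (the learned "normal")
    new_count: the latest interval's count
    threshold: number of standard deviations considered anomalous
    """
    mean = statistics.mean(window)
    stdev = statistics.stdev(window)
    if stdev == 0:
        return new_count != mean
    return abs(new_count - mean) / stdev > threshold

# Rolling baseline of hourly error-log counts (illustrative data)
baseline = deque([102, 98, 110, 95, 105, 99, 101, 97], maxlen=24)

print(check_deviation(baseline, 104))  # within normal variation -> False
print(check_deviation(baseline, 450))  # sudden spike -> True
```

A deviation flagged this way is the cue to drill down into the underlying log entries and correlate them with polled performance metrics for the same interval.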
There are quite a few instances where taking a fresh approach to log data can make a meaningful difference for an organization. For instance, proactively utilizing log data can assist in managing unexpected changes, as well as a wealth of deliberately planned changes to infrastructure such as a bug fix, a server upgrade or OS update, a new application or a cluster reconfiguration.
Essentially, log data becomes a way to measure and manage these changes; to confirm that a change has achieved its performance objectives; or to identify how a change may have triggered a cascade of unexpected issues. And, when combining log data with performance metrics, users can accurately forecast future growth in network activity and usage. Those projections can then become the basis for network changes and upgrades to handle that growth.
Log data also makes it possible to capture user activity at a granular level. This information can establish behavioral baselines for average and peak numbers of users, and performance metrics then reveal how much of the network's resources and processing each level of activity consumes. This combination of capabilities makes it possible to do things like deconstruct online shopping activities once users press the checkout button. Operations teams can see how long the transaction takes and measure that against the backend CPU load and other metrics. By bringing together detailed, event-based log data with overall performance metrics, they can also see the weight a single customer puts on the infrastructure and on the overall health of an application.
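The checkout scenario above comes down to joining event-level log data with polled metrics by time. The sketch below assumes hypothetical data shapes (checkout events parsed from application logs as timestamp/duration pairs, CPU utilization as time-sorted samples from a monitoring poller) and simply matches each event to the nearest metric sample:

```python
from bisect import bisect_left
from datetime import datetime

# Hypothetical checkout events parsed from application logs: (timestamp, duration_ms)
checkouts = [
    (datetime(2024, 5, 1, 12, 0, 3), 420),
    (datetime(2024, 5, 1, 12, 5, 41), 1850),
]

# Polled CPU samples from monitoring: (timestamp, cpu_percent), sorted by time
cpu_samples = [
    (datetime(2024, 5, 1, 12, 0, 0), 38),
    (datetime(2024, 5, 1, 12, 5, 0), 91),
    (datetime(2024, 5, 1, 12, 10, 0), 45),
]

def nearest_sample(samples, ts):
    """Return the metric sample closest in time to a log event."""
    times = [t for t, _ in samples]
    i = bisect_left(times, ts)
    candidates = samples[max(0, i - 1):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - ts))

for ts, duration in checkouts:
    _, cpu = nearest_sample(cpu_samples, ts)
    print(f"{ts.isoformat()} checkout {duration} ms at {cpu}% CPU")
```

Even a join this simple makes the correlation visible: the slow 1850 ms checkout lines up with the 91% CPU sample, pointing investigation at backend load rather than, say, the network.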
Of course, there are many other ways to derive actionable insight from log data. For some valuable ideas, check out this whitepaper on 7 Ways to Use Log Data for Proactive Performance Monitoring.