It’s easy to understand the potential the Internet of Things (IoT) represents. A growing realm of seamlessly connected assets, capable of continuously enhancing operations through distributed intelligence and automation, could ultimately revolutionize how businesses function. Yet IoT introduces significant challenges as well. For instance, how does IT detect performance anomalies among tens of thousands of highly distributed sensors and connected devices?
No matter how you look at it, the numbers associated with IoT are monstrous. For instance, analysts conservatively expect the number of connected devices, including non-hub devices such as sensor nodes and accessories, to more than double by 2020. In addition to the challenge posed by the volume of data these devices generate and the additional load on the digital infrastructure, network and IT teams will also have to monitor the performance of the devices themselves. This will force them to consider protocols other than old favorites such as SNMP. When gathering performance metrics from IoT devices, organizations need to look for a monitoring solution that takes a data-agnostic approach to collection.
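To make the idea of data-agnostic collection concrete, here is a minimal sketch in Python. It assumes two hypothetical sources, an SNMP-style OID/value poll and a JSON-style sensor payload (as an MQTT device might publish), and normalizes both into one common metric record. The device names, OID map, and field names are illustrative, not part of any real product.

```python
import time
from dataclasses import dataclass

@dataclass
class Metric:
    """A protocol-neutral metric record shared by all collectors."""
    device_id: str
    name: str
    value: float
    timestamp: float

def normalize_snmp(device_id, oid_values):
    """Map raw SNMP OID/value pairs into common Metric records."""
    # Illustrative OID-to-name map; a real collector would load a MIB.
    oid_names = {"1.3.6.1.2.1.2.2.1.10": "ifInOctets"}
    now = time.time()
    return [Metric(device_id, oid_names.get(oid, oid), float(v), now)
            for oid, v in oid_values.items()]

def normalize_json(device_id, payload):
    """Map a JSON-style sensor payload into the same Metric records."""
    now = time.time()
    return [Metric(device_id, k, float(v), now) for k, v in payload.items()]

# Both sources land in one schema, so the storage and alerting layers
# never need to know which protocol produced the data.
metrics = (normalize_snmp("router-1", {"1.3.6.1.2.1.2.2.1.10": 982134})
           + normalize_json("sensor-17", {"temperature_c": 21.4}))
```

The point of the sketch is the shape, not the protocols: any new device type only needs one small normalizer, and everything downstream stays unchanged.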
With so many connected devices, the aggregate data increase will place significant pressure on the network. In the past, many organizations have side-stepped scalability challenges by choosing to monitor “only the important” parts of the network, concentrating on the central parts of the network and ignoring the edge. With the profusion of IoT, and the criticality of the data it creates, this is no longer an option. If IT’s performance monitoring platform can’t intuitively and cost-effectively scale with this increase in data, organizations risk creating a dangerous visibility gap.
The sensible approach is to build upon a performance monitoring platform engineered for speed at scale. This means abandoning products built around a monolithic, centralized database architecture that does not scale out smoothly or horizontally, and may fold under the weight of massive data. Needless to say, a modern organization can’t function without access to near real-time information about the health of its infrastructure. By keeping performance data distributed, however, IT is better equipped (one might say “by design”) to handle the massive data generated by the IoT.
The nature of IoT traffic also demands closer scrutiny, especially if the goal is to understand the actual activity transpiring at any given time. In this new environment, it may be impossible to troubleshoot a performance issue when looking at five-minute, or even one-minute, snapshots of your infrastructure. Higher-granularity data may need to be generated on demand to provide the necessary visibility.
The solution is to embrace a monitoring platform capable of high-frequency polling, down to one-second intervals. While you might not always ratchet up your polling cycles to such granularity, you’ll need to do so when investigating performance issues in the IoT world.
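The on-demand pattern described above can be sketched in a few lines of Python: poll at a coarse default interval normally, then temporarily switch to a tight, bounded burst while investigating. The `poll_device` function here is a placeholder for whatever fetch the platform actually performs (SNMP get, HTTP call, and so on); the device name and intervals are assumptions for illustration.

```python
import time

def poll_device(device_id):
    # Placeholder for a real metric fetch (SNMP get, HTTP request, etc.).
    return {"device": device_id, "value": 42.0, "ts": time.time()}

def burst_poll(device_id, interval_s=1.0, duration_s=30.0):
    """Temporarily poll a device at high frequency for a bounded window.

    Returns the collected samples so an investigator can inspect
    second-by-second behavior, then fall back to normal polling.
    """
    samples = []
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        samples.append(poll_device(device_id))
        time.sleep(interval_s)
    return samples
```

Bounding the burst with a deadline matters: second-level polling across thousands of devices is exactly the load the rest of the article warns about, so the high-granularity mode should be targeted and temporary.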
For more information, check out this whitepaper on How the IoT Will Disrupt Your Performance Monitoring Strategy.