We live in an age of instrumentation, where everything that can be measured is being measured so that it can be analyzed and acted upon, preferably in real time or near real time. This instrumentation and measurement is happening in the physical world as well as in the virtual world of IT.

For example, in the physical world, a solar energy company has instrumented all of its solar panels to provide remote monitoring and battery management. Usage information is collected from a customer's panels and sent via mobile networks to a database in the cloud. The data is analyzed, and the resulting information is used to configure and adapt each customer's system to extend battery life and control the product. If an abnormality or problem is detected, an alert can be sent to a service agent to mitigate the problem before it worsens. Thus, proactive customer service is enabled by real-time data coming from the solar energy system at a customer's installation.

In the IT world, events are measured to determine when to autoscale a system's virtual infrastructure. For example, a company might want to correlate a number of things taking place at once (visitors to a website, product lookups, purchase transactions, and so on) to determine when to burst cloud capacity for a short time to accommodate more sales or other activity.

The idea behind measuring everything is to become more data-driven as a business: to make better business decisions and take timely actions based on events, metrics, or other time-based data. This is happening across all industries as companies use their digital transformations to change the way they do business.

Databases and time-series data: what's required

Much of this data is time-series data, where it's important to stamp the precise time when an event occurs or a metric is measured.
The data can then be observed and analyzed over time to understand what changes are taking place within the system.

Time-series databases can grow quite large, depending on how many events or metrics they collect and store. Consider autonomous vehicles, which collect and evaluate an enormous number of data points every second to determine how the vehicle should operate.

A general-purpose database, such as Cassandra or MySQL, isn't well suited to time-series data. A database that is purpose-built for time-series data needs the following capabilities, which general-purpose databases lack.

The database needs to be able to ingest data in near real time. Some applications, like the one in the autonomous vehicle, could conceivably produce millions or even hundreds of millions of data points per second, and the database must keep up with that ingest rate.

You have to be able to query the database in real time if you want to use it to monitor and control things, and the queries have to be able to run continuously. With a general-purpose database, queries are batched rather than streaming.

Compression of the data is important, and it is relatively straightforward if the database is designed specifically for time-series data.

You have to be able to evict data as fast as you ingest it. Time-series data is often needed only for a specific period, such as a week or a month, after which it can be discarded. Ordinary databases aren't built to remove data that quickly.

And finally, you have to be able to "downsample" by removing some, but not all, of the data. Say you are taking in data points every millisecond. You need that data at high resolution for about a week. After that, you can discard much of it but keep some at a resolution of one data point per second.
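The downsampling step just described can be sketched in a few lines of Python. This is a toy illustration, not InfluxData's implementation; the function name and the (timestamp, value) pair format are assumptions made for the example.

```python
from collections import defaultdict

def downsample(points, bucket_seconds=1):
    """Collapse high-resolution (timestamp, value) points into one
    averaged point per time bucket (default: one point per second)."""
    buckets = defaultdict(list)
    for ts, value in points:
        # Assign each point to the bucket containing its timestamp.
        buckets[int(ts // bucket_seconds) * bucket_seconds].append(value)
    # Emit one point per bucket: bucket start time and mean value.
    return sorted((ts, sum(vs) / len(vs)) for ts, vs in buckets.items())

# Four millisecond-spaced readings collapse to two one-second points.
raw = [(0.001, 10.0), (0.500, 20.0), (1.250, 30.0), (1.750, 50.0)]
print(downsample(raw))  # [(0, 15.0), (1, 40.0)]
```

A production time-series database would run this kind of aggregation continuously against aging data rather than in batch, but the underlying trade, resolution for retention, is the same.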
In time-series data, high resolution is very important at first, and lower-resolution data is often fine for the longer term.

Open-source projects aimed at time-series data

The founder of InfluxData, Paul Dix, saw this unique need, and he built the InfluxData Platform specifically to accumulate, analyze, and act on time-series data. He started with an open-source project that contained InfluxDB, the core database. InfluxDB was a quick hit on GitHub among developers. After that, he raised funding and kicked off three more open-source projects to round out the InfluxData Platform:

Telegraf: a data collector that runs on things such as a network device, an application, a sensor, or a standalone server. It collects the data and sends it to the InfluxDB database. Open-source contributors have developed more than 160 Telegraf plug-ins to date.

Chronograf: a visualization engine that lets you graph, visualize, and perform ad hoc exploration of the data. You can chart the data in a dashboard as it comes into the database.

Kapacitor: a co-processor to the database that lets you act on the data. It has its own scripting language and its own capabilities, so you can plug in custom logic or user-defined functions. It can run on the back end to let you run machine learning algorithms against the data as it comes in. Kapacitor is a very powerful open-source project.

Known as the TICK stack (Telegraf, InfluxDB, Chronograf, Kapacitor), these four components make up a powerful and popular platform for working with time-series data. Everything is available as open-source software for developers. InfluxData offers a closed-source commercial version for production scenarios that require clustering, high availability, and strong security.

Everything is instrumented for measurement

The IoT world has an inherent need for the TICK stack.
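To make the collector-to-database flow concrete, here is a rough Python sketch of an agent that formats points in InfluxDB's line protocol and flushes them in batches. The `ToyCollector` class and its batch size are invented for illustration; only the line-protocol shape (`measurement,tags fields timestamp`) reflects the real InfluxDB write format.

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Format one point as InfluxDB line protocol:
    measurement,tag=value field=value timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

class ToyCollector:
    """Buffers points and flushes them in batches, as a real
    collector agent would before writing to the database."""
    def __init__(self, flush_size=2):
        self.buffer, self.flush_size, self.sent = [], flush_size, []

    def add(self, line):
        self.buffer.append(line)
        if len(self.buffer) >= self.flush_size:
            # A real agent would POST this batch to the database's
            # write endpoint; here we just record the flush.
            self.sent.append(list(self.buffer))
            self.buffer.clear()

c = ToyCollector()
c.add(to_line_protocol("cpu", {"host": "server01"}, {"usage": 87.2}, 1))
c.add(to_line_protocol("cpu", {"host": "server01"}, {"usage": 90.1}, 2))
print(c.sent[0][0])  # cpu,host=server01 usage=87.2 1
```

Batching like this is what lets a collector keep up with high ingest rates: the database sees a few large writes instead of millions of tiny ones.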
The physical world of the Internet of Things is full of sensors. Everything (our bodies, our clothes, healthcare devices, industrial plants, our homes, our cars, and more) is being instrumented for the measurement of time-series data. These sensors track pressure, temperature, speed, heart rate, volume, light, and much more, and quite often some action needs to be taken in response to changes in that data over time. For instance, a physical activity tracker tells you to slow your running pace to lower your heart rate. A car with a collision avoidance system automatically applies the brakes when the car approaches a stationary object. The sensors all around us continuously collect and monitor data to help us (or programs) make better decisions.

And in the IT world, the virtualization of our systems has created a strong use case for the InfluxData Platform. It started with virtual machines: instead of having one server, you have five. Then VLANs came along, so now there are multiple LANs talking to multiple VMs on one machine. Now we have containers, so perhaps one server runs six VMs and 40 containers. And each of those containers runs a set of microservices.

What has happened is that the whole software infrastructure has become ephemeral; everything is virtual, portable, temporary, up and down. Yet we still need a real-time view of what's happening within these systems. So the software is being instrumented to provide real-time situational data, or what's called observability. A time-series platform provides a system of record that captures all the metrics and events coming off the software and hardware infrastructure and stores them in one place.
Now it's possible to see what is happening with the infrastructure. If something concerning happens, there is an awareness of it, and the system has a record of it. Taking this a step further, it's possible to correlate events and metrics to understand why an SLA is or is not being met.

Instrumentation of everything is the way of the future, and a time-series database and associated tools, such as the InfluxData Platform, will be necessary to collect, analyze, and act on data while it is still meaningful.