Datadog was founded towards the end of 2010, inspired by its founders' experience as long-time engineers. That experience included a move to agile processes and the adoption of cloud-based infrastructure. While those changes enabled great outcomes for the applications they were working on, they also had problematic side effects: the magnitude of the problems around managing those services, applications, and infrastructure demanded a new approach to management and analytics.
That is what the team embarked upon: building a new type of monitoring solution - one that would bring together data from servers, databases, applications, tools, and services to present a unified view of applications running at scale in the cloud. Fast forward to today and Datadog has seen success, counting companies such as Airbnb, Netflix, EA, Spotify, Warner Bros. Games, and AdRoll as customers. Datadog integrates with around 100 technologies commonly used in modern applications and processes hundreds of billions of records per day for its customers.
That success has seen it raise a truckload of money - well over $50 million across its funding rounds thus far. It is adding to that total today with an oversubscribed $95 million Series D round led by Iconiq Capital, with participation from existing investors Index Ventures, OpenView Ventures, Amplify Partners, Contour Ventures, and other equity holders.
Alongside the funding, Datadog co-founder and CEO Olivier Pomel sent me some thoughts via email about why he thinks growth in the company's user base has been so strong in recent years. As an aside, Datadog isn't the only company riding the wave of new ways of doing IT - competitors such as the recently IPO'd New Relic are enjoying similar momentum.
Anyway, Pomel spoke of seeing their approach validated on a far broader scale than they had originally anticipated.
Pomel sees four dimensions that are growing rapidly, as detailed in the image below:
So, what does Pomel mean by these four different dimensions?
- Number of infrastructure units: This is what most people directly associate with scale. We have seen the number of “infrastructure units” involved in any production environment increase by orders of magnitude over the past five years. These infrastructure units used to be physical servers or fairly long-lived VMs, but are now increasingly made up of ephemeral cloud instances, containers, and microservices. Any company that was operating hundreds of servers in 2010 is easily managing thousands to tens of thousands of units today. In other words, enterprises have replaced a few fairly static things with a lot more moving pieces.
- Frequency of code and configuration changes: In the not-so-distant past, software teams used to ship products only once or a couple of times a year. Today, some of us are shipping code several times a day, as companies big and small have switched from Waterfall to Agile development processes. Multiply that by the large number of teams any enterprise comprises, and you get production environments that keep changing all the time.
- Number of different platforms, tools, or services involved in the stack: As an additional consequence of transitioning from Waterfall to Agile, companies have switched from having one centralized enterprise architecture group making all infrastructure choices ahead of time to empowering each team to make their own decisions—so that they can ship products every week or month and don’t have to block on centralized decision-making. The result is a considerably more diverse ecosystem, as different teams will pick different platforms and tools. This trend is compounded by the rise of open source and SaaS, which has drastically increased the number of components to choose from. In short, all enterprises are using a much broader set of technologies to build and run their applications today than they used to a few years ago.
- Number of engineers interacting with the infrastructure: This has probably been the biggest cultural change engineers have felt over the past few years. Where infrastructure used to be managed solely by the ops team—or, in larger enterprises, “shared services” groups—it is now touched by multiple teams spanning operations and development. As a result, the number of engineers interacting with the infrastructure has increased dramatically.
That’s it. Because of rapid changes across each of these four dimensions, the magnitude of the monitoring problem has changed drastically. All indicators are showing that 2016 will be a banner year for adoption of public and private clouds, and will usher in the era of Monitoring at Scale.
I'm a firm believer that enterprise IT will look hugely different in the future than it has in the past. The rise of mobile, cloud computing, collaboration, and the demand for agility all make IT move in ways that were unthinkable only a few years ago. All of this puts pressure on the IT organization to adopt new tools, processes, and infrastructure to enable that agility. As Pomel asserts, this in turn introduces severe management headaches for the organization, and it is these headaches that vendors such as Datadog and New Relic are trying to resolve.
Of course, this is just a first step, and the addition of flexible monitoring and analytics should be backed up with the development of automated solutions that remove the human element from as many IT management problems as possible. That said, faced with antiquated monitoring tools that are next to useless against modern applications, even a modest step into the future is worth taking. It would appear that Datadog is taking real advantage of this fact.
This article is published as part of the IDG Contributor Network.