
Network verification: predicting the future of complex networks

Nov 27, 2017 | 6 mins
Network Management Software | Network Monitoring | Networking

A new kind of automation uses math to help ensure business goals continuously match reality.


Across all sorts of networks today – in enterprises large and small, service providers, government agencies, and beyond – there is a transformation towards a software-driven architecture. Where procedures were once manual, they are becoming more automated, driven perhaps foremost by the need for agility, while maintaining availability and protection.

That transformation is coming in a broad arc, not a single drop-in solution, and the industry is still figuring it all out; you’ll see me write more about “softwarization” of the network here in the future. In this post, I’ll discuss a burgeoning aspect of this transformation towards automation: a new technology called network verification, which helps predictively ensure that business goals match reality, even throughout frequent change. With this automated assurance of resilience and security intent, the network can become more agile. And network verification does that with math!

We’ll see how. But first let’s talk about why. A large network with thousands of devices and dozens of device models and protocols is a complex entity. In some ways, the new technologies recently introduced seem to make the network even more complicated: there are new layers of software-defined data centers, orchestration systems, and cloud networks. One large enterprise I’ve spoken with has multiple virtual networks with three cloud providers, and these virtual networks need to be stitched together along with the on-premises network (which hasn’t gone away) in overlays that achieve security, availability, and performance goals.

What methods are commonly used today to deal with all this complexity and make sure it achieves the goals?

How does your team verify the network is functioning as desired?

  • Manual checks: 69%
  • Monitoring: 57%
  • Homegrown scripts: 47%
  • Commercial product: 12%
  • We don’t: 7%

The chart summarizes results from a survey of 315 IT professionals conducted by Dimensional Research in October 2016 (you can see the full results; disclosure: my employer Veriflow sponsored this survey). Manual checks like traceroutes, pings, and opening up a browser to try out a service don’t provide strong assurance that the network-wide intent is met. Monitoring traffic and flows is useful, but only finds problems after the fact.

Given the challenging environments and the limitations of common tooling, it’s perhaps not surprising that 59 percent of respondents in the survey said growing complexity in the network has led to more frequent outages.

Here, then, is the key question: Given that the organization’s broader success depends on the network, how can we assure that the business intent – such as resilience or security of the network – was actually carried out through design and implementation in a complex environment? And even if initially implemented correctly, is the intent being met all the time? For example, the organization may desire to verify that:

  • All critical services in the data center are available to remote sites, across multiple paths.
  • A segment of the network that needs to maintain regulatory compliance, spanning a hybrid cloud and on-premises databases, is fully isolated.

Using math to predict the future

Within just the past several years, a new technology area has been developed that can answer questions about complex networks with mathematical rigor.

The technology is inspired by an area of computer science called formal verification. The idea of formal verification is that rather than just sampling how a system responds in a few test cases, we can prove that the entire system always achieves a desired property.
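To make the contrast concrete, here is a minimal sketch (with a toy function standing in for the system under test) of the difference between sampling a few test cases and exhaustively verifying a property over the entire input space:

```python
import random

def clamp(x):
    """Toy system under test: clamp an 8-bit value into [10, 200]."""
    return max(10, min(200, x))

# Testing samples a handful of inputs and hopes they are representative.
samples = [random.randrange(256) for _ in range(5)]
tested_ok = all(10 <= clamp(x) <= 200 for x in samples)

# Verification covers the entire (finite) input space, so a passing
# result is a proof that the property always holds for this domain.
verified = all(10 <= clamp(x) <= 200 for x in range(256))

print(tested_ok, verified)
```

Real systems have input spaces far too large to enumerate, which is why formal verification tools reason about inputs symbolically rather than one at a time; the exhaustive loop above is only to illustrate the goal.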

That’s not easy of course. Formal verification of the design of a system involves reasoning about all the different inputs that system might possibly receive, and all the ways it could respond, to determine whether a property is always satisfied. But with good technology applied to appropriate domains, it’s achievable: for example, IBM and other companies formally verify hardware before it’s fabbed into silicon, and NASA verifies their Mars rover flight software to find concurrency flaws.

Can we formally verify a network? In a sense, the network is like software: it takes inputs (packets injected at certain locations, and device configurations), executes actions step by step based on those inputs (using forwarding rules like “if the packet’s IP destination lies within this prefix, then send it out port 24”), and finally produces results (the packet is dropped, modified, forwarded along a certain path, or delivered to a certain port). If we can verify software, why not verify a network?
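That step-by-step execution can be sketched in a few lines. The two-router topology, prefixes, and device names below are hypothetical, purely for illustration; each device’s table maps destination prefixes to a next hop, with “deliver” and “drop” as terminal actions:

```python
import ipaddress

# Hypothetical two-router network: each device maps destination
# prefixes to a next hop; "deliver" and "drop" are terminal actions.
FIB = {
    "r1": [(ipaddress.ip_network("10.0.0.0/8"), "r2"),
           (ipaddress.ip_network("0.0.0.0/0"), "drop")],
    "r2": [(ipaddress.ip_network("10.1.0.0/16"), "deliver"),
           (ipaddress.ip_network("0.0.0.0/0"), "drop")],
}

def forward(device, dst):
    """Longest-prefix match, as a router's forwarding table would do."""
    matches = [(net, hop) for net, hop in FIB[device] if dst in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

def trace(start, dst):
    """Step a packet through the network until a terminal action."""
    hop, path = start, [start]
    while hop not in ("deliver", "drop"):
        hop = forward(hop, dst)
        path.append(hop)
    return path

print(trace("r1", ipaddress.ip_address("10.1.2.3")))   # ['r1', 'r2', 'deliver']
print(trace("r1", ipaddress.ip_address("192.0.2.1")))  # ['r1', 'drop']
```

Given this “program,” the natural next question is whether we can reason about every packet it could ever process, not just the two traced here.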

Network verification research was pioneered in academic research labs, including early work at Carnegie Mellon University, AT&T Labs, the University of Illinois at Urbana-Champaign (with myself and colleagues), Stanford University, and Microsoft Research. Inspired by formal verification, this line of research developed new techniques for the networking domain. The technology first builds a formal model of the network which represents all ways that data can flow through the network. Since the network depends on many devices working together, the verifier’s model has to incorporate many kinds of gear – routers, firewalls, load balancers, virtual networks in the cloud, and so on – and many vendors, models, and protocols, in order to build a model of the whole network as one system. Next, the technology uses the model to verify that all possible data flow behavior matches a property of interest.
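A key trick that makes “all possible data flow behavior” tractable is that forwarding behavior only changes at prefix boundaries, so the verifier need not enumerate all 2**32 destinations. The sketch below (a simplified representative-per-prefix approximation of the equivalence-class techniques real verifiers use, over a hypothetical two-router model with a deliberately buggy default route) checks a property across representatives of every flow:

```python
import ipaddress

# Hypothetical network-wide model: per-device prefix -> next-hop tables.
# Note the buggy default on r2, which points back at r1.
FIB = {
    "r1": {"10.0.0.0/8": "r2", "0.0.0.0/0": "drop"},
    "r2": {"10.1.0.0/16": "deliver", "0.0.0.0/0": "r1"},
}

def next_hop(device, dst):
    """Longest-prefix match against a device's table."""
    nets = [(ipaddress.ip_network(p), h) for p, h in FIB[device].items()
            if dst in ipaddress.ip_network(p)]
    return max(nets, key=lambda n: n[0].prefixlen)[1]

def terminates(dst, start="r1", limit=10):
    """Property of interest: every packet eventually reaches a terminal action."""
    hop = start
    for _ in range(limit):
        if hop in ("deliver", "drop"):
            return True
        hop = next_hop(hop, dst)
    return False  # still bouncing between devices: a forwarding loop

# One representative address per prefix stands in for every packet
# whose behavior that prefix governs.
reps = [ipaddress.ip_network(p)[0] for fib in FIB.values() for p in fib]
loops = sorted({str(d) for d in reps if not terminates(d)})
print(loops)  # ['10.0.0.0'] -- traffic in 10.0.0.0/8 (outside 10.1.0.0/16) loops
```

The verifier’s report pinpoints the class of traffic that violates the property, which is exactly the kind of answer an engineer needs before (or instead of) debugging a live outage.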

The value of this approach is to provide strong assurance even amid the complexity. “There’s a point where your brain can’t remember it all and can’t keep it all straight,” Ethan Banks commented in a recent Packet Pushers podcast. “You might be able to go ‘well, if this circuit goes down, traffic’s going to converge like this,’ and you might be right, but what takes the guesswork out of it … is that model that looks at exactly what the state of the network is.”

Take the examples of intent previously mentioned, involving availability and segmentation. A network engineer, while considering a change to an access control list (ACL), may wish to have the verifier check whether a certain list of critical services would still be available to every remote site after the change, and whether the secure segment of the data center is still fully isolated. The engineer no longer has to wait and see what happens the morning after the change.
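That pre-change check can be sketched directly. The ACL rules, prefixes, and service addresses below are invented for illustration; the idea is to verify, before deploying, that no critical service loses reachability under the proposed change:

```python
import ipaddress

def permits(acl, dst):
    """First-match ACL semantics, with an implicit deny at the end."""
    for action, prefix in acl:
        if dst in ipaddress.ip_network(prefix):
            return action == "permit"
    return False

# Hypothetical current ACL and a proposed change that tightens it.
current = [("permit", "10.20.0.0/16"), ("deny", "0.0.0.0/0")]
proposed = [("deny", "10.20.5.0/24"),          # new block for one subnet
            ("permit", "10.20.0.0/16"),
            ("deny", "0.0.0.0/0")]

# Critical services that must stay reachable from remote sites.
critical = ["10.20.1.10", "10.20.5.40"]

broken = [ip for ip in critical
          if permits(current, ipaddress.ip_address(ip))
          and not permits(proposed, ipaddress.ip_address(ip))]
print(broken)  # ['10.20.5.40'] -- the change would cut off a critical service
```

A real verifier checks this across every device and every path in the network model, but the workflow is the same: ask the question before the change, not the morning after.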

Expanding the scope of automation

We sometimes focus on automation of configuration of the network: orchestration systems, for example, help create configurations from templates and deploy changes across the network.

But as the network infrastructure becomes more complex, verifying that the desired result is achieved becomes critical as well. Today, checking that the network is protected is typically reactive or manual. Network verification technology provides a path to automated, proactive verification.

The goal of that automated verification is ultimately to prevent outages and vulnerabilities in the infrastructure, which in turn reduces the risk associated with changes and therefore enables networking teams to evolve the network more quickly. As a result, continuously verifying the network is a valuable first step before deploying other kinds of automation and orchestration systems. In a future post, I’ll discuss how network verification helps achieve those goals in practical use cases, and how it compares with traffic monitoring.


Brighten Godfrey is the co-founder and CTO at Veriflow. Brighten has made an impact in the technology world by conducting research in networked systems and algorithms for over a decade and co-inventing the key technology used at Silicon Valley-based Veriflow. As co-founder, he made Veriflow the first networking company to utilize formal verification to eliminate change-induced network outages and vulnerabilities.

Brighten’s work has developed architectures and systems for Internet routing, data center networking, high performance data transport, and network data plane verification. His work has also advanced theoretical analysis of network algorithms. He has co-authored more than 50 scientific publications, and many of those technologies have been deployed by hyperscale cloud computing providers. Not only has he made his mark in the technology world, but Brighten also shares his knowledge with aspiring computer scientists.

When he isn’t in Silicon Valley, Brighten can be found advising young researchers as Associate Professor of Computer Science at the University of Illinois at Urbana-Champaign. He is also the co-instructor of “Cloud Networking,” a popular Coursera course that allows students to learn remotely.

The opinions expressed in this blog are those of Brighten Godfrey and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.