Across all sorts of networks today – in enterprises large and small, service providers, government agencies, and beyond – there is a transformation towards a software-driven architecture. Where procedures were once manual, they are becoming more automated, driven perhaps foremost by the need for agility, while maintaining availability and protection.

That transformation is coming in a broad arc, not a single drop-in solution, and the industry is still figuring it all out; you'll see me write more about "softwarization" of the network here in the future. In this post, I'll discuss a burgeoning aspect of this transformation towards automation: a new technology called network verification, which helps predictively ensure that business goals match reality, even throughout frequent change. With this automated assurance of resilience and security intent, the network can become more agile. And network verification does that with math!

We'll see how. But first let's talk about why. A large network with thousands of devices and dozens of device models and protocols is a complex entity. In some ways, the new technologies recently introduced seem to make the network even more complicated: there are new layers of software-defined data centers, orchestration systems, and cloud networks.
One large enterprise I've spoken with has multiple virtual networks with three cloud providers, and these virtual networks need to be stitched together along with the on-premises network (which hasn't gone away) in overlays that achieve security, availability, and performance goals.

What methods are commonly used today to deal with all this complexity and make sure it achieves the goals?

How does your team verify the network is functioning as desired?*

- Manual checks: 69%
- Monitoring: 57%
- Homegrown scripts: 47%
- Commercial product: 12%
- We don't: 7%

The chart summarizes results from a survey of 315 IT professionals conducted by Dimensional Research in October 2016 (you can see the full results; disclosure: my employer Veriflow sponsored this survey). Manual checks like traceroutes, pings, and opening up a browser to try out a service don't provide strong assurance that the network-wide intent is met. Monitoring traffic and flows is useful, but only finds problems after the fact.

Given the challenging environments and the limitations of common tooling, it's perhaps not surprising that 59 percent of respondents in the survey said growing complexity in the network has led to more frequent outages.

Here, then, is the key question: Given that the organization's broader success depends on the network, how can we assure that the business intent – such as resilience or security of the network – was actually carried out through design and implementation in a complex environment? And even if initially implemented correctly, is the intent being met all the time?
For example, the organization may desire to verify that:

- All critical services in the data center are available to remote sites, across multiple paths.
- A segment of the network that needs to maintain regulatory compliance, spanning a hybrid cloud and on-premises databases, is fully isolated.

Using math to predict the future

Within just the past several years, a new technology area has been developed that can answer questions about complex networks with mathematical rigor.

The technology is inspired by an area of computer science called formal verification. The idea of formal verification is that rather than just sampling how a system responds in a few test cases, we can prove that the entire system always achieves a desired property.

That's not easy, of course. Formal verification of the design of a system involves reasoning about all the different inputs that system might possibly receive, and all the ways it could respond, to determine whether a property is always satisfied. But with good technology applied to appropriate domains, it's achievable: for example, IBM and other companies formally verify hardware before it's fabbed into silicon, and NASA verifies its Mars rover flight software to find concurrency flaws.

Can we formally verify a network? In a sense, the network is like software: it takes inputs (packets injected at certain locations, and device configurations), executes actions step by step based on those inputs (using forwarding rules like "if the packet's IP destination lies in 10.2.1.0/24 then send it out port 24"), and finally produces results (the packet is dropped, modified, forwarded along a certain path, or delivered to a certain port).
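To make the network-as-software analogy concrete, here is a minimal Python sketch of step-by-step forwarding: each device applies a longest-prefix-match lookup and either hands the packet to a next hop or takes a terminal action. The device names, prefixes, and rule format are all hypothetical, chosen only to illustrate the idea.

```python
import ipaddress

# Toy per-device forwarding tables: each device maps destination prefixes
# to a next-hop device or a terminal action. All names are hypothetical.
forwarding = {
    "edge1": [("10.2.1.0/24", "core1"), ("0.0.0.0/0", "drop")],
    "core1": [("10.2.1.0/24", "dc-fw"), ("0.0.0.0/0", "drop")],
    "dc-fw": [("10.2.1.0/24", "deliver"), ("0.0.0.0/0", "drop")],
}

def forward(device, dst_ip, path=None):
    """Follow longest-prefix-match hops until the packet is dropped or delivered."""
    path = path or [device]
    dst = ipaddress.ip_address(dst_ip)
    # Keep only rules whose prefix contains the destination, then pick the
    # longest match, mimicking a router's lookup.
    matches = [(ipaddress.ip_network(p), hop)
               for p, hop in forwarding[device]
               if dst in ipaddress.ip_network(p)]
    _, hop = max(matches, key=lambda m: m[0].prefixlen)
    if hop in ("drop", "deliver"):
        return hop, path
    return forward(hop, dst_ip, path + [hop])

print(forward("edge1", "10.2.1.5"))   # ('deliver', ['edge1', 'core1', 'dc-fw'])
print(forward("edge1", "192.0.2.9"))  # ('drop', ['edge1'])
```

A packet's fate here is a pure function of the injected input and the device tables, which is exactly what makes the system amenable to exhaustive analysis.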
If we can verify software, why not verify a network?

Network verification research was pioneered in academic research labs, including early work at Carnegie Mellon University, AT&T Labs, the University of Illinois at Urbana-Champaign (with myself and colleagues), Stanford University, and Microsoft Research. Inspired by formal verification, this line of research developed new techniques for the networking domain. The technology first builds a formal model of the network which represents all ways that data can flow through the network. Since the network depends on many devices working together, the verifier's model has to incorporate many kinds of gear – routers, firewalls, load balancers, virtual networks in the cloud, and so on – and many vendors, models, and protocols, in order to build a model of the whole network as one system. Next, the technology uses the model to verify that all possible data flow behavior matches a property of interest.

The value of this approach is to provide strong assurance even amid the complexity. "There's a point where your brain can't remember it all and can't keep it all straight," Ethan Banks commented in a recent Packet Pushers podcast. "You might be able to go 'well, if this circuit goes down, traffic's going to converge like this,' and you might be right, but what takes the guesswork out of it ... is that model that looks at exactly what the state of the network is."

Take the examples of intent previously mentioned, involving availability and segmentation. A network engineer, while considering a change to an access control list (ACL), may wish to have the verifier check whether a certain list of critical services would still be available to every remote site after the change, and whether the secure segment of the data center is still fully isolated.
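To show what "checking all possible behavior" means, rather than the approach any particular product takes, here is a toy Python sketch with made-up prefixes. It tests a proposed ACL change against an isolation property by examining every destination address in scope, not just a handful of pings:

```python
import ipaddress

# Hypothetical intent: the regulated segment 10.9.8.0/24 must stay isolated.
# A proposed ACL change permits the entire 10.9.0.0/16 -- too broad.
acl_permit = [ipaddress.ip_network("10.9.0.0/16")]   # change under review
SECURE = ipaddress.ip_network("10.9.8.0/24")         # must stay isolated
UNIVERSE = ipaddress.ip_network("10.9.0.0/16")       # all headers we reason about

def firewall_allows(dst):
    """True if any permit rule in the proposed ACL covers the destination."""
    return any(dst in net for net in acl_permit)

def isolation_violations():
    """Check EVERY destination address in the universe -- not a sample of
    test pings -- and report any permitted address inside the secure segment."""
    return [ip for ip in UNIVERSE.hosts()
            if firewall_allows(ip) and ip in SECURE]

bad = isolation_violations()
print(f"{len(bad)} permitted addresses breach isolation")  # → 256
```

A real verifier derives its model automatically from actual device state and reasons about the full header space symbolically rather than by brute-force enumeration; the loop here exists only to make the "all inputs, not a few test cases" idea tangible.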
The engineer no longer has to wait and see what happens the morning after the change.

Expanding the scope of automation

We sometimes focus on automation of configuration of the network: orchestration systems, for example, help create configurations from templates and deploy changes across the network.

But as the network infrastructure becomes more complex, verification that the desired result is achieved is critical as well. Today, checking that the network is protected is typically reactive or manual. Network verification technology provides a path to automated, proactive verification.

The goal of that automated verification is ultimately to prevent outages and vulnerabilities in the infrastructure, which in turn reduces the risk associated with changes and therefore enables networking teams to evolve the network more quickly. As a result, continuously verifying the network is a valuable first step before deploying other kinds of automation and orchestration systems. In a future post, I'll discuss how network verification helps achieve those goals in practical use cases, and how it compares with traffic monitoring.