From time to time, I like to write about something a bit more futuristic, something that is not yet common enough to get into the Cisco cert track. And because I was headed to Interop this week, the folks at Network World suggested I take a closer look at a new initiative called OpenFlow. Today I'll give a bit of background, and once I've been to the OpenFlow lab and the vendor booths, and understand it all better, I'll give you an update.
Before the show, I spoke with Glenn Evans, Lead Network Engineer, InteropNet. Besides OpenFlow, we discussed a bit of the history of Interop, and InteropNet in particular, and I found it interesting, so while we're here...
Interop started in the late 1980s. That predates 10BaseT and LAN switches, in a world where most companies did not yet own routers. Yep, that far back in technology years. When Interop started, part of the motivation - and even the name - came from the idea that the computer technology world, and particularly the network part, needed to have interoperable parts. Back in the 1970s and 1980s, most companies used networking gear from one vendor, with that gear implementing that vendor's proprietary protocol stack (e.g., IBM with SNA, DEC with DECnet). But that era was coming to a close with the emergence of TCP/IP.
As part of the show, Interop built InteropNet. The vendors showed up, set up their gear in their booths, and connected to the InteropNet. At Interop, vendors could show off what worked with other gear, see what didn't, test, get better at interoperability, and help build a world where all the pieces work together.
Part of the marketing copy for Interop's web site mentioned something about InteropNet being a part of the process to create useful networking standards. I asked Glenn for some examples: TCP/IP, OSPF, and MPLS. Essentially, in the early years, InteropNet was really a "plugfest" - everyone plugged in to see what happened, and we all improved together.
Today, InteropNet is essentially the production network for the show, available to anyone who wants to get out to the Internet or between booths at the show. But this year's OpenFlow test lab at the show is a throwback to the old InteropNet plugfest, where the vendors playing in the OpenFlow game will connect and show off OpenFlow technologies.
To understand the big ideas with OpenFlow, first think for a moment about how layer 2 switches work. (We could use routers as well, but let's stick with switches.) The switches each use Spanning Tree Protocol (STP). The net result? Each switch chooses whether to forward or block on each interface. Collectively, that STP topology defines the one and only path through the layer 2 domain, typically per VLAN. So, from one perspective, STP running on the collective switches chooses which paths through the switches can be used and which cannot.
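To make that concrete, here's a rough sketch of STP's net effect. This is not the real 802.1D algorithm (which elects a root bridge by bridge ID and compares path costs); it just shows the end result STP computes: a spanning tree over the switch topology, with redundant links left blocked so exactly one active path remains between any two switches.

```python
# Rough sketch (NOT the real 802.1D algorithm): STP's net effect is to
# compute a spanning tree over the switch topology, keeping some links
# forwarding and blocking the redundant ones.

from collections import deque

def spanning_tree_links(links, root):
    """links: list of (switch_a, switch_b) pairs.
    Returns the set of links kept forwarding (a BFS tree from `root`)."""
    neighbors = {}
    for a, b in links:
        neighbors.setdefault(a, []).append(b)
        neighbors.setdefault(b, []).append(a)
    forwarding, seen, queue = set(), {root}, deque([root])
    while queue:
        sw = queue.popleft()
        for nbr in neighbors.get(sw, []):
            if nbr not in seen:
                seen.add(nbr)
                forwarding.add((sw, nbr))
                queue.append(nbr)
    return forwarding

# A triangle of three switches: the tree keeps two links and leaves
# the third (redundant) link blocked.
tree = spanning_tree_links([("A", "B"), ("B", "C"), ("A", "C")], root="A")
print(len(tree))  # 2 links forwarding, 1 blocked
```

The real protocol reaches the same kind of result in a distributed way, with the switches exchanging BPDUs rather than any one node seeing the whole topology.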
And we use the term control plane to refer to that work, because it controls the forwarding path.
When the switches forward frames, they do not have to send a bunch of control plane STP messages before forwarding each frame. Instead, the switches use the MAC address table and some basic forwarding logic: match the frame's destination MAC address to the table, and the table lists the correct outgoing interface.
Today, the actual forwarding processing occurs on ASICs, purpose built to do low-latency/high-volume frame forwarding. These ASICs rely on the MAC address table, or some derivation of that table. The contents of that table depend on the work the control plane did: only STP forwarding interfaces are used to forward frames. For example, a switch learns a MAC table entry that lists F0/1 as an outgoing interface only if F0/1 is currently in an STP forwarding state.
So, the data plane does the frame forwarding, but the paths used by the data plane depend on the choices made by the control plane.
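The data plane logic described above is simple enough to sketch in a few lines. This is a conceptual model, not switch firmware (the real work happens in ASICs): learn source MACs as frames arrive, forward by destination MAC, and flood when the destination is unknown. Note how the control plane's choices constrain it - only STP-forwarding ports ever appear in the table.

```python
# Conceptual sketch (not real switch firmware) of layer 2 data plane
# forwarding: match the destination MAC against a table learned from
# source MACs, and flood on a table miss.

FLOOD = "flood"  # unknown destination: send out all forwarding ports

class L2DataPlane:
    def __init__(self, forwarding_ports):
        # Only ports the control plane (STP) left in a forwarding
        # state may be learned into the table or used to forward.
        self.forwarding_ports = set(forwarding_ports)
        self.mac_table = {}  # destination MAC -> outgoing port

    def learn(self, src_mac, in_port):
        if in_port in self.forwarding_ports:
            self.mac_table[src_mac] = in_port

    def forward(self, dst_mac, in_port):
        out = self.mac_table.get(dst_mac)
        if out is None:
            return FLOOD
        # Never send a frame back out the port it arrived on.
        return out if out != in_port else None

sw = L2DataPlane(forwarding_ports=["F0/1", "F0/2", "F0/3"])
sw.learn("aa:aa:aa:aa:aa:aa", "F0/1")
print(sw.forward("aa:aa:aa:aa:aa:aa", "F0/2"))  # F0/1 (known MAC)
print(sw.forward("bb:bb:bb:bb:bb:bb", "F0/2"))  # flood (unknown MAC)
```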
The high-speed, high-volume data plane forwarding has to occur on the switches. However, the relatively slow control plane functions (which happen over tens of seconds, rather than tens of microseconds) could be done elsewhere, on some other device; OpenFlow does just that.
OpenFlow takes the control plane function and moves it to a server - not to make another place for existing control plane logic to be done, but as a place to develop new control plane logic and concepts. Along with that, OpenFlow creates protocols with which the server talks to each switch. The server performs control plane logic, and uses these protocol messages to program the forwarding tables on the switches, to prepare the data plane to do its usual job.
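The split described above can be modeled in a few lines. This sketch only illustrates the idea - the real OpenFlow protocol defines binary messages (such as FLOW_MOD) sent over a TCP connection between controller and switch - but the shape is the same: the controller computes forwarding decisions with whatever logic it likes, and programs match-to-action entries into each switch's table; the switch just matches frames against those entries.

```python
# Conceptual sketch of the OpenFlow split: a central controller runs
# the control plane and pushes match -> action entries into each
# switch's flow table. The real protocol carries these as binary
# FLOW_MOD messages over TCP; here we just call a method.

class SwitchFlowTable:
    """The data plane side: a table the controller programs remotely."""
    def __init__(self, name):
        self.name = name
        self.flows = []  # list of (match_dict, action) entries

    def install_flow(self, match, action):
        # In real OpenFlow, this arrives as a FLOW_MOD message.
        self.flows.append((match, action))

    def handle_frame(self, frame):
        for match, action in self.flows:
            if all(frame.get(k) == v for k, v in match.items()):
                return action
        # Table miss: the switch punts the frame to the controller,
        # which decides and (usually) installs a new flow entry.
        return "send-to-controller"

# The controller computes paths with whatever logic the researcher
# wrote (here, a trivial hard-coded rule) and programs the switch.
sw1 = SwitchFlowTable("sw1")
sw1.install_flow({"dst_mac": "aa:aa:aa:aa:aa:aa"}, "output:F0/1")

print(sw1.handle_frame({"dst_mac": "aa:aa:aa:aa:aa:aa"}))  # output:F0/1
print(sw1.handle_frame({"dst_mac": "cc:cc:cc:cc:cc:cc"}))  # send-to-controller
```

The names here (`SwitchFlowTable`, `install_flow`) are illustrative, not part of any OpenFlow API; the point is that the control plane logic lives entirely outside the switch.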
Why, do you ask? Well, the answers fall into two categories, coming next.
Researchers in universities want to do R&D about how to do networking better - how to create new rules for choosing the path through networks, how to get lots of devices to work together, how to do management better, anything that comes to mind. But to do that, and to afford the gear, and to do stress testing, they need gear, and they need access to the internals of the gear. If you want to toss STP out the door and choose the path through switches with totally different logic, and you want to experiment, you have options, but none are perfect. You can build a PC with lots of Ethernet cards so you can program your new protocols and rules, but it performs slowly compared to the ASICs in switches. Or, you beg a vendor to get access to how to manipulate their hardware, how to load new control plane code that you wrote for their devices, and so on. It's messy.
OpenFlow solves that problem for researchers. When a vendor comes out with a device that supports OpenFlow (I imagine we'll hear timelines at Interop from various vendors), such announcements probably mean this: it supports the ability for some external OpenFlow server to perform control plane functions, and program the data plane on the switch. Then, researchers can include that device in their research, create new control plane logic/code that runs on the server, but also do functional and stress testing by running traffic through the real gear from the vendor.
Note that implementing OpenFlow is not all or nothing - you could split off, say, a VLAN to do the testing, while everything else happens with the usual logic you'd expect on a switch. (We could even call it virtualization, slicing off a small part of the switches for development, but I think that term has enough usages already.)
In short, one answer to "why" is that OpenFlow should open up the world to more innovation, which should ultimately be good for us all. At least, that's the intent.
The second big reason for OpenFlow relates to how you deliver a new feature to market. Say a researcher comes up with a new layer 2 forwarding paradigm to replace STP. One path to market would be to sell it to a vendor, and let the vendor figure out how to implement it on their gear and how to feed it into a standards process, so that other vendors would eventually also support the same feature. Another would be to send it to the IETF, or IEEE, etc., to get it standardized, with vendors coming on board as they see fit. Regardless, in the end, these vendors would each add the feature to their products by putting the control plane function into the OS that runs on each box, just like they normally do today.
OpenFlow creates an option to deploy such new features using the same architecture used during development. With OpenFlow, a vendor could sell you that control plane feature as part of an OpenFlow server, and you run your production network with the control plane sitting on that server. If the existing network devices (switches, routers) already support OpenFlow, then the upgrade requires new software on the server, but no new switches and no switch software upgrades. (Well, that's what's possible, at least.) The server pushes the forwarding entries down to each individual switch, based on the cool new control plane logic.
I must admit, my first impression was that OpenFlow made perfect sense to me for the first goal. For the second part, well, I'm still having trouble figuring out if this is a useful revolutionary technology, or a solution that's looking for a problem, or if it's one of many solutions to well-known existing problems.
On the flip side, a lot of vendors seem to think OpenFlow matters. Nope, Cisco isn't on the list of OpenFlow lab sponsors at Interop, but I'll ask the folks at the Cisco booth to see where they sit. The list includes many everyday names, like Broadcom, Extreme, HP, and Juniper, among several others.
So, I go to the Interop show a skeptic; I'll write again at the end of the show and tell you whether I was turned into a believer, or remain a skeptic. More to come!
Before I go: here are some related links if you want to read more.
A discussion about OpenFlow from Packet Pushers.
Interop's main OpenFlow link: http://www.interop.com/lasvegas/it-expo/interopnet/openflow-lab/
OpenFlow organization: http://www.openflow.org/
Nice OpenFlow overview PDF:
Wendell Odom, CCIE No. 1624, has been a network guy for almost 30 years, working as a network engineer, SE, consultant, instructor, and author. He’s been writing and teaching about Cisco CCNA since its introduction in 1998, authoring all Cisco Press CCNA Exam Certification Guides. His primary job is to create Cisco certification content and tools. These cert tools include bestselling Cisco Press titles for CCNA, CCNP, and CCIE R/S; refer to this page for a complete list of titles. Wendell blogs here at Network World’s Cisco Subnet site, and keeps certification links and tools at his web site, www.certskills.com.
Wendell Odom's Cisco Cert Zone blog is also featured on the Cisco Learning Network. See it there, along with the blogs of other Cisco Experts.
Again, check out all of Wendell Odom's books on CertSkills.com.