Technology is always evolving, but two significant changes have recently emerged in the world of networking. First, networking is moving to software that can run on commodity off-the-shelf hardware. Second, we are witnessing the adoption of many open source technologies, lowering the barrier to entry for new product innovation and rapid market access.

Networking is the last bastion within IT to adopt open source. This has hurt the networking industry in the form of slow innovation and high costs. Every other element of IT has seen radical technology and cost-model changes over the past 10 years, yet IP networking has not changed much since the mid-1990s.

When I became aware of these trends, I decided to sit down with Sorell Slaymaker to analyze the evolution and determine how it will shape the market in the coming years.

The open development process

Open source refers to software built through an open development process, which has allowed computing functions to become virtually free. In the past, networking was expensive and licensing came at a high cost, and the software still had to run on proprietary hardware that was often under patent or trade-secret protection.

The main disadvantages of proprietary hardware are cost and vendor software-release lock-in. Major companies such as Facebook, AT&T, and Google are using open source software and commodity white box hardware at huge scale. This has slashed costs dramatically and broken open the barriers to innovation.

As software eats the world, agility is one of the great benefits: the speed of change is less inhibited by long product development cycles, and major new functionality can be delivered in days or months, not years.
BlackBerry is a great example of a company that did nothing wrong except run multi-year development cycles, and it still got eaten by Apple and Google.

The white box and the grey box

A white box is truly off-the-shelf gear, while a grey box takes off-the-shelf white box hardware and ensures it has, for example, specific drivers or a particular operating-system version so that it is optimized for, and supports, the vendor's software. Today, many vendors say they ship a white box when, in reality, it is a grey box.

With a grey box, we are back to "I have a specific box with a specific configuration." This keeps us from being totally free, and freedom is essentially the reason we want white box hardware and open source software in the first place.

When networking became software-based, the whole objective was the opportunity to run other software stacks on the same box. For example, you can run security, wide area network (WAN) optimization, and a whole bunch of other functions on the same box.

However, in a grey box environment, the specific drivers required, for example for networking, may inhibit other software functions that you might want to run on that stack. So it becomes a tradeoff, and a lot of testing needs to be performed to ensure there are no conflicts.

SD-WAN vendors and open source

Many SD-WAN vendors use open source as the foundation of their solution and then add functionality on top of that baseline. The major SD-WAN vendors did not start from zero code: a lot came from open source, and they then added utilities on top.

SD-WAN hit a sore spot of networking that needed attention: the WAN edge. One could argue that one of the reasons SD-WAN took off so quickly was the availability of open source.
It enabled vendors to leverage the available open source components and build their solutions on top of them.

For example, consider FRRouting (FRR), a fork of the Quagga routing suite that many SD-WAN vendors use. FRR is an open source IP routing protocol suite for Linux and Unix platforms that includes protocol daemons for BGP, IS-IS, LDP, OSPF, PIM, and RIP. It keeps growing with time; today it supports EVPN types 2, 3, and 5, and you can even pair it with a Cisco device running EIGRP.

There is a pool of over 60 SD-WAN vendors at the moment. These vendors don't each have 500 people writing code every day; they take open source software stacks and use them as the foundation of their solution. This allows rapid entrance into the SD-WAN market: new vendors can enter quickly and at low cost.

SD-WAN vendors and Cassandra

Today, many SD-WAN vendors use Cassandra as the database to store all their stats. Cassandra, licensed under Apache 2.0, is a free and open source, distributed, wide-column NoSQL database management system.

One issue some SD-WAN vendors found with Cassandra was that it consumed a lot of hardware resources and didn't scale very well. In a large network where every router generates 500 records per second, and where most SD-WAN vendors track all flows and flow stats, you can get bogged down managing all of that data.

A couple of SD-WAN vendors moved to a different NoSQL database management system that didn't consume as many hardware resources and that distributed and scaled much better.
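To put the flow-stats scaling problem in perspective, here is a minimal back-of-the-envelope sketch. The 500 records per second per router comes from the discussion above; the record size and retention period are illustrative assumptions, not vendor data.

```python
# Back-of-the-envelope estimate of flow-stat ingestion load.
# The 500 records/sec per router figure comes from the article;
# record size and retention days are illustrative assumptions.

def ingestion_load(routers, records_per_sec=500, bytes_per_record=200,
                   retention_days=30):
    """Return (writes/sec, stored bytes) for a flow-stat database."""
    writes_per_sec = routers * records_per_sec
    seconds = retention_days * 24 * 3600
    stored_bytes = writes_per_sec * bytes_per_record * seconds
    return writes_per_sec, stored_bytes

writes, stored = ingestion_load(routers=1000)
print(f"{writes:,} writes/sec")            # 500,000 writes/sec
print(f"{stored / 1e12:.0f} TB retained")  # 259 TB over 30 days
```

Even a modest 1,000-router network lands in the hundreds of terabytes, which is why an unoptimized database stack quickly becomes the bottleneck.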
This can be viewed as both an advantage and a disadvantage of using open source components. Yes, it allows you to move quickly and at your own pace, but the disadvantage is that you sometimes end up with a fat stack: the code is not optimized, and you may need more processing power than you would with an optimized stack.

The disadvantages of open source

The biggest gap in open source is probably management and support. Vendors keep making additions to the code. For example, zero-touch provisioning is not part of the open source stack, but many SD-WAN vendors have added that capability to their product.

Low-code/no-code development can also become a problem. Now that we have APIs, users mix and match stacks rather than writing raw code. We have GUIs with various modules that communicate over a REST API; essentially, you take open source modules and aggregate them together.

The problem with pure network functions virtualization (NFV) is that a bunch of different software stacks run on a common virtual hardware platform. The configuration, support, and logging for each stack still require quite a bit of integration and support work.

Some SD-WAN vendors take a "single pane of glass" approach, where all network and security functions are administered from a common management view. Other SD-WAN vendors partner with security companies, where security is a totally separate stack.

AT&T's 5G rollout

Part of AT&T's 5G rollout consisted of open source components in its cell towers. The company deployed over 60,000 5G routers compliant with a newly released white box spec hosted by the Open Compute Project. This freed it from the constraints of proprietary silicon and the feature roadmaps of traditional vendors.
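The "single pane of glass" idea above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: the stack names and status fields are invented for the example, and a real controller would gather these reports over REST rather than from in-memory dicts.

```python
# Hypothetical "single pane of glass" sketch: per-stack status
# reports (routing, security, WAN optimization) are merged into
# one management view. Stack names and fields are illustrative.

def aggregate_view(stack_reports):
    """Merge per-stack status dicts into a single management view."""
    view = {"stacks": {}, "healthy": True}
    for report in stack_reports:
        view["stacks"][report["stack"]] = report["status"]
        if report["status"] != "up":
            view["healthy"] = False
    return view

reports = [
    {"stack": "routing",  "status": "up"},
    {"stack": "security", "status": "up"},
    {"stack": "wan-opt",  "status": "degraded"},
]
print(aggregate_view(reports))
```

The hard part in practice is not the merge itself but normalizing the configuration, logging, and alert formats of each independent stack, which is exactly the integration burden the NFV paragraph describes.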
AT&T is using a disaggregated network operating system (dNOS) as the operating system within the white boxes. dNOS separates the router's operating-system software from the router's underlying hardware.

Previously, the barriers to entry for creating a network operating system (NOS) were too high. However, advances in software, such as Intel's DPDK and the power of YANG models, and advances in hardware, such as Broadcom's silicon chips, have lowered those barriers. Hence, we are witnessing a rapid acceleration in network innovation.

Intel DPDK

Intel's DPDK (Data Plane Development Kit) is a set of software libraries that allows chipsets to process and forward packets much faster. It boosts packet-processing performance and throughput, leaving more time for data-plane applications.

DPDK provides userspace libraries and drivers that bypass the kernel's networking stack, allowing packets to be processed much faster. Intel also added AES New Instructions (AES-NI), an instruction set that implements the Advanced Encryption Standard (AES) algorithm in hardware and accelerates the encryption and decryption of data on Intel chips.

Five years ago, no one wanted to put encryption on their WAN routers because of the roughly 10x performance hit. Today, with Intel's hardware support, the CPU cost of encryption and decryption is far lower.

The power of open source

In the past, the common network strategy was to switch when you can and route when you must, since switching is faster and cheaper at gigabit speeds.
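A rough calculation shows why AES-NI changed the encryption cost calculus. The cycles-per-byte figures below are commonly cited ballpark numbers for table-based software AES versus the AES-NI path, not measurements from this article.

```python
# Illustrative single-core AES throughput estimate. The
# cycles-per-byte costs are rough ballpark assumptions:
# ~20 for table-based software AES, ~1.5 with AES-NI.

def encrypt_throughput_gbps(clock_ghz, cycles_per_byte):
    """Single-core AES throughput in Gbit/s for a given cost."""
    bytes_per_sec = clock_ghz * 1e9 / cycles_per_byte
    return bytes_per_sec * 8 / 1e9

software_aes = encrypt_throughput_gbps(3.0, 20.0)  # software AES
aesni_aes    = encrypt_throughput_gbps(3.0, 1.5)   # AES-NI path

print(f"software AES: {software_aes:.1f} Gbit/s per core")  # 1.2
print(f"AES-NI:       {aesni_aes:.1f} Gbit/s per core")     # 16.0
```

Under these assumptions a single core goes from struggling with a gigabit link to comfortably encrypting 10G+, which lines up with the roughly 10x penalty routers used to pay.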
However, with open source, the cost of routing is coming down, and with routing in software you can scale horizontally, not just vertically.

To put it another way: instead of a $1M terabit router, one can deploy 10 x 100-Gigabit routers at $10K each, roughly $100K in total, which is a significant 10x reduction in cost. It is closer to 20x once redundancy is figured in. Today's routers require a 1:1 primary/redundant configuration, whereas when you scale horizontally, an M+N model can be used, where one router serves as the redundant unit for 10 or more production routers.

In the past, a terabit router meant paying a heap for a single box. Today, you can take a number of gigabit servers and let horizontal scaling add up to terabit speeds.

The future of open source

Evidently, the role of open source in networking will only grow. Traditional networking leaders such as Cisco and Juniper are likely to see a lot of pressure on their revenues, and especially their margins, as the value-add of proprietary gear becomes less and less.

The number of vendors entering networking will also increase, since the cost to create and deploy a solution is lower, which will further challenge the big vendors. In addition, gigantic companies like Facebook and AT&T will continue to use more open source in their networks to keep their costs down and to scale out next-generation networks such as 5G, edge computing, and IoT.

Open source will also bring about changes in network design and will continue to push routing to the edge of the network. As a result, more and more routing will occur at the edge, so traffic does not need to be backhauled. Open source brings the huge advantage of lower cost to deploy routing everywhere.

The biggest challenge with all the open source initiatives is standardization.
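The cost argument above can be checked with simple arithmetic. The $1M terabit-router price and the $10K 100G-router price are the figures from the text; the 1:1 versus M+N redundancy models are compared directly.

```python
# Sketch of the horizontal-scaling cost argument. The $1M terabit
# router and $10K-per-100G-router prices come from the article;
# the rest is arithmetic over the two redundancy models.

def scale_out_cost(unit_price, production_units, redundant_units):
    """Total cost of an M+N horizontally scaled router cluster."""
    return unit_price * (production_units + redundant_units)

big_router = 1_000_000 * 2  # terabit router with 1:1 redundant pair
cluster = scale_out_cost(10_000, production_units=10,
                         redundant_units=1)  # M+N, one shared spare

print(f"terabit router pair:   ${big_router:,}")   # $2,000,000
print(f"10+1 x 100G cluster:   ${cluster:,}")      # $110,000
print(f"savings: {big_router / cluster:.1f}x")     # 18.2x
```

The roughly 18x result matches the article's "close to 20x" figure once redundancy is included: the spare capacity is amortized over ten production routers instead of duplicated 1:1.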
Source-code branches and the teams working on them split on a regular basis; look at all the variations of Linux. So when AT&T or another big company bets on a specific open source stack and continues to contribute to it openly, there is still no guarantee that it will be the industry standard in three years.

A large retailer in the U.S. has chosen an overall IT strategy of using open source wherever possible, including in the network. They feel that to compete with Amazon, they have to become like Amazon.

Where to go from here?

Every technology and product has its place and time. Enterprises should start investigating where open source networking fits into their strategy. Some common use cases include:

OpenVPN – moving to open source for remote connectivity.
Open container internetworking – networking Kubernetes or other container environments in hybrid, multi-cloud architectures, and evolving from VNFs to CNFs.
Labs – testing new concepts and features for virtually free.
Network management – open source and/or freemium tools that can add value with minimal investment.
RFPs – adding open source-based networking vendors into the RFP process, if for nothing more than to put price pressure on the incumbent vendor.