Open source software has been a key underpinning of enterprise IT for years, so it's no surprise that it's helping to drive the infrastructure side of the equation forward just as much as application development.
Some projects are much more influential than others, and here are five that are doing the most to help enterprise infrastructure keep pace with the demands of an ever-more sophisticated operating environment.
OpenStack
OpenStack is notable in part for being an open-source competitor to the most important proprietary virtualization software on the market: VMware's vSphere. For the basic task of virtualizing servers into a flexible pool of computing resources, the difference appears to be ease of use; VMware tends to be simpler when there isn't much in-house virtualization or private-cloud expertise.
OpenStack is important in networking for one main reason: the telecom sector and network functions virtualization (NFV), which uses enterprise virtualization technology to perform networking tasks previously handled by dedicated hardware tied to proprietary software. Telecom providers love this idea because it lets them replace expensive, proprietary products with general-purpose switches and servers. NFV software such as OpenStack also lets them provision workloads dynamically and deploy new capabilities more flexibly.
Arpit Joshipura, general manager of networking and orchestration at the Linux Foundation, said that OpenStack and other NFV-enabling projects have quickly become central to telecom operations.
"Telecom was all proprietary, everything from the [radio access network] to the edge to the core," he said. "In the last five years, telecom networks have become totally open-source dependent."
Ansible
Ansible, acquired and now developed by Red Hat, is an open-source IT-automation and configuration-management tool that offers an alternative to configuring hardware manually.
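To give a sense of what that looks like in practice, here is a minimal playbook sketch; the inventory group, VLAN number, and names are invented for illustration, and the `cisco.ios` collection is just one of many device modules Ansible supports:

```yaml
# Hypothetical Ansible playbook: the desired state is declared once here,
# and Ansible pushes it over SSH to every device in the "access_switches"
# inventory group. No agent software runs on the switches themselves.
- name: Configure access switches
  hosts: access_switches          # invented inventory group name
  gather_facts: no
  tasks:
    - name: Ensure VLAN 20 exists on each switch
      cisco.ios.ios_vlans:        # module from the cisco.ios collection
        config:
          - vlan_id: 20
            name: office          # illustrative VLAN name
        state: merged             # add/update without touching other VLANs
```

Because the playbook is an ordinary text file, it can live in the same Git repository as the rest of the team's configuration.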
The idea is that the IT team writes a script describing the network and what it's supposed to do, and Ansible then automatically configures the relevant devices. It doesn't use agent software; instead, it pushes "Ansible modules" directly to the devices via SSH for easy deployment.
"Ansible's important because you need to be able to orchestrate your machines once you have a lot of them," said Elizabeth K. Joseph, a developer advocate and open-source expert at IBM. "You can manage a server or two or 10 on your own, but it's much easier to deploy them and manage them automatically."
Red Hat also offers an array of paid add-ons for Ansible, including improved security, role-based access control, and job scheduling. Ansible's approach to network configuration lets IT workers set configurations once on a single controller and automatically push them to the devices on their networks. Software can also be pushed to all devices on the network, or to just a relevant subset, by editing the main playbook. Changes can be tracked and audited in Git or some other version-control system.
Akraino
Akraino launched in 2019 as part of the Linux Foundation's LF Edge program, which is aimed at creating open frameworks for edge-computing deployments. It is a collection of configuration blueprints designed to offer a freely available, off-the-shelf recipe for network and hardware configurations for specific use cases.
Akraino currently includes 11 blueprint families, grouped by general use area, and 27 specific blueprints. One example is StarlingX Far Edge Distributed Cloud, which specifies a hardware setup, containerization providers, and an orchestration framework to enable applications to run in high-density locations like airports, sports stadiums, and malls.
Other blueprints focus on AR/VR infrastructure, telecom radio deployment, and various types of IoT.
The idea behind Akraino is to offer vendors and sophisticated end users a way to streamline the configuration of the common elements of edge deployments. A company with a new application for a particular vertical market, say, providing real-time monitoring to connected factories, can focus on that application without having to design the underlying computing infrastructure.
Kubernetes
Kubernetes is a container-orchestration platform for all kinds of enterprise workloads, originally the product of Google engineers but released as open source in 2014. It has since become an industry standard, accounting for 71% of containerization use in the enterprise, according to a study from 451 Research.
Enterprises like containerization in general, and Kubernetes in particular, because it's an effective simplification compared to monolithic models of service deployment. Instead of a single application offering a range of services and requiring specialized infrastructure, Kubernetes breaks each process used by the application into its own container and virtualizes it.
That means containerized workloads can run anywhere, on premises, in public or private clouds, or in various combinations of the above simultaneously, and work just as they would if they were bundled in a single application running on dedicated hardware.
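In practice, those workloads are described declaratively in a YAML manifest. A minimal sketch, with an illustrative name, image, and replica count, might look like this:

```yaml
# Minimal Kubernetes Deployment sketch: declare the desired state, and the
# cluster keeps three replicas of the container running, restarting or
# rescheduling them on any node if one fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                   # illustrative service name
spec:
  replicas: 3                 # desired number of running copies
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # illustrative container image
          ports:
            - containerPort: 80
```

Applying a file like this with `kubectl apply -f` hands the ongoing work of provisioning and recovery to the cluster rather than the operator.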
Consequently, developers can create a file that outlines how the services are supposed to work, and Kubernetes automates everything from provisioning to failover to updates.
Kubernetes was released as open source with the goal of simplifying the underlying infrastructure while leaving vendors and users the option of creating modifications to address a particular market or a particular enterprise need, according to IBM's Joseph.
"A lot of huge companies got together to build these things that give a skeleton or core of how to function," she said. "A small company can run it themselves, but that's actually kind of hard. The reason these companies invested in [these projects] is because they know they can sell things on top of that skeleton, letting them skip having to write the basic, boring stuff that it'll have to do anyway."
Linux itself
Any list of open-source projects important to enterprise networking must include the Linux kernel. Linux underpins much of the modern enterprise network, including all of the other projects listed here. By extension, that means it's also the basic operating system behind 90% of the public cloud, according to a 2019 survey by Red Hat.
Even by itself, the operating system includes robust networking features that make it easy to deploy on white-box hardware. As the work of deploying and managing networks becomes more and more software-based, Linux skills are increasingly critical for just about every network IT professional. "I think people take it for granted," said Joseph.