Network World - The PCI Express bus has emerged as an efficient and cost-effective platform for network applications. Created to address the performance, scalability and configuration limitations of older parallel computer bus architectures, this general-purpose serial I/O interconnect has been widely adopted in enterprise, desktop, mobile, communications and embedded applications.
Despite its widespread deployment, however, there is a common perception that the bus cannot meet the unique I/O demands of high-performance storage and networking. New work on extensions to the PCIe standard is revising that notion. A PCI-SIG working group is developing a specification that adds I/O virtualization capability to PCIe. This functionality lets network administrators virtualize, or share, peripherals and endpoints across different CPUs or CPU complexes.
Base PCIe topologies have dedicated endpoints mapped to specific root complexes. In this environment, each physical endpoint in the network is associated with one system image and cannot be shared.
In the new specification, root complex topologies provide two levels of I/O virtualization. The first level, called single-root I/O virtualization (IOV), builds the virtualization capability into the physical endpoint itself. The endpoint presents one or more virtual endpoints, and each virtual endpoint can directly sink I/O and memory operations from a different system image, and source direct memory access, completion and interrupt operations back to that system image, without run-time intervention.
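The addressing arithmetic behind the single-root scheme can be sketched concretely. Under SR-IOV, each virtual function gets its own 16-bit PCIe routing ID, derived from the physical function's routing ID plus a First VF Offset and a per-VF stride advertised in the endpoint's SR-IOV capability. The helper names below are my own; the formula itself follows the SR-IOV arithmetic:

```python
def vf_routing_id(pf_rid: int, first_vf_offset: int, vf_stride: int, n: int) -> int:
    """Routing ID of the n-th virtual function (1-based), per the SR-IOV
    arithmetic: VFn = PF + FirstVFOffset + (n - 1) * VFStride.
    Routing IDs are 16-bit bus/device/function values, so wrap at 2**16."""
    return (pf_rid + first_vf_offset + (n - 1) * vf_stride) & 0xFFFF

def bdf(rid: int) -> str:
    """Format a 16-bit routing ID as the familiar bus:device.function string."""
    return f"{rid >> 8:02x}:{(rid >> 3) & 0x1F:02x}.{rid & 0x7}"

# A physical function at 03:00.0 with offset 1 and stride 1 yields VFs at
# consecutive function numbers: VF1 is 03:00.1, VF2 is 03:00.2, and so on.
pf = 0x0300
print(bdf(vf_routing_id(pf, first_vf_offset=1, vf_stride=1, n=1)))
```

Because each virtual function has a distinct routing ID, the platform's IOMMU and the switch fabric can route DMA and interrupts for each system image independently, which is what makes the "no run-time intervention" property possible.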
In the second level, called multiroot IOV, the virtualization capability is extended by the use of a multiroot switch and a multiroot endpoint. These switches and endpoints have mechanisms to let multiple root complexes and system images share common endpoints.
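The multiroot arrangement can be pictured as a switch that maintains a separate virtual hierarchy per root complex, where different hierarchies may include the same physical endpoint. This is a toy model for intuition only; the class and method names are invented and do not come from the MR-IOV specification:

```python
class MultiRootSwitch:
    """Toy model of a multiroot-aware switch: each root complex sees its
    own virtual hierarchy, but hierarchies may map onto the same shared
    physical endpoint."""

    def __init__(self):
        # root complex name -> set of endpoints visible in its hierarchy
        self.hierarchies = {}

    def bind(self, root_complex: str, endpoint: str) -> None:
        """Expose an endpoint inside a root complex's virtual hierarchy."""
        self.hierarchies.setdefault(root_complex, set()).add(endpoint)

    def sharers(self, endpoint: str) -> set:
        """Which root complexes currently share this physical endpoint."""
        return {rc for rc, eps in self.hierarchies.items() if endpoint in eps}

sw = MultiRootSwitch()
sw.bind("rc0", "nic0")   # two root complexes share one NIC...
sw.bind("rc1", "nic0")
sw.bind("rc1", "ssd0")   # ...while the SSD stays private to rc1
```

The point of the model is the many-to-many mapping: a multiroot endpoint appears in several hierarchies at once, whereas in a base PCIe topology each endpoint would belong to exactly one.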
I/O virtualization has a number of benefits. First, it can improve system utilization. Each virtual system requires its own I/O resources, but in many physical configurations the number of I/O slots available on a client or server is insufficient to give every virtual system a dedicated I/O endpoint. And even when an adequate number of physical endpoints is available, this topology lets virtual systems share underused endpoints rather than leave capacity idle.
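The slot-count argument is easy to make concrete: with virtualization, each physical endpoint fans out into several virtual endpoints, so more system images can be served than there are slots. A minimal sketch, with invented names and a fixed fan-out per endpoint:

```python
def assign_virtual_endpoints(system_images, physical_endpoints, vfs_per_endpoint):
    """Illustrative only: give every system image its own virtual endpoint,
    drawn from a pool built by fanning each physical endpoint out into
    vfs_per_endpoint virtual endpoints."""
    pool = [(ep, vf) for ep in physical_endpoints
            for vf in range(vfs_per_endpoint)]
    if len(system_images) > len(pool):
        raise ValueError("not enough virtual endpoints for all system images")
    # Each image gets a unique (physical endpoint, virtual function) pair.
    return dict(zip(system_images, pool))

# Four virtual systems, but only two physical NICs: without sharing, two
# systems would go without; with two VFs per NIC, everyone is served.
mapping = assign_virtual_endpoints(["vm0", "vm1", "vm2", "vm3"],
                                   ["nic0", "nic1"], vfs_per_endpoint=2)
```

With dedicated endpoints the same configuration would need four slots; here two suffice, which is the utilization gain the article describes.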
Moreover, the use of a centrally managed I/O resource improves the scalability of I/O while simplifying the management of the network. Both blade and rack-mount servers can access the resources they need, when they need them. And, because I/O can be managed from a centralized switch, administrators can allocate resources more easily and efficiently.
The centralized approach to I/O virtualization also gives network administrators new opportunities for I/O load balancing and bandwidth management. If a virtual system needs additional bandwidth, for example, network managers can allocate more physical endpoint capacity. And if a virtual system consumes more I/O resources than necessary, its consumption can be reduced to a preset level.
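One simple policy a centralized switch manager could apply is clamp-then-scale: cap each virtual system's demand at its administrator-set limit, then scale everything down proportionally if the shared link is still oversubscribed. This is an illustrative policy sketch, not anything specified by the PCIe IOV work:

```python
def rebalance(demands, caps, link_capacity):
    """Illustrative bandwidth manager. demands and caps map virtual-system
    names to bandwidth figures (any consistent unit); systems with no entry
    in caps are uncapped. Returns the granted allocation per system."""
    # Step 1: clamp each virtual system's demand to its preset cap.
    granted = {vs: min(d, caps.get(vs, d)) for vs, d in demands.items()}
    # Step 2: if the shared link is still oversubscribed, scale fairly.
    total = sum(granted.values())
    if total > link_capacity:
        scale = link_capacity / total
        granted = {vs: g * scale for vs, g in granted.items()}
    return granted

# Two systems each want 6 units of an 8-unit link; "a" is capped at 4.
print(rebalance({"a": 6, "b": 6}, {"a": 4}, link_capacity=8))
```

The cap implements "reduced to a preset level," and the proportional scaling stands in for the load balancing a centralized manager can perform because it sees all flows at one point.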