
Single Root I/O Virtualization (SR-IOV) – Part 2

vSphere 5.1 and later supports Single Root I/O Virtualization (SR-IOV). SR-IOV is a specification that allows a single Peripheral Component Interconnect Express (PCIe) physical device under a single root port to appear to be multiple separate physical devices to the hypervisor or the guest operating system.

SR-IOV uses physical functions (PFs) and virtual functions (VFs) to manage global functions for the SR-IOV devices. PFs are full PCIe functions that include the SR-IOV Extended Capability, which is used to configure and manage the SR-IOV functionality. It is possible to configure or control PCIe devices through the PF, and the PF has full ability to move data in and out of the device. VFs are lightweight PCIe functions that contain all the resources necessary for data movement but have a carefully minimized set of configuration resources.

SR-IOV-enabled PCIe devices present multiple instances of themselves to the guest OS instance and hypervisor. The number of virtual functions presented depends on the device. For SR-IOV-enabled PCIe devices to function, you must have the appropriate BIOS and hardware support, as well as SR-IOV support in the guest driver or hypervisor instance.
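To make the PF/VF relationship concrete, the short sketch below walks the PCI device tree the way a Linux system exposes it. This is a minimal illustration, assuming a Linux host or guest with a reasonably recent kernel; the sysfs attributes it reads (sriov_totalvfs, sriov_numvfs, and the virtfn* links) are standard Linux kernel interfaces, not vSphere-specific ones.

    import glob
    import os

    # Walk the PCI devices and report each SR-IOV physical function together
    # with the virtual functions it currently exposes.
    for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
        total_path = os.path.join(dev, "sriov_totalvfs")
        if not os.path.exists(total_path):
            continue  # not an SR-IOV-capable physical function
        with open(total_path) as f:
            total = f.read().strip()
        with open(os.path.join(dev, "sriov_numvfs")) as f:
            enabled = f.read().strip()
        # Each enabled VF shows up as a virtfnN symlink pointing at its own PCI address.
        vfs = [os.path.basename(os.readlink(link))
               for link in sorted(glob.glob(os.path.join(dev, "virtfn*")))]
        print("PF %s: %s of %s VFs enabled: %s"
              % (os.path.basename(dev), enabled, total, ", ".join(vfs) or "none"))

On an ESXi host the same PF/VF split is managed by the VMkernel and the certified PF driver rather than by sysfs; the sketch only shows what "one physical function, many virtual functions" looks like at the PCI level.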

vSphere 5.1 supports SR-IOV. However, some features of vSphere are not functional when SR-IOV is enabled.

Supported Configurations

To use SR-IOV, your environment must meet the following configuration requirements:

Supported Configurations for Using SR-IOV

vSphere

Hosts with Intel processors require ESXi 5.1 or later.

Hosts with AMD processors are not supported with SR-IOV.

Physical host

Must be compatible with the ESXi release.

Must have an Intel processor.

Must not have an AMD processor.

Must support input/output memory management unit (IOMMU), and must have IOMMU enabled in the BIOS.

Must support SR-IOV, and must have SR-IOV enabled in the BIOS. Contact the server vendor to determine whether the host supports SR-IOV.

Physical NIC

Must be compatible with the ESXi release.

Must be supported for use with the host and SR-IOV according to the technical documentation from the server vendor.

Must have SR-IOV enabled in the firmware.

PF driver in ESXi for the physical NIC

Must be certified by VMware.

Must be installed on the ESXi host. The ESXi release provides a default driver for certain NICs, while for others you must download and manually install it.

Guest OS

Red Hat Enterprise Linux 6.x

Windows Server 2008 R2 with SP2

VF driver in the guest OS

Must be compatible with the NIC.

Must be supported on the guest OS release according to the technical documentation from the NIC vendor.

Must be Microsoft WLK or WHCK certified for Windows virtual machines.

Must be installed on the OS. The OS release contains a default driver for certain NICs, while for others you must download and install it from a location provided by the vendor of the NIC or of the host.

To verify compatibility of physical hosts and NICs with ESXi releases, see the VMware Compatibility Guide.

Availability of Features

The following features are not available for virtual machines configured with SR-IOV:

vMotion

Storage vMotion

vShield

Netflow

Virtual Wire

High Availability

Fault Tolerance

DRS

DPM

Suspend and resume

Snapshots

MAC-based VLAN for passthrough virtual functions

Hot addition and removal of virtual devices, memory, and vCPU

Participation in a cluster environment

Note

Attempts to enable or configure unsupported features with SR-IOV in the vSphere Web Client result in unexpected behavior in your environment.

Supported NICs

The following NICs are supported for virtual machines configured with SR-IOV. All NICs must have drivers and firmware that support SR-IOV, and some NICs might require SR-IOV to be enabled in the firmware. A quick guest-side verification sketch follows the list below.

Products based on the Intel 82599ES 10 Gigabit Ethernet Controller Family (Niantic)

Products based on the Intel Ethernet Controller X540 Family (Twinville)

Emulex OneConnect (BE3)
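Inside a supported Linux guest, a quick way to confirm that a VF from one of these NICs has been claimed by its VF driver is to look at which kernel driver each network interface is bound to. The sketch below assumes the driver names commonly shipped with Linux distributions: ixgbevf for VFs of the Intel 82599/X540 family and, typically, be2net for Emulex OneConnect; check the NIC vendor's documentation for the exact driver for your guest OS release.

    import os

    # List each network interface in the guest with the kernel driver bound to it,
    # so an SR-IOV VF (for example one claimed by ixgbevf) is easy to spot.
    for iface in sorted(os.listdir("/sys/class/net")):
        driver_link = "/sys/class/net/%s/device/driver" % iface
        if not os.path.islink(driver_link):
            continue  # loopback and purely virtual interfaces have no PCI driver
        driver = os.path.basename(os.readlink(driver_link))
        print("%-12s driver=%s" % (iface, driver))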

Upgrading from earlier versions of vSphere

If you upgrade from vSphere 5.0 or earlier to vSphere 5.1 or later, SR-IOV support is not available until you update the NIC drivers for the new vSphere release. NICs must have firmware and drivers that support SR-IOV, and SR-IOV must be enabled, for the functionality to operate.
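One simple way to confirm the driver and firmware levels on a NIC is sketched below. It assumes a Linux shell with the standard ethtool utility and uses eth0 only as an example interface name; on an ESXi host, similar information is exposed through esxcli.

    import subprocess

    # Report the driver name, driver version, and firmware version that ethtool
    # shows for a NIC. The interface name is only an example.
    def nic_driver_info(iface):
        out = subprocess.check_output(["ethtool", "-i", iface]).decode()
        fields = dict(line.split(": ", 1) for line in out.splitlines() if ": " in line)
        return fields.get("driver"), fields.get("version"), fields.get("firmware-version")

    print(nic_driver_info("eth0"))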

There are some restrictions in the interactions between vSphere 5.1 and virtual functions (VFs):

When a physical NIC creates VFs for SR-IOV to use, the physical NIC becomes a hidden uplink and cannot be used as a normal uplink. This means it cannot be added to a standard or distributed switch.

There is no rate control for VFs in vSphere 5.1. Every VF could potentially use the entire bandwidth of a physical link.

When a VF device is configured as a passthrough device on a virtual machine, the standby and hibernate functions for the virtual machine are not supported.

Due to the limited number of vectors available for passthrough devices, there is a limited number of VFs supported on a vSphere ESXi host. vSphere 5.1 SR-IOV supports up to 41 VFs on supported Intel NICs and up to 64 VFs on supported Emulex NICs.

The actual number of VFs supported depends on your system configuration. For example, if you have both Intel and Emulex NICs present with SR-IOV enabled, the number of VFs available for the Intel NICs depends on how many VFs are configured for the Emulex NIC, and the reverse. You can use the following formula to roughly estimate the number of VFs available for use; a small worked check follows the formula:

3X + 2Y < 128

Where X is the number of Intel VFs, and Y is the number of Emulex VFs.
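As a worked check of this estimate (the weights 3 and 2 and the limit of 128 come straight from the formula above; the function name and the sample VF counts are only illustrative):

    def vf_budget_ok(intel_vfs, emulex_vfs):
        """Rough vSphere 5.1 estimate: 3X + 2Y must stay below 128."""
        return 3 * intel_vfs + 2 * emulex_vfs < 128

    # 30 Intel VFs and 16 Emulex VFs: 3*30 + 2*16 = 122, which fits.
    print(vf_budget_ok(30, 16))   # True
    # The per-vendor maximums cannot be reached at the same time:
    # 41 Intel VFs and 64 Emulex VFs: 3*41 + 2*64 = 251, which does not fit.
    print(vf_budget_ok(41, 64))   # False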

If a supported Intel NIC loses connection, all VFs from the same physical NIC stop communicating, including traffic between the VFs.

If a supported Emulex NIC loses connection, all VFs stop communicating with the external environment, but traffic between VFs on the same NIC still functions.

VF drivers offer many different features, such as IPv6 support, TSO, LRO, and checksum offload. See your vendor’s documentation for further details.

SR-IOV offers performance benefits and tradeoffs similar to those of DirectPath I/O. DirectPath I/O and SR-IOV have similar functionality, but you use them to accomplish different things.

SR-IOV is beneficial in workloads with very high packet rates or very low latency requirements. Like DirectPath I/O, SR-IOV is not compatible with certain core virtualization features, such as vMotion. SR-IOV does, however, allow for a single physical device to be shared amongst multiple guests.

With DirectPath I/O you can map only one physical function to one virtual machine. SR-IOV lets you share a single physical device, allowing multiple virtual machines to connect directly to the physical function.

This functionality allows you to virtualize low-latency workloads (less than 50 microseconds) and high packet-rate workloads (greater than 50,000 packets per second, such as network appliances or purpose-built solutions) on vSphere.
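The mapping difference can be summarized with a small illustrative model (the device and virtual machine names below are made up; this is not a vSphere API, just a picture of who owns what):

    # DirectPath I/O: the entire physical function is dedicated to a single VM.
    directpath_assignment = {"PF 0000:05:00.0": "vm-appliance-01"}

    # SR-IOV: one physical function exposes several virtual functions, and each VF
    # can be handed to a different VM while the PF itself stays under host control.
    sriov_assignment = {
        "PF 0000:05:00.1": {
            "VF 0000:05:10.1": "vm-firewall-01",
            "VF 0000:05:10.3": "vm-loadbalancer-01",
            "VF 0000:05:10.5": "vm-ids-01",
        },
    }

    print(len(sriov_assignment["PF 0000:05:00.1"]), "VMs share one physical device")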

For step-by-step configuration procedures, see the following topics in the vSphere documentation:

Configure a Virtual Machine to Use SR-IOV in the vSphere Web Client
Configure a Virtual Machine to Use SR-IOV
Configure the Passthrough Device for a Virtual Function in the vSphere Web Client
Configure the Passthrough Device for a Virtual Function