Single Root I/O Virtualization

Published on 16 July 2013 / Last Updated on 16 July 2013

This article provides an overview of SR-IOV in Windows Server 2012 and how to enable it in a Hyper-V environment.

Introduction

Single Root I/O Virtualization (SR-IOV) is a standard developed by the PCI-SIG that works in conjunction with system chipset support for virtualization technologies. SR-IOV enables network traffic to bypass the software switch layer of the Hyper-V virtualization stack to allow SR-IOV capable devices to be assigned directly to a virtual machine. It does this by providing remapping of interrupts and DMA.

Hyper-V in Windows Server 2012 includes built-in support for SR-IOV–capable network devices, allowing an SR-IOV virtual function of a physical network adapter to be assigned directly to a virtual machine. The result is increased network throughput and reduced network latency for virtual machines running on Hyper-V hosts, along with reduced host CPU overhead for processing network traffic.

To learn how SR-IOV works and how to enable it in a Hyper-V environment, I'm going to share with you a short excerpt from the unedited draft of my new ebook Optimizing and Troubleshooting Hyper-V Networking (Microsoft Press, 2013). The content for this topic was contributed by Keith Hill, a Sr. Support Escalation Engineer with the Windows Server Core High Availability Team. Keith started his Microsoft journey in 1999 on the after-hours support team. He moved to the cluster team about 7 years later, and 2 years ago became the Support Topic Owner for Hyper-V within Commercial Technical Support (CTS). Keith also had the assistance of John Howard, Program Manager for Hyper-V, in developing this content.

Overview

One of the new features included with Hyper-V in Windows Server 2012 is Single Root I/O Virtualization (SR-IOV). SR-IOV is a specification that was created by the Peripheral Component Interconnect Special Interest Group (PCI-SIG) in 2010. This standard allows a single PCIe device to appear to be multiple separate physical PCIe devices.

Note:
For some light reading, you can read the SR-IOV specification on the PCI-SIG website.

It is important to note that the SR-IOV standard applies to all PCIe devices, including storage. However, for Windows Server 2012, Microsoft looked at where the biggest gains would be in using SR-IOV and decided to support SR-IOV exclusively for networking devices.

In Windows Server 2008 R2 we have two types of virtual network cards: the emulated network adapter (Legacy Network Adapter) and the synthetic network adapter (Network Adapter). I think most of us have read the reasons why one is better to use than the other, and why you shouldn't use an emulated network card when a synthetic one will do. But for those who have yet to understand that, and it's actually important to the topic of this chapter, I will briefly go over them.

The emulated network card is the worse performing of the two and should only be used to PXE boot a VM. The software (synthetic) NIC is the default and gains a performance boost from the VMBus, an in-memory pipeline that forwards device requests to the parent partition and then to the physical device. But there is still overhead associated with the I/O path of the software NIC.

In short, software devices introduce latency, increase overall path length, and consume compute cycles. With higher network speeds and the number of supported VMs on a system, it would not be uncommon to see a single core consumed by 5-7 Gbps of network traffic generated by the virtual machines running on Windows Server 2008 R2 SP1.

That led Microsoft to offer an alternative for those scenarios: the arrival of SR-IOV in Windows Server 2012. This is a secure device model, at least relative to software-based device sharing, that has lower latency, higher throughput, and lower compute overhead. All this, and it scales well as the number of VMs increases.

How SR-IOV works

So how does SR-IOV work? In Windows Server 2012 Hyper-V, SR-IOV works through physical functions (PFs) and virtual functions (VFs).

A PF is a PCIe function of a network adapter that supports the SR-IOV specification. The PF includes the SR-IOV extended capability in the PCIe base specification. This capability is used to configure and manage the SR-IOV functionality of the network adapter, including enabling virtualization and exposing the VFs. VFs are "lightweight" functions that lack most configuration resources. Each VF shares one or more physical resources on the network adapter.

For example, a VF shares the external network port with the PF and other VFs. While VFs are transient, keep in mind that the PF is always available (as long as the PCIe device is not disabled). It is important to understand that a VF cannot exist without a PF. For illustration purposes, let's take a look at the software components in the following diagram:

Figure 1: A diagram showing the software components that make SR-IOV work.

Note:
Hyper-V child partitions are also known as Virtual Machines.

So let's take a deeper look into the components that are listed in the diagram above.

  • Physical Function (PF): The PF is exposed as a virtual network adapter in the management operating system of the parent partition.
  • PF Miniport Driver: It is the PF miniport driver's responsibility to manage resources on the network adapter that are used by one or more VFs. The PF miniport driver is loaded in the management OS before any resources are allocated for a VF. If the PF miniport driver is halted, all the resources that were allocated for VFs are freed.
  • Virtual Function (VF): As stated earlier, the VF is a lightweight PCIe function on a network adapter that supports the SR-IOV interface. Each VF is associated with the PF on the network adapter and represents a virtualized instance of the network adapter. Each VF shares one or more physical resources on the NIC, for example the external network port.
  • VF Miniport Driver: The VF miniport driver is installed in the guest OS and is used to manage the VF.
  • Network Interface Card (NIC) Switch: The NIC switch is the hardware component of the network adapter that supports the SR-IOV interface. It forwards network traffic between the physical port and the internal virtual ports (VPorts). Keep in mind that each VPort is attached to either the PF or a VF.
  • Virtual Ports (VPorts): A VPort is a data object that represents an internal port on the NIC switch and supports the SR-IOV interface. VPorts allow the transmission of packets to and from VFs or the PF.
  • Physical Port: This is the hardware's actual physical port, used to connect the adapter to the external networking medium.

Note:
It is important to understand that VFs are hardware resources, and because of this there are limits on the number of VFs available on different hardware devices. Current devices offer up to 64 VFs per PF.
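
You can check these hardware limits on your own adapters with the in-box NetAdapter module. This is a sketch; the exact properties reported (such as NumVFs) vary by driver and vendor, and "Ethernet 2" is a placeholder adapter name:

```powershell
# Query the SR-IOV capabilities of the physical network adapters,
# including the number of VFs the hardware exposes (NumVFs)
Get-NetAdapterSriov

# Show all SR-IOV properties for a single adapter
Get-NetAdapterSriov -Name "Ethernet 2" | Format-List *
```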

SR-IOV sounds great, but there are some caveats. SR-IOV must be supported by the BIOS as well as by the NIC and the operating system running the hypervisor. One thing that some people get a tad confused over is that Hyper-V on the server platform does not require Second Level Address Translation (SLAT); for SR-IOV to work, however, SLAT is a requirement.

Note:
A device that is SR-IOV capable can be used as a regular I/O device outside of virtualization.
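
Before creating any switches, you can ask Hyper-V itself whether the host platform (BIOS, chipset, and processor support) is ready for SR-IOV. A minimal sketch using the in-box Hyper-V module:

```powershell
# IovSupport is a boolean; IovSupportReasons lists any blockers,
# such as missing chipset support or disabled BIOS settings
Get-VMHost | Format-List IovSupport, IovSupportReasons
```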

Enabling SR-IOV

So this all sounds very interesting, but how do you enable it? Well, that would be the next question I would be asking. Assuming that you have set up the BIOS correctly, your processors support Second Level Address Translation (SLAT), and you have an SR-IOV capable PCIe network card in the system, the first step to having any network connectivity (whether you enable SR-IOV or not) is to create an external virtual switch. You can do this by using the Virtual Switch Manager in Hyper-V Manager, or you can do it in PowerShell.

Let's start by looking at Hyper-V Manager. Open Hyper-V Manager, then click Virtual Switch Manager on the right-hand side. This opens the Virtual Switch Manager interface. From there it is much like creating any other virtual switch, with one difference:

Figure 2: A screenshot showing how to enable SR-IOV for a virtual switch on the host.

Note:
At the bottom of the window you can see the SR-IOV warning. Once a switch is created, you cannot add this option to it. If you wish to add SR-IOV later, you will have to delete the switch and recreate it.

As I suggested above, you can also create the virtual switch in PowerShell. Using PowerShell's extensions for Hyper-V, you run the New-VMSwitch command. This command requires a parameter to specify the physical network adapter that you wish to use, and Microsoft thought of that as well: you can run Get-NetAdapter to list the adapters.

Here you can see that I have listed the network adapters in PowerShell below:

Figure 3: A screenshot showing how to use Windows PowerShell to list the network adapters.

So now that we have the network adapter's name, we can use the New-VMSwitch command:

Figure 4: A screenshot showing the New-VMSwitch cmdlet.

Now, with the Get-VMSwitch command, we can see the properties exposed on the VMSwitch object:

Figure 5: A screenshot showing output from running the Get-VMSwitch cmdlet.

I know…that's cool, but what does it mean? Well, this would be a great time to take a look at the output in greater detail.

At the top we have IovEnabled. This is true only when the virtual switch is created in SR-IOV mode, and false in any other configuration. The rest require a bit more explanation:

  • IovVirtualFunctionCount: This is the number of VFs that are currently available for use by guest operating systems. Keep in mind this is a hardware limit of the physical network adapter and may vary by vendor. Each software-based NIC can be backed by a VF, and each VM can have up to eight software-based NICs.
  • IovVirtualFunctionsInUse: This is the current number of VFs in use by guest operating systems. In the screenshot above you see this listed as 1, because I have one VM running one NIC in SR-IOV mode.
  • IovQueuePairCount: This is the number of queue pairs available as hardware resources on the physical NIC. Again, this may vary from hardware vendor to hardware vendor. In most cases there will be as many queue pairs available as there are VFs. Depending on the vendor, additional functionality might be included in the VFs; for instance, a hardware vendor may support RSS in a guest operating system backed by a VF, and more than one queue pair may be required for this. Again, this is all based on hardware, so any questions about it should be directed to the vendor.
  • IovQueuePairsInUse: This is the number of queue pairs currently assigned to VFs that are in use by guest operating systems.

For the last two, I will address these together as they are similar.

IovSupport and IovSupportReasons are a numeric code and a description, respectively, regarding the SR-IOV status of the physical network adapter. I will address these more in the troubleshooting section.
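
To inspect these properties on your own switch, a short sketch (the switch name is a placeholder):

```powershell
# List all of the SR-IOV related properties of the virtual switch,
# including IovEnabled, IovVirtualFunctionCount, and IovQueuePairCount
Get-VMSwitch -Name "SR-IOV Switch" | Format-List Iov*
```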

Enabling the guest OS

OK, now we have the switch created; is that it? The correct answer would be: not yet. We now have to enable the guest operating system. With that in mind, open the settings for the guest; under the sub-node for the network adapter you will see Hardware Acceleration. You guessed it…that is where we enable SR-IOV for the guest:

Figure 6: A screenshot showing how to enable SR-IOV for the guest.

Note:
Placing a check mark in the "Enable SR-IOV" box sets the IovWeight setting to a number greater than 0.

Now let's take a look at how we can set this in PowerShell.

With the Set-VMNetworkAdapter command we can configure this setting:

Figure 7: A screenshot showing the Set-VMNetworkAdapter cmdlet.

Here you can see IovWeight set to 50. This deserves some explanation, but the concept is very simple. You might be familiar with VMQWeight in Windows Server 2008 R2; the IovWeight functionality operates the same way in Windows Server 2012. The setting expresses a desire for a hardware offload, but it's not a guarantee. Any number greater than 0 turns the setting on; in short, 1-100 is on and 0 is off.
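
Putting that together, here is a sketch of toggling SR-IOV on a VM's network adapter ("TestVM" is a placeholder VM name):

```powershell
# Any IovWeight from 1-100 requests a VF for the adapter; 0 turns it off
Set-VMNetworkAdapter -VMName "TestVM" -IovWeight 50

# Verify the setting on the adapter
Get-VMNetworkAdapter -VMName "TestVM" | Format-List IovWeight, Status

# Disable SR-IOV for the adapter
Set-VMNetworkAdapter -VMName "TestVM" -IovWeight 0
```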

Note:
Additional steps on how to implement network redundancy when using SR-IOV along with guidance from Keith Hill on how to troubleshoot SR-IOV can be found in my ebook Optimizing and Troubleshooting Hyper-V Networking from Microsoft Press.

Conclusion

More information on how to troubleshoot networking in Hyper-V environments can be found in my ebook Optimizing and Troubleshooting Hyper-V Networking, which includes content contributed from a number of different experts on the Windows Server team at Microsoft.
