What's New in Windows Server 2012 Networking? (Part 6)

Published on 14 Nov. 2013 / Last Updated on 14 Nov. 2013

In this article we’re going to take a deeper dive into Windows Quality of Service.


Introduction

The end is finally somewhere in sight for our never-ending series (at least it seems that way) on the new features and functionalities in Windows Server 2012 networking technologies. In Parts 1 through 5, we’ve already covered eleven important changes, improvements and additions. In this, Part 6 of the series, we’ll tackle Windows Server 2012’s implementation of a complicated feature: Windows Quality of Service (QoS).

Here’s our list again, showing the items we’ve already addressed in past articles, those we’ll look at in this article and those that are yet to come:

Past articles:

  • 802.1x Authenticated Wired and Wireless Access
  • BranchCache
  • Data Center Bridging (DCB)
  • Domain Name System (DNS)
  • DHCP
  • Hyper-V network virtualization
  • IP Address Management (IPAM)
  • Low Latency Workloads technologies
  • Network Load Balancing
  • Network Policy and Access Services
  • NIC Teaming

This article:

  • Windows QoS

Future articles:

  • DirectAccess and Unified RRAS
  • Windows Firewall with Advanced Security

As you can see, this time we’re going to take a deeper dive into Windows Quality of Service, leaving us with DirectAccess and Unified Routing and Remote Access Services (RRAS) and changes to the Windows Firewall with Advanced Security (WFAS) in Windows Server 2012 for our last articles.

Windows Quality of Service (QoS)

QoS has been a part of Windows Server for a long time; it was first introduced in Windows 2000 Server. QoS is about assigning priorities to different types of network traffic (based on applications, users, and computers). This allows you to make sure that high-bandwidth applications such as videoconferencing and telepresence get the bandwidth they need to work properly, and to ensure that less business-critical applications such as online games don’t hog all the bandwidth, by throttling their usage.

QoS is not a proprietary Microsoft technology; standards for QoS are established in several different IETF Requests for Comments (RFCs). Microsoft’s implementation of QoS has evolved over the thirteen years since it was introduced.

Minimum bandwidth

QoS in Windows Server 2012 gives you expanded control in managing bandwidth by adding minimum bandwidth controls to the maximum bandwidth that you were (and still are) able to set in previous versions of Microsoft’s QoS. This minimum bandwidth feature gives you the ability to specify how the bandwidth is to be shared when multiple applications are competing for it over the same network interface in a congested network.

The difference is that the weights you assign to the various applications only come into play when the network is congested. They aren’t absolute throttling rates like the maximum bandwidth feature, which imposes a fixed cap on an application or workload. With maximum bandwidth, a workload can never exceed its cap, even if no other workloads are using any bandwidth. With a minimum bandwidth setting of, say, 40, that workload is guaranteed at least 40 percent of the bandwidth when the network is congested, and when the other applications aren’t using bandwidth, it can go well beyond that share. Both features can be enabled at the same time, so you can set a maximum bandwidth on one application and a minimum bandwidth on another.
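
To make the distinction concrete, here’s a rough sketch using the New-NetQosPolicy cmdlet (which we’ll look at more closely below); the policy names, port numbers and rates are just examples:

  # Guarantee a (hypothetical) backup workload on TCP port 5001 at least 40 percent
  # of the bandwidth during congestion; it can use more when the link is idle.
  New-NetQosPolicy -Name "Backup" -IPProtocolMatchCondition TCP -IPPortMatchCondition 5001 -MinBandwidthWeightAction 40

  # Cap a (hypothetical) file-sync workload on TCP port 5002 at 100 Mbps at all times,
  # congested or not.
  New-NetQosPolicy -Name "FileSync" -IPProtocolMatchCondition TCP -IPPortMatchCondition 5002 -ThrottleRateActionBitsPerSecond 100000000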

Simpler workload classification

Something else Windows Server 2012 has done with QoS is to make it simpler to classify the different workloads. Workloads can be filtered by TCP or UDP port; for example, SMB traffic is classified by matching TCP or UDP port 445.
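
Here’s what that looks like in PowerShell, first with the built-in SMB filter and then with a port-based filter for a custom workload (the DSCP values and the port number are just placeholders):

  # Classify SMB traffic with the built-in template filter and tag it with DSCP 30.
  New-NetQosPolicy -Name "SMB Traffic" -SMB -DSCPAction 30

  # Classify a custom workload by its port (hypothetical TCP port 8080).
  New-NetQosPolicy -Name "Web App" -IPProtocolMatchCondition TCP -IPPortMatchCondition 8080 -DSCPAction 18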

PFC support

Windows Server 2012 supports Priority Flow Control (PFC), an IEEE standard (802.1Qbb) that can selectively pause traffic based on its classification. Cisco originally developed PFC. PFC can handle workloads that require lossless transport, ensuring that no packets are lost when the network is congested. An example of traffic that needs lossless transport is Remote Direct Memory Access (RDMA) traffic that runs over Ethernet (RDMA over Converged Ethernet, or RoCE).

The caveat here is that the NIC has to support PFC. When you enable PFC on both ends of the link, the RoCE traffic class becomes lossless, while other workloads aren’t affected the way they are with other solutions such as link-level flow control, which pauses all traffic on the link.
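
To give you an idea of how the pieces fit together, here’s a rough sketch using the Data Center Bridging cmdlets on a DCB-capable NIC; the priority value, bandwidth percentage and adapter name are just examples:

  # Tag SMB Direct (RDMA) traffic with 802.1p priority 3.
  New-NetQosPolicy -Name "SMB Direct" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

  # Turn on Priority Flow Control for priority 3 only; the other priorities stay lossy.
  Enable-NetQosFlowControl -Priority 3

  # Reserve a share of bandwidth for that traffic class using ETS.
  New-NetQosTrafficClass -Name "RDMA" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

  # Apply the DCB configuration to the (example) adapter.
  Enable-NetAdapterQos -Name "Ethernet 1"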

Hyper-V QoS

Windows Server 2012, like its predecessors Windows Server 2008 and 2008 R2, supports policy-based QoS, which is implemented as part of Windows Server’s Group Policy. Your QoS policies define priorities by setting a DSCP value in the packet header (carried in the Type of Service field in IPv4 and the Traffic Class field in IPv6). Routers can use that value to determine how to queue packets. Policy-based QoS lets you configure and enforce QoS policies at the user level, something that is hard to do when setting QoS on routers and switches. Another advantage is that you can specify policies based on URLs instead of IP addresses.
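
For example, a URL-based marking policy might look something like this (the URL and DSCP value are placeholders):

  # Mark HTTP traffic associated with an (example) intranet video URL with DSCP 46
  # so downstream devices can give it expedited treatment.
  New-NetQosPolicy -Name "Intranet Video" -URIMatchCondition "http://intranet.contoso.com/video/" -DSCPAction 46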

Windows Server 2012 takes all this further with new features for managing bandwidth on virtual networks. Hyper-V QoS is especially useful for cloud providers or enterprises that need Hyper-V virtual machines to deliver reliable, predictable performance. You can enforce both minimum bandwidth and maximum bandwidth for traffic flows, with a Hyper-V virtual switch port number identifying each workload. For PowerShell aficionados, there are cmdlets to configure the minimum and maximum bandwidth settings for the virtual switch ports, and you can also configure QoS individually for multiple NICs.

Using Hyper-V QoS will allow you to make sure that a particular VM or virtual NIC doesn’t hog all the bandwidth to the detriment of other interfaces. You can configure the minimum and maximum values as percentages (weight) or as absolute values. Microsoft recommends that you keep the total of the weights at or near 100 as a best practice, although that isn’t required. If there is a type of traffic that doesn’t have a weight assigned to it, it will end up in the default flow category. It’s best to give your most business-critical workloads large weights, even if you don’t anticipate that they will use that percentage of bandwidth.

You can set absolute minimum and maximum bandwidth values in the graphical interface, but weight-based minimum bandwidth has to be configured with PowerShell, on a per-virtual machine network adapter basis. In fact, PowerShell is the best way to manage all Windows Server 2012 QoS settings.
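
Here’s a rough sketch of how that might look; the switch, adapter and VM names and the values are just examples:

  # Create a virtual switch whose minimum bandwidth is managed by relative weights.
  New-VMSwitch -Name "TenantSwitch" -NetAdapterName "Ethernet 1" -MinimumBandwidthMode Weight

  # Guarantee a business-critical VM a large share of bandwidth under congestion,
  # and cap a less important VM at an absolute 500 Mbps.
  Set-VMNetworkAdapter -VMName "SQL01" -MinimumBandwidthWeight 50
  Set-VMNetworkAdapter -VMName "Test01" -MaximumBandwidth 500000000

  # Give traffic that doesn't match any specific flow a default weight.
  Set-VMSwitch -Name "TenantSwitch" -DefaultFlowMinimumBandwidthWeight 10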

Managing QoS with PowerShell

You create new Quality of Service policies with the New-NetQosPolicy cmdlet. To create a policy, you need to define both a filter and an action. The filter identifies the traffic type, for example by the name of the application or the port number the traffic uses. The action specifies how that traffic will be handled. There are built-in filter parameters for well-known workloads such as iSCSI, NFS, SMB and LiveMigration. To specify the name of an application, you use the -AppPathNameMatchCondition parameter with a string identifying the application name or file path (for example, %ProgramFiles%\application.exe).
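
For example, a policy that matches a (hypothetical) application by its executable name and throttles it might look like this:

  # Throttle a backup application, identified by its executable, to 10 Mbps.
  New-NetQosPolicy -Name "Backup App" -AppPathNameMatchCondition "backupclient.exe" -ThrottleRateActionBitsPerSecond 10000000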

To create a policy for traffic for which you haven’t created a specific filter, you can use the -Default parameter. All other traffic will be handled according to this policy.
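
For instance, a catch-all policy that guarantees unclassified traffic a modest share during congestion (the weight is just an example) would look something like this:

  # Anything not matched by a more specific policy falls under this one.
  New-NetQosPolicy -Name "Everything Else" -Default -MinBandwidthWeightAction 20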

You can run the cmdlet in the background if you want to keep working in the session while it runs. To do this, use the -AsJob parameter. You can run it on a remote computer with the -CimSession parameter.
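
For example (the host name is a placeholder):

  # Create the policy on a remote server without blocking the local session.
  $session = New-CimSession -ComputerName "HV01"
  New-NetQosPolicy -Name "Remote SMB Policy" -SMB -DSCPAction 30 -CimSession $session -AsJob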

Once you’ve created your policies, you specify the location where they will be stored by using the -PolicyStore parameter. You can identify the location with one of the following values: COMPUTERNAME, GPO:COMPUTERNAME, GPO:DOMAIN\GPONAME, LDAP://LDAP-URL, or the active store (ActiveStore). Note that policies kept in the active store are not persistent; they are lost when the system reboots.
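
For example, to create a throwaway policy in the active store for testing:

  # This policy disappears after a reboot, which is handy for experimentation.
  New-NetQosPolicy -Name "Test Policy" -IPProtocolMatchCondition UDP -IPPortMatchCondition 3000 -DSCPAction 40 -PolicyStore ActiveStore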

What happens if you have policies that conflict with one another? In that case, the policy with the highest priority takes precedence. You set priority values on your policies using the -Precedence parameter, which accepts a UInt32 value between 0 and 255 for each policy. If a policy doesn’t have a precedence value set, it defaults to 127.
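
For example (the policy names, DSCP values and precedence numbers are placeholders, and this assumes that the larger precedence value wins):

  # Two overlapping policies; the more specific one gets the higher precedence value.
  New-NetQosPolicy -Name "All Web" -IPProtocolMatchCondition TCP -IPPortMatchCondition 80 -DSCPAction 10 -Precedence 127
  New-NetQosPolicy -Name "Critical Web App" -AppPathNameMatchCondition "criticalapp.exe" -DSCPAction 40 -Precedence 200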

Not sure about the consequences of a cmdlet as you’ve constructed it? You can use the -WhatIf parameter to show what will happen when the cmdlet is run, without actually running it. For more information about Windows Server 2012 network QoS cmdlets in PowerShell, see the TechNet web site.
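
For example:

  # Preview the change without actually creating the policy.
  New-NetQosPolicy -Name "Trial Policy" -Default -DSCPAction 7 -WhatIf

  # Review the QoS policies that already exist on this computer.
  Get-NetQosPolicy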

Summary

One more feature down, and now we only have two more to go. Next time around, we’ll be looking at DirectAccess and Unified RRAS and Windows Firewall with Advanced Security as implemented in Windows Server 2012. Both of these have been around for a while in previous versions of the operating system, but as with QoS, Microsoft has made some changes that should enhance the experience of Windows Server administrators. So be sure to join us again for Part 7 when we start wrapping up this series.

See you then – Deb.
