QoS in Windows Server 2012 (Part 1)

Published on 30 Aug. 2012 / Last Updated on 30 Aug. 2012

This article series will discuss Quality of Service (QoS) in Windows Server 2012. This first article in the series explains what QoS is and why it is useful.


Introduction

As organizations begin to depend more heavily on cloud services, network bandwidth management becomes even more critical. Thankfully, bandwidth can be managed through a Windows component known as Quality of Service (QoS). In this article, I will explain what QoS is, how it works, and what you need to know about using QoS in Windows Server 2012.

An Introduction to QoS

Although the main focus of this article series will be on using QoS with Windows Server 2012, there are two important things that you need to know right off the bat. First of all, QoS is not new to Windows Server 2012. Microsoft first introduced QoS over a decade ago when it debuted in Windows 2000. Of course, Windows support for QoS has been modernized in Windows Server 2012.

The other thing that I want to clear up right away is the notion that QoS is a Microsoft technology. Even though QoS is built into the Windows operating system (and has been for quite some time), it is an industry standard rather than a Microsoft technology. Microsoft has a long history of including industry standards in Windows. For example, IPv4 and IPv6 are also industry standard networking protocols that are included in the Windows operating system.

So with that out of the way, I want to go ahead and talk about what QoS is. In order to really understand what QoS is and why it is important, you have to consider the nature of networks in general. Without a mechanism such as QoS in place, most networks use what is known as best effort delivery. In other words, when a computer sends a packet to another computer, the two machines and the networking hardware between them will make an earnest attempt to deliver the packet. Even so, delivery is not guaranteed. Even if the packet does make it to its destination, there are no guarantees as to how quickly it will get there.

Oftentimes the delivery speed is based on the network speed. For example, if a packet is being sent between two PCs that reside on the same gigabit network segment, then the packet will most likely be delivered very quickly. However, this is anything but a guarantee. If anything, the network speed (1 gigabit in this example) can be thought of as an unbreakable speed limit rather than a guarantee of fast delivery. Of course having a fast network certainly increases the chances that a packet will be delivered quickly, but there are no guarantees. An application that consumes a lot of bandwidth has the potential to degrade performance for every other device on the network.

This is where QoS comes into play. QoS is essentially a set of standards that are based on the concept of bandwidth reservation. What this all boils down to is that network administrators are able to reserve network bandwidth for mission critical applications so that those applications can send and receive network packets in a reasonable amount of time.

It is important to understand that although QoS is implemented through the Windows operating system, the operating system is not the only component that is involved in the bandwidth reservation process. In order for QoS to function properly, each network device that is involved in communications between two hosts (including the hosts themselves) must be QoS aware. This can include network adapters, switches, routers, and other networking hardware such as bridges and gateways. If the traffic passes through a device that is not QoS aware, then the traffic is dealt with on a first come, first served basis just like any other type of traffic would be.

Obviously not every type of network supports QoS, but Ethernet and wireless Ethernet do offer QoS support (although not every Ethernet device is QoS aware). One of the best networking types for use with QoS is Asynchronous Transfer Mode (ATM). The reason why ATM works so well with QoS is that it offers connection-oriented connectivity. When QoS is used, ATM can enforce the bandwidth requirements at the hardware level.

Before I move on, I want to clear up what might seem like a contradiction. When I talked about Ethernet, I said that Ethernet supports QoS, but that the underlying hardware must be QoS aware. Even so, Ethernet does not enforce QoS at the hardware level the way that ATM does. So what gives?

The reason why Ethernet does not enforce QoS at the hardware level is that Ethernet is a very old networking technology that has been retrofitted many times over the last couple of decades. The concept of bandwidth reservation did not exist when Ethernet was created, and bandwidth reservation at the hardware level just does not work with the existing Ethernet standard. That being the case, QoS is implemented at a higher level in the OSI model. The hardware does not perform true bandwidth reservation, but rather emulates bandwidth reservation through traffic prioritization based on the instructions provided by QoS.
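One common way this higher-level prioritization is signaled is by marking each packet's IP header with a Differentiated Services Code Point (DSCP), which QoS-aware switches and routers use to decide which queues get serviced first. Windows normally applies these markings through QoS policies rather than application code, but the cross-platform socket-level equivalent can be sketched as follows (the DSCP value sits in the top six bits of the IP TOS byte, hence the shift; on Windows itself, marking set this way may be overridden by policy):

```python
import socket

# DSCP 46, "Expedited Forwarding," is conventionally used for
# latency-sensitive traffic such as voice.
DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# The DSCP occupies the upper six bits of the TOS byte, so shift left by 2.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Every datagram sent on this socket now carries the EF marking; QoS-aware
# devices along the path can prioritize it over best-effort traffic.
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos)  # 184, i.e. 46 << 2
sock.close()
```

The marking is only a request: as the article notes, a device that is not QoS aware simply ignores it and forwards the packet on a best-effort basis.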

Additional Considerations

Although I have given you an overview of what is required for implementing QoS, there are a few other considerations that should be taken into account. For starters, Windows Server 2012 does not impose any bandwidth requirements that would keep you from using QoS in certain situations. Even so, Microsoft states that QoS works best on 1 gigabit and 10 gigabit network adapters.

Presumably the main reason behind Microsoft’s statement is that adapters that operate at speeds below a gigabit simply do not provide enough bandwidth to make bandwidth reservation worthwhile.

I might be reading too much into Microsoft’s recommendation, but there is something that I just can’t help but notice. Microsoft said that QoS works best on 1 gigabit or 10 gigabit adapters – not connections. Although this might at first seem trivial, I think that Microsoft’s wording is deliberate.

One of the new features in Windows Server 2012 is NIC teaming. NIC teaming allows multiple network adapters to work together as one in order to provide higher overall throughput and resilience against NIC failure. I have not seen any official word as to whether or not NIC teaming will work with QoS, but I would be very surprised if Microsoft did not allow the two features to be used together.

One last thing that I want to quickly mention about QoS is that it is designed for traffic management on physical networks. As such, Microsoft recommends that you avoid using QoS from within a virtual server. However, QoS can be used on a physical server that is acting as a virtualization host.

Conclusion

In this article I have tried to give you a basic explanation of what QoS is, as well as some of its primary limitations. In the next article in this series, I plan to show you how QoS is implemented in Windows Server 2012 and how you would go about enabling it. Because Windows Server 2012 focuses so heavily on the cloud, Microsoft also offers QoS for Hyper-V, and I plan to discuss that eventually as well.
