Throttling Bandwidth through QoS (Part 2)

Published on 26 June 2008 / Last Updated on 26 June 2008

This article continues the Throttling Bandwidth Through QoS series by discussing the QoS architecture that is used by Windows Server 2003.

If you missed the first part in this article series please read Throttling Bandwidth through QoS (Part 1).

If you would like to be notified when Brien M Posey releases the next part of this article series please sign up to the WindowsNetworking.com Real time article update newsletter.

In the first part of this article series, I talked about what QoS does, and what it is used for. In this article, I want to continue the discussion by explaining how QoS works. As you read this article, please keep in mind that the information presented here is based on the Windows Server 2003 implementation of QoS, which works differently from the original QoS implementation found in Windows 2000 Server.

The Traffic Control API

One of the biggest problems with prioritizing network traffic is that you can’t prioritize traffic based on the computer that generated it. It is very common for a single computer to run multiple applications, and for each of those applications (and the operating system) to produce its own traffic stream. When this happens, each traffic stream must be prioritized individually. After all, one application may require reserved bandwidth, while best effort delivery is fine for the other traffic streams.

This is where the Traffic Control API comes into play. The Traffic Control API is the Application Programming Interface that allows QoS parameters to be applied to individual packets. The Traffic Control API works by identifying individual traffic streams, and then applying various QoS controls to those streams.

The first thing that the Traffic Control API does is to create what is known as a filterspec. A filterspec is essentially a filter that defines what it means for a packet to belong to a particular stream.  Some of the attributes used by a filterspec include the packet’s source and destination IP address, and the port number.
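The idea can be sketched in a few lines of Python. This is a conceptual illustration only; the class and field names below are mine, not the actual structures used by the Traffic Control API:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class FilterSpec:
    """Toy filterspec: the attributes that define membership in a stream."""
    src_ip: str
    dst_ip: str
    dst_port: int

    def matches(self, packet: dict) -> bool:
        # A packet belongs to the stream only if every attribute matches.
        return (packet["src_ip"] == self.src_ip
                and packet["dst_ip"] == self.dst_ip
                and packet["dst_port"] == self.dst_port)


fs = FilterSpec(src_ip="192.168.0.10", dst_ip="192.168.0.20", dst_port=1720)

voip_pkt = {"src_ip": "192.168.0.10", "dst_ip": "192.168.0.20", "dst_port": 1720}
web_pkt = {"src_ip": "192.168.0.10", "dst_ip": "192.168.0.20", "dst_port": 80}
```

Two packets from the same computer to the same destination can fall into different streams simply because they use different ports, which is exactly the per-application separation described above.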

Once a filterspec has been defined, the API allows for the creation of a flowspec. A flowspec identifies the QoS parameters that will be applied to a sequence of packets. Some of the parameters defined by the flowspec include the token rate (the permitted rate of transmission) and the service type.
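The token rate parameter is easiest to understand as a token bucket. The following is a toy model of that behavior, not the flowspec structure itself; the parameter and method names are illustrative:

```python
class TokenBucket:
    """Toy model of a flowspec's token rate (sustained bytes/sec)
    and token bucket size (maximum burst in bytes)."""

    def __init__(self, token_rate: float, bucket_size: float):
        self.token_rate = token_rate    # sustained rate, bytes per second
        self.bucket_size = bucket_size  # maximum burst, bytes
        self.tokens = bucket_size       # bucket starts full
        self.last = 0.0

    def conforms(self, nbytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the bucket size.
        self.tokens = min(self.bucket_size,
                          self.tokens + (now - self.last) * self.token_rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False


# 1000 bytes/sec sustained, bursts of up to 1500 bytes.
tb = TokenBucket(token_rate=1000, bucket_size=1500)
```

A packet that fits within the accumulated tokens conforms to the flowspec; one that arrives before enough tokens have accumulated does not, which is how the permitted rate of transmission is enforced.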

The third concept defined by the Traffic Control API is that of a flow. A flow is simply a sequence of packets that are all subject to the same flowspec. To put it simply, the filterspec identifies which packets will be included in a flowspec. The flowspec determines whether or not those packets receive preferential treatment, and the flow is the actual transmission of the packets that are subject to the flowspec. All packets within a flow are treated equally.

It is worth mentioning that one of the advantages that the Traffic Control API has over the Generic QoS API that was used by Windows 2000, is the ability to use flow aggregation. If a host has multiple applications transmitting multiple data streams to a common destination, then those packets can all be combined into a common flow. This is true even if the applications are using different port numbers, so long as the source and destination IP addresses are the same.
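Flow aggregation amounts to grouping streams by their source and destination IP addresses while ignoring the port numbers. A minimal sketch (the data layout is my own, for illustration):

```python
from collections import defaultdict


def aggregate(streams):
    """Group traffic streams into flows keyed on (source IP, destination IP).
    Port numbers are deliberately ignored, so streams from different
    applications to the same destination share one flow."""
    flows = defaultdict(list)
    for s in streams:
        flows[(s["src_ip"], s["dst_ip"])].append(s)
    return flows


streams = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9", "dst_port": 5004},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9", "dst_port": 5006},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.7", "dst_port": 5004},
]
flows = aggregate(streams)
```

The first two streams share a destination and therefore collapse into a single flow, even though they use different ports; the third goes to a different host and remains a separate flow.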

The Generic Packet Classifier

In the previous section, I explained the relationships between the flowspec, the filterspec, and the flow. It is important to remember, though, that the Traffic Control API is just that: an API. As such, its job is to identify and prioritize traffic streams, not to create the actual flow.

Creating the flow is the job of the Generic Packet Classifier. As you may recall from the previous section, one of the attributes that was defined within the flowspec was the service type. The service type basically defines the flow’s priority. The Generic Packet Classifier is responsible for examining the service type that has been assigned to a flowspec, and then placing the related packets into a queue that corresponds to the service type. Each flow is placed into a separate queue.
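The classifier's queue placement can be sketched like this. The service type names are illustrative stand-ins, not the exact identifiers Windows uses:

```python
from collections import deque

# One queue per service type (names are illustrative).
SERVICE_TYPES = ("guaranteed", "controlled_load", "best_effort")
queues = {st: deque() for st in SERVICE_TYPES}


def classify(packet: dict, flowspec: dict) -> None:
    """Examine the flowspec's service type and place the packet
    into the corresponding queue."""
    queues[flowspec["service_type"]].append(packet)


classify({"id": 1}, {"service_type": "guaranteed"})
classify({"id": 2}, {"service_type": "best_effort"})
```

Packets themselves carry no priority at this stage; it is the flowspec they are associated with that determines which queue they land in.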

The QoS Packet Scheduler

The third QoS component that you need to be aware of is the QoS Packet Scheduler. To put it simply, the QoS Packet Scheduler’s primary job is traffic shaping. To accomplish this, the packet scheduler retrieves the packets from the various queues, and then marks the packets with a priority and flow rate.

As I explained in the first part of this article series, in order for QoS to work properly, the various network components located between a packet’s source and its destination must be QoS aware. Although these devices need to know how to deal with QoS traffic, they also need to know how to process normal, non-prioritized traffic. To make this possible, QoS uses a technique called marking.

There are actually two types of marking that take place. The QoS Packet Scheduler uses Diffserv marking, which is recognized by layer 3 devices, and 802.1p marking, which is recognized by layer 2 devices.
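To make the layer 3 half of this concrete, here is a sketch of where a Diffserv marking lives: the DSCP value occupies the upper six bits of the IP header's ToS/DS byte. The function is my own illustration; DSCP 46 is the well-known Expedited Forwarding code point:

```python
def ds_field(dscp: int) -> int:
    """Place a 6-bit DSCP value into the 8-bit DS byte.

    The low two bits of the byte are reserved for ECN, so the
    DSCP is shifted left by two.
    """
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit value")
    return dscp << 2


# DSCP 46 (Expedited Forwarding, commonly used for voice traffic)
ef_byte = ds_field(46)
```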

Setting Up the QoS Packet Scheduler

Before I show you how marking works, I need to point out that you will need to set up the QoS Packet Scheduler in order for it to work. In Windows Server 2003, the QoS Packet Scheduler is treated as an optional network component, just like the Client for Microsoft Networks or the TCP/IP protocol. To enable the QoS Packet Scheduler, open the properties sheet for your server’s network connection and select the check box next to QoS Packet Scheduler, as shown in Figure A. If the QoS Packet Scheduler is not on the list, click the Install button and follow the prompts.


Figure A: The QoS Packet Scheduler must be enabled before you can use QoS

Another thing that you need to know about the QoS Packet Scheduler is that in order for it to work properly, your network adapter must support 802.1p marking. To check your network adapter, click the Configure button that’s shown in Figure A, and Windows will display the properties sheet for your network adapter. If you look at the properties sheet’s Advanced tab, you will see the various properties that are supported by your network adapter.

If you look at Figure B, you can see that one of the properties that is listed is 802.1Q / 1P VLAN Tagging. You can also see that this property is disabled by default. To enable 802.1p marking, simply enable this property, and click OK.


Figure B: You must enable 802.1Q/1P VLAN Tagging

You might have noticed in Figure B that the property you enabled is related to VLAN tagging, not to packet marking. The reason for this is that the priority markings are embedded inside VLAN tags. The 802.1Q standard defines VLANs and VLAN tags, and it reserves three bits within a VLAN tag that are to be used to hold a priority code. Unfortunately, the 802.1Q standard never defined what these priority codes should be.

The 802.1P standard was created as a complement to 802.1Q. 802.1P defines the priority markings that can be embedded into a VLAN tag. I will show you how these two standards work together in Part 3.
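As a preview of how the two standards fit together, here is a sketch of the 16-bit Tag Control Information field of an 802.1Q tag: the three 802.1p priority bits sit in its uppermost positions, followed by a one-bit DEI/CFI flag and the 12-bit VLAN ID. This is a minimal illustration of the bit layout, not production code:

```python
def vlan_tci(priority: int, vlan_id: int, dei: int = 0) -> int:
    """Pack a 16-bit 802.1Q Tag Control Information field:
    3-bit 802.1p priority | 1-bit DEI/CFI | 12-bit VLAN ID."""
    if not 0 <= priority <= 7:
        raise ValueError("priority is a 3-bit value")
    if not 0 <= vlan_id <= 4095:
        raise ValueError("VLAN ID is a 12-bit value")
    return (priority << 13) | ((dei & 1) << 12) | vlan_id


# Priority 5 on VLAN 100: priority bits occupy the top of the field.
tci = vlan_tci(priority=5, vlan_id=100)
```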

Conclusion

In this article, I have discussed some of the basic concepts behind Windows Server 2003’s QoS architecture. In Part 3 I will continue the discussion by talking more about the way that the QoS Packet Scheduler marks packets. I will also be discussing the way that QoS works in low bandwidth environments.
