What's New in Windows Server 2012 Networking? (Part 4)

Published on 18 April 2013 / Last Updated on 18 April 2013

In this article we will take a deeper look into DCB.

If you would like to read the other parts in this article series please go to:


In Part 1 of my ongoing series on What’s New in Windows Server 2012 Networking, I touched briefly on the topic of Data Center Bridging (DCB). It’s also one of the low latency technologies, the rest of which will be discussed in Part 5.

But DCB deserves a little more attention, so I’m taking a little detour this month to take a more detailed look at this feature. Unfortunately, the documentation on the TechNet web site is very incomplete at the time of this writing; note that the planning, deployment, operations and troubleshooting content is all marked “not available.”

Low latency workload technologies

The next new feature on our “What’s New” list is really a whole set of features. Some are new to Server 2012; others were introduced in previous versions of Windows Server and have since been improved. All of them address scenarios in which low latency (that is, reduced delay between the steps of processing a workload) is important. A number of different types of scenarios fit this description.

If you have applications or processes where you need the fastest possible communication between processes and/or between computers, these features can help to reduce network latency that’s caused by many different things, both hardware and software related.

Data Center Bridging (DCB) is one of these low latency features and we’re going to look now into how it can help to ameliorate the effects of those pesky latency issues.

Data Center Bridging (DCB) overview

Data Center Bridging has been around for a while; it’s based on a set of IEEE standards for enhancements to Ethernet that allow physical Ethernet devices to communicate with a SAN or LAN, thus making Ethernet work more reliably in the data center and allowing many different protocols to run over the same physical infrastructure. DCB has also been known as Data Center Ethernet (DCE) and Converged Enhanced Ethernet (CEE).

The relevant IEEE standards are:

  • 802.1Qbb (priority-based flow control, a.k.a. per-priority PAUSE)
  • 802.1Qaz (enhanced transmission selection/bandwidth management)
  • 802.1Qau (congestion notification)
  • Data Center Bridging Capabilities Exchange Protocol (DCBX).

All of these work together to reduce packet loss and make the network more reliable.

With DCB, bandwidth allocation is hardware based and flow control is priority-based. Instead of the operating system having to handle the traffic, it’s handled by a converged network adapter. A converged network adapter (CNA or C-NIC) combines the functions of a traditional NIC with those of an interface to a SAN using Fibre Channel over Ethernet (FCoE), iSCSI, or Remote Direct Memory Access (RDMA) over Converged Ethernet. It includes host bus adapter (HBA) functionality so it can connect to Fibre Channel. Such adapters are offered by a number of vendors, including HP, Dell, QLogic, Broadcom and others.

So what are the advantages of convergence? You no longer have to run your Fibre Channel or iSCSI network separately from your Ethernet network. This saves money and simplifies management. The equipment requires less space in the data center and uses less power; it generates less heat and consequently lowers the need for cooling. You also need less cabling. The number of switches can also be reduced.

DCB and Fibre Channel

Gigabit Ethernet has become very affordable, and the prices for 10GigE are falling. That makes Ethernet a very cost-effective networking fabric. Fibre Channel is expensive in comparison to Ethernet, and no longer has a performance advantage, but companies that already have FC SANs don’t want to scrap that hardware, in which they have already invested considerable money.

Microsoft has also built into Windows Server 2012 the ability to connect to Fibre Channel directly from within Hyper-V virtual machines so you can support virtual workloads with your existing Fibre Channel storage devices. This further helps to extend the usability of your organization’s investments in Fibre Channel. You can find out more about that in the Hyper-V Virtual Fibre Channel Overview on the TechNet website.

With Fibre Channel over Ethernet (FCoE), Fibre Channel frames are encapsulated within Ethernet frames. One problem with Fibre Channel is that it’s “finicky” – it requires a reliable network with very little (ideally no) packet loss. That means there has to be a way to reduce or eliminate the packet loss that normally occurs on Ethernet networks when they’re congested. With DCB, organizations can create an Ethernet-based converged network that can communicate with the FC SAN.
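To make the idea of encapsulation concrete, here is a much-simplified sketch in Python. It wraps a placeholder Fibre Channel frame in an ordinary Ethernet II frame using the FCoE EtherType (0x8906); a real FCoE encapsulation also adds a version field plus start-of-frame and end-of-frame delimiters, which are omitted here, and the MAC addresses are invented for illustration.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE

def ethernet_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Build a basic Ethernet II frame: 6-byte destination MAC,
    6-byte source MAC, 2-byte EtherType, then the payload."""
    return dst_mac + src_mac + struct.pack("!H", ethertype) + payload

# Stand-in for an encapsulated Fibre Channel frame (not valid FC data).
fc_frame = b"\x00" * 36

frame = ethernet_frame(b"\x0e\xfc\x00\x00\x00\x01",  # example destination MAC
                       b"\x02\x00\x00\x00\x00\x02",  # example source MAC
                       FCOE_ETHERTYPE, fc_frame)
```

On the wire, a switch that sees EtherType 0x8906 knows the payload is a Fibre Channel frame and must not be dropped casually, which is exactly why FCoE depends on the lossless behavior DCB provides.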


What about iSCSI? This is a protocol for carrying SCSI traffic over TCP/IP networks, generally for shared storage. It usually costs less than Fibre Channel, so it’s especially attractive to organizations that need expanded storage on a tight budget. Deployment is generally not quite as complex, either – which can save money indirectly in terms of administrative overhead and/or consulting fees.

DCB can make an iSCSI deployment more reliable and predictable, and it enhances performance through its bandwidth allocation capabilities. DCB makes it easier to upgrade to 10GigE, converge the network traffic, and maintain performance for applications, and it’s very scalable, from small business to enterprise level. You’ll need a NIC or iSCSI host bus adapter that supports iSCSI over DCB, a storage array that supports it, and a DCB-capable Ethernet switch.


RDMA over Converged Ethernet (RoCE) is a link layer protocol by which two hosts can communicate with remote direct memory access over an Ethernet network, for shared storage or cluster computing. RDMA can also be done over an InfiniBand network, a switched fabric that, like Fibre Channel, offers high bandwidth and low latency, but InfiniBand is not routable and doesn’t scale well. It’s also less familiar to most IT pros than Ethernet, which they’ve been working with for decades. RoCE is used primarily in High Performance Computing (HPC) environments (supercomputing).

How DCB works

With DCB, the different types of traffic can run across the same physical media and you can manage them according to priorities. DCB produces a “lossless” environment; that is, no data frames are lost thanks to the use of flow control. This increases overall performance because there is no longer a need to retransmit lost frames, which can slow everything down.

As mentioned above, the four protocols that make up DCB create an Ethernet infrastructure that’s suitable for Fibre Channel and iSCSI:

Priority-based flow control, as its name implies, allows you to implement flow control on a per-priority basis. It works at the MAC Control sublayer, adding fields to the PAUSE frame so that transmission of frames with specific priorities can be inhibited while traffic in other priorities continues to flow.
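A small sketch shows what those additional fields look like. Per 802.1Qbb, a PFC frame is a MAC Control frame with opcode 0x0101 whose payload carries a priority-enable vector (one bit per priority 0–7) and eight 16-bit pause times measured in 512-bit-time quanta; classic PAUSE, by contrast, uses opcode 0x0001 and stops everything.

```python
import struct

PFC_OPCODE = 0x0101  # 802.1Qbb priority-based flow control opcode

def pfc_payload(pause_quanta: dict) -> bytes:
    """Build the MAC Control payload of a PFC frame: opcode,
    priority-enable vector, then eight 16-bit pause times."""
    enable_vector = 0
    times = [0] * 8
    for prio, quanta in pause_quanta.items():
        enable_vector |= 1 << prio   # mark this priority as paused
        times[prio] = quanta         # pause duration in 512-bit-time quanta
    return struct.pack("!HH8H", PFC_OPCODE, enable_vector, *times)

# Pause only priority 3 (often used for storage traffic) for the maximum
# time; the other seven priorities keep flowing.
payload = pfc_payload({3: 0xFFFF})
```

The key point is the selectivity: a congested receiver can throttle just the lossless storage class instead of choking the whole link.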

Enhanced Transmission Selection lets you allocate bandwidth according to priorities. It adds a new field, the Priority Group ID (PGID). Priorities are assigned to a PGID, and you can allocate bandwidth, on a percentage basis, to each PGID. This guarantees each priority group its share of the link under contention, while a group can still borrow bandwidth that other groups aren’t using.
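The arithmetic is simple percentage division of the link rate. Here is an illustrative sketch with a hypothetical three-way split of a 10GigE link (the group names and percentages are made up for the example):

```python
def ets_allocation(link_gbps: float, pgid_shares: dict) -> dict:
    """Split link bandwidth among priority groups by their ETS percentages.
    Under contention each group is guaranteed its share; when a group is
    idle, others may borrow its unused bandwidth."""
    assert sum(pgid_shares.values()) == 100, "ETS shares must total 100%"
    return {pgid: link_gbps * pct / 100 for pgid, pct in pgid_shares.items()}

# Hypothetical split: storage (FCoE), cluster heartbeat, and general LAN.
shares = {"FCoE": 50, "Cluster": 20, "LAN": 30}
guaranteed = ets_allocation(10.0, shares)
# FCoE is guaranteed 5 Gbps, Cluster 2 Gbps, LAN 3 Gbps under contention.
```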

Congestion notification transmits congestion information and derives a measure of the degree of network congestion; it uses algorithms to reduce transmission rates upstream and so avert secondary bottlenecks. Two algorithms are involved: Congestion Point Dynamics (CP) and Reaction Point Dynamics (RP), the latter also known as a rate limiter.
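As a rough intuition for how those two algorithms cooperate, here is a much-simplified sketch: the congestion point scores a switch queue by how far it sits above a setpoint and how fast it is growing, and the reaction point cuts its sending rate in proportion to negative feedback. The gain constant, weights and the 50% cap are illustrative assumptions, not values from the 802.1Qau specification, and the recovery phase that raises the rate back up is not modeled.

```python
GD = 1 / 128  # hypothetical decrease gain; real values are implementation-tuned

def congestion_feedback(qlen: int, qlen_old: int, q_eq: int, w: int = 2) -> int:
    """Congestion point (CP): combine queue offset from the setpoint q_eq
    with queue growth; a negative result signals congestion."""
    q_off = qlen - q_eq
    q_delta = qlen - qlen_old
    return -(q_off + w * q_delta)

def react(rate_gbps: float, fb: int) -> float:
    """Reaction point (RP, the rate limiter): cut the sending rate
    multiplicatively on congestion feedback; otherwise leave it alone."""
    if fb < 0:
        return rate_gbps * (1 - min(GD * -fb, 0.5))  # cap the cut at 50%
    return rate_gbps

fb = congestion_feedback(qlen=40, qlen_old=30, q_eq=20)  # queue high and growing
rate = react(10.0, fb)  # sender slows down before the queue overflows
```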

Data Center Bridging Capabilities Exchange (DCBX) exchanges configuration data between peers and can also detect misconfigurations.

You can find much more detailed explanations of how each of these protocols works in this Data Center Bridging Tutorial from the University of New Hampshire.

Microsoft’s DCB solution

There are plenty of DCB solutions out there from different vendors. Windows Server’s implementation of DCB provides some distinct advantages. First, of course, is the fact that it’s a part of the Windows Server operating system and so you don’t have to buy a separate solution. Since it’s based on the IEEE standards, you get interoperability that you might not get with all proprietary DCB solutions. Note that you will need DCB-enabled Ethernet NICs and DCB-capable switches on the network in order to deploy Windows Server 2012 DCB.

Enabling DCB in Server 2012

To enable Data Center Bridging through the GUI on a Windows Server 2012 machine with a DCB-capable network adapter, open Server Manager and perform the following steps:

  1. Click Add roles and features.
  2. Select Role-based or feature-based installation.
  3. In the Select destination server dialog box, click Select a server from the server pool.
  4. In the Select server roles dialog box, click Next.
  5. In the Features list of the Select features dialog box, check the checkbox next to Data Center Bridging.
  6. Click Next, and then click Install to complete the wizard.

As mentioned, Microsoft has been slow to produce documentation for Windows Server 2012 DCB, but we hope that will be remedied in the near future.

Meanwhile, PowerShell fans will also be happy to know that the Windows Server 2012 implementation of DCB can be installed and configured via PowerShell cmdlets. You can find more information on that in the DCB Windows PowerShell User Scripting Guide on the TechNet website.


Data Center Bridging is an important technology for businesses that want to increase performance, decrease administrative overhead and leverage existing Fibre Channel or iSCSI mass storage assets. Microsoft’s implementation of DCB in Windows Server 2012 will make it easier and less expensive to deploy on Windows-based networks.

Next time, we’ll look at the rest of the low latency workload technologies, which include Data Center Transmission Control Protocol (DCTCP), Kernel Mode RDMA (kRDMA), NIC teaming, Network Direct and more.
