If you would like to read the other parts in this article series please go to:
- What's New in Windows Server 2012 Networking? (Part 1)
- What's New in Windows Server 2012 Networking? (Part 3)
- What's New in Windows Server 2012 Networking? (Part 4)
- What's New in Windows Server 2012 Networking? (Part 5)
- What's New in Windows Server 2012 Networking? (Part 6)
If you’re an “oldie but goodie,” you probably remember when this site was called “World of Windows Networking,” or WOWN. The site has changed quite a bit since those days, but there are still a lot of good articles dedicated to VPN, TCP/IP, WINS, DNS, Windows routing and the Routing and Remote Access Service, and all the other networking components of the Windows operating system. I try to build on that legacy by keeping my articles as close to the original spirit of WOWN as possible, and it’s in that spirit that I’m writing this article series about what’s new and cool in Windows Server 2012 networking. In Part 1 of the series, we talked about new and improved features in 802.1x authenticated wired and wireless access, BranchCache, Data Center Bridging (DCB), DNS and DHCP.
As a quick refresher, here’s a list of some of the categories that offer new features for Windows Server 2012 networking:
- 802.1x Authenticated Wired and Wireless Access
- Data Center Bridging (DCB)
- Domain Name System (DNS)
- Hyper-V network virtualization
- IP Address Management (IPAM)
- Low Latency Workloads technologies
- Network Load Balancing
- Network Policy and Access Services
- NIC Teaming
- Windows QoS
- DirectAccess and Unified RRAS
- Windows Firewall with Advanced Security
This time we’ll cover Hyper-V network virtualization. Let’s get started.
Hyper-V Network Virtualization
As you probably know, Microsoft has gone “all in” on the cloud: it drives most of the innovation in the Windows Server 2012 OS today, and that will most likely hold true for future updates to the Windows Server operating system for some time to come. The philosophy now seems to be: if it’s not about the cloud, forget about it. We’ve discussed the characteristics of a cloud (public or private) service in the past. But what is the cloud from an operating system viewpoint? That’s something we need to understand before discussing the new network virtualization feature.
One perspective on what the operating system has to offer the cloud is the ability to abstract resources. A fully enabled cloud would have all the core compute, networking and storage components completely abstracted from the workloads that run on it, enabling new scenarios for workload mobility. The reason is that the workloads don’t “care” which storage devices they’re connected to, they don’t “care” about the network they’re connected to, and they don’t “care” which compute host they run on.
Think about how this abstracted cloud world differs from the world we have today. Most of our applications are not cloud optimized. What does that mean? It means that these applications are what we call “stateful” applications. Network state binds these applications to a particular network and if they are moved off that network, they will fail to run correctly. Similarly, compute state binds these workloads to a particular server or to a particular server in a cluster. Storage state binds the workloads to a particular storage array. If any of these or other states are lost, the workload won’t work. That is how the traditional datacenter works.
In contrast, with the cloud, the applications are written so that they maintain virtually no state; hence they are referred to as “stateless” applications. These application workloads can be moved dynamically from machine to machine, from cluster to cluster, from network to network and from storage target to storage target, and they will continue to work because all of these infrastructure resources have been abstracted and the workload only needs to access the abstraction. It doesn’t care about the actual networking, compute and storage components that lie underneath.
If you’re as old as I am, you probably recall when the HAL in Windows NT was a big deal. HAL is the “Hardware Abstraction Layer”. What was so special about HAL? First, let’s look at a definition of the HAL from the article “Windows NT Hardware Abstraction Layer (HAL)”:
“The Windows NT hardware abstraction layer (HAL) refers to a layer of software that deals directly with your computer hardware. Because the HAL operates at a level between the hardware and the Windows NT executive services, applications and device drivers need not be aware of any hardware-specific information. The HAL provides routines that enable a single device driver to support a device on different hardware platforms, making device driver development much easier. It hides hardware dependent details such as I/O interfaces, interrupt controllers, and multiprocessor communication mechanisms. Applications and device drivers are no longer allowed to deal with hardware directly and must make calls to HAL routines to determine hardware specific information. Thus, through the filter provided by the HAL, different hardware configurations can be accessed in the same manner”
Early computer systems didn’t have hardware abstraction. As the definition above makes clear, the HAL makes it possible for multiple applications to take advantage of device drivers that provide an abstraction those applications can call into. The HAL acts as the “go-between” that allows the motherboard and connected devices to receive instructions from higher-level languages. This greatly simplified application development and deployment, and it enabled Windows NT to gain the strong foothold in the data center that it subsequently did. Note that other operating systems, such as OS X, Linux, BSD and Solaris, also have hardware abstraction layers, although they may not call them HALs.
What the HAL did for applications that ran on a single server and the hardware on that server, “cloud” will do for the data center. Essentially, Windows Server 2012 aims to be the HAL for the data center by extending the abstraction to the entirety of networking, compute and storage.
Virtualization and workload mobility
All of that leads to our subject for today: Windows Server 2012 Hyper-V Network Virtualization. Hyper-V Network Virtualization enables you to abstract the networking component from the virtual machines that run in your cloud data centers. When we think about networking, what is the “state” that we are most concerned about? If you think about it, you’ll probably agree that it’s the IP addressing information. This includes the device’s IP address, subnet mask, default gateway, DNS server settings, maybe WINS settings, and maybe custom metrics. All of these are stateful, and changing any of them could have a deleterious effect on the workloads that run on that virtual machine.
This has important implications when it comes to workload mobility. If you are tied to this networking state, it’s going to be hard to move workloads from one site to another, and that really makes it difficult to realize the benefits of Hybrid IT, where some of your data center is located on premises and other parts might be hosted by a cloud service provider that is providing services for multiple organizations.
What you want to be able to do is make it possible to assign IP addressing information to your workloads when they are on premises, and then be able to move them automatically to a cloud service provider when and if you need to, and then be able to move them back to your on-premises data center if and when you need to do that. If you can also wrap this all into an automated process, then you really start to see the benefits of cloud computing.
You might think that keeping the same IP addressing information when you move your resources to a cloud service provider wouldn’t be all that difficult, and you’re right; in theory it’s not. In practice, however, it is difficult for the provider, who is running all the virtual machines on a shared network infrastructure that must accommodate every consumer of the service. If you want to keep your IP addressing information on your mobile workloads, the provider has to accommodate your addressing scheme. The problem is that the provider’s other customers want to keep their IP addressing information too, which can lead to two or more customers wanting to use the same addresses on the provider’s network. If two or more customers want to use network ID 192.168.1.0/24, for example, there’s obviously going to be a problem.
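The collision described above is easy to demonstrate. Here’s a minimal Python sketch (tenant names and subnets are made up for illustration) using the standard `ipaddress` module to show why two tenants bringing the same subnet to a flat, shared network is a problem:

```python
import ipaddress

# Two hypothetical tenants each bring their own on-premises
# subnet to the hoster's shared network (addresses are illustrative).
tenant_a = ipaddress.ip_network("192.168.1.0/24")
tenant_b = ipaddress.ip_network("192.168.1.0/24")

# On a flat, shared network the two address spaces collide,
# so routing between the hoster and either tenant is ambiguous.
print(tenant_a.overlaps(tenant_b))  # True

# Without network virtualization, the hoster's only real options
# are renumbering one tenant or physically isolating their traffic.
```

The `overlaps()` check is exactly the test a hoster would fail when onboarding a second tenant with the same address plan.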
Hyper-V Network virtualization to the rescue
Hyper-V Network Virtualization solves this problem by allowing multiple users of the cloud networking infrastructure to use the same IP addresses, network IDs and DNS infrastructures they use on their on-premises networks. With Hyper-V Network Virtualization, you can assign a subnet the network ID 192.168.2.0/24 and move that subnet to a cloud service provider’s (hoster’s) network, and if another customer on the hoster’s network wants to use the same network ID, no problem: Hyper-V Network Virtualization takes care of that for you.
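Conceptually, it does this by keeping each tenant’s “customer addresses” (the IP addresses the VMs see) separate from the “provider addresses” actually routed on the physical network, with a per-tenant virtual subnet ID keeping identical customer addresses apart. The following Python sketch models that lookup; the VSIDs, addresses and function names are illustrative, not actual Hyper-V APIs:

```python
# Conceptual model of Hyper-V Network Virtualization lookup records:
# each customer address (CA) is keyed by a virtual subnet ID (VSID),
# so two tenants can use the same CA without colliding on the
# provider network. All values below are invented for illustration.
lookup = {
    # (VSID, customer address) -> provider address (PA)
    (5001, "192.168.1.10"): "10.0.0.21",   # tenant A's VM
    (6001, "192.168.1.10"): "10.0.0.34",   # tenant B's VM: same CA, no clash
}

def resolve(vsid: int, customer_address: str) -> str:
    """Map a tenant's CA to the PA that carries its traffic."""
    return lookup[(vsid, customer_address)]

print(resolve(5001, "192.168.1.10"))  # -> 10.0.0.21
print(resolve(6001, "192.168.1.10"))  # -> 10.0.0.34
```

Because the key includes the virtual subnet ID, the “same” address resolves to a different physical endpoint for each tenant, which is the essence of how overlapping address spaces can coexist on one shared infrastructure.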
The scenario is not limited to hoster networks, either. Maybe you’d like to treat one of your corporate data centers as a secondary or tertiary data center and be able to move subnets around like you do when moving them to a hoster network. Again, no problem! Hyper-V Network Virtualization will do that for you.
At this point, you might be wondering: what’s the catch? Well, Hyper-V Network Virtualization is difficult to configure out of the box. For it to really work in practice, you are going to need to deploy System Center Virtual Machine Manager 2012. SCVMM will do the heavy lifting so that you don’t get lost in a morass of PowerShell scripts. I suspect that in the future Microsoft will make Hyper-V Network Virtualization easier to configure, so that network virtualization becomes essentially “push button” and doesn’t require that you become a programmer or earn a Ph.D. in PowerShell.
We spend a lot of our time these days talking about the cloud, and abstraction is a key feature of cloud computing. When we are able to abstract network, compute and storage completely, we will come very close to fulfilling the promise of cloud computing. Microsoft has taken a big step in that direction by introducing Hyper-V Network Virtualization. In this article, I described how you can use Hyper-V Network Virtualization and the benefits you can gain from it. Next time, we’ll move down the list of new and cool networking features in Windows Server 2012 to take a look at IP Address Management (IPAM). See you then! –Deb.