Monthly Newsletter - June 2014

Welcome to the newsletter by Debra Littlejohn Shinder, MVP. Each month we will bring you interesting and helpful information on the world of Windows Networking. We want to know what *you* are interested in hearing about. Please send your suggestions for future newsletter content to:

1. Planning for Failure

At first glance, you might be scratching your head over the title of this editorial. Who in the world plans for failure? If you’re smart, you do. Because whether we’re talking about the equipment in your datacenter, the software running on your servers, or your own performance as an IT professional, failure at some point in time is not just a distinct possibility; it’s inevitable.

First, let’s talk about the datacenter (or, if your organization is very small, what’s also known as the server room). It would be great if you could just set up all your servers, switches, and routers, flip the “on” switch, and have everything hum along smoothly for the next five years. Of course, if that were the case, there would be no need for humans to maintain things and you would be out of a job. Thank your lucky stars for failure.

Hardware breaks, burns up or otherwise stops working. Hard drives die. Memory goes bad. CPUs succumb to heat or humidity (often after the failure of fans). Cables are damaged by environmental factors, workers, or animals that think wiring is a tasty treat. Both mechanical and electronic components wear out over time. A hardware failure can result in serious consequences. Failure of network hardware reportedly was responsible for 36 percent of mobile Internet outages in Europe in 2012. In 2010, a hardware failure was blamed for a PayPal “service interruption” that impacted many merchants and buyers. Earlier this year, a hardware failure crashed the Commonwealth Bank network in Australia, inconveniencing and frustrating customers.

Because hardware failure will happen, we plan for it. We build redundancy into our systems. We back up hard drives and utilize “hot swap” technologies. We implement failover server clusters. We have replacement switches and routers and NICs and gateway devices ready to go. And if we’ve planned properly, a failure will mean a little extra work for us and perhaps some slight, brief inconvenience for users – but it won’t be a catastrophic event that causes our company to lose business and goodwill and impact its bottom line.

Hardware isn’t the only thing that fails. Software is even more fragile – and usually more difficult to troubleshoot because of its greater complexity. One wrong move, such as installing an application that’s incompatible, can bring an operating system to its knees. A virus or exploit that slips through our protective mechanisms can wreak havoc. Even trying to do the right thing can render a computer unbootable if a “bad” security update or service pack that wasn’t fully tested has unintended consequences.

Thus we also plan for software failure. We create system restore points or snapshot images of our VMs so we can roll the system state back to a time when things were working properly. We have our backup systems to which we can divert the workload and copies of the data that reside elsewhere. We know from experience and accept the fact that failures will occur – usually at the most inconvenient time possible. So we prepare, and we don’t panic when it happens.

Why, then, do most of us expect so much more of ourselves than we do of our hardware and software? I’ve found that many of the most dedicated IT pros are at least a little obsessive-compulsive (like me). With that comes a streak of perfectionism, and perfectionists see failure – especially our own personal failures – not just as part of the process but as an affront to everything we try so hard to be. To miss a deadline, to be unable to complete a project, to exceed the allocated budget, to deliver a job that’s not up to our high standards – even the mere thought of experiencing any of these types of failure can send our stress levels into the stratosphere.

But that’s not good for our health and it’s not good for our work, either. It’s the quickest way to career burnout. Just as an overloaded processor eventually gives up the ghost, an overloaded IT pro who drives him/herself too hard to do everything perfectly in every way is headed for the ultimate failure of total collapse.

Am I suggesting that we give up and settle for mediocrity in our work? Not at all. The people I’m describing couldn’t do that if they tried. What I am saying is that we have to stop expecting to do a perfect job every time. We have to accept that sometimes we’ll fail, and plan for that failure. There are some projects that we won’t be able to pull off, no matter how hard we try, how much overtime we put in, or how talented and skilled we are.

Even more difficult to accept: sometimes we’ll make mistakes even on the easy projects. Even the best of us have our “off” days, times when we’re ill or tired or distracted by personal joys or sorrows. Those are the mistakes we beat ourselves up over, the ones that keep us awake at night thinking about how we should have done things differently. But guilt, like worry, is a useless emotion: the former expends energy on past events that can’t be changed, and the latter expends energy on future failures that might or might not happen.

Planning for failure isn’t the same thing as worrying about it – in fact, knowing that you have a plan in place to deal with it makes for less worry. Planning for failure involves using if/then thinking: If I can’t get this OS upgrade rolled out by the target date, then what do I do? What are the real consequences of missing the deadline? What’s Plan B, and what personnel and resources do I need to implement it?

Having a plan makes the prospect of failure less scary and frees you to focus on getting the job done. Ironically, planning for failure can be an important step toward ensuring your long-term success.

By Debra Littlejohn Shinder, MVP

Quote of the Month - Success is not final, failure is not fatal; it is the courage to continue that counts. (Winston Churchill)

2. Windows Server 2012 Security from End to Edge and Beyond – Order Today!

Windows Server 2012 Security from End to Edge and Beyond

By Thomas Shinder, Debra Littlejohn Shinder and Yuri Diogenes

From architecture to deployment, this book takes you through the steps for securing a Windows Server 2012-based enterprise network in today’s highly mobile, BYOD, cloud-centric computing world. Includes test lab guides for trying out solutions in a non-production environment.

Order your copy of Windows Server 2012 Security from End to Edge and Beyond. You'll be glad you did.

3. Articles of Interest

4. Administrator KB Tip of the Month

Installing Management tools on Server Core

If you are using Server Manager to install the Hyper-V role on a remote server, consider also selecting the Hyper-V Module for Windows PowerShell on the Features page of the Add Roles And Features Wizard so that this module is installed locally on that server. That way, if at some future time you are unable to manage the Hyper-V role on the remote server using either the Hyper-V Manager snap-in or remote Windows PowerShell, you might still be able to establish a Remote Desktop session with the remote server and run Windows PowerShell commands locally on it.
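As a rough sketch, the same result can be achieved from the Windows PowerShell command line instead of the wizard; the server name HV01 below is a placeholder for your own remote server:

```powershell
# Install the Hyper-V role on a remote server and include the management
# tools (the Hyper-V Manager snap-in and the Hyper-V Module for Windows
# PowerShell) on that server; -Restart reboots it if required.
# "HV01" is a hypothetical server name - substitute your own.
Install-WindowsFeature -Name Hyper-V -ComputerName HV01 -IncludeManagementTools -Restart
```

You can confirm the result afterward with Get-WindowsFeature -ComputerName HV01 and checking that the Hyper-V entries show as Installed.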

Alternatively, you might decide to install the Hyper-V role on a Windows Server 2012 instance that has been configured with the Minimal Server Interface installation option, which will allow you to install both the Hyper-V Management snap-in and Hyper-V Module For Windows PowerShell while retaining some of the security and servicing advantages of the Server Core installation option. When the Hyper-V role is installed on a server that has the Minimal Server Interface installation, you can launch the Hyper-V Management snap-in locally on the server by typing virtmgmt.msc at the command prompt. Note that Minimal Server Interface is not available on the standalone Windows Server 2012 Hyper-V product, which has only the Server Core installation option.
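The conversion from Server Core to the Minimal Server Interface can also be sketched in Windows PowerShell. This is an illustrative sequence, and it assumes the needed feature binaries are available locally (otherwise the -Source parameter of Install-WindowsFeature can point to an installation source):

```powershell
# Convert a Server Core installation to the Minimal Server Interface by
# adding the graphical management infrastructure, then add the Hyper-V
# role together with its local management tools. Each step may reboot
# the server because of -Restart.
Install-WindowsFeature -Name Server-Gui-Mgmt-Infra -Restart
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```

After the final restart, typing virtmgmt.msc at the command prompt launches the Hyper-V Manager snap-in locally, as described above.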

For more great admin tips, check out

5. Windows Networking Links of the Month

6. Ask Sgt. Deb


Hi Deb,

I heard that at TechEd 2014 in Houston, Microsoft released a lot of information about new features included in Microsoft Azure. I haven’t had time to find out what these things are yet – I’m hoping you can give me the short course. Thanks! –Amed.


Hi Amed,

Sorry you couldn’t make it over to Houston! I wasn't able to go to TechEd this year either (because I was in Alaska at the time), but Tom was there and he said that it was one of the best Microsoft conferences ever. However, he also said that some of the people who went there this year were commenting that they should rename it from “TechEd 2014” to “Azure 2014”. What I get from that is that there was a ton of new information about Azure coming out during TechEd. Since it seems that Azure is going to be the future of Microsoft, it’s a good idea to stay on top of what’s happening in the Azure world.

From what I can find, here are the key new things that Microsoft announced at TechEd in regard to Azure:

  • Virtual Machines: Integrated Security Extensions including Built-in Anti-Virus Support and Support for Capturing VM images in the portal
  • Networking: ExpressRoute General Availability, Multiple Site-to-Site VPNs, VNET-to-VNET Secure Connectivity, Reserved IPs, Internal Load Balancing
  • Storage: General Availability of Import/Export service and preview of new SMB file sharing support
  • Remote App: Public preview of Remote App Service – run client apps in the cloud
  • API Management: Preview of the new Azure API Management Service
  • Hybrid Connections: Easily integrate Azure Web Sites and Mobile Services with on-premises data+apps (free tier included)
  • Cache: Preview of new Redis Cache Service
  • Store: Support for Enterprise Agreement customers and channel partners

That’s a lot of new stuff! I haven’t had a chance to get into the details of all of these things, but here are a couple of the ones that I find the most interesting:

  • ExpressRoute – this new feature allows organizations to create an MPLS connection to an Azure Virtual Network. Microsoft claims it can support transfer rates of up to 10 Gbps! Wow. I think I want one of these. :)
  • Multiple site-to-site VPNs - it was a little frustrating before that you were limited to a single site-to-site VPN to your Azure Virtual Network. Now you can connect multiple on-premises sites to an Azure Virtual Network. I know this is going to solve a lot of problems.
  • Azure Virtual Network to Azure Virtual Network direct connections – in the past, if you wanted two Azure Virtual Networks to communicate with each other, you had to route those connections through the Internet or through your on-premises VPN gateway. No more; now Azure Virtual Networks will be able to communicate with one another directly through the Microsoft Azure datacenters.

Those are just a few of the things that are new. For more information, check out Scott Guthrie’s blog here.