WindowsNetworking.com Monthly Newsletter of November 2009 Sponsored by: SolarWinds
Welcome to the WindowsNetworking.com newsletter by Thomas W Shinder MD, MVP. Each month we will bring you interesting and helpful information on the world of Windows Networking. We want to know what *you* are interested in hearing about. Please send your suggestions for future newsletter content to: firstname.lastname@example.org
The media is awash with stories on cloud computing - you get the sense that the cloud is not just an option, but that it will be your only option in the future. While the media spinners might like to proffer that opinion to rattle the ranks, I do see a future where cloud-based services will make sense and provide a nice supplement to what you currently have on premises. For enterprise IT groups, the cloud will be a place to offload duties that mostly waste your time, so you can spend more time on projects that bolster your company's bottom line. Those projects are generally more interesting and have a better chance of earning you a raise in the future.
A journey always begins with the first step. If you have not started your journey into cloud computing yet, you might be wondering where to start. Some people are considering putting their Exchange Servers in the cloud (or e-mail in general, be it Exchange or some other service), some are thinking of putting SharePoint in the cloud, others wonder whether it is worthwhile to put their CRM in the cloud, and some are even considering putting their entire office suite of applications in the cloud. All of these options are available now.
However, not all of them are wise options. Online office applications, such as Google Apps, provide a very poor end-user experience - and your users are going to be very unhappy if they are stuck with them. Online e-mail might make more sense - but there are security, confidentiality, and business continuity concerns. CRM in the cloud is very popular, so you might want to think about that, but the same security issue - large amounts of corporate data at rest on someone else's network - might make you a bit queasy.
One place where you might want to seriously begin your trek into cloud computing is offloading your inbound and outbound e-mail filtering to the cloud. Doing so takes the heat off your network connection: when incoming mail for your domains hits the cloud filtering infrastructure, it does not consume bandwidth on your company's Internet connection. Your Internet connection only has to carry the much smaller volume of cleaned mail coming into your mail server. You also do not have to worry about putting together edge servers for inbound and outbound spam and malware detection - that all happens in the cloud.
One of the best inbound and outbound e-mail filtering solutions I have run into is Forefront Online Protection for Exchange (FOPE). This solution replaces your on-premises anti-spam, anti-malware, and policy compliance solution for mail coming from the outside and for mail leaving your network. Using the administrator console, you can set up your domains, enter your users, and away you go! If you do not do any other special configuration, you will get 98% spam protection, 100% protection against known malware, and fewer than one false positive per 250,000 e-mail messages. Pretty nice! Of course, you can customize the configuration further and fine-tune your anti-spam and anti-malware policies to meet any specific requirements in your organization.
The cost of the service varies, and many companies already have the rights to use Forefront Online Protection for Exchange and they don't even know it. Depending on your current Forefront or Exchange licensing arrangement, you might be able to start using FOPE right away at no additional cost. And even if you do not already have the licenses, you might find that it's less expensive to run your edge connection filtering, anti-spam, anti-virus, and e-mail policy enforcement solution in the cloud, where you do not have to purchase the software or the hardware to make it work on-premises.
I have tried it and liked it. There are other solutions, such as Google's Postini-based service, but the FOPE solution is more robust and more secure, from what I can tell. Perhaps the most important advantage that FOPE has over the Google option is that FOPE will spool your incoming mail for up to 5 days, which is useful in the event that your incoming SMTP hub fails. In contrast, because of the proxy nature of the Google solution, that mail is simply lost, and no one is ever informed that the incoming connection to your site failed.
Quote of the Month - "If only we'd stop trying to be happy we could have a pretty good time." - Edith Wharton
3. WindowsNetworking.com Articles of Interest
When to Use GPT Disks
If your DNS server is running Windows Server Core, you can configure zone transfers on your DNS server from the command line by using the Dnscmd command. AD DS-integrated zones store their DNS information in AD DS and replicate it between domain controllers by using AD DS directory replication. Standard zones, by contrast, store their information in zone files and replicate it between DNS servers by a process called a zone transfer. When a zone transfer occurs, a primary DNS server for the zone provides the zone information to the secondary DNS server. In this situation, the primary DNS server is called the master DNS server for the zone.
The master server is specified when you create a secondary zone. However, you can specify a different master server afterwards by using Dnscmd. For example, if you are changing the master DNS server for the hr.fabrikam.com zone from SEA-SC2 (172.16.11.31) to SEA-SC4 (172.16.11.33), you can use the following command to configure the new master on SEA-SC1 (the secondary DNS server for the zone):
dnscmd SEA-SC1 /zoneresetmasters hr.fabrikam.com 172.16.11.33
Before the secondary DNS server can load the zone information from the master DNS server for the zone, you must configure the master server to allow zone transfers with the secondary server. For example, to configure SEA-SC4 as the master server for the hr.fabrikam.com zone so that it allows zone transfers only to SEA-SC1 (the secondary server for the zone), do this:
dnscmd SEA-SC4 /zoneresetsecondaries hr.fabrikam.com /securelist 172.16.11.30
Zone transfers take place automatically according to their default schedule, but you can also use Dnscmd to force a secondary server to initiate a zone transfer with its master server. For example, to force SEA-SC1 (the secondary server for the hr.fabrikam.com zone) to update its zone information from SEA-SC4 (the master server for the zone), do this:
dnscmd SEA-SC1 /zonerefresh hr.fabrikam.com
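To confirm that changes like these took effect, you can also dump a zone's settings with Dnscmd. A quick sketch, assuming the same SEA-SC1 server and hr.fabrikam.com zone used above - the /zoneinfo output includes the zone type and the list of configured masters, so you can verify that the secondary is now pointing at 172.16.11.33:

dnscmd SEA-SC1 /zoneinfo hr.fabrikam.com

Running the same command against SEA-SC4 lets you double-check the secondary list you configured with /zoneresetsecondaries.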
You can find this tip, along with the entire database of admin tips, on WindowsNetworking.com.
For your enterprise configurations, you probably already have a detailed backup and restore plan. But where does that leave small and midsized businesses, as well as the computers in your home? How do you back up those computers? In the past, many of us looked to third-party solutions because of the limitations of the built-in Windows client backup routines.
Windows 7 changes the game with a much-improved backup application. Not only can you schedule multiple backup jobs, but you can now back up your entire system image. What is even better is that the system image backup is not just a "one time" deal, where you back up the system image once and have only that one copy to go back to if you want to restore. With Windows 7, you can back up the system image and then schedule regular backups after that, so that changes to the original image are backed up as well. This allows you to restore to one of several image backups, depending on how far back in time you want to go.
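If you prefer the command line, you can kick off the same kind of system image backup with the built-in Wbadmin tool. A sketch, assuming you have an external backup drive mounted as E: - the -allCritical switch includes every volume required to restore the system, and -quiet suppresses the confirmation prompt:

wbadmin start backup -backupTarget:E: -allCritical -quiet

Run this from an elevated command prompt; for recurring image backups, the scheduling is still easiest to set up through the Backup and Restore control panel.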
Check out our article on Windows Backup on WindowsNetworking.com for more information.
I went to TechEd a couple of weeks ago and it was a great experience! I was especially interested in the Forefront Edge products - which include UAG 2010 and TMG 2010. I have had some experience with ISA Server in the past, but I never worked with UAG's predecessor, IAG 2007. From what the speakers seemed to be saying during the conference, all the inbound stuff would now go through UAG and all the outbound connections would go through TMG. Is that right? While I think UAG looks interesting, I would like to use TMG to publish sites like I used to do with ISA. I get the impression that's not possible now. Can you clarify things for me?
Thanks! - Dave M.
Good question! I've heard something similar from some other people, although I did not have the chance to hear the actual presentations. From what I understand, it was said that UAG is for inbound only and TMG is for outbound only.
I would like to point out that, if you are acquainted with ISA, you will be able to do all the same publishing with the new TMG that you used to do with ISA. While it is true that no major new investments are being made in Web and Server Publishing for TMG, the Web and Server Publishing features that were included in ISA are also included with TMG. So if you want the same basic functionality that was available in ISA 2006, you will be able to do that kind of publishing with TMG.
However, if you want to stay ahead of the curve in terms of controlling and securing remote access connections (be they Web, Server, or VPN connections), then you will want to start looking at UAG moving forward. All major new investments in inbound access will be focused on UAG - that is why I think it is a good idea to start looking at UAG now.
Of course, UAG is in beta now and TMG is RTM - so you might want to move up to TMG now, take advantage of the existing Web and Server Publishing as well as the advanced SSTP configuration, and then look at moving those features to UAG sometime next year. Then you can dedicate your TMG to outbound access only. The truth is, in most deployments of ISA (and now TMG), it has always been a good idea to separate inbound and outbound access duties - there are management as well as performance advantages to doing so.