The Definition of a Small Network
The word small means different things to different people. For example, I consider my own network to be small: I’m running a one-man show with about 20 computers. On the other hand, a Fortune 500 company might consider a subsidiary with a thousand users to have a small network. For the purposes of this article, I will define a small network as one with under a hundred users.
One of the first things that you will need to plan for your small network is the domain structure. At first, this probably sounds like overkill. After all, most small networks are single forest, single domain. The argument could be made that you need to plan for future growth, but a single Windows Server 2003 domain controller can accommodate millions of objects in the directory. Even if you were using an ancient Windows NT 4.0 domain controller, the limit would still be somewhere around 40,000 users. So why is it so important to plan a domain structure for such a small network?
It has to do with administrative control. If your network has fewer than a hundred users, you are probably going to be the network’s only administrator. Geography has a way of changing that, though. For example, imagine that those hundred employees are scattered among three offices in three different parts of the country. Are you still going to try to manage the network for all three offices yourself, or would you prefer to have some help?
Let’s say, for the sake of argument, that you decide you do want some help running the networks in the remote offices because they are so far away. The questions now are how much help you want and how much trust you have in the remote administrators.
These questions are important because you have a couple of options. If you just want the remote administrators to be able to reset passwords, unlock user accounts, and the like, then you are probably best off creating an organizational unit for each remote office, placing each office’s user accounts into the appropriate organizational unit, and delegating the necessary rights over that OU. If, on the other hand, you want to hand over complete control of the remote offices to the remote administrators (while you retain control of the forest), then you are probably better off creating a separate domain for each office.
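As a rough sketch of the first option, the OU-per-office approach can be set up from the command line with the built-in dsadd and dsacls tools. The domain (contoso.com), office name (Boston), and admin account (CONTOSO\bostonadmin) below are all placeholders:

```bat
rem Create an OU for the remote office (contoso.com is a placeholder domain)
dsadd ou "OU=Boston,DC=contoso,DC=com"

rem Delegate password resets on user objects in that OU to the local admin,
rem and let him clear account lockouts by writing the lockoutTime attribute
dsacls "OU=Boston,DC=contoso,DC=com" /I:S /G "CONTOSO\bostonadmin:CA;Reset Password;user"
dsacls "OU=Boston,DC=contoso,DC=com" /I:S /G "CONTOSO\bostonadmin:WP;lockoutTime;user"
```

The Delegation of Control wizard in Active Directory Users and Computers accomplishes the same thing graphically.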
For the sake of argument, let’s assume that you decided to go with a single domain for your network. Regardless of whether there are any remote administrators in the picture or not, you are going to have to make some important design decisions regarding your remote offices. These decisions have to do with what types of servers (if any) you want to place in the remote facilities.
These types of decisions are always a big deal, but they are even more important in small companies because you have to balance the cost of the servers (and their impact on your budget) with the benefit that they will provide.
Let’s pretend that your company has a really tight IT budget (hmm… maybe we aren’t pretending on that part) and that you decide not to put any servers at all in the remote offices. Your network can function like this, but you are completely at the mercy of the speed and reliability of the WAN link between the remote office and the main office. If the WAN link goes down, nobody in the remote office will even be able to log in.
Of course WAN links go down all the time, and having a whole office full of people who can’t log in until the problem is fixed probably isn’t good for business. So let’s say that we are going to put a domain controller in each remote office so that people can log in whether the WAN link is available or not. Does this really solve the problem though? Not really. If anything, it creates some other problems.
First of all, having a locally available domain controller does not guarantee that users will be able to log in (unless we are talking about Windows NT). In an Active Directory environment, users must be able to contact a global catalog server in order to log in. The only user who can log in without access to a global catalog server is the domain Administrator. This problem is easy to fix, though. You can just designate each office’s domain controller to be a global catalog server. This will allow users to log onto the network when the WAN link is down, assuming that the users can communicate with the domain controller.
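Designating a domain controller as a global catalog is normally done in the Active Directory Sites and Services console, but it can also be sketched with dsmod by modifying the DC’s NTDS Settings object. The server name (BRANCHDC1), site name (Boston), and domain (contoso.com) here are placeholders:

```bat
rem Make the branch office DC a global catalog server by flagging its
rem server object in the configuration partition (names are placeholders)
dsmod server "CN=BRANCHDC1,CN=Servers,CN=Boston,CN=Sites,CN=Configuration,DC=contoso,DC=com" -isgc yes
```

Keep in mind that the global catalog takes a little while to build after the flag is set, so don’t pull the WAN plug to test it five minutes later.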
Even if a domain controller is available locally, and the domain controller is designated as a global catalog server, users won’t be able to log in if they can’t communicate with the domain controller. There are a couple of things that can cause this to happen. One reason why users might not be able to communicate with the domain controller is that they don’t have an IP address assigned to their computer. Think about that one for a minute. If the only available DHCP server is in another building and the WAN link goes down, then nobody in the remote office will be able to lease an IP address. Therefore, it’s probably a good idea to have a DHCP server in the remote office.
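For illustration, a basic scope on a branch office DHCP server can be set up with netsh. The server name, addresses, and scope name below are all hypothetical; substitute your own subnet:

```bat
rem Authorize the branch DHCP server in Active Directory
rem (server name and IP address are placeholders)
netsh dhcp add server branchdc1.contoso.com 192.168.2.10

rem Create a scope for the branch subnet and define the address pool
netsh dhcp server 192.168.2.10 add scope 192.168.2.0 255.255.255.0 "Boston LAN"
netsh dhcp server 192.168.2.10 scope 192.168.2.0 add iprange 192.168.2.100 192.168.2.200
```

Because each office is on its own subnet, each office’s DHCP server only hands out addresses for its own scope, so there is no conflict with the main office.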
Another reason why a domain controller might be inaccessible is that Active Directory is completely dependent on DNS. If the DNS server is in the main office and the WAN link goes down, then clients in the remote office may not be able to resolve the name of the local domain controller.
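If your zone is Active Directory-integrated, one way to give the branch office local name resolution is to install the DNS service on the branch domain controller and load the zone from the directory with dnscmd. The server and zone names here are placeholders:

```bat
rem Load the AD-integrated zone on the branch DC's DNS service so that
rem clients can resolve names even when the WAN link is down
dnscmd branchdc1 /ZoneAdd contoso.com /DsPrimary
```

The branch clients’ DHCP scope should then list the local DNS server first, so that name resolution doesn’t depend on the WAN link in the first place.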
So let’s say that you decide to spend some bucks and put a DNS server, a DHCP server, and a domain controller in the remote offices. There are still a couple of issues that you may have to deal with. One issue is excessive replication traffic. Every time the Active Directory is updated in any one of the offices (such as adding a user account or changing a password) the update is propagated across the WAN link to the other domain controllers in the other offices. If the Active Directory is updated frequently, this replication traffic can really put a strain on your bandwidth.
The solution here is to create a separate Active Directory site for each office. Active Directory replication traffic will still need to be sent to the domain controllers in the remote offices, but it can be scheduled and sent in batches rather than constantly flooding the WAN link with replication traffic.
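Sites, subnets, and site link schedules are defined in the Active Directory Sites and Services console. Once they are in place, you can sanity-check the arrangement from the command line with repadmin (part of the Windows Support Tools); the server name below is a placeholder:

```bat
rem Show the branch DC's replication partners and when each partition
rem last replicated successfully
repadmin /showrepl branchdc1.contoso.com

rem Trigger an immediate synchronization if you don't want to wait for
rem the scheduled site link interval after making a change
repadmin /syncall branchdc1.contoso.com
```

If /showrepl reports that the branch DC’s partners are all in the remote site and replication is occurring on your schedule, the site topology is doing its job.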
The other problem that you might run into is availability of data. Assuming that the remote offices have a domain controller, a global catalog server, a DHCP server, and a DNS server, then users in that office will be able to log in even if the WAN link goes down. However, being able to log in doesn’t mean much if the users can’t access their data.
There are a couple of ways around this problem. The appropriate course of action would depend on whether or not data is shared among the various offices. If there is no need to share data between offices, then the best course of action is probably to put a file server in each office and have the users save their data directly to that server. If data does need to be shared among offices, then you are probably best off setting up a DFS server in each office. That way, each office contains a server with a full replica of the company’s data. If a WAN link goes down, users can still access the entire data set. When the WAN link comes back up then any changes that have been made to the data are synchronized with the other DFS servers in the other offices.
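As a rough sketch, links and targets in an existing domain-based DFS root can be managed with the dfscmd tool. The root, link, and server names below are placeholders, and note that dfscmd only builds the namespace; the actual file replication between targets is configured separately (via FRS in the DFS console):

```bat
rem Create a DFS link pointing at the branch file server's share
rem (all names are placeholders)
dfscmd /map \\contoso.com\data\projects \\branchfs1\projects

rem Add the main office server as a second target for the same link,
rem so each office has a local replica of the folder
dfscmd /add \\contoso.com\data\projects \\mainfs1\projects
```

With two targets per link and replication enabled between them, clients are automatically referred to the target in their own site, which is what keeps the data available when the WAN link fails.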
In this article, I have mentioned a lot of things that need to be present in the remote offices so that users can continue to work even if a WAN link goes down. If budget is a concern, you can probably get by with lumping all of these roles into a single server, although it’s usually considered a best practice not to use a domain controller as a file server (for security and performance reasons). You have to do what’s appropriate for your individual company. In a situation like the one described above, I would recommend placing two servers in each remote office: one acting as a domain controller, global catalog, DHCP, and DNS server, and the other acting as a file server (possibly a DFS server).