While I was preparing to write this article, my mind drifted to one of my college physics classes. During one lecture, the professor explained that absolute zero is the temperature at which there is absolutely no heat left in an object. Someone in the class asked whether anything actually gets that cold. The professor explained that scientists had come close to producing absolute zero temperatures, but that actually achieving absolute zero was impossible. The reason he gave was that there is heat in the mechanism being used to measure the object’s temperature. He went on to explain that one of the fundamental principles of physics is that you cannot measure an object (its temperature or anything else) without affecting it to at least some small degree.
The reason I bring this up is that I have noticed lately that there seems to be a rather heated debate on various Web sites as to whether it is more appropriate to run the Windows Performance Monitor locally or remotely. There are good arguments on both sides of the issue, but the reason for the debate has to do with the little physics lesson that I just gave you. The Performance Monitor is designed to measure what’s going on inside of a computer, and as the laws of physics dictate, you cannot measure your computer’s performance without affecting it to some degree.
Both sides of the local/remote debate agree that the Performance Monitor’s results are skewed. The debate centers not on preventing the skewing, but on whether the results are less skewed when you run the Performance Monitor locally or remotely.
Before I Begin
Personally, I think it would be shortsighted to tell you that you should always run the Performance Monitor locally, or that you should always run it remotely. The Performance Monitor is a complex tool that can do a lot of different jobs, so it makes more sense to look at the debate at the task level rather than at the tool level. Of course, the Performance Monitor includes hundreds, if not thousands, of counters, and there is no way that I can write about all of them. I will therefore focus my attention on some of the more common performance monitoring tasks. The sections below each focus on a specific aspect of monitoring a system’s performance.
CPU Utilization Monitoring
One of the most common performance monitoring tasks involves watching the computer’s CPU utilization. As you probably know, Windows sees each application or service as an independent process. Each process is made up of one or more threads, and the threads are what the computer’s CPU actually executes.
If you stop and think about it, the Performance Monitor is an application. Like any other application, it must execute threads when it runs. Like it or not, drawing all of those cool graphs consumes some processing power, which makes it look like the CPU is working harder than it really is.
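That overhead is easy to demonstrate. The sketch below is not the Performance Monitor itself, just a minimal Python stand-in for any polling monitor: it samples in a loop for a short interval (the counter-collection and chart-drawing work is simulated with a throwaway calculation) and then reports how much CPU time the loop itself consumed while “measuring” the system.

```python
import time

def run_monitor_loop(duration=0.5, sample_interval=0.05):
    """Poll in a loop, the way a monitoring tool does, and report the
    CPU time the monitoring loop itself consumed along the way."""
    start_wall = time.monotonic()
    start_cpu = time.process_time()  # CPU seconds used by this process
    samples = 0
    while time.monotonic() - start_wall < duration:
        # Simulate the cost of reading a counter and redrawing a chart.
        _ = sum(i * i for i in range(10_000))
        samples += 1
        time.sleep(sample_interval)
    cpu_used = time.process_time() - start_cpu
    return samples, cpu_used

samples, cpu_used = run_monitor_loop()
print(f"{samples} samples collected; the monitor itself used "
      f"{cpu_used:.3f}s of CPU time")
```

Whatever nonzero CPU time this loop reports is, in miniature, the skew the local-versus-remote debate is about: work done by the measuring tool that shows up in the measurement.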
As I said in the beginning, though, nobody debates the fact that the Performance Monitor produces skewed results. The question is whether the results are more accurate when you run the Performance Monitor locally or remotely. Going into this article, I suspected that the results were probably more accurate when the Performance Monitor is run remotely, because you are not consuming CPU cycles by drawing Performance Monitor charts. Rather than just telling you to run the Performance Monitor remotely when measuring CPU utilization, though, I decided to put it to the test.
I decided to run the Performance Monitor for one minute and forty seconds against one of my servers. I performed the tests during a period of minimal activity at 10:30 on a Sunday night. Figure A shows the outcome of the test when it was run locally. Figure B shows the outcome of the same test run remotely.
Figure A: This is the outcome when the CPU utilization test was run locally
Figure B: This is the outcome when the CPU utilization test was run remotely
Granted, this test isn’t exactly scientific, because to gain truly accurate results, the test would have to be run over a long duration. Over the testing period, though, both tests produced roughly the same number of spikes in activity. However, the spikes were much lower when the tests were run remotely, and as a result, the average CPU utilization was also lower. Seeing this tends to confirm my prediction that CPU utilization tests are probably more accurate when run remotely.
Hard Disk Monitoring
There are a whole lot of counters that can be used to measure hard disk utilization. I decided to use the %Disk Time counter for this test, since it reflects the amount of time that the hard disk actually has to work. Going in, my theory was that whether you run the Performance Monitor locally or remotely probably doesn’t make much difference, since the Performance Monitor doesn’t actually write anything to disk when you are passively monitoring disk usage.
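For what it’s worth, %Disk Time is just a rate derived from a cumulative busy-time counter sampled at two points in time. The sketch below shows the arithmetic; the counter readings are hypothetical numbers for illustration, not taken from a real disk.

```python
def percent_disk_time(busy_ms_start, busy_ms_end, interval_ms):
    """%Disk Time: the fraction of the sample interval the disk spent
    busy, derived from two readings of a cumulative busy-time counter."""
    busy_delta = busy_ms_end - busy_ms_start
    return 100.0 * busy_delta / interval_ms

# Hypothetical readings: the disk was busy for 150 ms
# out of a 1,000 ms sample interval.
print(percent_disk_time(42_000, 42_150, 1_000))  # → 15.0
```

A long stretch of readings where the busy-time counter doesn’t move at all is what a flat 0% line on the chart represents.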
I tested the hard disk activity in roughly the same way as I tested CPU usage. The tests were performed during a period of minimal activity, late at night. Figure C shows the results when the test was run locally. Figure D shows the results when the test was run remotely.
Figure C: This is the %Disk Time counter when run locally
Figure D: This is the %Disk Time counter when monitored remotely
In the end, there were a couple of good spikes of disk activity when the test was run remotely, but I don’t think that those spikes were related to the Performance Monitor being run remotely. There was far too much time when the percentage of disk utilization was zero for me to believe that running the test remotely really makes a difference. Of course, the only way to be sure would be to run the test over a long duration under controlled conditions.
Memory Monitoring
When testing the system’s memory, I was initially inclined to look at the Pages/Sec counter, which monitors the frequency at which memory pages are read from or written to disk. However, the previous test showed that there wasn’t much disk activity going on, so I decided to monitor the available bytes of memory instead.
Normally, the available bytes should remain fairly constant unless you are running a memory-hungry application or you are opening or closing applications. Going in, my expectation was that there would be slightly less memory available when the test was run locally, since the Performance Monitor application itself requires some memory. Figure E shows the test results when the test was run locally, and Figure F shows the results when the test was run remotely.
Figure E: This is the Available Bytes of memory as shown by the Performance Monitor when running locally
Figure F: This is the Available Bytes of memory as shown by the Performance Monitor when running remotely
When I actually ran the tests, the results were exactly what I expected. More memory is available to the system when you run the Performance Monitor remotely.
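The arithmetic behind that difference is straightforward. The sketch below summarizes some hypothetical Available Bytes samples the way the Performance Monitor summarizes a chart (average, minimum, and maximum), and shows how an assumed monitor footprint (the 20 MB figure is purely illustrative) pulls the local average down by exactly that amount.

```python
def summarize(samples):
    """Average, minimum, and maximum: the same statistics the
    Performance Monitor displays beneath its chart."""
    return sum(samples) / len(samples), min(samples), max(samples)

# Hypothetical Available Bytes samples (in MB) from a remote run...
remote = [512, 510, 511, 509]
# ...and the same system sampled locally, where the monitor's own
# working set (assumed here to be ~20 MB) is no longer available.
local = [s - 20 for s in remote]

avg_remote, _, _ = summarize(remote)
avg_local, _, _ = summarize(local)
print(f"remote avg: {avg_remote:.1f} MB, local avg: {avg_local:.1f} MB")
```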
Network Usage Monitoring
The last test I performed was to see whether there was a difference in the way that the Performance Monitor reported network traffic based on whether it was being run locally or remotely. Going in, I assumed that the Performance Monitor would show significantly more network traffic when run remotely. For the tests, I measured the Network Interface performance object’s Bytes Total/Sec counter. The results are shown in Figures G and H.
Figure G: This is the server’s total volume of network traffic as reported with the Performance Monitor running locally
Figure H: This is the server’s total volume of network traffic as reported with the Performance Monitor running remotely
If you look at the graphs shown in Figures G and H, it would at first appear as though both tests measured an identical amount of network traffic. However, if you look at the average, minimum, and maximum values, you can see that there was significantly more traffic when the test was run remotely.
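That extra traffic makes sense once you consider how Bytes Total/Sec is produced. Like %Disk Time, it is derived from a cumulative counter: the network interface keeps a running total of bytes sent and received, and the rate is the difference between two readings divided by the sample interval. When you monitor remotely, the counter data itself travels over the wire, inflating that total. The readings in the sketch below are hypothetical.

```python
def bytes_total_per_sec(total_start, total_end, seconds):
    """Bytes Total/Sec: the rate derived from two readings of the
    interface's cumulative sent-plus-received byte counter."""
    return (total_end - total_start) / seconds

# Hypothetical readings taken one second apart.
print(bytes_total_per_sec(1_000_000, 1_125_000, 1.0))  # → 125000.0
```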
Conclusion
The Performance Monitor isn’t perfect. Whether you choose to run it locally or remotely, the results are going to be skewed to some degree. In my opinion, though, the skewed results are usually not a big deal unless you are working on something that requires a high degree of precision. The reason I say this is that your copy of the Performance Monitor produces results that are just as valid as anyone else’s; we are all working with the same limitations. Therefore, when you see documentation from Microsoft that tells you to look for specific Performance Monitor counter values to determine whether some aspect of the system is operating normally, remember that Microsoft’s testing results are just as skewed as yours are.

My point is that it really doesn’t matter that the results are skewed, so long as everyone’s results are skewed in roughly the same way. Granted, every computer is configured differently, so not everyone’s Performance Monitor results will be skewed by exactly the same amounts, but that is what performance monitoring is all about: finding which values are normal for your system and at what threshold the counters represent a problem.
If I were pressed to make a recommendation, though, I would suggest running the Performance Monitor remotely when measuring CPU, hard disk, and memory usage, and running it locally when measuring anything network related.