When a volume on a Windows Server is found to be corrupted, NTFS schedules a Chkdsk operation for the next reboot; this is done by invoking Autochk.exe during startup. From personal experience you know that reboots occur for many reasons, such as Windows updates, and a scheduled Autochk causes an inadvertent delay. Although it is highly recommended not to delay fixing dirty volumes, you may need to perform a quick server reboot and fix the volume manually after the system is up and running. Server administrators can avoid delaying the boot-up process by turning off checking of data volumes at reboot time. This is done with the chkntfs command, which lets you exclude specific volumes from the boot-time check. Note that the setting is persistent across reboots, so you need to invoke the command again to restore the default behavior. For the full list of options, type chkntfs /?; a short sketch of the typical commands appears at the end of this post.

To improve Chkdsk performance, Microsoft updated the tool with a better caching mechanism that handles larger blocks. By caching larger blocks of the disk in RAM, Chkdsk execution time is reduced; the cache also reduces the need to re-read data from the disk, which has a positive impact on I/O time. As you can appreciate, the benefits gained from the new caching mechanism come at the cost of increased memory consumption, so a server that is tight on memory may see no improvement in Chkdsk execution times.

You may be under the impression that the bigger the volume, the longer the execution time, but reviewing Microsoft's benchmarks shows that volume size is the least significant factor. The number of files on the volume has the biggest impact on Chkdsk execution time, with available memory second; in fact, the results show that volume size has practically no effect on the execution time of Chkdsk. The same results show that Windows Server 2008 R2 Chkdsk is faster than the Ch
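As a quick reference, here is a minimal sketch of the commands discussed above, assuming D: is the dirty data volume on your server (adjust the drive letter as needed):

rem Check whether the volume's dirty bit is set
fsutil dirty query d:

rem Exclude D: from the Autochk check at boot time (the setting persists across reboots)
chkntfs /x d:

rem Restore the default boot-time checking behavior for all volumes
chkntfs /d

rem Once the server is back up, repair the volume manually
chkdsk d: /f

Keep in mind that excluding a volume only postpones the repair; the dirty bit remains set until Chkdsk actually runs against that volume.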
The new look and feel of IE 9, and the other benefits I was enjoying, were cut short as I started to encounter various problems during my day-to-day work. Many web applications are not fully compatible with the latest HTML standards, so I had to switch IE 9 to Compatibility View many times and, in some cases, even to software rendering mode. The major setback is with web publishing software such as WordPress, where some controls in the admin console do not function and prevent you from performing the most basic tasks. For example, in WordPress the control to insert, edit or remove a hyperlink is completely dead while writing content, and basic text formatting in text boxes turned out to be impossible.

Switching IE 9 to Compatibility View may help you avoid some issues with content publishing systems. You can find Compatibility View in the Tools menu, but you need to display the Tools menu first, as it is hidden by default. To show the command bar, right-click the topmost row next to the website tabs and select Command bar; then, from the Tools menu, select Compatibility View.

To enable software rendering mode, go to Internet Options in the Tools menu and, on the Advanced tab, check the "Use software rendering instead of GPU rendering" box under the Accelerated graphics section. This option may display graphics that were previously failing to load or causing rendering problems. Running IE 9 in software rendering mode may result in performance degradation, so Microsoft recommends installing the latest video driver that supports GPU hardware acceleration and switching back to IE 9's native GPU hardware acceleration.
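This is an assumption on my part rather than something taken from the IE 9 documentation: the Advanced-tab checkbox is commonly reported to correspond to a per-user registry value, which can be handy if you need to flip the setting on several machines. Verify it on a test machine before relying on it:

rem Assumed mapping of "Use software rendering instead of GPU rendering" (unverified; test first)
reg add "HKCU\Software\Microsoft\Internet Explorer\Main" /v UseSWRender /t REG_DWORD /d 1 /f

rem Set the value back to 0 to return to GPU hardware acceleration, then restart IE 9
reg add "HKCU\Software\Microsoft\Internet Explorer\Main" /v UseSWRender /t REG_DWORD /d 0 /f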
Network Monitor is a free tool available from Microsoft. You can capture data using either the graphical Network Monitor or the command-line NMCap tool; analysis of the captured data must be done through the graphical interface. As network traffic is abundant, especially on busy servers, you need to use filters to reduce the number of packets collected and to drop packets not related to the application you are examining.

This blog post is about a typical troubleshooting scenario using NMCap, where we use a DNS capture filter to find out what is breaking our DNS test environment. We will capture DNS traffic from a client workstation in a domain while pinging valid and nonexistent external web servers, and also when our DNS services are down! The Network Monitor tool is installed on a domain controller which happens to be the DNS server as well. For these tests we will use the following syntax:

NMCap /network * /capture "DNS" /StopWhen /TimeAfter 2 min /file DNS.cap

Here we save the captured data in a file called DNS.cap located in C:\Program Files\Microsoft Network Monitor\, and only DNS-related traffic is collected for a period of 2 minutes. Now, from the client workstation we ping a valid URL such as windowsecurity.com several times, or use the ping -t option. You can also use the command nslookup www.windowsecurity.com. After the 2-minute period, go to Network Monitor and open DNS.cap. You should be able to verify the complete path of the DNS requests from the client to the DNS server to the gateway and vice versa, as shown below. The request completed successfully, which means that the client can resolve domain names without any problems. There is no need to go into much detail; it is sufficient to select each frame and verify its DNS query flag! As shown above, the first frame represents the DNS request from the client (192.168.100.2) to the DNS server (192.168.100.2), ending with a success query flag. The s
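To recap, the whole test run boils down to a handful of commands; the install path and host name below are simply the ones used in this lab setup, so substitute your own:

rem On the DNS server / domain controller: capture only DNS traffic for 2 minutes
cd "C:\Program Files\Microsoft Network Monitor"
NMCap /network * /capture "DNS" /StopWhen /TimeAfter 2 min /file DNS.cap

rem On the client workstation: generate name resolution traffic while the capture runs (Ctrl+C stops the ping)
ping -t www.windowsecurity.com
nslookup www.windowsecurity.com

Once NMCap stops, open DNS.cap in the Network Monitor GUI and step through the frames as described above.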
The answer to this question is not easy, and I reckon that no one should try to answer it for you without full know-how and understanding of your present IT infrastructure! Your organization needs to research the Cloud infrastructure and related costs, and finally compare the results against the on-premise setup. The research should include hands-on experience and test runs of some of the organization's critical services with adequate sample data.

To help you with the feasibility study, Microsoft has made available an online Total Cost of Ownership (TCO) calculator: "Use the Windows Azure platform TCO calculator, and in 10 minutes or less, you'll see how Windows Azure compares to on-premises solutions, quantify migration costs, and get a pricing overview." This is a recommended starting point. Why? Because getting migration and operational costs at an early stage can help senior management decide whether or not to move to the Cloud (in this specific case, to Azure). This would save the IT team from conducting further research and training on the specific solution if the decision by senior management is a no-go; they can skip to the next solution or provider. So, I suggest doing the cost homework first!

The cost calculator will help you determine the right Windows Azure platform and provide a pricing overview, and it will help you quantify the migration costs to the Cloud infrastructure as well as the application delivery costs. The cost analysis is based on the company's industry, location, required services/applications and their specs, user requirements, user and application growth, foreseeable intermittent spikes, etc. As you can see, the tool gathers quite a number of elements in order to compute an accurate estimate. The Windows Azure platform TCO calculator can be found here.
SANTA CLARA, Calif., March 28, 2011 – Intel Corporation announced today its highly anticipated third-generation solid-state drive (SSD), the Intel® Solid-State Drive 320 Series (Intel® SSD 320 Series). Based on its industry-leading 25-nanometer (nm) NAND flash memory, the Intel SSD 320 replaces and builds on its high-performing Intel® X25-M SATA SSD. Delivering more performance and uniquely architected reliability features, the new Intel SSD 320 offers new higher-capacity models, while taking advantage of cost benefits from its 25nm process with an up to 30 percent price reduction over its current generation.

"Intel designed new quality and reliability features into our SSDs to take advantage of the latest 25nm silicon, so we could deliver cost advantages to our customers," said Pete Hazen, director of marketing for the Intel Non-Volatile Memory (NVM) Solutions Group. "Intel's third generation of SSDs adds enhanced data security features, power-loss management and innovative data redundancy features to once again advance SSD technology. Whether it's a consumer or corporate IT looking to upgrade from a hard disk drive, or an enterprise seeking to deploy SSDs in their data centers, the new Intel SSD 320 Series will continue to build on our reputation of high quality and dependability over the life of the SSD."

The Intel SSD 320 is the next generation of Intel's client product line for use in desktop and notebook PCs. It is targeted at mainstream consumers, corporate IT or PC enthusiasts who would like a substantial performance boost over conventional mechanical hard disk drives (HDDs). An SSD is more rugged, uses less power and reduces the HDD bottleneck to speed PC processes such as boot-up and the opening of files and favorite applications. In fact, an upgrade from an HDD to an Intel SSD can give users one of the single best performance boosts, providing an up to 66 percent gain in overall system responsiveness.1 The Intel SSD 320 Series comes in 40 gigaby