Optimizing server power usage

Published on 18 June 2013 / Last Updated on 18 June 2013

This article examines the issue of power usage by servers, how to plan for it, and how to reduce it in order to keep your costs under control.

The problem

Servers can eat a lot of electrical power, and datacenters are voracious consumers of it. Unfortunately electricity costs money and that means you need to figure this carefully into your budget when planning your server infrastructure.

Where do you start? A useful rule of thumb, illustrated in Figure 1, says that you can apportion the electricity consumed into three roughly equal portions. This means, for example, that for every 100 kWh of electrical power consumed in a datacenter, you can expect that:

  • Approximately 33 kWh will be utilized for cooling purposes (air conditioning)
  • Approximately 33 kWh will be dissipated by server hardware (e.g. disk drives, power supply units, fans) and by associated things like voltage regulators and UPS units
  • Approximately 33 kWh will be available for processing purposes

Figure 1: Datacenters often waste twice as much power as they use to run workloads.

Of course this can vary greatly depending on how "green" your servers are and how efficiently you design the cooling and airflow system of your datacenter.
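The thirds rule above is easy to turn into a quick planning estimate. The sketch below simply applies the rough one-third splits; the proportions are the article's rule of thumb, not measured values for any particular facility.

```python
# Rough datacenter power budget based on the "thirds" rule of thumb.
# The equal splits are illustrative assumptions, not measured values.

def power_budget(total_kwh):
    """Split total consumption into cooling, hardware losses, and useful work."""
    cooling = total_kwh / 3
    hardware_losses = total_kwh / 3
    useful_compute = total_kwh - cooling - hardware_losses
    return {
        "cooling": cooling,
        "hardware_losses": hardware_losses,
        "useful_compute": useful_compute,
    }

budget = power_budget(100)
print(budget)  # each bucket comes out to roughly 33.3 kWh
```

Swapping in your own measured proportions (for example from a PUE audit) makes the same calculation useful for real capacity planning.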

But why is power wasted in the first place? There are two main reasons:

  1. Electrical devices and systems are not 100 percent efficient. For example, the typical UPS has only about 90% efficiency while the typical power supply unit (PSU) in a server has only about 80% efficiency. This means a UPS wastes about 10% of the power it consumes as heat while a PSU dissipates about 20% as heat.
  2. Cooling and airflow systems are often not designed very efficiently. For example, if you're standing 10 feet away from the server rack and feel cold air blowing on your face, that cold air is wasted because it's cooling you instead of the servers.
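The efficiency losses in point 1 compound, because grid power passes through the UPS first and then the server's PSU. A minimal sketch using the typical figures quoted above (90% UPS, 80% PSU):

```python
# Cascaded efficiency: power flows through the UPS, then the server PSU.
# Efficiency figures are the typical values quoted in the text.

UPS_EFFICIENCY = 0.90
PSU_EFFICIENCY = 0.80

def delivered_power(watts_from_grid):
    """Watts actually reaching the server's internal components."""
    return watts_from_grid * UPS_EFFICIENCY * PSU_EFFICIENCY

grid = 1000.0
print(delivered_power(grid))         # 720.0 W reaches the components
print(grid - delivered_power(grid))  # 280.0 W lost as heat in UPS + PSU
```

Note that only 72% of the grid power survives both stages, and the 280 W of waste heat then has to be removed by the cooling system, which consumes still more power.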

Some solutions

The solutions to optimizing server power usage are manifold. First, the two reasons behind power waste described above immediately suggest two corresponding solutions to increasing power efficiency in your datacenter:

  1. Purchase electrical devices and systems that are more energy efficient. But make sure you read the specs of each device or system you're considering buying, because "green" doesn't always mean energy efficient to the degree you might hope for.
  2. Design and implement an air conditioning system that efficiently delivers cooling where it's most needed, i.e. the CPUs of your servers first and other components such as disk drives second.

Of course, the main problem with solution 1 is that it's disruptive--replacing old servers with new ones involves migrating server workloads, and migration always involves downtime as well as risk. And the problem with solution 2 is that it takes two things companies are often short of: brains and time. You can always trade money for brains and time by hiring outside experts to analyze your current airflow and cooling system and recommend or implement changes, but most companies are also short of money, or at least act as if they are.

The best time to implement solution 1 is probably during your server operating system refresh cycle, for example when moving from Windows Server 2003 to Windows Server 2012. And the best way to implement it is by virtualizing your server workloads using physical-to-virtual (P2V) conversion so that your many servers can run as virtual machines on a small number of virtualization hosts such as Hyper-V hosts or VMware ESX servers. The reason is that power efficiency is largely a matter of scale: high-end, big-iron multiprocessor systems tend to be more energy efficient, relatively speaking, than cheaper systems with only a few processors.
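The consolidation arithmetic is easy to sanity-check. The figures below are purely illustrative assumptions (not vendor data): twenty lightly loaded physical servers versus the same workloads running as VMs on two larger hosts.

```python
# Illustrative P2V consolidation arithmetic (assumed wattages, not vendor data):
# 20 lightly loaded physical servers at ~250 W each, versus the same
# workloads running as VMs on 2 virtualization hosts at ~800 W each.

physical_watts = 20 * 250  # 5000 W of continuous draw before P2V
virtual_watts = 2 * 800    # 1600 W after consolidating onto two hosts

print(physical_watts - virtual_watts)  # 3400 W of continuous draw eliminated
```

Even before counting the knock-on cooling savings, eliminating that much continuous draw adds up to tens of megawatt-hours per year.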

Of course there are other steps you can take to optimize power usage in your datacenter. Here are a few more things you can do that you might not have considered:

  • Implement a data tiering solution and migrate as much of your business data as you can from Tier 1 to Tier 2. Because Tier 2 data storage systems typically use low-cost high-capacity 5400 RPM hard disk drives, data storage on Tier 2 utilizes less electrical power per GB stored and accessed than for data stored on Tier 1 storage devices. See my articles Data Tiering and Overprovisioning, Data Tiering Strategies, and Data Tiering and Service Level Agreements here on WindowsNetworking.com for more information about planning and implementing data tiering.
  • Use thin provisioning to allocate data storage "just in time" from a storage area network (SAN) or from a Windows Server 2012 Storage Spaces solution. For more information about Storage Spaces see this link.
  • Purchase and implement an enterprise monitoring system and use it to monitor server power usage and other indicators of energy, processing, and data transfer efficiency. Check with your server hardware vendor for the best solution to address these needs. The point here is that if you don't measure your energy efficiency, you can't know whether the steps you take or propose will add any significant savings to your company's bottom line.
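Once you are measuring power draw, turning the readings into dollars is straightforward. A minimal sketch, assuming an average draw from your monitoring system or a metered PDU and an illustrative electricity rate (substitute your utility's actual tariff):

```python
# Turn a measured average power draw into an annual electricity cost.
# RATE_PER_KWH is an assumed figure - use your utility's actual rate.

HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.10  # USD per kWh (assumption)

def annual_cost(avg_watts, rate=RATE_PER_KWH):
    """Yearly electricity cost for a device averaging avg_watts of draw."""
    kwh_per_year = avg_watts / 1000 * HOURS_PER_YEAR
    return kwh_per_year * rate

before = annual_cost(450)  # e.g. an older server averaging 450 W
after = annual_cost(300)   # the same workload after an efficiency measure
print(round(before - after, 2))  # annual savings from the 150 W reduction
```

Running the same calculation before and after each change is exactly the "measure it or you can't know" discipline the bullet above argues for.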

Implementation strategies

Where do you begin? Should you look for "low-hanging fruit" that you can implement with small effort and low cost? Or should you address major issues that may take time but can pay off in big ways in terms of cost savings for your company? The answer to that dilemma is that it depends largely on the scale of your operations.

For example, if you're a large company running dozens of servers at several different locations, you might focus on retiring old server hardware and consolidating server workloads onto a handful of powerful, energy-efficient virtualization host machines. You could also look at migrating some or all of your server workload into the cloud, provided the cost/benefit analysis indicates significant savings might be achieved by following this approach (and assuming you're comfortable with having a cloud hosting provider handle the infrastructure side of your IT implementation).

On the other hand, if your organization already has dozens or hundreds of virtualization host machines running in a data center at head office, you might want to take the time to carefully analyze the efficiency of your air conditioning system in delivering cooling to these servers. You might discover that by making some low-cost changes to your air ducting system, or by installing a few dozen more thermostats adjacent to servers, you might be able to double the efficiency of your air conditioning system and knock 10 or 15 percent off your datacenter electricity bill. Other small changes that can garner big results, if you analyze carefully, include:

  • Blocking cable openings to reduce leakage
  • Bundling cables to allow air to flow more freely
  • Adding more air return ducts
  • Reorganizing equipment into hot racks and cool racks
  • Positioning A/C units so they can perform better
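The "10 or 15 percent" figure above is consistent with a back-of-envelope check against the thirds rule of thumb from earlier: if cooling accounts for roughly a third of the bill and airflow fixes double the cooling system's efficiency, the cooling share halves. The numbers below are hypothetical.

```python
# Back-of-envelope: cooling is ~1/3 of the total bill (per the earlier
# rule of thumb); doubling cooling efficiency halves that share.
# Both inputs are assumptions - substitute your own measured figures.

def bill_reduction(cooling_share=1 / 3, efficiency_gain=2.0):
    """Fraction of the total electricity bill saved by the cooling fix."""
    new_cooling_share = cooling_share / efficiency_gain
    return cooling_share - new_cooling_share

print(round(bill_reduction() * 100, 1))  # ~16.7 percent of the total bill
```

That ceiling of roughly 17% shows why the article's more cautious 10-15% estimate is plausible for real-world, imperfect improvements.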

Believe it or not, the biggest problem with air conditioning systems in datacenters is usually overprovisioning--too much cooling to keep temperatures unnecessarily low. Overprovisioning is the bane of IT departments and stems from a reluctance to analyze risk. I talked about this issue earlier in my article Data Tiering and Overprovisioning here on WindowsNetworking.com.
