Making Big Cuts in Data Center Energy Use

The energy used by our nation’s servers and data centers is significant. In a 2007 report, the Environmental Protection Agency estimated that this sector consumed about 61 billion kilowatt-hours (kWh), accounting for 1.5 percent of total U.S. electricity consumption. While the 2006 energy use for servers and data centers was more than double the electricity consumed for this purpose in 2000, recent work by RMI Senior Fellow Jonathan Koomey, a researcher and consulting professor at Stanford University, found that this rapid growth slowed because of the economic recession. At the same time, the economic climate led data center owner/operators to focus on improving energy efficiency of their existing facilities.

So how much room for improvement is there within this sector? The National Snow and Ice Data Center in Boulder, Colorado, achieved a reduction of more than 90 percent in its energy use in a recent remodeling (case study below). More broadly, Koomey’s study indicates that typical data centers have a PUE (see sidebar) between 1.83 and 1.92. If all losses were eliminated, the PUE would be 1.0. Impossible to get close to that value, right? A survey following a 2011 conference of information infrastructure professionals asked, “…what data center efficiency level will be considered average over the next five years?”

More than 20 percent of the respondents expected average PUE to be within the 1.4 to 1.5 range, and 54 percent were optimistic that the efficiency of facilities would improve to realize PUE in the 1.2 to 1.3 range.
Further, consider this: Google’s average PUE for its data centers is only 1.14. Even more impressive, Google’s PUE calculations include transmission and distribution losses from the electric utility. Google builds its own efficient servers, has optimized power distribution, and uses many strategies to drastically reduce cooling energy consumption, including a unique approach to cooling in a hot and humid climate using recycled water.
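
As a quick illustration of what those PUE figures mean, here is a minimal sketch of the calculation; the energy values below are made-up round numbers, not measurements from any facility mentioned in this article:

```python
# PUE (power usage effectiveness) = total facility energy / IT equipment energy.
# A PUE of 1.0 would mean zero overhead (no cooling, lighting, or electrical losses).

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# A "typical" facility in the range Koomey reports (PUE ~1.83-1.92):
print(pue(total_facility_kwh=1_870_000, it_equipment_kwh=1_000_000))  # 1.87

# A highly optimized facility like Google's (PUE ~1.14):
print(pue(total_facility_kwh=1_140_000, it_equipment_kwh=1_000_000))  # 1.14
```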

SO WHERE DOES THE ENERGY GO IN DATA CENTERS?

For every unit of power delivered to IT equipment, additional energy is used to cool and light the rooms that house the servers. Energy is also lost to inefficient power supplies, idling servers, unnecessary processes, and bloatware (pre-installed programs that aren’t needed or wanted). In fact, about 65 percent of the energy used in a data center or server room goes to space cooling and electrical (transformer, UPS, distribution, etc.) losses. Several efficiency strategies can reduce these losses.

For more information on best practices for designing low-energy data centers, refer to the Best Practices Guide from the Federal Energy Management Program.

REDUCING COOLING LOADS

About half of the energy use in data centers goes to cooling and dehumidification, which presents a huge opportunity for savings. First, focus on reducing the cooling loads in the space. After the load has been reduced through passive measures and smart design, select the most efficient and appropriate technologies to meet the remaining loads. Reducing loads is often the cheapest and most effective way to save energy, so we will focus on those strategies here.

Cooling loads in data centers can be reduced in a number of ways: more efficient servers and power supplies, virtualization, and consolidation into hot and cold aisles. In its simplest form, hot aisle/cold aisle design involves lining up server racks in alternating rows with cold air intakes facing one way and hot air exhausts facing the other. In more sophisticated designs, a containment system (anything from plastic sheeting to commercial products with variable fans) can be used to isolate the aisles and prevent hot and cold air from mixing.

But one of the simplest ways to save energy in a data center is to raise the temperature. It’s a myth that data centers must be kept cold for optimum equipment performance. You can raise the cold aisle setpoint of a data center to 80°F or higher, significantly reducing energy use while still conforming to both the American Society of Heating, Refrigerating, and Air-Conditioning Engineers’ (ASHRAE) recommendations and most IT equipment manufacturers’ specs. In 2004, ASHRAE Technical Committee 9.9 (TC 9.9) standardized temperature (68 to 77°F) and humidity guidelines for data centers. In 2008, TC 9.9 widened the temperature range (64.4 to 80.6°F), enabling an increasing number of locations throughout the world to operate with more hours of economizer usage.

For even more energy savings, refer to ASHRAE’s 2011 Thermal Guidelines for Data Processing Environments, which presents an even wider range of allowable temperatures within certain classes of server equipment.
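
To see why a wider allowable range translates into more economizer hours, here is a rough sketch; the hourly outdoor temperatures are randomly generated stand-ins for real weather data, and treating the ASHRAE upper bounds as the economizer cutoff is a simplification:

```python
# Rough sketch: count hours per year when outdoor air is cool enough to use
# an airside economizer instead of mechanical cooling. The temperature series
# below is a random stand-in for real hourly weather data.
import random

random.seed(0)
outdoor_temps_f = [random.gauss(60, 18) for _ in range(8760)]  # one value per hour

def economizer_hours(temps_f, max_inlet_f):
    return sum(1 for t in temps_f if t <= max_inlet_f)

# 2004 recommended upper bound (77 F) vs. 2008 widened upper bound (80.6 F):
print(economizer_hours(outdoor_temps_f, 77.0))
print(economizer_hours(outdoor_temps_f, 80.6))
```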

CASE STUDY: NSIDC GREEN DATA CENTER

Just up the road from RMI’s office in Boulder, the National Snow and Ice Data Center runs around the clock to provide 120 terabytes of scientific data to researchers across the globe. Cooling the server room used to require over 300,000 kWh of energy per year, enough to power 34 homes. The data center was recently redesigned with all major equipment sourced within 20 miles of the site. The redesign resulted in a reduction of more than 90 percent in the energy used for cooling. The new Coolerado system, essentially a superefficient indirect evaporative cooler built around a patented heat and mass exchanger, uses only 2,560 kWh per year.
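
A quick back-of-envelope check of those cooling figures (the per-home number is simply what the article’s own “34 homes” comparison implies, not an independent statistic):

```python
# Back-of-envelope check on the NSIDC cooling numbers quoted above.
old_cooling_kwh = 300_000   # kWh/year before the retrofit
new_cooling_kwh = 2_560     # kWh/year with the Coolerado system

savings = 100 * (old_cooling_kwh - new_cooling_kwh) / old_cooling_kwh
print(f"Cooling energy reduction: {savings:.1f}%")           # ~99.1%, well over 90%

# "Enough to power 34 homes" implies an assumed household use of roughly:
print(f"{old_cooling_kwh / 34:,.0f} kWh per home per year")  # ~8,824 kWh
```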

Before the engineers from RMH Group could use the Coolerado in lieu of compressor-based air conditioning, they had to drastically reduce the cooling loads. They accomplished this with the following strategies:

  • Less stringent temperature and humidity setpoints for the server room—this design meets the ASHRAE Allowable Class 1 Computing Environment setpoints (see Figure 2)
  • Airside economizers (enabled to run far more often within the expanded temperature ranges)
  • Virtualization of servers
  • Rearrangement and consolidation into hot and cold aisles

The remaining energy required for cooling and to power the servers is offset by the energy produced by the onsite 50.4 kW solar PV system. In addition to the clean energy produced onsite, a battery backup system provides added security in the case of a power outage.
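
For a sense of scale, the annual output of a 50.4 kW array can be estimated from an assumed capacity factor; the roughly 18 percent used below is our assumption for a sunny Colorado site, not a figure reported by NSIDC:

```python
# Rough estimate of annual output from the onsite 50.4 kW solar PV array.
pv_capacity_kw = 50.4
assumed_capacity_factor = 0.18   # assumption for fixed-tilt PV in Colorado
hours_per_year = 8760

annual_generation_kwh = pv_capacity_kw * assumed_capacity_factor * hours_per_year
print(f"~{annual_generation_kwh:,.0f} kWh/year")  # roughly 79,000 kWh/year
```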

Rick Osbaugh, the lead design engineer from RMH Group, cites three key enabling factors that allowed such huge energy savings:

  • A Neighborly Inspiration: The initial collaboration between NREL and NASA on utilizing a technology never used on a data center was the starting point of the design process. This collaboration was born from two neighbors living off the grid in Idaho Springs—but in this case, these neighbors also happened to be researchers at NREL and NASA.
  • Motivated Client: In this case, the client, as well as the entire NSIDC staff, wanted to set an example for the industry, and pushed the engineers to work out an aggressive low-energy solution. In order to minimize downtime, the staff members at the NSIDC all pitched in to help ensure that the entire retrofit was done in only 90 hours.
  • Taking Risks: Finally, the right team was assembled to implement a design that pushes the envelope. The owner and engineer were willing to assume risks associated with something never done before.

CASE STUDY: TOP 5 SEARCH ENGINE COMPANY

In 2011, Mortenson Construction completed an 85,000-square-foot data center expansion for a top five search engine company in Washington state. This scalable, modular system supports a 6 MW critical IT load and has a PUE of only 1.08! This level of efficiency was possible because of a virtual design process that used extensive 3D modeling coupled with an innovative cooling strategy. Referred to as “computing coops,” the pre-engineered metal buildings incorporate many of the same free-air cooling concepts that chicken coops use: outside air is drawn in through the sides of the building, passes through the servers, and is then exhausted through the cupola, creating a chimney effect.
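
To put a PUE of 1.08 in perspective, here is the implied overhead at full IT load, worked out from the two numbers quoted above (a simple illustration, not Mortenson’s figures):

```python
# Overhead power implied by a 6 MW critical IT load at a PUE of 1.08.
it_load_mw = 6.0
pue = 1.08

total_facility_mw = it_load_mw * pue            # 6.48 MW total draw at full load
overhead_mw = total_facility_mw - it_load_mw    # 0.48 MW for cooling, lighting, losses
print(total_facility_mw, overhead_mw)

# The same IT load at a "typical" PUE of ~1.9 would need about 5.4 MW of overhead.
print(it_load_mw * 1.9 - it_load_mw)
```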

With a tight construction schedule (only eight months), the design team created an ultraefficient data center while also saving over $5 million compared to the original project budget.

A special thanks to Rick Osbaugh of the RMH Group, and Hansel Bradley of Mortenson Construction for contributing content for this article.

from: http://blog.rmi.org/blog_making_big_cuts_in_data_center_energy_use

Some notes for the Data Centre Power Better Practice Guide

ICT Efficiency
Reducing the power needed by the ICT equipment (the productive use) is often the most effective way to maximise power efficiency. Reducing the ICT equipment’s power load also shrinks the overhead power needed: for example, less heat is generated, so less cooling is required.

Actions that reduce the power needed by ICT equipment include:

  • Virtualisation – moving workloads from dedicated ICT equipment (including servers, storage and networks) to shared ICT equipment can reduce the amount of power required by 10% to 40%.
  • Decommissioning – disused ICT equipment is often left powered on; identifying, decommissioning and removing it eliminates that wasted power.
  • Modernising – the latest models of ICT hardware use much less power for equivalent performance. Gartner advises that server power requirements have dropped by two thirds over the past two generations.
  • Consolidation – physical and logical consolidation projects can rationalise the data centre ICT equipment (a rough sketch of how these savings compound follows this list).
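
A minimal sketch of how those savings might compound, using assumed figures; the baseline load, the decommissioned load and the 25 per cent virtualisation saving are illustrative choices, not guide values:

```python
# Rough estimate of ICT power after the measures above, using assumed figures.
baseline_ict_kw = 200.0        # assumed existing ICT load
decommissioned_kw = 15.0       # assumed load of disused equipment still powered on
virtualisation_saving = 0.25   # within the 10-40% range quoted above
modernisation_saving = 2 / 3   # Gartner's "two thirds over two generations"

ict_kw = baseline_ict_kw - decommissioned_kw
ict_kw *= 1 - virtualisation_saving
ict_kw *= 1 - modernisation_saving
print(f"Estimated ICT load after measures: {ict_kw:.0f} kW")   # ~46 kW

# Overhead power (cooling, UPS losses) scales with the ICT load, so at a PUE of
# 1.8 each kW of ICT savings avoids roughly another 0.8 kW of overhead.
print(f"Estimated total saving at PUE 1.8: {(baseline_ict_kw - ict_kw) * 1.8:.0f} kW")
```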

Cooling Efficiency

Cooling systems are usually the major source of overhead power consumption, so there is almost always value in making cooling more efficient. A wide range of data centre cooling technology is available, giving agencies great flexibility in choosing an optimum solution.

Common techniques to minimise power use include:

  • Free air cooling brings cooler outside air into the data centre through dust and particle filters. In most Australian cities free air cooling can be used over 50 per cent of the time, and in Canberra over 80 per cent of the time.
  • Hot or cold aisle containment is a technique that aligns all the ICT equipment in the racks so that cold air arrives on one side of each rack and leaves on the other. This means that the chilled air produced by the cooling system is delivered to the ICT equipment without mixing with the warmer exhaust air.
  • Raising the data centre temperature exploits the capability of modern ICT equipment to operate reliably at higher temperatures. Data centres can now operate at between 23 and 28 degrees Celsius, rather than the traditional 18 to 21 degrees Celsius. Operating at higher temperatures means much less power is needed for cooling, and free air cooling becomes even more effective (see the sketch after this list). The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) publishes guidance on maintaining optimum air temperatures in data centres.
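
A small sketch of how a higher setpoint expands free air cooling hours; the hourly temperatures are randomly generated stand-ins, and a real assessment would use Bureau of Meteorology data for the site:

```python
# Fraction of the year when outside air is cool enough for free air cooling,
# at the traditional setpoint versus the raised setpoint. The temperature
# series is a placeholder, not real weather data.
import random

random.seed(1)
hourly_temp_c = [random.gauss(14, 7) for _ in range(8760)]

def free_cooling_fraction(temps_c, setpoint_c, approach_c=2.0):
    # Require a small approach margin between outside air and the supply setpoint.
    usable = sum(1 for t in temps_c if t + approach_c <= setpoint_c)
    return usable / len(temps_c)

print(f"21 C setpoint: {free_cooling_fraction(hourly_temp_c, 21):.0%} of hours")
print(f"27 C setpoint: {free_cooling_fraction(hourly_temp_c, 27):.0%} of hours")
```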

[Figure: PUE chart]

Agencies should also evaluate the environmental impact of cooling solutions. The most common impact is excessive water use, although some cooling systems also use hazardous chemicals.

The investment case for cooling systems is quite different from that for ICT equipment. The asset life is usually 7 to 15 years. During the life of the cooling systems, the ICT equipment can be expected to change between two and five times, and the amount of cooling required will vary significantly as the ICT equipment changes. This variability means that agencies should seek cooling solutions that can adjust as the demand for cooling rises and falls.

iSpace brings Data Center technology into its training curriculum

GD&TĐ – The strategic partnership signing ceremony between the College of Information Technology (iSpace) and DataCenter Services, aimed at bringing Data Center technology into the training curriculum for students, took place this morning (2 April).

The strategic partnership signing ceremony between the College of Information Technology (iSpace) and DataCenter Services

iSpace is regarded as the first school in the country to bring an in-depth Data Center technology curriculum into its teaching for IT students and to promote the development of the Data Center specialist profession in Vietnam.
Mr. Nguyễn Hoàng Anh, rector of the iSpace College of Information Technology, said that Data Center knowledge will be built into the school’s general curriculum for final-year students and will be a mandatory requirement for graduation. In addition to being equipped with the knowledge to keep the information of organisations and businesses safe and secure, students will gain the additional knowledge and skills needed to work in data centers running the most modern technology. Under the curriculum design (developed by senior experts from DataCenter Services in collaboration with iSpace), final-year students will study for around two to three months and receive a certificate of completion after they graduate. They will also be equipped with the latest, most up-to-date knowledge in managing and operating Data Centers.