August 2006

new techniques for energy-efficient data center cooling
apply to both large and small centers

by wally phelps
data center product manager

introduction

a recent it industry study identified power and cooling as the top two concerns of data center operators. for example, microsoft and google are building ultra-large data centers in the northwest, where electricity costs are low. however, few companies that rely on their it infrastructure have the freedom or budget to relocate to low-cost electricity areas; most must look for other ways to reduce power and cooling costs. recent advances in airflow control for data centers improve cooling and lower energy costs, with utility savings paying for the upgrades in a year or less. the new airflow control technology applies to data centers of any size, from 1,000 sq. ft. to 100,000 sq. ft., and the solutions are scalable, so they are cost effective.

problem statement

data center professionals have traditionally been forced to focus their attention on introducing and maintaining high-availability hardware and software under strict deadlines, often with little advance notice. now, with both power densities and energy costs rising, cooling and power require more systematic attention. evaluations conducted by degreec on more than a dozen data centers have found that many causes of poor cooling are widespread, and in most cases these problems are easily corrected without disrupting it operations.

common problems of data center cooling

most data centers are run far below their network critical physical infrastructure (ncpi) capacity for cooling. while a certain margin is desirable for safety and redundancy, running a data center at 30-50% of capacity, as many do, causes significant waste and results in improper cooling.

here are the common causes of poor and inefficient cooling, found in all but the newest and fully airflow-engineered data centers.

1. mixing. when hot and cool airstreams combine, they raise rack intake temperatures, which can cause server failures, and they lower the temperature of the air returning to the computer room air conditioners (cracs). both conditions are undesirable.

2. recirculation. when hot air from server exhausts is drawn into server intakes instead of returning to the crac. this overheats the equipment at the top of the racks.

3. short circuiting. when cool air returns directly to the crac before doing any cooling work. this reduces the crac return air temperature.

4. leakage. when cool air is delivered in an uncontrolled manner, usually through cable cutouts. this air is wasted and it also reduces the crac return air temperature.

5. dehumidification. when the crac cooling coils are below the dewpoint of the return air, the coils condense moisture out of it. rehumidifying is expensive, and the latent load reduces overall cooling capacity.

6. underfloor obstructions. cables, pipes, conduits, and junction boxes all impede the normal flow of air and cause unpredictable flow through the perforated tiles.

7. underfloor vortex. poorly planned crac placement reduces floor pressure and produces insufficient cooling in the affected locations.

8. venturi effect. racks placed too close to a crac do not get enough cooling because the cool air exiting the crac at high velocity limits flow from the perforated tiles. in extreme cases air can even be drawn from the room down into the underfloor plenum.

9. poor return path. hot air that has no clear route back to the crac causes mixing, higher rack temperatures, and lower crac return temperatures.

10. low crac return temperature. a common result of many of the conditions above, it reduces the efficiency of the cooling system and may trick the system into thinking the room is too cool, causing it to throttle back (see the sketch after this list).

11. defective cooling system operation. many cooling systems are improperly configured or operated. common conditions include incorrect setpoints, chilled water loops not operating as intended, and cracs in need of maintenance.

12. rack orientation. legacy equipment often uses non-standard airflow paths, or rows of racks may not be arranged in a hot aisle/cold aisle configuration.

13. external heat and humidity. data centers are often affected by outside weather conditions if they are not protected within the climate-controlled confines of a larger facility.
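to make item 10 concrete, here is a minimal sketch (not from the article) of why a falling crac return temperature starves the cooling system: the sensible capacity a unit can deliver scales with the difference between its return and supply air temperatures. the airflow and supply temperature below are hypothetical, and the 1.08 btu/hr per cfm·°f factor is the common rule of thumb for air at roughly standard density.

```python
# minimal sketch of the sensible-heat rule of thumb behind item 10:
# q = 1.08 * cfm * dT (btu/hr). airflow and supply temperature are hypothetical.

def crac_sensible_capacity_kw(airflow_cfm, return_temp_f, supply_temp_f):
    """approximate sensible cooling delivered by one crac, in kw."""
    delta_t_f = return_temp_f - supply_temp_f
    btu_per_hr = 1.08 * airflow_cfm * delta_t_f
    return btu_per_hr / 3412.0          # 3412 btu/hr per kw

# hypothetical 12,000 cfm unit with a 55 f supply temperature
for return_f in (75, 70, 65):
    kw = crac_sensible_capacity_kw(12000, return_f, 55)
    print(f"return air {return_f} f -> about {kw:.0f} kw of sensible cooling")

# mixing, short circuiting and leakage all pull the return temperature down,
# so the same crac removes less heat from the room.
```

with these assumed numbers, the same unit falls from roughly 76 kw to 38 kw of sensible cooling as its return air drops from 75 °f to 65 °f.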

cause-and-effect interactions

knowing the common causes of poor cooling forms the starting point for solving the problem. without a fundamental understanding of the interactions (fig. 1), typical remedies such as moving tiles, reconfiguring racks or changing operating points can have unintended and potentially disastrous effects on cooling in the data center.

fig. 1. simplified interaction diagram

however, by carefully studying the cause-and-effect interactions in the data center (fig. 1) and determining root cause (fig. 2), one can develop a systematic approach that can be applied to any size data center. although each data center is unique, the root causes are similar and so are their solutions.

fig. 2. simplified fishbone diagram, a structured problem solving tool used to help determine root cause.

new analytic tool

the adaptivcool™ product line and services solve data center cooling problems in a structured fashion. a key tool used to understand the complex interactions is computational fluid dynamics (cfd). the cfd technique produces a computer simulation of airflow and thermal conditions within the data center, allowing rapid testing of a variety of parameters. (please see fig. 3.)

fig 3. cfd results of a data center with a failed crac (in the lower left-hand corner) correlate well with actual measurements.
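for readers unfamiliar with the technique, the sketch below is a toy illustration of the basic idea behind a cfd-style thermal model: discretize the room into a grid, pin cells that represent known sources and sinks, and relax the remaining cells toward a steady-state temperature field. it is not the adaptivcool™ tool, and a real cfd solver also computes airflow (momentum and continuity), not just temperature diffusion; the room layout and temperatures here are invented.

```python
# toy cfd-style thermal relaxation on a 2-d grid (illustration only).
# cells pinned to fixed temperatures stand in for a crac discharge (cold)
# and two hot-aisle exhaust regions (hot).
import numpy as np

NX, NY = 40, 20                          # hypothetical room, gridded 40 x 20
temp = np.full((NY, NX), 72.0)           # start everywhere at 72 f
fixed = np.zeros((NY, NX), dtype=bool)

temp[0:2, 0:4] = 55.0                    # crac discharge (cold)
fixed[0:2, 0:4] = True
temp[8:12, 15:17] = 95.0                 # hot aisle, row 1
fixed[8:12, 15:17] = True
temp[8:12, 25:27] = 95.0                 # hot aisle, row 2
fixed[8:12, 25:27] = True

for _ in range(5000):                    # jacobi relaxation toward steady state
    new = temp.copy()
    new[1:-1, 1:-1] = 0.25 * (temp[:-2, 1:-1] + temp[2:, 1:-1] +
                              temp[1:-1, :-2] + temp[1:-1, 2:])
    new[fixed] = temp[fixed]             # re-impose the pinned cells
    temp = new

print(f"predicted temperature at a sample rack-intake cell: {temp[10, 20]:.1f} f")
```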

cfd analysis is extremely useful if the input data is accurate and if experienced hands analyze the output. cfd skills improve dramatically with experience: repeated application of the technique under varied conditions significantly increases the user's knowledge base, and with each new simulation the solutions come closer to measured conditions. this progressive enrichment has built the adaptivcool™ cfd knowledgebase. attempting cfd analysis without accurate input, analysis, or verification skills produces results that correlate poorly with measured values, and such off-target solutions can be hazardous to live data centers. fig. 4 illustrates a typical project sequence.

fig 4. typical adaptivcool™ data center project

one of the benefits of rigorously folding experience back into the knowledgebase is the ability to remediate lower-level issues without necessarily performing a complete cfd analysis. the adaptivcool™ cfd knowledgebase allows rapid and cost-effective solutions in smaller data centers and at sites that have a specific hotspot while the rest of the data center operates normally.

results at two centers

two data centers at opposite ends of the size spectrum illustrate how cfd analysis and the adaptivcool cfd knowledgebase can be used in different ways.

site a is a 9,000 sq. ft. build-out for a major international telecom company. the site consists of an existing 3,300 sq. ft. installation with 152 racks that degreec had optimized earlier using cfd analysis of its infrastructure. the results of this previous optimization were added to the knowledgebase.

fig 5. data center map showing the original space, on the left, and the proposed expansion space, the shaded area to the right.

due to the planned expansion and the addition of 250 racks (shown in fig. 5), the owner commissioned a new cfd analysis to determine the best locations for new racks and cracs (please see figs. 6 & 7). this study produced a detailed list of engineering recommendations before construction began. a few examples follow.

1. improved siting. the best location for cracs in the expansion space was a centralized arrangement, instead of distributing them around the perimeter of the room as the original plan did. cooling is more consistent this way, and a crac failure does not disrupt an entire section of the room. this set-up also means cracs can be added incrementally as required, and redundant cracs can be turned off and kept in reserve (see the sketch after this list).

2. turning liabilities into assets. the improved rack layout uses existing support columns as passive dams in several cold aisles. this step allows higher rack density.

3. improving airflow. dc power-supply cables and cable trays slated to be installed under the raised floor would have obstructed underfloor airflow. lowering these runs by two inches creates consistent flow through the perforated tiles.
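the capacity logic behind recommendation 1 can be sketched as follows; the rack count comes from the article, but the per-rack power and per-crac capacity are assumptions for illustration, not site a's actual design values.

```python
# hedged sketch of n+1 crac sizing for a staged build-out: carry the it load,
# then keep one extra unit in reserve. capacities below are assumed, not site a data.
import math

def cracs_required(it_load_kw, unit_capacity_kw, reserve_units=1):
    """units needed to carry the load, plus the requested reserve (n+1 by default)."""
    return math.ceil(it_load_kw / unit_capacity_kw) + reserve_units

KW_PER_RACK = 2.0        # assumed average draw per rack
UNIT_KW = 100.0          # assumed usable capacity per crac

for racks in (100, 175, 250):            # staged growth toward the 250 new racks
    load = racks * KW_PER_RACK
    print(f"{racks:>3} racks ({load:.0f} kw) -> {cracs_required(load, UNIT_KW)} cracs installed")
```

with the cracs grouped centrally, units can follow this staged growth because any unit can serve any part of the expansion space.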

fig 6. analysis of original plan for expansion

fig 7. optimized layout for expansion area. area to the right has more uniform cooling.

without an accurate cfd analysis of the facility, the expanded area would have inherited many of the same problems the original facility had. in the best case this would have caused inefficient cooling and required more cracs and electricity than necessary; in the worst case the load could not have been cooled properly regardless of installed crac capacity, and the space could not have been utilized fully. the cost of the cfd analysis was quickly recovered through lower utility bills, less infrastructure, better space utilization, smoother commissioning of the site, and improved cooling redundancy.

site b is the primary data center for a technology leasing corporation and consists of 1,400 sq. ft. of raised floor housing 24 racks and several dozen pcs. this relatively small site has experienced over-temp conditions, especially when the main building ac is shut down. site b has installed three auxiliary spot coolers to supplement its two 10-ton cracs, even though on paper one crac could handle the load.
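the claim that one crac could carry the load can be checked with a rough estimate; the per-rack and per-pc power figures below are assumptions, since the article does not give site b's measured load. one ton of refrigeration is about 3.517 kw.

```python
# rough load-versus-capacity check for site b (assumed equipment powers, not
# measured data). one ton of refrigeration is roughly 3.517 kw of cooling.
TON_TO_KW = 3.517

racks, kw_per_rack = 24, 1.0        # assumed average draw per rack
pcs, kw_per_pc = 36, 0.15           # "several dozen pcs", assumed draw each

it_load_kw = racks * kw_per_rack + pcs * kw_per_pc
one_crac_kw = 10 * TON_TO_KW

print(f"estimated it load: {it_load_kw:.1f} kw; one 10-ton crac: {one_crac_kw:.1f} kw")
print("one crac sufficient on paper:", it_load_kw <= one_crac_kw)
# in practice, poor air delivery meant the room still overheated, which is why
# the site had added spot coolers before the airflow was engineered.
```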

the initial audit found no major hotspots but did find a large swing between the coldest (66°f) and warmest (76°f) intakes. most of the cool air was delivered through floor cutouts, usually in the wrong places. in addition, the crac setpoints were set very low (68°f). (please see fig. 8.)

fig 8. small site b - baseline measurements

based on the experience derived from previous projects, the solution could be developed without extensive cfd analysis. first, airflow was engineered to the rack intakes. then the crac setpoints were raised to normal levels. (please see figs. 9, 10, 11.)

fig 9. adaptivcool cfd knowledgebase solution for small- to medium-sized data centers and isolated hot spots.

fig 10. small site b - after adaptivcool™ optimization

fig 11. temperature distributions

as a further demonstration of the efficiency gained by applying the adaptivcool™ cfd knowledgebase, one of the cracs was deliberately turned off. temperatures throughout the room rose by 2°f and stabilized. while this test proved there was enough margin to run with one crac, doing so was advised against because there is no automatic backup if the running crac fails.

cooling electricity savings at this site are around 20%, or in the range of $3,000-$4,000 per year depending on the going electricity rate. roi in this case is 10 months or less due to the low installed cost. the economies result from reduced duty cycles on the crac compressors and the dry cooler.
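the payback arithmetic is straightforward; the installed cost below is a hypothetical figure chosen to be consistent with the quoted savings range and the "10 months or less" roi, since the article does not state the actual project cost.

```python
# back-of-the-envelope payback calculation. the installed cost is assumed;
# the annual savings values are the range quoted in the article.
def payback_months(installed_cost_usd, annual_savings_usd):
    return 12.0 * installed_cost_usd / annual_savings_usd

INSTALLED_COST = 2500.0                  # hypothetical installed cost, usd

for savings in (3000.0, 3500.0, 4000.0):
    months = payback_months(INSTALLED_COST, savings)
    print(f"${savings:,.0f}/yr savings -> payback in {months:.1f} months")
```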

summary

while server heat densities and energy costs continue to soar, there is relief for both: engineering airflow eliminates hotspots while saving energy. furthermore, recent advances in this technology make it affordable and cost effective for data centers of nearly any size.
