
Chapter 3, 2010 Update: Principles of Data Center Infrastructure Efficiency


The 21st century brings a shift in data center efficiency.

BY MATT STANSBERRY

Energy Efficient IT

Energy-efficient best practices and designs are available to help enterprises reduce their data center energy use.


CHILLERS, AIR HANDLERS, power distribution, backup power—all of the mechanical processes that keep servers running smoothly account for more than half of an IT energy bill. But this is changing. Energy-efficiency best practices and equipment designs are available to help enterprises reduce their data center energy usage. This chapter is a primer on energy-efficient cooling and power distribution principles. It will cover the following topics:

• Data center cooling principles and raised-floor cooling
• Hot-aisle and cold-aisle containment
• High-density supplemental systems (such as forced air and liquid cooling)
• The pros and cons of economizers in the data center
• Energy-efficient power distribution
• The direct-current (DC) power debate

DATA CENTER COOLING PRINCIPLES AND RAISED-FLOOR COOLING

Data center cooling is where the greatest energy-efficiency improvements can be made. The fundamental rule in energy-efficient cooling is to keep hot air and cold air separate. For many years, the hot-aisle/cold-aisle, raised-floor design has been the cooling standard. Surprisingly, though, few data centers fully implement this principle.

Hot aisle/cold aisle is a data center floor plan in which rows of cabinets are configured with air intakes facing the middle of the cold aisle. The cold aisles have perforated tiles that blow cold air from the computer room air-conditioning (CRAC) units up through the floor. The servers’ hot-air returns blow heat exhaust out the back of cabinets into hot aisles. The hot air is then sucked into a CRAC unit, where it’s cooled and redistributed through cold aisles.

While this configuration is widely accepted as the most efficient data center layout, many companies haven’t adopted it, and an even greater number of data centers execute it incorrectly or don’t take the principles far enough.

Preventing hot and cold air from mixing requires the separation of airflow in the front of the cabinets from the back, but some data center managers actively sabotage this cooling design. Data center design experts often recount horror stories about clients with highly engineered hot-aisle/cold-aisle layouts, where a data center manager has put perforated or grated tiles in a hot aisle or used fans to direct cold air behind the cabinets.


[FIGURE: THE HOT-AISLE/COLD-AISLE APPROACH]

Hot aisles are supposed to be hot. If you paid good money for data center design consultants, you should really take their advice. Data center managers can become unnerved by how hot the hot aisle can get, or they capitulate to the complaints of administrators who don’t want to work in the hot aisle. But mixing cold air into a hot aisle is exactly what you want to avoid.

Even data center managers who have adopted a hot-aisle/cold-aisle design run into air-mixing problems.

“Odd-sized cabinets, operations consoles and open rack space cause big gaps in the rows of cabinets, allowing hot air to recirculate into the cold aisles and cold air to bypass into the hot aisles,” said Robert McFarlane, a data center design expert and the principal at New York-based engineering firm Shen Milsom & Wilke.


Raised Floor Fundamentals

EIGHTEEN INCHES IS the minimum recommended raised-floor height; 24 inches to 30 inches is better, but it isn’t realistic for buildings without high ceilings. Here are some tips for getting the most out of your raised floor:

• Keep it clean. Get rid of the clutter—unused cables or pipes, for example—under your raised floor. Hire a cleaning service to clean the space periodically. Dust and debris can impede airflow.

• Seal off cable cutouts under cabinets, as well as spaces between floor tiles and walls or between poorly aligned floor tiles. Replace missing tiles or superfluous perforated tiles.

• Use a raised-floor system with rubber gaskets under each tile that allow each tile to fit more snugly onto the frame, minimizing air leakage.

• To seal raised floors, data center practitioners have several product options available to them, including brush grommets, specialized caulking and other widgets.

For more on blocking holes in a raised floor, read Robert McFarlane’s tip on SearchDataCenter.com, “Block those holes!”



The way to avoid hot and cold air mixing is to block the holes. Fasten blanking panels—metal sheeting that blocks gaps in the racks—over unused rack space. Tight ceilings and raised floors are also a must. Lastly, use air seals for all the cabling and other openings in the floor. Despite all of these efforts, hot and cold air will mix around the tops of cabinets and at the ends of aisles. Data center pros can mitigate these design problems by placing less important equipment—patch panels and minor equipment that does not generate a lot of heat—in these marginal areas.

So where should you put the sensitive equipment, the energy hogs that need the most cooling? According to McFarlane, the answer is counterintuitive. In almost all cases, under-floor air-conditioning units blast out a large volume of air at high velocity, which isn’t an optimal approach. The closer you are to those AC units, the higher the velocity of the air and, therefore, the lower the air pressure. It’s called Bernoulli’s Law. As a result, the cabinets closest to the air conditioners get the least amount of air.

That means you should probably put your sensitive equipment near the middle of the cabinet row, around knee height, rather than right up against the CRAC or the perforated floor. Because the air gets warmer as it rises, don’t place your highest heat-generating equipment at the top of a cabinet. It’s not rocket science, but it is physics.
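For readers who want the arithmetic behind that statement, the sketch below applies Bernoulli's principle to the underfloor plenum: where the air moves fastest, right in front of the CRAC discharge, the static pressure that pushes air up through the perforated tiles is lowest. Every number here is an illustrative assumption, not a measurement from any particular facility.

```python
# Minimal sketch of the Bernoulli effect described above; every value is assumed.
RHO_AIR = 1.2  # kg/m^3, approximate density of air at room conditions

def static_pressure(total_pressure_pa: float, velocity_m_s: float) -> float:
    """Bernoulli: total pressure = static pressure + dynamic pressure (0.5 * rho * v^2)."""
    return total_pressure_pa - 0.5 * RHO_AIR * velocity_m_s ** 2

PLENUM_TOTAL_PA = 25.0  # assumed total pressure in the raised-floor plenum
for location, velocity in [("tile nearest the CRAC", 6.0),
                           ("tile at mid-row", 3.0),
                           ("tile at the far end", 1.0)]:
    print(f"{location}: {static_pressure(PLENUM_TOTAL_PA, velocity):.1f} Pa of static pressure")
# The tile closest to the CRAC ends up with the least static pressure,
# which is why the nearest cabinets can be the most air-starved.
```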

Common cooling mistakes. Unfortunately, there are no shortcuts in physics, and McFarlane points to areas where data center pros can run into trouble. Some facilities have opted to put fans under cabinets to pull higher volumes of air out of the floor. But there is a limited amount of air in the plenum, and if the air volume that all of the fans demand exceeds the amount of air under the floor, cabinets without fans are going to be air-starved. Cabinets with fans placed farther from the air conditioners may also get less air than they need.

Another ill-advised shortcut is using tiles with extra perforation. Traditional raised-floor perforated tiles are only 25% open, but some grate tiles on the market are 56% open. More air is good, right? Not necessarily. According to McFarlane, if you have too many tiles with too much open area, the first few cabinets will get a lot of air, but the air to the rest will diminish as you get farther away from the air conditioners.

“The effect is like knifing a tire or popping a balloon,” McFarlane said. “Air takes the path of least resistance, and the data center is a system: If you start fiddling with one thing, you may affect something else.” You need to balance the air you have so it’s distributed where you need it.

According to Robert Sullivan, a data center cooling expert at the Santa Fe, N.M.-based Uptime Institute Inc., the typical computer room has twice as many perforated tiles installed as it should. Sullivan said having too many tiles can significantly reduce static pressure under the floor. This translates into insufficient airflow in the cold aisles. Thus, cold air gets only about halfway up the cabinets. The servers at the top of racks are going to get air someplace, and that means they will suck hot air out of the top of the room, recirculating exhaust air and deteriorating the reliability of the servers.
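A rough way to see why extra tiles starve the cold aisle is to treat the CRAC supply as a fixed volume of air that gets divided among the tiles, and to compare that with the airflow a loaded rack actually needs. The sketch below uses the standard sensible-heat relationship for air (BTU/hr = 1.08 × CFM × ΔT°F); the CRAC capacity, tile counts and rack load are assumptions for illustration only.

```python
# Assumed numbers throughout; the point is the proportions, not the absolute values.
def required_cfm(rack_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow needed to remove rack_kw of heat at a given air temperature rise,
    from the sensible-heat relation BTU/hr = 1.08 * CFM * delta_T(F)."""
    btu_per_hr = rack_kw * 3412.0
    return btu_per_hr / (1.08 * delta_t_f)

CRAC_SUPPLY_CFM = 12_000.0          # assumed total supply from the room's CRAC units
for tile_count in (20, 40):         # a right-sized layout vs. twice as many tiles
    print(f"{tile_count} perforated tiles -> ~{CRAC_SUPPLY_CFM / tile_count:.0f} CFM per tile")

print(f"A 5 kW rack needs roughly {required_cfm(5.0):.0f} CFM")
# Doubling the tile count halves the average airflow per tile, so racks fed by a
# single tile fall further short of what their heat load requires.
```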

Another common cooling mistake is to cool your data center like it’s a meat locker. Constantly keeping the air in the room at 60 degrees is unrealistic. The air conditioners will use more energy unnecessarily and will start a short-cycle pattern.

“When you do that, you wear the hell out of the air conditioner and completely lose your humidity control,” McFarlane said. “That short burst of demand wastes a lot of energy too.” McFarlane said oversized air conditioners do the same thing. “They cool things down too fast, and then they shut off. Things get hot quickly, and they turn back on. If I don’t match capacity to the load, I waste energy with start/stop cycles.”




Modeling the Airflow in Your Data Center

WHEN YOU MANAGE several thousand square feet of raised floor, supporting dozens of racks of sensitive IT equipment with various cooling demands, it is difficult to determine how to move air handlers or change perforated tiles to run your system more efficiently. Luckily, computational fluid dynamics (CFD) modeling tools are available to convey the impact of what you plan to do. Ansys Inc., which acquired Fluent Inc., offers CoolSim, a software tool that allows you to enter a data center’s design specs and perform airflow modeling. The other option is to install computational fluid dynamics software—such as TileFlow from Plymouth, Minn.-based Innovative Research Inc.—on your own servers.

Pete Sacco, a data center consultant and the founder of PTS Data Center Solutions Inc., warned that not all CFD software is made equal. He uses software from Future Facilities Ltd., a London-based data center design company. But it comes at a steep price: about $100,000 per licensed seat. That’s about three times as much as TileFlow, but Sacco said the more expensive software is worth it for him.

Fred Stack, the vice president of marketing at Liebert Corp., agreed: When the company does CFD assessments for customers, the Future Facilities CFD tool is its go-to software. Stack said that a full CFD assessment for a 7,000-square-foot data center could cost between $10,000 and $12,000, but the services can pay for themselves pretty quickly.

“A CFD analysis helps the customer understand air-management issues. Quite often you can save a lot of energy without spending money on more hardware,” Stack said. “The biggest issue is leaks. People are always amazed by the percentage of air not reaching the cold aisle. The obvious ones are the cutouts in the back. But there are leaks in data centers people don’t recognize. They think they’re small and minute. There could be an eighth-of-an-inch crack in the floor around a support pillar. Sealing that is a relatively easy thing to do and has a major impact on the reduction of bypass air.”

In January 2009, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) widened its recommended data center temperature and humidity ranges, which are measured at server equipment air inlets. ASHRAE expanded the highest recommended temperature from 77 degrees Fahrenheit to 80.6 degrees. That means managers can keep cold aisles operating at a much warmer temperature than they currently do.

An excellent resource for effectively matching data center cooling to IT load is ASHRAE’s Thermal Guidelines for Data Processing Environments.

HOT-AISLE/COLD-AISLE CONTAINMENT

By itself, hot-aisle/cold-aisle design isn’t enough with today’s data center server densities. IT pros need to take the concept of isolating hot air and cold air a step further, and today they’re using containment systems.

Hot-aisle/cold-aisle containment systems use a physical barrier that separates the hot- or cold-aisle airflow through makeshift design solutions like vinyl plastic sheeting, ducted plenum systems and commercial products from data center cooling vendors.

The combination of hot-aisle/cold-aisle containment and variable fan drives can create significant energy savings. The separation of hot and cold air can provide much better uniformity of air temperature from the top to the bottom of the rack. That uniformity enables data center pros to raise the set-point temperature more safely.

Storage vendor NetApp has used vinyl curtains similar to those in meat lockers to contain the air in the hot aisles in its Silicon Valley data center. Those curtains alone save the company 1 million kilowatt-hours, or kWh, of energy per year.


Yahoo Inc. has also employed vinyl curtains in one of its data centers for airflow containment. The search giant reduced its power usage effectiveness, or PUE, and saved half a million dollars with cold-aisle containment.
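PUE is simply total facility power divided by the power delivered to IT equipment, so anything that trims cooling energy without touching the IT load pulls the ratio down. The sketch below shows the arithmetic with made-up numbers; it is not based on Yahoo's actual figures.

```python
# Illustrative only; the loads below are assumptions, not reported figures.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT power (1.0 is the floor)."""
    return total_facility_kw / it_load_kw

IT_LOAD_KW = 1000.0
before = pue(total_facility_kw=2000.0, it_load_kw=IT_LOAD_KW)  # e.g., before containment
after = pue(total_facility_kw=1800.0, it_load_kw=IT_LOAD_KW)   # cooling trimmed by 200 kW
print(f"PUE before: {before:.2f}, PUE after: {after:.2f}, facility power saved: 200 kW")
```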

According to SearchDataCenter.com’s Purchasing Intentions 2009 survey, 30% of data center managers have implemented some kind of airflow containment, and an additional 15% plan to in the coming year.

Chuck Goolsbee, an executive at Seattle-based hosting company digital.forest, said he was surprised that only half the respondents planned to deploy containment. “By 2010, the response will be more than 90%,” he said.

While the number of companies employing airflow containment strategies continues to grow, there is little consensus on whether it is better to contain the hot air or the cold air. Kingston, R.I.-based infrastructure vendor American Power Conversion (APC) promotes hot-aisle containment, while Liebert Corp.’s product line supports cold aisle.

Jeremy Hartley, in technical support at U.K.-based rack vendor Dataracks, said cold-aisle containment is a better strategy, especially for existing data centers. According to Hartley, companies now deploy newer servers that are deeper than the racks. These servers extend out the back of the rack into the hot aisle and can reduce data center space, which is often at a premium. Also, server fans are often not powerful enough to pull in hot air efficiently.

Phil Dunn, the senior product manager at Emerson Network Power, which produces rack products, says cold-aisle containment systems can save 20% to 30% on cooling power usage. Its Emerson Knurr systems consist of a translucent Lexan suspended ceiling and a sliding door system that takes up only four inches at the end of each aisle. According to Dunn, the higher the density in a particular rack, the faster this kind of upgrade can pay for itself, and return on investment for customers is less than a year in most cases.
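The payback claim is easy to sanity-check with simple arithmetic. The sketch below uses the midpoint of the 20% to 30% savings range quoted above; the cooling load, electricity rate and installed cost are assumptions, so treat the result as an order-of-magnitude check rather than a quote.

```python
# All inputs are assumptions except the 25% figure, the midpoint of the quoted range.
COOLING_KW = 150.0               # assumed average cooling power for the contained zone
SAVINGS_FRACTION = 0.25
ELECTRICITY_USD_PER_KWH = 0.10
CONTAINMENT_COST_USD = 20_000.0  # assumed installed cost of doors, ceiling and panels

annual_savings = COOLING_KW * SAVINGS_FRACTION * 8760 * ELECTRICITY_USD_PER_KWH
payback_years = CONTAINMENT_COST_USD / annual_savings
print(f"Annual savings: ${annual_savings:,.0f}; simple payback: {payback_years:.1f} years")
# Roughly $33,000 a year and a payback of about 0.6 years under these assumptions.
```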


So what is the downside to containment systems? At this point, there are no prevailing standard practices, so different people have different ways to execute these systems. But the biggest problem with containment strategies is possibly fire suppression.

Where do you put sprinklers? Inside each of the contained systems? Some people attach plastic sheeting with melt-away tabs that allow the sheets to drop to the floor if the air reaches 130 degrees. Most data center design consultants are leery about this scheme, though. Dunn said Emerson is referring prospective customers to fire suppression experts to determine the best fire suppression strategy.

HIGH-DENSITY SUPPLEMENTAL COOLING

There is only so much air you can push out of a plenum without blowing the tiles out of the floor. So how do you cool high-density heat loads? Some data centers have turned to supplemental high-density cooling.

Supplemental cooling systems like the InfraStruXure InRow from APC, the Liebert XD and other models from Rittal, AFCO Systems and Wright Line place a cooling unit next to or on top of the cabinet, delivering a higher volume of cold air directly to the server intake. The result is more cooling than a raised floor can possibly deliver.


[FIGURE 2: LIEBERT’S COLD-AISLE CONTAINMENT SYSTEM. A Liebert cold-aisle containment system that has been deployed in a raised-floor environment.]



High-density cooling systems have the following advantages:

• Deliver more cooling than raised-floor options
• Deliver air more evenly up a cabinet
• Deliver cooling closer to the heat source

Some units offer a further advantage in that they prevent hot and cold air from mixing by putting an intake on a hot aisle or by sucking exhaust air directly into the AC unit. Hot air doesn’t have to travel back to the CRAC units 40 feet away.

On the downside, these systems can be more expensive and more complex to operate. In many cases, you still need the raised floor and traditional CRAC design to maintain a baseline of cooling and humidity for the rest of the data center. Additionally, many top-blow systems have to be ducted in order to work, and duct systems can be pricey and take up lots of space.

According to the Uptime Institute’s Sullivan, supplemental cooling systems deliver more cooling capacity with less energy, specifically in high-density situations. But he also warned that users can get locked into supplemental systems. If your needs change, it’s hard to get that cooling capacity across the room. “You don’t have the flexibility unless you uninstall it and physically move the unit,” he said, “whereas with the larger under-floor units, you can move the perforated tiles based on the load.”

Liquid cooling in data centers. So what is the most efficient means for cooling servers? Water is about 3,500 times more efficient than air at removing heat. Server vendors and infrastructure equipment manufacturers alike have lined up to offer all sorts of products, from chilled-water rack add-ons to pumped liquid refrigerants.
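The often-quoted ratio comes from comparing how much heat a given volume of water can carry versus the same volume of air. The property values below are standard textbook approximations at room conditions.

```python
# Volumetric heat capacity comparison using standard property values.
WATER_DENSITY = 998.0   # kg/m^3
WATER_CP = 4186.0       # J/(kg*K)
AIR_DENSITY = 1.2       # kg/m^3
AIR_CP = 1005.0         # J/(kg*K)

ratio = (WATER_DENSITY * WATER_CP) / (AIR_DENSITY * AIR_CP)
print(f"Water carries roughly {ratio:,.0f} times more heat per unit volume than air")
# Comes out near 3,500, which is the basis of the figure cited above.
```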


But evidence suggests that data center managers aren’t ready for liquid cooling. The majority of respondents said they would never use liquid cooling in their data centers—an unsurprising finding, said experts. In fact, the number of IT managers using liquid cooling is in the single digits, shows data from SearchDataCenter.com’s 2009 Data Center Purchasing Intentions Survey.

Gordon Haff, a senior analyst at Nashua, N.H.-based Illuminata Inc., said that liquid cooling probably scares most mainstream IT managers. He suggested that if increasing server density ultimately requires liquid cooling, companies may be more likely to outsource the data center than deal with the complexity of this cooling method.

A lot of this debate rages around water, but cooling technology vendor Liebert is quick to note that using pumped refrigerant eliminates a lot of the headaches associated with liquid cooling. Rather than water, Liebert’s XD systems use R134a, a coolant that changes from liquid to gas as it passes through the system.

In data centers, pumped refrigerant has some significant advantages over water:

• Liquid refrigerant takes up a lot less space than water systems, both in the cooling coils and the piping systems, which is a major plus for data centers trying to pack cooling into a small space.

• If water leaks, it can damage equipment. If your refrigerant leaks, you won’t have gallons seeping onto the floor. For data centers running lines overhead, the difference is significant.

• Because a refrigerant changes from liquid to gas, it takes less energy to pump than water.

• Because of the plumbing involved, water-based systems are less reconfigurable than refrigerant-based cooling that uses tubing and closed-circuit systems.



On the other hand, water is cheaper and easier to replace. Many rooms already have chilled-water lines coming in, and facility engineers are more familiar with water-based systems. Additionally, leaked refrigerants can contribute to greenhouse gas emissions and harm the environment.

PROS AND CONS OF AIR-SIDE AND WATER-SIDE ECONOMIZERS

Over the past year, data center designers have debated and tested the effectiveness of using air-side and water-side economizers as an alternative to traditional heating, ventilation and air conditioning (HVAC) systems. Economizers use outside air temperatures to cool servers directly or to cool water without using a chiller.

Air-side economizers bring large quantities of cold air from the outside into a computer room with air handlers. The energy savings comes from not using mechanical refrigeration (such as chillers and compressors) to cool the air. Air handlers duct the air in, filter it and expel waste heat back outside.

There are reasons to be wary of economizers, specifically air-side, because of particulates and fluctuating humidity levels. Sullivan said particulates and chemicals are bad news for sensitive electronics, and he worries about the corrosive effect of the salt air in places like Seattle, Portland and San Francisco.

An Argument Against Water Cooling

NEIL RASMUSSEN, the CTO at American Power Conversion Corp., said that direct water cooling is a bad application for data centers in flux. “Every day, servers are changing. It’s a much more difficult environment in which to plan a structured cooling system,” Rasmussen said. “Furthermore, not everything is a server in a data center. There are routers, patch panels, storage. There is a dynamic hodgepodge, where it would be very impractical to plan water piping.”

“I have customers that have been burned,” Sullivan said. “Some have data centers sitting on the outskirts of town, and when the farmers start plowing the fields, the dust clogs the AC units.”

Humidity is also a concern. “When the air outside is dry and cold and you bring it in and heat it up, it becomes really dry, and you have the potential for exposure to electrostatic discharge,” Sullivan said. “When it’s moist outside—if it’s hot—I could get condensation that would promote corrosion in the equipment. You only need 75% humidity for protected steel to rust.”

Despite these concerns, in 2007 the Lawrence Berkeley National Laboratory (LBNL) in Berkeley, Calif., published a study on the reliability of outside air to cool data centers and found that humidity sensors and filters can mitigate the risks. According to the report, “IT equipment reliability degradation due to outdoor contamination appears to be a poor justification for not using economizers in data centers.”

Water-side economizers are substantially less controversial and avoid the issue of particulates and humidity. When the outside air is dry and temperatures are below 45 degrees Fahrenheit, water-side economizers use a cooling tower to cool building water without operating a chiller. The cold water in a cooling tower is used to cool a plate-and-frame heat exchanger. The exchanger is a heat transfer device constructed of individual plates, which is inserted between the cooling tower and a chilled-water distribution system that runs through a building.
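One way to gauge whether a water-side economizer is worth the plumbing is to count how many hours a year the site sits below that 45-degree threshold. The sketch below does the count against synthetic temperature data; in practice you would feed it hourly weather data for the actual location.

```python
import random

random.seed(0)
# Hypothetical hourly outdoor temperatures for one year, in degrees F; substitute
# real weather-station or TMY data for the site being evaluated.
hourly_temps_f = [random.gauss(55.0, 18.0) for _ in range(8760)]

FREE_COOLING_THRESHOLD_F = 45.0  # threshold cited above for water-side economizers
free_hours = sum(1 for t in hourly_temps_f if t < FREE_COOLING_THRESHOLD_F)
print(f"Estimated free-cooling hours: {free_hours} of 8760 "
      f"({100.0 * free_hours / 8760:.0f}% of the year)")
```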

Ben Stewart, a data center facilities engineering executive at the hosting company Terremark, is a proponent of free cooling—water side and air side—but users need to be careful, he said.

Terremark uses air-side economizers in its Santa Clara, Calif., facility. “That is unconditioned air, and its humidity and cleanliness is in question,” Stewart said. “You need to carefully monitor humidity and adjust as necessary and filter the air to remove dust and dirt.”

“Also, since you are adding air volume to the space, you need to be removing an equal amount of volume somewhere, or you will pressurize your data center and your doors will not close properly and may even blow open,” Stewart warned.


Terremark recently constructed a facility in Culpeper, Va., and it will feature the company’s first use of water-side free cooling, according to Stewart. He said the closed system avoids humidity, contaminant and pressurization issues, but he’s had to factor in a lot of other concerns, such as the addition of glycol to the chilled water in order to prevent it from freezing.

As for the energy savings on forgoing mechanical refrigeration, the jury is still out. A lot of the savings equation depends on the outside air temperatures, how much more energy the air handlers use to filter huge amounts of air and a few other factors. In the coming years, you can expect the Environmental Protection Agency and other agencies to begin tracking and quantifying these data points.

DATA CENTER POWER DISTRIBUTION

While not as dramatic as removing waste heat, data center power distribution and backup inefficiencies are significant targets for data center managers.

Raise the voltage, save power. Lately, infrastructure vendors have paid a lot of attention to distributing power at higher voltages. According to Chris Loeffler, a product manager at Eaton Corp., virtually all IT equipment is rated to work with input power voltages ranging from 100 volts (V) to 240 V AC. The higher the voltage, the more efficiently the unit operates. Most equipment runs off lower-voltage power: the traditional 120 V.

According to research from data center power infrastructure vendor Eaton, a Hewlett-Packard Co. ProLiant DL380 Generation 5 server, for example, operates at 82% efficiency at 120 V, at 84% efficiency at 208 V and at 85% efficiency at 230 V. A data center could gain that incremental advantage by simply changing the input power and the power distribution unit (PDU) in the rack.
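Here is what those percentage points mean for a single server over a year. The 82% and 85% efficiencies are the Eaton figures quoted above; the server's load is an assumption.

```python
# The efficiencies come from the Eaton figures above; the load is assumed.
IT_LOAD_W = 400.0  # assumed power the server's internals actually consume

def wall_power(load_w: float, psu_efficiency: float) -> float:
    """Power drawn from the outlet for a given internal load and supply efficiency."""
    return load_w / psu_efficiency

draw_120v = wall_power(IT_LOAD_W, 0.82)
draw_230v = wall_power(IT_LOAD_W, 0.85)
annual_kwh_saved = (draw_120v - draw_230v) * 8760 / 1000.0
print(f"120 V draw: {draw_120v:.0f} W, 230 V draw: {draw_230v:.0f} W, "
      f"about {annual_kwh_saved:.0f} kWh saved per server per year")
```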


Liebert’s Peter Panfil agrees that users can get a 2% to 3% efficiency increase using 208 V versus 120 V.

“Data center managers say that virtually all of the equipment is coming in at 208 V, but in reality they have lots of equipment coming in at 120 V,” he said. “The IT people are more comfortable with 120 V, but there is no safety trade-off.”

McFarlane offers advice for data center pros exploring this approach in the future. “The first step is to look at your servers,” he said. “See if they auto-sense 208 V, and see what you can do about running 208 [V] to your cabinets instead of 120. There are plenty of PDUs that will deliver 208 and 120 to the same strip if you wire it right.”

Modular UPS system design. The biggest energy-loss item in the power chain is the uninterruptible power supply. A double-conversion UPS takes the AC power from the line and converts it to DC; the DC then charges the batteries and goes through a converter that changes it back to AC. All of these steps involve some loss of energy.

Flywheels: Old-School Green Technology

FLYWHEEL ENERGY STORAGE technology has been around for decades. The primary power source spins a heavy disk called a flywheel. This builds up kinetic energy based on the mass of the flywheel and the speed at which it rotates, which can be as fast as 54,000 rotations per minute. When the power goes out, even if it’s for a second or two, the flywheel releases the built-up kinetic energy back into the data center until power resumes or a backup generator turns on, which usually takes between 10 and 20 seconds.

In most operations, flywheels work side by side with batteries. Short outages can kill battery life, and according to the Electric Power Research Institute in Palo Alto, Calif., 98% of utility interruptions last less than 10 seconds. If a flywheel can sustain power for that time, it can prolong the life of a string of batteries by reducing how many times they are “cycled.”
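The ride-through claim follows from the kinetic energy of the spinning rotor, E = ½Iω². The sketch below uses an assumed low-speed steel rotor rather than a high-speed composite unit; real products publish their own energy and discharge ratings, so the numbers here are illustrative only.

```python
import math

# All rotor parameters are assumptions for illustration.
MASS_KG = 600.0     # assumed steel rotor mass
RADIUS_M = 0.25     # assumed rotor radius (modeled as a solid disk)
RPM = 7_700.0       # assumed operating speed for a low-speed flywheel

inertia = 0.5 * MASS_KG * RADIUS_M ** 2           # solid disk: I = 0.5 * m * r^2
omega = RPM * 2.0 * math.pi / 60.0                # angular speed in rad/s
stored_energy_j = 0.5 * inertia * omega ** 2      # E = 0.5 * I * w^2

CRITICAL_LOAD_KW = 250.0                          # assumed load carried during an outage
ride_through_s = stored_energy_j / (CRITICAL_LOAD_KW * 1000.0)
print(f"Stored energy: {stored_energy_j / 1e6:.1f} MJ, "
      f"ride-through at {CRITICAL_LOAD_KW:.0f} kW: about {ride_through_s:.0f} s")
# Ignores the minimum speed below which the unit can no longer regulate its output,
# so a real flywheel delivers somewhat less than this idealized figure.
```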


Vendors generally claim good efficiency for double-conversion UPS systems, but they usually publish efficiency ratings only at full load. Since companies tend to purchase large, traditional UPS systems with all the capacity anticipated for the future, they often run well below capacity for a number of years, if not forever, said McFarlane. “Also, good operating practice says you’ll never run the UPS at full load because it leaves you no headroom. And the efficiency curve on most of these UPSes drops like a rock as the load level goes down.”

McFarlane noted that the need for redundancy exacerbates this problem. Three 500 kilovolt-amp (kVA) UPSes, for example, would be able to deliver a maximum of 1,000 kVA in an N+1 redundant configuration, so if one unit fails or is shut down for service, the full design capacity is still available. Even at full design load, you’re running at only 67% of actual system capacity. Now put in two of these systems for a 2N configuration of N+1 UPSes per Tier 4 of the Uptime Institute’s Tier Performance Standards, and you have each UPS system running at less than 50% of its already less than 67% potential load.

Under these circumstances, the data center could easily run at 65% efficiency or less. The major UPS manufacturers have taken steps to improve this situation as much as possible, said McFarlane, and new products in the pipeline will address the problem even more effectively.
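The loading math behind that paragraph is worth seeing laid out. The module size and counts are the ones in the example above; the site load is assumed to equal the full design capacity, which is the best case.

```python
# Module size and redundancy follow the example above; the load is the best case.
MODULE_KVA = 500.0
MODULES_PER_SYSTEM = 3                       # N+1: two carry the load, one is spare
design_capacity_kva = (MODULES_PER_SYSTEM - 1) * MODULE_KVA   # 1,000 kVA usable

load_kva = design_capacity_kva               # assume the site draws full design load
n_plus_1_loading = load_kva / (MODULES_PER_SYSTEM * MODULE_KVA)
two_n_loading = (load_kva / 2.0) / (MODULES_PER_SYSTEM * MODULE_KVA)

print(f"Single N+1 system loading at full design load: {n_plus_1_loading:.0%}")
print(f"Each side of a 2N pair of N+1 systems:         {two_n_loading:.0%}")
# At roughly a third of nameplate load, a double-conversion UPS is well down its
# efficiency curve, which is how the overall figure can fall to 65% or less.
```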

In the meantime, modular UPS systems are one way to mitigate the problems associated with low efficiency. With careful planning, modular UPS systems can be configured and readily reconfigured to run closer to capacity. Some UPSes on the market are modular and operate in much smaller increments, such as 10 kW to 25 kW models.

A smaller data center that needs 80 kW of capacity, for example, can purchase nine 10 kW modules for 90 kW capacity. If one module breaks down, the system has enough headroom to cover it while running at far higher utilization.
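The utilization gain is the whole point of going modular, and it falls straight out of the numbers above. The oversized traditional system used for comparison is an assumption.

```python
# The 80 kW load and nine 10 kW modules come from the example above;
# the traditional-system capacity is an assumed comparison point.
LOAD_KW = 80.0
modular_capacity_kw = 9 * 10.0        # nine 10 kW modules, one of them headroom
traditional_capacity_kw = 2 * 160.0   # assumed pair of oversized UPSes in 2N

print(f"Modular system utilization:      {LOAD_KW / modular_capacity_kw:.0%}")
print(f"Oversized 2N system utilization: {LOAD_KW / traditional_capacity_kw:.0%}")
```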

THE DIRECT-CURRENT DEBATE

Engineering experts have lined up on both sides of the direct-current power debate in the data center, and the feud is as heated as the original between Thomas Edison and George Westinghouse.

The idea of powering data center equipment with DC has generated interest in the industry as a way to save energy, especially since the release of a 2006 LBNL study indicating that companies could see 10% to 20% energy savings if they adopt DC power over AC.

In a traditional system, the utility company sends electricity to a data center as AC, which is easier to distribute in that form over long distances. The AC is converted to DC and back to AC in the double-conversion UPS, and finally converted back to DC by each individual server’s power supply. In a DC system, there is only one conversion from the utility’s AC to the DC distribution plant and servers. Fewer conversions mean less energy is lost in the course of distribution.
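A quick way to see where the claimed savings come from is to multiply the conversion efficiencies along each path. The stage efficiencies below are illustrative assumptions, not figures from the LBNL study; the point is that every conversion stage compounds the losses.

```python
# Stage efficiencies are assumptions for illustration only.
AC_CHAIN = [0.94, 0.97, 0.90]   # UPS double conversion, PDU transformer, server power supply
DC_CHAIN = [0.96, 0.92]         # one rectification to DC, then the server's DC-DC stage

def chain_efficiency(stages):
    """End-to-end efficiency of a series of conversion stages."""
    efficiency = 1.0
    for stage in stages:
        efficiency *= stage
    return efficiency

print(f"AC distribution end to end: {chain_efficiency(AC_CHAIN):.1%}")
print(f"DC distribution end to end: {chain_efficiency(DC_CHAIN):.1%}")
```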

But the road to DC is rocky; there are myriad potential pitfalls:

• You can’t just go plugging servers into racks with DC. Every time you plug something in, it changes the current draw. In fact, experts say you’ll need an electrical engineer on staff to deal with “moves-adds-changes” in a DC-powered data center.

• A DC UPS can cost 20% to 40% more than AC.

• Some users say DC equipment is scarce. Cisco Systems, Rackable Systems and Sun Microsystems offer a lot of DC products, but HP, IBM and Hitachi Data Systems are lacking.

In researching this article, one UPS manufacturer suggested that LBNL compared cutting-edge DC with outdated AC technology. But William Tschudi, a project leader at LBNL and a longtime data center efficiency advocate, put that rumor to rest.


“We were accepting the equipment vendors loaned us,” Tschudi said. “We got their best in class, and [DC power] still saw 10% savings against a very efficient [AC] UPS system.”

Nonetheless, the general consensus from UPS vendors is that before reverting to DC to save just a bit more, there are easier ways to save energy in data centers. Time, effort and money can be better spent elsewhere.

Tschudi conceded that there are issues concerning voltage sags, connections and grounding that worry data center managers. But companies are overcoming these problems in other applications, and as the price of power skyrockets, more data center managers and vendors may explore DC power technology.

Prioritizing. For data center managers who are planning to implement a green strategy, it’s important to have short-, mid- and long-term goals. When it comes to mechanical infrastructure efficiency, the alternatives range from the mundane to the experimental.

Near-term strategies include auditing hot-aisle/cold-aisle implementation or improvement with containment. Users should also consider regular raised-floor cleaning and maintenance. Another tactic is to ensure that the voltage from the PDU to the server runs at 208 V and not 120 V. These approaches are low or no cost.

In the midterm, data center managers should investigate high-efficiency supplemental cooling units for high-density server deployments, and smaller UPS systems for modular growth. And over the long term—when new construction is warranted, more energy-efficiency data is available and standards are in place—companies should investigate economizers, liquid cooling and DC power.

Matt Stansberry is the executive editor of SearchDataCenter.com. Since 2003, Stansberry has reported on the convergence of IT, facility management and energy issues. Previously, Stansberry was the managing editor of Today’s Facility Manager magazine and a staff writer at the U.S. Green Building Council. He can be reached at [email protected].
