A Guide to Energy Savings in Data Centres
Data Centres Special Working Group Spin I
Table of Contents
1. Introduction
1.1 Systems Covered
2. Design Flaws
3. Savings Initiatives
3.1 The Actual Demand
3.2 Recommendations
4. Quantifying the Impact of Savings
4.1 Reduce bypass and mixing of cold supply air: impact on temperature demand
4.2 Cold supply air temperature demand: impact on COP
4.3 Cold supply air flow demand: impact on energy consumption
4.4 Free cooling of chilled water: impact on energy consumption
4.5 Free cooling with outside air, HVAC-style: humidification impact on energy consumption
5. References
1. Introduction
This study covers traditional air-cooled data centres and two of the most common free-cooling methods. It describes the most common design flaws and their impact on energy consumption, outlines the savings that various initiatives can deliver, and finally quantifies the impact of selected savings initiatives in tables and diagrams.
1.1 Systems Covered
This report covers traditional air-cooled data centres (or rooms with electronic equipment in general)
with at least one row of racks; air is recirculated and conditioned to temperature and humidity, and
fresh-air supply is minimal. It does not deal with small rooms with a few PC-servers and a split cooling
unit, or with energy savings from virtualisation, new semiconductor or software technology.
Glossary of terms
1. Module (electronic): could be a blade server, a switch, a communication module, a power supply, a disk module, etc, usually mounted vertically together with several other modules in a crate.
2. Crate: contains several modules. The crate is sometimes also called a rack or a bin. The mounting brackets of a crate are usually 19 inches apart.
3. Unit: a bigger module mounted directly into the rack, often 19 inches wide (e.g. ‘non-blade’ computers, power supplies, fan units).
4. Rack: contains several crates or units. Racks are often much wider than the crates/units to provide space for cables and air at the sides. Some racks are open (or have a grid door) front and back to allow horizontal airflow; others are closed with doors to provide confined vertical airflow.
5. Aisle: the space between two rows of racks (or between one row and a wall).
6. Cold air supply: the air produced by the CRAC (see below).
7. Cold air supply temperature and relative humidity (RH): the condition of the air immediately after leaving the cooling coil and humidifier in the CRAC.
8. CRAC: Computer Room Air Conditioner.
9. DX: a CRAC using direct expansion on the cooling coil, usually a standalone unit with a built-in compressor.
10. Cooling air: the air going into the various modules and units to cool the electronic components.
11. Cooling air temperature and relative humidity (RH): the condition of the air just before it enters the various modules and units.
12. Heated air: the air that comes out of the modules and units.
13. Hot return air: the mix of air that is returned to the CRAC for cooling.
14. Elevated floors: contain cables and most often also provide the ‘duct’ for the cold air supply.
15. Cold aisle: an aisle where an attempt is made to keep the cold supply air confined, so that it does not mix with the heated air, in order to obtain the coldest possible cooling air for the modules and units.
16. Room temperature: a rather ill-defined term, but often refers to the temperature of the hot return air.
Figure 1: Data centre air-cooling components
The most common types of free-cooling systems
There are several options for free cooling. Five of the most common methods are described below:
1. Cooling Tower. In systems with chilled-water-cooled CRAC units, a cooling tower (dry or wet) is often ‘cut’ into the chilled water loop via a bypass valve; when the outside temperature is low, the compressors are stopped and the tower valve is opened. In DX units, free cooling is often established with an extra (water) cooling coil inside the DX. This coil is connected to the same outside cooling tower that also cools the condenser. Note that different types of towers exist (open, spray, dry, etc) and that they all have different advantages and disadvantages (e.g. water cost, noise).
Figure 2: Chilled water system with cooling tower (components: evaporator, compressor, expansion valve, condenser, pump, CRAC units, and a cooling tower for free cooling)
2. Free Cooling with Direct Supply of Outside Air. One method is to build a balanced HVAC system that can modulate from full fresh air to full recirculation. Ductwork is needed, together with good filters, a cooling coil, dehumidifier, humidifier, dampers, and control equipment. The extra power needed for humidification and dehumidification has to be considered, and filter maintenance will be an extra cost. The higher complexity and the need for extra space must also be considered.
Figure 3: A balanced HVAC system (supply and extract fans with VSDs, G3 and F7 filters, serving the ventilated rooms)
3. Free Air Cooling. A very simple free-air-cooling option: fresh air is used only when the outside temperature and humidity are within the acceptable limits. If not, it is shut off and the conventional CRAC takes over. Filters and a control system are still needed, but the complexity is much lower than with a balanced HVAC system.
4. Evaporative Cooling. In the last few years a new type of free cooling has been installed in the US (and at least one installation in Denmark). The principle is called evaporative cooling, and is based on cooling with water without the aid of compressors. Outside air is drawn over a big water-filled sponge or through a water curtain, and is cooled in this way by evaporation of the water. At the same time the air is humidified, and to some extent particles are also caught by the water. This principle has been known for many years and used in many industries. The drawbacks are water consumption, possible infections (e.g. Legionella) and rather bulky installations. The obvious benefit is very low electrical power consumption, covering only fans, pumps and the water treatment plant. Note that this principle is useful for a few degrees of cooling only, and that another form of cooling will usually be necessary (chilled water, DX, etc).
5. Liquid Cooling. The last method worth mentioning is liquid cooling. Computers and other electronic equipment with liquid cooling have existed for many years (e.g. CDC and other vintage super-computers). Heat pipes to transport the heat out of a closed module were seen many years ago in fighter-aircraft radios. A mix of these technologies could result in very energy-efficient server cooling, using direct wet-tower free cooling for most of the year. The high power density in future servers will probably revive these systems. With these systems the issues of cooling temperature are the same, and the possibilities for free cooling should still be used. There is already a move back to this technology: IBM’s latest high-performance computer has liquid cooling directly to the heat-producing chips. It is worth noting that water has a far superior heat-removal capacity compared with air (approx. 3,500 times greater), hence the renewed interest.
Establishing free cooling gives savings only on the energy bill; the capital investment will always be larger, since the compressors must still be installed to cover the full cooling demand on very warm days when free cooling is not possible.
2. Design Flaws
There are many reasons why data centres are not designed with efficient cooling systems. The most
common reasons, mainly relating to the cold air supply, are listed below:
• Cold supply air passes directly to the return without ever reaching the cabinets. Further reading in section 3; recommendation 5
• Cold supply air is mixed with heated air around cabinets. In order for all units to receive the demanded cooling air temperature, the supply air temperature is set very low. Further reading in section 3; recommendation 5
• Hot and cold aisles are not functioning correctly: grids in the floor are covered, racks are not the same height, cable holes are too large, obstructions in the ceiling cause turbulence, air distribution is uneven, etc. Further reading in section 3; recommendation 5
• Cold supply air passes through holes in cabinets without being heated. A large part of the cold air supply never reaches the server units, a lot of air is pumped around for no reason, and the excess fan power is added to the room load as well as to the total electricity consumption. Further reading in section 3; recommendation 7
• Air passes through several units, getting hotter and hotter, so the cooling air temperature for these units must be very low to provide the last unit with an acceptable temperature. Further reading in section 3; recommendation 5
• The cold air supply temperature is dictated by only a few units (see above), where the rest could have operated at a higher temperature. Further reading in section 3; recommendations 1 to 3
• Actual demands from unit suppliers are not followed or questioned. Suppliers often specify a wider span for the cooling air going into the unit than they do for the ‘not very well defined’ room temperature. Further reading in section 3; recommendations 1 to 3
• Uneven distribution of cold supply air under the elevated floor (several issues) results in some units getting too little cooling air. Airflow is increased and temperature decreased, both leading to excess energy consumption. Further reading in section 3; recommendations 5 and 6
• Demand for latency in the cooling system. If cooling fails, operators must have time to shut down in a controlled manner. Since air is not a good storage medium for cooling, this leads to excessively low cold supply air temperatures and large room volumes in order to get just a few minutes’ latency. Further reading in section 3; recommendation 14
• Room and cooling are designed for other purposes. An excessively low chiller coil temperature is dictated by other demands not related to the server room, e.g. office ventilation or process cooling. Further reading in section 3; recommendation 5
• Excessively low coil temperatures in CRAC units. Further reading in section 3; recommendation 10
• Room temperature is not uniform. If recirculation is very high, the mixing of air with different temperatures in the room becomes more efficient, and thus the temperature becomes uniform; in this case one can talk about a single ‘room temperature’. With low air change rates, many different temperatures can be measured. This is probably reminiscent of a time when all sorts of ‘odd boxes’ with different shapes were put together in a room. Further reading in section 3; recommendation 1
• Room humidity is not well defined.
• Unnecessary power load on the room. Further reading in section 3; recommendation 9
• Poor coefficient of performance (COP) for built-in compressors in DX units (several issues). Further reading in section 3; recommendation 5
• Poor COP for chilled water systems (several issues): all the usual issues with cooling systems, such as poor maintenance of condensers and filters, low focus on energy in the design phase, a poor control system and a lack of energy monitoring. Further reading in section 3; recommendations 12 and 13
• Energy consumption is not an issue at all; lack of meters and energy management.
3. Savings Initiatives
3.1 The Actual Demand
A common argument is that little can be done to reduce cooling energy consumption: the energy that goes into all the electronic modules in a given installation must be transported out again (energy balance) in order to maintain temperature equilibrium at a certain level, and this equilibrium level is of little importance if the room is fairly well insulated. The argument is correct regarding the energy balance, but the savings potential lies in how the cooling is provided, and in this context the temperature and airflow are of considerable importance. If good data about the actual demand can be provided, the safety margins outlined in the previous chapter can be minimised.
As in all work with energy savings, it is important to identify the actual demand. To do this, one has to look at the individual modules that need cooling. Inside every unit, power is dissipated from one or more hot components (usually CPUs) by convection from cooling fins to the surrounding air. The unit will overheat if this heated air is not removed, and in most cases small fans are installed to ensure the correct airflow.
In some (older) units with low power dissipation, the air is designed to move by natural flow without
fans. As the air passes through the unit, it gets hotter for each component it meets, and the optimum
airflow and cooling air temperature for the incoming air is in principle determined by the demand
from the last and hottest component in the unit. Increased airflow can, to a certain extent,
compensate for increased temperatures in general.
Electronics designers have focused a lot on these matters, because, from the early days of the
transistor age, the electronics industry has combatted heating problems. For each unit found in a
server room it should be possible to find precise data for the airflow and temperature necessary to
keep the unit operational for the guaranteed period.
With a population of several different types of units, the ideal common air cooling demand should be:
• Cold air supply temperature equal to the lowest cooling air temperature demanded by any unit
• Cold air supply flow equal to the sum of all unit demands
Since no cooling installation is ideal, both demands must include a safety margin, mainly because some of the airflow will short-circuit and never reach the units, and some of the cold air will be heated in advance, by mixing with heated air or by passing from one unit to another. Another common reason for adding a safety margin is the need for a response time if the cooling fails. A further safety margin must also be considered to deal with the situation where doors are left open for longer periods (e.g. unintentionally, or for service or rebuilding). The main question is how big all these safety margins have to be: energy consumption can be kept to a minimum if the margins can be kept low.
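A minimal sketch (in Python) of how the ideal demand above could be aggregated from per-unit data before margins are applied is shown below. The module figures and margin values are hypothetical examples, not supplier data:

# Sketch: aggregate an ideal cold-air-supply demand from per-unit data.
# All module figures below are hypothetical examples, not vendor data.
modules = [
    # (name, highest acceptable cooling-air temperature in °C, required airflow in m3/h)
    ("blade crate A", 27.0, 900),
    ("storage shelf B", 25.0, 600),
    ("network switch C", 30.0, 150),
]

temp_margin_k = 1.0   # chosen safety margin on temperature, K
flow_margin = 0.10    # chosen safety margin on flow, fraction

supply_temp = min(t for _, t, _ in modules) - temp_margin_k
supply_flow = sum(q for _, _, q in modules) * (1 + flow_margin)

print(f"Cold air supply temperature setpoint: {supply_temp:.1f} °C")
print(f"Cold air supply flow setpoint: {supply_flow:.0f} m3/h")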
3.2 Recommendations
To establish an energy-efficient server room, the actions listed below are recommended, in the
following order:
1. Find the actual cooling air temperature demand. Cold air supply temperature should equal the lowest cooling air temperature demand set by any module/unit (plus any safety margin).
2. Challenge the cooling air temperature demand. Do not accept a vague temperature demand such as ‘room temperature’. Suppliers of any electronic module must know and specify what the temperature of the cooling air going into the module must be, and how much air is needed.
o There should be a greater emphasis on measuring and monitoring the temperature of the air onto the server, rather than the prevailing practice of monitoring the temperature of the return air.
3. Disk drive temperature demands should be challenged. See Google’s reference paper on reliability, which indicates much less sensitivity to heat than normally assumed.
4. Find the actual cooling airflow demand. Cold air supply flow should equal the sum of the cooling airflow demands from all modules (plus any safety margin). Suppliers of any electronic module must know and specify what the flow of cooling air going into the module must be.
5. Direct all cold supply air to racks (several solutions). Make sure that all cold air supply goes directly, and only, to the individual modules/units, without being preheated or mixed with heated air. This can be done in many ways, but commonly by establishing closed cold aisles, with roof and doors, or by ducting the air directly to the racks. Closed cold aisles should, together with the space under the elevated floor, form a plenum for the cold supply air where the temperature is almost even and the pressure is sufficient everywhere.
o Closed cold aisles (with roof and doors) can be difficult to establish if racks are of different heights. Empty ‘dummy’ racks, boxes or panels can cover the space from the rack to the roof of the aisle.
o The underfloor void should be at least 600 mm in height and be free from cabling infrastructure and other potential obstacles to the movement of air.
6. Minimise pressure drop in supply- and return-air paths (several solutions). Establish an even cold supply air distribution under the elevated floor, and adjust the pressure drop across the inlet grids in the floor according to the demand. Make sure that cables or building details do not block the airflow.
7. Stop any bypass of supply air to return air (several solutions). Make sure that no cold supply air escapes through unused module slots, missing crates or units, cable holes, or spaces between rack side panels and crate/unit fixture rails. Use ‘blind’ panels, sheet metal/plastic, cardboard, foam, etc to fill the holes.
8. Take precautions concerning operating and maintenance errors (several solutions). With closed cold aisles, precautions must be taken to deal with the situation where the doors are left open unintentionally for longer periods. An alarm system on the doors should be part of the solution, rather than building in a safety margin on temperature and flow.
9. Remove any unnecessary power load from the room (several solutions). Remove any unnecessary power load from the room by shutting down unused equipment (e.g. displays, service computers, lights), and by moving equipment that is not sensitive to heat into other non-air-conditioned rooms (e.g. UPS electronics, DX compressors, fan motors and drives, switchboards). A few ideas are listed below:
o Keep the UPS batteries in the server room [1] and remove the UPS. Under normal conditions batteries do not produce any heat, but their lifetime and performance are improved when they are kept at a constant, moderate temperature. The UPS itself is often not sensitive to heat, and can be put in an adjacent room with free air cooling (at least in northern countries) together with switchboards, etc.
o The power delivered to the fan itself will always be an extra load on the room, but the smaller fraction (10 to 15%) that is dissipated from the motor and VSD could be avoided if these were placed outside the room, or isolated from the airstream in the CRAC and ventilated from an adjacent room. Using filters and coils with a low pressure drop can reduce the fan power by a few percent.
o In standalone DX units the compressor is most often placed inside the cabinet in the air-stream, giving an extra load to the room. If the compressor is placed outside, it can be naturally cooled for most of the year.
[1] This is a trade-off, as server room space is sellable footprint.
10. Increase the cold air supply temperature as a result of the other solutions. When all the above actions have been taken, the cold air supply temperature can (hopefully!) be increased. The cooling coil temperature must now also be increased according to the new limits, and a better COP can be obtained together with increased possibilities for free cooling. With chilled water systems, make sure that the chiller coil temperature is not dictated by other, non-relevant demands such as office ventilation or process cooling.
11. Decrease the cold air supply flow as a result of the other solutions. When all the above actions have been taken, the cold air supply flow can (hopefully!) be decreased, resulting in lower fan power consumption together with lower cooling power needed to remove the fan heat.
12. Improve chilled water COP (a rather big issue with several solutions). Chilled water systems
and DX units must be checked for acceptable COP, and maintained, repaired or changed
until a good COP is reached. This involves a broad field of skills and knowledge, and is best
performed by a cooling or energy specialist.
13. Use free cooling (a rather big issue with several solutions). Free cooling should be
considered if the climate is suitable. See note on free cooling below.
14. Establish a cooling buffer. If the demand for a cooling buffer in case of cooling shutdown
imposes a very high safety margin on the temperature, other solutions should be
considered: higher reliability or redundancy on the cooling systems, emergency fans for
outside air, higher thermal capacity in the building, or liquid nitrogen as used in the food
industry. Please note that the cooling buffer in the recirculated air itself is not influenced by
increasing the cold supply air temperature, if the average temperature going to the servers is
kept the same, as obtained with improved air distribution.
15. Regularly clean the chiller coils. Where permissible, chiller coils should be power-washed
every 6 months to prevent a build-up of dirt, which reduces their efficiency.
4. Quantifying the Impact of Savings
This study provides server-room users with tools to calculate the possible savings from implementing the initiatives described. These tools are in the form of diagrams and tables calculated for various steps in the savings process. All calculations are based on the following key data, so the results can be scaled:
• Cold supply air flow: 1 m3/sec (= 3,600 m3/hour)
• Cold supply air temperature: 19°C
• Heated return air temperature (out of servers): 24°C
• Total power impact on air: 6.1 kW
• Weather: Dublin Airport 2008, 8,760 hours
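As a quick consistency check (using an air density of about 1.2 kg/m3 and a specific heat of about 1.005 kJ/kg·K, values assumed here rather than stated in the report), 6.1 kW carried by 1 m3/sec of air corresponds to a temperature rise of roughly 6.1 / (1.2 × 1.005) ≈ 5 K, which matches the 19°C supply and 24°C return temperatures above.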
The tools are divided into the following sections:
4.1 Reduce bypass and mixing of cold supply air: impact on temperature demand
4.2 Cold supply air temperature demand: impact on COP
4.3 Cold supply air flow demand: impact on energy consumption
4.4 Free cooling of chilled water: impact on energy consumption
4.5 Free cooling with outside air, HVAC-style: humidification impact on energy consumption
4.1 Reduce bypass and mixing of cold supply air: impact on temperature demand
To calculate the impact on the temperature demand, a model has been produced that, in a simplified manner, describes the two ways in which cold supply air is wasted. The input and output data for the ideal situation are shown below, together with a drawing of the model:
Calculating the total supply air temperature demand (ideal case)

Total power input                    6.1 kW       demand
Temperature from server, unmixed     24.0 °C      demand
Total supply-air flow                3,600 m3/h   chosen according to power

Not utilized air flow
Not utilized air flow                0 %          input
Not utilized air flow                0 m3/h       calculated
Utilized air flow to server area     3,600 m3/h   calculated
Air temperature from server area     24.0 °C      output from mixing calculation
Temperature increase in servers      5.0 °C       calculated
Total supply-air temperature         19.0 °C      calculated
Total return-air temperature         24.0 °C      calculated
Total air temperature difference     5.0 °C       calculated
Total power output to chillers       6.1 kW       calculated

Mixing around servers
Unwanted mixing around server        0 %          input
Air flow to server                   3,600 m3/h   output from utilized air flow calculation
Temperature increase in server       5.0 °C       calculated
Temperature to server, mixed         19.0 °C      calculated
Temperature to server, unmixed       19.0 °C      calculated
Temperature from server, mixed       24.0 °C      calculated

Model drawing (ideal case): the CRAC delivers 3,600 m3/h at 19.0 °C to the server area; nothing bypasses the servers (0% not utilized) and there is no mixing (0% mix); the 6.1 kW power supply heats the air to 24.0 °C, which returns to the CRAC and the cooling system.
Different sets of values for unwanted mixing and unused air have been put into the model; the results
are shown in the following diagram:
Diagram: supply air temperature (°C) as a function of % unwanted mixing around servers (0 to 90%), calculated for different percentages of not-utilized airflow (0% to 80%). The supply-temperature scale runs from +25 °C down to −25 °C.
Use the diagram to visualise the effect of the non-ideal air distribution, or in a given system to read
what temperature could be obtained if airflow is improved.
Model for a situation with 10% unwanted mixing and 10% unused air: the CRAC must now deliver 3,600 m3/h at 17.8 °C; 360 m3/h (10%) bypasses the servers unused, leaving 3,240 m3/h for the server area; with 10% mixing the air reaches the servers at 18.4 °C and leaves them at 24.0 °C; the air leaves the server area at 23.4 °C and returns to the CRAC at 22.9 °C, from which the cooling system still removes 6.1 kW.
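A minimal sketch (in Python) of the simplified bypass/mixing balance behind these figures is given below. The air density and specific heat, and the exact way the bypass and mixing fractions are applied, are assumptions chosen here to reproduce the numbers above, so small rounding differences remain:

# Sketch of the simplified bypass/mixing model. Assumptions: air density
# ~1.2 kg/m3 and specific heat ~1.005 kJ/kg.K; 'bypass' is the fraction of
# supply air that never reaches the servers; 'mixing' is the fraction of
# heated exhaust air mixed into the air entering the servers.
RHO_CP = 1.2 * 1.005  # kJ per m3 per K

def required_supply_temp(power_kw, flow_m3h, t_from_server, bypass, mixing):
    # Supply temperature the CRAC must deliver so the server exhaust does
    # not exceed t_from_server, for given bypass and mixing fractions.
    utilised_m3s = flow_m3h * (1.0 - bypass) / 3600.0
    dt_server = power_kw / (RHO_CP * utilised_m3s)   # temperature rise over the servers
    t_to_server = t_from_server - dt_server          # air needed at the server inlet
    # the server inlet air is a mix of supply air and recirculated exhaust air
    t_supply = (t_to_server - mixing * t_from_server) / (1.0 - mixing)
    return t_supply, t_to_server

# 0% bypass, 0% mixing: about (18.9, 18.9) °C, close to the 19.0 °C ideal case
print(required_supply_temp(6.1, 3600, 24.0, 0.0, 0.0))
# 10% bypass, 10% mixing: about (17.8, 18.4) °C, matching the model above
print(required_supply_temp(6.1, 3600, 24.0, 0.10, 0.10))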
4.2 Cold supply air temperature demand: impact on COP
The temperature that the CRAC must produce is closely related to the obtainable chiller COP (coefficient of performance): a higher temperature results in a higher COP.
E.g. 10°C gives COP = 5.8 and 14°C gives COP = 7.5.
Diagram with measured COP (ammonia, without pumps and fans) at different cooling
temperatures, for the same type of system
This diagram is only an example, and should only be used as a relative measure for obtainable savings
with changing temperature.
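Assuming the compressor’s electrical input is simply the cooling load divided by the COP (an approximation used here for illustration), the example above corresponds to a saving of roughly 1 − 5.8/7.5 ≈ 23% in compressor energy for the same cooling load when the cooling temperature is raised from 10°C to 14°C.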
The condensation temperature also has an impact on the COP. Below, the COP is shown for two different sizes of compressor (Carrier) at different chilled water and condensation temperatures:
Diagram: Carrier unit 30RA-010 (25 kW cooling): COP (0 to 4.5) at condensation temperatures from 25 to 50 °C, for chilled water temperatures from 4.4 to 15.6 °C.
Diagram: Carrier unit 30RA-055 (200 kW cooling): COP (0 to 4.5) at condensation temperatures from 25 to 50 °C, for chilled water temperatures from 4.4 to 15.6 °C.
4.3 Cold supply air flow demand: impact on energy consumption
Fan power is closely related to the airflow (and less to pressure). As a rule of thumb, the fan power
increases with the power of 2.5 for a given system (turbulent resistance) as the flow is increased. All
the fan power is dissipated in the room, thus loading the cooling even more.
Diagram showing ‘raw’ fan power consumption as a function of reduced airflow
Power is calculated for a system with a total pressure drop of 1000 Pa at 3600 m3/h.
The power shown is what the fan delivers to the air, and does not include losses in fan, transmission,
motor and variable speed drive.
To calculate the total fan power, the power delivered to the air must be divided by the total efficiency.
The total efficiency is the product of the individual efficiencies for fan, transmission, motor and
frequency converter. Total efficiency could be as low as 0.35. Be aware that the individual efficiencies
vary across the entire power range for the components.
Note: The overall efficiency should be calculated as shown below. The overall machine efficiency
should be requested from the vendors of fans and other equipment in order to select the most
efficient one. It is quite normal that suppliers will quote the efficiency point at which the fan is
selected (say 70%), but if the rest of the system is inefficient then the fan selection on its own is
largely meaningless.
Based on the diagram above, the typical total power input for a well-designed system at 3,600 m3/h
(= 1 m3/sec) could be:
0.65 kW / (0.70 fan × 0.95 transmission × 0.85 motor × 0.97 VSD) = 1.19 kW
The specific fan power (SFP) will in this case be 1.19 kW/(m3/sec), which is fairly good.
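The fan-power relations above can be expressed as a small sketch (in Python). The 2.5 exponent, the 0.65 kW of air power at full flow and the component efficiencies are the values quoted in this section; the constant-efficiency assumption is a simplification, since the individual efficiencies in reality vary across the operating range:

# Sketch: 'raw' fan power versus airflow for a fixed system, and total
# electrical input including fan, transmission, motor and VSD losses.
def air_power_kw(flow_m3h, ref_flow_m3h=3600.0, ref_power_kw=0.65, exponent=2.5):
    # power delivered to the air, scaled with flow^2.5 for a fixed system
    return ref_power_kw * (flow_m3h / ref_flow_m3h) ** exponent

def total_fan_input_kw(flow_m3h, eta_fan=0.70, eta_trans=0.95, eta_motor=0.85, eta_vsd=0.97):
    # electrical input = air power divided by the product of component efficiencies
    return air_power_kw(flow_m3h) / (eta_fan * eta_trans * eta_motor * eta_vsd)

full = total_fan_input_kw(3600)          # about 1.19 kW, i.e. SFP ~1.19 kW/(m3/sec)
reduced = total_fan_input_kw(0.8 * 3600) # cutting flow by 20% cuts fan power by ~43%
print(f"{full:.2f} kW at 3,600 m3/h, {reduced:.2f} kW at 2,880 m3/h")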
4.4 Free cooling of chilled water: impact on energy consumption
The temperature that the CRAC must produce is closely related to the number of hours throughout the year during which free cooling is possible. Free cooling can have a COP of 30 to 50. The higher the cold supply air temperature can be set, the more hours per year free cooling can be used.
Outside temperature (and humidity) data from Dublin Airport have been collected for 2007 and 2008.
The data have been trimmed for errors and sorted by decreasing magnitude. A diagram has been
drawn up whereby the number of free-cooling hours per year for a given temperature can be
calculated – e.g. 14°C in Dublin 2008 reads 1,681 hours, which means that a total of (8,760 – 1,681) =
7,079 hours per year can be cooled with outside air.
Diagram with annual outside air temperatures for 2007 and 2008 (sorted by decreasing magnitude).
How to read the diagram: read across from a given temperature to the curve, then read down to the hours axis to see the number of hours for which free cooling is NOT available. Subtract this value from 8,760 (the total number of hours in a year) to obtain the number of hours for which free cooling is available.
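The hour count can be reproduced from an hourly temperature series with a short sketch (in Python). The temperature list used here is only a stand-in so the example runs; the real calculation would use the 8,760 measured values for Dublin Airport:

# Sketch: estimate annual free-cooling hours from hourly outside temperatures.
hourly_temps = [5.0, 9.5, 13.0, 16.5, 11.0, 8.0] * 1460  # placeholder for 8,760 measured values

def free_cooling_hours(temps, limit_c):
    # hours in which the outside temperature is at or below the free-cooling limit
    return sum(1 for t in temps if t <= limit_c)

limit = 14.0  # e.g. the highest outside temperature at which free cooling still works
hours = free_cooling_hours(hourly_temps, limit)
print(f"Free cooling available for {hours} of {len(hourly_temps)} hours "
      f"({len(hourly_temps) - hours} hours need the chillers)")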
4.5 Free cooling with outside air, HVAC-style: humidification impact on energy consumption
The energy consumption for cooling and humidifying outside air has been calculated for two room air temperatures (19°C and 22°C), each with varying degrees of humidification, from 60% RH down to 10% RH.
In the diagrams, tH (temperature High limit) represents the highest acceptable room air temperature after the air has been subjected to the 6.1 kW load on the room, including 0.65 kW of fan power. RHL (Relative Humidity Low limit) represents the lowest acceptable humidity in the room at the tH temperature. Other limits mentioned in the diagrams are default values with no influence on the results.
Calculations for both diagrams are made at 3,600 m3/h (= 1 m3/sec), with weather profile for Dublin
Airport in 2008, and with 100% fresh air (no recirculation). The pink line represents humidification
energy, and the black line humidification + cooling energy. Both cooling and humidification are ‘raw’
values, not including COP or any other efficiency factors. Humidification is calculated as steam
humidification.
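A sketch (in Python) of how the steam-humidification part of such a calculation could be set up is shown below. The psychrometric relations (a Magnus-type saturation-pressure formula), the figure of about 0.75 kWh per kg of steam and the weather sample are all assumptions made for illustration, not values taken from this report:

# Sketch: steam-humidification energy when running on 100% outside air.
import math

P_ATM = 101325.0           # Pa
AIR_FLOW_KG_S = 1.2        # roughly 1 m3/sec of air at ~1.2 kg/m3
STEAM_KWH_PER_KG = 0.75    # assumed energy use of an electric steam humidifier

def p_sat(t_c):
    # saturation vapour pressure in Pa (Magnus approximation)
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def humidity_ratio(t_c, rh_pct):
    # kg of water vapour per kg of dry air
    pv = rh_pct / 100.0 * p_sat(t_c)
    return 0.622 * pv / (P_ATM - pv)

def humidification_kwh(outside_hours, room_t=19.0, rh_low=40.0):
    # annual steam energy to keep the room above rh_low on 100% outside air;
    # 'outside_hours' is a list of hourly (temperature, RH) pairs
    w_min = humidity_ratio(room_t, rh_low)
    kwh = 0.0
    for t_out, rh_out in outside_hours:
        deficit = max(0.0, w_min - humidity_ratio(t_out, rh_out))   # kg water per kg dry air
        kwh += deficit * AIR_FLOW_KG_S * 3600.0 * STEAM_KWH_PER_KG  # energy for one hour
    return kwh

# placeholder weather (8,760 hourly samples) just so the example runs
weather = [(5.0, 80.0), (10.0, 70.0), (2.0, 90.0), (15.0, 60.0)] * 2190
print(f"{humidification_kwh(weather):.0f} kWh/yr of humidification energy")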
Diagram: Server room example, scenarios with changing humidification limit, Dublin 2008 weather, at an airflow of 3,600 m3/h. Annual energy (kWh/yr, 0 to 30,000) for tH = 19°C with the humidification limit RHL varied from 60% down to 10% (tL = 0, RHH = 80, Proom = 6.1 kW in all scenarios).
Diagram: Server room example, scenarios with changing humidification limit, Dublin 2008 weather, at an airflow of 3,600 m3/h. Annual energy (kWh/yr, 0 to 30,000) for tH = 22°C with the humidification limit RHL varied from 60% down to 10% (tL = 0, RHH = 80, Proom = 6.1 kW in all scenarios).
Note: When cooling with outside air, always remember that fan power is related to pressure drop.
Therefore ductwork and fittings must be minimised, together with air velocity, in order to minimise the fan
power. If this is not done properly, the result can be that free cooling requires excessive fan power that
more or less outweighs the savings on the chillers.
5. References
European Commission, Integrated Pollution Prevention and Control (IPPC). Reference Document on
the Application of Best Available Techniques to Industrial Cooling Systems. December 2001.
Pinheiro, E., Weber, W-D. & Barroso, L.A. Failure Trends in a Large Disk Drive Population. Google Inc.,
1600 Amphitheatre Pkwy, Mountain View, CA 94043. Appears in the Proceedings of the 5th USENIX
Conference on File and Storage Technologies (FAST’07), February 2007:
http://static.googleusercontent.com/external_content/untrusted_dlcp/labs.google.com/da//papers/
disk_failures.pdf
Rocky Mountain Institute (RMI). Design Recommendations for High-Performance Data Centers.
Integrated Design Charrette. Conducted 2–5 February 2003. Published by Rocky Mountain Institute,
1739 Snowmass Creek Road, Snowmass, CO 81654-9199, USA: www.rmi.org/sitepages/pid626.php.