James Cooper from ebm-papst takes a look at precision air conditioning equipment in computer rooms and data centres, with a particular focus on cooling in data centres – how the importance of cooling has developed in recent years, what can be done to improve cooling systems and reduce energy, and what the future holds for cooling.
Cooling computer rooms and data centres has been a hot topic since the mid-1950s, when there was a need to control the temperature and humidity around punched cards and magnetic tape heads. Even though technology has progressed at an incredible rate, the same cooling methods and guidelines are still adhered to, keeping return air temperatures in the low 20s °C because this was deemed necessary and safe for the equipment and servers within the room.
An average data centre in the UK has a PUE (Power Usage Effectiveness) of 2.5, which means that only 40 per cent of the energy used is available for the IT load. Unless it is a modern data centre, most of the remaining energy goes towards cooling, which in the eyes of most IT people is a necessary evil. However, it doesn't have to be that way: cooling methods and technology have moved on significantly, and there are many ways to make a data centre more efficient.
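The arithmetic behind that 40 per cent figure follows directly from the definition of PUE (total facility power divided by IT power). A minimal sketch; the function name is illustrative, not from any standard library:

```python
def it_fraction(pue: float) -> float:
    """Fraction of total facility energy that actually reaches the IT load.

    PUE = total facility power / IT equipment power,
    so the IT share of the energy bill is simply 1 / PUE.
    """
    return 1.0 / pue

# A typical UK data centre at PUE 2.5 delivers only 40% of its energy to IT
print(f"{it_fraction(2.5):.0%}")  # -> 40%
# A modern facility at PUE 1.2 delivers around 83%
print(f"{it_fraction(1.2):.0%}")  # -> 83%
```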
If you look at the plethora of cooling options for data centres, it is no wonder that people struggle to decide what is best for them. Traditional CRAC (Computer Room Air Conditioning) units, which tended to sit against the walls of the room or in a corridor blowing under the floor, have more recently been joined by in-aisle and in-rack units that take the cooling closer to the server, with DX or chilled water offering higher density cooling capabilities. Direct and indirect fresh air cooling has also been seen as a viable option for the UK, since the ambient temperature here is below 12°C for more than 60 per cent of the year. Adiabatic cooling has also seen a revival recently, even though it is most efficient in hotter climates.
They say that nothing is new, just reinvented, and this is true: raised-floor cooling was used by the Romans, and adiabatic cooling can be seen in Ancient Egyptian frescoes. Even Leonardo da Vinci had a stab at it.
The problem is that, certainly in legacy data centres, there are limited options for modifying the structure of the building to make use of some of these ideas. It is also the case that most data centres run at partial load and never get anywhere near their original design capacity. Although high-density racks capable of 40 kW or more are available, in the past few years the average rack density has barely risen above 4 kW per cabinet (less than 2 kW/m²).
There are many views on how to improve cooling systems and save energy, with much discussion about raising the temperature of the air going into the servers. This certainly has some merit. A lot of data centres control on return air temperature, which can be a mix of hot and cold air in the room, so temperatures at the server intakes generally sit at the low end of ASHRAE best practice, around 18°C. Increasing the air-on temperature to the racks means the upstream cooling plant can run more efficiently, and increasing the delta T of the air going back to the cooling unit, perhaps by segregating the air paths, will also increase the cooling capacity of the system. This type of strategy has obvious advantages, but it also raises concerns for IT managers who don't want to risk the equipment overheating and failing. With modern blade servers there is also a health and safety consideration: with a high delta T across the server it is possible to get air-off temperatures at the back of a rack of 50°C.
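Why a wider delta T increases capacity can be seen from the sensible-heat equation, Q = ρ · V̇ · cp · ΔT: for a fixed heat load, the required airflow falls in proportion to the temperature rise across the rack. A rough sketch, not from the article, assuming standard air properties at around 20 °C:

```python
# Sensible cooling: Q = rho * flow * cp * delta_T
# Assumed air properties at ~20 degC (illustrative values):
RHO = 1.2     # density, kg/m^3
CP = 1005.0   # specific heat capacity, J/(kg*K)

def airflow_m3s(load_kw: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) needed to remove load_kw at a given delta T."""
    return load_kw * 1000.0 / (RHO * CP * delta_t_k)

# A typical 4 kW rack: widening delta T from 8 K to 12 K
# cuts the airflow the fans must move by a third.
print(round(airflow_m3s(4, 8), 3))   # -> 0.415 m^3/s
print(round(airflow_m3s(4, 12), 3))  # -> 0.276 m^3/s
```

The same relationship explains the high air-off temperatures behind blade racks: at a fixed airflow, a denser heat load simply produces a larger ΔT.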
So what is a good strategy? Starting with the low-hanging fruit is always a good idea, and being realistic about what your infrastructure can support will help narrow the options. The first thing to bear in mind is that two critical components within the cooling system should be the focus: the compressors and the fans. If you can improve the efficiency of the cooling circuit so that the compressors run for less time, this will lead to huge energy savings. If you can use the latest EC fan technology and reduce the airflow when it is not required, the savings will be bigger still.
One of the biggest and most easily fixed wastes of energy is poor air management. If air can find an easy route to escape and bypass a server, it will, and that bypassed air is wasted energy. Plugging gaps and forcing the air to go only to the front of the racks is an easy step towards improving efficiency. Aisle containment is one method for restricting and segregating air paths and doesn't have to be too expensive. If possible, try to stop warm air from one rack blowing into the air intake of another.
Fans are critical to the movement of air around the data centre. Legacy units may contain old, inefficient AC blowers with belt drives that break regularly and shed belt dust throughout the room. They are usually patched up and kept going, as changing a complete CRAC unit can be costly and sometimes physically impossible.
Upgrading to EC fans is one way to address this problem immediately. With modern EC fan technology there is no need for belts and pulleys, and motor efficiencies are significantly higher, at over 90 per cent. The other benefit is that EC fans can be speed-controlled easily and cost-effectively, allowing a partially loaded data centre to turn the airflow down to only what is needed. Where there is excess capacity, it can make sense to run units at reduced airflow (depending on the capabilities of the unit): a 50 per cent reduction in airflow can mean an EC fan consuming just one-eighth of the power! Added to cooler running, maintenance-free operation and longer lifetimes, this offers a simple and cost-effective improvement to any data centre.
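The one-eighth figure comes from the fan affinity laws: flow scales linearly with speed, but shaft power scales with the cube of speed. A minimal sketch of the relationship (an illustration of the affinity laws, not a product calculation):

```python
def fan_power_ratio(flow_ratio: float) -> float:
    """Fan affinity law: power scales with the cube of flow (i.e. speed).

    flow_ratio is the new airflow as a fraction of full airflow.
    Returns the corresponding fraction of full fan power.
    """
    return flow_ratio ** 3

# Halving the airflow needs only one-eighth of the power
print(fan_power_ratio(0.5))  # -> 0.125
# Even a modest 20% turndown saves roughly half the fan energy
print(round(fan_power_ratio(0.8), 3))  # -> 0.512
```

This cubic relationship is why speed control of EC fans in a partially loaded data centre pays back so quickly.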
The knock-on effect of improving the airflow within the data centre is that upstream systems such as chillers and condensers can relax, saving yet more energy. It is also important to consider this external equipment when upgrading fans and control strategies, so that the overall system becomes more efficient.
As technology advances at an exponential rate, the future of cooling is secure. Whatever the choice of medium, there will always be a need to keep equipment cool as efficiently as possible.