Cool tech wanted to acclimatize cloud data

ARPA-E notes the risk of extreme weather events to data center availability and offers $42 million for energy-efficient cooling ideas.
17 October 2022

Heating up: extreme weather events add to the burden facing existing data center cooling infrastructure. Image credit: Shutterstock.

Data centers are more than just a tech trend. They have fast become essential elements in the digital fabric powering today’s businesses. And, as digital transformation attracts more industry participants, the demand for data storage and remote computing resources continues to grow. Ultra-reliable, high-availability data center services have convinced customers to give up their on-prem solutions and partner with cloud providers such as Microsoft, Amazon, and Google – who are estimated to operate around half of the world’s largest facilities. But, at an infrastructure level, putting thousands and thousands of processors under one roof generates a huge amount of heat and has brought the need for efficient data center cooling into focus.

A decade ago, having cold air rise up through strategically located perforations in the data center floor was enough to keep servers within their operating temperature window. But as more and more technology is squeezed into data center cabinets, and facilities increase in size, alternative data center cooling solutions are required to deal with the higher, more concentrated heat loads. One option that tips things in cloud providers' favour is to build data centers in cooler climates, where operators can use the colder temperatures outside the facility to help regulate the temperature of the hot electronics inside.

Geographical advantage

Facebook’s Odense hyperscale facility in Denmark is one example of data center cooling that benefits from being in a lower-temperature location. Further boosting its thermal efficiency, the heat generated is piped to nearby buildings to warm residents. A more extreme example is a test setup in Japan that uses snow to enhance data center cooling capabilities. But engineering data center cooling solutions based on stable weather conditions is proving to be increasingly problematic.

This summer, unseasonably hot weather in London, UK – which saw temperatures climb above 40 °C – affected big cloud operators including Oracle and Google. Amazon also registered a so-called ‘thermal event’ within its data center infrastructure. And the UK capital was by no means the only site in Europe’s network of data centers – typically clustered around Frankfurt, London, Amsterdam, Paris, and Dublin – that had to deal with higher-than-expected ambient temperatures.

Facility designs, which include data center cooling requirements, are often specified according to local 1-in-20-year extreme weather conditions (so-called N=20). However, the impacts of global warming mean that highs and lows can stray outside these patterns. Plus, even if facilities ‘bulk up’ on their temperature management capacity, there can still be issues – for example, concern is growing about the amount of water that data centers use for their cooling.
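The 1-in-20-year convention can be made concrete with a little probability. Treating each year as an independent trial with a 1/20 chance of exceeding the design conditions, a facility is more likely than not to see at least one exceedance over a multi-decade life – and that is before climate change shifts the odds. A minimal sketch, where the 15-year service life and the independence assumption are illustrative, not figures from the article:

```python
# Probability that "1-in-20-year" design conditions are exceeded at least
# once during a facility's service life, assuming each year is an
# independent trial. The 1/20 annual probability follows the N=20
# convention; the 15-year lifetime is an illustrative assumption.

def exceedance_probability(annual_p: float, years: int) -> float:
    """Chance of at least one exceedance across `years` independent years."""
    return 1.0 - (1.0 - annual_p) ** years

p = exceedance_probability(1 / 20, 15)
print(f"P(at least one exceedance in 15 years) = {p:.1%}")  # -> 53.7%
```

Even under stationary weather statistics, the design threshold is expected to be breached at least once in a typical facility lifetime; the design intent is to bound how often, not to prevent it entirely.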

In periods of drought – another consequence of global warming – data centers could be in competition with other users for water supply. Just a couple of years ago, Time magazine put the spotlight on tensions between Google’s aspiration to grow its data center operations in the US and water availability. Some operators, such as Microsoft, have launched programs to offset the impact of their demands for water, by replenishing more than they use. But these initiatives still serve to highlight the issue.

On the plus side, competition for profits in the data center space brings bright ideas into play and that includes the search for innovative data center cooling. This month, Alibaba Cloud (teaming up with NVIDIA) announced the testing and validation of ‘a new generation of immersion liquid-cooled server cluster solutions’ designed for 2U server chassis. The configuration is based around NVIDIA’s A100 card, which features more than 54 billion transistors, and is designed for data-intensive applications such as training AI models. NVIDIA has an 80 billion transistor design queued up too, dubbed the H100, which takes cloud computing capabilities another step further. The liquid-cooled cards open the door to heat management at a much more granular and targeted level, with NVIDIA’s design reportedly offering a 30% improvement in power consumption.

Firms such as Schneider Electric, together with Avnet and Iceotope, have been offering liquid-cooled rack-mounted enclosures for the past few years. High-performance processors require high-performance cooling to avoid premature aging and degraded performance, which is driving the deployment of more liquid-based designs for data center cooling. Other benefits for customers include quieter operation. Analysts are forecasting that the global data center liquid cooling market could grow to USD 6.4 billion by 2027, up from around USD 2.1 billion in 2022.


To further encourage smart thinking on data center cooling, ARPA-E (the energy-focused arm of the US Advanced Research Projects Agency) has launched a new program dubbed COOLERCHIPS to ‘develop transformational, highly efficient, and reliable cooling technologies for data centers’. The goal for COOLERCHIPS is to reduce total cooling energy expenditure to less than 5% of a typical data center’s IT load at any time. Today, depending on the capacity required, data center cooling and ventilation systems can account for 30–55% of a facility’s energy consumption.
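The scale of that target is easier to see with back-of-envelope numbers. Note the two figures use different bases: the 30–55% range is a share of the facility's total energy, while the COOLERCHIPS goal is expressed as a share of the IT load. A rough sketch, where the facility size and energy split are illustrative assumptions rather than figures from any real data center:

```python
# Back-of-envelope comparison of today's cooling overhead with the
# COOLERCHIPS target. The 10 MW facility and its energy split are
# illustrative assumptions, not measured figures.

def cooling_share_of_it_load(cooling_kw: float, it_kw: float) -> float:
    """Cooling energy expressed as a fraction of the IT load."""
    return cooling_kw / it_kw

total_kw = 10_000.0             # hypothetical 10 MW facility
cooling_kw = 0.40 * total_kw    # 40% cooling (mid-range of 30-55%)
it_kw = 0.55 * total_kw         # 55% IT load; remainder is other overheads

today = cooling_share_of_it_load(cooling_kw, it_kw)
target = 0.05                   # COOLERCHIPS: cooling < 5% of IT load

print(f"cooling / IT load today : {today:.0%}")   # -> 73%
print(f"COOLERCHIPS target      : {target:.0%}")  # -> 5%
```

Under these assumed numbers, hitting the target means cutting cooling energy by more than an order of magnitude relative to the IT load it supports.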

The project team cites several motivations, including the rise of extreme weather events – highlighting that funders have noted the risk of disruptions to cloud availability – and the US goal to reach net zero carbon emissions by no later than 2050.