Liquid cooling – solving the data center equation?
• Liquid cooling for data centers can support far higher rack power densities than air cooling.
• Air cooling will not remain feasible for much longer – the physics is against it.
• Hyperscalers are already beginning to investigate liquid cooling for their data centers.
Liquid cooling for data centers is fast becoming a technological necessity. It can significantly mitigate the economic and ecological costs of the data centers we need.
We need those data centers now, and we’ll probably need them even more in the future, because they power our increasingly online lives – in particular, they’re the backbone of business-transforming technology like generative AI and of leisure technology like modern gaming.
But we can’t afford them now, and we’ll probably be able to afford them even less in the future, because traditional air-cooled data centers have tended towards vampirism, sucking up energy and water from wherever they’re sited at a colossal rate – and that energy is, for the most part, generated in ways that increasingly throw humanity on the pyre of a burning, environmentally unpredictable planet.
So yes, we need more and more powerful data centers. But no, they can’t stay as they’ve been up to now, because firstly, air cooling is deeply inefficient as a way of removing heat from the business end of the data center, and secondly, the power it takes to deliver that air cooling is a fairly heinous eco-cost (and indeed an economic cost) for the planet, the operators, and ultimately everybody who just wants ChatGPT to do their homework for them.
Liquid cooling – especially two-phase immersion-based liquid cooling, as we discovered in Part 1 of this article – significantly reduces the space-inefficiency of traditional data centers, the energy costs involved in cooling down the systems, and consequently the ecological impact of the whole operation.
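To put the cooling-energy argument in rough numbers, the industry-standard PUE metric (Power Usage Effectiveness: total facility power divided by IT power) makes the comparison concrete. Here is a minimal sketch, assuming illustrative PUE values of 1.5 for a conventional air-cooled facility and 1.05 for two-phase immersion – both figures are ballpark assumptions for illustration, not numbers quoted in this article:

```python
# Illustrative comparison of total facility power draw under two
# assumed PUE (Power Usage Effectiveness) values. PUE >= 1.0;
# everything above 1.0 is cooling and other overhead.

def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility power = IT load x PUE."""
    return it_load_kw * pue

IT_LOAD_KW = 1_000  # hypothetical 1 MW of IT equipment

air_cooled = facility_power_kw(IT_LOAD_KW, pue=1.5)    # assumed air-cooled PUE
immersion = facility_power_kw(IT_LOAD_KW, pue=1.05)    # assumed immersion PUE

print(f"Air-cooled facility draw: {air_cooled:.0f} kW")
print(f"Immersion facility draw:  {immersion:.0f} kW")
print(f"Overhead saved:           {air_cooled - immersion:.0f} kW")
```

Under these assumptions, the same megawatt of compute costs 450 kW less in overhead when immersion-cooled – and that gap is pure cooling infrastructure, not useful work.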
We rejoined Kar-Wing Lau, CTO and co-founder of LiquidStack – one of the many companies already creating and selling such innovative two-phase immersion-based liquid cooling systems for data centers – to ask the obvious question.
If liquid cooling’s so good, why aren’t we doing more of it already?
We’ve seen that the numbers in terms of energy draw are pretty unsustainable for air-cooled data centers going much further into the age of high-powered chips and generative AI. So, how long has liquid cooling been an option for companies? How many know the option exists, and why aren’t more taking it up?
Oh, it’s been developing a while. I joined LiquidStack back in 2012, but back then, it was more of a bitcoin mining issue. But as you know, bitcoin mining takes a lot of energy too, and so the cooling was required – we really didn’t have any other choice, we were bitcoin mining in Hong Kong, with its subtropical climate. It’s hot and humid in the summer, right?
We required a lot of cooling for that, and I was benchmarking multiple cooling technologies against each other – cold plate, water cooling, single-phase immersion, oil cooling, and so on. We saw that two-phase immersion liquid cooling was the best option for our application. Then we had a couple of data center operators coming to our data center, and they were amazed by the power density we were able to achieve.
We took the hint that liquid cooling was potentially an interesting technology not only for some of our own applications, but potentially also for the data center industry. But when we started back in 2012, the average rack density was around 3-5 kilowatts. Very low, right?
So back then, there wasn’t such a big demand for high power density applications. We were able to get to around 15 kilowatts per rack. That became dramatically important with the advent of AI.
As you know, around 2015, 2016, traditional AI really took off. Load demand keeps getting higher – we mentioned in Part 1 that there are data centers now with a 35 kilowatt per rack requirement, and that’s by no means the end of the road in terms of demand.
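Those rack-density figures – from 3-5 kW per rack in 2012 to 35 kW and beyond today – translate directly into floor space. A quick illustrative calculation, assuming a hypothetical 1 MW deployment (the total-load figure is an assumption for the example, not a number from the interview):

```python
import math

def racks_needed(total_load_kw: float, kw_per_rack: float) -> int:
    """Racks required to host a given IT load at a given rack density."""
    return math.ceil(total_load_kw / kw_per_rack)

TOTAL_KW = 1_000  # hypothetical 1 MW IT load

# Rack densities mentioned in the interview, in kW per rack.
for density in (4, 15, 35):
    print(f"{density:>2} kW/rack -> {racks_needed(TOTAL_KW, density)} racks")
```

At the 2012-era density, the hypothetical megawatt needs 250 racks; at 35 kW per rack it fits in 29 – which is why higher-density cooling is as much a real-estate story as an energy one.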
The demand-drivers of liquid cooling.
But as more and more data center operators are finding out, the higher your demand gets, the higher and more complicated and more resource-intensive your cooling demands get, too.
Which means that those of us offering liquid cooling for data centers are finding that more and more data center operators are now coming to us, because they’re all increasingly running into the unsustainable math of air-cooled – and traditional water-cooled – data centers for high-powered, high-demand applications like generative AI.
What about the hyperscalers? They tend to be looking for the next big disruptive idea, and the math on air-cooled data centers feels pretty inescapable, inasmuch as they’re not going to be feasible for very much longer if the load keeps getting bigger. What happens if all of a sudden, the hyperscalers wake up one morning and go “We must all switch to liquid cooling!”
Are the liquid cooling solution providers ready for that kind of rapid scaling?
Dealing at hyperscale.
Ha. Good question. Obviously the hyperscalers – the clue is in the name – are huge. But in fact, we’re already in discussions, and have some projects already running with multiple hyperscalers.
So, they don’t exactly need a lightbulb moment, then? They’re already looking into the potential of liquid cooling?
Oh, absolutely. You don’t get to be a hyperscaler by being taken by surprise by new developments.
Obviously, for us, our meat and drink is still relatively small to medium-sized companies, so we at LiquidStack need to scale up, but we’re definitely already setting our sights on that level of client. This fall, I’m going to set up more facilities, not just in Hong Kong, but in the US, too. We’re growing now to serve hyperscaler demand as and when it comes.
When liquid cooling in data centers becomes the new normal, because air cooling’s no longer meeting the requirements of generative AI and demanding gaming?
Exactly. And while the hyperscaler interest is likely to be transformative, we’re seeing a lot of enterprises coming into the game, too. Most enterprises don’t operate data centers as their core business. It’s not too long ago that supercomputers were the domain of government organizations and universities, and nobody else.
But now Tesla’s launching a $300m AI supercomputer with 10,000 accelerator chips. More and more enterprises are seeing the need to use AI to stay competitive.
That’s the whole thing, isn’t it? It’s a cycle. The chips are launched, they have higher capacity, and businesses find ways of using them all along the chain. But along that cycle, the need for better cooling solutions factors in – otherwise the chips can’t do the things they could do, and that they need to do. And that feeds back: as soon as one cycle ends, the next chip race begins, and everything evolves upwards that way, to greater and greater data center capacity – and a higher and higher need for efficient cooling.
Yeah, we certainly hope so!
6 December 2023