As liquid cooling takes off in the datacenter, fortune favors the brave – The Register

Comment Hype around liquid and immersion cooling has reached a fever pitch in recent months, and it appears that the colocation datacenter market is ready to get in on the action.

Recently, Zachary Smith, Equinix's global head of edge infrastructure services, told The Register that the outfit would like to offer customers more liquid and immersion cooling capabilities, but blamed a lack of standards for getting in the way.

"Can we use standard connectors for liquid-cooled loops? Can we put them in the same place? Can we label them the right colors so our techs know what to do with them?" he asked, listing the litany of questions a colocation provider like Equinix has to answer before it can support liquid-cooled systems more broadly.

The fact that colocation providers like Equinix are even talking about liquid cooling goes to show it isn't a niche tech suited only for exotic supercomputing applications.

And while the lack of standardization is unquestionably a roadblock, it's not an insurmountable one. Arguably the incentives for adopting liquid cooling tech, in any capacity, far outweigh any potential headaches that may arise in the process.

Equinix is certainly right about one thing: chips are only going to get hotter. And at a certain point, air cooling high-end systems, especially those aimed at AI and ML, is going to become impractical.

Intel's and AMD's 4th-gen Xeon and Epyc processor families have thermal design power (TDP) ratings of 350W and 400W, respectively. Meanwhile, nearly every current and upcoming datacenter GPU and AI accelerator is pushing upwards of 600W apiece, with anywhere from four to eight of them packed into a chassis. (For comparison, the CPU in a typical modern laptop has a TDP of under 20W, and you know how warm laptops can get.)
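To put those figures in perspective, here's a minimal back-of-envelope sketch of what they add up to at the rack level. The chassis count per rack and the 30 percent overhead for memory, networking, fans, and power-conversion losses are our illustrative assumptions, not vendor specs:

```python
# Back-of-envelope rack power estimate using the TDP figures above.
# Component counts and the overhead factor are illustrative assumptions.

CPU_TDP_W = 400        # e.g. a 4th-gen Epyc at the top of its range
GPU_TDP_W = 600        # typical of current datacenter accelerators
CPUS_PER_CHASSIS = 2
GPUS_PER_CHASSIS = 8
OVERHEAD = 1.3         # assumed: RAM, NICs, fans, power-supply losses

chassis_w = (CPUS_PER_CHASSIS * CPU_TDP_W
             + GPUS_PER_CHASSIS * GPU_TDP_W) * OVERHEAD

CHASSIS_PER_RACK = 4   # assumed: dense GPU nodes typically run 4-5U each
rack_kw = chassis_w * CHASSIS_PER_RACK / 1000

print(f"Per chassis: {chassis_w / 1000:.1f} kW")
print(f"Per rack:    {rack_kw:.1f} kW")
# ~7.3 kW per chassis and ~29 kW per rack -- well past what many
# air-cooled colo halls were originally designed to handle.
```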

But the situation isn't as dire as Smith would have you believe. Despite his claims to the contrary, 300W per socket isn't some magical point of no return beyond which liquid cooling becomes a must-have. If it were, we'd expect the market to have taken off a lot sooner.

With that said, just because you can liquid cool something doesn't mean you should or even need to. For example, Iceotope recently demonstrated the use of immersion cooling on helium-filled hard drives. While cool in concept, as our sister site Blocks and Files recently reported, there's not much point. Years ago, storage and backup services provider Backblaze investigated the relationship between drive temperature and failure rate and found no correlation, as long as the drives were kept within spec, usually between 5°C and 60°C.

Instead, the biggest problem for air-cooled datacenter operators is a structural one. It takes a lot of power to feed these systems with cool air fast enough to keep them from overheating. Hotter chips only exacerbate this problem.
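The physics here are unforgiving. The heat a coolant stream carries away is governed by Q = ṁ·cp·ΔT, and air's heat capacity is tiny, so removing a rack's worth of heat takes a torrent of it. A minimal sketch, where the rack load and the inlet-to-exhaust temperature rise are illustrative assumptions:

```python
# How much air does a dense rack need? Q = m_dot * cp * dT, with air's
# low heat capacity doing it no favors. The 30 kW load and 15 K
# temperature rise are illustrative assumptions.

RACK_LOAD_W = 30_000     # assumed dense air-cooled rack
CP_AIR = 1005            # specific heat of air, J/(kg*K)
RHO_AIR = 1.2            # density of air, kg/m^3 (approx., sea level)
DELTA_T = 15             # assumed inlet-to-exhaust temperature rise, K

mass_flow = RACK_LOAD_W / (CP_AIR * DELTA_T)     # kg/s
volume_flow = mass_flow / RHO_AIR                # m^3/s
cfm = volume_flow * 2118.88                      # cubic feet per minute

print(f"Airflow required: {volume_flow:.2f} m^3/s (~{cfm:.0f} CFM)")
# ~1.7 m^3/s, thousands of CFM of chilled air -- all of it pushed by
# fans and conditioned by air handlers that burn power of their own.
```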

According to analysts, upwards of 40 percent of a datacenter's power consumption can be attributed to thermal management. When every watt saved on cooling is cash back in your wallet, it's not hard to see why datacenter operators might be interested in tech that lets them claw back even a fraction of that. And, spoiler alert, that's exactly what liquid and immersion cooling promise to do.
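How much cash? A rough sketch using the 40 percent figure above; the 1MW IT load, the electricity price, and the post-retrofit cooling share are illustrative assumptions:

```python
# Rough illustration of what trimming cooling overhead is worth.
# The 40 percent share comes from the analyst estimate above; the IT
# load, electricity rate, and improved cooling share are assumptions.

IT_LOAD_KW = 1000          # assumed 1 MW of IT equipment
PRICE_PER_KWH = 0.10       # assumed $0.10/kWh industrial rate
HOURS_PER_YEAR = 8760

def annual_cost(cooling_share):
    # If cooling is some fraction of *total* facility power, total power
    # is IT load / (1 - cooling_share), ignoring other overheads.
    total_kw = IT_LOAD_KW / (1 - cooling_share)
    return total_kw * HOURS_PER_YEAR * PRICE_PER_KWH

air = annual_cost(0.40)     # air-cooled facility, per the estimate above
liquid = annual_cost(0.10)  # assumed share after a liquid retrofit

print(f"Air-cooled:    ${air:,.0f}/yr")
print(f"Liquid-cooled: ${liquid:,.0f}/yr")
print(f"Savings:       ${air - liquid:,.0f}/yr")
# Roughly $1.46M versus $0.97M -- call it half a million dollars a year
# back, per megawatt of IT load, under these assumptions.
```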

But rather than leaping straight to direct liquid cooling (DLC) or filling datacenters with immersion cooling tanks, operators may want to start with something that doesn't hinge on customer buy-in.

Dell'Oro Group projects liquid and immersion cooling to grow from just 5 percent of the datacenter thermal management market to 19 percent by 2026. Meanwhile, research from Omdia puts that figure closer to 26 percent.

Demand for air-cooled systems clearly isn't going away anytime soon.

That's not to say liquid-cooling tech can't be used with air-cooled systems. Take rear-door heat exchangers, for example. These are essentially big radiators that are bolted to the back of a rack. As coolant flows through the radiator, it pulls the heat out of the hot exhaust air exiting the servers.

The advantage of rear-door heat exchangers is that they enable much higher rack power densities than conventional air cooling alone. This isn't some unproven technology, either. California-based colocation vendor Colovore, for instance, has been using rear-door heat exchangers to cool racks of up to 50kW for years.
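The same Q = ṁ·cp·ΔT arithmetic, applied to water instead of air, shows why those densities are achievable: water carries roughly 3,500 times more heat per unit volume than air, so a 50kW rack needs only a modest coolant flow. The 10K coolant temperature rise below is an assumed figure; the 50kW load matches the Colovore example:

```python
# Coolant flow a rear-door heat exchanger needs to soak up a full
# 50 kW rack, via Q = m_dot * cp * dT. The 10 K temperature rise
# across the door is an assumption; 50 kW matches the rack density
# cited above.

RACK_LOAD_W = 50_000     # heat to remove, W
CP_WATER = 4186          # specific heat of water, J/(kg*K)
DELTA_T = 10             # assumed coolant temperature rise, K

mass_flow = RACK_LOAD_W / (CP_WATER * DELTA_T)   # kg/s
litres_per_min = mass_flow * 60                  # ~1 kg of water ~= 1 L

print(f"Coolant flow: {mass_flow:.2f} kg/s (~{litres_per_min:.0f} L/min)")
# ~1.2 kg/s, about 72 L/min -- garden-hose territory, versus thousands
# of CFM of chilled air for a comparable air-cooled load.
```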

More importantly, rear-door heat exchangers rely on the same infrastructure as DLC or immersion cooling tanks. Before any of these systems can be powered on, coolant distribution units (CDUs) need to be installed, racks need to be plumbed, and power-hungry air handlers need to be swapped for exterior-mounted dry coolers.

Because this infrastructure can be shared, it becomes much easier to add support for direct liquid or immersion-cooled systems as the standards around them mature.

While half-measures or stop-gaps like rear-door heat exchangers may not be as sexy as pumping liquid through a chassis or immersing a motherboard in hydrocarbons, they arguably give datacenter operators like Equinix a way to achieve higher rack densities and cool hotter systems while they wait for the OEMs and equipment vendors to align on usable standards.
