Introduction
Why Rubin-era densities lock in direct-to-chip cooling
Rubin-era planning isn’t about guessing one exact rack kW number. It’s about acknowledging a trajectory: rack-scale AI systems have already moved into liquid-cooled form factors, and each generation tightens the therma...
AI/HPC racks normalizing at 120–150 kW exceed the practical limits of air cooling—not because operators “haven’t tried hard enough,” but because the airflow and pressure requirements grow faster than the physical space, fan curves, and duct/containment realit...
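To make that scaling problem concrete, here is a rough back-of-envelope sketch (assumed values: air density ≈ 1.2 kg/m³ and cp ≈ 1005 J/(kg·K); water cp ≈ 4186 J/(kg·K); the 120 kW rack figure and ΔT choices are illustrative, not from any specific platform) comparing how much air versus water it takes to carry the same heat load:

```python
def airflow_m3s(power_w: float, delta_t_c: float,
                rho: float = 1.2, cp: float = 1005.0) -> float:
    """Volumetric airflow needed to carry power_w at a delta_t_c temperature rise."""
    return power_w / (rho * cp * delta_t_c)

def water_flow_lpm(power_w: float, delta_t_c: float,
                   cp: float = 4186.0) -> float:
    """Water flow in litres/min to carry power_w at a delta_t_c rise (1 L ~ 1 kg)."""
    kg_per_s = power_w / (cp * delta_t_c)
    return kg_per_s * 60.0

rack_w = 120_000  # illustrative 120 kW rack
air = airflow_m3s(rack_w, 12.0)    # ~8.3 m^3/s through a single rack
cfm = air * 2118.88                # ~17,600 CFM
water = water_flow_lpm(rack_w, 10.0)  # ~172 L/min of water does the same job
print(f"air: {air:.1f} m^3/s ({cfm:,.0f} CFM); water: {water:.0f} L/min")
```

The gap between roughly 17,600 CFM of air and about 172 L/min of water is why the pressure, fan power, and containment math stops closing at these densities.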
This guide is for data center operators and facilities teams planning, expanding, or retrofitting liquid-cooled AI/HPC rows. If you’re carrying SLAs, chasing better PUE/WUE, and trying to scale rack density without turning every change into an outage window, CDU selection is no...
AI is rewriting the thermal math inside data halls. The shift isn’t just “hotter chips.” It’s sustained, rack-level heat loads that make airflow and fan power the limiting factors long before you run out of floor space.
In 2026, operators are moving from enhanc...
A coolant distribution unit (CDU) is the control boundary between your facility heat-rejection loop and the IT-side liquid loop. In AI/HPC environments, that boundary matters because it sets the limits for temperature stability, flow control, filtration, and—ultimately—how...
Key takeaways
Dual-loop CDUs protect IT coolant quality and reduce risk by isolating facility water through a plate heat exchanger.
The most consequential control variables are secondary supply temperature, dew point margin, approach temperature, differential pressure (DP), flow, and ΔT....
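The control variables listed above lend themselves to simple telemetry sanity checks. Below is a minimal sketch of such checks; the field names, alarm thresholds (2 °C dew point margin, 50 kPa minimum DP, 4 °C minimum ΔT), and fault interpretations are illustrative assumptions, not vendor or standards values:

```python
from dataclasses import dataclass

@dataclass
class CduReading:
    secondary_supply_c: float   # IT-loop supply temperature
    secondary_return_c: float   # IT-loop return temperature
    facility_supply_c: float    # primary (facility) water entering the CDU
    dew_point_c: float          # data-hall dew point
    dp_kpa: float               # differential pressure across the secondary loop
    flow_lpm: float             # secondary-loop flow

def check(r: CduReading, min_dew_margin_c: float = 2.0,
          min_dp_kpa: float = 50.0, min_delta_t_c: float = 4.0) -> list[str]:
    """Flag out-of-range control variables; thresholds are illustrative only."""
    alarms = []
    # Dew point margin: supply must stay above dew point or cold plates condense.
    if r.secondary_supply_c - r.dew_point_c < min_dew_margin_c:
        alarms.append("dew point margin low")
    # Approach temperature: secondary supply should sit above facility supply;
    # a negative approach across a plate heat exchanger suggests a sensor fault.
    if r.secondary_supply_c - r.facility_supply_c < 0:
        alarms.append("implausible approach (sensor fault?)")
    if r.dp_kpa < min_dp_kpa:
        alarms.append("secondary DP low (possible flow starvation)")
    # Delta-T: heat pickup across the racks; a collapsing delta-T often
    # indicates bypass flow or overprovisioned pumping.
    if r.secondary_return_c - r.secondary_supply_c < min_delta_t_c:
        alarms.append("delta-T collapse")
    return alarms
```

For example, a reading of 30 °C supply / 40 °C return against 27 °C facility water and an 18 °C dew point passes cleanly, while a supply pushed down to 19 °C in the same hall trips the dew point margin check.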
AI training, inference, and HPC clusters push rack heat loads into territory where air cooling becomes fragile: higher fan power, tighter humidity and filtration tolerances, and smaller margins when a CRAH/CRAC or chiller train hiccups. Liquid cooling (direct-to-chip and immersion) is...
AI/HPC deployments are forcing a new thermal reality in data centers: racks that used to be “high density” at ~30–50 kW are increasingly planned at 80–120+ kW as GPU-heavy platforms move toward rack-scale designs.
At those densities, air cooling doesn’t f...
The AI era is changing one of the most stubborn “constants” in data center operations: the assumption that air is the default heat-removal medium at rack level.
As AI training and inference scale, operators are being pushed toward higher rack power densities, tighter therm...
Over the past year, liquid cooling's standing in the AI computing industry has shifted rapidly, transitioning from a "nice-to-have" option to a "must-have" requirement. In almost every forum discussing AI computing power, liquid cooling is described as the only solu...