Cooling at the core: Choosing the right liquid cooling method for next-gen data centers
As thermal demands grow and air-cooling hits its limits, data center designers are turning to liquid cooling to unlock performance and sustainability. With multiple methods in play, choosing the right approach – and ensuring it runs reliably – requires more than thermal math.
The shift toward liquid cooling
As chips become more powerful and dissipate ever more heat, pushing operating temperatures toward the limits of safe operation, the traditional assumptions around data center thermal design are being challenged. Central processing units (CPUs), once considered high-performance at 200 watts, are now routinely rated above 350 watts. Graphics processing units (GPUs) are pushing past 1,000 watts. Across the board, thermal design power (TDP) is rising fast.
This shift is driving rack densities to 60–100 kilowatts and beyond – levels that air cooling alone can no longer manage effectively. High-speed fans and chillers struggle to keep up, often at the cost of energy efficiency, acoustic comfort, or system stability.
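To put those numbers in perspective, a back-of-the-envelope tally of component TDPs shows how quickly a single rack crosses into liquid-cooling territory. The counts and wattages below are illustrative assumptions for this sketch, not figures from any specific deployment:

```python
# Back-of-the-envelope rack heat load from component TDPs.
# All counts and wattages below are illustrative assumptions.

GPU_TDP_W = 1000   # modern accelerator, per the ~1 kW figure above
CPU_TDP_W = 350    # high-end server CPU
OVERHEAD_W = 800   # memory, NICs, drives, fans, VRM losses (assumed)

gpus_per_server = 8
cpus_per_server = 2
servers_per_rack = 8

server_w = gpus_per_server * GPU_TDP_W + cpus_per_server * CPU_TDP_W + OVERHEAD_W
rack_kw = servers_per_rack * server_w / 1000

print(f"Per-server load: {server_w / 1000:.1f} kW")   # ~9.5 kW
print(f"Rack load: {rack_kw:.0f} kW")                 # ~76 kW, inside the 60-100 kW band
```

Even with conservative assumptions, one rack of accelerated servers lands squarely in the range where air cooling runs out of headroom.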
In this context, liquid cooling is no longer a niche solution. It has become a strategic tool that supports higher rack densities and lowers power usage. It also helps operators meet growing sustainability targets through better thermal control and more efficient resource use.
But making the switch requires more than selecting a new heat rejection method. It involves infrastructure decisions that impact reliability, integration, and long-term performance.
Liquid cooling methods in use today
While all liquid cooling approaches improve heat transfer, each interacts differently with server design, facility infrastructure, and operational complexity:
- Liquid-to-air systems use a closed coolant loop within the rack and reject heat to ambient air via a heat exchanger. Commonly used for moderate-density retrofits, especially in facilities without access to water infrastructure, these systems are relatively easy to deploy, but are limited by air-side heat rejection, which caps scalability for high-intensity workloads.
- Direct-to-chip cooling is a higher-efficiency method using cold plates mounted directly onto CPUs and GPUs. Coolant flows through the plates to absorb heat at the source, then transfers it to the facility water loop through a heat exchanger in the coolant distribution unit (CDU). This setup is increasingly favored in high-performance environments, though it introduces more plumbing and integration requirements.
- Single-phase immersion cooling submerges entire servers in dielectric fluid, allowing for uniform, direct heat transfer from all components. This approach eliminates the need for server fans and offers excellent thermal control with quiet operation. However, it requires compatible hardware and a larger physical footprint.
- Two-phase immersion cooling goes further, using the boiling and condensation of a dielectric fluid to move heat. The process is largely passive and highly efficient, but the required fluids are expensive and subject to growing regulatory scrutiny. These systems are also more complex to implement and maintain.
Each method brings tradeoffs in density, complexity, and infrastructure needs – making it critical to match the approach to the specific application environment.
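For the single-phase methods above, the governing relationship is a simple heat balance, Q = ṁ·cp·ΔT: heat absorbed equals mass flow times specific heat times temperature rise. The sketch below (water-like coolant assumed; the loads and temperature rises are illustrative) shows the flow rates this implies at cold-plate and rack scale:

```python
# Single-phase heat balance: Q = m_dot * cp * dT.
# Water-like coolant assumed; all loads and temperature rises are illustrative.

RHO = 1000.0   # kg/m^3, density of water
CP = 4186.0    # J/(kg*K), specific heat of water

def flow_lpm(heat_w: float, delta_t_k: float) -> float:
    """Volumetric flow (L/min) needed to absorb heat_w with a delta_t_k rise."""
    m_dot = heat_w / (CP * delta_t_k)      # mass flow, kg/s
    return m_dot / RHO * 1000 * 60         # convert m^3/s -> L/min

print(f"1 kW cold plate, 10 K rise:  {flow_lpm(1_000, 10):.1f} L/min")   # ~1.4
print(f"76 kW rack loop, 10 K rise: {flow_lpm(76_000, 10):.0f} L/min")   # ~109
```

The same heat load can be met with less flow by accepting a larger temperature rise, which is exactly the tradeoff loop designers tune when sizing pumps and CDUs.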
Why circulation still matters
Although these cooling methods vary in architecture, most rely on one shared requirement: reliable coolant circulation. Liquid-to-air, direct-to-chip, and single-phase immersion systems all depend on an active pump to keep coolant moving consistently. Without flow, even a well-designed thermal loop can fail quickly.
While two-phase immersion relies on passive heat transfer through boiling and condensing, most liquid cooling approaches require active circulation to maintain stable thermal conditions, making pump performance central to system reliability.
Pump performance directly affects thermal stability, acoustic levels, and uptime. Weak flow can lead to component hotspots, while excess noise can limit deployment in shared or sound-sensitive environments. A failure in circulation can result in thermal shutdowns or long-term equipment damage.
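In practice, this is why loop controllers typically guard flow directly rather than waiting for silicon to hit its thermal trip point. The sketch below shows the general shape of such logic; the sensor interface, thresholds, and escalation path are hypothetical placeholders, not any vendor's API:

```python
# Minimal flow-guard sketch for a coolant loop controller.
# read_flow_lpm() and the thresholds are hypothetical placeholders.
import time

MIN_FLOW_LPM = 1.0      # below this, cold plates risk hotspots (assumed)
GRACE_READINGS = 3      # consecutive low readings before escalating

def read_flow_lpm() -> float:
    """Placeholder for a real flow-sensor read."""
    raise NotImplementedError

def monitor_loop():
    low_count = 0
    while True:
        flow = read_flow_lpm()
        low_count = low_count + 1 if flow < MIN_FLOW_LPM else 0
        if low_count >= GRACE_READINGS:
            # Escalate before components overheat: alert operators,
            # throttle the host, or trigger an orderly shutdown.
            raise RuntimeError(f"Coolant flow lost ({flow:.2f} L/min)")
        time.sleep(1.0)
```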
This is why many original equipment manufacturers (OEMs) and system integrators are specifying brushless DC (direct current) pumps, which are engineered for long service life, minimal vibration, and efficient operation. Xylem’s Flojet D5 and DDC pumps, recognized as leading liquid cooling solutions for high-performance desktop systems, are now being adopted in the rapidly growing space of server liquid cooling for their proven reliability, compact footprint, and quiet operation.
This makes them a strong fit for integration into rack-level cooling loops, such as cold plate and small-scale immersion systems, where uptime and thermal control are critical.
Choosing the right fit
Selecting the right cooling approach isn’t just about thermal performance. It involves understanding rack densities, infrastructure readiness, power and water availability, acoustic constraints, and maintenance capacity. For smaller installations or environments with constraints on infrastructure and downtime, liquid-to-air systems offer a practical step forward. Others – particularly those supporting dense AI workloads or working in modular data centers – may benefit more from cold plates or immersion.
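One way to make that evaluation concrete is a first-pass screening of the factors above. The thresholds in this sketch are rough rules of thumb chosen for illustration, not vendor sizing guidance:

```python
# Illustrative first-pass selector mirroring the guidance above.
# Thresholds are rough rules of thumb for this sketch only.

def suggest_method(rack_kw: float, has_facility_water: bool,
                   immersion_ready_hw: bool) -> str:
    if rack_kw <= 30 and not has_facility_water:
        return "liquid-to-air"           # easy retrofit, no water loop needed
    if immersion_ready_hw and rack_kw > 80:
        return "single-phase immersion"  # uniform cooling for very dense racks
    if has_facility_water:
        return "direct-to-chip"          # cold plates + CDU for dense AI racks
    return "review infrastructure: water, space, and maintenance capacity"

print(suggest_method(rack_kw=76, has_facility_water=True, immersion_ready_hw=False))
# -> direct-to-chip
```

A real selection exercise would weigh many more variables, but even this coarse screen makes the key dependencies – density, water availability, and hardware compatibility – explicit.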
What most of these methods have in common is the need for consistent coolant circulation. In these environments, the pump is not just a supporting component – it is a defining factor in thermal performance, system uptime, and peace of mind. That’s why Xylem’s Flojet D5 and DDC pumps are engineered for over 50,000 hours of continuous-duty operation, with low noise, a maintenance-free design, and a proven track record across performance-critical systems. In data center environments where failure is not an option, this kind of dependable ‘set-it-and-forget-it’ circulation is a differentiator.
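To put the 50,000-hour figure in context, it corresponds to roughly 5.7 years of uninterrupted 24/7 duty. The sketch below does that conversion and, purely as an illustrative assumption (a continuous-duty rating is not an MTBF specification), also shows how redundancy would compound reliability if the figure were read as a mean time between failures:

```python
# Converting the 50,000-hour continuous-duty figure into calendar time.
# The MTBF reading below is an assumption for illustration only;
# a duty rating is not an MTBF spec.
import math

HOURS = 50_000
YEAR_H = 8_760
print(f"{HOURS / YEAR_H:.1f} years of 24/7 operation")        # ~5.7 years

mtbf = HOURS                                                  # assumed for this sketch
p_fail_year = 1 - math.exp(-YEAR_H / mtbf)                    # exponential model
print(f"Single pump, 1-year failure prob: {p_fail_year:.1%}") # ~16%
print(f"Redundant pair (independent):     {p_fail_year**2:.2%}")  # ~2.6%
```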
Conclusion: Cooling as infrastructure
The modern data center is more than a facility. It is a powerful engine that drives everything from generative AI to cloud infrastructure. And like any engine, it depends on effective thermal control to operate efficiently.
Liquid cooling allows data centers to scale performance, reduce emissions, and improve energy use. But its success depends on well-integrated systems – cold plates, CDUs, and circulation pumps – that keep heat under control and operations running smoothly.
As performance requirements increase and sustainability pressures rise, precision thermal design will be a defining factor in how data centers evolve. Coolant must flow reliably, systems must adapt intelligently, and every part of the thermal loop – from chip to chiller – must be engineered with the future in mind.