Cooling at an AI crossroads
31st August 2025

Piergiorgio Tagliapietra, senior director of application engineering thermal management EMEA at critical infrastructure technologies company Vertiv, looks at how AI workloads are reshaping data centre cooling requirements.
Artificial intelligence (AI) workloads are advancing rapidly. With this acceleration comes a clear shift in the physical demands placed on data centres. In particular, cooling systems are under mounting pressure to adapt to changes in heat density, heat concentration, and deployment speed.
Established cooling designs have long delivered reliable performance in air-optimised environments. These approaches are now being pushed close to their limits as high-density AI racks become more common, with individual cabinets consuming 30kW to 60kW or more; forecasts indicate this may rise to 300-600kW, and possibly 1MW, by 2030.
That level of concentrated heat generation cannot be managed through airflow alone, especially in legacy spaces designed around far lower densities.
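A rough sensible-heat calculation illustrates why. The sketch below applies the standard relation Q = ρ·V̇·cp·ΔT to estimate the airflow an air-only system would need at the rack densities mentioned above; the air properties and the 12 K inlet-to-outlet temperature rise are illustrative assumptions, not figures from Vertiv.

```python
# Illustrative estimate of the airflow needed to remove rack heat by air alone.
# Assumptions (not from the article): air density 1.2 kg/m^3, specific heat
# 1005 J/(kg*K), and a 12 K temperature rise across the rack.

AIR_DENSITY = 1.2   # kg/m^3
AIR_CP = 1005.0     # J/(kg*K)

def airflow_m3_per_s(heat_kw: float, delta_t_k: float = 12.0) -> float:
    """Volumetric airflow (m^3/s) needed to carry away heat_kw of heat."""
    return (heat_kw * 1000.0) / (AIR_DENSITY * AIR_CP * delta_t_k)

for rack_kw in (10, 60, 300, 600):
    flow = airflow_m3_per_s(rack_kw)
    print(f"{rack_kw:>4} kW rack -> {flow:6.1f} m^3/s ({flow * 3600:9.0f} m^3/h)")
```

Even at 60kW the required airflow is several cubic metres per second per rack; at the forecast 300-600kW it reaches tens of cubic metres per second, which is why liquid-based approaches enter the picture.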
Facilities built before the current wave of AI deployments can lack the floor space, containment flexibility, or structural tolerances to accommodate the digital infrastructure that is now being installed.
Operators are responding by updating containment systems, reconsidering room layout, and exploring different cooling strategies that align more closely with emerging thermal profiles.
Evolution of liquid cooling
One direction the industry is taking is the move towards hybrid cooling environments. These combine traditional air cooling with liquid systems such as direct-to-chip or rear-door heat exchangers. In these mixed environments, air may continue to serve lower-intensity IT loads while liquid systems manage compute-intensive racks.
This model brings operational benefits, but also new complexity. Introducing fluid networks into data centres that have predominantly been managed by electrical and mechanical teams requires new expertise, different maintenance procedures, and a closer working relationship between IT and facilities managers. Control and monitoring systems must also be upgraded to reflect changes in cooling flow, thermal differentials, and points of vulnerability across the estate.
Data centre industry experts are reporting growing demand for integrated thermal strategies that can be adapted to site constraints and evolving AI compute profiles – an indication that this hybrid model is moving from niche to norm.
A gap between infrastructure and innovation
The speed at which AI is advancing is also creating pressure on timelines. While a new AI model may be developed in a matter of months, infrastructure upgrades or new data centre builds can take years. This mismatch is encouraging interest in cooling solutions that can be installed quickly, integrated without heavy redesign, and scaled incrementally.
Modular systems, self-contained liquid units, and prefabricated cooling modules are gaining ground, particularly in edge environments and colocation spaces where time and access are constrained. However, uptake varies depending on geography, regulation, and availability of skilled contractors.
In Europe, for example, the revision of the F-gas regulation is directly shaping technology choice. Cooling systems that rely on high-GWP refrigerants are banned for new builds from 2025. In the United States, regional incentives and energy efficiency targets are influencing cooling technology adoption. Across parts of Asia and the Middle East, land constraints and ambient temperature conditions are pushing innovation in higher-efficiency cooling per square metre.
AI workloads also present subtler risks. The nature of these applications means thermal behaviour can vary significantly over time, even within the same rack. Inference workloads often introduce unpredictable spikes. If systems are not designed to respond dynamically, there is an increased risk of localised overheating or accelerated equipment wear. These patterns demand higher-resolution telemetry and smarter control logic to maintain reliability and uptime.
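The control logic involved can be sketched simply: rather than reacting only to the current temperature, a spike-aware controller also weighs how fast the temperature is rising. The example below is a minimal illustration under assumed per-rack outlet telemetry; the class name, setpoint, and gain values are hypothetical, not drawn from any real BMS or DCIM product.

```python
# Minimal sketch of spike-aware cooling control for a single rack.
# Assumptions (illustrative): outlet-temperature telemetry arrives at a fixed
# interval, and cooling demand is expressed as a fraction in [0, 1].

from collections import deque

class RackCoolingController:
    def __init__(self, setpoint_c: float = 30.0, window: int = 10):
        self.setpoint_c = setpoint_c
        self.history = deque(maxlen=window)  # recent outlet temperatures

    def update(self, outlet_temp_c: float) -> float:
        """Return a cooling demand in [0, 1] from the latest reading."""
        self.history.append(outlet_temp_c)
        # Proportional term: how far the outlet sits above the setpoint.
        error = max(0.0, outlet_temp_c - self.setpoint_c)
        # Trend term: a fast rise (e.g. an inference burst) pre-empts overshoot
        # before the absolute temperature becomes critical.
        trend = 0.0
        if len(self.history) >= 2:
            trend = max(0.0, self.history[-1] - self.history[0])
        return min(1.0, 0.05 * error + 0.1 * trend)

ctrl = RackCoolingController()
for t in [29.0, 29.5, 33.0, 38.0, 41.0]:  # a sudden inference spike
    print(f"outlet {t:4.1f} C -> cooling demand {ctrl.update(t):.2f}")
```

A purely threshold-based controller would stay idle until the setpoint is breached; the trend term lets cooling ramp while the spike is still developing, which is the kind of dynamic response the paragraph above describes.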
In response, many operators are re-evaluating commissioning processes. Simulation-based testing is becoming more common to validate cooling response across different operational scenarios. There is also growing interest in continuous performance monitoring, where cooling effectiveness is tracked alongside workload activity.
Expectations are rising
The wider shift underway is cultural as much as technical. Cooling design can no longer be treated as a separate discipline from power, software, or architecture, and waste heat re-use is becoming essential for maximising overall system efficiency. To meet the demands of AI infrastructure, systems thinking is becoming the default. That means earlier involvement for cooling professionals in project planning, greater integration of digital and mechanical design, and a shared responsibility for long-term system resilience.
Skills are evolving in line with this. Cooling engineers are now expected to engage more actively with energy efficiency metrics, participate in discussions around compute lifecycle planning, and provide insight into the operational impact of thermal design decisions over time.
AI is changing expectations across the board: workloads are becoming denser, build cycles are becoming shorter and thermal conditions are becoming more variable. The cooling industry has the tools and expertise to respond, and success will rely on continuous collaboration, faster integration, and a willingness to adapt to changing infrastructure logic.
Thermal management remains one of the most critical elements in the future of AI infrastructure. Its role is expanding, and so is the opportunity to lead through smarter, more responsive system design.