Macro forces are shaping data centre innovation as AI factories place new demands on infrastructure, influencing how facilities are designed, built, operated and powered.
This is examined in the 2026 Vertiv Frontiers report, in which the digital infrastructure provider Vertiv details technology trends driving current and future innovation, including powering up for AI, digital twins and adaptive liquid cooling.
“The data centre industry is continuing to rapidly evolve how it designs, builds, operates and services data centres, in response to the density and speed of deployment demands of AI factories,” says Scott Armul, Vertiv chief product and technology officer.
“We see cross-technology forces, including extreme densification, driving transformative trends such as higher voltage DC power architectures and advanced liquid cooling that are important to deliver the gigawatt scaling that is critical for AI innovation. On-site energy generation and digital twin technology are also expected to help to advance the scale and speed of AI adoption.”
Vertiv Frontiers builds on and expands the company’s previous annual Data Centre Trends predictions. The new report identifies several macro forces driving data centre innovation. One is extreme densification, which is being accelerated by AI and high-performance computing workloads.
Another is gigawatt scaling at speed, as data centres are now being deployed rapidly and at unprecedented scale. The report highlights the data centre as a unit of compute, reflecting the AI era’s requirement for facilities to be built and operated as a single, integrated system. Finally, it points to silicon diversification, with data centre infrastructure needing to adapt to a growing range of chips and compute architectures.
Vertiv Frontiers outlines the following five trends shaping the data centre landscape:
- Powering up for AI: Most current data centres still rely on hybrid AC/DC power distribution from the grid to the IT racks, with three to four conversion stages and the inefficiencies they introduce. This approach is under strain as power densities rise, driven largely by AI workloads. Shifting to higher voltage DC architectures significantly reduces current, conductor size and the number of conversion stages, while centralising power conversion at the room level (a simple current calculation after this list illustrates the effect). Hybrid AC and DC systems remain pervasive, but as full DC standards and equipment mature, higher voltage DC is likely to become more prevalent as rack densities increase. On-site generation and microgrids will also drive adoption of higher voltage DC.
- Distributed AI: The billions of dollars invested in AI data centres to support large language models (LLMs) have so far been aimed at enabling widespread adoption of AI tools by consumers and businesses. Vertiv believes AI is becoming increasingly critical to businesses, but how, and from where, inference services are delivered will depend on each organisation's specific requirements and conditions. While this will affect businesses of all types, highly regulated industries such as finance, defence and healthcare may need to maintain private or hybrid AI environments in on-premises data centres because of data residency, security or latency requirements. Flexible, scalable high-density power and liquid cooling systems could enable that capacity through new builds or retrofits of existing facilities.
- Energy autonomy accelerates: Short-term on-site energy generation has been essential to the resiliency of most standalone data centres for decades. However, widespread power availability challenges are now creating the conditions for extended energy autonomy, especially for AI data centres. Investment in on-site power generation, via natural gas turbines and other technologies, has several intrinsic benefits, but those availability challenges remain the primary driver. Technology strategies such as Bring Your Own Power (and Cooling) are likely to be part of ongoing energy autonomy plans.
- Digital twin-driven design and operations: Increasingly dense AI workloads and more powerful GPUs bring a demand to deploy these complex AI factories at speed. Using AI-based tools, data centres can be mapped and specified virtually via digital twins, and the IT and critical digital infrastructure can be integrated, often as prefabricated modular designs, and deployed as units of compute, reducing time-to-token by up to 50%. This approach will be important for efficiently achieving the gigawatt-scale buildouts required for future AI advancements.
- Adaptive, resilient liquid cooling: AI workloads and infrastructure have accelerated the adoption of liquid cooling; conversely, AI can also be used to further refine and optimise liquid cooling solutions. Liquid cooling has become mission-critical for a growing number of operators, and AI, in conjunction with additional monitoring and control systems, has the potential to make liquid cooling systems smarter and more robust by predicting potential failures and managing fluid and components effectively (a toy telemetry sketch after this list illustrates the idea). This should lead to increasing reliability and uptime for high-value hardware and the associated data and workloads.
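
The current reduction behind the shift to higher voltage DC, referenced in the first item above, follows directly from power = voltage × current. The snippet below is a back-of-the-envelope sketch only: the 100 kW rack power and the 48/400/800 V bus voltages are assumed examples rather than figures from the Vertiv report, and conversion losses and power factor are ignored.

```python
# Illustrative only: for a fixed rack power, the current the distribution
# system must carry falls as the bus voltage rises, which is the main reason
# higher voltage DC allows smaller conductors and fewer conversion stages.
# Rack power and bus voltages are hypothetical examples, not report figures.

def required_current_amps(power_w: float, bus_voltage_v: float) -> float:
    """Current (A) needed to deliver power_w at bus_voltage_v (ideal, lossless)."""
    return power_w / bus_voltage_v

rack_power_w = 100_000  # hypothetical 100 kW AI rack

for voltage in (48, 400, 800):  # example DC bus voltages
    amps = required_current_amps(rack_power_w, voltage)
    print(f"{voltage:>4} V DC bus -> {amps:>7,.0f} A per rack feed")
```

Halving the current for the same power is what allows lighter busbars and cabling, which matters as per-rack power climbs into the hundreds of kilowatts.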

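On the liquid cooling point, the kind of monitoring-driven failure prediction the report describes can be pictured with a toy anomaly check on coolant telemetry. This is a hedged sketch under assumed conditions: the flow-rate signal, window size and threshold below are hypothetical, and production systems would rely on far richer models and sensor sets.

```python
# Toy illustration of monitoring-driven anomaly flagging for a liquid cooling
# loop. Signal names, thresholds and data are hypothetical examples.

from statistics import mean, stdev

def flag_anomalies(readings, window=10, z_threshold=3.0):
    """Flag readings that deviate sharply from the trailing-window baseline."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Hypothetical coolant flow-rate telemetry (litres/min); the sudden drop at
# the end is the sort of deviation that might indicate a failing pump.
flow_lpm = [52.0, 51.8, 52.1, 51.9, 52.0, 52.2, 51.7, 52.0, 51.9, 52.1,
            52.0, 51.8, 52.1, 47.5]
print(flag_anomalies(flow_lpm))  # -> [13]
```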