A new collaboration led by NVIDIA aims to turn AI data centers into flexible energy assets, accelerating deployment while strengthening power grid reliability.
At CERAWeek 2026, NVIDIA and Emerald AI unveiled a new initiative in partnership with major energy providers to develop a next generation of AI factories designed not only for computing, but also for supporting power grids.
The collaboration brings together companies including AES Corporation, Constellation Energy, Invenergy, NextEra Energy, Nscale Energy & Power, and Vistra to rethink how AI infrastructure interacts with electricity systems. Instead of being treated as passive consumers of power, these AI facilities are envisioned as dynamic assets that can both draw on the grid and contribute to its stability.
At the core of this approach is NVIDIA’s Vera Rubin–based AI factory architecture, paired with new software designed to integrate computing workloads with grid operations. This enables AI facilities to adapt their power usage in response to real-time grid conditions, improving efficiency while maintaining performance for AI workloads.
One of the key challenges addressed by the initiative is the growing energy demand driven by large-scale AI systems. Traditional grid connection timelines often lag behind the rapid expansion of AI infrastructure. To overcome this, the proposed model allows AI factories to initially rely on on-site power generation and storage, then gradually integrate with the grid. Over time, these same energy resources can be used flexibly to support the broader power system.

Emerald AI contributes its orchestration platform, which coordinates computing demand with available energy resources such as batteries and local generation. This system helps balance workloads, maintain service quality for AI operations, and respond to grid needs without overbuilding infrastructure for peak demand scenarios.
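Emerald AI has not published the internals of its platform, but the coordination idea described above can be sketched in a few lines: split a facility's power demand across on-site generation, a grid allocation, and battery discharge, and treat curtailment of flexible workloads as the last resort. Everything here is illustrative; the function name, the priority order, and the 50 MW battery figure are assumptions, not Emerald AI's actual logic.

```python
from dataclasses import dataclass

# Illustrative sketch only: all names and thresholds are hypothetical,
# not Emerald AI's published algorithm.

@dataclass
class SiteState:
    grid_limit_mw: float   # max draw the grid operator allows right now
    battery_soc: float     # battery state of charge, 0.0-1.0
    onsite_gen_mw: float   # current on-site generation

def plan_power_budget(state: SiteState, demand_mw: float) -> dict:
    """Split an AI facility's demand across sources, curtailing
    flexible workloads only when all sources are exhausted."""
    from_gen = min(demand_mw, state.onsite_gen_mw)
    remaining = demand_mw - from_gen

    from_grid = min(remaining, state.grid_limit_mw)
    remaining -= from_grid

    # Hypothetical 50 MW battery pack, scaled by state of charge.
    from_battery = min(remaining, 50.0 * state.battery_soc)
    remaining -= from_battery

    return {
        "onsite_gen_mw": from_gen,
        "grid_mw": from_grid,
        "battery_mw": from_battery,
        "curtailed_mw": remaining,  # flexible compute paused or deferred
    }

# During a grid event the operator lowers grid_limit_mw, and curtailed_mw
# tells the scheduler how much flexible work to pause.
state = SiteState(grid_limit_mw=20.0, battery_soc=0.5, onsite_gen_mw=30.0)
print(plan_power_budget(state, demand_mw=100.0))
```

The point of the priority order is the one the article makes: the facility keeps serving AI workloads from local resources first, and only sheds flexible load when the grid signal and storage together cannot cover demand.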
According to NVIDIA founder and CEO Jensen Huang, AI factories represent the foundation of a new “intelligence era,” in which computing, energy, cooling, and networking must be designed as a unified system. Emerald AI founder and CEO Varun Sivaram emphasized that AI facilities should not remain isolated energy consumers, but instead play an active role in stabilizing and optimizing power networks.
This concept could have significant implications for the U.S. energy landscape. Power systems today are typically built to handle peak demand, leaving much capacity underutilized during off-peak periods. Flexible AI factories could help unlock this unused potential by adjusting consumption patterns and even supplying energy back to the grid during critical moments.
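The "unused potential" argument lends itself to a back-of-the-envelope calculation: if a grid is sized for its peak, every off-peak hour leaves headroom that a fully flexible load could absorb without raising that peak. The numbers below are invented for illustration; they are not figures from the initiative.

```python
# Hypothetical example: a 100 MW peak-sized system with a typical
# daily demand curve. The gap between capacity and demand in each
# hour is capacity a flexible AI load could soak up.

def flexible_headroom(peak_capacity_mw: float, hourly_demand_mw: list[float]) -> float:
    """Average MW a fully flexible load could absorb without raising the peak."""
    headroom = [peak_capacity_mw - d for d in hourly_demand_mw]
    return sum(headroom) / len(headroom)

# A toy 24-hour profile: low overnight, peaking in the evening.
demand = [60, 55, 50, 50, 55, 65, 75, 85, 90, 95, 100, 95,
          90, 85, 85, 90, 95, 100, 100, 95, 85, 75, 70, 65]
print(flexible_headroom(100.0, demand))  # average headroom in MW
```

Even in this toy profile, roughly a fifth of the system's capacity sits idle on average, which is the slack a curtailable AI facility could exploit while backing off during the peak hours.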
At the same time, AI infrastructure itself is becoming one of the most valuable forms of energy use, converting electricity into high-value outputs such as AI models, insights, and digital services. Meeting this demand requires not only advances in computing, but also new approaches to planning and operating energy systems.
Energy partners involved in the initiative are exploring ways to deploy generation resources more efficiently while supporting AI growth. By combining large-scale computing loads with flexible energy strategies, the model aims to accelerate deployment timelines, improve grid reliability, and create broader economic benefits.
Initial pilot programs have already been conducted across several data centers worldwide, testing how AI workloads can adapt to changing power conditions. Looking ahead, large-scale implementation is expected to follow, including a planned deployment at an AI research facility in Virginia built on NVIDIA’s latest infrastructure.
Overall, this collaboration signals a shift in how AI and energy systems evolve together. Rather than treating data centers as isolated power consumers, the industry is moving toward a model where AI infrastructure becomes an integrated part of the energy ecosystem—supporting both technological advancement and grid resilience at the same time.

