The AI infrastructure buildout is moving faster than most people realize, and the power grid is struggling to keep up. In Peak Nano's April WattsNext webinar, Frank Barry of IT Brand Pulse and Shaun Walsh, Peak Nano CMO, walked through what's driving that gap, what the industry is doing about it, and what the shift from AC to DC power means for the data centers being built right now. Watch the full session here.
The Power Grid Was Not Built for This
To put the demand in perspective: a traditional large data center consumes around 100 megawatts of power, while the largest AI data center sites being built today draw up to 7,000 megawatts, enough to power the entire city-state of Singapore. In just a few years, demand on the grid has grown dramatically, and to say the grid isn't prepared to absorb it is an understatement.
That demand is being driven by inference, the work of users actually interacting with AI models. Most published data on AI power consumption focuses on training, but over the lifetime of a mature model, inference accounts for 80 to 90% of total power usage. As AI agents multiply across industries, that share only grows.
Utilities and regulators have not been able to respond at this pace. Permits, grid upgrades, and new generation capacity all move on timelines measured in years. AI infrastructure does not.
Industry Is Building Around the Bottleneck
When the grid could not keep up, the companies that needed the power started solving the problem themselves. xAI sourced power from Mississippi when Tennessee could not deliver quickly enough. Meta is building its own gas-fired power plants in Louisiana. Microsoft is partnering with Helion Energy on a commercial fusion plant near Spokane and with Constellation Energy to restart the Three Mile Island nuclear facility in Pennsylvania.
These are not small workarounds. These are trillion-dollar companies deciding that waiting for public infrastructure is not an option. Jensen Huang of NVIDIA framed the stakes plainly: if China wins the AI race, it will not be because they have better chips. It will be because they have more power available to apply to those chips. The CEO of GE Vernova, the world's largest manufacturer of natural gas power plants, said on a recent earnings call that they are sold out through 2030 and have no factory capacity to build more. New generation capacity cannot be the only answer.
That reality is pushing the industry toward a different question: if you can't build more power fast enough, how do you get more out of the power you already have?
The Architectural Shift That Changes the Math
The answer is increasingly moving from AC to DC power distribution inside the data center itself.
In the Race to Win AI Power webinar, Shaun described how every AC data center loses power at each conversion step — from the grid into the facility, through the UPS systems, transformers, and distribution panels, down to the rack level. Those losses add up. With trillions being invested in AI infrastructure globally, roughly 15 cents of every dollar is currently wasted because facilities continue to run AC architectures. That works out to approximately $150 billion in annual losses.
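To see how those per-stage losses compound into a figure like 15 cents on the dollar, here is a rough sketch. The per-stage efficiencies and the spend figure below are illustrative assumptions, not numbers from the webinar:

```python
# Illustrative per-stage efficiencies for a conventional AC power chain.
# These specific values are assumptions for demonstration, not measured figures.
stage_efficiency = {
    "grid-to-facility transformer": 0.98,
    "UPS (double conversion)": 0.98,
    "distribution transformer/panel": 0.97,
    "rack-level AC-to-DC power supply": 0.92,
}

# Losses multiply: power surviving the chain is the product of stage efficiencies.
overall = 1.0
for stage, eff in stage_efficiency.items():
    overall *= eff

wasted = 1.0 - overall
print(f"End-to-end efficiency: {overall:.1%}")        # ~85.7%
print(f"Fraction lost to conversions: {wasted:.1%}")  # ~14.3%

# At trillion-dollar scale, that fraction becomes real money
# (hypothetical annual spend, for illustration only).
annual_spend_usd = 1.0e12
print(f"Implied annual loss: ${wasted * annual_spend_usd / 1e9:.0f}B")
```

Four stages each losing a few percent compound to roughly the 15% loss the webinar describes, which is why removing conversion stages, rather than improving any single one, is where the big gains are.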
Switching to DC eliminates most of those conversion stages: power enters the facility and flows directly to racks whose electronics already run on DC natively, so far less energy is lost in transit. The efficiency gain is straightforward, but a second benefit tends to be overlooked: fewer conversion stages mean fewer cable runs throughout the facility, cutting copper requirements by nearly 50%. Given current copper prices, and the fact that 80% of AC data center infrastructure components come from China, that reduction matters well beyond the material cost savings alone.
NVIDIA introduced their DC data center architecture standard at GTC earlier this year, and major infrastructure players such as Siemens, Eaton, Schneider Electric, GE, and Vertiv have aligned behind it. The standard exists. The economics are clear. The adoption is underway.

Where NanoPlex Fits Into the DC Data Center
The DC transition also changes what lives inside the rack. In an AC data center, battery backup systems are distributed throughout the facility and inside each rack to handle fluctuations and outages. In a DC architecture, that battery function moves to a centralized, shared system. What takes its place inside the rack is a capacitor.
GPU workloads, and inference in particular, create burst demands: large amounts of energy that need to be delivered instantly and precisely. Batteries are not built for that, but capacitors are. Capacitors can cycle far faster, respond to surges immediately, and keep the power delivery synchronized with what the compute infrastructure actually needs in real time.
This is the role NanoPlex HDC and LDF are built to fill, enabling the high-speed, precise power delivery that keeps GPUs running at full throughput inside DC data center racks. NanoPlex LDF is an ultra-low-dissipation-factor film that can operate continuously at 135°C while delivering maximum power to GPUs with minimal conversion loss. At the high-voltage end, NanoPlex HDC provides single-material 800–1200 VDC ratings with no electrolytic wear-out mechanism, supporting the 15-year SLAs that AI factories require.
A single capacitor thermal failure can knock out an entire rack of GPUs, making the 135°C continuous operation of NanoPlex HDC and LDF increasingly important.
What the Numbers Actually Mean
Run the numbers on the DC transition, and the impact becomes concrete. On the same megawatt of input power, a DC data center running NVIDIA's new architecture supports 6,100 GPUs versus 4,300 in a conventional AC facility. That is a 42% increase in deployable compute from the same power input.
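The 42% figure follows directly from the two GPU counts cited in the session:

```python
# Checking the webinar's GPU-density comparison at the same power input.
dc_gpus = 6100  # DC architecture (figure from the session)
ac_gpus = 4300  # conventional AC facility (figure from the session)

extra_gpus = dc_gpus - ac_gpus
uplift = extra_gpus / ac_gpus

print(f"Additional GPUs on the same power: {extra_gpus}")  # 1800
print(f"Deployable-compute increase: {uplift:.0%}")        # 42%
```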
To put that in context, Satya Nadella of Microsoft has noted that Microsoft has over 500,000 NVIDIA GPUs sitting in warehouses inside data center shells that cannot be turned on due to power constraints. The shift to DC would effectively unlock the equivalent of half of those facilities without adding a single new megawatt of generation capacity. At a moment when new gas plants are sold out for four years, that is not a minor efficiency story. That is how the industry bridges the gap.
Our prediction is that of all new data centers built between now and 2035, roughly 80% will be DC. Not because it is newer technology, but because the economics leave no room for the alternative.
On-Demand Webcast
Watch the Full Session: The Race to Win AI Power: Geopolitics in the DC Data Center
To learn more about NanoPlex HDC or LDF for DC data center applications, contact sales@peaknano.com.