AGI
Let's talk about the elephant in the room: Artificial General Intelligence.
AGI has become a North Star for many in the tech world, inspiring billion-dollar investments, breathless media coverage, and a steady cadence of research breakthroughs. Yet behind the headlines lies a sobering constraint that rarely features in keynote slides: energy. Training today's largest language models already demands power on par with small towns; scaling to AGI implies orders of magnitude more compute—and, therefore, electricity—than our grids can realistically supply. Unless we inhabit a future where the marginal cost of energy, data-center construction, and compute cycles trends toward zero, AGI will remain an idea that outpaces its physical substrate.
The High-Voltage Reality of Modern AI
OpenAI's GPT-3—hardly state-of-the-art anymore—consumed about 1,287 MWh of electricity during training, emitting roughly 500 metric tons of CO₂ in the process. The electricity from that single training run could power an average U.S. household for about 120 years [source]. Newer frontier models are reported to use trillions of parameters and training corpora of tens of trillions of tokens, implying proportionally larger energy footprints even after efficiency gains.
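As a quick sanity check on that comparison, here is the arithmetic, using the commonly cited ~1,287 MWh training estimate and an assumed average U.S. household consumption of roughly 10.6 MWh per year:

```python
# Sanity check of the GPT-3 comparison. Both inputs are assumed figures:
# the widely cited ~1,287 MWh training estimate and an average U.S.
# household consumption of ~10.6 MWh per year.

GPT3_TRAINING_MWH = 1_287          # reported training energy, MWh
US_HOUSEHOLD_MWH_PER_YEAR = 10.6   # average U.S. household, MWh per year

household_years = GPT3_TRAINING_MWH / US_HOUSEHOLD_MWH_PER_YEAR
print(f"One GPT-3 training run ≈ {household_years:.0f} household-years of electricity")
# ≈ 121 household-years, in line with the ~120-year figure above
```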
Data Centres Are Already Straining the Grid
The International Energy Agency projects that data centres will account for more than 20% of all electricity-demand growth in advanced economies by 2030 [source]. In the United States, the Department of Energy expects national data-centre load to double or triple by 2028, driven largely by AI workloads [source]. Utilities from Northern Virginia to Dublin warn that they cannot hook up new server farms fast enough, prompting discussions of dedicating small modular reactors to single campuses [source].
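To put "double or triple by 2028" in perspective, here is a rough sketch of the implied growth rate; the 2024 baseline year is an assumption, not a figure from the projections cited above:

```python
# Implied compound annual growth rate if U.S. data-centre load doubles or
# triples between a 2024 baseline and 2028 (the baseline year is assumed).

years = 2028 - 2024
for multiple in (2, 3):
    cagr = multiple ** (1 / years) - 1
    print(f"{multiple}x in {years} years ≈ {cagr:.0%} per year")
# 2x → ~19%/yr, 3x → ~32%/yr, versus roughly flat U.S. electricity demand
# growth over the past two decades.
```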
Extrapolating to AGI
No one agrees on an exact compute budget for human-level AGI, but ballpark figures hover in the exaflop-year range for training and hundreds of petaflops per second for inference at scale. Even with optimistic hardware efficiency gains, the electricity required would—at minimum—add **several hundred terawatt-hours per year** to global demand. For context, that is comparable to, or larger than, the entire annual electricity consumption of medium-sized industrial nations such as Sweden or Argentina.
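Here is a minimal sketch of how "several hundred terawatt-hours per year" falls out, under one loud assumption: that AGI-scale training and serving infrastructure ends up drawing tens of gigawatts of continuous power worldwide (a stand-in figure for illustration, not a published estimate):

```python
# Sketch of the "several hundred TWh/year" figure. The continuous fleet-wide
# power draws below are stand-in assumptions, not sourced estimates.

HOURS_PER_YEAR = 8_760

def annual_twh(continuous_gw: float) -> float:
    """Convert a continuous power draw in GW to annual energy in TWh."""
    return continuous_gw * HOURS_PER_YEAR / 1_000  # GWh -> TWh

for fleet_gw in (20, 40, 60):  # assumed continuous draw of a global AGI fleet, GW
    print(f"{fleet_gw} GW continuous ≈ {annual_twh(fleet_gw):.0f} TWh/year")
# 20 GW ≈ 175 TWh/yr, 60 GW ≈ 526 TWh/yr; Sweden and Argentina each consume
# on the order of 130-150 TWh of electricity per year.
```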
Proponents often counter that Moore’s Law–like trends in specialised AI accelerators will close the gap. Yet transistor shrinks are slowing, and the thermal limits of dense silicon are unforgiving. Meanwhile, the pace at which LLM capabilities grow is driven less by clever algorithms than by ever‑larger clusters of GPUs burning ever‑larger piles of kilowatt‑hours.
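For a sense of what "ever-larger clusters" means in electrical terms, here is a back-of-envelope estimate for a hypothetical 100,000-accelerator training cluster; the GPU count, per-board power, PUE, and run length are all assumptions:

```python
# Back-of-envelope power draw for a hypothetical 100,000-accelerator cluster.
# GPU count, per-board power, PUE, and run length are all assumptions.

NUM_GPUS = 100_000
WATTS_PER_GPU = 700   # H100-class board power, assumed
PUE = 1.2             # facility overhead for cooling and power delivery

facility_mw = NUM_GPUS * WATTS_PER_GPU * PUE / 1e6
print(f"Cluster draw ≈ {facility_mw:.0f} MW continuous")   # ≈ 84 MW

run_gwh = facility_mw * 24 * 90 / 1_000                    # 90-day training run
print(f"≈ {run_gwh:.0f} GWh per 90-day run")               # ≈ 181 GWh
```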
The Invisible Costs of "Free" Compute
When investors tout “infinite scale” in the cloud, they abstract away land, water, and steel. A single hyperscale facility can gulp millions of litres of water per day for cooling and require hundreds of kilometres of high‑voltage transmission upgrades. These are capital‑intensive projects with multi‑year lead times; they do not bend to quarterly road‑maps or the hype cycles of tech Twitter.
To reach a world in which compute feels limitless, two and only two conditions must hold:
1. Near‑zero‑marginal‑cost energy, likely via abundant renewables paired with long‑duration storage or breakthrough nuclear technologies.
2. Ultra‑cheap, modular data‑centre construction, perhaps through prefabricated, standardised units sited where power is stranded rather than where network latency is low.
Neither condition is anywhere near fulfilment. Utility‑scale renewables still face intermittency constraints and transmission bottlenecks, while next‑generation nuclear remains stuck in regulatory limbo. Even if breakthroughs arrived tomorrow, retrofitting the global grid would take decades.
A More Sober Timeline
Does this mean AGI is impossible? Not at all. It means timelines that ignore energy and infrastructure are, at best, speculative fiction. Before we celebrate sentient chatbots, we must solve the mundane but colossal task of decarbonising and expanding our power systems.
Future historians may well look back and note that the bottleneck to machine intelligence was not math but megawatts.