Certain shifts in the world don’t announce themselves loudly.
They show up as a line in a report — a number that makes you read it twice.
This one did that for me.
In a piece for TechCrunch on the latest International Energy Agency (IEA) analysis, Tim De Chant points out that the world will spend $580 billion on data-centres this year — about $40 billion more than it will spend on developing new oil supplies.
When data-centres outpace oil exploration, the centre of gravity in the global economy has quietly moved.
Not towards software.
Not towards models.
Towards the physical infrastructure that keeps all of it alive.
This is one of those moments where the economic map redraws itself, even if most people aren’t looking.
The News
According to TechCrunch, citing a new IEA report, the world in 2025 will:
spend around $580 billion on data-centres
spend about $40 billion less than that (roughly $540 billion) on developing new oil supplies
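The oil figure isn't stated directly in the headline; it falls out of the two numbers above:

```python
# The two headline figures from the IEA report, as cited by TechCrunch.
DATA_CENTRE_SPEND_BN = 580   # global data-centre spending in 2025, $bn
GAP_BN = 40                  # how much more that is than new oil-supply spending

# Implied investment in developing new oil supplies.
oil_spend_bn = DATA_CENTRE_SPEND_BN - GAP_BN
print(oil_spend_bn)  # 540
```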
TechCrunch’s report highlights several key IEA findings:
Electricity consumption from AI data-centres is expected to grow roughly fivefold by the end of the decade, enough on its own to double today's total data-centre usage.
Conventional data-centres will also use more electricity, though not as dramatically as AI-heavy ones.
Roughly half of this demand growth will occur in the United States, with much of the rest in Europe and China.
Most new data-centres are being built near large cities with populations of over 1 million people.
About half of the projects in the pipeline are at least 200 megawatts each.
Many of these are being built near existing data-centres, forming clusters.
The IEA warns that:
grid congestion and connection queues are increasing
connection queues for new data-centres are already long in many regions
in Northern Virginia, waits for grid connection can be as long as a decade
in Dublin, new interconnection requests have been paused entirely until 2028
The report also notes that:
the grid supply chain is another pinch point: cables, critical minerals, gas turbines and transformers are delaying upgrades
companies like Amperesand and Heron Power are working on solid-state transformers, which can better integrate renewables, react more quickly to grid instabilities and handle a broader range of conversions — but deployments are still at least 1–2 years away
On the supply side, the IEA expects that by 2035:
renewables will provide the majority of new data-centre power, regardless of how aggressively countries push to lower emissions
solar will be a particular favourite thanks to falling costs
Over the next decade, the IEA projects that data-centres will draw:
about 400 terawatt-hours from renewables
around 220 terawatt-hours from natural gas
about 190 terawatt-hours from small modular nuclear reactors (SMRs), if they deliver as expected
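A quick back-of-envelope on those three figures shows what the mix implies, using nothing beyond the numbers above:

```python
# Projected annual data-centre draw over the next decade, in TWh,
# per the IEA figures cited by TechCrunch.
mix_twh = {
    "renewables": 400,
    "natural gas": 220,
    "SMRs": 190,
}

total = sum(mix_twh.values())  # 810 TWh across the three sources
shares = {src: twh / total for src, twh in mix_twh.items()}

for src, share in shares.items():
    print(f"{src}: {share:.0%}")
# Renewables come out just under half of the projected mix.
```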
That’s the picture TechCrunch paints from the IEA report.
The question for us is what it actually means.
The Surface Reaction
You’ll see headlines about “AI’s insane energy use” or “The cloud is the new coal.”
But the more interesting story here is simpler:
Data-centres have quietly become one of the core industrial assets of the global economy.
They’ve moved from “back-end facilities” to “frontline infrastructure”.
And that shift is being driven by AI workloads that:
train on enormous datasets
require dense compute clusters
demand high reliability
expect low latency
run at scales old data-centre designs weren’t built for
This is why the spending graph now shows data-centres overtaking new oil-supply investment.
The resource that matters most is changing.
What Is Being Built or Changed
If you zoom into the IEA–TechCrunch details, you start to see what’s actually being built.
1. Urban-proximate compute hubs
Most new data-centres are near cities with populations over 1 million.
That’s not accidental.
Cities offer:
proximity to enterprise demand
better network infrastructure
labour pools with data-centre and electrical skills
existing energy infrastructure
But they also come with:
tighter grid constraints
more public scrutiny
stricter regulation
We’re relocating industrial-scale energy use closer to where people live and work.
2. Mega-scale facilities and clusters
Half of new projects are 200 MW or larger.
That’s a very different ballgame from the rack-scale deployments we used to think about.
At this scale:
cooling is a major design problem
internal network topology matters
grid contracts become multi-decade commitments
local politics and regulation are unavoidable
The clustering effect (building new data-centres next to existing ones) is logical from a compute standpoint — but hard on grids.
3. The grid as a hard constraint
Ten-year waits in Northern Virginia and a full interconnection pause in Dublin are not small details.
They’re early warnings.
They tell us:
the internet’s physical foundation is under real pressure
AI growth is colliding with grid capacity and upgrade timelines
planning cycles are out of sync: software moves in weeks, infra moves in years
That mismatch is where outages and cost spikes tend to hide.
4. Grid tech playing catch-up
Companies like Amperesand and Heron Power are working on solid-state transformers that can:
respond faster to demand fluctuations
better handle renewables
offer more flexibility in power conversion
But innovation at the hardware and grid layer takes time.
1–2 years to first deployment.
More years to scale.
Meanwhile, AI demand curves are much steeper.
5. Renewables as default, not optional
The IEA’s outlook — summarised by TechCrunch — suggests that by 2035, most new data-centre power will come from renewables, with solar playing a central role.
The projected mix (400 TWh renewables, 220 TWh gas, 190 TWh SMRs) is as much about economics as it is about sustainability.
Renewables are no longer just a “green story”.
They are a cost and availability story.
The BitByBharat View
From a builder’s perspective, this is the kind of macro shift that quietly defines what’s possible for the next decade.
I’ve spent enough time around infra teams to know that when the foundation starts to strain, everything built on top eventually feels it:
latency becomes unpredictable
capacity fluctuations appear at the edges
cost structures become more volatile
deployment regions become strategic, not just convenient
We are watching the AI stack transition from being “cloud-backed” to “grid-constrained”.
That’s a very different mental model.
In the cloud era, you think about:
regions
availability zones
scale-out patterns
In the emerging era, you also have to think about:
which regions can actually get new power
how long interconnection will take
which geographies have paused new data-centre connections
what mix of energy sources underlies your compute
If you’re building serious AI products or infra, you are now — whether you like it or not — in the energy business, at least indirectly.
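What does "grid-constrained" thinking look like in practice? None of the names or numbers below come from the report; this is a hypothetical sketch of how a deployment-planning check might fold power availability into region selection:

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    interconnect_wait_years: float  # estimated wait for a new grid connection
    connections_paused: bool        # a Dublin-style moratorium on new requests
    renewable_share: float          # fraction of the local generation mix

def viable(region: Region, max_wait_years: float = 3.0) -> bool:
    """A region is worth shortlisting only if it can actually get power in time."""
    return not region.connections_paused and region.interconnect_wait_years <= max_wait_years

# Illustrative values only -- not from the IEA report.
candidates = [
    Region("region-a", interconnect_wait_years=10.0, connections_paused=False, renewable_share=0.3),
    Region("region-b", interconnect_wait_years=0.0,  connections_paused=True,  renewable_share=0.5),
    Region("region-c", interconnect_wait_years=2.0,  connections_paused=False, renewable_share=0.6),
]

shortlist = [r.name for r in candidates if viable(r)]
print(shortlist)  # only the region with available, timely power survives
```

The point is not the specific cutoffs but the shape of the check: power availability becomes a first-class filter, before cost or latency.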
The Dual Edge (Correction vs Opportunity)
Correction
The “infinite cloud” assumption is over.
If your roadmap quietly assumes:
unlimited GPUs
easy region expansion
stable energy pricing
fast infra onboarding
…this IEA data is a healthy correction.
You don’t get infinite capacity on a finite grid.
Not at the pace AI is trying to grow.
Opportunity
At the same time, the transition opens up an entirely new problem space for builders:
tools that help teams plan AI deployments around grid constraints
schedulers that optimise training around renewable availability
cost-modelling tools that consider energy sources, not just instance prices
smarter replication strategies across clusters and regions
observability for energy use and carbon impact per workload
optimisation tools for running AI workloads in data-centres with better energy mixes
These are not abstract ideas.
They’re emerging needs, driven by very real physical limits.
This is where infra-aware startups can create leverage.
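To make one of these ideas concrete, here is a minimal sketch (all forecast numbers hypothetical) of a scheduler that places a deferrable training job in the contiguous window with the highest forecast renewable share:

```python
# Hypothetical hourly forecast of the renewable share of grid supply (0-1).
forecast = [0.20, 0.25, 0.60, 0.75, 0.70, 0.35, 0.30, 0.65]

def best_window(forecast: list[float], hours_needed: int) -> tuple[int, float]:
    """Pick the contiguous window whose average renewable share is highest."""
    best_start, best_avg = 0, -1.0
    for start in range(len(forecast) - hours_needed + 1):
        avg = sum(forecast[start:start + hours_needed]) / hours_needed
        if avg > best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

start, avg = best_window(forecast, hours_needed=3)
print(f"run job at hour {start}, avg renewable share {avg:.0%}")
```

A production version would need real forecasts and checkpoint-aware preemption, but the core optimisation is this simple.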
Implications (Founders, Engineers, Investors)
For Founders
If your product is compute-heavy, treat infrastructure as a first-order concern.
Questions to ask:
Which regions can reliably support your growth?
How sensitive is your product to latency if you need to move regions?
What does your cost curve look like if energy prices spike?
How does your value proposition change if you can’t get capacity where you want it?
The founders who think about this early will be less surprised later.
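For the cost-curve question in particular, a toy sensitivity check (all figures hypothetical) can make the exposure concrete:

```python
# Hypothetical monthly compute bill, split into energy-linked and fixed costs.
energy_cost = 60_000    # $/month tied to electricity prices
fixed_cost = 140_000    # $/month for everything else

def bill_after_spike(energy_cost: float, fixed_cost: float, price_increase: float) -> float:
    """Total monthly bill if electricity prices rise by `price_increase` (e.g. 0.5 = 50%)."""
    return energy_cost * (1 + price_increase) + fixed_cost

baseline = energy_cost + fixed_cost
spiked = bill_after_spike(energy_cost, fixed_cost, price_increase=0.50)
print(f"baseline ${baseline:,}, after a 50% energy spike ${spiked:,.0f}")
# Here a 50% energy-price spike moves the total bill by 15%.
```

Knowing that ratio for your own stack tells you how much a volatile grid can hurt you.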
For Engineers
This is a moment to deepen your understanding of infra:
how data-centres are designed
how power contracts work
how cooling and density trade-offs play out
how geographic distribution affects reliability
Low-level awareness will translate into better system design.
For Investors
The IEA–TechCrunch comparison is a macro signal.
When spending on data-centres exceeds spending on new oil supply, you’re not just looking at a tech trend — you’re looking at a structural reallocation of capital.
Pay attention to:
companies that sit at the intersection of AI and energy
tools that help manage the infra bottlenecks
specialised hardware and grid tech (like solid-state transformers)
infra-layer platforms that optimise AI workloads, not just run them
This is a long runway, not a quarterly story.
Closing Reflection
It’s easy to talk about AI in terms of parameters and benchmarks.
But under every model is a very physical reality — land, steel, cables, transformers, electrons.
When data-centre investment surpasses new oil exploration, it’s a reminder that the real story of AI is not just happening in labs or on product roadmaps.
It’s happening in how we build, power and govern the infrastructure that lets these systems exist.
If you’re building in this era, it’s worth stepping back and asking:
What assumptions am I making about compute, energy and geography — and are they still true?
Because the future of AI won’t just be defined by smarter models.
It’ll be defined by the infrastructure that can actually keep up with them.