AI is no longer constrained by algorithms but by physical systems.
Power, water, materials, and infrastructure, not compute, will determine which companies scale AI successfully and which fall behind. While the focus remains on models and breakthroughs, the real bottlenecks are emerging across the physical value chain that underpins AI—from semiconductor supply to data center operations.
And most organizations are not set up to manage them.
The Real Challenge: A Fragmented System
AI infrastructure is not a single asset—it is a connected system.
From advanced semiconductor manufacturing to data center siting, grid interconnection, cooling, and end-of-life hardware recovery, each stage depends on the others. Yet in most organizations, these challenges are managed in silos (e.g., energy, water, supply chain), leading to delays, cost escalation, and increasing risk.
The result: systemic issues go unaddressed, slowing deployment and constraining growth.
Three Physical Constraints Now Defining AI Scale
1. Energy and Grid Capacity: Speed to Power = Speed to Revenue
Data centers already consume a meaningful share of global electricity, and AI workloads are accelerating that demand. Generation and grid infrastructure, however, are not keeping pace.
In key markets, power availability—not capital—is the gating factor for new data center development. Interconnection delays, grid congestion, and permitting challenges are extending project timelines by years. In some cases, developers are turning to on-site or “behind-the-meter” generation to move forward—introducing new cost, regulatory, and air quality considerations. At the same time, many companies have significant decarbonization goals, requiring them to source substantial amounts of clean energy for their data center projects—a task made more difficult by complex global supply chains and regulations.
For AI leaders, access to reliable, scalable, and low-carbon power is becoming the critical path to growth.
2. Water and Community Barriers: The New Permitting Risk
Cooling requirements can place significant pressure on local water systems, particularly in water-stressed regions. While consumption varies by design and climate, large hyperscale facilities using evaporative cooling can require millions of gallons per day.
At the same time, community scrutiny is rising. Local stakeholders are increasingly challenging new developments based on water use, land impact, and perceived local benefit.
Taken together, water is no longer just an operational consideration; it is a driver of permitting timelines, community acceptance, and long-term site viability.
3. Materials and Supply Chain: The Hidden Limitation
Behind every AI model is a deeply complex supply chain.
Semiconductor manufacturing, critical minerals, and advanced components are all under pressure from surging demand. At the same time, accelerated hardware refresh cycles—often as short as 18-24 months for AI infrastructure—are driving significant increases in e-waste and material demand.
Critical mineral inputs such as gallium and germanium face concentrated supply chains and geopolitical risk, while embodied carbon from manufacturing now represents a growing share of lifecycle emissions.
The “silicon side” of AI is increasingly determining the pace and sustainability of scale.
The Core Problem: Solving Interconnected Challenges in Isolation
Energy, water, and materials are not separate issues; they are interdependent constraints.
Yet most organizations address them independently:
- Expanding supply without circularity increases long-term material scarcity.
- Setting decarbonization targets without supplier alignment and clean energy sourcing plans limits execution.
- Advancing site development without integrated power (including clean energy) and water planning creates delays.
- Designing end-of-life programs without embedding circularity into the full data center lifecycle means valuable materials are lost as waste.
Fragmented approaches ultimately reinforce the very bottlenecks they aim to solve.
What It Takes to Scale: An Integrated, Value Chain Approach
Overcoming these constraints requires a shift from isolated initiatives to integrated system management across the full AI infrastructure lifecycle.
Five priorities stand out:
1. Design for Circularity and Longevity
Hardware design decisions directly shape material demand, emissions, and refresh cycles. Standardization, modularity, and refurbishment strategies can reduce capex, ease supply pressure, and limit waste.
2. Align the Supply Chain Early
Supplier readiness (e.g., on emissions, materials, and production capacity) will determine whether growth targets are achievable. Early alignment prevents Scope 3 emissions from becoming a constraint on expansion.
3. Integrate Power, Water, and Site Planning
Coordinated forecasting across energy, transmission, and water availability enables smarter siting decisions and avoids stranded or delayed assets.
4. Co-Plan with Public Infrastructure
AI growth depends on external systems (e.g., grids, utilities, and local resources). Closer collaboration with utilities, regulators, and regional planners is essential to accelerate timelines and ensure capacity.
5. Shift to System-Level Metrics
Tracking individual metrics is no longer enough. Organizations need integrated visibility across material availability, power reliability, water use, emissions, and end-of-life performance to inform real investment decisions.
From Challenge to Competitive Advantage
AI will ultimately scale within the limits of the systems that power it.
Companies that treat infrastructure as a connected system spanning silicon to server will move faster, deploy capital more efficiently, and avoid regulatory and community friction. Those that do not will face delays, cost escalation, and constrained growth, regardless of how advanced their technology is.
Turning Physical Constraints into Enterprise Value
ERM partners with technology leaders and their suppliers across the full value chain—from semiconductor supply chains to data center siting and operations—to de-risk and accelerate AI infrastructure at scale.
By integrating energy, water, materials, and sustainability into a single system approach, we reduce delays, avoid stranded capacity, and deploy capital more efficiently—enabling reliable, scalable growth in the AI era.