Expert Opinion

AI’s $7 trillion infrastructure build-out and the emerging fault lines

The first of a two-part series exploring the uncharted opportunities and risks AI presents for corporates and real estate
The Nvidia campus in Santa Clara. (CoStar)
By James Wallace
February 6, 2026 | 2:55 P.M.

Artificial intelligence has become a macroeconomic force reshaping future expectations around productivity, employment, industrial competitiveness and sovereign priorities in a post-globalisation world. It has also accelerated capital expenditure on a scale dwarfing all previous investment super-cycles.

McKinsey estimates that by 2030 the world will need around $5.2 trillion for AI-specific data centres, power generation and grid infrastructure, and another $1.5 trillion for traditional IT, taking total AI-linked infrastructure capex close to $7 trillion.

While AI's long-term potential to solve important societal, scientific and engineering challenges remains compelling, the pace and structure of the infrastructure response have moved well ahead of demonstrated monetisation, creating systemic risks within an increasingly interdependent AI ecosystem.

The dominant investment posture is optimistic on AI’s potential but sceptical on timing. Even if the long-term applications of AI prove themselves in extraordinary ways over time, the capex roll-out needs to be paced against realistic return on investment timelines. Optimistic estimates put global generative AI revenues at between $30 billion and $55 billion in 2025, including a $20 billion annualised run-rate at OpenAI, the largest AI model developer. S&P Global estimates aggregate revenues will reach only $85 billion by 2029 – a three-fold increase in four years, but insufficient to keep pace with the frenetic rise in required capex.

This unprecedented spending wave is underpinned by rapid AI adoption across consumer and enterprise markets, alongside rising compute intensity as models evolve towards reasoning-based, multi-modal and agentic workloads. The prevailing economic assumption is that AI utilisation and monetisation will scale sufficiently quickly to absorb an exponentially rising fixed-cost base that is subject to binding system constraints related to power, labour, policy and geopolitics.

The data centre real estate drive

Data centres sit at the frontline of the AI investment super-cycle. Generative AI data centres are being underwritten on forward-looking assumptions about adoption and pricing power, often before execution timelines and grid access are fully secured. ROI horizons remain opaque but are coming under increasing scrutiny from asset managers, developers and capital markets.

The AI infrastructure build-out is predominantly driven by five hyperscalers monetising AI through large language models, alongside Nvidia, the market-leading chipmaker, which supplies the computational inputs that make AI viable. Nvidia’s competitors remain at a relative distance.

JPMorgan’s base case points to 122 GW of global data-centre capacity additions between 2026 and 2030, implying a steep acceleration from current installation rates. However, growth is constrained by physical limits of power systems, labour pools and capital markets – with sustainable expansion only possible within a risk-aligned ROI framework.

Risks building

Beneath the surface, complex risks are emerging from the interaction of four forces. The first is a highly concentrated group of capital-intensive major technology firms, whose AI revenues and investment decisions are increasingly intertwined, creating circular capital flows, the illusion of organic demand and systemic fragility.

The second is AI adoption curves that remain durable but unevenly monetised, leaving large portions of the infrastructure build-out underwritten on forward assumptions rather than realised revenues.

The third is funding structures that elevate and obscure risk-taking, including highly leveraged off-balance-sheet vehicles and growing reliance on capital markets, alongside physical and regulatory constraints – power, grid access, land, labour and permitting – that limit how quickly capital can be converted into operational data centres.

The last is a tightly coupled ecosystem, where stress at, or the retreat of, a single large participant could create spillover contagion risks through revenues, investment funding, remaining performance obligation commitments, debt serviceability, rental payments and asset valuations.

These interrelated risks outline structural vulnerabilities beneath the surface that are likely to grow over time.

Monetisation risk

AI revenue growth may fail to materialise quickly enough to justify continued capex intensity. Even if long-term demand proves durable, slower-than-expected monetisation would pressure hyperscalers' cash flows, data-centre valuations, funding access and borrowing costs, particularly in highly leveraged off-balance-sheet structures.

According to a Nasdaq survey of global chief executives and senior leaders, around 71% of AI initiatives remain in early exploration stages, struggling to demonstrate ROI and to identify viable use cases.

Circular capital risk

AI investment increasingly involves circular flows among four interdependent groups: hyperscalers (for example, Microsoft's Azure, Amazon's AWS, Google's GCP and Oracle's OCI), AI model developers (such as OpenAI, Anthropic, xAI and Google's Gemini), data centre developers (Blackstone's QTS, Digital Realty and Equinix) and chipmakers (Nvidia, AMD, Broadcom).

This creates a reflexive feedback loop in which hyperscalers fund AI developers' models, those developers purchase compute from hyperscalers, and chipmakers invest in AI developers while selling GPUs to hyperscalers. This interdependence can accelerate the AI infrastructure build-out in benign conditions but also introduces risks – such as in the event of a major participant's retreat, a tightening of funding or slower-than-expected AI monetisation. Circular capital flows also create the illusion of organic demand. Consider these four examples of how AI capital loops back into the same small set of cloud providers and chipmakers.

Microsoft’s $13 billion investment in OpenAI was partially structured as Azure cloud credits. OpenAI is contracted to purchase an incremental $250 billion of Azure’s cloud services over an undisclosed time period, allowing Microsoft to recognise OpenAI’s Azure usage as cloud revenue.

Nvidia's investments in AI start-ups often result in those firms purchasing compute from hyperscalers that are, in turn, Nvidia's largest customers.

Nvidia, Amazon and Microsoft have agreed to invest up to $23 billion in Anthropic. Anthropic’s models run on Nvidia GPUs, the company uses Amazon Web Services as its primary cloud and training partner, and it has committed to purchase $30 billion of Azure compute over the coming years.

Elon Musk’s xAI has structured a $20 billion special purpose vehicle (split $7.5 billion equity and $12.5 billion debt) that will buy Nvidia GPUs and lease them back to xAI. Nvidia is reportedly contributing $2 billion of equity to the SPV, partially self-financing the purchase of its own hardware. The SPV’s debt is secured by the GPU hardware, not xAI’s corporate assets.

What appears as diversified demand masks a tightly coupled ecosystem vulnerable to correlated failure. Hyperscalers, AI labs and chipmakers are reinvesting in each other’s promises in a multi-trillion-dollar loop that creates multiple systemic technology firms.

Funding and balance-sheet risk

In the past 12 months, exponential capex growth has outstripped hyperscalers’ ability to rely solely on free cash flows, prompting a pivot to capital markets, highly leveraged off-balance-sheet vehicles and third-party equity. This elevates the risk profile of the infrastructure build-out, with heightened sensitivity to macro variables (e.g. interest rates, inflation), credit spreads, equity valuations and ratings agencies, particularly for hyperscalers and firms across the broader AI ecosystem with concentrated customer bases.

Concentration and execution risks

Across the AI ecosystem, hyperscalers now carry an estimated $1.5 trillion in remaining performance obligations – contracted sales not yet recognised as revenue – concentrated among a small number of AI developers, including OpenAI, Anthropic, Meta and xAI.

Adding in Google Cloud’s $240 billion backlog, which is largely led by AI enterprise demand, total AI-linked contracted revenue commitments approach $1.7 trillion. This customer concentration attaches counterparty risk to otherwise perceived long-duration, high-quality future revenue streams.

This exposure is amplified by execution bottlenecks. Securing sufficient power, timely grid interconnections, skilled labour, permitting and construction capacity has become increasingly constrained, raising the risk of delivery slippage even where demand is contractually committed.

Table 1: Estimated RPOs top $1.5 trillion from a handful of AI labs

Hyperscaler | Estimated RPO | Composition, circularity and timelines
Microsoft | $625bn | OpenAI accounts for 45% ($281.3bn) of the revenue backlog, including 25% due within 12 months (2.5 years weighted duration). Microsoft's $13bn OpenAI investment has circled back as Azure fees for GPT training.
Oracle | $523bn | OpenAI represents $300bn, with the bulk of the balance from Meta, Nvidia and xAI deals. Around 25% due within 12 months. Hyperscaler investments circle back as Oracle Cloud Infrastructure (OCI) fees for AI training.
Google Cloud | $155bn | Nine of the top 10 AI labs use Google Cloud to build and run their AI models. Around 20% due within 12 months.
AWS | $160-180bn | Anthropic is an AWS customer; Amazon is a minority owner of Anthropic. Amazon’s $8bn Anthropic investment circles back as AWS cloud fees for training Anthropic’s Claude. Stability AI is also a significant AWS customer. Around 25% due within 12 months.
Total | ~$1.5T | Hyperscalers’ RPOs are heavily concentrated on a few AI labs, with circular investments amplifying risks.

Source: Earnings reports and calls

 

Single point of failure risk

In a downside scenario where a major AI lab experiences a funding shortfall, second-order effects could cascade across revenues, build schedules and financing structures. Highly leveraged development SPVs would be exposed to widening credit spreads, weaker debt serviceability and rental payment risk. While not a base case, this scenario illustrates how tightly interwoven the AI ecosystem has become.

OpenAI is one of the largest and most visible demand anchors within the AI infrastructure ecosystem, akin to an anchor tenant. Its upcoming fundraising round, ahead of a possible initial public offering later this year, can be viewed as a leading indicator of sector liquidity. With more than $1 trillion of spending lined up over the next decade, OpenAI plans to raise up to $100 billion in the first half of 2026. The private round aims to strengthen its balance sheet before the company faces the scrutiny of public markets. Contingent on a successful fundraising, a regulatory filing could happen as early as the second half of 2026.

OpenAI’s fundraising success is vital to the broader ecosystem, the durability of the $1.5 trillion of RPOs and the assumptions underpinning data centre developments, leases and bond credit spreads. Reported participants include Amazon ($50 billion), SoftBank ($30 billion), Nvidia ($20 billion) and MGX ($50 billion), which would put OpenAI’s valuation at between $750 billion and $830 billion. OpenAI has a $100 billion revenue goal by 2028, while its projected cash burn over the five years to 2029 is $140 billion.

Nvidia has slowed talks with OpenAI over its investment, with chief executive Jensen Huang emphasising that its September statement – to progressively invest up to $100 billion – was non-binding. The Wall Street Journal reported that Huang privately criticised “a lack of discipline in OpenAI’s business approach and expressed concern about the competition it faces from the likes of Google and Anthropic”. This led to a sharp drop in AI-related stocks, prompting Huang to perform damage control, dismissing reports implying dissatisfaction with OpenAI.

“That’s nonsense. We will invest a great deal of money,” Huang said. “I believe in OpenAI. The work that they do is incredible. They’re one of the most consequential companies of our time.” The episode reveals the underlying fragility and interdependence of the ecosystem. Nvidia cannot risk undermining confidence in OpenAI as it would devastate the AI trade and broader ecosystem. Nvidia told reporters it is nearing a deal to invest roughly $20 billion in OpenAI’s latest funding round.

System constraints and policy risks

Data-centre growth is limited by the availability of land, power, grid capacity, water, commodities and skilled labour. Capital can be committed without any guarantee that capacity will be delivered within economically viable timeframes. Oxford Economics forecasts that US electricity demand from data centres will outpace supply additions through 2030, with bottlenecks shifting from generation to grid integration and transmission. Data centres require continuous, highly reliable power at large scale, creating a growing reliance on renewables, which can be deployed relatively quickly but have intermittent output and require storage capacity.

Lease structures increasingly resemble utility-like cash flows, characterised by high entry barriers, capital intensity, specialist expertise and power reliance. But this can obscure familiar real estate risks: tenant concentration, obsolescence, reletting uncertainty, liquidity constraints and sustainability. Older facilities face expensive upgrade requirements, which can create localised stranded assets.

These challenges are compounded by industrial policy, export controls, energy regulation and national security considerations. In this context, geography matters, with hyperscalers shifting builds from saturated hubs such as Northern Virginia to secondary and remote metros, creating uneven execution risks and significant dispersion in build-out costs and return assumptions.

Conclusion

These risks are surfacing unevenly across the AI ecosystem. Rising capital intensity, leverage and system-level constraints are making the estimated $7 trillion infrastructure build-out increasingly sensitive to execution, funding conditions, sentiment and timing. As the build-out expands, systemic risks increase, raising the possibility that a future liquidity shock could spark a rapid negative feedback loop requiring a coordinated market or policy intervention.

There are a few obvious pressure points: refinancing, capital-raising rounds for AI labs, or the slowdown of a major participant. These events would have material second-order impacts on revenues, credit spreads and the valuations of data centre and power assets. These outcomes are neither base-case forecasts nor remote tail risks, but plausible risk-weighted scenarios that warrant closer scrutiny in development underwriting, lease structuring, financing assumptions and geographic allocation.

Part Two will cover how the AI investment cycle is permanently changing big tech, the read across from the big five hyperscalers’ quarterly earnings and outlooks for 2026.

James Wallace is an independent financial journalist and strategic writer for global asset managers and advisory firms, focused on real assets, macro, risk regimes, refinancing, and portfolio construction. Subscribe to his free Substack here.
