Last Updated on February 6, 2026 by Chicago Policy Review Staff
In the 1840s, Britain experienced one of the largest investment booms in modern economic history. A transformative general-purpose technology had arrived, promising faster transport, lower costs, and national market integration. Railway mania was born, and capital poured in.
At its peak, railway investment reached roughly seven percent of British GDP, Parliament authorized thousands of miles of new track, and financial institutions reorganized themselves around the expectation that rail would reshape the economy. Those expectations were broadly correct. Yet the boom ended in collapse. Share prices fell by two-thirds, projects were abandoned mid-construction, and investors faced widespread losses. The railways survived and ultimately transformed Britain, but the financial architecture built around them did not.
Artificial intelligence now sits in a similar position. Like railways, AI is a general-purpose technology with clear long-run potential and highly uncertain short-run returns. Enormous sums are being invested not because investors are irrational, but because the upside is plausibly enormous. The question, then, is not whether the AI bubble is “real” or whether valuations will eventually correct. It is whether the financial architecture supporting today’s AI build-out can withstand slower adoption, delayed monetization, and cyclical tightening without turning a rational investment boom into a broader liquidity problem.
Financial Opaqueness
The defining feature of this AI cycle is not just the volume of capital being deployed, but the insular path it travels. Chipmakers invest in AI labs, which then use that capital to buy more chips; cloud providers invest in AI companies, which in turn sign long-term cloud contracts; data-center operators borrow heavily to build capacity for the same companies funding them. By 2025, the five hyperscalers accounted for roughly 19 percent of the S&P 500, with Nvidia and Broadcom adding another 9 percent, underscoring how narrowly this ecosystem is distributed.
At the center of this financial architecture sits Nvidia, which has moved beyond acting solely as a hardware supplier to become a key financier of its own demand. By late 2025, Nvidia had entered into dozens of structured financing and equity arrangements with AI firms, including OpenAI and Anthropic, allowing those companies to fund large-scale infrastructure build-outs that ultimately translate into purchases of Nvidia’s chips. In parallel, cloud providers such as Microsoft, Amazon, and Oracle have committed to long-term cloud contracts worth tens of billions of dollars with the same AI firms in which they hold strategic stakes.
Nowhere is the opacity of the AI cycle more concrete than in CoreWeave, a former crypto-mining firm turned data-center operator. CoreWeave’s IPO in March 2025 was the largest tech listing since 2021, and its share price has more than doubled since, despite the company having no profits, roughly $5 billion in expected revenue against nearly $20 billion in annual spending, and $14 billion in debt, nearly a third of which comes due within a year. Its business model is simple: buy high-end chips, build or lease data centers, and rent compute to AI firms.
Yet CoreWeave’s finances warrant serious scrutiny. The company faces $34 billion in scheduled lease payments through 2028, making its viability heavily dependent on continued demand and favorable financing conditions. As much as 70 percent of its revenue comes from a single customer, Microsoft, with Nvidia and OpenAI supplying much of the rest. Nvidia is simultaneously CoreWeave’s chip supplier and a major investor, meaning capital flows from Nvidia to CoreWeave, back to Nvidia in the form of chip purchases, and onward again as rented compute.
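The circularity described above can be made concrete with a toy ledger. In the minimal sketch below, every dollar amount is invented for illustration; only the direction of each flow comes from the reporting above (Nvidia invests in CoreWeave, CoreWeave buys Nvidia chips, Microsoft contracts for CoreWeave compute). The point is that the same capital can inflate several firms’ gross figures while netting to a much smaller external position.

```python
# Toy ledger of circular AI-ecosystem flows. Amounts are hypothetical;
# only the flow directions are taken from public reporting.
from collections import defaultdict

flows = [            # (payer, payee, $bn) -- illustrative amounts
    ("Nvidia",    "CoreWeave", 3.0),  # equity / structured financing
    ("CoreWeave", "Nvidia",    2.5),  # chip purchases
    ("Microsoft", "CoreWeave", 4.0),  # long-term compute contracts
    ("Nvidia",    "CoreWeave", 1.0),  # further capacity financing
]

# Gross volume: what shows up across the ecosystem's revenue and
# investment headlines.
gross = sum(amount for _, _, amount in flows)

# Net positions: what each firm has actually gained or paid out.
net = defaultdict(float)
for payer, payee, amount in flows:
    net[payer] -= amount
    net[payee] += amount

print(f"gross flows: ${gross:.1f}bn")
for firm, position in sorted(net.items()):
    print(f"  {firm}: net ${position:+.1f}bn")
```

Because the flows are internal to a handful of firms, they sum to zero across the group: gross activity can be many times larger than any net demand arriving from outside the ecosystem.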
The Problem with Leverage
That correlated exposure becomes far more dangerous once it is layered onto a credit-driven investment cycle, and here the AI boom looks structurally different from past tech manias. Unlike the dotcom era, where losses were absorbed largely by equity investors, the AI build-out is increasingly financed through private credit, special-purpose vehicles (SPVs), long-dated infrastructure leases, and asset-backed securities tied to data centers and hardware. Morgan Stanley estimates that AI-related borrowing could rise to $1.5 trillion by 2028, while private-equity firms have already extended roughly $450 billion in private credit to tech, with hundreds of billions more projected.
One increasingly important mechanism in this credit build-out is the rise of GPU-backed loans. CoreWeave has borrowed billions to expand capacity by posting its existing chips as collateral. This structure works as long as demand remains strong and chip values hold. But a recent report from Public Enterprise reveals that the collateral itself is unusually fragile. New chip generations arrive quickly, and older models can lose value fast. If chip prices fall, loans secured against them can suddenly become under-collateralized, prompting lenders to demand early repayment. Forced sales of hardware would then push prices down further, triggering additional margin calls across similar loans.
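The margin-call mechanics can be sketched numerically. In the toy model below, the loan size, covenant threshold, and depreciation rate are all hypothetical (no lender’s actual terms are public); the point is how steady collateral decay mechanically produces a growing shortfall once a loan-to-value covenant is breached.

```python
# Toy model of a GPU-collateralized loan. All figures are hypothetical
# illustrations, not any firm's actual terms.

def ltv(loan_balance: float, collateral_value: float) -> float:
    """Loan-to-value ratio of a collateralized loan."""
    return loan_balance / collateral_value

def margin_call(loan_balance: float, collateral_value: float,
                max_ltv: float) -> float:
    """Repayment (or extra collateral) needed to restore the LTV covenant."""
    shortfall = loan_balance - max_ltv * collateral_value
    return max(0.0, shortfall)

loan = 1_000.0          # $1,000 borrowed against GPUs (hypothetical)
collateral = 1_500.0    # chips initially worth $1,500 (LTV = 0.67)
max_ltv = 0.75          # assumed covenant threshold
annual_decay = 0.30     # assumed 30% yearly resale-value depreciation

for year in range(1, 4):
    collateral *= 1 - annual_decay      # new chip generations erode value
    call = margin_call(loan, collateral, max_ltv)
    print(f"year {year}: collateral ${collateral:,.0f}, "
          f"LTV {ltv(loan, collateral):.2f}, margin call ${call:,.0f}")
```

Under these assumptions the covenant is breached after a single year of depreciation, and the required repayment grows each year the collateral decays, which is the dynamic that can force asset sales and propagate across similarly structured loans.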
Importantly, the largest technology firms such as Meta, Apple, Amazon, Nvidia, and Google are unlikely to face immediate balance-sheet distress, given their diversified revenue streams and cash reserves. The adjustment would instead occur at the margins: among infrastructure providers, suppliers, private lenders, regional economies, and labor markets that depend on continued AI expansion. When revenues arrive later than expected, companies face refinancing pressure, falling collateral values, and forced asset sales. And because many firms are exposed to the same risks, leverage turns what might have been a normal market correction into a broader liquidity problem.
What Failure Looks Like
Failure in this cycle is unlikely to arrive as a singular “pop” moment or as a collective admission that AI was fake or purely hype. Instead, it would unfold gradually. Enterprise adoption may grow more slowly than projections assume: penetration into core business operations remains uneven, with measurable value accruing to only a minority of firms even as budgets swell.
Companies that have made big commitments still owe debt service, lease fees, and other fixed obligations even if AI revenues arrive more slowly than planned. If monetization takes years longer than expected, as a growing body of evidence suggests it might, those obligations do not disappear.
The risk grows because many of the same companies are tied together financially. Revenue, demand, and borrowing are concentrated among a small group of firms, so a slowdown in one place doesn’t stay contained. If an AI lab misses its revenue targets or a cloud provider cuts back on expansion, suppliers see fewer orders, data-center operators feel the strain, and lenders become more cautious across the board.
In theory, problems in private credit should stay contained, because these lenders don’t take deposits from the public. In practice, the risk may still reach households through familiar channels. Pension funds, university endowments, and insurance companies increasingly invest in private credit to boost returns, and banks themselves are more connected to these lenders than they once were. Federal Reserve research shows that large banks’ lending commitments to private-equity and private-credit funds rose from about $10 billion in 2013 to roughly $300 billion in 2023, and the Fed has also noted how life insurers’ structures and exposures to riskier credit have grown and become more complex. If AI-related borrowers struggle to refinance or lenders pull back, the effects show up quickly: construction projects are delayed, hiring slows, credit becomes harder to access, and retirement assets face pressure.
Three interventions could reduce systemic risk. First, require disclosure of related-party transactions including strategic investments and cloud commitments. When cloud providers invest in AI labs that then commit billions in compute contracts, investors and regulators cannot track the circular liability exposure. Second, revise collateral standards for GPU-backed lending. Current asset-based lending treats AI hardware like conventional equipment, despite different depreciation and obsolescence patterns. Regulators should require periodic revaluation based on market liquidity and generation-specific pricing, forcing realistic loan-to-value ratios. Third, extend stress-testing to private credit’s AI infrastructure exposure. Given private credit’s potential $1.5 trillion AI exposure by 2028, with indirect connections to pensions and insurers, systemic risk justifies closer scrutiny before problems become acute.
With these reforms, regulators can reduce the risk of a large-scale bust like that of 19th-century Britain. The parallel to Railway Mania extends beyond metaphor. Both involved transformative technologies that justified enormous investment, and both created financial structures that amplified returns during expansion but transmitted losses during contraction.
The AI economy faces identical risks. Whether models prove transformative matters less than whether debt comes due before revenues materialize. The technology may succeed while the financing fails.
The AI boom may justify its investment level. But justified investment and sustainable financing are different questions. Railway Mania’s lesson is that transformative technologies can generate crises if financing structures become fragile. The railways survived; many investors did not.
Whether today’s AI economy follows a similar path depends less on model capabilities than balance sheet resilience. The choice is not between innovation and caution, but between designing financial architecture that can withstand uncertainty and waiting to see whether current structures survive contact with reality. One approach allows adjustment; the other invites crisis.

