The AI Bubble Risk: How Uncertainty Could Create the Next Tech Crash

AI Summary

Anthropic CEO Dario Amodei highlights a stark divide in the AI industry between sustainable growth and "yoloing" financial risks, noting that while the technology’s capability follows a predictable upward trajectory, the business models supporting it are increasingly precarious. Companies are navigating a "cone of uncertainty," making $50 billion infrastructure bets today for customers they hope to have in 2027, often relying on "circular deals" where chip makers fund the very startups that buy their hardware.


December 21 2025 09:14

The artificial intelligence industry is experiencing something remarkable and potentially dangerous at the same time. Dario Amodei, CEO of Anthropic, sat down for a revealing conversation that laid bare the economic calculations happening behind closed doors at AI companies. His message was both optimistic and cautionary: the technology will deliver on its promises, but some companies are making bets that could haunt them for years.

Anthropic has seen its revenue grow from zero to potentially $10 billion in just three years. That's a 10x increase annually. Yet Amodei won't predict another 10x jump to $100 billion next year, even though the pattern suggests it's possible. This hesitation reveals everything about the precarious position AI companies find themselves in today.

The Cone of Uncertainty

Amodei introduced a concept he calls the "cone of uncertainty" that every AI company must navigate. The problem is straightforward but brutal: you need to decide today how much computing power to buy for the customers you'll serve in 2027, because building data centers takes one to two years. Undershoot your projection and you'll turn customers away to your competitors. Overshoot and you might not generate enough revenue to pay for the infrastructure.

The stakes are enormous. A gigawatt of compute costs roughly $50 billion in capital expenses. Anthropic thinks conservatively about these investments, Amodei said, but acknowledged that some players in the industry are "yoloing" their spending decisions. He wouldn't name names, though the context made it fairly clear he was talking about OpenAI and its ambitious expansion plans.
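
The trade-off can be made concrete with a toy expected-value calculation. The $50 billion-per-gigawatt figure comes from the article; the demand scenarios and revenue-per-gigawatt number below are illustrative assumptions, not any company's actual figures:

```python
# Toy model of the "cone of uncertainty": choose how much compute to buy
# today for uncertain 2027 demand. All figures are illustrative.

COST_PER_GW = 50e9   # capex per gigawatt of compute (figure from the article)
REV_PER_GW = 60e9    # assumed annual revenue a fully utilized gigawatt can serve

def outcome(bought_gw: float, demand_gw: float) -> float:
    """Net result: revenue from the compute you can actually sell,
    minus the capex for everything you built."""
    served = min(bought_gw, demand_gw)  # you can't serve demand you didn't build for
    return served * REV_PER_GW - bought_gw * COST_PER_GW

def expected_outcome(bought_gw: float,
                     demand_scenarios: list[tuple[float, float]]) -> float:
    """Probability-weighted result across (probability, demand_gw) scenarios."""
    return sum(p * outcome(bought_gw, d) for p, d in demand_scenarios)

# A wide cone: 2027 demand might plausibly be 1, 2, or 4 gigawatts.
scenarios = [(0.3, 1.0), (0.4, 2.0), (0.3, 4.0)]

for buy in (1.0, 2.0, 4.0):
    print(f"buy {buy} GW -> expected net ${expected_outcome(buy, scenarios) / 1e9:+.0f}B")
```

Under these made-up numbers, the aggressive 4-gigawatt bet has a deeply negative expected value even though it wins big in the high-demand scenario, which is roughly the "yoloing" dynamic Amodei describes.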

OpenAI is projecting it will reach profitability by 2030, despite currently running at a $4 billion annual loss. The company would need to swing from massive losses to breaking even in just a few years while simultaneously ramping up spending to unprecedented levels. Amodei's skepticism was palpable, even as he carefully avoided directly criticizing his former employer.

The Vendor Financing Problem

The industry has developed what Amodei calls "circular deals," though older observers might recognize them as vendor financing arrangements. Here's how they work: Nvidia or another chip manufacturer invests money in an AI company, which then uses those funds to buy chips from that same manufacturer.

Amodei defended the practice to a point. If you need $50 billion worth of compute over five years but only have revenue projections that make sense year by year, having a chip maker front 20% of the cost makes the math work. You pay for year one upfront, then pay as you go based on growing revenue. For Anthropic, approaching $10 billion in annual revenue, this seems reasonable.
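
The year-by-year logic can be sketched as a simple cash-flow table. The $50 billion total and the 20% vendor share come from the article; the revenue plan and gross margin below are made-up numbers for illustration:

```python
# Sketch of a vendor-financing ("circular") deal: $50B of compute over five
# years, the chip maker fronts 20%, and the buyer covers the rest out of
# revenue as it (hopefully) grows. Revenue plan and margin are assumptions.

TOTAL_COMPUTE = 50e9
VENDOR_SHARE = 0.20  # portion fronted by the chip maker
yearly_compute_bill = TOTAL_COMPUTE * (1 - VENDOR_SHARE) / 5  # buyer's share, spread evenly

def cash_position(revenues: list[float], gross_margin: float) -> list[float]:
    """Cumulative cash after paying the buyer's share of the compute bill each year."""
    cash, path = 0.0, []
    for rev in revenues:
        cash += rev * gross_margin - yearly_compute_bill
        path.append(cash)
    return path

# An optimistic plan where revenue keeps roughly tripling each year.
plan = [2e9, 6e9, 18e9, 40e9, 80e9]
print([round(c / 1e9, 1) for c in cash_position(plan, 0.60)])
```

Under these assumptions the buyer is underwater for the first three years and only the back-loaded growth makes the deal pencil out, which is exactly why the arrangement is defensible at modest scale and dangerous when stacked.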

But the logic breaks down when companies stack these deals to massive scale. If you're betting you'll make $200 billion annually by 2028 to justify your compute purchases, you've entered dangerous territory. The margins need to be there. The customers need to materialize. The technology needs to keep improving on schedule.

Depreciation and the Race for New Chips

One critical factor that could upend all these calculations is chip depreciation. The question isn't really about how long chips physically last. They keep working for years. The issue is that new chips come out constantly, and they're faster and cheaper. Your competitors will have them. You'll need them too.

This creates a treadmill effect where the value of your existing infrastructure degrades faster than you might expect. Anthropic assumes very aggressive efficiency improvements in new chip generations when planning its spending. Again, Amodei emphasized his company takes the conservative approach. But if other companies are assuming their current chip purchases will generate revenue for five or six years without major upgrades, they could be in for an unpleasant surprise.
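
The treadmill effect is the gap between two depreciation curves: the straight-line book schedule a company might assume, and the faster decay in what an old fleet is worth next to ever-cheaper new chips. The fleet cost, book life, and per-year efficiency gain below are all illustrative assumptions:

```python
# The depreciation treadmill, sketched: chips keep physically working, but if
# each new generation is much more cost-efficient, the *competitive* value of
# an old fleet decays faster than a straight-line book schedule suggests.
# All three constants are assumptions for illustration.

FLEET_COST = 10e9
BOOK_LIFE_YEARS = 6      # the optimistic straight-line assumption
EFFICIENCY_GAIN = 1.6    # assumed per-year cost-efficiency gain of new chips

def book_value(year: int) -> float:
    """Straight-line depreciation over BOOK_LIFE_YEARS."""
    return max(0.0, FLEET_COST * (1 - year / BOOK_LIFE_YEARS))

def competitive_value(year: int) -> float:
    """What the old fleet is worth next to chips that keep getting cheaper per
    unit of work: it decays geometrically with the efficiency gain."""
    return FLEET_COST / EFFICIENCY_GAIN ** year

for year in range(7):
    print(f"year {year}: book ${book_value(year) / 1e9:.1f}B "
          f"vs competitive ${competitive_value(year) / 1e9:.1f}B")
```

Halfway through the book life, the fleet in this sketch is carried at half its cost but is worth well under a third of it competitively, which is the "unpleasant surprise" waiting for anyone planning on five or six years of revenue from today's chips.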

The Enterprise Advantage

Anthropic has positioned itself differently than OpenAI and Google, focusing primarily on enterprise customers rather than consumers. This might be the smartest strategic decision in the entire AI industry right now.

When Google released a new model recently, it triggered what OpenAI's Sam Altman reportedly called a "code red" situation. Both companies are fighting over consumer market share. Google is defending its search monopoly. OpenAI built its business on consumer products. This creates an intense, direct competition.

Anthropic sits adjacent to that battle. The company optimizes its models for business needs, particularly coding. The Claude model Anthropic released recently is widely considered the best for coding tasks. But more importantly, enterprise customers are sticky. Even the raw API business, where companies just access the model directly, creates switching costs. Customers build workflows around specific models. Their downstream users expect certain behaviors. The models have different personalities and require different prompting strategies.

This isn't like consumer apps where users switch based on whichever tool released the best update last week. Enterprise relationships take time to build and time to unwind. That stability makes the business model more predictable and the revenue projections more reliable.

The Revenue Reality Check

Anthropic's revenue trajectory is genuinely impressive: $100 million in 2023, $1 billion in 2024, and heading toward $8-10 billion in 2025. That kind of growth makes the massive infrastructure investments seem reasonable, at least for Anthropic.

But here's where the math gets tricky for the industry as a whole. Microsoft, Amazon, and other cloud providers are planning to spend $100 billion or more annually on AI infrastructure. Nvidia's valuation assumes continued explosive growth in chip sales. Every major tech company is racing to build or expand data centers.

All of this spending is a bet that AI services will generate enough revenue to justify the costs. The technology clearly creates value. Anthropic's customers are seeing real productivity gains. The models keep getting better at every task. But will the monetization happen fast enough to match the investment timeline?

Amodei's point about margins is crucial here. If you have 80% margins, you can afford to buy $20 billion in compute to serve $100 billion in revenue, even with significant uncertainty in your projections. But what if you're running a consumer business with much thinner margins? What if you're giving away your product for free or nearly free to build market share? The math changes dramatically.
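
The margin arithmetic can be made explicit. The $20 billion compute bill, $100 billion revenue, and 80% margin are the article's numbers; the thin-margin consumer comparison is an assumed figure for contrast:

```python
# Amodei's margin point as arithmetic: gross profit has to cover the compute
# bill with room to spare for forecasting error. The 15% consumer-margin
# figure is an assumption for illustration.

def compute_coverage(revenue: float, gross_margin: float, compute_bill: float) -> float:
    """How many times over gross profit covers the compute bill. Well above 1x
    you can absorb a revenue miss; below 1x the bill exceeds gross profit."""
    return revenue * gross_margin / compute_bill

print(compute_coverage(100e9, 0.80, 20e9))  # high-margin case from the article
print(compute_coverage(100e9, 0.15, 20e9))  # assumed thin-margin consumer case
```

At 80% margins the compute bill is covered four times over, so even a large revenue shortfall leaves the math intact; at thin consumer margins the same bill exceeds gross profit entirely, which is how "the math changes dramatically."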

What Scaling Laws Really Mean

Amodei and his co-founders were the first to document AI scaling laws. The concept is simple but profound: as you add more compute power and more data, AI models get better at essentially everything. They improve at coding, science, biomedicine, law, finance, materials, manufacturing. That list basically covers every source of economic value.

This isn't a hypothesis. It's been playing out consistently for 12 years. Small modifications like reasoning models represent tiny tweaks to the basic scaling formula. The improvements are predictable and ongoing.

From Anthropic's perspective, this makes the technology side of the equation feel solid. The models will keep getting smarter. They'll keep creating more value. Eventually, that value will translate into revenue that justifies the infrastructure costs. The question is timing.

Amodei explicitly said he doesn't believe in a single point where we achieve AGI or artificial general intelligence. There's just a continuous exponential curve of increasing capability. Models are already winning high school math olympiads and moving on to college level competitions.

They're starting to do novel mathematics for the first time. Some Anthropic employees have stopped writing code entirely, instead letting Claude generate first drafts that they just edit.

The drumbeat of progress will continue. Every new model release will be better at more things. The revenue will keep adding zeros. But the gap between "eventually" and "on schedule" is where companies could fall into trouble.

The Jobs Question

One of the most significant potential downsides is employment disruption. Amodei has talked about this publicly, including on 60 Minutes, but his focus is less on predicting doom and more on solving problems.

The first level of response can happen in the private sector. Every Anthropic customer faces a trade-off between pure efficiency gains and creating new value. AI can do insurance claims processing or know-your-customer workflows end to end with far fewer humans. That's pure cost savings through job elimination.

But AI can also enable entirely new capabilities. Even when AI handles 90% of a task, the remaining 10% done by humans can make them 10 times more productive. Sometimes you need 10 times more people to capture 100 times more value because the work is so much more efficient. Encouraging companies to emphasize value creation over pure efficiency gains could preserve and create jobs even as AI capabilities grow.

The second level requires government involvement. Retraining programs aren't a panacea, but they'll be necessary. Companies and governments will need to work together. At some point, fiscal policy has to play a role.

If AI really does increase productivity by 5% or 10% annually, that creates an enormous economic pie. The wealth might concentrate initially with AI companies and their customers, but there's enough value to redistribute to people who aren't direct beneficiaries of the technology. Tax policy could play a role here, though Amodei didn't prescribe specific approaches.

The third level is the most profound and slowest. Society itself will need to restructure for a post-AGI world. John Maynard Keynes predicted his grandchildren would only need to work 15-20 hours per week due to technological progress. That didn't happen, but maybe AI will finally deliver on that vision.

Some people will always want to work as much as possible. But can we create a world where work doesn't have the same centrality for everyone? Where people find meaning outside of economic productivity? Where work is more about fulfillment than survival? These questions don't have top-down answers. Society will need to figure out how to adapt organically over time.

The Circular Logic of AI Economics

There's something almost paradoxical about the current state of AI economics. The technology demonstrably works and creates value. Anthropic's 10x annual revenue growth isn't fiction. Companies are seeing real productivity improvements. The models keep getting better at an astonishing range of tasks.

Yet the industry rests on financial arrangements that could be fragile. Chip makers invest in AI companies that buy chips from those same manufacturers. Cloud providers build enormous data centers based on demand projections with huge uncertainty bands. Companies make billion-dollar bets on infrastructure that needs to generate returns years before the revenue fully materializes.

For well-managed companies with strong margins and conservative assumptions, this can work. Amodei clearly believes Anthropic will navigate these challenges successfully. The company focuses on enterprise customers with sticky relationships. It assumes aggressive chip depreciation and conservative revenue projections. The technology roadmap looks solid.

But across the industry, not everyone is managing risk as carefully. Some companies are making assumptions tilted far toward the optimistic end of the range. They're stacking vendor financing deals to massive scale. They're betting on consumer businesses with uncertain margins. They're projecting profitability timelines that require revenue growth curves steeper than anything in tech history.

The technology will probably deliver on its promises. The question is whether the business models and financial engineering will hold together long enough to realize those promises. In an industry where everyone faces the same cone of uncertainty but not everyone makes the same risk calculations, some companies could face serious trouble even as the overall technology succeeds.
