OpenAI Is Maneuvering for a Government Bailout

prospect.org · Ryan Cooper

A perennial characteristic of Silicon Valley startup companies is that they lose a lot of money, at least at first. That’s what happened to Amazon, Uber, YouTube, etc. But to my knowledge, no tech company has ever burned more cash more quickly than OpenAI.

In 2024, it lost about $5 billion; in the first half of 2025, it lost a reported $13.5 billion; and in the last quarter alone, it lost another $12 billion. For artificial intelligence to ever pencil out, some truly enormous revenue streams will be required—$2 trillion by 2030, according to Bain & Company. As the company at the center of the AI boom (along with Nvidia), OpenAI would represent a sizable chunk of that money.

Faced with this dilemma—where do you get a trillion dollars quickly?—OpenAI is getting ready to run hat in hand to the taxpayer for subsidies, like every great Ayn Randian self-made entrepreneur pulling himself up by his bootstraps. At a recent Wall Street Journal tech conference, OpenAI Chief Financial Officer Sarah Friar suggested that a government loan guarantee might be necessary to fund the enormous investments needed to keep the company at the cutting edge.

Gerrit De Vynck of The Washington Post explained further that she also discussed “financial innovation,” like making sweetheart deals with chipmakers like AMD that get a stock boost from having any relationship with OpenAI, or trying to get a cut of the revenue that other companies generate through ChatGPT. But the loan guarantee suggestion stuck out; it felt like a pre-bailout, leaping past the crash and going right to the socialization of risk.

Though Friar later walked back her suggestion, saying that she was advocating structural support for AI in general rather than for her company specifically, some kind of huge subsidy is probably the only way that OpenAI’s preposterous business model—the company is supposedly “worth” $500 billion—can be sustained.

One’s view on the prospects of AI will necessarily depend on what one thinks of it as a business. My semi-informed take is that it has a great deal of potential in certain applications, will probably be somewhat useful for a broad variety of companies, and is currently developing some of the worst business models I have ever seen. A lot of companies are using various AI products for coding, logistics, management, and other ordinary tasks, but from what I’ve read it’s not clear yet whether these actually increase productivity or not. What definitely does work is routinely awful—tremendously accelerating production of spam content of all kinds, mass production of revenge porn and CSAM, enormously more convincing scams and frauds, and greatly facilitating cheating at school or work.

I have no doubt you could make quite a lot of money using these applications, though it may expose you to legal risk from all the crimes, as well as from authors whose works you illegally downloaded by the millions to train your model. I doubt, however, even if you got full legal immunity, that the revenues would add up to anything like $2 trillion a year.

The venture capitalists pouring hundreds of billions of dollars into this technology, however, appear to be convinced that a much more ambitious result is just around the corner: robot slaves. This class went full MAGA last year because they were driven into a frenzy by the brief period of worker power and mobilization after the pandemic, and now they are slavering at the prospect of being able to fire all their workers forever.

I am a lot more confident that nothing like this is on the horizon. Just what kind of entity LLMs and image or video generators are is quite mysterious. They can do amazing things, but they are nothing like a conscious, sentient person, and on my reading of the technology, there is no reason to think they ever will be on the current track. They might someday form part of a conscious machine, but reaching true sentience will require a totally different approach.

More importantly, there are very strong reasons to doubt that any company will be able to build the kind of enormous market leverage on which Silicon Valley has historically depended. Indeed, in many ways AI is the opposite of the traditional software or social media business model that prints hundreds of billions of dollars in profits. Microsoft and Facebook, for example, have relatively low capital costs, near-zero marginal costs, and are protected from competition. Each additional copy of Word or new Facebook account costs pennies; copyright and software patents legally forbid duplicating the former, while network effects make it nearly impossible to compete with the latter.

ChatGPT, by contrast, has extremely high capital costs, relatively high marginal costs, and is structurally vulnerable to imitation. As I have argued before, its very business model of selling API access to ChatGPT is precisely what you would do if you wanted to sample and thereby recreate the data set used to train it. And any attempt to assert copyright over the training data would be a tough ask, since OpenAI did not pay for the data it ingested either. This is apparently what Chinese AI creator DeepSeek did—pay to get ChatGPT to puke itself inside out, and rebuild a very similar model for a fraction of the cost.

So even if various AI products become extremely useful, there is every reason to think that there will be stiff competition or a lot of in-house models, and therefore no mega-profits. If you’re Novo Nordisk, and you made about $15 billion in profit last year, why share that with OpenAI when you could hire some engineers to cook up a home-brew model for relatively cheap—especially since you already have a lot of highly relevant proprietary data you’d much prefer to keep in-house anyway?

But there’s a place for businesses selling whiz-bang products at a ludicrous markup, or ones that simply don’t work: government contracts. OpenAI could join the likes of Palantir, TransDigm, Boeing, and all the rest fleecing the taxpayer in the name of national security. They better get on it, too—$12 billion a quarter is a lot even for the Pentagon.

Even throwing all of those possibilities together, $1 trillion in computing spending seems far out of reach for a company with limited revenue potential. And that’s about the moment when companies with some power and influence start saying that the future of America depends on them mainlining taxpayer funds into their gaping maw. The more that desperation sets in, the less likely it is that OpenAI “walks back” its next request for a bailout. Those in charge at that moment will have to decide whether it’s worth being conned by Sam Altman into giving away the public Treasury.