Is OpenAI's $500 Billion Valuation Built on Sand?
Altman's Death Star: Autopsy of a $500 Billion Damp Squib.
The noise was deafening. The promise, almost messianic. In the grand theater of Silicon Valley, where every announcement must be an earthquake, Sam Altman, the high priest of OpenAI, brought out the heavy artillery. A single image posted on X: the Death Star, the planet-destroying weapon of the Galactic Empire. The message, in its sublime arrogance, was crystal clear: prepare to be overwhelmed. GPT-5, or whatever its given name would be, was going to redefine the boundary between man and machine.
Then, silence. Followed by a murmur. Then, palpable disappointment. The Death Star, capable of pulverizing worlds, turned out to be nothing more than a damp squib. The anticipated revolution has devolved into an incremental update, whose gaping flaws are a cruel reminder of its predecessors. The question is no longer when OpenAI will achieve AGI (Artificial General Intelligence), but whether the company ever truly possessed the crushing competitive advantage it was credited with.
The "Right Tool for the Job" Wallet Guide: 4 Archetypes for Your Bitcoin.
Stop searching for the one “best” wallet. Start thinking like a craftsman.
The Sobering Reality of Trivial Failures
The contrast is striking. On one hand, the promise of a conversation with “a PhD-level expert in any field.” On the other hand, the harsh reality of the most elementary tests. The new model, the fruit of billions of dollars in investment and computational power that defies imagination, fails where a primary school child would succeed.
It is presented with a worksheet: it struggles to correctly count four-leaf clovers. It is asked to circle the vowels in the word “intelligence”: it makes mistakes, hesitates, and forgets. It is questioned about Canadian political history: it invents Prime Ministers who never existed, digital ghosts born from a statistical slurry.
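To make the indignity concrete: the tasks the model fumbles are, for any deterministic program, trivial. A minimal illustrative sketch in Python (not OpenAI's code, obviously):

```python
# "Circling the vowels" is a deterministic computation: a few
# lines of code get it exactly right, every single time.
def vowels_in(word: str) -> list[tuple[int, str]]:
    """Return (position, letter) for every vowel in the word."""
    return [(i, ch) for i, ch in enumerate(word) if ch in "aeiou"]

print(vowels_in("intelligence"))
# [(0, 'i'), (3, 'e'), (6, 'i'), (8, 'e'), (11, 'e')]
```

A model marketed as a PhD-level expert cannot reliably reproduce the output of five lines of code.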
These errors are not anecdotal. They are a symptom of a deep-seated ailment that has plagued Large Language Models (LLMs) since their inception. They reveal a complete lack of what we humans call understanding. The model does not understand what a clover, a vowel, or a Prime Minister is. It only predicts the most probable sequence of words based on the terabytes of text it has ingested. It is a stochastic parrot of incredible sophistication, but a parrot nonetheless.
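The parrot metaphor can be stated precisely. Below is a minimal sketch of the generation loop, in which the hypothetical `fake_logits` stub stands in for the billions of parameters of a real transformer; the decoding loop itself is faithful to how greedy generation works:

```python
import numpy as np

VOCAB = ["the", "clover", "has", "three", "four", "leaves", "."]
rng = np.random.default_rng(42)

def softmax(logits: np.ndarray) -> np.ndarray:
    """Turn raw scores into a probability distribution."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def fake_logits(context: list[str]) -> np.ndarray:
    """Stand-in for the transformer: scores over the vocabulary.
    A real LLM computes these from its weights; nothing in the
    loop below consults a model of the world either way."""
    return rng.normal(size=len(VOCAB))

# Greedy decoding: append the single most probable next token.
context = ["the", "clover"]
for _ in range(4):
    probs = softmax(fake_logits(context))
    context.append(VOCAB[int(np.argmax(probs))])

print(" ".join(context))
```

Whether the clover ends up with three leaves or four depends on which continuation was statistically favored, not on any fact about clovers.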
The Eternal Return of Hallucinations
The AI sector, in its frantic race towards a still-mythical AGI, has absorbed over $500 billion in investment. Half a trillion dollars. For what tangible result? The same hallucinations as with GPT-3. The same basic reasoning errors. The same weaknesses in computer vision.
The example of psychologist Jonathan Shedler is a chilling illustration of this phenomenon. A respected researcher, he questions Grok, the rival model from Elon Musk's xAI, about his own scientific paper, one of the most cited in the world on the effectiveness of psychotherapies. Shedler’s conclusion, based on a rigorous meta-analysis, demonstrated a very significant therapeutic effect, quantified at 0.97. The AI’s response? A complete and dangerous inversion of the paper’s findings. It claims the study concludes that the effect is weak, citing a figure of 0.33, a number that appears nowhere in the original document.
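For readers outside the field, the gap between 0.97 and 0.33 is not a rounding quibble. Assuming these figures are standardized effect sizes of the Cohen's d family, as is standard in psychotherapy meta-analyses, the statistic compares the difference in group means against their pooled variability:

```latex
d = \frac{\bar{x}_{\text{treatment}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

By Cohen's conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), 0.97 is a large effect and 0.33 a small-to-moderate one: the hallucinated number does not nudge the conclusion, it reverses it.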
This is not a simple mistake; it is a fabrication. A hallucination that, if taken at face value by a student, a journalist, or a practitioner, could have serious consequences. This example, though involving a competitor, perfectly illustrates the pathology common to all current LLMs, including OpenAI’s latest iteration. They are experts in counterfeiting knowledge, capable of generating plausible and well-structured text that is, at its core, completely false.
The Complexity Wall: “Attention Is All You Need” Is No Longer Enough
The mantra of OpenAI, and an entire generation of AI researchers, was based on a 2017 research paper with a prophetic title: “Attention Is All You Need.” The idea was that through the attention mechanism and an exponential increase in model size and training data (“scaling”), intelligence would almost magically emerge.
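The mechanism the title celebrates, scaled dot-product attention, fits in a single line of the 2017 paper: every token scores its relevance to every other token, and the values are mixed accordingly:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
```

Here Q, K, and V are the query, key, and value matrices and d_k is the key dimension. The scaling thesis was, in essence, a bet that making this operation bigger would be enough.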
Today, this magic formula is showing its limits. A study from Arizona State University, published on August 5th, delivers the final blow to this philosophy. Researchers have confirmed what many suspected: LLMs remain fundamentally incapable of generalizing beyond their training data. They excel at interpolating, at finding solutions to problems that closely resemble what they have already seen. But as soon as they are presented with a problem that requires applying a universal rule in a radically new context, they collapse.
This inability is not a bug to be fixed with more data or more power. It is a structural feature of their architecture. They have no model of the world, no causal understanding, and no ability to reason about abstract and durable representations. They are prisoners of the statistical surface of data.
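A toy illustration of that prison, far simpler than the ASU experiments but animated by the same intuition: fit a flexible model on one interval, then ask it to step outside.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fit a high-capacity curve (degree-9 polynomial) to sin(x),
# but only on the interval [0, 2*pi].
x_train = rng.uniform(0, 2 * np.pi, 200)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=9)

def mse(x: np.ndarray) -> float:
    """Mean squared error of the fit against the true function."""
    return float(np.mean((np.polyval(coeffs, x) - np.sin(x)) ** 2))

x_inside = np.linspace(0, 2 * np.pi, 100)           # seen range
x_outside = np.linspace(2 * np.pi, 4 * np.pi, 100)  # unseen range

print(f"interpolation MSE: {mse(x_inside):.1e}")   # near zero
print(f"extrapolation MSE: {mse(x_outside):.1e}")  # catastrophic
```

Inside its training range the fit is excellent; one step outside, it diverges wildly. The claim of the ASU researchers is that LLMs, for all their scale, behave analogously.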
OpenAI: The Empire Crumbles
In this context, OpenAI's once-hegemonic position suddenly seems precarious. The company is being squeezed on three fronts.
First, the technical lead has melted away like snow in the sun. The “secret sauce” is no longer so secret. Competitors like Anthropic (founded by former OpenAI executives), Google with Gemini, and even open-source players are offering models with comparable or even superior performance on certain tasks.
Second, the best talent has slammed the door. The brain drain that followed the company’s internal turmoil has sown the seeds of its competition. These top-tier engineers and researchers didn’t retire; they are building the technologies that now challenge their former employer.
Third, the relationship with its main financial backer, Microsoft, is becoming strained. The Redmond giant has invested tens of billions of dollars and expects a return on its investment. Yet, OpenAI continues to burn astronomical amounts of cash without being profitable. The need to lower its API prices to stay competitive against fierce competition only exacerbates the financial pressure.
How, then, can a valuation flirting with several hundred billion dollars be justified? When your main competitive advantage is no longer a revolutionary technology but a well-designed user interface and a household name, your “moat” is as deep as a puddle. The competition is closing in fast, and it is merciless.
Beyond the Stochastic Parrot: The Quest for True Intelligence
GPT-5, or whatever its name may be, is not a bad product. It is probably, in many respects, the best LLM ever created. But it is not the revolution that was announced. It is the culmination of a paradigm, and at the same time, the glaring proof of its limitations.
The path to a more robust, reliable, and human-like artificial intelligence does not lie in “pure scaling.” Continuing to build ever-larger models on the same architecture is like trying to build a rocket to the Moon by stacking bigger and bigger hot-air balloons. At some point, the approach itself must be questioned.
The future likely lies in hybrid approaches, particularly neuro-symbolic ones. Architectures that integrate the pattern-recognition power of neural networks with the logical rigor of symbolic systems. AIs that are capable of building and manipulating explicit models of the world, of reasoning about abstract concepts, of understanding causality, and of applying universal rules. This is an immense scientific and technical challenge, far more complex than simply adding more layers of neurons and petabytes of data.
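What such a hybrid might look like in miniature, as a deliberately naive sketch in which the hypothetical `perceive` stub stands in for a trained neural network:

```python
# A toy neuro-symbolic pipeline: a neural module proposes
# discrete symbols, a symbolic module applies hard rules to them.

def perceive(image_path: str) -> list[str]:
    """Stand-in for a neural recognizer: image -> symbols.
    In a real system this would be a trained network."""
    return ["clover", "leaf", "leaf", "leaf", "leaf"]  # stubbed

def count(symbols: list[str], kind: str) -> int:
    # Counting happens symbolically: exact and auditable,
    # never a statistical guess.
    return sum(1 for s in symbols if s == kind)

RULES = {
    # Explicit, inspectable world knowledge.
    "four_leaf_clover": lambda s: "clover" in s and count(s, "leaf") == 4,
}

symbols = perceive("worksheet.png")
print("four-leaf clover?", RULES["four_leaf_clover"](symbols))  # True
```

The division of labor is the point: the network does what networks are good at, perception, while the counting, the logic, and the rules live where they can be verified.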
The GPT-5 disappointment will mark a turning point. It is the end of innocence for an industry that has fed on oversized promises. The Death Star destroyed no planets. It merely shone a harsh light on the flaws of its design. The emperor has no clothes, and everyone is starting to notice. The race to AGI is far from over; perhaps it has not even truly begun on the right foundation.