As Silicon Valley accelerates toward “artificial general intelligence,” Gary Marcus stands almost alone in asking the question few dare to: what if our models can predict everything — except the truth?
The Illusion of Intelligence
The story begins with a bet. In late 2024, cognitive scientist Gary Marcus challenged a former OpenAI researcher, Miles Brundage, to a wager: by 2027, could a single AI system perform a broad set of human-level tasks — understanding a movie twist, writing an Oscar-worthy screenplay, or producing a Nobel-caliber discovery?
Brundage said yes. Marcus said no.
To Marcus, today’s large language models are not intelligence at all, but “autocomplete on steroids” — machines that predict the next word, not the next truth. They imitate understanding without grasping meaning. They are brilliant mimics, not thinkers.
“They’re great at sounding right,” he says, “but not at being right.”
His skepticism is not anti-technology. It’s anti-delusion. He believes that language prediction has reached its ceiling — and that true intelligence requires causality, reasoning, memory, and self-reflection. What we have now, he says, is powerful text synthesis — not cognition.
The Alternative Path: Hybrid Minds
Marcus doesn’t reject AI’s promise; he rejects its current direction.
His alternative is what he calls neuro-symbolic AI — blending neural networks’ pattern recognition with the logical structure of symbolic reasoning.
This dual system mirrors the human mind itself: a fast, intuitive System 1 and a slower, deliberate System 2, as described by psychologist Daniel Kahneman.
Current AI has mastered System 1 — imitation, association, fluency — but remains weak at System 2: reasoning, abstraction, and the capacity to explain its own thinking.
> “If we want trustworthy AI, we need machines that don’t just predict patterns — they have to understand rules.”
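The idea can be caricatured in a few lines of code. This is a toy sketch, not any real system: every function name and confidence score below is invented for illustration. A fluent "System 1" guesser proposes ranked answers, and a deliberate "System 2" layer re-derives the result symbolically and vetoes guesses that break the rules.

```python
def system1_propose(question):
    """Fluent but unreliable: return ranked (answer, confidence) guesses.
    The guesses and scores are hard-coded purely for illustration."""
    guesses = {
        # The fluent guesser ranks a plausible-sounding wrong answer first.
        "7 * 8": [("54", 0.6), ("56", 0.4)],
    }
    return guesses.get(question, [])

def system2_verify(question, answer):
    """Deliberate check: re-derive the arithmetic from explicit rules."""
    left, op, right = question.split()
    if op == "*":
        return int(answer) == int(left) * int(right)
    return False

def hybrid_answer(question):
    """Accept the highest-ranked guess that survives symbolic verification."""
    for answer, _confidence in system1_propose(question):
        if system2_verify(question, answer):
            return answer
    return None  # no guess passed the rules

print(hybrid_answer("7 * 8"))  # the verifier rejects "54" and accepts "56"
```

The point of the sketch is the division of labor: the statistical component is free to be fast and sloppy, because nothing reaches the user until a rule-based check has signed off on it.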
DeepMind’s AlphaFold, which solved the decades-old problem of protein folding by merging deep learning with scientific constraints, is one early glimpse of this synthesis. But such hybrid architectures are still rare, drowned out by the race to scale language models ever larger and ever dumber.
Meanwhile, Marcus warns, the next hype cycle — “AI agents” — will make the illusion more dangerous. In 2025, he predicts, we’ll see thousands of semi-autonomous systems that act before they think: financial bots that misread orders, assistants that hallucinate policies, medical tools that invent diagnoses. Unreliable automation at industrial scale.
Betting Against Hype, Not Against Humanity
Marcus’s “no” to AGI by 2027 isn’t a rejection of progress; it’s a demand for discipline.
He argues that AI should be treated like aviation or medicine: tested, audited, and certified before mass deployment. Europe’s AI Act moves in that direction; the U.S., he fears, is walking blindfolded into a deregulated future.
He warns that the biggest risks in 2025 won’t come from superintelligence — but from stupid, powerful AI: systems that are misaligned, insecure, and unaccountable. Even without consciousness, they can already amplify fraud, cybercrime, and bias.
“The danger isn’t that AI becomes too smart,” Marcus says. “It’s that it stays stupid — and we hand it the keys anyway.”
In the end, his bet is less about prediction than about principle. If he wins, the field may rediscover humility; if he loses, the world may need the very safeguards he’s been calling for.
Either way, 2027 will not be the year machines surpass us. It will be the year we learn whether our intelligence — political, moral, and collective — was enough to manage the one we built.
