Recently, a wave of skepticism has surrounded large reasoning models (LRMs), fueled by Apple’s paper “The Illusion of Thinking.” Apple argues that LRMs are merely sophisticated pattern‑matching engines that cannot truly think, citing failures of chain‑of‑thought (CoT) reasoning on larger puzzles. The critique hinges on the observation that, like a human handed the Tower‑of‑Hanoi algorithm, an LRM struggles once a problem’s size exceeds its working‑memory limits. The article counters that the same logic would also condemn human problem‑solving, exposing the flaw in Apple’s reasoning: failure on instances that overflow working memory shows limited capacity, not an inability to think.
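To see why instance size matters, consider the standard recursive Tower‑of‑Hanoi solver (a minimal sketch of my own, not code from either article): the algorithm is trivial to state, yet the number of moves, and hence the intermediate state any solver must track, grows as 2^n − 1 with the number of disks n.

```python
def hanoi(n, source, target, spare, moves):
    """Standard recursive Tower-of-Hanoi solver: move n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the way for the largest disk
    moves.append((source, target))               # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)   # restack the smaller disks on top

for n in (3, 10, 20):
    moves = []
    hanoi(n, "A", "C", "B", moves)
    print(n, len(moves))  # 7, 1023, 1048575 moves: growth is 2**n - 1
```

Knowing the algorithm therefore does not spare either a human or an LRM from the exponential bookkeeping that large instances demand.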
To assess whether LRMs can think, the author first outlines the human cognitive architecture that underpins problem solving. Working memory in the prefrontal cortex stores intermediate steps, while the parietal lobe encodes symbolic structure. Inner speech, mediated by Broca’s area, mirrors CoT generation, and the hippocampus retrieves past experiences, a parallel to a model drawing on its training data. Visual simulation, though less prominent in LRMs, is not essential, as aphantasic humans still excel at symbolic reasoning. The key takeaway is that pattern matching, working memory, and back‑tracking search, core to both humans and LRMs, provide a shared foundation for “thinking.”
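The “working memory plus back‑tracking search” framing maps onto a classic search procedure. A minimal sketch (my illustration, using N‑queens as a stand‑in problem): the list of partial placements plays the role of working memory holding intermediate steps, and a dead end triggers a retreat to an earlier state, the same recover‑and‑retry behavior the author attributes to both humans and LRMs.

```python
def solve_n_queens(n):
    """Backtracking search: 'queens' is the working memory of intermediate placements."""
    queens = []  # queens[i] = column of the queen placed in row i

    def safe(row, col):
        return all(col != c and abs(col - c) != row - r for r, c in enumerate(queens))

    def place(row):
        if row == n:
            return True
        for col in range(n):
            if safe(row, col):
                queens.append(col)        # commit an intermediate step
                if place(row + 1):
                    return True
                queens.pop()              # dead end: back-track and try another branch
        return False

    return queens if place(0) else None

print(solve_n_queens(6))  # first solution found, e.g. [1, 3, 5, 0, 2, 4]
```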
Empirical evidence further supports the claim. Open‑source LRMs such as DeepSeek‑R1, when evaluated on reasoning benchmarks, solve a substantial fraction of logic‑based questions, sometimes outperforming untrained humans. The models’ ability to generate CoT tokens, adjust strategy when they hit memory bottlenecks, and recover from missteps mirrors human insight and the default‑mode network’s role in reframing problems. The author also invokes a universality argument: given sufficient representational capacity, training data, and computation, such a system can in principle carry out any computable procedure, reasoning included. The convergence of this theoretical argument, the human‑like reasoning patterns, and the benchmark performance leads to the conclusion that LRMs almost certainly possess the ability to think.
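For context on how such benchmark numbers are typically produced, here is a generic scoring sketch (not the actual protocol used for DeepSeek‑R1; the mini‑benchmark, the canned ask_model stub, and the answer format are all hypothetical): the model emits a chain of thought ending in a final answer, and accuracy is the fraction of items whose extracted answer matches the gold label.

```python
import re

# Hypothetical mini-benchmark of logic-style questions with gold yes/no answers.
BENCHMARK = [
    {"question": "If all bloops are razzies and all razzies are lazzies, "
                 "are all bloops lazzies? Answer yes or no.", "gold": "yes"},
    {"question": "Tom is taller than Ann, and Ann is taller than Raj. "
                 "Is Raj taller than Tom? Answer yes or no.", "gold": "no"},
]

def ask_model(question: str) -> str:
    # Stand-in for a real LRM call; returns a canned chain of thought so the sketch runs end to end.
    if "bloops" in question:
        return "All bloops are razzies and all razzies are lazzies, so all bloops are lazzies. Answer: yes"
    return "Tom > Ann > Raj, so Raj cannot be taller than Tom. Answer: no"

def extract_answer(completion: str) -> str:
    # Keep only the last 'Answer: <word>' in the chain of thought.
    matches = re.findall(r"answer:\s*(\w+)", completion.lower())
    return matches[-1] if matches else ""

def accuracy(benchmark) -> float:
    correct = sum(extract_answer(ask_model(item["question"])) == item["gold"]
                  for item in benchmark)
    return correct / len(benchmark)

print(f"accuracy: {accuracy(BENCHMARK):.0%}")  # 100% on this toy set
```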
Want the full story?
Read on VentureBeat →