
Can Large Reasoning Models Truly Think? Evidence & Debate


Apple’s recent paper, “The Illusion of Thinking,” argues that large reasoning models (LRMs) lack genuine thought, because their chain‑of‑thought (CoT) responses break down once problems grow beyond the models’ working‑memory limits. The author rebuts this by noting that the failure mirrors a human’s struggle with a 20‑disk Tower‑of‑Hanoi puzzle: neither system can solve it without external aids, yet no one concludes from this that humans cannot think. On this reading, Apple’s critique does not show that LRMs cannot think; it shows only that both systems hit capacity limits. By framing the debate around problem solving, the author sets the stage for a comparison between biological cognition and LRM behavior.
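
The 20‑disk example is easy to make concrete: Tower of Hanoi requires 2^n − 1 moves, so 20 disks demand 1,048,575 perfectly ordered moves. A minimal Python sketch of the move count (the recurrence is standard; the function name is just for illustration):

```python
def hanoi_moves(n: int) -> int:
    """Moves needed for an n-disk Tower of Hanoi.

    Moving n disks means moving n-1 disks aside, one move for the
    largest disk, then n-1 disks back on top, so
    M(n) = 2 * M(n - 1) + 1 with M(0) = 0, which solves to 2**n - 1.
    """
    return 2 ** n - 1

for n in (3, 10, 20):
    print(n, hanoi_moves(n))
# 3  -> 7
# 10 -> 1023
# 20 -> 1048575, far beyond unaided working memory, human or LRM
```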

To assess whether LRMs can think, the article first dissects human cognition into five components (problem representation, mental simulation, pattern matching, monitoring, and insight), each grounded in distinct brain regions. It then maps these components onto LRM operations: pre‑trained weights supply pattern matching, the transformer layers act as working memory, and the attention mechanism enables backtracking when a line of reasoning stalls. The author notes that while LRMs lack visual imagery, humans with aphantasia lack it too and still reason effectively, suggesting that spatial modeling is not a prerequisite for thought. This mapping makes a compelling case that CoT reasoning is an algorithmic analogue of human inner speech and error‑checking.
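
To make the mapping concrete, here is a toy control loop in Python. `propose_step` and `step_checks_out` are hypothetical stand‑ins for the model’s pattern matching and self‑monitoring; nothing here is a real LRM API, only a sketch of how propose, check, and backtrack compose:

```python
import random
from dataclasses import dataclass, field

@dataclass
class ReasoningState:
    """Working memory: the partial chain of thought built so far."""
    steps: list[str] = field(default_factory=list)

def propose_step(state: ReasoningState) -> str:
    """Pattern matching: stand-in for sampling the next CoT step from
    pre-trained weights (a real LRM would call the model here)."""
    return f"step-{len(state.steps) + 1}"

def step_checks_out(state: ReasoningState, step: str) -> bool:
    """Monitoring: stand-in for the model's self-check that a step is
    consistent with the problem constraints. Randomized here purely to
    exercise the backtracking branch."""
    return random.random() > 0.2

def solve(max_steps: int = 5) -> ReasoningState:
    state = ReasoningState()
    attempts = 0
    while len(state.steps) < max_steps and attempts < 50:
        attempts += 1
        step = propose_step(state)
        if step_checks_out(state, step):
            state.steps.append(step)   # extend the chain of thought
        elif state.steps:
            state.steps.pop()          # backtrack: drop the last step
    return state

print(solve().steps)
```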

Empirical evidence further bolsters the claim. On several open‑source benchmarks, LRMs trained with CoT solve a significant share of logic‑based questions, sometimes surpassing untrained humans and approaching the performance of fine‑tuned proprietary models. The article also invokes a computability argument: any system with sufficient representational capacity, training data, and computation can, in principle, perform any computable task, which makes it plausible that LRMs can learn to think. While the author concedes that future discoveries could overturn this view, the convergence of theoretical argument, cognitive parallels, and benchmark results leads to the conclusion that LRMs almost certainly can think.
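
Benchmark results of this kind typically come down to exact‑match scoring of final answers. A minimal sketch, with made‑up placeholder items rather than questions from any cited benchmark:

```python
def exact_match_accuracy(predictions: list[str], gold: list[str]) -> float:
    """Fraction of items whose final answer matches the reference
    exactly after light normalization (case and whitespace)."""
    norm = lambda s: s.strip().lower()
    hits = sum(norm(p) == norm(g) for p, g in zip(predictions, gold))
    return hits / len(gold)

# Placeholder data for illustration only.
gold_answers  = ["4", "true", "paris"]
model_answers = ["4", "false", "Paris"]
print(exact_match_accuracy(model_answers, gold_answers))  # 0.666...
```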
