VentureBeat

Can Large Reasoning Models Truly Think? Exploring CoT Evidence


The debate over whether large reasoning models (LRMs) can think has resurfaced after Apple’s paper, "The Illusion of Thinking," concluded that LRMs are merely sophisticated pattern-matching engines. The author counters by pointing to a fundamental flaw in that inference: a human who knows the Tower-of-Hanoi algorithm still cannot reliably execute a twenty-disc version by hand, yet nobody takes that failure as proof that humans cannot think. At most, the author argues, Apple's experiments show an absence of evidence of thinking on such tasks; they do not show that LRMs cannot think.

The article then asks what thinking actually means, mapping five key aspects of human cognition (problem representation, mental simulation, pattern matching, monitoring, and insight) to analogous mechanisms in LRMs. For instance, the prefrontal cortex's role in working memory parallels the attention layer's key-value cache, while the inner voice humans use for mental simulation is mirrored by chain-of-thought (CoT) token generation in models like DeepSeek-R1. And although LRMs lack true visual imagery, the author notes that some people with aphantasia think perfectly well without it, suggesting that mental imagery is not a prerequisite for thought.
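To make the working-memory analogy concrete, here is a minimal sketch (not code from the article) of a single toy attention head decoding one token at a time while keeping a key-value cache. Every name, weight, and dimension below is an illustrative assumption.

```python
# Minimal, illustrative sketch of the working-memory analogy (not from the article):
# a single toy attention head decodes one token at a time, keeping a key-value
# cache so each new step can "look back" at everything generated so far.
import numpy as np

rng = np.random.default_rng(0)
d_model = 8  # toy hidden size

# Hypothetical projection matrices for queries, keys, and values.
W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def decode_step(x_t, cache):
    """Attend from the newest token's hidden state over every key/value
    remembered so far, then add this step's key/value to the cache."""
    q, k, v = x_t @ W_q, x_t @ W_k, x_t @ W_v
    cache["k"].append(k)
    cache["v"].append(v)
    K = np.stack(cache["k"])   # (t, d_model): keys kept across steps
    V = np.stack(cache["v"])   # (t, d_model): values kept across steps
    weights = softmax(q @ K.T / np.sqrt(d_model))
    return weights @ V         # context vector blending the remembered past

cache = {"k": [], "v": []}
for _ in range(4):             # pretend four tokens are generated one by one
    context = decode_step(rng.standard_normal(d_model), cache)

print(len(cache["k"]))         # 4: the cache grows with the generated sequence
```

The analogy is structural only: the cache holds intermediate keys and values so later steps can reuse them, much as working memory holds intermediate results during reasoning.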

The piece then discusses the theoretical possibility of learning to think through next-token prediction. Treating natural language as a fully expressive medium, a parameterized model can encode world knowledge and reasoning patterns purely from data. In the process it learns to anticipate where a line of reasoning is heading, effectively maintaining a working memory of the logical path taken so far. Consequently, LRMs can exhibit backtracking, error detection, and moments of insight, behaviors that emerge during CoT training and surface when models attempt larger puzzles.
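As a rough illustration of the mechanism the piece appeals to, the sketch below spells out the next-token-prediction objective: a deliberately trivial stand-in "model" assigns a probability distribution over a tiny vocabulary given the prefix, and the loss is the average negative log-probability of each true next token. The vocabulary, the stand-in model, and all names are assumptions made for illustration, not anything from the article.

```python
# Rough sketch of the next-token-prediction objective (illustrative only).
import numpy as np

vocab = ["<bos>", "2", "+", "3", "=", "5"]
token_to_id = {tok: i for i, tok in enumerate(vocab)}

rng = np.random.default_rng(0)
logits_table = rng.standard_normal((len(vocab), len(vocab)))  # untrained "weights"

def toy_model(prefix_ids):
    """Stand-in for an LRM: condition only on the last token of the prefix.
    A real model would attend over the entire prefix."""
    logits = logits_table[prefix_ids[-1]]
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def next_token_loss(token_ids):
    """Average cross-entropy of predicting token t+1 from tokens <= t."""
    losses = []
    for t in range(len(token_ids) - 1):
        probs = toy_model(token_ids[: t + 1])
        losses.append(-np.log(probs[token_ids[t + 1]] + 1e-12))
    return float(np.mean(losses))

sequence = [token_to_id[tok] for tok in ["<bos>", "2", "+", "3", "=", "5"]]
print(next_token_loss(sequence))  # high for random weights; training drives it down
```

On the article's argument, minimizing this loss over enough text forces a model to internalize the regularities, including reasoning patterns, that make the next token predictable.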

Empirical evidence comes from open-source benchmarks on which LRMs solve a substantial share of logic-based questions, sometimes outperforming untrained humans. The models still trail expert-trained baselines, but their consistency across multiple tasks strengthens the claim that they have acquired a genuine form of reasoning. The article concludes that, given sufficient representational capacity, training data, and compute, LRMs satisfy the conditions for carrying out any computable task, and therefore almost certainly possess the ability to think.
