"The ability to adapt beliefs or behaviors in response to unexpected outcomes, reflection, is fundamental to intelligent systems’ interaction with the world. From a cognitive science perspective, this serves as a core principle of intelligence applicable to both human and AI systems. To address the debate on the intelligence of large language models (LLMs), we propose Reflection-Bench, a comprehensive benchmark comprising 7 tasks spanning core cognitive functions crucial for reflection, including perception, memory, belief updating, decision-making, prediction, counterfactual thinking, and meta-reflection. We evaluate the performances of 13 prominent LLMs such as OpenAI o1, GPT-4, Claude 3.5 Sonnet, etc. The results indicate that current LLMs still lack satisfactory reflection ability. While demonstrating basic automatic surprise signal detection and working memory, most models struggle to adapt flexibly to changing environments. This inflexibility manifests in rigid belief updating, rule inference, predictive learning, and counterfactual thinking. Most strikingly, all models lack metareflection ability. In MBT, all models merely alter their choices based on immediate feedback, failing to recognize the task’s meta-structure. This indicates an absence of reflection on their adaption strategies, i.e., lack of meta-reflection. It is not just an advanced cognitive skill but a cornerstone of intelligence, crucial for rational reasoning and learning, reliable decision-making, and self-development"
Psychological Modeling Limitations in Large Language Models
Summary
Large language models (LLMs) are powerful AI systems designed to mimic human language understanding, but they face significant limitations when it comes to modeling complex psychological reasoning and adapting to unexpected situations. Psychological modeling limitations in large language models refer to the challenges these systems encounter in replicating human-like thinking, belief updating, and reflective decision-making, especially as problems grow more complex.
- Recognize reasoning ceilings: Understand that LLMs struggle with complex tasks and may not generalize solutions beyond the patterns they were trained on, leading to abrupt drops in performance as problem complexity rises.
- Design with simplicity: For many real-world applications, straightforward prompts and simpler reasoning strategies can outperform more elaborate reasoning traces generated by advanced models.
- Watch for overthinking: Be aware that these models may waste resources by overanalyzing simple tasks or failing to adapt their strategies, highlighting the need for practical evaluation and careful deployment.
-
I have been in the NLP space for almost 10 years now, and I know first-hand the challenges of building text-based models in the pre-GPT era! So I am a pro-Large Language Model (LLM) enthusiast, but I don't believe they will replace humans or solve all our problems, especially when it comes to highly complex reasoning in industries like Finance. This weekend I read two compelling papers, and I'm convinced we're bumping into real reasoning ceilings:
I> "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity" (Apple)
Apple researchers rigorously tested Large Reasoning Models (LRMs), LLMs that explicitly generate chain-of-thought reasoning, using controlled puzzles like Tower of Hanoi and River Crossing. Key insights:
1. Three reasoning regimes:
▪️ Low complexity: standard LLMs outperform LRMs
▪️ Medium complexity: LRMs excel
▪️ High complexity: both collapse, and accuracy plummets
2. A fascinating observation: LRMs "give up" as puzzle complexity increases; their reasoning effort declines rapidly, even with enough tokens left.
3. Even when provided an exact algorithm (e.g., the Tower of Hanoi strategy), the models still failed to generalize and mostly produced outputs based on the data patterns they were trained on.
II> "Reasoning or Overthinking: Evaluating Large Language Models on Financial Sentiment Analysis" (Dimitris Vamvourellis & Dhagash Mehta, Ph.D., BlackRock)
This study tested major LLMs (GPT-4o, GPT-4.1, o3-mini, FinBERT variants) on financial sentiment classification using "System 1" (fast/intuitive) and "System 2" (slow/deliberate) prompting. Key takeaways:
▪️ Reasoning prompts did not improve performance.
▪️ Surprisingly, straightforward, intuitive prompts with GPT-4o (no chain-of-thought) outperformed all others.
▪️ More reasoning led to overthinking, reducing alignment with human-labeled sentiments.
(See the sketch after this post for what these two prompting styles look like in practice.)
💡 Why it matters for builders and researchers in Finance and every other industry:
❎ Bigger models plus more "thinking" do not equal better outcomes; sometimes the extra reasoning is actively worse.
❎ We're not seeing a soft plateau; these are hard ceilings in reasoning capacity.
❎ For real-world systems, agents, and financial tools: design for reasoning economy, not just reasoning depth.
#LLMs #ReasoningLimits #LLMChainofthought #LLMReasoningDecline
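Here is a minimal sketch of the "System 1" vs. "System 2" prompting contrast referenced above. It assumes a hypothetical `query_model` helper in place of a real API client, and the templates are illustrative, not the paper's exact prompts.

```python
# Illustrative templates: a fast, direct ("System 1") prompt versus a
# deliberate, chain-of-thought ("System 2") prompt for sentiment labeling.

SYSTEM1_TEMPLATE = (
    "Classify the sentiment of this financial headline as positive, negative, "
    "or neutral. Answer with one word only.\n\nHeadline: {text}"
)

SYSTEM2_TEMPLATE = (
    "Classify the sentiment of this financial headline as positive, negative, "
    "or neutral. First reason step by step about the likely market impact, "
    "then give your final one-word answer on the last line.\n\nHeadline: {text}"
)

def query_model(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real client here."""
    return "neutral"

def classify(text: str, deliberate: bool = False) -> str:
    template = SYSTEM2_TEMPLATE if deliberate else SYSTEM1_TEMPLATE
    reply = query_model(template.format(text=text))
    # Take the last non-empty line so chain-of-thought replies still parse.
    return reply.strip().splitlines()[-1].strip().lower()

if __name__ == "__main__":
    headline = "Company X beats quarterly earnings estimates"
    print("fast :", classify(headline, deliberate=False))
    print("slow :", classify(headline, deliberate=True))
```

The study's finding, as summarized above, is that the fast, direct style with GPT-4o aligned better with human labels than the deliberate one, so the extra reasoning budget bought nothing and sometimes hurt.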
-
The Illusion of Thinking in LLMs - Apple researchers have spilled the beans on the strengths and limitations of reasoning models. Reasoning models "collapse" beyond certain task complexities. "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity" highlights several limitations of Large Language Models (LLMs) and their specialized variants, Large Reasoning Models (LRMs), particularly in the context of reasoning and problem-solving. Below is a list of the key limitations identified by the Apple researchers:
(1) Poor performance on reasoning benchmarks: Earlier iterations of LLMs exhibited poor performance on reasoning benchmarks, indicating fundamental challenges in reasoning capabilities (Page 4, Section 2).
(2) Lack of generalizable reasoning: Despite advancements, LLMs and LRMs fail to develop generalizable problem-solving capabilities, especially for planning tasks; performance collapses to zero beyond certain complexity thresholds in controlled puzzle environments (Page 3, Section 1; Page 11, Section 5).
(3) Data contamination issues: Established mathematical and coding benchmarks suffer from data contamination, where models may have been exposed to similar problems during training, skewing performance evaluations (Page 2, Section 1; Page 5, Section 3).
(4) Inefficiency on low-complexity tasks: For simpler, low-compositional problems, standard LLMs demonstrate greater efficiency and accuracy than LRMs, suggesting that the additional "thinking" mechanisms in LRMs may introduce unnecessary overhead (Page 3, Section 1; Page 7, Section 4.2.1).
(5) Complete collapse at high complexity: Both LLMs and LRMs experience complete performance collapse when problem complexity exceeds a critical threshold, indicating a fundamental limitation in handling highly complex, compositionally deep tasks (Page 3, Section 1; Page 8, Section 4.2.2).
(6) Counterintuitive scaling limitation: LRMs reduce their reasoning effort (measured in inference-time tokens) as problem complexity increases beyond a certain point, despite having ample token budgets, revealing a scaling limitation in reasoning capabilities (Page 3, Section 1; Page 8, Section 4.2.2).
(7) Overthinking phenomenon: On simpler problems, LLMs and LRMs often identify correct solutions early but continue exploring incorrect alternatives, wasting computational resources in an "overthinking" pattern (Page 3, Section 1; Page 9, Section 4.3).
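To make the controlled-complexity methodology concrete, here is a minimal sketch of a complexity sweep in the spirit of the paper's Tower of Hanoi setup, not the authors' actual harness. The optimal solution length grows as 2^n - 1, so difficulty rises smoothly while the rules stay fixed; `solved_by_model` is a hypothetical stub standing in for a real model call plus answer parsing.

```python
def optimal_moves(n_disks: int) -> int:
    """Minimum number of moves for Tower of Hanoi grows exponentially: 2^n - 1."""
    return 2 ** n_disks - 1

def hanoi(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> list[tuple[str, str]]:
    """Reference solver; its output can serve as ground truth for scoring."""
    if n == 0:
        return []
    return hanoi(n - 1, src, dst, aux) + [(src, dst)] + hanoi(n - 1, aux, src, dst)

def solved_by_model(n_disks: int) -> bool:
    """Hypothetical hook: ask an LLM for a move list and compare it to hanoi(n_disks).
    Stubbed here so the sweep runs; a real harness would parse the model's moves."""
    return n_disks <= 7   # pretend the model collapses beyond 7 disks

if __name__ == "__main__":
    assert len(hanoi(4)) == optimal_moves(4)   # sanity check on the reference solver
    for n in range(3, 11):
        print(f"{n} disks: {optimal_moves(n):>4} optimal moves, "
              f"solved={solved_by_model(n)}")
```

Sweeping the disk count and recording solve rate (and reasoning tokens spent) is what exposes the three regimes and the collapse threshold described in the list above.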
-
How much do language models actually think? A recent paper from Apple, 'The Illusion of Thinking', explores this question by probing the limits of Large Reasoning Models (LRMs) such as Claude 3.7 Sonnet Thinking and DeepSeek-R1. These models aim to improve reasoning by generating long Chain-of-Thought (CoT) traces before producing an answer. Instead of relying on traditional math benchmarks, the authors designed controlled puzzle environments (like Tower of Hanoi and River Crossing) that allow them to systematically vary problem complexity and analyze model behavior step by step. Key takeaways from the paper:
🔹 Three performance regimes:
→ At low complexity: non-thinking models often outperform LRMs in both accuracy and token efficiency.
→ At medium complexity: LRMs show benefits thanks to more elaborate reasoning traces.
→ At high complexity: both model types collapse (accuracy drops to zero).
🔹 As problems grow more complex, models actually use fewer thinking tokens despite having a sufficient budget, which points to a possible inference-time scaling limitation.
🔹 On simple tasks, models often reach the correct solution early but then continue generating incorrect or redundant reasoning.
🔹 Even when the correct algorithm is provided in the prompt, models still fail at execution as complexity increases (see the checker sketched after this post for one way to measure this).
The authors raise an important question: are today's LRMs truly engaging in reasoning, or just producing more elaborate pattern completions? You can read the paper here: https://lnkd.in/dn3GTT66 The image used in the post is taken from the same paper. Curious to hear your take, especially if you work on reasoning, interpretability, or evaluation design. #technology #generativeai #artificialintelligence #llms #innovation
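The "fails at execution" point can be measured directly by replaying a model's proposed moves on a simulator and recording where the first illegal move appears. Below is a minimal sketch of such a checker for Tower of Hanoi, assuming moves arrive as (source peg, destination peg) pairs; it is an illustration, not the paper's evaluation code.

```python
def first_error(n_disks: int, moves: list[tuple[str, str]]) -> int | None:
    """Replay a model-produced move list on a Tower of Hanoi simulator.
    Returns the index of the first illegal move, or None if the sequence
    is legal and ends in a solved state."""
    pegs = {"A": list(range(n_disks, 0, -1)), "B": [], "C": []}
    for i, (src, dst) in enumerate(moves):
        if src not in pegs or dst not in pegs or not pegs[src]:
            return i                                  # unknown peg or empty source
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return i                                  # larger disk onto smaller one
        pegs[dst].append(pegs[src].pop())
    solved = len(pegs["C"]) == n_disks
    return None if solved else len(moves)             # legal but incomplete

if __name__ == "__main__":
    ok = [("A", "C"), ("A", "B"), ("C", "B"), ("A", "C"),
          ("B", "A"), ("B", "C"), ("A", "C")]          # optimal 3-disk solution
    bad = ok[:3] + [("A", "B")]                        # fourth move is illegal
    print(first_error(3, ok))    # None -> solved
    print(first_error(3, bad))   # 3 -> breaks at the fourth move
```

Tracking how early this index appears as the disk count grows gives a simple, hedged proxy for the execution failures the post describes: longer correct prefixes mean better plan-following, even when the full puzzle is not solved.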