Large Language Model Reasoning Failures (arxiv.org)
26 points by T-A 6 hours ago | 5 comments
chrisjj
5 hours ago
[-]
The only reasoning failures here are in the minds of humans gulled into expecting chatbot reasoning ability.
reply
altmanaltman
1 hour ago
[-]
But how else will Dario raise Series X?
reply
chrisjj
42 minutes ago
[-]
Too true! :)
reply
Lapel2742
2 hours ago
[-]
> These models fail significantly in understanding real-world social norms (Rezaei et al., 2025), aligning with human moral judgments (Garcia et al., 2024; Takemoto, 2024), and adapting to cultural differences (Jiang et al., 2025b). Without consistent and reliable moral reasoning, LLMs are not fully ready for real-world decision-making involving ethical considerations.

LOL. Finally the tech-bro CEOs have succeeded in creating an AI in their own image.

reply
throw310822
1 hour ago
[-]
> These models

Which models? The latest ones came out just this week.

reply
donperignon
2 hours ago
[-]
An LLM will never reason; what passes for reasoning is an emergent behavior of those systems that is poorly understood. Neurosymbolic systems combined with LLMs will define the future of AI.
reply
simianwords
1 hour ago
[-]
How do you falsify the claim that "an LLM will never reason"?

I asked GPT to compute some hard multiplications and the reasoning trace seems valid and gets the answer right.

https://chatgpt.com/share/6999b72a-3a18-800b-856a-0d5da45b94...

reply
hackinthebochs
2 hours ago
[-]
What are neurosymbolic systems supposed to bring to the table that LLMs can't in principle? A symbol is just a vehicle with a fixed semantics in some context. Embedding vectors of LLMs are just that.
reply
logicprog
1 hour ago
[-]
Pre-programmed, hard-and-fast rules for manipulating those symbols, which can automatically be chained together according to other preset rules. This makes it reliable and observable. Think Datalog.

IMO, symbolic AI is way too brittle and case-by-case to drive useful AI, but as a memory and reasoning system for more dynamic and flexible LLMs to call out to, it's a good idea.
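
To make that concrete, here is a toy forward-chaining sketch in plain Python; the facts and the ancestor rules are invented for illustration and aren't from any particular system:

    # Toy forward chaining over symbolic facts (illustrative facts/rules only).
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

    def apply_rules(facts):
        # Rule 1: parent(X, Y)                 => ancestor(X, Y)
        # Rule 2: ancestor(X, Y), parent(Y, Z) => ancestor(X, Z)
        derived = set()
        for rel, x, y in facts:
            if rel == "parent":
                derived.add(("ancestor", x, y))
            if rel == "ancestor":
                for rel2, y2, z in facts:
                    if rel2 == "parent" and y2 == y:
                        derived.add(("ancestor", x, z))
        return derived

    # Chain the rules to a fixed point; every new fact is traceable to a rule.
    while True:
        new = apply_rules(facts) - facts
        if not new:
            break
        facts |= new

    print(sorted(facts))  # includes ('ancestor', 'alice', 'carol')

That's the "reliable and observable" part: a conclusion either follows from the rules or it doesn't.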

reply
hackinthebochs
1 hour ago
[-]
Sure, reliability is a problem for the current state of LLMs. But I see no reason to think that's an in principle limitation.
reply
logicprog
49 seconds ago
[-]
There are so many papers now showing that LLM "reasoning" is fragile and based on pattern-matching heuristics that I think it's worth considering the practical angle. It may not be an in-principle limitation, in the sense that an autoregressive predictor given infinite data and compute would have to learn to simulate the universe to predict perfectly. But in practice we're not going to build Laplace's LLM, and we might need a more direct architecture as a shortcut!
reply
sergiomattei
4 hours ago
[-]
Papers like these are a much-needed bucket of ice water. We anthropomorphize these systems too much.

Skimming the conclusions and results, the authors conclude that LLMs exhibit failures across many axes we'd find to be demonstrative of AGI: moral reasoning, simple things like counting that a toddler can do, and so on. They're just not human, and you can reasonably hypothesize that most of these failures stem from their nature as next-token predictors that happen to usually do what you want.

So: if you've got OpenClaw running and think you've got Jarvis from Iron Man, this is probably a good read to ground yourself.

Note there's a GitHub repo from the authors compiling these failures: https://github.com/Peiyang-Song/Awesome-LLM-Reasoning-Failur...

reply
throw310822
52 minutes ago
[-]
> conclude that LLMs exhibit failures across many axes we'd find to be demonstrative of AGI.

Which LLMs? There are tons of them, and more powerful ones appear every month.

reply
alansaber
26 minutes ago
[-]
True, but the fundamental architecture tends not to differ radically; it's more about the training/RL regime.
reply
throw310822
13 minutes ago
[-]
But the point is that to even begin claiming a limitation holds for all LLMs, you can't use empirical results demonstrated only for a few old models. You need either a theoretical proof or empirical results that hold for all existing models, including the latest ones.
reply
simianwords
58 minutes ago
[-]
Most of the claims are likely falsified using current models. I wouldn’t take many of them seriously.
reply
jibal
8 minutes ago
[-]
I wouldn't take baseless "likely" claims or the people who make them seriously.
reply
vagrantstreet
3 hours ago
[-]
Isn't it strange that we expect them to act like humans even though a model remains static once trained? How is this supposed to be even close to "human-like" anyway?
reply
mettamage
2 hours ago
[-]
> Isn't it strange that we expect them to act like humans even though a model remains static once trained?

Interacting with an LLM is more akin to interacting with a quirky human who has anterograde amnesia: it can't form long-term memories anymore; it can only follow you within a longish conversation.

reply
alansaber
23 minutes ago
[-]
I mean you can continue to evolve the model weights but the performance would suck so we don't do it. Models are built to an optimal state for a general set of benchmarks, and weights are frozen in that state.
reply
LiamPowell
2 hours ago
[-]
If we could reset a human to a prior state after a conversation, would conversations with them not still be "human-like"?

I'm not arguing that LLMs are human here, just that your reasoning doesn't make sense.

reply
hackinthebochs
2 hours ago
[-]
Henry Molaison was exactly this.
reply
otabdeveloper4
1 hour ago
[-]
> We anthropomorphize these systems too much.

They're sold as AGI by the cloud providers and the whole stock market scam will collapse if normies are allowed to peek behind the curtain.

reply
alansaber
22 minutes ago
[-]
The stock market being built on conjecture? Surely not sir.
reply
lostmsu
2 hours ago
[-]
https://en.wikipedia.org/wiki/List_of_cognitive_biases

Specifically, the idea that LLMs fail to solve some tasks correctly due to fundamental limitations, when humans also periodically fail at those same tasks, may well be an instance of the fundamental attribution error.

reply
simianwords
2 hours ago
[-]
I'm very skeptical of this paper.

>Basic Arithmetic. Another fundamental failure is that LLMs quickly fail in arithmetic as operands increase (Yuan et al., 2023; Testolin, 2024), especially in multiplication. Research shows models rely on superficial pattern-matching rather than arithmetic algorithms, thus struggling notably in middle-digits (Deng et al., 2024). Surprisingly, LLMs fail at simpler tasks (determining the last digit) but succeed in harder ones (first digit identification) (Gambardella et al., 2024). Those fundamental inconsistencies lead to failures for practical tasks like temporal reasoning (Su et al., 2024).

This is very misleading and I think flat out wrong. What's the best way to falsify this claim?

Edit: I tried falsifying it.

https://chatgpt.com/share/6999b72a-3a18-800b-856a-0d5da45b94...

https://chatgpt.com/share/6999b755-62f4-800b-912e-d015f9afc8...

I provided really hard 20-digit multiplications without tools. If you look at the reasoning trace, it does what is normally expected and gets the answer right. I think this is enough to suggest that the claims made in the paper are not valid and that LLMs do reason well.

To anyone who disagrees: can you provide a counterexample that GPT-5 Pro can't solve but that a normal student could do without mistakes?
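
One way to probe this beyond a couple of chat links is to diff a model's answer against the exact product, digit by digit, so you can see where it fails (first, middle, or last positions). Rough sketch; the operands are random and the pasted answer is a placeholder:

    import random

    def mismatched_positions(exact, claimed):
        # Indices (from the left) where the claimed digits differ from the truth.
        t = str(exact)
        c = claimed.replace(",", "").strip().rjust(len(t), "0")
        return [i for i, (x, y) in enumerate(zip(t, c)) if x != y]

    a = random.randrange(10**19, 10**20)  # random 20-digit operand
    b = random.randrange(10**19, 10**20)
    exact = a * b
    print(f"{a} * {b} = {exact}")

    # Paste the model's answer in place of str(exact); an empty list means
    # every digit matched.
    print(mismatched_positions(exact, str(exact)))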

reply
rybosworld
27 minutes ago
[-]
I see that your prompt includes 'Do not use any tools. If you do, write "I USED A TOOL"'

This is not a valid experiment, because GPT models always have access to certain tools and will use them even if you tell them not to. They will fib the chain of thought after the fact to make it look like they didn't use a tool.

https://www.anthropic.com/research/alignment-faking

It's also well established that all the frontier models use Python for math problems, not just the GPT family.

reply
simianwords
22 minutes ago
[-]
Would it convince you if we used the GPT-5 Pro API and explicitly disallowed tool access?

Is that enough to falsify?

reply
rybosworld
17 minutes ago
[-]
No, it wouldn't be enough to falsify.

This isn't an experiment a consumer of the models can actually run. If you get a chance to read the article I linked: it is difficult even for the model maintainers (OpenAI, Anthropic, etc.) to look inside a model and see what it actually used in its reasoning process. The models will purposefully hide information about how they reasoned, and they will ignore instructions without telling you.

The problem really isn't that LLMs can't get math/arithmetic right sometimes; they certainly can. The problem is that there's a very high probability they will get it wrong. Python and similar tools were the answer to that inconsistency.

reply
simianwords
13 minutes ago
[-]
What do you mean? You can explicitly restrict access to the tools. You are factually incorrect here.
reply
chickenimprint
15 minutes ago
[-]
As far as I know, you can't disable the Python interpreter; it's part of the reasoning mode.

If you ask ChatGPT, it will confirm that it uses the Python interpreter to do arithmetic on large numbers. That should be convincing to you.

reply
jibal
18 minutes ago
[-]
It's not falsifiable because it's not false.
reply
simianwords
14 minutes ago
[-]
That's not what falsifiable means.
reply
chickenimprint
1 hour ago
[-]
It's a well-known fact that LLMs struggle with basic arithmetic on large numbers; that's not what they're made for. Most chatbots will just call a Python interpreter in the background.
reply
simianwords
1 hour ago
[-]
How do you want to falsify it? Can you come up with a test?
reply
chickenimprint
1 hour ago
[-]
Ask a local AI, or a chatbot that lets you disable tool calling, to multiply two large numbers, for example.

This is what Mistral outputs:

The result of multiplying 63,157,997,633 by 63,114,90,009 is:

3,965,689,999,999,999,999,999 (approximately 3.966 × 10²⁴).

The integer is about an order of magnitude too large, the scientific notation doesn't even match the full integer (a 22-digit number is ~3.97 × 10²¹, not 10²⁴), and the mantissa is also slightly wrong.
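
For reference, a quick check of that example, reading the second operand as 6,311,490,009 (the grouping above looks garbled; if Mistral was given a different number the ratio changes):

    a, b = 63_157_997_633, 6_311_490_009
    exact = a * b
    mistral = 3_965_689_999_999_999_999_999
    print(exact)            # 398621071049125148697, i.e. ~3.99e20
    print(mistral / exact)  # ~9.95: roughly one order of magnitude too large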

reply
simianwords
1 hour ago
[-]
I don't think finding the worst model and asking it to do things is the best way to go.

GPT-5 pro without tools can easily solve your question and much harder ones.

Rather, a better way to falsify the claim is to ask whether there exists a model that can perform these calculations reliably.

Otherwise we could always find some terrible 1B-parameter model to "falsify" any claim made about LLMs.

reply
chickenimprint
34 minutes ago
[-]
I chose Mistral because it's a large SOTA model that allows me to disable tool calling.

If you try ChatGPT, you'll see it goes into reasoning mode (it flashes the text "Working") and takes a while when asked to do arithmetic. Chatbots that don't hide the reasoning chain of thought from the user, such as DeepSeek or Mistral, reveal that they work exactly as I described:

====

Thought for 1s

Thinking

Okay, the user wants to multiply two large numbers: 63,157,997,633 and 6,311,490,009. Since these numbers are quite large, I should use the code interpreter to handle this calculation accurately. Let's write a simple piece of code to compute this product.

Code:

    # Calculating the product of the two numbers
    number1 = 63157997633
    number2 = 6311490009
    product = number1 * number2
    product

Result:

    398621071049125148697

Thinking

Great, the code interpreter has given us the product of the two numbers. Now, let's present the result to the user.

====

Falsifying my claim is simple: Find a model that verifiably does basic arithmetic correctly without relying on any external tools.

reply
kgeist
19 minutes ago
[-]
I run Qwen3-32B locally without any tools (just llama.cpp) and it can do basic arithmetic for smaller numbers (like 134566), but I didn't check it on much larger ones. I'm not at the PC right now, but trying it via OpenRouter on much larger numbers overflows the context and it stops without giving a result :)
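
For a fully offline version of this check (so tool use is impossible by construction), something like the following with llama-cpp-python works; the GGUF path is a placeholder and any local chat model would do:

    from llama_cpp import Llama

    # Model path is an assumption -- point it at whatever GGUF you have locally.
    llm = Llama(model_path="./qwen3-32b-q4_k_m.gguf", n_ctx=8192)

    out = llm.create_chat_completion(
        messages=[{"role": "user",
                   "content": "What is 134566 * 987654? Reply with the number only."}],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])
    print("exact:", 134566 * 987654)  # ground truth for comparison
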
reply
throw310822
26 minutes ago
[-]
Can you do basic arithmetic correctly without relying on external tools?
reply
AlexeyBrin
47 minutes ago
[-]
How do you know GPT-5 does not call a Python interpreter remotely on OpenAI's servers when you ask it to do arithmetic? Your prompt goes to their servers; you have no way of knowing what happens there.

The only way to be sure a model calls no tool is to run it locally and control the network.

reply
rybosworld
36 minutes ago
[-]
> GPT-5 pro without tools can easily solve your question and much harder ones.

How are you able to use GPT-5 with tools turned off? Do you mean external tools (like searching the web)?

My understanding is that GPT models always have access to python, and it isn't something you can turn off.

reply
simianwords
20 minutes ago
[-]
What if we use the API? You can explicitly disable tool calls. Is that enough?
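
Roughly like this, with the openai Python client: simply pass no tools, so the model has nothing to call. The model id is a placeholder, and whether anything still runs server-side behind the API is exactly the thing in dispute:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-5",  # placeholder model id
        messages=[{
            "role": "user",
            "content": "Compute 63157997633 * 6311490009 exactly, digit by digit.",
        }],
    )
    print(resp.choices[0].message.content)
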
reply
simianwords
2 hours ago
[-]
>Math Word Problem (MWP) Benchmarks. Certain benchmarks inherently possess richer logical structures that facilitate targeted perturbations. MWPs exemplify this, as their logic can be readily abstracted into reusable templates. Researchers use this property to generate variants by sampling numeric values (Gulati et al., 2024; Qian et al., 2024; Li et al., 2024b) or substituting irrelevant entities (Shi et al., 2023; Mirzadeh et al., 2024). Structural transformations – such as exchanging known and unknown components (Deb et al., 2024; Guo et al., 2024a) or applying small alterations that change the logic needed to solve problems (Huang et al., 2025b) – further highlight deeper robustness limitations.

I'm willing to bet this is no longer true either. We have models doing better than humans at the IMO.

reply
otabdeveloper4
1 hour ago
[-]
> We have models that are doing better than humans at IMO.

Not really. From my brief experience they can guess the final answer but the intermediate justifications and proofs are complete hallucinated bullshit.

(Possibly because the final answer is usually some sort of neat and beautiful expression, and human evaluators don't care about the final answer anyway; in any olympiad you're graded on the soundness of your reasoning.)

reply
simianwords
1 hour ago
[-]
What's the best way to falsify it?
reply
throw310822
1 hour ago
[-]
Just look at the dates of the cited articles. 2023, 2024: that's prehistory, before thinking models anyway. It's like concluding that humans don't understand arithmetic because they can't multiply large numbers in their heads.
reply
simianwords
1 hour ago
[-]
I don't get the point of using that in a paper today.
reply
throw310822
54 minutes ago
[-]
I'm not sure what the paper is really about, despite the enthusiasm of the LLM haters here. Certainly there isn't some single thing called "LLMs" that has stayed reasonably the same over the last 4 years: GPT-2 is an LLM, but a finding about it most likely doesn't apply to Opus 4.6. You can't document a failure on a 2024 model and claim "LLMs can't do this".
reply