LLMorphism: When humans come to see themselves as language models (arxiv.org)
24 points by okey 3 hours ago | 9 comments
thepasch
49 minutes ago
[-]
This paper introduces a term, immediately defines it as something definitely biased and definitely happening, then spends its entirety arguing against the strawman it built itself. Not a single sentence actually engages with the idea or any of its points (other than the “partial similarities” paragraph on page... I just realized the pages aren’t even numbered).

In general, the terms “LLM-like” and “human-like” are used all over the place, and in contrast with each other, but they’re never actually defined. It all just seems more vibes-based than anything else.

And “treating the human cognitive process like it’s similar to the LLM cognitive process might lead to a society where epistemics turns into a discipline where plausibility is an acceptable substitute for empiricism” has got to be one of the most ridiculous notions I’ve ever read in a paper (ctrl+F “fifth pathway is epistemic” for the exact quote).

reply
Alifatisk
56 minutes ago
[-]
> When artificial systems produce human-like language, people may draw a reverse inference: if LLMs can speak like humans, perhaps humans think like LLMs.

I think I experienced this when I learned about LLMs, chain of thought, thinking tokens, and short- and long-term memory context. I caught myself applying these concepts to real life, reasoning about our brains as if the concepts described how they actually function. But maybe this is more akin to the Tetris effect?

reply
dr_dshiv
51 minutes ago
[-]
I teach students to use their own imagination like generative AI. Prompting works. They just need a bit of practice.
reply
vachina
30 minutes ago
[-]
I mimic how an LLM responds when I talk to my boss lol. Appear useful and present verbose facts. Works pretty well so far.
reply
artninja1988
1 hour ago
[-]
I think it's meaningless anyway. A calculator doesn't multiply numbers like a human does. The important part is to develop systems that can do many human tasks.
reply
mhalle
9 minutes ago
[-]
Early LLMs typically tried to do multiplication "in their head" by recall.

Now most LLMs do multiplication using a tool call to a programming language, akin to a person reaching for a calculator rather than relying on a learned table or working the problem out mentally.

The high-level comparison between "what LLMs do" and "what humans do" for this example is fairly parallel.
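
For concreteness, here's a minimal sketch of the runtime side of such a tool call (the `multiply` tool, the message shape, and the numbers are all illustrative assumptions in the general style of JSON function calling, not any particular vendor's API):

    import json

    # Hypothetical tool the model can call instead of multiplying "in its head".
    def multiply(a: float, b: float) -> float:
        return a * b

    TOOLS = {"multiply": multiply}

    # Illustrative model output: rather than emitting digits from recall,
    # the model emits a structured request for the runtime to execute.
    model_output = {
        "tool_call": {
            "name": "multiply",
            "arguments": json.dumps({"a": 48293, "b": 91724}),
        }
    }

    call = model_output["tool_call"]
    args = json.loads(call["arguments"])
    result = TOOLS[call["name"]](**args)  # exact arithmetic, outside the model
    print(result)  # 4429627132 -- fed back to the model as the tool result

The division of labor is the point: the model only has to produce the structured call, and the runtime does the exact arithmetic, much like the person reaching for the calculator.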

reply
Den_VR
1 hour ago
[-]
> are [we] beginning to attribute too little mind to humans.

I don’t think this way of thinking started with LLMs. Does systems-based thinking also attribute too little mind to humans?

reply
iugtmkbdfil834
1 hour ago
[-]
Agreed. I think we, as humans, like to think in terms of various metaphors when it comes to how we perceive ourselves in the world (for example, "I am not some sort of automaton/robot" as an objection to some boss way back when).
reply
MichaelRo
33 minutes ago
[-]
Nothing new under the sun. When clocks and precision mechanics took off in the 17th century, there was a tendency to view humans as "machines". Then computers came, and suddenly human brains were "computers". Now we're LLMs.

If scientists make green jelly that emits thoughtful judgements, humans will be compared to green jelly.

reply
stavros
1 hour ago
[-]
We certainly don't know that humans work like LLMs, but do we know that they don't?
reply
TMWNN
1 hour ago
[-]
Highly relevant: Reading Doesn't Fill a Database, It Trains Your Internal LLM <https://tidbits.com/2026/02/28/reading-doesnt-fill-a-databas...>
reply