We're Learning Backwards: LLMs build intelligence in reverse
2 points
1 hour ago
| 1 comment
| pleasedontcite.me
| HN
preyneyv
1 hour ago
Wrote this after a lecture on Hays & Efros' scene completion, which works by simple lookup over 2.3M images. It made me think about where the "simple model + lots of data" bet actually leads. The article traces that thread through the Scaling Hypothesis to ARC-AGI-3, where frontier LLMs score under 1% on novel interactive tasks. I think LLMs built intelligence in the wrong order, and I'm curious where people think I'm wrong.