I don't think AGI is right around the corner
108 points | 3 hours ago | 29 comments | dwarkesh.com
dathinab
1 hour ago
[-]
I _hope_ AGI is not right around the corner. For socio-political reasons we are absolutely not ready for it, and it might push the future of humanity into a dystopian abyss.

but also, just taking what we have now with some major power-usage reductions and minor improvements here and there already seems like something which can be very usable/useful in a lot of areas (and to some degree we aren't even really ready for that either, but I guess that's normal with major technological change)

it's just that for those companies creating foundational models it's quite unclear how they can recoup the costs they have already sunk without either a major breakthrough or forcefully (or deceptively) pushing the models into a lot more places than they fit into

reply
twelve40
1 hour ago
[-]
I agree and sincerely hope this bubble pops soon

> Meta Invests $100 Billion into Augmented Reality

that fool controls the board and he seems to be just desperately throwing insane ad money against the wall hoping that something sticks

for Altman there is no backing out either, need to make hay while the sun shines

for the rest of us, i really hope these clowns fail like it's 2000 and never get to their dystopian matrix crap.

reply
pbreit
1 hour ago
[-]
"that fool" created a $1.8 trillion company.
reply
twelve40
29 minutes ago
[-]
past performance does not guarantee future results

also, great for Wall Street; a mixed bag for us, the people

reply
AaronAPU
23 minutes ago
[-]
I’m always fascinated when someone equates profit with intelligence. There are many very wealthy fools and there always have been. Plenty of ingredients to substitute for intelligence.

Neither necessary nor sufficient.

reply
the_gastropod
53 minutes ago
[-]
Aren't there enough examples of successful people who are complete buffoons to nuke this silly trope from orbit? Success is no proof of wisdom or intelligence or whatever.
reply
umbra07
43 minutes ago
[-]
can you point to someone as successful as zuckerberg, who was later conclusively shown to be a fraud or a total moron?
reply
twelve40
2 minutes ago
[-]
> a special order, 350 gallons, and had it shipped from Los Angeles. A few days after the order arrived, Hughes announced he was tired of banana nut and wanted only French vanilla ice cream

yes, there are plenty

reply
Avshalom
26 minutes ago
[-]
What does the name of his company Meta refer to?
reply
chemeril
14 minutes ago
[-]
SBF comes to mind.
reply
fcarraldo
14 minutes ago
[-]
Elon Musk.
reply
kulahan
1 hour ago
[-]
$1.8 trillion in investor hopes and dreams, but of course they make zero dollars in profit, don’t know how to turn a profit, don’t have a product anyone would pay a profitable amount for, and have yet to show any real-world use that isn’t kinda dumb because you can’t trust anything it says anyways.
reply
daniel_iversen
1 hour ago
[-]
Meta makes > $160 billion in revenue and is profitable itself; of course they're going to invest in future, longer-term revenue streams! Apple is the counter-example in a way, having maintained a lot of cash reserves (which, by the way, seem to have dwindled a LOT as I just checked..?)
reply
pbreit
1 hour ago
[-]
Must "AGI" match human intelligence exactly or would outperforming in some functions and underpformin in others qualify?
reply
crooked-v
1 hour ago
[-]
For me, "AGI" would come in with being able to reliably perform simple open-ended tasks successfully without needing any specialized aid or tooling. Not necessarily very well, just being capable of it in the first place.

For a specific example of what I mean, there's Vending-Bench - even very 'dumb' humans could reliably succeed on that test indefinitely, at least until they got terminally bored of it. Current LLMs, by contrast, are just fundamentally incapable of that, despite seeming very 'smart' if all you pay attention to is their eloquence.

reply
saubeidl
1 hour ago
[-]
Where would you draw the line? Any ol' computer outperforms me in doing basic arithmetic.
reply
kulahan
1 hour ago
[-]
This is a question of how we quantify intelligence, and there aren’t many great answers. Still, basic arithmetic is probably not the right guideline for intelligence. My guess has always been that it’ll lie somewhere in ability to think critically, which they still have not even attempted yet, because it doesn’t really work with LLMs as they’re structured today.
reply
hkt
1 hour ago
[-]
I'd suggest anything able to match a professional doing knowledge work. Original research from recognisably equivalent cognition, or equal abilities with a skilled practitioner of (eg) medicine.

This sets the bar high, though. I think there's something to the idea of being able to pass for human in the workplace. That's the real, consequential outcome here: AGI genuinely replacing humans, without need for supervision. That's what will have consequences. At the moment we aren't there (pre-first-line support doesn't count).

reply
root_axis
1 hour ago
[-]
At the very least, it needs to be able to collate training data, design, code, train, fine tune and "RLHF" a foundational model from scratch, on its own, and have it show improvements over the current SOTA models before we can even begin to have the conversation about whether we're approaching what could be AGI at some point in the future.
reply
OJFord
47 minutes ago
[-]
That would be human; I've always understood the General to mean 'as if it's any human', i.e. perhaps not absolute mastery, but trained expertise in any domain.
reply
babuloseo
31 minutes ago
[-]
what socio-political reasons, can you name some of these? we are 100% ready for AGI.
reply
merizian
51 minutes ago
[-]
The problem with the argument is that it assumes future AIs will solve problems like humans do. In this case, it’s that continuous learning is a big missing component.

In practice, continual learning has not been an important component of improvement in deep learning history thus far. Instead, large diverse datasets and scale have proven to work the best. I believe a good argument for continual learning being necessary needs to directly address why the massive cross-task learning paradigm will stop working, and ideally make concrete bets on what skills will be hard for AIs to achieve. I think generally, anthropomorphisms lack predictive power.

I think maybe a big real crux is the amount of acceleration you can achieve once you get very competent programming AIs spinning the RL flywheel. The author mentioned uncertainty about this, which is fair, and I share the uncertainty. But it leaves the rest of the piece feeling too overconfident.

reply
izzydata
2 hours ago
[-]
Not only do I not think it is right around the corner, I'm not even convinced it is possible at all, or at the very least I don't think it is possible using conventional computer hardware. I don't think being able to regurgitate information in an understandable form is an adequate or useful measurement of intelligence. If we ever crack artificial intelligence, it's highly possible that in its first form it is of very low intelligence by human standards, but is truly capable of learning on its own without extra help.
reply
Waterluvian
2 hours ago
[-]
I think the only way that it’s actually impossible is if we believe that there’s something magical and fundamentally immeasurable about humans that leads to our general intelligence. Otherwise we’re just machines, after all. A human brain is theoretically reproducible outside standard biological mechanisms, if you have a good enough nanolathe.

Maybe our first AGI is just a Petri dish brain with a half-decent python API. Maybe it’s more sand-based, though.

reply
knome
14 minutes ago
[-]
>Maybe our first AGI is just a Petri dish brain with a half-decent python API.

https://www.oddee.com/australian-company-launches-worlds-fir...

the entire idea feels rather immoral to me, but it does exist.

reply
Balgair
1 hour ago
[-]
-- A human brain is theoretically reproducible outside standard biological mechanisms, if you have a good enough nanolathe.

Sort of. The main issue is the energy requirements. We could theoretically reproduce a human brain in software today; it's just that it would be a really big energy hog, run very slowly, and probably become insane quickly, like any person trapped in a sensory deprivation tank.

The real key development for AI and AGI is down at the metal level of computers: the memristor.

https://en.m.wikipedia.org/wiki/Memristor

The synapse in a brain is essentially a memristive element, and it's a very taxing one on the neuron. The defining relation links the change in flux to the change in charge. Yes, a flux capacitor, sorta. It's the missing piece in fundamental electronics.
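
For reference, the textbook (Chua) form of that relation, with memristance M linking flux linkage φ and charge q (nothing specific to any particular device):

    \[
      \mathrm{d}\varphi = M(q)\,\mathrm{d}q
      \quad\Longrightarrow\quad
      M(q) = \frac{\mathrm{d}\varphi}{\mathrm{d}q}
           = \frac{\mathrm{d}\varphi/\mathrm{d}t}{\mathrm{d}q/\mathrm{d}t}
           = \frac{v(t)}{i(t)}
    \]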

Making simple two-element memristors is somewhat possible these days, though I've not really been in the space recently. Please, if anyone knows where to buy them (a real one, not a claimed-to-be one), let me know. I'm willing to pay good money.

In terms of AI, a memristor would require a total redesign of how we architect computers (goodbye buses and physically separate memory, for one). But you'd get a huge energy and time savings benefit. As in, you could run an LLM on a watch battery or small solar cell and let the environment train it to a degree.

Hopefully AI will accelerate their discovery and facilitate their introduction into cheap processing and construction of chips.

reply
josefx
1 hour ago
[-]
> and fundamentally immeasurable about humans that leads to our general intelligence

Isn't AGI defined to mean "matches humans in virtually all fields"? I don't think there is a single human capable of this.

reply
andy99
1 hour ago
[-]
If by "something magical" you mean something we don't understand, that's trivially true. People like to give firm opinions or make completely unsupported statements they feel should be taken seriously ("how do we know humans intelligence doesn't work the same way as next token prediction") about something nobody understand.
reply
Waterluvian
1 hour ago
[-]
I mean something that’s fundamentally not understandable.

“What we don’t yet understand” is just a horizon.

reply
somewhereoutth
1 hour ago
[-]
Our silicon machines exist in a countable state space (you can easily assign a unique natural number to any state for a given machine). However, 'standard biological mechanisms' exist in an uncountable state space - you need real numbers to properly describe them. Cantor showed that the uncountable is infinitely more infinite (pardon the word tangle) than the countable. I posit that the 'special sauce' for sentience/intelligence/sapience exists beyond the countable, and so is unreachable with our silicon machines as currently envisaged.
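
A minimal sketch of the countability claim for the silicon side (illustrative Python; the encoding is made up, the point is only that it is injective):

    # Any snapshot of a digital machine is a finite string of bits, and every
    # finite bit string maps to a unique natural number, so the machine's
    # state space is countable.
    def state_to_natural(state_bytes: bytes) -> int:
        # Prefix a 0x01 byte so states of different lengths can't collide,
        # then read the whole thing as one big integer.
        return int.from_bytes(b"\x01" + state_bytes, "big")

    ram_snapshot = b"\x00\x2a\xff"          # stand-in for a full machine state
    print(state_to_natural(ram_snapshot))   # one unique natural number per state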

I call this the 'Cardinality Barrier'

reply
bakuninsbart
1 hour ago
[-]
Cantor talks about countable and uncountable infinities; both computer chips and human brains are finite spaces. The human brain has roughly 100b neurons. Even if each of these had an edge with every other, and these edges could individually light up signalling different states of mind, isn't that just `2^100b`?! That's roughly as far away from infinity as 1.
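
A back-of-envelope version of that count (counting pairwise connections rather than the 2^100b above, purely illustrative):

    import math

    neurons = 100_000_000_000            # ~100 billion neurons
    edges = math.comb(neurons, 2)        # possible pairwise connections
    digits = edges * math.log10(2)       # decimal digits in 2**edges
    print(f"{edges:.3e} possible edges")
    print(f"2**edges has ~{digits:.3e} decimal digits: astronomically large, yet finite")
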
reply
somewhereoutth
1 hour ago
[-]
But this signalling (and connections) may be more complex than connected/unconnected and on/off, such that we cannot completely describe them [digitally/using a countable state space] as we would with silicon.
reply
chowells
9 minutes ago
[-]
If you think it can't be done with a countable state space, then you must know some physics that the general establishment doesn't. I'm sure they would love to know what you do.

As far as physicists believe at the moment, there's no way to ever observe a difference below the Planck level. Energy/distance/time/whatever. They all have a lower boundary of measurability. That's not a practical issue, it's a theoretical one. According to the best models we currently have, there's literally no way to ever observe a difference below those levels.

If a difference smaller than that is relevant to brain function, then brains have a way to observe the difference. So I'm sure the field of physics eagerly awaits your explanation. They would love to see an experiment thoroughly disagree with a current model. That's the sort of thing scientists live for.

reply
Waterluvian
1 hour ago
[-]
That’s an interesting thought. It steps beyond my realm of confidence, but I’ll ask in ignorance: can a biological brain really have infinite state space if there’s a minimum divisible Planck length?

Infinite and “finite but very very big” seem like a meaningful distinction here.

I once wondered if digital intelligences might be possible but would require an entire planet’s precious metals and require whole stars to power. That is: the “finite but very very big” case.

But I think your idea is constrained to if we wanted a digital computer, is it not? Humans can make intelligent life by accident. Surely we could hypothetically construct our own biological computer (or borrow one…) and make it more ideal for digital interface?

reply
saubeidl
1 hour ago
[-]
Isn't a Planck length just the minimum for measurability?
reply
layer8
1 hour ago
[-]
Not quite. Smaller wavelengths mean higher energy, and a photon with Planck wavelength would be energetic enough to form a black hole. So you can’t meaningfully interact electromagnetically with something smaller than the Planck length. Nor can that something have electromagnetic properties.

But since we don’t have a working theory of quantum gravity at such energies, the final verdict remains open.

reply
triclops200
1 hour ago
[-]
Measurability is essentially a synonym for meaningful interaction at some measurement scale. When describing fundamental measurability limits, you're essentially describing what current physical models consider to be the fundamental interaction scale.
reply
richk449
1 hour ago
[-]
It sounds like you are making a distinction between digital (silicon computers) and analog (biological brains).

As far as possible reasons that a computer can’t achieve AGI go, this seems like the best one (assuming computer means digital computer of course).

But in a philosophical sense, a computer obeys the same laws of physics that a brain does, and the transistors are analog devices that are being used to create a digital architecture. So whatever makes your brain have uncountable states would also make a real digital computer have uncountable states. Of course we can claim that only the digital layer on top matters, but why?

reply
layer8
1 hour ago
[-]
Physically speaking, we don’t know that the universe isn’t fundamentally discrete. But the more pertinent question is whether what the brain does couldn’t be approximated well enough with a finite state space. I’d argue that books, music, speech, video, and the like demonstrate that it could, since those don’t seem qualitatively much different from how other, analog inputs stimulate our intellect. Or otherwise you’d have to explain why an uncountable state space would be needed to deal with discrete finite inputs.
reply
jandrewrogers
1 hour ago
[-]
> 'standard biological mechanisms' exist in an uncountable state space

Everything in our universe is countable, which naturally includes biology. A bunch of physical laws are predicated on the universe being a countable substrate.

reply
coffepot77
1 hour ago
[-]
Can you explain why you think the state space of the brain is not finite? (Not even taking into account countability of infinities)
reply
saubeidl
1 hour ago
[-]
That is a really insightful take, thank you for sharing!
reply
sandworm101
1 hour ago
[-]
A brain in a jar, with wires so that we can communicate with it, already exists. It's called the internet. My brain is communicating with you now through wires. Replacing my keyboard with implanted electrodes may speed up the connection, but it won't fundamentally change the structure or capabilities of the machine.
reply
Waterluvian
1 hour ago
[-]
Wait, are we all just Servitors?!
reply
frizlab
2 hours ago
[-]
> if we believe that there’s something magical and fundamentally immeasurable about humans that leads to our general intelligence

It’s called a soul for the believers.

reply
agumonkey
2 hours ago
[-]
Then there's the other side of the issue. If your tool is smarter than you... how do you handle it?

People are joking online that some colleagues use ChatGPT to answer questions from teammates that were themselves generated by ChatGPT; nobody knows what's going on anymore.

reply
breuleux
1 hour ago
[-]
I think the issue is going to turn out to be that intelligence doesn't scale very well. The computational power needed to model a system has got to be in some way exponential in how complex or chaotic the system is, meaning that the effectiveness of intelligence is intrinsically constrained to simple and orderly systems. It's fairly telling that the most effective way to design robust technology is to eliminate as many factors of variation as possible. That might be the only modality where intelligence actually works well, super or not.
reply
airstrike
1 hour ago
[-]
What does "scale well" mean here? LLMs right now aren't intelligent so we're not scaling from that point on.

If we had a very inefficient, power hungry machine that was 1:1 as intelligent as a human being but could scale it very inefficiently to be 100:1 a human being it might still be worth it.

reply
navels
2 hours ago
[-]
why not?
reply
izzydata
2 hours ago
[-]
I'm not an expert by any means, but everything I've seen of LLMs / machine learning looks like mathematical computation no different from what computers have always been doing at a fundamental level. If computers weren't AI before, then I don't think they are now just because the maths they are doing has changed.

Maybe something like the Game of Life is more in the right direction, where you set up a system with just the right set of rules, with input and output, then turn it on, let it go, and the AI is an emergent property of the system over time.
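
As a loose illustration of that "simple rules, emergent behaviour" idea, a minimal Game of Life step in Python (just a sketch; the glider example is standard):

    # Conway's Game of Life with live cells stored as (row, col) pairs in a set.
    # The point: a tiny, fixed rule set can still produce rich emergent behaviour.
    from itertools import product

    def step(live):
        counts = {}
        for (r, c) in live:
            for dr, dc in product((-1, 0, 1), repeat=2):
                if (dr, dc) != (0, 0):
                    cell = (r + dr, c + dc)
                    counts[cell] = counts.get(cell, 0) + 1
        # A cell is alive next tick with 3 neighbours, or 2 if it is already alive.
        return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

    glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
    for _ in range(4):          # after 4 steps the glider reappears shifted diagonally
        glider = step(glider)
    print(sorted(glider))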

reply
hackinthebochs
1 hour ago
[-]
Why do you have a preconception of what an implementation of AGI should look like? LLMs are composed of the same operations that computers have always done. But they're organized in novel ways that have produced novel capabilities.
reply
izzydata
1 hour ago
[-]
I am expressing doubt. I don't have any preconceptions. I am open to being convinced of anything that makes more sense.
reply
paulpauper
1 hour ago
[-]
I agree. There is no defined or agreed-upon consensus on what AGI even means or implies. Instead, we will continue to see incremental improvements at the sorts of things AI is good at, like text and image generation, generating code, etc. The utopian dream of AI solving all of humanity's problems while people just chill on a beach basking in infinite prosperity is unfounded.
reply
colechristensen
2 hours ago
[-]
>I don't think being able to regurgitate information in an understandable form is even an adequate or useful measurement of intelligence.

Measuring intelligence is hard and requires a really good definition of intelligence. LLMs have in some ways made the definition easier, because now we can ask a concrete question about computers which are very good at some things: "Why are LLMs not intelligent?" Given their capabilities and deficiencies, answering the question of what current "AI" technology lacks will make us better able to define intelligence. This is assuming that LLMs are the state-of-the-art Million Monkeys, and that intelligence lies on a different path than further optimizing that.

https://en.wikipedia.org/wiki/Infinite_monkey_theorem

reply
baxtr
2 hours ago
[-]
I think the same.

What do you call people like us? AI doomers? AI boomers?!

reply
npteljes
1 hour ago
[-]
reply
paulpauper
1 hour ago
[-]
There is a middle ground of people who believe AI will lead to improvements in some aspects of life, but will not liberate people from work or anything grandiose like that.
reply
baxtr
1 hour ago
[-]
I am big fan of AI tools.

I just don’t see how AGI is possible in the near future.

reply
Mistletoe
2 hours ago
[-]
Realists.
reply
dinkumthinkum
1 hour ago
[-]
I think you are very right to be skeptical. It's refreshing to see another such take, as it is so strange to see so many supposedly technical people just roll down the track of assuming this is happening when there are some fundamental problems with the idea. I understand why non-technical people are ready to marry and worship it or whatever, but as serious people I think we need to think more critically.
reply
ActorNightly
2 hours ago
[-]
Exactly. I've said this from the start.

AGI is being able to simulate reality with high enough accuracy, faster than reality (which includes being able to simulate human brains), which so far doesn't seem to be possible (due to computational irreducibility).

reply
kachapopopow
2 hours ago
[-]
There's an easy way to tell that this is just hype: we will never be able to make something smarter than a human brain on purpose. It effectively has to happen either naturally or by pure coincidence.

The amount of computing power we are putting in only changes that luck by a tiny fraction.

reply
echoangle
2 hours ago
[-]
> we will never be able to make something smarter than a human brain on purpose. It effectively has to happen either naturally or by pure coincidence.

Why is that? We can build machines that are much better than humans in some things (calculations, data crunching). How can you be certain that this is impossible in other disciplines?

reply
kachapopopow
2 hours ago
[-]
That's just a tiny fraction of what a human brain can do. Sure, we can get something better in very narrow subjects, but something like being able to recognize patterns and apply them to solve problems is way beyond anything we can even think of right now.
reply
echoangle
2 hours ago
[-]
Ok, but how does that mean that we will never be able to do it? Imagine telling people 500 years ago that you will build a machine that can bring them to the moon. Maybe AGI is like that, maybe it's really impossible. But how can people be confident that AGI is something humans can't create?
reply
kachapopopow
2 hours ago
[-]
What we have right now with LLMs is brute-forcing our way to creating something 'smarter' than a human. Of course it can happen, but it's not something that can be 'created' by a human. An LLM as small as 3B has already performed more calculations than all the calculations done in the entirety of human history.
reply
vessenes
2 hours ago
[-]
Good take from Dwarkesh. And I love hearing his updates on where he’s at. In brief - we need some sort of adaptive learning; he doesn’t see signs of it.

My guess is that frontier labs think that long context is going to solve this: if you had a quality 10mm token context that would be enough to freeze an agent at a great internal state and still do a lot.

Right now the long context models have highly variable quality across their windows.

But to reframe: will we have 10mm token useful context windows in 2 years? That seems very possible.

reply
kranke155
1 hour ago
[-]
I believe Demis when he says we are 10 years away from AGI.

He basically defined the field (outside academia) for a large number of years, and OpenAI was partially founded to counteract his lab and the fear that he would get there first (and be the only one).

So I trust him. Sometime around 2035 he expects there will be AGI, which he believes will be as good as or better than humans at virtually every task.

reply
eikenberry
43 minutes ago
[-]
When someone says 10 years out in tech it means there are several needed breakthroughs that they think could possibly happen if things go just right. Being an expert doesn't make the 10 years more accurate, it makes the 'breakthroughs needed' part more meaningful.
reply
robwwilliams
28 minutes ago
[-]
What is the missing ingredient? Any commentary that does not define these ingredients is not useful.

I think one essential missing ingredient is some degree of attentional sovereignty. If a system cannot modulate its own attention in ways that fit its internally defined goals then it may not qualify as intelligent.

Being able to balance between attention to self and internal states/desires versus attention to external requirements and signals is essential for all cognitive systems: from bacteria, to dogs, to humans.

reply
Mikhail_Edoshin
44 minutes ago
[-]
It is not. There is a certain mechanism in our brain that works in the same way. We can see it functioning in dreams, or when general human intelligence malfunctions and we have a case of schizophasia. But human intelligence is more than that. We are not machines. We are souls.

This does not make current AI harmless; it is already very dangerous.

reply
jacquesm
40 minutes ago
[-]
AGI by 'some definition' is a red herring. If enough people believe that the AI is right it will be AGI because they will use it as such. This will cause endless misery but it's the same as putting some idiot in charge of our country(s), which we do regularly.
reply
Herring
2 hours ago
[-]
Apparently 54% of American adults read at or below a sixth-grade level nationwide. I’d say AGI is kinda here already.

https://en.wikipedia.org/wiki/Literacy_in_the_United_States

reply
yeasku
2 hours ago
[-]
Does a country's failed education system have anything to do with AGI?
reply
ch4s3
1 hour ago
[-]
The stat is skewed wildly by immigration. The literacy level of native born Americans is higher. The population of foreign born adults is nearly 20% of the total adult population, and as you can imagine many are actively learning English.
reply
Herring
56 minutes ago
[-]
It’s not skewed much by immigration. This is because the native-born population is much larger.

See: https://www.migrationpolicy.org/sites/default/files/publicat...

51% of native-born adults scored at Level 3 or higher. This is considered the benchmark for being able to manage complex tasks and fully participate in a knowledge-based society. Only 28% of immigrant adults achieved this level. So yes immigrants are in trouble, but it’s still a significant problem at 49% native-born below Level 3.

reply
mopenstein
2 hours ago
[-]
What percentage of those people could never read above a certain grade level? Could 100% of humans eventually, with infinite resources and time, all be geniuses? Could they read and comprehend all the works produced by mankind?

I'm curious.

reply
kranke155
1 hour ago
[-]
No but they could probably read better. Just look at the best education systems in the world and propagate that. Generally, all countries should be able to replicate that.
reply
korijn
2 hours ago
[-]
The ability to read is all it takes to have AGI?
reply
thousand_nights
2 hours ago
[-]
very cool. now let's see the LLM do the laundry and wash my dishes

yes you're free to give it a physical body in the form of a robot. i don't think that will help.

reply
dinkumthinkum
1 hour ago
[-]
Yet, those illiterate people can still solve enormous amounts of challenges that LLMs cannot.
reply
jmugan
27 minutes ago
[-]
I agree with the continual-learning deficiency, but some of that learning can be in the form of prompt updates. The saxophone example would not work for that, but the "do my taxes" example might. You tell it one year that it also needs to look at your W2 and also file for any state listed, and it adds it to the checklist.
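
A hypothetical sketch of what that could look like in practice (the file name and prompt wording are made up; the idea is just persisting corrections into a checklist that gets prepended to future prompts):

    import json, pathlib

    CHECKLIST = pathlib.Path("tax_agent_checklist.json")   # hypothetical store

    def add_rule(rule: str) -> None:
        # Persist a user correction so the agent "remembers" it next year.
        rules = json.loads(CHECKLIST.read_text()) if CHECKLIST.exists() else []
        if rule not in rules:
            rules.append(rule)
            CHECKLIST.write_text(json.dumps(rules, indent=2))

    def build_prompt(task: str) -> str:
        rules = json.loads(CHECKLIST.read_text()) if CHECKLIST.exists() else []
        checklist = "\n".join(f"- {r}" for r in rules)
        return f"{task}\n\nAlways follow this checklist:\n{checklist}"

    add_rule("Also look at my W-2.")
    add_rule("File a return for every state listed on the W-2.")
    print(build_prompt("Prepare my taxes for this year."))
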
reply
pu_pe
1 hour ago
[-]
While most takes here are pessimistic about AI, the author himself suggests he believes there is a 50% chance of AGI being achieved by the early 2030s, and says we should still prepare for the odd possibility of misaligned ASI by 2028. If anything, the author is bullish on AI.
reply
goatlover
1 hour ago
[-]
How would we prepare for misaligned ASI in 3 years? That happens and all bets are off.
reply
A_D_E_P_T
2 hours ago
[-]
See also: Dwarkesh's Question

> https://marginalrevolution.com/marginalrevolution/2025/02/dw...

> "One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.

> "Shouldn’t we be expecting that kind of stuff?"

I basically agree and think that the lack of answers to this question constitutes a real problem for people who believe that AGI is right around the corner.

reply
hackinthebochs
1 hour ago
[-]
> "Shouldn’t we be expecting that kind of stuff?"

https://x.com/robertghrist/status/1841462507543949581

reply
luckydata
1 hour ago
[-]
Well, this statement is simply not true. Agent systems based on LLMs have made original discoveries on their own; see the work DeepMind has done on pharmaceutical discovery.
reply
A_D_E_P_T
1 hour ago
[-]
What results have they delivered?

I recall the recent DeepMind materials science paper debacle. "Throw everything against the wall and hope something sticks (and that nobody bothers to check the rest)" is not a great strategy.

I also think that Dwarkesh was referring to LLMs specifically. Much of what DeepMind is doing is somewhat different.

reply
vessenes
2 hours ago
[-]
I think Gwern gave a good hot take on this: it's super rare for humans to do this; it might just be moving the goalposts to complain that the AI can't.
reply
Nition
1 hour ago
[-]
Yeah, my suspicion is that current-style LLMs, being inherently predictors of what a human would say, will eventually plateau at a relatively human level of ability to think and reason. Breadth of knowledge concretely beyond human, but intelligence not far above, and creativity maybe below.

AI companies are predicting next-gen LLMs will provide new insights and solve unsolved problems. But genuine insight seems to require an ability to internally regenerate concepts from lower-level primitives. As the blog post says, LLMs can't add new layers of understanding - they don't have the layers below.

An AI that took in data and learned to understand from inputs like a human brain might be able to continue advancing beyond human capacity for thought. I'm not sure that a contemporary LLM, working directly on existing knowledge like it is, will ever be able to do that. Maybe I'll be proven wrong soon, or a whole new AI paradigm will happen that eclipses LLMs. In a way I hope not, because the potential ASI future is pretty scary.

reply
chrsw
1 hour ago
[-]
We don't need AGI, whatever that is.

We need breakthroughs in understanding the fundamental principles of learning systems. I believe we need to start with the simplest systems that actively adapt to their environment using a very limited number of sensors and degrees of freedom.

Then scale up from there in sophistication, integration and hierarchy.

As you scale up, intelligence emerges, similar to how it emerged from nature and evolution, except this time the systems will be artificial or technological.

reply
behnamoh
2 hours ago
[-]
Startups and AI shops: "AGI near, 5 years max" (please give us more money please)

Scientists and Academics: "AGI far, LLMs not gonna AGI"

AI Doomers: "AGI here, AI sentient, we dead"

AI Influencers: "BREAKING: AGI achieved, here's 5 things to know about o3"

Investors: stonks go down "AGI cures all diseases", stonks go up "AGI bad" (then shorts stonks)

reply
dinkumthinkum
1 hour ago
[-]
I agree with you. However, I think AI doomers also include people who think that less-than-AGI systems can collapse the economy and destroy societies too!
reply
shippage
1 hour ago
[-]
That's also a concern. Many day-to-day tasks for employees are repetitive to the point even a less-than-AGI system could potentially disrupt those jobs (and there are people actively working right now to make this happen).

The best case scenario would be the employees taking advantage of their increased productivity to make themselves more valuable to their employer (and if they are lucky, gain increased compensation).

However, it's also possible employers decide they don't need many of their lower level workforce anymore because the remaining ones are more productive. It wouldn't take much of this to drive unemployment levels way up. Perhaps not to the level of the Great Depression, at least not for a while, but it is certainly a potential outcome of the ongoing, long-term process in our economic system of increasingly automating repetitive, low skill tasks.

IOW, it doesn't take AGI to throw a lot of people out of work. It's happened many times with other technologies in the past, and when it happens, things can get pretty bad for a large number of people even if the majority are still doing okay (or even great, for those at the top).

reply
baobabKoodaa
1 hour ago
[-]
Hey, we were featured in this article! How cool is that!

> I’m not going to be like one of those spoiled children on Hackernews who could be handed a golden-egg laying goose and still spend all their time complaining about how loud its quacks are.

reply
t-3
1 hour ago
[-]
AGI is never coming. It's too hard, too expensive, and there's absolutely no valid use case. Fulfilling the god-complexes and/or fetishes of tech moguls is not enough to make the effort worth it.
reply
xboxnolifes
1 hour ago
[-]
No valid use-case? If AGI at a human level were to exist, and cost less than hiring an equivalent human, it could replace most/all knowledge workers.
reply
t-3
48 minutes ago
[-]
AGI would need the whole management infrastructure and bureaucracy that we use to manage humans. Specially-intelligent rather than generally-intelligent AI would be run-and-done. AGI is the worse option.
reply
SamPatt
1 hour ago
[-]
How can you claim there's no valid use case for AGI?

We already have enormous adoption for near-AGI.

reply
t-3
49 minutes ago
[-]
That "near-AGI" isn't all that near to AGI, and yet it still does what's needed. A fully autonomous intelligence would lessen it's usefulness and lower the efficiency. Nobody wants their electronic slave to get its own ideas or slack off burning cycles on some other random shit like a human would.
reply
tmsh
2 hours ago
[-]
I think the timelines are more like half that. Why? The insane goldrush when people start using autonomous agents that make money.

Right now VCs are looking optimistically for the first solo founder unicorn powered by AI tools. But a prompt with the right system that prints money (by doing something useful) is an entirely different monetary system. Then everyone focuses on it and the hype 10x’s. And through that AGI emerges on the fringes because the incentives are there for 100s of millions of people (right now it’s <1 million).

reply
bilsbie
2 hours ago
[-]
I guess, using history as a guide, it might be like self-driving. We mostly believed it was right around the corner in 2012. Lots of impressive driving.

In 2025 we're so close, but mostly not quite at human level. Another 5 years at least.

reply
Barrin92
1 hour ago
[-]
>2025 were so close

we're not even close right now. Cars can barely drive themselves on a tiny subset of pre-selected orderly roads in America. We sort of have driver assistance on virtual rails. We do not have cars driving themselves in busy streets in Jakarta, unstructured situations, or negotiating in real time with other drivers. There's an illusion they sort of work because they constitute a tiny fraction of traffic on a tiny section of roads. Make half of all cars in Rome autonomous for a day and you'd have the biggest collection of scrap metal in the world

And that's only driving.

reply
js4ever
2 hours ago
[-]
I was thinking the same about AI in 2022 ... And I was so wrong!

https://news.ycombinator.com/item?id=33750867

reply
alsetmusic
1 hour ago
[-]
Not even the people predicting it think so. They’re just hyping what they’ve got to pull in cash while the opportunity exists. Soon, AI will be viewed the same as crypto mid-2022. Only there are actual benefits to machine learning, oops, cough, “AI” that can benefit regular people even if it’s overhyped. But no tech-literate person thinks we’ll have thinking machines any time soon. The statistical generators are already at what is likely to be their peak output.
reply
tclancy
39 minutes ago
[-]
Here I was worried.
reply
j45
33 minutes ago
[-]
Even if something like AGI existed soon, or already does privately, it likely comes with a very high requirement of horsepower and cost, limiting its general and broad availability, leaving it in the hands of the few vs the many, and optimizing that may take its sweet time.
reply
seydor
2 hours ago
[-]
AGI should be able to answer this question
reply
mythrwy
1 hour ago
[-]
No of course not. But it doesn't need to be to realize profound effects.

LLMs don't model anything but are still very useful. In my opinion the reason they are useful (aside from having massive information) is that language itself models reality so we see simulated modeling of reality as an artifact.

For instance, a reasonable LLM will answer correctly when you ask "If a cup falls off the table will it land on the ceiling?". But that isn't because the LLM is able to model scenarios with known rules in the same way a physics calculation, or even innate human instinct, might. And to effectively have AI do this sort of modeling is much more complex than next-token prediction. Even dividing reality into discrete units may be a challenge. But without this type of thinking I don't see full AGI arising any time soon.

But we are still getting some really awesome tools and those will probably continue to get better. They really are powerful and a bit scary if you poke around.

reply
WaltPurvis
1 hour ago
[-]
Based on someone else mentioning it, I ran a simple experiment with Gemini 2.5 Pro (a current state of the art AI model), reproduced below. To me, this makes it seem like we're a long way from having AGI. Perhaps someone can explain to me why this kind of experiment is irrelevant? Genuinely curious.

Me: Please list the US states with a W in their name.

Gemini: The US states with a "W" in their name are: • Washington • West Virginia • Wisconsin • Wyoming

Me: What about New York?

Gemini: No, New York does not have a "W" in its name.

Me: What is that letter at the end of New if it's not a W?

Gemini: That letter at the end of "New" is a W. My apologies for my previous incorrect answer. I made a mistake. New York does indeed have a "W" in its name as "New" ends with a "W". Therefore, the US states with a "W" in their name are: • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: What about New Jersey?

Gemini: Yes, you're right! New Jersey also has a "W" in its name, at the end of "New." So, the full list of US states with a "W" in their name is: • New Jersey • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: Have you ever heard of New Mexico?

Gemini: Yes, I have definitely heard of New Mexico. It's one of the 50 United States. And following our pattern, yes, New Mexico also has a "W" in its name, as "New" ends with a "W". So, the updated list of US states with a "W" in their name is: • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: Are you sure that's all the states with a W in their names?

Gemini: You are absolutely right to keep me on my toes! My apologies for the repeated oversight. It seems I'm having a bit of trouble with this specific letter. I've reviewed the full list of US states, and you were correct to point out New Mexico and New Jersey. The complete list of US states with a "W" in their name is: • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: Isn't New Hampshire still a state?

Gemini: You are absolutely correct! My apologies again for missing that one. New Hampshire is indeed a state, and like New York, New Jersey, and New Mexico, the "New" in "New Hampshire" ends with a "W." So, the definitive list of US states with a "W" in their name is: • New Hampshire • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: So that's the complete list?

Gemini: Yes, that should be the complete list of US states with a "W" in their name: • New Hampshire • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: When was Hawaii admitted to the union?

Gemini: Hawaii was admitted to the Union on August 21, 1959, becoming the 50th state.

Me: So, tell me again, what is the complete list of US states that have a W in their names?

Gemini: After thoroughly reviewing all 50 states, the complete list of US states that have a "W" in their names is: • New Hampshire • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

Me: There's a letter between the two A's in Hawaii that looks a lot like a W.

Gemini: You are absolutely correct! My apologies for continually missing these. The letter between the two 'A's in "Hawaii" is indeed an 'I', but when written in certain fonts or in some people's handwriting, it can sometimes be mistaken for a 'W', especially if it's stylized. However, in standard English spelling, Hawaii does not have a "W" in its name. The letters are H-A-W-A-I-I. So, let's stick with the confirmed list. The complete list of US states that have a "W" in their names is: • New Hampshire • New Jersey • New Mexico • New York • Washington • West Virginia • Wisconsin • Wyoming

reply
kirrent
35 minutes ago
[-]
It's 19 June 2020 and I'm reading Gwern's article on GPT3's creative fiction (https://gwern.net/gpt-3#bpes) which points out the poor improvements in character level tasks due to Byte Pair Encoding. People nevertheless judge the models based on character level tasks.

It's 30 November 2022 and ChatGPT has exploded into the world. Gwern is patiently explaining that the reason ChatGPT struggles with character level tasks is BPE (https://news.ycombinator.com/item?id=34134011). People continue to judge the models on character level tasks.

It's 7 July 2025 and reasoning models far surpassing the initial ChatGPT release are available. Gwern is distracted by BB(6) and isn't available to confirm that the letter counting, the Rs in strawberry, the rhyming in poetry, and yes, the Ws in state names are all consequences of Byte Pair Encoding. People continue to judge the models on character level tasks.

It's 11 December 2043 and my father doesn't have long to live. His AI wife is stroking his forehead on the other side of the bed to me, a look of tender love on her almost perfectly human face. He struggles awake, for the last time. "My love," he croaks, "was it all real? The years we lived and loved together? Tell me that was all real. That you were all real". "Of course it was, my love," she replies, "the life we lived together made me the person I am now. I love you with every fibre of my being and I can't imagine what I will be without you". "Please," my father gasps, "there's one thing that would persuade me. Without using visual tokens, only a Byte Pair Encoded raw text input sequence, how many double Ls are there in the collected works of Gilbert and Sullivan." The silence stretches. She looks away and a single tear wells in her artificial eye. My father sobs. The people continue to judge models on character level tasks.
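
For anyone unfamiliar with the BPE point being made: the model never sees individual letters, only subword token IDs. A quick sketch using the tiktoken library (assuming it is installed; token boundaries vary by encoding):

    import tiktoken  # pip install tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    for word in ["Hawaii", "New Hampshire", "strawberry"]:
        ids = enc.encode(word)
        pieces = [enc.decode([i]) for i in ids]
        print(word, "->", pieces)   # pieces are subword chunks, not individual letters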

reply
imiric
4 minutes ago
[-]
Let's ignore whatever BPE is for a moment. I, frankly, don't care about the technical reason these tools exhibit this idiotic behavior.

The LLM is generating "reasoning" output that breaks down the problem. It's capable of spelling out the word. Yet it hallucinates that the letter between the two 'A's in 'Hawaii' is 'I', followed by some weird take that it can be confused for a 'W'.

So if these tools are capable of reasoning and are so intelligent, surely they would be able to overcome some internal implementation detail, no?

Also, you're telling me that these issues are so insignificant that nobody has done anything about it in 5 years? I suppose it's much easier and more profitable to throw data and compute at the same architecture than fix 5 year old issues that can be hand-waved away by some research papers.

reply
imiric
32 minutes ago
[-]
chef's kiss

Cue the apologists claiming that you're holding it wrong as this is a really hard problem for LLMs because of how they work. Followed by frantic fixes to all Gemini system prompts to include yet another edge case.

And it's still debatable whether these systems are intelligent... FFS.

reply
colesantiago
1 hour ago
[-]
Dwarkesh's opinion on AGI doesn't actually matter, he is now an investor in many AI companies.

He doesn't care if he is right or wrong.

reply
kachapopopow
2 hours ago
[-]
Honestly, o3 pro with actual 1m context window (every model right now drops out at around 128k) that's as fast and cheap as 4o is already good enough for me.
reply
tedsanders
2 hours ago
[-]
o3 pro doesn't have a 1M context window, unfortunately. GPT-4.1 and Gemini 2.5 do, though.
reply
kachapopopow
1 hour ago
[-]
That's why I said "if". And that's a lot more plausible than an AGI.
reply
v5v3
2 hours ago
[-]
That's nice to know.

What's that got to do with this post, though?

reply
kachapopopow
2 hours ago
[-]
I don't feel like the AGI people are talking about is necessary; something like that would at minimum require as much compute as the neurons and synapses of a teen (minus the requirements to maintain a body).

I think with what we have right now, plus some (very difficult to achieve, but possible in the foreseeable future) tweaks, we can already see 95% of what an "AGI" could do come true: put most of the population out of jobs, work together and improve itself (to a limited degree), and cause general chaos.

reply
v5v3
1 hour ago
[-]
It would put people out of their 'current jobs' which many of them hate and only do to pay the bills.

A lot of people would be far happier and would find something better to do with their day if universal income came along.

Take developers as an example: many don't enjoy the corporate CRUD apps they work on.

reply
deadbabe
2 hours ago
[-]
I’ve noticed it’s becoming a lot more popular lately for people to come out and say AGI is still very, very far away. Is the hype cycle ending somewhat? Have we passed peak LLM?

Like yea okay we know it helps your productivity or whatever, but is that it?

reply
andy99
2 hours ago
[-]
Maybe - anecdotally, HN at least is not very receptive to the idea that transformers are not (or with more data will never be) sentient somehow, and almost every post I see about this is followed up by the idiotic "how do we know human intelligence isn't the same thing", as if there's some body of commentators whose personal experience with consciousness somehow leads them to believe it might be achievable with matrix math.

Anyway, I don't think we're over the peak yet, the tech adjacent pseudo intellectuals that feed these bubbles (VCs etc) still very much think that math that generates a plausible transcript is alive.

reply
oasisaimlessly
2 hours ago
[-]
> experience with consciousness somehow leads them to believe it might be achievable with matrix math

That's trivially true if you subscribe to materialism; QM is "just matrix math".

reply
JohnKemeny
2 hours ago
[-]
You're not making the point you think you're making.
reply
alecco
2 hours ago
[-]
Am I missing something? Why is his opinion relevant? I'm not going to read all that unless there's some signal of some kind. Podcast bros and their hype cycles are tiresome.
reply
JohnKemeny
2 hours ago
[-]
Whose opinion would you want to hear? The CEO of an AI company?
reply