“I’m not an ML expert and I haven’t read your article, but here’s my amazing experience with LLM Agents that changed my life:”
Now what...? What's happening right now that should make me care that AGI is here (or not)? What's the magic thing that's happening with AGI that wasn't happening before?
<looks out of window> <checks news websites> <checks social media...briefly> <asks wife>
Right, so, not much has changed from 1-2 years ago that I can tell. The job market's a bit shit if you're in software... is that what we get for billions of dollars spent?
The writing is on the wall. Even if there are no new advances in technology, the current state is upending jobs, education, media, etc.
It took one September. Then as soon as you could take payments on the internet the rest was inevitable and in _clear_ demand. People got on long waiting lists just to get the technology in their homes.
> no new advances in technology
The reason the internet became so accessible is because Moore was generally correct. There were two corresponding exponential processes that vastly changed the available rate of adoption. This wasn't at all like cars being introduced into society. This was a monumental shift.
I see no advances in LLMs that suggest any form of the same exponential processes exist. In fact the inverse is true. They're not reducing power budgets fast enough to even imagine that they're anywhere near AGI, and even if they were, that they'd ever be able to sustainably power it.
> the current state is upending jobs
The difference is companies fought _against_ the internet because it was so disruptive to their business model. This is quite the opposite. We don't have a labor crisis, we have a retention crisis, because companies do not want to pay fair value for labor. We can wax on and off about technology, and perceptrons, and training techniques, or power budgets, but this fundamental fact seems the hardest to ignore.
If they're wrong this all collapses. If I'm wrong I can learn how to write prompts in a week.
It's the classic "slowly, then suddenly" paradigm. It took decades to get to that one September. Then years more before we all had internet in our pocket.
> The reason the internet became so accessible is because Moore was generally correct.
Can you explain how Moore's law is relevant to the rise of the internet? People didn't start buying couches online because their home computer lacked sufficient compute power.
> I see no advances in LLMs that suggest any form of the same exponential processes exist.
LLMs have seen enormous growth in power over the last 3 years. Nothing else comes close. I think they'll continue to get better, but critically: even if LLMs stay exactly as powerful as they are today, it's enough to disrupt society. IMHO we're already at AGI.
> The difference is companies fought _against_ the internet
Some did, some didn't. As in any cultural shift, there were winners and losers. In this shift, too, there will be winners and losers. The panicked spending on data centers right now is a symptom of the desire to be on the right side of that.
> because companies do not want to pay fair value for labor.
Companies have never wanted to pay fair value for labor. That's a fundamental attribute of companies, arising as a consequence of the system of incentives provided in capitalism. In the past, there have been opportunities for labor to fight back: government regulation, unions. This time that won't help.
> If I'm wrong I can learn how to write prompts in a week.
Why would you think that anyone would want you to write prompts?
Rapid deindustrialization followed by the internet and social media almost broke our society.
Also, I don’t think people necessarily realize how close we were to the cliff in 2007.
I think another transformation now would rip society apart rather than take us to the great beyond.
Most of all, AI will exacerbate the lack of trust in people and institutions that was kicked into high gear by the internet. It will be easy and cheap to convince large numbers of people about almost anything.
The GFC was a big recession, but I never thought society was near collapse.
Just about the time it hit the mainstream, coincidentally, is when the enshittification began to go exponential. Be careful what you wish for.
Has it run away yet? Not sure, but is it currently in the process of increasing intelligence with little input from us? Yes.
Exponential graphs always have a slow curve in the beginning.
Will there still be ice cream after Tuesday? General societal collapse would be hard to bear without ice cream.
Firefox introducing their dev debugger many years ago "completely changed my life and the way I write code and run my business"
You get the idea. Yes, the day to day job of software engineering has changed. The world at large cares not one jot.
In what units?
In the meantime, I've had to continuously hear talk about AI, both in real life (like at the local pub) AND virtually (tv/radio/news/whatever), and how it's going to change the world in unimaginable ways, for the last... 2-3 years. Billions upon billions of dollars are being spent. The only tangible thing we have to show is that software development, and some other fairly niche jobs, have changed _a bit_.
So yeah, excuse my impatience for the bubble to burst, so I can stop having to hear about this shit every day and go about my job using the new tools we have been gifted, while still doing all the other jobs that sadly do not benefit in any similar way.
Are you making 3x the money compounding monthly ?
No?
Then what's the point?
Many people are slowly losing jobs and can't find new ones. You'll see the effects in a few years.
A slightly different angle on this - perhaps AGI doesn't matter (or perhaps not in the ways that we think).
LLMs have changed a lot in software in the last 1-2 years (indeed, the last 1-2 months); I don't think it's a wild extrapolation to see that'll come to many domains very soon.
After enlightenment^WAGI: chop wood, fetch water, prepare food
>Now what...? What's happening right now that should make me care that AGI is here (or not).
Do you have any insight into what those changes might concretely be? Or are you just trying to instil fear in people who lack critical thinking skills?
I dunno, mixed bag. Value is positive if you can sort the wheat from the chaff for the use cases I've run by it. I expect the main place it'll shine for the near and medium term is going over huge data sets or big projects and flagging things for review by humans.
All this being said, what I was throwing at it was really not what it was optimized for, and it still delivered some really good ideas.
It's weird that this sentence has two distinct meanings and the author never considers the second or points it out. Maybe Mary is holding a ball for her society friends.
This is true in a specific contextual sense (each token that an LLM produces is from a feed-forward pass). But it has been untrue for more than a year with reasoning models, which feed their produced tokens back as inputs and whose tuning effectively rewards them for doing this skillfully.
Heck, it was untrue before that as well, any time an LLM responded with more than one token.
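To make the mechanism concrete, here's a minimal sketch of that feedback loop in Python; `model.next_token` is a hypothetical placeholder for a single forward pass, not a real API. The point is only that every emitted token, reasoning or otherwise, becomes part of the input for the next pass.

```python
# Minimal sketch of autoregressive decoding with a token budget.
# `model.next_token` is a hypothetical stand-in for one feed-forward pass
# that returns the next token given everything generated so far.
def generate(model, prompt_tokens, max_new_tokens=1024):
    context = list(prompt_tokens)
    output = []
    for _ in range(max_new_tokens):
        tok = model.next_token(context)  # one forward pass over the whole context
        if tok == "<eos>":
            break
        context.append(tok)              # the produced token is fed back as input
        output.append(tok)
    return output
```

Reasoning models are just this loop with training that rewards spending some of those fed-back tokens on useful intermediate work before the final answer.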
> A [March] 2025 survey by the Association for the Advancement of Artificial Intelligence (AAAI), surveying 475 AI researchers, found that 76% believe scaling up current AI approaches to achieve AGI is "unlikely" or "very unlikely" to succeed.
I dunno. This survey publication was from nearly a year ago, so the survey itself is probably more than a year old. That puts us at Sonnet 3.7. The gap between that and present day is tremendous.
I am not skilled enough to say this tactfully, but: expert opinions can be the slowest to update on the news that their specific domain may, in hindsight, have been the wrong horse. It's the quote about it being difficult to believe something that your income requires to be false, but instead of income it can be your whole legacy or self-concept. Way worse.
> My take is that research taste is going to rely heavily on the short-duration cognitive primitives that the ARC highlights but the METR metric does not capture.
I don't have an opinion on this, but I'd like to hear more about this take.
> who feed their produced tokens back as inputs, and whose tuning effectively rewards it for doing this skillfully
Ah, this is a great point, and not something that I considered. I agree that the token feedback does change the complexity, and it seems that there's even a paper by the same authors about this very thing! https://arxiv.org/abs/2310.07923
I'll have to think on how that changes things. I think it does take the wind out of the architecture argument as it's currently stated, or at least makes it a lot more challenging. I'll consider myself a victim of media hype on this, as I was pretty sold on this line of argument after reading this article https://www.wired.com/story/ai-agents-math-doesnt-add-up/ and the paper https://arxiv.org/pdf/2507.07505 ... whose authors brush this off with:
>Can the additional think tokens provide the necessary complexity to correctly solve a problem of higher complexity? We don't believe so, for two fundamental reasons: one that the base operation in these reasoning LLMs still carries the complexity discussed above, and the computation needed to correctly carry out that very step can be one of a higher complexity (ref our examples above), and secondly, the token budget for reasoning steps is far smaller than what would be necessary to carry out many complex tasks.
In hindsight, this doesn't really address the challenge.
My immediate next thought is: even if solutions up to P can be represented within the model / CoT, do we actually feel like we are moving towards generalized solutions, or that the solution space is navigable through reinforcement learning? I'm genuinely not sure where I stand on this.
> I don't have an opinion on this, but I'd like to hear more about this take.
I'll think about it and write some more on this.
Not sure I follow. Are you saying that AI researchers would be out of a job if scaling up transformers leads to AGI? How? Or am I misunderstanding your point.
There's more than one way to do intelligence. Basic intelligence has evolved independently three times that we know of - mammals, corvids, and octopuses. All three show at least ape-level intelligence, but the species split before intelligence developed, and the brain architectures are quite different. Corvids get more done with less brain mass than mammals, and don't have a mammalian-type cortex. Octopuses have a distributed brain architecture, and have a more efficient eye design than mammals.
For a clear analogy, consider how tokenization causes LLMs to behave stupidly in certain cases, even though they're very capable in others.
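To make that analogy concrete, here's a quick sketch of how subword tokenization looks from the model's side (my own illustration; it assumes the `tiktoken` package is installed, and the exact splits vary by tokenizer):

```python
# Show the subword pieces an LLM actually receives instead of characters.
# Assumes `pip install tiktoken`; cl100k_base is one common encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for word in ["strawberry", "anthropomorphisation", "1234567890123"]:
    ids = enc.encode(word)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{word!r} -> {pieces}")
# Character-level questions ("how many r's?") or digit-level arithmetic
# have to be answered over these chunks, which is one source of odd failures.
```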
Maybe AGI's arrival is when one day someone is given an AI to supervise instead of a new employee.
Just a user who's followed the whole mess, not a researcher. I wonder if the scaffolding and bolt-ons like reasoning will end up as an asymptote short of 'true AGI'. I kept reading about the limits of transformers around GPT-4 and Opus 3 time, and then those seem basic compared to today.
I gave up trying to guess when the diminishing returns will truly hit, if ever, but I do think some threshold has been passed where the frontier models are doing "white collar work as an API" and basic reasoning better than the humans in many cases, and once capital familiarizes themselves with this idea more, it's going to get interesting.
A CS degree is going to give you much less experience than building projects and businesses yourself.
There doesn't seem to be a reason why AIs should act as these distinct entities that manage each other or form teams or whatever.
It seems to me way more likely that everything will just be done internally in one monolithic model. The AIs just don't have the constraints that humans have in terms of time management, priority management, social order, all the rest of it that makes teams of individuals the only workable system.
AI simply scales with the compute resources made available, so it seems like you'd just size those resources appropriately for a problem, maybe even on demand, and have a singular AI entity (if it's even meaningful to think of it as such; even that's kind of an anthropomorphisation) just do the thing. No real need for any organisational structure beyond that.
So I'd think maybe the opposite, seems like what agents really means is a way to use fundamentally narrow/limited AI inside our existing human organisations and workflows, directed by humans. Maybe AGI is when all that goes away because it's just obviously not necessary any more.
I'm honestly shocked by the latest results we're seeing with Gemini 3 Deep Think, Opus 4.6, and Codex 5.3 in math, coding, abstract reasoning, etc. Deep Think just scored 84.6% on ARC-AGI-2 (https://deepmind.google/models/gemini/)! And these benchmarks are supported by my own experimentation and testing with these models ~ specifically most recently with Opus 4.6 doing things I would have never thought possible in codebases I'm working in.
These models are demonstrating an incredible capacity for logical abstract reasoning, at a level far greater than that of 99.9% of the world's population.
And then combine that with the latest video output we're seeing from Seedance 2.0, etc., showing an incredible level of image/video understanding and generation capability.
I was previously deeply skeptical that the architecture we have would be sufficient to get us to AGI. But my belief in that has been strongly rattled lately. Honestly I think the greatest gap now is simply one of orchestration, data presentation, and work around in-context memory representations - that is, converting work done in the real world into formats/representations amenable for AI to run on (text conversion, etc.) and keeping newly trained/taught information in context to support continual learning.
This is the key I think that Altman and Amodei see, but get buried in hype accusations. The frontier models absolutely blow away the majority of people on simple general tasks and reasoning. Run the last 50 decisions I've seen locally through Opus 4.6 or ChatGPT 5.2 and I might conclude I'd rather work with an AI than the human intelligence.
It's a soft threshold where I think people saw it spit out some answers during the chat-to-LLM first hype wave and missed that the majority of white collar work (I mean it all, not just the top software industry architects and senior SWEs) seems to come out better when a human is pushed further out of the loop. Humans are useful for spreading out responsibility and accountability, for now, thankfully.
OpenClaw, et al, are one thing that got me nudged a little bit, but it was Sammy Jankis[1,2] that pushed me over the edge, with force. It's janky as all get out, but it'll learn to build its own memory system on top of an LLM which definitely forgets.
Whether or not AGI is imminent, and whether or not Sammy Jankis is or will be conscious... it's going to become so close that for most people, there will be no difference except to philosophers.
Is AGI 'right around the corner' or currently already achieved? I agree with the author, no, we have something like 10 years to go IMO. At the end of the post he points to the last 30 years of research, and I would accept that as an upper bound. In 10 to 30 years, 99% of people won't be able to distinguish between an 'AGI' and another person when not in meatspace.
It seems like a prediction like "Bob won't become a formula one driver in a minivan". It's true, but not very interesting.
If Bob turned up a couple of years later in Formula One, you'd probably be right in saying that what he is driving is not a minivan. The same is true for AGI: anyone who says it can't be done with current methods can point to any advancement along the way and say that's the difference.
A better way to frame it would be: is there any fundamental, quantifiable ability that is blocking AGI? I would not be surprised if the breakthrough technique has been created, but the research has not described the problem that it solves well enough for us to know that it is the breakthrough.
I realise that, for some, the notion of AGI is relatively new, but some of us have been considering the matter for some time. I suspect my first essay on the topic was around 1993. It's been quite weird watching people fall into all of the same philosophical potholes that were pointed out to us at university.
It's a tautology - obviously advancements come through newer, refined methods.
I believe they mean that AGI can't be achieved by scaling the current approach; IOW, this strategy is not scalable, not this method is not scalable.
I feel like it's such a bending of the idea that it's not really making a prediction of anything at all.
PS The first thing you learn about ML is to compare your models to random to make sure the model didn't degenerate during training.
From my understanding this is now outdated. The deep double descent research showed that although performance drops past a certain point as you increase model size, if you keep increasing it there is another threshold where it paradoxically starts improving again. From that point onwards, increasing the parameter count only further improves performance.
Looking into it further, it seems that typical LLMs are in the first descent regime anyway, so my original point is not very relevant to them. Also, it looks like the second descent region doesn't always reach a lower loss than the first; it appears to depend on other factors as well.
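For anyone who wants to see the effect rather than take it on faith, here's a toy numpy sketch of double descent using min-norm least squares on random nonlinear features (my own illustration, not from the original paper). Test error typically peaks near the interpolation threshold (features roughly equal to training samples) and then falls again, though the exact numbers depend on the seed and noise level.

```python
# Toy double-descent demo: ridgeless regression on random tanh features.
# Near n_features == n_train the min-norm interpolator fits noise hard and
# test error spikes; past that threshold it typically comes back down.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 2000, 20

def make_data(n):
    X = rng.normal(size=(n, d))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)
    return X, y

Xtr, ytr = make_data(n_train)
Xte, yte = make_data(n_test)

for n_feat in [10, 50, 90, 100, 110, 200, 1000]:
    W = rng.normal(size=(d, n_feat)) / np.sqrt(d)      # fixed random projection
    Ftr, Fte = np.tanh(Xtr @ W), np.tanh(Xte @ W)      # random nonlinear features
    beta, *_ = np.linalg.lstsq(Ftr, ytr, rcond=None)   # min-norm least squares fit
    mse = np.mean((Fte @ beta - yte) ** 2)
    print(f"features={n_feat:5d}  test MSE={mse:.3f}")
```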
Sounds like that was quite a while ago.
I'm not entirely sure where you get your confidence that we've passed the ideal model size, but at least that's a clear prediction, so you should be able to tell if and when you are proven wrong.
Just for the record, do you care to put an actual number on something we won't go past?
[edit] Vibe check on user comes out as
Contrarian 45%
Pedantic 35%
Skeptical 15%
Direct 5%
That's got to be some sort of record. For instance, yours comes out as
Analytical 45%, Cynical 30%, Pedantic 15%, Melancholic 10%
and mine is
Philosophical 35%, Hardware-Obsessed 25%, Analytically Pedantic 20%, Retro-Nostalgic 15%, Anti-Ad Skeptic 5%
You should consider gathering all of your analysis and pedantry into one easy to manage neurosis.
It's from https://hn-wrapped.kadoa.com
The intelligence we think we recognize is simply an electronic parrot finding the right words in its model to make itself useful.
The issue is that we're not modelling the problem, but a proxy for the problem. RL doesn't generalize very well as is; when you apply it to a loose proxy measure, you get the abysmal data efficiency we see with LLMs. We might be able to brute-force "AGI" but we'd certainly do better with something more direct that generalizes better.
because what we have at the moment is specifically intelligent but generally stupid.
What is the benchmark now that the Turing test has been blown out of the water?
The fundamental issue was the assumption that general intelligence is an objective property that can be determined experimentally. It's better to consider intelligence an abstraction that may help us to understand the behavior of a system.
A system where a fixed LLM provides answers to prompts is little more than a Chinese room. If we give the system agency to interact with external systems on its own initiative, we get qualitatively different behavior. The same happens if we add memory that lets the system scale beyond the fixed context window. Now we definitely have some aspects of general intelligence, but something still seems to be missing.
Current AIs are essentially symbolic reasoning systems that rely on a fixed model to provide intuition. But the system never learns. It can't update its intuition based on its experiences.
Maybe the ability to learn in a useful way is the final obstacle on the way towards AGI. Or maybe once again, once we start thinking we are close to solving intelligence, we realize that there is more to intelligence than what we had thought so far.
For example, looking at the statistical distribution of the chat over long time horizons, and looking at input/output correlations in a similar manner would out even the best current models in a "Pro Turing Test." Ironically, the biggest tell in such a scenario would be excess capabilities AI displays that a human would not be able to match.
Humans will never accept we created AI, they'll go so far as to say we were not intelligent in the first place. That is the true power of the AI effect.
More so, our recent advances in AI have massively accelerated robotics evolution. They are becoming smarter, faster, and more capable at an ever increasing rate.
We didn't evolve our brains to do math, write code, write letters in the right registers to government institutions, or get an intuition on how to fold proteins. For us, these are hard tasks.
That's why you get AI competing at IMO level but unable to clean toilets or drive cars in all of the settings that humans do.
That, sadly, is the incentive driving the current wave of AI innovation. Your job will be automated long before your household chores are.
That seems like a massive oversimplification of the things our brains evolved to do.
Humans discovered or invented all of those.
Now think about what we just created.
Evolution transcends hard lines in the temporal sand that "separate species".
It also took billions of years of evolution to get to humans. So humans, on the grander scale of life, are also just a very recent development.
I think the biggest issue we currently have is with proper memory. But even that is because it's not feasible to post-train an individual model on its experiences at scale. It's not a fundamental architectural limitation.
In a handful of prompts I got the paid version of ChatGPT to say it's possible for dogs to lay eggs under the right circumstances.
I give it 10 years, maybe, for that to exist.
But I'd like to think that, even though you could find exceptions, the average human is never confused about whether dogs can lay eggs or not.
Like, it's in the name.
I asked Opus 4.6 to multiply 50,651 x 895,707. It immediately responded: 45,368,055,057. This was wrong, but very close (good to about 10^-5 relative error). I then asked it how it arrived at that answer, and whether it had used any tool calls. It said:
```
No, I didn't use any specialized tools — I just computed it directly. Though I should be honest: large multiplications done "in my head" like this are prone to error. Let me double-check by breaking it down:

50,651 × 895,707 = 50,651 × (900,000 - 4,293) = 50,651 × 900,000 - 50,651 × 4,293

First term: 50,651 × 900,000 = 45,585,900,000
Second term: 50,651 × 4,293
  50,651 × 4,000 = 202,604,000
  50,651 × 293 = 14,840,743
  Total: 202,604,000 + 14,840,743 = 217,444,743

Result: 45,585,900,000 - 217,444,743 = 45,368,455,257

So my first answer was actually slightly off! The corrected answer is 45,368,455,257. I don't have a calculator tool available — I'm doing this arithmetically by decomposition. For mission-critical calculations, I'd recommend verifying with a calculator since I can make arithmetic errors on large numbers like this.
```
So I would rate OP's claim (which is supported by a paper from 2024) as highly implausible. Opus 4.6 appears to be able to do multi-digit arithmetic formally, as well as give remarkably accurate estimates based on something like "number sense".
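For what it's worth, the figures in that transcript check out; a quick Python sanity check:

```python
a, b = 50_651, 895_707
print(a * b)                                  # 45368455257, matching the corrected answer
print(abs(45_368_055_057 - a * b) / (a * b))  # ~8.8e-6 relative error for the first guess
```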
“yes it will”, “no it won’t” - nobody really knows, it's just a bunch of extremely opinionated people rehashing the same tired arguments across 800 comments per thread.
There’s no point in talking about it anymore, just wait to see how it all turns out.
It just struck me - it would be fun to re-read The Age of Spiritual Machines (Kurzweil, 1999). I was so into it 26-27 years ago. The amount of ridicule this man has suffered on HN is immense.
>Imagine you had a frozen [large language] model that is a 1:1 copy of the average person, let’s say, an average Redditor. Literally nobody would use that model because it can’t do anything. It can’t code, can’t do math, isn’t particularly creative at writing stories. It generalizes when it’s wrong and has biases that not even fine-tuning with facts can eliminate. And it hallucinates like crazy often stating opinions as facts, or thinking it is correct when it isn't.
>The only things it can do are basic tasks nobody needs a model for, because everyone can already do them. If you are lucky you get one that is pretty good in a singular narrow task. But that's the best it can get.
>and somehow this model won't shut up and tell everyone how smart and special it is also it claims consciousness. ridiculous.
Mind you, I used the EXACT same prompts. I don't know which model Perplexity was using since the free version has multiple it chooses from (including Claude 3.0).
I'll go so far as to say LLM agents are AGI-lite but saying we "just need the orchestration layer" is like saying ok we have a couple neurons, now we just need the rest of the human.
When you have a single model that can do all you require, you are looking at something that can run billions of copies of itself and cause an intelligence explosion or an apocalypse.
It feels like an arbitrary bar, perhaps to make sure we aren't putting AIs over humans, even though they are most certainly in the superhuman category on a rapidly growing number of tasks.
But yeah, I suspect LLMs may actually get close enough. "Just" add more reasoning loops and corresponding compute.
It is objectively grotesquely wasteful (a human brain operates on 12 to 25 watts and would vastly outperform something like that), but it would still be cataclysmic.
/layperson, in case that wasn't obvious
Yeah, but a human brain without the human attached to it is pretty useless. In the US, it averages out to around 2 kW per person for residential energy usage, or 9 kW if you include transportation and other primary energy usage too.
Maybe the Matrix (1999) with the human battery farms were on to something. :)
However, at that point I don't see the value of retaining the human form. It's for a story, obviously, but a non-human computational device could still be made out of carbon processing units rather than silicon or semiconductors generally.
Lolwut. I keep having to correct Claude at trivial code organization tasks. The code it writes is correct; it’s just ham-fisted and violates DRY in unholy ways.
And I’m not even a great coder…
You wouldn't expect a Jr. dev to be the best at keeping things dry either.
Well said