I have been a paid AI practitioner since 1982, so I appreciate the benefits of AI - it's just that I hate the almost religious tech belief that real AI will emerge from exponentially increasing costs for LLM training and inference in exchange for essentially linear gains.
I get that some lazy ass people have turned vibe coding and development into what I consider an activity sort-of like mindlessly scrolling social media.
Where are they?
Are we sure that's not a misunderstanding of the terminology? Artificial diamonds, such as cubic zirconia, are not diamonds, and nobody thinks they are. 'Artificial' means it's not the real thing. When will conscious, actual intelligence be called 'synthetic intelligence' instead of 'artificial'?
Incidentally, this comment was written by AI.
When computers have super-human level intelligence, we might be making similar distinctions. Intelligence IS intelligence, whether it's from a machine or an organism. LLMs might not get us there but something machine will eventually.
People who declare that AGI is coming.
I saw someone on the news claiming this recently, but he ran an AI consultancy firm so I suspect he was trying to drum up business.
And nobody working in the space either as ML/AI practitioners, or as philosophers, or as cognitive scientists, even thinks we know what consciousness is, or what is required to create it. So there would be no way to tell if an AI is conscious because we haven’t yet managed to reliably tell if humans, or dogs, or chimpanzees or whales are conscious.
The claim that is often made is that more work on the current generation of AI tech will lead to AGI at a human or better level. I agree with Yann LeCun that this is unlikely.
- "Thousands of CEOs just admitted AI had no impact on employment or productivity..." https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-s...
- “Over 80% of companies report no productivity gains from AI…” https://www.tomshardware.com/tech-industry/artificial-intell...
But fundamentally, large shifts like this are like steering a super tanker, the effects take time to percolate through economies as large and diversified as the US. This is the Solow paradox / productivity paradox https://en.wikipedia.org/wiki/Productivity_paradox
> The term can refer to the more general disconnect between powerful computer technologies and weak productivity growth

There will be a period like we are in now where a dramatic capability gain (like the recent coding gains) takes a while for people to adapt to; however, I think the change will be much faster this time. Even the speed of uptake in coding tools over the last 3 months has been faster than I predicted. I think we'll see similar shifts in other sectors, where things change over the course of a few months.
That isn’t actually true though, right now everyone has a hard dependency on a cloud service. That is currently sold to them at deep discount by companies that are losing billions.
When the market eventually corrects it’ll be interesting to see how much AI ends up costing. At the very least it will be comparable to the broadband internet connection you mentioned. Possibly a whole lot more.
Isn't that a huge red flag? If customers are being given this product at a discount and it still isn't showing a positive ROI for them, what makes people think it will improve once we're charged full price?
Financially this feels similar to Uber's business plan in the 2010s: undercut the market with unsound pricing propped up by venture capital (VC money was literally subsidising taxi fares; they admitted this and their intention to readjust, but no one seemed to care), then stop manipulating the market and allow fares to even out at (gasp) what it cost to get a cab before Uber.
The difference here is that the LLM market is human productivity; enormous subsidies are afforded to Anthropic, OpenAI etc. in the form of VC or compute credit, but eventually those debts will be called in, the free-to-use aspect will vanish because it's simply not profitable, and we'll be left with several premium products that only a few people will actually pay for, and even then that may not be enough to cover their costs. That's when the bubble will burst.
I'm pretty sure that in corpo-speak "inference" excludes the cost of datacenter construction, GPUs and other hardware, manual data cleaning, R&D, administration, etc - basically everything except the power bill for inference.
I have absolutely no problem with companies that run inference only - plenty of them offer open models as a service - they're useful and their accounting can be believed... but they don't have near-trillion-dollar valuations, and they don't misallocate capital on the vast scale that the frontier labs do.
The point of the OP is that closed models don't pay for themselves and, on the scale of the US economy, they provide minuscule economic advantages compared to the enormous investments they consume.
They're spending more than they're making. For the foreseeable future, saying "we could be profitable if we stopped training" is goofy, because they can't stop. If they do, no one will want to use their product because it will be overtaken by competitors within three months.
I get it that in 10 years all of this might peak and we're gonna be content using old models, but that'll be a very different landscape and Anthropic might not be a part of it anymore if they don't start making money before that.
I would personally be happy using gpt 5.3 codex for the foreseeable future, with just improvements in harnesses
IMO we're already at the point where even if these companies collapse and the models end up being sold at the cost of inference (no new training), we would be massively ahead
In the past 30 days I have burned $78.19 in API token costs with my $20/month Claude Pro subscription. In January I burnt over $300 in API token costs.
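A quick back-of-envelope check of what that implies (a sketch only: the figures are the usage numbers above, and API list price is just a proxy, since we don't know Anthropic's actual margin on metered pricing):

```python
# How deeply is a flat-rate subscription subsidized relative to
# metered API pricing? Ratio = (API-equivalent cost) / (plan price).
subscription = 20.00               # $/month, Claude Pro plan
api_equivalent = [78.19, 300.00]   # $ of API tokens consumed in two different months

for cost in api_equivalent:
    ratio = cost / subscription
    print(f"${cost:.2f} of API-priced usage on a ${subscription:.0f} plan "
          f"-> {ratio:.1f}x the price paid")
```

At API rates, those two months work out to roughly 3.9x and 15x the subscription price, which is the gap the "deep discount" argument upthread is pointing at.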
That’s… kinda the question.
And also that may be the case for Anthropic who have fewer free users, a large enterprise business, and less generous rate limits on their subscriptions. I don't know if OpenAI or Google have commented. I suspect OpenAI is in a worse position given their massive non-paying consumer base.
To my eyes, the problem is not the productivity gain arriving slowly, but the immediate draining of funding from virtually all other areas of innovation.
the same firms "predict sizable impacts" over the next three years
late 2025 was an inflection point for a lot of companies
"The Productivity Paradox" is what they called it when people were skeptical that computer would end up finding a place in the office. There are articles from the 90s complaining about how much people are spending on buying computers for no real impact on productivity https://dl.acm.org/doi/10.1145/163298.163309
Once confronted with reality we have a "productivity paradox"?
I needed an embedded document-based database. A friend of mine with 30 years of experience was vibe coding a database in Rust, and I asked him if he could make it support Swift and be embeddable in iOS; in a few minutes he delivered that using Claude. Then I started vibe-working on it with Codex, adding features I wanted, and it worked as expected.
There are going to be fundamental changes in how we program computers and, consequently, in the IT industry.
It's not that AI can't be useful, but that there's a learning curve, and early in the learning curve we should expect as many resources to be spent learning as resources are saved by using the thing. A macro level view of the economy as a whole sees this as "zero economic growth".
So if you want to think of it in economic terms, some software consulting firm that would otherwise have made six figures instead did not. The vast majority of the money we would have spent stayed in our pocket. Slight decreases like this in “velocity of money” no doubt add up to significant sums.
Today you have to be blind to not see the change that is coming.
The world has its own (massive) inertia, with the bureaucracy present in businesses accounting for a big part of it.
AI itself is moving fast, but not at infinite speed. We're starting to have good-enough tooling, but it's not yet available to everyone and it still hangs on too many hacks that will need to crystallize. People have a lot of mess to sort out in their projects before they can take full advantage of AI tooling - in general, everybody has to do bottom-up cleanup and documentation of all their projects, set up skills and whatnot, and that's assuming their corp is OK with it, isn't blocking it, and "using AI" doesn't just mean "you can copy-paste code to/from Copilot 365".
As people say - something changed around Dec/Jan. We're only now going to start seeing noticeable changes, and the changes themselves will start speeding up as well. But it all takes time.
Yes, Anthropic decided they wanted to IPO and got the hype machine in full swing.
Don’t get me wrong LLMs are here to stay but how we’re currently using them is likely going to change a lot. Stuff like this:
> in general everybody has to do bottom up cleanup and documentation of all their projects, setup skills and whatnot and that's assuming their corp is ok with it, not blocking it
Is not needed to get a lot out of AI, and is mostly snake oil. Integrating them with actionable feedback is needed, but that takes a lot of time and rethinking of some existing systems.
I don’t like the Internet analogy cause that’s like producing a new raw material, but AI is gonna be like Excel eventually (one of the most important pieces of software in the world).
If you replace OpenClaw with any number of other hot LLM products/projects, I’ve been hearing that same exact sentiment for numerous 6-to-12-month periods. I’d argue we have no idea how long it’s going to be, but it’s probably not very soon.
Why? It’s descriptive of the “past”. While you’re trying to predict the near/far “future” and project your assumptions. Two different things.
Things are actually slowing down. And society will still see AI adding little to next years report. The costs still outweigh the benefits.
the change that is coming.
Everything you argue reinforces that net output was still basically zero last year. I don't see them talking about 2026 data. It's not good enough to just say Oreo CEOs say we need more Oreos.
There's a real grey area where these tools are useful in some capacity, and in that confusion we're spending billions. Too many people are saying too many conflicting things, and chaos is never good for clear long-term growth.
Either that 20 years is completely inapplicable to AI, or we're in for a world of hurt. There's no in-between given the kinds of bets that have been made.
They don’t have time to wait for all the companies to pick up AI tooling at their own pace.
So they lie and try to manufacture demand. Well, demand is there, but they have to manufacture FOMO so that demand materializes now and not in 10 or 20 years.
To be fair, it isn't necessarily the same people doing both at once. Sometimes there are two groups under the same general banner, where one makes the big claims, and another responds to perceived criticism of their lesser claim.
I'm going to be honest, you can feel the AGI when you use newer agentic tools like OpenClaw or Claude. It's an entirely different world from GPT-4.0. This is serious intelligence.
Superintelligence in 3 years doesn't really sound that crazy given how quickly I can write code with Claude. I mean we're 90%-95% of the way there already.
You're right. I can feel how far away it is and how these tools will in no way be capable of getting us there.
You might as well be telling people to “HODL”
You can feel it coming.
Yeah? So you must have a clear idea of where "there" is, and of the route from here to there?
Forgive me my skepticism, but I don't believe you. I don't believe that you actually know.
Whether or not these companies can turn a profit - time will tell. But I am betting that our massively profitable companies (which are the biggest spenders, of course) perhaps know what they are doing, and just maybe they should get the benefit of the doubt until they are proven wrong. If I had to make a wager, and on one side I have Google, Microsoft, Amazon, Meta... and on the other side a bunch of AI-bubble people with a lot of time to predict a "crash", I'd put my money on the former...
Quite the opposite is happening, as evidenced by the last earnings reports…
>Many observers disagree that any meaningful "productivity paradox" exists and others, while acknowledging the disconnect between IT capacity and spending, view it less as a paradox than a series of unwarranted assumptions about the impact of technology on productivity. In the latter view, this disconnect is emblematic of our need to understand and do a better job of deploying the technology that becomes available to us rather than an arcane paradox that by its nature is difficult to unravel.
It takes time for technology to show measurable impact in enormous economies. No reason why AI will be any different.
Personally I think AI is unlikely to go the way of NFTs and it shows actual promise. What I'm much less convinced of is that it will prove valuable in a way that's even remotely within the same order of magnitude as the investments being pumped into it. The Internet didn't begin as a massive black hole sucking all the light out of the room for anything else before it really started showing commensurate ROI.
I think there are two layers of uncertainty here. One is, as you say, if the value is worth the investment. The other and possibly bigger issue is who is going to capture the value and how.
Assuming AI turns out to be wildly valuable, I'm not at all convinced that at the end of this money spending race that the companies pouring many billions of dollars into commercial LLMs are going to end up notably ahead of open models that are running the race on the cheap by drafting behind the "frontier" models.
For now the frontier models can stay ahead by burning heaps of money but if/when progress slows toward a limit whatever lead they have is going to quickly evaporate.
At some point I suspect some ugly legal battles as some attempt to construct some sort of moat that doesn't automatically drain after a few months of slowed progress. Google's recent complaining about people distilling gemini could be an early signal of this.
I have no idea how any of that would shake out legally, but I have a hard time sympathizing with commercial LLM providers (who slurped up most existing human knowledge without permission) if/when they start to get upset about people ripping them off.
Most idiots like Columbus died in obscurity.
The growth in tech in the subsequent years can be directly traced to the iPhone's killer UX + App Store release.
We might actually have been better off, with Apple's walled-garden abominations and user-device lockdowns not being dragged into the mainstream.
Adult life doesn’t have to be boring drudgery, you know. I mean, it mostly is, but the rare moments of childlike joy and excitement are some of the best parts.
As far as the putting a damper on anything, nope it doesn’t. And it never will.
The people excited about AI are excited because of the impacts they see on their own jobs and daily lives. We don’t care what Goldman Sachs has to say about productivity.
And yeah, blah blah they burn money blah blah. Check Anthropic CEO interviews. He openly describes the balance problem:
- the cost of training a new model
- the ratio of newly built infra devoted to training vs inference
- market adoption, which despite being extremely quick is not unlimited, since even the market is not unlimited
Essentially it's a tricky balance between "if you don't invest today you will lose tomorrow" and "if you invest too much you go bankrupt next year".
See his interview in Dwarkesh's podcast: https://www.youtube.com/watch?v=c0-0gGdDJyE&t=4983s
I'm genuinely not sure. We are all computer people in this forum, so it may have improved our lives. But for many people, information technology has lessened the time spent in a given week or year on activities they find meaningful.
"On top of that, there is currently no reliable way to accurately measure how AI use among businesses and consumers contributes to economic growth."
No doubt people are using it work ( https://www.gallup.com/workplace/701195/frequent-workplace-c... ) the question is how much productivity results and to whom does it accrue.
Partially this is AI capability (both today and in the past), partially this is people taking time to change their tools.
With all this recent Claw stuff, it's weird that some of us - people who, given our field of study or industry, should be championing the opposite - are now pushing a method of automation akin to robo vacuums randomly tracking dogshit across the carpet.
In my working environment, people get dressed down for repeatedly communicating incorrect information. If they do it repeatedly in an automated fashion, they will be publicly shamed if they are senior enough.
I have no idea what benefit a human-in-the-loop has for sending automatically generated emails, or for agent-generated SDKs or building blocks, when there is no guarantee, or even a probability of correctness, attached to the result. The effort of validating and editing a generated email can equal or exceed that of manually writing a regular email, let alone one of any complexity or significance.
And what do we do to try to guarantee a semblance of correctness? We add another layer of automated validation performed by, you guessed it, the same crew of wacky fuzzy operators that can inject correct-sounding gibberish into business workflows at any moment.
It's almost like trying to build a house of cards faster than the speed with which it is collapsing. There seems to be a morbid fascination among even the best of us with how far things can be taken until this way forward leads to some indisputable catastrophe.
Is it possible that this sort of problem will be fixed? Hypothetically, what would happen in a scenario where one of these apps can do in 1 hr the work that would take a developer a month, reliably? Or is your premise that will NEVER happen?
We need to get past the hype first and let the cash grabbers crash.
After that, with a clear mind we can finally think about engineering this technology in a sane and useful way.
https://www.washingtonpost.com/technology/2026/02/23/ai-econ...
I have no doubt that people will use this to axe grind about they think AI is dumb in general, but I feel like that misses the point that this is mostly about data center construction contributing to GDP.
Economists and businesses are calling BS and saying AI is cool, but basically adding zero measurable value with 95% of AI projects failing.
The truth is likely somewhere in the middle, but it seems unlikely this bubble can continue much longer.
But now we have something else happening. It's hard to find an application for something that makes a lot of mistakes. That's not the same issue. The issue then was that no one had written the software yet; everyone knew what software needed writing, and the future was obvious. Here, not so much. We can't see how to make it stop making mistakes.
We have to hope someone will come up with a solution to that. Otherwise their big bets on something non-productive won't pan out the same way that the computer did, and we're all going to suffer for it.
Opus 4.6 is SPECIAL. nothing like other models. This is a new breed of intelligence.
I give it 18-24 months until we see a full-scale societal transformation.
And most jobs that can be automated have already been automated using traditional software.
I'm not sure if LLMs will change that or not
Having a higher-paid, qualified employee supervising multiple AIs as the human only needs to spot for mistakes - maybe.
> If AI can't do 100% of a job then you can't remove the job.