AI could be the end of the digital wave, not the next big thing
154 points
3 hours ago
| 27 comments
| thenextwavefutures.wordpress.com
| HN
neals
2 hours ago
[-]
I had to code something on a plane today. It used to be that you couldn't get your packages or check Stack Overflow. But now, I'm useless. My mind has turned to pudding. I cannot remember basic boilerplate stuff. Crazy how fast that goes.
reply
dleslie
2 hours ago
[-]
All skills degrade with disuse. For example, here in Canada we have observed a literacy and numeracy skills curve that peaks with post-secondary education and declines with retirement.[0]

Use it or lose it, as it were.

0: https://www150.statcan.gc.ca/n1/daily-quotidien/241210/dq241...

reply
MattRix
1 hour ago
[-]
That is one factor, but it’s not the whole thing. The other key element is “cognitive offloading” where your brain stops doing stuff when it thinks it is redundant.

This is similar to the photo-taking impairment effect where people will remember an event more poorly if they took photos at the event. Their brain basically subconsciously decides it doesn’t need to remember the event because the camera will remember the event instead.

reply
randcraw
3 minutes ago
[-]
The more the role of the tool, the less the role of the craftsman.
reply
rafterydj
1 hour ago
[-]
For my money, while surely it must have been jarring, that experience would seem to say that on-device LLMs are more important programming tools than package repositories.

As another commenter said, the affordability of LLM subscriptions (or, as others are predicting, the lack thereof) is the primary concern, not the technology itself stealing away your skills.

I am far from the definitive voice in the does-AI-use-corrupt-your-thinking conversation, and I don't want to be. I don't want LLMs to replace my thinking as much as the next person, but I also don't want to shun anything useful that can be gained from these tools.

All that said, I do feel that perhaps “dumber” LLMs that work on-device first will let us get further, and will make for better, more reliable tools overall.

reply
fendy3002
2 hours ago
[-]
In my seventh year of professionally programming Node, not once have I remembered the Express or HTML boilerplate, nor the router definition or middleware. Yet I can code normally provided there's internet access. It's simply not worth remembering; logic and architecture are worth more IMO
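
For illustration, this is roughly the kind of boilerplate I mean - a minimal Express-style skeleton typed from memory, so treat the details as approximate rather than canonical:

    const express = require('express');
    const app = express();

    // middleware: parse JSON bodies on every request
    app.use(express.json());

    // router definition: group related endpoints under /api
    const router = express.Router();
    router.get('/health', (req, res) => res.json({ ok: true }));
    app.use('/api', router);

    app.listen(3000, () => console.log('listening on 3000'));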
reply
malfist
1 hour ago
[-]
Einstein famously refused to learn people's phone numbers, stating that he could look them up in the phonebook whenever he needed it.

I don't think there is that much value in memorizing rarely used, easily looked up information.

reply
gilleain
1 hour ago
[-]
Agreed, it interests me how much some people emphasise knowing facts - like dates in history or dictionary definitions of words.

Facts alone are like pebbles on a beach, far better (IMO) to have a few stones mortared with understanding to make a building of knowledge. A fanciful metaphor but you know ...

reply
animuchan
21 minutes ago
[-]
This is an entirely false dichotomy though, is it not? One can both know facts and understand the logic behind them; it's not like you're creating an RPG character and need to make a choice with limited character points.

(Can't say time is the limiting factor either -- we're both in HN comments, valuing our own time at zero.)

reply
dukky
2 hours ago
[-]
I thought this comment was going the opposite way - previously no internet/googling but now you can run a local model and figure things out without the need for internet at all
reply
wanderingstan
2 hours ago
[-]
Mine as well. 2 years ago my mind was blown that I could code in a language I didn’t know (Scala) while on a long train ride with no internet (Amtrak) using a local model on a laptop. Couldn’t believe it.
reply
jimbokun
1 hour ago
[-]
The staggeringly effective compression of LLMs is still underappreciated, I think.

2 years ago you had downloaded onto your laptop an effective and useful summary of all of the information on the Internet, that could be used to generate computer programs in an arbitrarily selected programming language.

reply
leoedin
12 minutes ago
[-]
I got excited about that, until I actually tried to download a model and run it locally and ask it questions. A current gen local LLM which is small enough to live on disk and fit in my laptop's RAM is very prone to hallucination of facts. Which makes it kind of useless.

Ask your local model a verifiable question - for example a list of tallest buildings in Europe. I did it with Gemma on my laptop, and after the top 3 they were all fake. I just tried that again with Gemma-4 on my iphone, and it did even worse - the 3 tallest buildings in Europe are apparently the Burj Khalifa, the Torre Glories and the Shanghai Tower.

I wouldn't call that effective compression of information.

reply
wanderingstan
1 hour ago
[-]
Yes! Continuing on thoughts of LLM compression, I'm now convinced and amazed that economics will dictate that all devices contain a copy of all information on the Internet.

I wrote a post about it: Your toaster will know Mesopotamian history because it’s more expensive not to.

https://wanderingstan.com/2026-03-01/your-toaster-will-know-...

reply
queenkjuul
34 minutes ago
[-]
Fairly certain the least expensive option will always be a dumb toaster that just plugs into the wall
reply
wanderingstan
14 minutes ago
[-]
I chose a toaster specifically because it's about the simplest electrical device out there, and thus pushes the thesis to the extreme. But smart toasters are pretty common: https://revcook.com/products/r180-connect-plus-smart-toaster...

And as another commenter pointed out, a smart toaster with ads or data collection can be subsidized and thus be more profitable. (Oh what a world we're headed for!)

In any case, I think the LLM-everywhere thesis holds even more strongly for moderate-complexity devices like power plugs, microwaves, and mobile phones.

reply
maartenh
21 minutes ago
[-]
But in that case, it won't be subsidized by the manufacturer!

I'm sure people would get a cheaper toaster in exchange for an ad being burned into their bread.

reply
jimbokun
27 minutes ago
[-]
Not if it requires the toaster company to maintain a different SKU without the LLM chip and sells very few units.
reply
bandrami
2 hours ago
[-]
This conversation keeps missing me because I don't think I've typed out boilerplate in like 20 years.

Were people actually physically typing every character of the software they were writing before a couple of years ago?

reply
sph
38 minutes ago
[-]
Interesting. I don’t think my brother has written boilerplate in 20 years either. He’s a chef.

I on the other hand am a software engineer, so writing code is part of the job title.

reply
bandrami
19 minutes ago
[-]
Right, but were you really not using snippets and templates before LLMs?

Company? Helm? Whatever vi uses that's like company and helm? Haven't IDEs written function calls for you for like decades now?

reply
danelski
42 minutes ago
[-]
A couple of years ago I was (as a human being, not my career span) 20. Save for the usual StackOverflow / blog snippets, that was my experience, and I suppose that of most of those just starting out. I think it's very recent to have fresh grads that barely type code themselves.
reply
dwedge
2 hours ago
[-]
Will you do anything differently knowing this? Does the risk of LLMs being unaffordable to you in the near future make you wary about losing the skills?
reply
joseda-hg
1 hour ago
[-]
Open models are currently within reach for most of the kind of writing I do. I still decide what it generates and why; I just don't do it manually.

I'm not super worried: either I still do the last leg of the work, or I go back an abstraction level with my prompts and work there.

reply
olmo23
1 hour ago
[-]
I don't think it would take very long to regain those skills either.
reply
onemoresoop
1 hour ago
[-]
Yes, they do come back faster than learning from scratch. However, what’s possibly worrying is that our brains atrophy some faculties if we decide to skip the learning part altogether.
reply
himata4113
2 hours ago
[-]
I haven't written complex code for so long I forgot how I used to type && on my keyboard. Wild times.
reply
aworks
1 hour ago
[-]
It was a long time ago but I attended a session by IBM at an OO conference. The speaker's claim was that the half-life of programming language knowledge was 6 months, i.e. if not reinforced, that's how fast it goes.

I learned the Q array language five years ago and then didn't touch it for six months. I was surprised how little I remembered when I tried to resume.

reply
falcor84
1 hour ago
[-]
Maybe it's my memory issues, but I personally could never remember basic boilerplate. 30 years ago I would spend half of my time in Borland's help menu coupled with grepping through man pages. These days I use LLMs, including ollama when on a plane. I don't feel worse off.
reply
jasonlotito
2 hours ago
[-]
Others have addressed other aspects of this, but I want to address this:

> I cannot remember basic boilerplate stuff.

I don't know exactly what you mean by boilerplate stuff, but honestly, that's stuff we should have automated away prior to AI. We should not be writing boilerplate.

I'd highly encourage you to take the time to automate this stuff away. Not even with AI, but with scripts you can run to automate boilerplate generation. (Assuming you can't move it to a library/framework).
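
As a rough illustration of the non-AI route - a toy Node scaffolding script, where the file name and generated contents are just placeholders, not a recommended layout:

    // scaffold.js - writes a minimal service skeleton into a new directory
    const fs = require('fs');
    const path = require('path');

    const name = process.argv[2] || 'my-service';
    fs.mkdirSync(name, { recursive: true });

    fs.writeFileSync(
      path.join(name, 'index.js'),
      [
        "const express = require('express');",
        "const app = express();",
        "app.use(express.json());",
        "app.listen(3000);",
        ""
      ].join('\n')
    );

    console.log('scaffolded ' + name + '/index.js');

Run it with something like "node scaffold.js my-service" and the boilerplate is typed exactly once.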

reply
bandrami
2 hours ago
[-]
So many use cases for LLMs I've read leave me asking "did none of you have a working text editor?"
reply
DrewADesign
2 hours ago
[-]
Jeez, I never remembered boilerplate stuff anyway. Losing grasp of your commonly used, slightly more involved code idioms in your key languages would probably be where I’d draw the ‘be concerned’ line. Like if I get into a car after years of only using public transit, I wouldn’t be too worried if I couldn’t immediately use a standard transmission smoothly. If I no longer could intuitively interact with urban traffic or merge onto a highway, I’d be a lot more concerned.
reply
jimbokun
1 hour ago
[-]
Lisp macros pretty much solved the boilerplate problem decades ago.
reply
bdangubic
2 hours ago
[-]
I read the "boilerplate" in that comment as "basic" meaning "I don't know how to center a div" or "I do not know how to remove duplicates from a collection"
reply
TheOtherHobbes
1 hour ago
[-]
Does anyone know how to centre a div?

Last time I looked there were at least seven ways to do it.

reply
queenkjuul
15 minutes ago
[-]
Flexbox, mate
reply
papa0101
1 hour ago
[-]
margin: auto, or flex align-items/justify-content are my go-tos
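
Something like this, as a minimal sketch (class names are just placeholders):

    /* flexbox version: centers the child both horizontally and vertically */
    .parent {
      display: flex;
      justify-content: center;
      align-items: center;
    }

    /* margin: auto version: horizontally centers a block with a set width */
    .child {
      width: 200px;
      margin: 0 auto;
    }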
reply
fendy3002
2 hours ago
[-]
Well, both of them are easily retrieved from a web search; it's not a problem if you forget one or two. I'll probably need a refresher if I want to implement bubble sort again.
reply
bdangubic
45 minutes ago
[-]
The topic refers to being on an airplane without internet....
reply
sph
42 minutes ago
[-]
Good news for us Luddites. Keep it up.
reply
embedding-shape
2 hours ago
[-]
Really? How long have you been a developer? I've been almost exclusively doing "agent coding" for the last year + some months, and been a professional developer for a decade or something. Tried just now to write some random JavaScript, C#, Java, Rust and Clojure "manually" and it seems my muscle memory works just as well as it did two years ago.

I'm wondering if this is something that hits new developers faster than more experienced ones?

reply
mathgeek
2 hours ago
[-]
Probably depends on the individual. Senior developer here and I've always offloaded boilerplate and other "easy to google" things to search engines and now AI. Just how my brain and memory work. Anything I haven't used recently isn't worth keeping (in my subconscious mind's opinion anyway).
reply
ConceptJunkie
2 hours ago
[-]
Yeah, having to look up the "basic boilerplate" stuff is not worse for me after starting to use AI than it was beforehand.
reply
juvoly
2 hours ago
[-]
Experience isn't the problem. I have 20+ years of C++ development, built commercial software in Java, Rust, Python, played with assembly, Erlang, Prolog, Basic.

Played with these coding agents for the last couple of weeks and instantly noticed the brainrot when I was staring at an empty vim screen trying to type a skeleton hello world in C.

Luckily the right idioms came back after a couple of hours, but the experience gave me a big scare.

reply
askonomm
2 hours ago
[-]
Same for me. Been fully agentic for half a year or so, still remember the myriad of programming languages and things just as well if there's no AI present at all. Hard to shake 15 years of experience that quick, unless maybe that experience never fully cemented?

Maybe the difference between actually knowing stuff vs surface level? I know a lot of devs just know how to glue stuff together, not really how to make anything, so I'd imagine those devs lose their skills much faster.

reply
farresito
2 hours ago
[-]
> I'm wondering if this is something that hits new developers faster than more experienced ones?

Almost certainly, at least according to Ebbinghaus' forgetting curve.

reply
eru
2 hours ago
[-]
I can tell you that I can still code Python and Haskell just fine (I did those in vim without bothering to set up any language assistance), but Rust I only ever did with AI and IDE and compiler assistance.
reply
hansmayer
2 hours ago
[-]
> random JavaScript, C#, Java, Rust and Clojure "manually"

Right, sounds very credible to me. What did you write, an addition function in each of those?

reply
embedding-shape
1 hour ago
[-]
Lol, thanks (I guess?), but it really isn't that hard. I don't think I know a single experienced developer who doesn't know at least 3-4 languages. I probably could add another couple of languages in there, but those are the ones I currently know best. Besides, once you've picked up a few languages, most of them look and work more alike than different. From my lisp-flavored lenses, C# and Java are basically the same language for most intents and purposes.

I wrote a little toy-calculator in each, ended up being ~250 LOC in each of them, not exactly the biggest test but large enough to see if my muscle memory still works which I was happy to discover it still did.

reply
intended
2 hours ago
[-]
It's a side effect of using AI.

People using AI for tasks (essay writing in the MIT study linked below) showed lower ownership, brain connectivity, and ability to quote their work accurately.

> https://arxiv.org/abs/2506.08872

There was an MSFT and Carnegie Mellon study that saw a link between AI use, confidence in one's skills, confidence in AI, and critical thinking. The takeaway for me is that people are getting into “AI take the wheel” scenarios when using GenAI and not thinking about the task. This affects novices more than experts.

If you managed to do critical thinking, and had relegated sufficient code to muscle memory, perhaps you aren’t as impacted.

reply
Zigurd
2 hours ago
[-]
It's probably too much inside baseball to merit a study, but I'm curious if the results would change for part-time coders. When I'm not coding, I'm writing patents, doing technical competitive analysis, team building, etc.

My theory is that if you're not full-time coding, it's harder to remember the boilerplate and obligatory code entailed by different SDKs for different modules. That's where the documentation reading time goes, and what slows down debugging. That's where agent-assisted coding helps me the most.

reply
nyrikki
1 hour ago
[-]
SDKs and binary format descriptors are where I see agents failing the most; they are typically acceptable for the happy path but fail at the edge cases.

As an example, I have been fighting with agents re-writing or removing guard clauses and structs when dealing with Mach-O fat archives this week; I finally had to break the parsing out into an external module and completely remove the ability for them to see anything inside that code.

I get the convenience for prototyping and throwaway code, but the problem is when you don’t have enough experience with the quirks to know something is wrong.

It will be code debt if one doesn’t understand the core domain. That is the problem with the confidence and surface level competence of these models that we need to develop methods for controlling.

Writing code is rarely the problem with programming in general, correctness and domain needs are the hard parts.

I hope we find a balance between gaining value from these tools and not just producing a pile of fragile abandonware.

reply
eru
2 hours ago
[-]
> [...] and ability to quote their work accurately.

I guess that's an advantage? People shouldn't have to burden their memory with boilerplate and CRUD code.

reply
intended
1 hour ago
[-]
The task was essay writing, and the three groups were no tools, search, and ChatGPT.

The people who used ChatGPT had the most difficulty quoting their own work. So not boilerplate or CRUD - but yes, the advantage is clear for those types of tasks.

There were definite time and cognitive effort savings. I think they measured ~60% time saved and a ~32% reduction in cognitive effort.

So it's pretty clear people are going to use this all over the place.

reply
order-matters
2 hours ago
[-]
I think your environment plays a big role. With AI you can kind of code first, understand second. Without AI, if you don't fully understand something then you haven't finished coding it, and the task is not complete. If the deadline is too aggressive you push back and ask for more time. With AI, that becomes harder to do. You move on to the next thing before you are able to take the time to understand what it has done.

I don't think it is entirely a case of voluntary outsourcing of critical thinking. I think it's a problem of 1) total time devoted to the task decreasing, and 2) it's like trying to teach yourself puzzle-solving skills when the puzzles are all solved for you quickly. You can stare at the answer and try to think about how you would have arrived at it, and maybe you convince yourself of it, but it should be relatively common sense that the learning value of a puzzle becomes obsolete if you are given the answer.

reply
I_am_tiberius
1 hour ago
[-]
Soon everyone will run local models for simple stuff like that.
reply
simianwords
39 minutes ago
[-]
I keep seeing this repeated but isn’t it a good thing you don’t remember boilerplate? This is not information that deserves to be memorised.

The fact that this is being called out is strange.

reply
nprateem
58 minutes ago
[-]
I haven't been able to code without reading a sample of code for years, even before AI. Maybe it's just what happens when you're a polyglot, but even for stupid things, like how to declare a class in whatever lang, I remember having to see an example first. But once I saw a sample of code I'd get back into it. Then there's stuff I never committed to memory, like the nonsensical dance of reading from a file in Go, or whatever.

So I don't think this is all AI tbh.

reply
acedTrex
1 hour ago
[-]
If this was me you couldn't waterboard this info out of me.
reply
marliechiller
1 hour ago
[-]
Why? Is this because of shame or fear of losing your job?
reply
jimbokun
1 hour ago
[-]
Because the info is no longer in their brain.
reply
acedTrex
1 hour ago
[-]
Because it's incredibly embarrassing to admit you can no longer do very basic programming tasks as a "professional" in that field.
reply
embedding-shape
56 minutes ago
[-]
I think it's a matter of what "very basic programming tasks" actually means, which keeps sliding across the years. Surely in the beginning, being able to write assembly was a "very basic programming task", but as Algol and Fortran took over, suddenly those instead became the "very basic programming tasks".

Repeat this for decades, and "very basic programming tasks" might be creating a cross-platform browser by using LLMs via voice dictation.

reply
desireco42
1 hour ago
[-]
Honestly, you shouldn't be working on a plane. This thing where people are plugged in all the time is just insane.

Yes, you lost some abilities. Install local model so you have someone to talk to while you are on the plane ;)

reply
pablogiuffrida
2 hours ago
[-]
Probably a junior/semi-senior developer?
reply
XCSme
2 hours ago
[-]
I guess writing code is now like creating punch cards for old computers. Or, more recently, like writing ASM instead of using a higher-level language like C. Now we simply write our "code" in an even higher-level language, natural language, and the LLM is the compiler.
reply
bilekas
2 hours ago
[-]
> Now we simply write our "code" in a higher language, natural language, and the LLM is the compiler.

No we don't and we never should actually, compilers need to be deterministic.

reply
Farox
41 minutes ago
[-]
Why?

Also, give the same programming task to 2 devs and you end up with 2 different solutions. Heck, have the same dev do the same thing twice and you will have 2 different ones.

Determinism seems like this big gotcha, but in itself, is it really?

reply
bilekas
36 minutes ago
[-]
> Heck, have the same dev do the same thing twice and you will have 2 different ones

"Do the same thing" I need to be pedantic here because if they do the same thing, the exact same solution will be produced.

The compiler needs to guarantee that across multiple systems. How would QA know they're testing the version that is staged to be pushed to prod if you can't guarantee it's the same ?

reply
SkyBelow
1 hour ago
[-]
It needs to be something stronger than just deterministic.

With the right settings, an LLM is deterministic. But even then, small variations in input can cause very unforeseen changes in output, sometimes drastic, sometimes minor. Knowing that I'm likely misusing the vocabulary, I would say that this counts as the output being chaotic, so we need compilers to be non-chaotic (and deterministic; I think you might be able to have something that is non-deterministic and non-chaotic). I'm not sure that a non-chaotic LLM could ever exist.

(Thinking on it a bit more, there are some esoteric languages that might be chaotic, so this might be more difficult to pin down than I thought.)

reply
TheRoque
2 hours ago
[-]
I cringe every time I read this "punch card" narrative. We are not at this stage at all. You are comparing deterministic stuff and LLMs which are not deterministic and may or may not give you what you want. In fact I personally barely use autonomous Agents in my brownfield codebase because they generate so much unmaintainable slop.
reply
bigfishrunning
2 hours ago
[-]
Except that compiler is a non-deterministic pull of a slot-machine handle. No thanks, I'll keep my programming skills; COBOL programmers command a huge salary in 2026, soon all competent programmers will.
reply
acedTrex
1 hour ago
[-]
This is not what a compiler is in any sense.
reply
Zealotux
3 hours ago
[-]
I'm currently looking for sort of niche clothes for an event and it's the first time I've had to give up on buying online because of the sheer amount of AI-generated pictures. Going to a physical store was just a much better experience; I can't recall the last time that happened. Almost all sellers on Etsy are using AI for their pictures.
reply
ori_b
2 hours ago
[-]
We're racing to build hell.
reply
roxolotl
1 hour ago
[-]
A hell that’s been widely documented in fiction as well. That’s the part that’s so wild to me about this. None of this was unforeseen. Across every medium the extreme commercialization and general collapse of the social contract due to AI has been described, and a lot of the authors have been largely prophetic.
reply
jimbokun
1 hour ago
[-]
In the US this is due to the overall failure of trust in our institutions.

No one trusts Congress or the US government to effectively regulate AI for the greater good of the population. Each party believes regulations proposed by the other party will be used to discriminate against and control their party.

reply
dw_arthur
1 hour ago
[-]
We have been since we bound consumption to the internet. All of this was inevitable after that.
reply
thatjoeoverthr
1 hour ago
[-]
It's threatening to "unwind" the entire digital sector back to 1990. Online shopping damaged, job interviews done in person, essays by hand, exams proctored. Cover letters obsolete. There could be a "cognitive waterline" effect where older people who can't tell will continue living in an AI-generated bubble. Cover letters already are generated on demand specifically because people still claim to require them, even though we know they're not real anymore.

Could be an advantage to knowing this because you can step around it.

_You_ know it's AI, so you go in person to a store. Likewise, next time you hire, you can simply refuse to accept "cover letters".

reply
zemo
2 hours ago
[-]
Full disclosure: I work at Whatnot, but that sort of thing is a large part of the appeal of Whatnot to me - that people are showing off the stuff live on stream and you can ask questions about it.
reply
addandsubtract
1 hour ago
[-]
This whole concept of selling things in video format seems so alien to me. I didn't believe when someone told me they shop on TikTok now. It already takes me ages to browse through a gallery of items, I couldn't imagine going through items video by video.
reply
zemo
56 minutes ago
[-]
that's more or less how I felt about it, but someone I know worked at Whatnot and liked working there so I tried out the app before applying and then applied because the product clicked for me. I wouldn't have joined Whatnot if I didn't like the product.

> I couldn't imagine going through items video by video.

That's fair, it's just not how people use it and it's not the concept. It's primarily a browse experience, not a search experience. You can search but that's not the core experience.

I buy vinyl records and retro games. There are sellers that I like. When I open the app I see which of my preferred sellers are live and I tune into their stream and hang out and watch them. If something I'm interested in pops up, I'll bid on it. Live shopping is not trying to be "ebay but video", it's a different experience.

reply
Maken
29 minutes ago
[-]
The digital Yellow Pages were replaced by streaming teleshopping.
reply
vel0city
10 minutes ago
[-]
Some people watch TV channels which do nothing but present things to buy with a phone number to order. Lots of live shows as well; it's not just non-stop pre-recorded infomercials. It doesn't surprise me in the slightest such an idea would move to short-form video content as well. People trying on makeup or showing off clothing with their affiliate links down below.

https://en.wikipedia.org/wiki/HSN

reply
SpicyLemonZest
1 hour ago
[-]
I’ve been car shopping recently, and I’ve found myself deliberately seeking out videos, because I’ve found that it’s very hard to get a sense of what the thing is really going to look like from static photos. Unstaged photos make everything look uglier, staged photos require adjusting for the unknown staging.
reply
addandsubtract
1 hour ago
[-]
Oh, I definitely look up products I intend to buy on YouTube. But I don't go there (or any other video platform) to discover them.
reply
foldU
1 hour ago
[-]
This sounds like a really unpleasant shopping experience to me.
reply
jerf
2 hours ago
[-]
AI is in spitting distance of being able to do that too.
reply
geerlingguy
2 hours ago
[-]
I sometimes wonder if the random people sitting there hawking a pile of Amazon goods that pops up after every Amazon purchase are already AI.
reply
csomar
1 hour ago
[-]
This was my experience as well trying to buy a charger. You can't trust anything. Some brands that have their own online store offer such a bad experience that it's easier and less stressful to go to a physical store and buy directly from there.
reply
coldpie
1 hour ago
[-]
I do woodworking for a hobby and wanted to find a nice "intro to routers" article. After skimming past the obvious SEO crap on google I clicked the first likely-seeming link and was greeted by an AI slop image of two misshapen routers being operated by three disembodied hands with seventeen fingers each. I immediately threw my laptop out the window, watched it shatter into five hundred pieces, walked across the street to the library, and checked out a goddamn book.

I was already getting disillusioned with the Internet as a learning resource during the SEO spam era, but the AI era has completely destroyed it.

reply
augustk
45 minutes ago
[-]
Then it turns out that the book was written by an LLM.
reply
coldpie
31 minutes ago
[-]
I checked! Copyright 2013. Phew.
reply
TheOtherHobbes
1 hour ago
[-]
For questions like this you can ask an AI directly instead of getting herded through the clickbait.

Education and targeted summary searches are among the best uses. I literally found the location of the criminal who embezzled thousands of euros from my condominium with an AI search. It took me around fifteen minutes. Other people had been looking for years. (True story...)

reply
coldpie
45 minutes ago
[-]
No way in hell I'm trusting AI for something that could lose me a finger.
reply
xantronix
47 minutes ago
[-]
The thing with LLMs is that it is very, very easy to adjust the weights across the entire model to sway responses one way or another. Previously, in the hypothetical case one wanted to rewrite history, it would be a much more involved endeavour of curation; fabrication of original sources would be difficult to do at scale. But now it's trivial for a provider to inject a preamble to the prompt to not only hide results that do not fit the narrative of those legislating in the model providers' favour, but to distort the results.

Obviously none of that is happening in the current moment, and I grant that cake recipes would be low stakes, but I would rather take the tradeoff of trawling through a little bit of slop to get that same information than acclimate myself to a workflow that could be abused by providers in more high-stakes situations down the line.

But that's just me, and I realise this is not a particularly popular take, but it should nonetheless be illustrative for why "just ask the LLM" might not be the best of ideas long term.

reply
alexwebb2
2 hours ago
[-]
I view this post as primarily pattern-matching and storytelling. But I think there’s a buried truth there, and that they were nibbling at the edges of it when they started talking about the overlapping stages.

There are some very interesting information network theories that present information growth as a continually evolving and expanding graph, something like a virus inherent to the universe’s structure, as a natural counterpoint to entropy. And in that view, atomic bonds and cells and towns and railroads and network connections and model weights are all the same sort of thing, the same phenomenon, manifesting in different substrates at different levels of the shared graph.

To me, that’s a much better and deeper explanation that connects the dots, and offers more predictive power about what’s next.

Highly recommend the book Why Information Grows to anyone whose interest is piqued by this.

reply
awongh
2 hours ago
[-]
I think it's clear to me that AI will be both things:

1) as in the article, it's a contraction of work - industrialization getting rid of hand-made work, or the contraction of all things horse-related when the internal combustion engine came around

but- it will also be

2) new technologies and ideas enabled by a completely new set of capabilities

The real question is if the economic boost from the latter outpaces the losses of the former. History says these transitions aren't easy on society.

But also, the AI pessimism is hard to understand in this context- do people really believe no novel things will be unlocked with this tech? That it's all about cost-cutting?

reply
sweezyjeezy
1 hour ago
[-]
Well this is HN so a lot of us are pretty terrified of your 1). We went from 'you have a good job for the next couple of decades' to 'your job is at extreme risk of disruption from AI' in the space of like 5 years. Personally I have a family, I'm a bit old to retrain, but I never worked at a high-comp FAANG or anything, so I can't just focus on painting unless my government helps me (note - not US/China). That's extremely anxiety-inducing, and a vague promise of novel new things does not come close to compensating.
reply
Jcampuzano2
1 hour ago
[-]
I'm 33 and I feel sort of lucky that I'll still potentially have time to retrain. I'm fully prepared for the fact that within the next 5 years or so (and potentially much less) I'll probably need to retrain into a trade or something to stay relevant in any sort of field.

Many people claim it's going to become a tool we use alongside our daily work, but it's clear to me that's not how anybody managing a company sees it, and even these AI labs that previously tried to emphasize how much it's going to augment existing workforces are pushing being able to do more with less.

Most companies are holding onto their workforce only begrudgingly while the tools advance and they still need humans for "something", not because they're doing us some sort of favor.

The way I see it unless you have specialized knowledge, you are at risk of replacement within the next few years.

reply
sweezyjeezy
1 hour ago
[-]
I also have contemplated just retraining now to try and get ahead of the curve, but I'm not confident that trades can absorb the shock of this - both in terms of supply (more unemployment) and demand (anything non-commercial will be hit by capital flight on the customer-side). I figure I will just try and make as much money on a higher wage as I can and hope for the best...
reply
bluefirebrand
7 minutes ago
[-]
> I'm 33 and I feel sort of lucky that I'll still potentially have time to retrain. I'm fully prepared to within the next 5 years or so (and potentially much less) I'll probably need to retrain into a trade or something to stay relevant in any sort of field.

The problem is that there are not many fields that are going to be immune to AI based cost cutting and there surely will not be enough work for all of us even if we all retrain.

If we all do, then it will create an absolutely massive downward pressure on wages due to massive oversupply in other lines of work too.

So there's really just no good way out

reply
isodev
1 hour ago
[-]
> AI pessimism is hard to understand

Well, it really isn’t. First, this entire post makes two assumptions: 1) that AI adds more value to the process than it removes and 2) that it’s sustainable.

It’s not pessimism to want to validate these first.

Are AI “gains” really transformative or simply random opportunities for automation which we can achieve by other means anyway?

Can the world continue to afford “AI as a service” long enough for the gains to result in improvements that make it sustainable? Are we dooming our kids to a hellishly warm planet with no clear plan how to fix it?

It’s not pessimism, just simple project management if you ask me.

reply
packetlost
1 hour ago
[-]
> Are AI “gains” really transformative

They're transformative in the sense that they will shrink the optimal team size, but I don't expect the jobs to actually go away unless these things both get substantially better at engineering (they're good at generating code but that is like 20% of engineering at best) and we have a means of giving them full business/human levels of context.

Really basic stuff gets a lot easier but the needle doesn't move much on the harder stuff. Without some sort of "memory" or continuous feedback system, these models don't learn from mistakes or successes which means humans have to be the cost function.

Maybe it's just because I'm burnt out or have a minor RSI at the moment, but it definitely saves me a bit of time as long as I don't generate a huge pile and actually read (almost) everything the models generate. The newer models are good at following instructions and pattern matching on needs if you can stub things out and/or write down specs to define what needs to happen. I'd say my hit rate is maybe 70%.

reply
mlcruz
46 minutes ago
[-]
> we have a means of giving them full business/human levels of context

Trust me, this is a work in progress. Right now most corporations do not have their data organized and structured well enough for this to be possible, but there is a lot of heat and money in this space.

IMO, what most of the people who are not directly working in this space get wrong is assuming SWEs are going to be hit the hardest: there are some efficiency gains to be won here, but a full replacement is not viable outside of AGI scenarios. I would actually bet on a demand increase (even if the job might change fundamentally). Custom, domain-specific software is cheaper than it has ever been and there is a gigantic untapped market here.

Low-complexity to medium-complexity white collar jobs are done for in the next decade though. This is what is happening right now in finance: if models stopped improving now, the technology at this point is already good enough to lower operational costs to the point where some part of the workforce is redundant.

reply
packetlost
42 minutes ago
[-]
> Right now most corporations do not have their data organized and structured well enough for this to be possible, but there is a lot of heat and money in this space.

I think you misunderstand what I'm saying. I'm not really referring to data systems at all, I'm referring to context on what problems are actually being solved by a business. LLMs very clearly do not model outcomes that don't have well-defined textual representations.

I'm not sure that I agree with white collar jobs being done for; not every process has as little consequence for getting it wrong as (most) software does.

reply
fridder
1 hour ago
[-]
Also I think over the past few years/decade the tech sector has lost any benefit of the doubt that everything that comes out of it is a "good thing".
reply
damnesian
2 hours ago
[-]
Hard to understand, when essential human nature is so predictable? Sure, we will do novel things with it. But society in the main will use it to exploit labor. Same as it ever was.
reply
simianwords
31 minutes ago
[-]
“Exploit labour” is just outdated Marxism. No self-respecting economist believes this kind of rhetoric anymore, but it only exists amongst west-coded leftists.

It’s a sort of cynical fatalism to think everything is exploitation — directly coming from Marx.

It’s not exploitation to mutually agree on a deal. Most of the population knows this, except Marxists!

reply
cmrdporcupine
1 minute ago
[-]
Ah, peak HN pseudo-libertarianism

a) just hand-wave away that there is a massive power and wealth differential involved in this "mutual agreement" b) dismiss all discussions which recognize that as "outdated Marxism"

Plenty of mainstream economists are capable of seeing the real world which you are pretending doesn't exist.

reply
anthonypasq
1 hour ago
[-]
are you under the impression life was better before capitalism?
reply
sweezyjeezy
1 hour ago
[-]
That's a false dichotomy. Capitalism was good for artisanal workers before the industrial revolution, and then it became pretty goddamn bad for them. We're worried we're staring down the barrel of that right now - just saying 'well, it was even worse before capitalism' does nothing for us.
reply
anthonypasq
1 hour ago
[-]
Yes it does: it says that trying to prevent technology in order to protect the interests of some special class of people at the expense of everyone else is dumb and shortsighted.

If people had actually listened to the people wailing "but what about the horse carriage business!!!" in the 20th century, it would have been a disaster.

reply
sweezyjeezy
1 hour ago
[-]
Sure, but AI pessimism is allowed to be personal. Am I supposed to be optimistic that I feel I'm about to get shafted? Should I be less concerned that I need to provide for my family, because in the long term this is going to be a great step forward for humanity?
reply
simianwords
33 minutes ago
[-]
You are addressing something totally different from the original claim - which tried to say that capitalism is inherently exploitative of labour, which is just outdated Marxism.
reply
jimbokun
1 hour ago
[-]
> do people really believe no novel things will be unlocked with this tech? That it's all about cost-cutting?

The cost cutting is the only revenue-producing model for the AI companies so far. It's being pitched as a way for corporations to fire a lot of employees and save money.

Revenue for the consumer facing products is not very impressive. Consumers are mostly satisfied with the free versions and very resistant to adding yet another channel to shove advertising at them.

reply
hnthrow0287345
2 hours ago
[-]
>That it's all about cost-cutting?

Cost cutting has less uncertainty than making something new, so they do that first. If something else comes along, then great.

This is also why the people should make the transition as difficult as possible for companies doing layoffs when the companies are paying proportionally very little in taxes compared to the people they are laying off.

reply
jillesvangurp
1 hour ago
[-]
Change is a constant in history. Stuff happens, and then we adjust. Big changes may result in short term confusion, anger, etc. All the classic signs of the five stages of grief basically.

If you step back a little, a lot of people simply don't see the forest for the trees and they start imagining bad outcomes and then panic over those. Understandable but not that productive.

If you look at past changes where that was the case you can see some patterns. People project both utopian and dystopian views and there's a certain amount of hysteria and hype around both views. But neither of those usually play out as people hope/predict. The inability to look beyond the status quo and redefine the future in terms of it is very common. It's the whole cars vs. faster horses thing. I call this an imagination deficit. It usually sorts itself out over time as people find out different ways to adjust and the rest of society just adjusts itself around that. Usually this also involves stuff few people predicted. But until that happens, there's uncertainty, chaos, and also opportunity.

With AI, there's going to be a need for some adjustment. Whether people like it or not, a lot of what we do will likely end up being quite easy to automate. And that raises the question what we'll do instead.

Of course, the flip side of automating stuff is that it lowers the value of that stuff. That actually moderates the rollout of this stuff and has diminishing returns. We'll automate all the easy and expensive stuff first. And that will keep us busy for a while. Ultimately we'll pay less for this stuff and do more of it. But that just means we start looking for more valuable stuff to do and buy. We'll effectively move the goal posts and raise the ambition. That's where the economical growth will come from.

This adjustment process is obviously going to be painful for some people. But the good news is that it won't happen overnight. We'll have time to learn new things and figure out what we can do that is actually valuable to others. Most things don't happen at the speed the most optimistic person wants things to happen. Just looking at inference cost and energy, there are some real constraints on what we can do at scale short term. And energy cost just went up by quite a lot. Lots of new challenges where AI isn't the easy answer just yet.

reply
bluefirebrand
10 minutes ago
[-]
> But also, the AI pessimism is hard to understand in this context- do people really believe no novel things will be unlocked with this tech? That it's all about cost-cutting?

I frankly do not care how much novel stuff is "unlocked" with AI tech if it means I become unemployable due to it replacing all of my skills

reply
dodu_
2 hours ago
[-]
> do people really believe no novel things will be unlocked with this tech?

Yes. It's a mostly shitty but very fast and relatively inexpensive replacement for things that already exist.

Give your best example of something that is novel, ie isn't just replacing existing processes at scale.

It's been 3 and a half years now since the initial hype wave. Maybe I genuinely missed the novel trillion dollar use case that isn't just labor disruption.

reply
jenniferhooley
2 hours ago
[-]
I think that most people are pretty short-sighted about the utility cases right now (which is understandable given the negative feelings about a lot of what's currently going on).

There are a lot of really useful things that were impossible before. But none of these use cases are "easy," and they all take years of engineering to implement. So, all we see right now are trashy, vibe-code style "startups" rather than the actual useful stuff that will come over the years from experienced architects and engineers who can properly utilize this technology to build real products.

I'm someone who feels very frustrated with most of the chatter around AI - especially the CEOs desperate to devalue human labor and replace it - but I am personally building something utilizing AI that would have been impossible without it. But yeah, it's no walk in the park, and I've been working on it for three years and will likely be working on it for another year before it's remotely ready for the public.

When I started, the inference was too slow, the costs were too high, and the thinking-power was too poor to actually pull it off. I just hypothesized that it would all be ready by the time I launch the product. Which it finally is, as of a few months ago.

reply
pixl97
1 hour ago
[-]
With this said, a lot of people are likely worried about being eaten by whales when it comes to doing things with AI.

It's kind of like dealing with Amazon, or any other company that has both compute and the ability to sell the kind of product you make.

Said AI providers can sell you the compute to make the product, or they can make the product themselves with discounted compute and eat all the profits you'd make.

reply
jenniferhooley
1 hour ago
[-]
This is always a worry, but typically, being first to market is the most important part. As long as you can scale quickly and maintain your edge, this doesn't seem like such a big deal.

However, my product is so far removed from anything these companies would make; on top of that, I'm using open-source models (e.g., gpt-oss 120b is really, really good). I don't use any of the main providers like AWS, etc., and the underlying AI systems are only about 5% of the product. I need it for the idea to work, but it is a tiny part of the full offering. I can't really imagine it would make any sense for Amazon, etc., to compete on something like this.

But yes, in the end, huge conglomerates with infinite money can destroy smaller entrepreneurs - but that's not really any different than it's been for decades pre-AI.

reply
awongh
1 hour ago
[-]
The most obvious thing is bio-tech, protein folding, drug discovery, etc. As in, things that have an actual positive effect on humanity (not just dollars).

I don't really get people who are dismissive about this aspect of AI- my original question wasn't about cost-efficiency of developing these things, but just that the technology itself is creating things that wouldn't have been possible before. It seems hard to refute.

Whether or not it's worth the cost is a different debate entirely- about how tech trees are developed and what the second order effects of technology are. There are so many examples- the computer itself, nuclear power, etc. I think AI is probably on the same order as these.

reply
dodu_
1 hour ago
[-]
Correct me if I'm off base but these things (protein folding and drug discovery) both existed before AI, no?

The implication of your comment seemed to be that this was going to be so much more than replacing people. But I fail to see how any of the items you listed are anything other than that.

These things have always been possible. Just slow and limited by labor. Which is the primary and novel "unlock" of AI.

You can argue it's a good thing, and in many areas I'd probably agree. I'm directly responding to your skepticism and implied absurdity that replacement is the main unlock here. It absolutely is.

reply
awongh
1 hour ago
[-]
> Correct me if I'm off base but these things (protein folding and drug discovery) both existed before AI, no?

Yes, you are off-base.

Solutions to the protein folding problem existed before, but not in the way you are implying.

reply
dodu_
59 minutes ago
[-]
Fair enough. I appreciate the correction.

I do still believe the main value proposition is large scale replacement and am unconvinced that most people driving AI adoption have these other more noble pursuits in mind with respect to AI.

But I will absolutely stand corrected here and if our dystopian future includes some genuinely useful medicinal advancements then maybe that will make the medicine (heh) go down easier.

reply
girvo
2 hours ago
[-]
It’s pretty decent for natural language -> query language tasks

But also you don’t need SOTA frontier models for that!

reply
vjvjvjvjghv
2 hours ago
[-]
"Yes. It's a mostly shitty but very fast and relatively inexpensive replacement for things that already exist."

Wouldn't that apply to most technological advances? Cars, computers, cell phones.

reply
dodu_
1 hour ago
[-]
Yes, but I'm not the one who introduced the "novel" constraint to the argument.

e: Also I don't know that I'd strictly bucket these specific examples you gave as shittier versions, though I guess that's a matter of perspective.

reply
butlike
2 hours ago
[-]
So now the ancillary question from your example is: "Is hand-spun cotton better than industrialized polyester?"
reply
awongh
2 hours ago
[-]
If you're implying that hand-spun cotton is better, that's an easy question to answer- people used to spend a huge amount of their income on clothing, also spending a huge amount of time washing it. Industrialization made clothing so much cheaper that it's now completely disposable. There's plenty of reasons why that's not a bad thing.

One thing people forget about the era of "good quality" shoes is that you could only afford to buy one pair, ever - it's not necessarily that things were made better. (Or it could be both, but replacing a pair of shoes was a financial hardship, because hand-made things, even back then, were expensive.)

Even if you're against fast fashion I don't think anyone wants a pair of shoes to cost $10,000.

reply
butlike
52 minutes ago
[-]
It seems to me you're advocating for waste, as I'm not seeing the "plenty of reasons why completely disposable cheap clothing isn't a bad thing" argument.

Replacing shoes wasn't necessary because there were cobblers. For clothing, tailors. I'd much prefer to get a set of clothing and work with it over the course of its lifetime, over sending it to the landfill after one tear.

reply
wordspotting
34 minutes ago
[-]
Many companies follow a planned obsolescence framework to keep their industry alive. That is the major reason for waste and the drop in quality.
reply
TeMPOraL
2 hours ago
[-]
Define better. Fast fashion sucks, but hand-spun cotton won't give you Kevlar or modern wind-resistant clothing or fireproof materials for your furniture or... <insert half thousand different things adjacent to modern textile production>.

It's always win some, lose some with the economy, but technology itself opens previously impossible capabilities.

reply
butlike
46 minutes ago
[-]
Better is 'longer lasting and less disposable.'

Your comment got me thinking about if technology is actually better, but that's a whole new discussion. We wouldn't need the fireproof furniture if we all used the local sweat lodge for bathing or the mess hall yurt for cooking. We wouldn't need wind-resistant clothing if we didn't make personal rockets that go 200mph to travel long distances to arrive at the same amenities (just in a different city).

reply
jimbokun
1 hour ago
[-]
What is the AI equivalent of wind-resistant clothing or fireproof materials?

So far the only product AI is producing is layoffs.

reply
TheOtherHobbes
1 hour ago
[-]
It used to open them to most of the population - at least that was the ideal for a couple of decades - but now it seems to be opening them to oligarchs more than workers.

It's essentially a political energy source. It heats everything up.

Eventually it either explodes, goes through a phase change to a new (meta)stable state, or collapses back to a previous state.

reply
jimbokun
1 hour ago
[-]
What's the AI equivalent of industrialized polyester in your analogy?

From a consumer perspective, AI isn't really producing any new products with real market demand. Chatbots are fun, but there's no indication consumers are willing to pay for them.

reply
throwaway613746
2 hours ago
[-]
It's been a few years and I have yet to see a single novel thing come out of it. Even chatbots weren't novel when ChatGPT came out.
reply
jimbokun
1 hour ago
[-]
It's disingenuous to say ChatGPT is not novel relative to older chatbots. The capabilities of ChatGPT compared to what came before were astonishing and continued to improve at a rapid rate.
reply
barrkel
2 hours ago
[-]
The lack of robotics mention somewhat undermines this article.

I don't think it's intrinsically wrong, we are in a late stage of a transformation. Software is eating the world and AI is (so far) most profitably an automation of software.

There is plenty of money to be made along the way. I don't really buy the article's seeming confusion about where the money is going to come from. Anthropic is making billions and signing up prodigious amounts of recurring revenue every month.

reply
jimbokun
1 hour ago
[-]
> The lack of robotics mention somewhat undermines this article.

True but in common parlance "AI" has come to mean LLMs.

> Software is eating the world

The premise of the article is software ATE the world and there isn't much left to eat that hasn't been eaten.

reply
barrkel
4 minutes ago
[-]
Software hasn't eaten all it could, in my book, and AI makes a lot more stuff legible to software.
reply
madaxe_again
35 minutes ago
[-]
I chuckled at:

>> At the early stage of a surge, investment tends to be patchy and not fully understood—the sector exists but it is not completely legible yet.

He says this in the context that AI clearly doesn’t fit this pattern, as the investment has been enormous.

I feel like he and everyone else has a scale problem, due to the tendency to equate AI to LLMs - the investment is patchy and not fully understood - I really don’t think we’ve seen anything more than the pretremors at this point - as the scale of the change is just as incomprehensible to the world at large now as it was when the steam engine was just a slightly better way of getting water out of a mine than a donkey.

reply
nerptastic
2 hours ago
[-]
Anthropic today, who next week? If locally run models ever get to the point where they can reliably solve... 85% of what the frontier cloud models can do, I think many would be willing to accept slightly less problem solving ability and just run the thing locally.

All hypothetical, but if compute + AI research continues at pace, in 5 years we should see extremely good local models.

reply
sdsd
26 minutes ago
[-]
As a user of local models, it's well above 85% already. I use frontier models at work and local models for home use because my day to day tasks are well within what DeepSeek can handle.
reply
jimbokun
1 hour ago
[-]
Yes it's not clear to me that AI companies have a defensible moat against open models.
reply
petra
1 hour ago
[-]
The question is whether robotics will look like a some number of platforms with little development to adapt to different scenarios, or a million types of machines that are highly fit for purpose.

Because the first situation won't create that many jobs. The second one might.

reply
barrkel
1 hour ago
[-]
I expect hybrids. Something general has to be adaptable for what will be an expensive capital purchase.

The human form factor - torso up anyway - is probably easier to bootstrap on a general basis; keyed off of human data. But I don't like the failure modes of bipedal robots - imagine a robot flailing around trying to regain balance, in any setting with humans around.

I'm no expert of course, just pontificating.

reply
surgical_fire
2 hours ago
[-]
As far as I know Anthropic still bleeds money, as OpenAI also does.

They will keep bleeding money by the way.

reply
barrkel
1 hour ago
[-]
I don't believe the marginal customer of Claude Code is loss-making.
reply
jimmypk
1 hour ago
[-]
The Perez model contains a falsification test the article doesn't apply to its own thesis. In Perez's framework, the installation phase is characterized by financialization, frothy infrastructure bets, and capital rushing toward uncertain new technology—exactly the behavior we see with US AI investment (hyperscalers committing $500B+ to uncertain infrastructure, speculative valuations). Deployment phases look like industrial efficiency gains and normal returns. By those criteria, US AI investment is behaving like an installation-phase bet, not late-deployment optimization.

The article's US-China comparison quietly reveals the prediction that would follow from the thesis: if the Perez 'late deployment' framing is right, then the Chinese model—lean, industrial, healthcare and education application, grounded in near-term ROI—is betting correctly on where we are in the curve and should outperform over the next decade. That's a concrete, testable claim that would validate or falsify the argument independently of whether AI constitutes a 'new surge.'

reply
jmstfv
3 hours ago
[-]
tangentially related, but as someone who built multiple internet businesses -- mostly unsuccessful, some mildly successful -- I barely have any new ideas to work on.

I don't know if this is the effect of relying on AI too much in my day-to-day work or leading a more monotonous life as of late, but I'm sure I'm not the only one. Lots of ideas that I could have built before LLMs took over now seem trivial to build with Claude & friends.

reply
Cilvic
2 hours ago
[-]
I can relate to this. In the past I felt like I could write down pages of projects to try if only I had time. Now my mind immediately goes towards "do I want to manage this long-term after the initial spark?"
reply
DougN7
2 hours ago
[-]
That made me wonder, honestly: if AI can build it, could AI manage it too?
reply
AndroTux
1 hour ago
[-]
Wait, I just deleted prod. You're absolutely right, that shouldn't have happened. My mistake.
reply
dasil003
2 hours ago
[-]
It seems really premature to talk about AI being the end of anything. What's at an end stage is the adoption of smartphones and the monetization of human attention. That's been the fuel that powered the last quarter century of tech gains, and while still huge in absolute terms it has been running out of steam as a growth engine and facing cultural pushback (e.g. social media lawsuits) for a while.

AI so far has really only shown massive utility for programming. It has broad potential across almost all knowledge work, but it's unclear how much of that can be fulfilled in practice. There are huge technical, UX and social hurdles. Integrating middlebrow chatbots everywhere is not the end game.

reply
jemmyw
2 hours ago
[-]
The question it raises is: if this is the fake surge, the one we see, what is the real one we don't see? Renewable energy comes to mind. Robotics too, but maybe that's too tied up with AI.
reply
schnitzelstoat
2 hours ago
[-]
I think robotics will be the next surge for sure. But I don't think it's really tied up with the LLM stuff either and it could be decades away.

In the end, it'll probably require something like model-based RL like Yann LeCun talks about and that's totally different to the LLMs.

reply
pixl97
1 hour ago
[-]
Eh, robotics is going through explosive growth right now with the same computing power that's being used on LLMs. You can take human motion capture of a task, dump it in a robotics simulator for a few hours, and get a model that can operate autonomously better than something that would have taken half a year to teach just a few years back.
reply
whizzter
2 hours ago
[-]
Space (SpaceX showed that reusable rockets are feasible), programmable health (the COVID vaccine, and remember that mRNA therapy curing that dog?), etc.

Sadly, I think there's a risk we might also be heading towards a dark age with few advances. Fundamental research has for a while now been squeezed out for being unprofitable, or hobbled by an industrialized publishing/review system, and we've been coasting along on profitable applications rather than (expensive) breakthroughs in the basics.

reply
SideburnsOfDoom
2 hours ago
[-]
I firmly believe that renewable energy, the solar+battery+EV stack, not LLMs, really is the biggest technology transformation of our times. Renewable energy really is surging, it's just on a longer timeline, and unlike LLMs it doesn't benefit venture capitalists to hype it. In fact many existing sectors deliberately downplay it. But we are in the middle of it.

Robotics? lights-out operations in automated factories are already a thing, so I don't know if they're the "next thing".

mRNA vaccines? Sure, they're a huge medical advance. With great potential, in that area. But it's just an area.

Space? Maybe, if we get past LEO, find something useful to do there, and don't succumb to Kessler syndrome.

reply
pixl97
1 hour ago
[-]
>Robotics? lights-out operations in automated factories are already a thing, so I don't know if they're the "next thing".

Eh, I do think this is kind of underestimating the changes in robotics that are occurring. LLMs incorporated with other ML kernels extend the capabilities a long way. That and the amount of computing power now usable to train robotics is far far larger.

reply
jimbokun
1 hour ago
[-]
ELI5: how do LLMs facilitate better robotics?

I don't see the immediate application of language generation to navigation and manipulation in the physical world.

reply
SideburnsOfDoom
1 hour ago
[-]
Are large language models of use for moving robot limbs around?
reply
darkwater
1 hour ago
[-]
Probably a bit unrelated, but I'm wondering if there is any economic theory that has actually predicted something for real, rather than extrapolating trends from past data in hindsight, even when crossing different kinds of events.

Honest question, I'm not trying to mock economists or anything like that.

reply
tomhillson
2 hours ago
[-]
If this lasts to the point where AI has actual automation ability, it's not a tool for humans anymore. It could have an identity and start to evolve, literally. I don't understand why some people consider AI just a tech revolution. Maybe it's just that I'm into SF, but AI can be something other than just a tool.
reply
cyclopeanutopia
2 hours ago
[-]
A surprising number of people here and in tech generally lack any imagination.
reply
tomhillson
1 hour ago
[-]
Because you're focused on individual matters. It's the entire system, the big picture, versus what a human can do alone.
reply
techteach00
2 hours ago
[-]
I sort of agree with the premise of the article. I ask myself: did more non-technical people pick up AI chatbots when they were invented than picked up personal computers in the late '70s/early '80s? I think probably, judging from my conversations with others.
reply
BirAdam
1 hour ago
[-]
The very first personal computers came out in 1972. In 1978, we got several. The PC came out in 1981. The computer boom didn't begin until 1992.

My wife is absolutely not technical, and she began using ChatGPT before me.

This is to say, I believe you to be correct here. The LLM adoption rate is many times the computer adoption rate. Non-technical people are immediately seeing the benefit of LLMs where they did not with computers in the 1970s.

reply
cowl
1 hour ago
[-]
Personal computers in the early '70s and '80s were a considerable investment for little to no gain, and especially with no force-fed FOMO.

It costs you nothing to install/adopt an AI chatbot, and it's being force-fed to everyone at a head-turning loss in order to justify the push.

reply
Forgeties79
2 hours ago
[-]
Part of this is because we aren't paying the actual cost of these chatbots. If ChatGPT wasn't essentially free for casual users then we'd definitely see a much smaller/slower adoption rate. I wonder if there's a single person using them, even paying for tokens, who isn't substantially subsidized. Probably not, but I'm speculating.

If 3D printers could've given their usage away for years, directly in our homes, then I bet we would've seen wider adoption there too.

reply
elAhmo
1 hour ago
[-]
Well, we are not paying for Gmail, YouTube, or TikTok either, or for all sorts of other services that are free as well.
reply
pixl97
1 hour ago
[-]
Well, we are paying for it, but not directly with cash.
reply
Forgeties79
35 minutes ago
[-]
You’re right but I’m not sure what you’re driving at
reply
zozbot234
2 hours ago
[-]
Chat bots can run on your local hardware these days, even mobile phone hardware. That's effectively free.
reply
Forgeties79
34 minutes ago
[-]
That's literally not free, and good luck running even the lightest LLMs on consumer hardware with 8 GB of RAM. 16 GB is barely sufficient, and you probably need a new MacBook to really stretch that.

People aren't going to wait minutes per response for clearly inferior results compared to what they get for free on ChatGPT in the browser in seconds, whether it's logical or not. Not to mention they can't ask more than a few questions, tops, before the whole thing crumbles. Expectations and reality are too far apart here.

Let's also address another real issue: what are they going to use? LM Studio? Is that really a user experience most will tolerate?
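To put rough numbers on the RAM point, a back-of-envelope sketch counting weights only (the KV cache, context and whatever else the machine is running all add on top of this):

    # Weight-only memory estimate for a local LLM; real usage is higher once you
    # add the KV cache, activations, and the rest of the OS.
    def weight_gb(params_billion: float, bits_per_weight: float) -> float:
        bytes_total = params_billion * 1e9 * (bits_per_weight / 8)
        return bytes_total / 1e9  # decimal GB, close enough for a sanity check

    for params, bits, label in [
        (8, 16, "8B model, fp16"),
        (8, 4, "8B model, 4-bit quant"),
        (3, 4, "3B model, 4-bit quant"),
    ]:
        print(f"{label}: ~{weight_gb(params, bits):.1f} GB of weights")

That comes out to roughly 16, 4 and 1.5 GB respectively, so even a quantised 8B model squeezes into an 8 GB machine with very little headroom left for anything else.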

reply
a-dub
2 hours ago
[-]
This Perez model thing completely misses the communications revolutions of the telegraph, radio and television, not to mention the demonopolization of Bell.

> Then came AI, revealing new dynamics. ChatGPT’s breakthrough didn’t come from a garage startup but from OpenAI,

I thought the transformer and large language models came from Google Research.

> There’s also social pushback—in the UK the campaigns against big ringroad schemes started in the late 1960s and early 1970s. And perhaps we’re seeing some of that about AI. The U.S. map of local pushback against data centres from Data Center Watch covers the whole of the country, in red states and blue. People seem to hate Google’s inserting of AI tools into its search results, and hate even more that it is all but impossible to turn it off.

The US had the highway revolts. In most cities where the revolts succeeded, the outcome is widely heralded today as a success.

The data center hate is interesting. I think many people are just learning what data centers are. That said, data centers have come to represent something different in recent years. Previously they were part of the infrastructure that made industry hum; now public messaging from tech leaders and academics is along the lines of "this is how your livelihood is going to be replaced," while the institutions that are supposed to provide any sort of backstop are being dismantled or slashed to pieces by crazypants Trumpist politics. I think focusing the energy on something tangible like mundane buildings is interesting, but the hate makes a lot of sense.

Addressing the core thesis, I'd argue that AI is not the next step in the '70s digital technological wave (especially considering the future of AI compute is probably hybrid digital-analog systems), but rather something fundamentally new that also changes how technology interacts with society and how economics itself will function.

Previous systems helped; these systems can do. That's a fundamental change, and one that may not be compatible with our existing economic systems of social sorting and mobility. The big question in my mind is: if it succeeds, will we desperately try to hold onto the old system (which would essentially be a disaster that freezes everyone in place and creates a permanent underclass), or will we evolve to a new, yet-to-be-defined system? And if so, what will the transition look like?

reply
bearjaws
2 hours ago
[-]
I could totally see it. A social club recently opened near me and it has 100+ people attending weekly, all younger, 20-30 year olds early in their careers.

Separately, I have a local camera repair shop, and my friend told me it's a two-month backlog to get your film-based camera worked on.

Ultimately, if the deal we get online is infinite tracking, infinite scrolling and infinite enshittification, real life starts to sound a whole lot better.

reply
Forgeties79
2 hours ago
[-]
Going to the local movie rental shop with my kids is the highlight of my week. What a bizarre sentence to write in 2026 but it’s absolutely 1000% better than modern streaming (outside of my Plex setup).

I gladly pay the (modest/token) late fees to help keep them open at this point. If someone set up a local arcade man…I’d be in heaven ha

reply
HWR_14
2 hours ago
[-]
> I gladly pay the (modest/token) late fees to help keep them open at this point

Keeping movies longer and paying late fees may be hurting them more than helping them. It's entirely possible that the late fees are underpriced to avoid scaring away customers. New customers going away disappointed because the movie they want wasn't returned on time hurts them more than your late fees help.

reply
Forgeties79
2 hours ago
[-]
I'm not keeping them on purpose; I'm just not sweating the fees because I'm happy to pay them.

Additionally, the odds that my kids are holding on to exactly what somebody else wants in that timeframe is very small. It’s a small shop within a larger co-op situation with a modest following and pretty substantial stock. I know for instance we’ve never had an issue of wanting something that was rented.

Has it happened? Maybe. But the fees I've paid probably net positive against that rare instance. They aren't open half the week, so once Monday passes I can't return them for several days anyway. The owner certainly hasn't expressed concern and has even waived the fee before, because clearly it's of little consequence.

reply
hanyki111
1 hour ago
[-]
I don't really understand why they call it the end of the digital revolution.
reply
jimbokun
1 hour ago
[-]
Meaning software has eaten the world and there's nothing left to eat.
reply
DeepYogurt
1 hour ago
[-]
We are collectively out of ideas
reply
wordspotting
41 minutes ago
[-]
So many diseases to solve, nuclear fusion, better materials, expanding the frontier of science, communicating complex ideas to the public, climate change, helping disadvantaged communities better, better farming, better participation platforms for good governance. There are so many aspects we can improve on with AI. But it is contingent on our governments prioritizing progress over destruction.
reply
himata4113
2 hours ago
[-]
Every time I see these I think to myself: is Microsoft Copilot a problem of implementation or of the capability of the models?

I have ZERO doubt that if you put people who haven't used a computer in front of one, with Copilot everywhere (and I mean not the way it is now: instead you're presented with a chatbox in the middle of the screen and you just ask the computer what you want), then I am 99.99% sure that everyone would prefer to use that chatbox rather than trying to figure out how to use a computer. Which is why I am not quick to discredit "microslop": they're most likely pivoting Windows toward how it will look in the future.

Obviously, the strongest argument here is that it should have been an entirely different product, such as a "Windows AI" where the entire system is designed around it. But if you look at their current implementation it's more of a copilot that is just there, letting you know it exists. Obviously not all of these features were thought through, such as Recall; that should have been dead and buried, since it doesn't offer that much real value compared to a magical box that takes in English sentences and does roughly what you want.

At the end of the day it's a question of whether AI will do (or is doing) more harm than good. AI has really only existed in this form for a little more than 3 years and really started shining with the advent of Opus 4.5. We went from models producing more security vulnerabilities than one can count to models fixing obscure human-made ones, and the capabilities will keep increasing (if Anthropic is to be believed). We will enter an era where it will have 95%+ accuracy in doing what a typical computer user would want from AI, and there's really nothing anyone can do to stop it.

So my opinion is that AI will be the next big thing and it might spread way beyond what we can even imagine.

I think we will have things like non-technical people just talking on the phone with an AI agent to get a website done: registering a domain and having the site built within a one-hour phone call, all for pennies, while the AI has access to their financials, mail and other things. All of that is relatively possible today with the simple caveat of security, and I do believe we have enough smart people in the world who can figure out how to make AI better at rejecting social engineering than 99% of humans.

reply
chromacity
1 hour ago
[-]
> I have ZERO doubt that if you put people who haven't used a computer in front of one ... presented with a chatbox in the middle of the screen and you just ask the computer what you want ... I am 99.99% sure that everyone would prefer to use that chatbox

I don't know. We've been telling ourselves things like that about user interfaces for a long time. For decades, it was pretty much universally understood that everyone would prefer to talk to their computer instead of using a keyboard. Now that you can, no one really wants to. In fact, now that we can text / email / IM other people, we don't talk to them as much as before.

One obvious problem with the interface you're proposing is that sometimes, it's easier to do the thing than to explain precisely what you want. For example, it takes much longer to ask ChatGPT what's the weather forecast for this week, and then read the flowery response, than to press Ctrl-N, "wea", enter, and see it at a glance in a consistent format with pictograms.

reply
himata4113
1 hour ago
[-]
You already know how to use a computer or a phone, but take someone who has never seen or used a smartphone, computer or a laptop. I think the story will be very different.
reply
chromacity
1 hour ago
[-]
I don't know. In a vacuum, if we prevent them from ever finding out that there's a faster way with less cognitive overhead? Sure. Until they have to explain precisely which shoes they want the AI agent to buy them...

In any case, in practice, people pick up stuff from each other. I'm old enough that learning to use the computer mouse needed to be a deliberate effort on my end. I never really had to "teach" that to my kids, they just picked it up naturally. So you might even have a difficulty producing that "computer-naive" subject in the first place.

reply
pixl97
1 hour ago
[-]
> I never really had to "teach" that to my kids,

It's better to look at these things statistically rather than anecdotally. And statistically, the Xennial group seems to have the highest penetration of computer skills, even more so than the generations that followed them. Simply put, the new tablet generation is more apt to use apps without understanding the premises of how they work.

If you find yourself going to an actual computer to make "large" purchases, you're part of a group that is not growing in size.

reply
LunicLynx
2 hours ago
[-]
And with robots, this also applies to the physical world.
reply
boh
2 hours ago
[-]
AI is destroying the economic premise that has drawn so much investment into Silicon Valley. It's going from a capital-light business model with network-driven moats that allow market domination, to a capital-heavy, high burn-rate model with the potential not only to offer ZERO moat protection but to destroy the moats that already exist. Cloud infrastructure + vibe coding now make it possible to quickly replace existing apps with custom-fit alternatives. Open source + cheap Chinese LLMs may not be as good as Opus, but maybe good enough turns out to be good enough (Sun Microsystems vs. Linux is a good example). Currently AI has just as much potential to destroy Silicon Valley as to build it up.
reply
jimbokun
1 hour ago
[-]
The open-weight LLMs seem a prime candidate for Innovator's Dilemma-style disruption.
reply
pixl97
1 hour ago
[-]
That sounds like Silicon Valley's fault for taking the actual silicon out of the valley.

Sounds like it's best to be the shovel manufacturer now.

reply
justonepost2
2 hours ago
[-]
it be the end of the paradigm myth, and eventually, the Anthropocene

it be the beginning of vast and infinite potentia spreading out beyond us

reply
lkm0
2 hours ago
[-]
These economic frameworks sure look like pareidolia to me
reply
cmrdporcupine
1 hour ago
[-]
Yeah, I agree with TFA I think.

The introduction of new mass production techniques often brings an initial wave of high profit while early adopters have an advantage... existing workers are more efficient... but this will be followed by a long-term decline in the rate of profit as margins aggressively fall...

e.g. if every software company uses AI to double its coding speed, the price of software will eventually drop by half.

As "AI" becomes a required and common commodity input, competition will drive prices down until the productivity gains are entirely captured by customers, leading to margin compression across the sector.
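As a toy illustration of that compression, with numbers invented purely for the sketch:

    # Toy margin-compression sketch with made-up numbers: AI halves everyone's cost
    # of building software; competition then pushes prices down toward the new cost.
    price, cost = 100.0, 60.0
    margin = (price - cost) / price                # 0.40 before AI

    cost_with_ai = cost / 2                        # everyone adopts, cost halves
    early_margin = (price - cost_with_ai) / price  # 0.70 for early adopters only
    price_later = cost_with_ai / (1 - margin)      # competition restores the old margin ratio
    print(margin, early_margin, price_later)       # 0.4 0.7 50.0

The early adopter briefly enjoys a fatter margin, but once everyone's cost falls the price follows it down, here to half, and the margin ratio lands right back where it started.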

Also... firms will be forced to invest in using AI just to stay in the same place. If you don't adopt it aggressively, you'll be priced out; if you do, your margins still shrink because everyone else did too.

So... yeah, I don't think this is the next part of a "digital wave", if that means a giant increase in new startup investments, SaaS companies, etc. It's actually probably the start of a margin collapse and consolidation in our industry.

If it's 2x easier to build e.g. a CRM, we’ll end up with 10x more CRMs, leading to a "race to the bottom" on pricing.

The last 15 years of investment by people like YC seem to have been in businesses that were "like Uber but for <X>": service businesses on which a small layer of software automated things and drove some sort of explosion of customers. I don't really see how VCs are going to separate wheat from chaff on this front anymore. If anybody can do it... what's the value of any particular approach over the others? I'd think the result would be consolidation.

So I suppose if you're selling "the means of production" in the form of GPUs you're in a good spot, but even that is likely to be subject to aggressive downward pricing.

reply
ETH_start
2 hours ago
[-]
Humanity has industrialized the production of intelligence. We're nowhere near the end of what this leads to.
reply
josefritzishere
2 hours ago
[-]
It could also be a huge bubble, as everyone seems to agree.
reply
Invictus0
2 hours ago
[-]
ItS thE eND of ThE InTeRwEbS
reply
aswegs8
2 hours ago
[-]
Could
reply
schnitzelstoat
2 hours ago
[-]
The theory doesn't seem to make much sense to me - like why can't there be simultaneous technological revolutions? And why would they last an arbitrary 50-60 years?

> People seem to hate Google’s inserting of AI tools into its search results, and hate even more that it is all but impossible to turn it off.

That could do with a solid citation tbh. The anti-AI people are really vocal on social media but personally I like having the AI results given how awful navigating the modern internet has become with all the cookie banners and anti-Ad Blocker popups etc.

Honestly, the LLMs seem like the most transformative technology we've had since the release of the iPhone.

reply
tom_
2 hours ago
[-]
50-60 years is far from arbitrary: it's very roughly two generations (plus a bit of extra time, to ensure the process takes). 50-60 years gives enough time for a generation to grow up and reach adulthood having never known anything other than the post-revolution state.

Not unrelated: https://blog.gardeviance.org/2015/03/on-pioneers-settlers-to...

reply
parrellel
2 hours ago
[-]
I mean when I needed to look up something I used to just google it.

Now, with the advent of LLMs I've had to pull out my old textbooks from storage.

reply