Friendship, love, sex, art, even faith and childrearing are opportunities for substitution with AI. Ask an AI to create a joke for you at a party. Ask an AI to write a heartfelt letter to somebody you respect. Have an AI make a digital likeness of your grandmother so you can spend time with her forever. Have an AI tell you what you should say to your child when they are sad.
Hell. Hell on earth.
So yeah, it’s just everyone collectively devaluing human interaction.
Who do they think will make their ventures profitable? Who do they think will take their dollars and provide goods and services in exchange?
If automation reaches the point where 99% of humans add no value to the "owners" then the "owners" will own nothing.
I don't think that's right. The owners will still own everything. If or when that happens, I think the economy would morph into a new thing completely focused on serving the whims of those "owners."
There's an old Isaac Asimov book with something similar: https://en.wikipedia.org/wiki/Foundation_universe#Solaria (though accomplished more peacefully and with less pain than I think is realistic).
There are good people everywhere, but being good and ethical stands in the way of making money, so most of the good people lose out in the end.
AI is the perfect technology for those who see people as complaining cogs in an economic machine. The current AI bubble is the first major advancement where these people go mask off; when people unapologetically started trying to replace basic art and culture with "efficient" machines, people started noticing.
It's practically the definition of psychopathy.
One of the many terrible things about software engineers is their tendency to think and speak as if they were some kind of aloof galaxy-brain, passively observing humanity from afar. I think that's at least partially the result of 1) identifying as an "intelligent person" and 2) computers and the internet allowing them to become, in large part, disconnected from the rest of humanity. I think they see that aloofness as being a "more intelligent" way to engage with the world, so they do it to act out their "intelligence."
The graph says horse ownership per person. People probably stopped buying horses and let theirs retire (well, to be honest, probably also sent them to the glue factory), and when they stopped buying new horses, horse breeding programs slowed down.
One could also argue that if you don't see this, it's because you'd prefer not to.
If we had at least a somewhat functioning safety net, or UBI, or both, you'd at least have an argument to be made, but we don't. The business model of AI and its associated companies is, if not killing people, certainly attempting to make lots of lives worse at scale. I wouldn't work for one for all the money in the world.
and even if AI becomes good enough to replace most humans the economic surplus does not disappear
it's a coordination problem
in many places on Earth social safety nets are pretty robust, and if AI helps to reduce cost of providing basic services then it won't be a problem to expand those safety nets
...
there's already a pretty serious anti-inequality (or at least anti-billionaire) storm brewing; the question is whether it can motivate the necessary structural changes or will just fuel yet another dumb populist movement
I don't exactly know how I feel about those, but I respect those criticisms. I think the grand synthesis is that UBI exists on top of existing safety nets.
Not only would there be more people on the streets protesting against real or perceived cuts;
there also would be fewer movements based on exclusivist ideologies protesting _in favour of cuts_*
* e.g. racist groups in favour of cutting some kinds of welfare because of racial associations
I'm not up to speed here -- is Bill Gates doing work to reduce the birth rates in Africa?
When the Covid-truther geniuses "figured out" that "Bill Gates was behind Covid", they pulled out things like this as "proof" that his master plan is to reduce the world's population. Not to reduce the rate of increase, but to kill them (because of course these geniuses don't understand derivatives)...
> And not very long after, 93 per cent of those horses had disappeared.
> I very much hope we'll get the two decades that horses did.
> But looking at how fast Claude is automating my job, I think we're getting a lot less.
While most of the text is written from a cold, economic(ish) standpoint, it is really hard not to get a bleak impression from it. The last three sentences express that, if only vaguely. Some ambiguity is left on purpose so you can interpret the daunting impression your own way.
The article presents you with a crushing juxtaposition, implies insane dangers, and leaves you with a feeling of inevitability. Then back to work, I guess.
> I very much hope we'll get the two decades that horses did.
Horses typically live between 25 and 30 years. I agree with OP that most likely those horses were not decimated (killed) but just died out as people stopped mass-breeding them. Also, as others noticed, the chart shows 'horses PER person in US'. The population between 1900 and 1950 increased from 1.5B to 2.5B (globally, but probably a similar almost-70% increase in the US).
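A quick back-of-envelope sketch (using only the approximate numbers mentioned in the thread, so treat it as illustrative) of how the per-person figure and the absolute decline relate:

```python
# Illustrative only: the ~93% figure is horses *per person*; the population
# grew roughly 70% over the same period, so the absolute decline is a bit smaller.
per_person_drop = 0.93            # horses per person fell by ~93%
population_growth = 0.70          # population grew ~70% (thread's rough figure)

remaining_per_person = 1 - per_person_drop                    # 0.07
absolute_remaining = remaining_per_person * (1 + population_growth)

print(f"Absolute horse count fell by about {1 - absolute_remaining:.0%}")
# ~88%: still a near-total collapse, just slightly less steep than the
# per-person chart suggests.
```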
I think it depends on what you worry about:
1) `that the human population decreases 50-80%`?
I don't worry about it even if that happens. 200 years ago the human population was ~1 B; today it is ~8 B. In the year 0 AD the human population was ~0.25 B. Did people 200 years ago worry about it, like "omg, the human population is only 1 B"?
I doubt the human population would decrease by 80% just because there is no demand for humans as a workforce, but I don't see a problem if it decreases by 50%. There would be a short transition period with a surplus of retired people and work needed to keep the infrastructure running, but if robots can help with this then I don't see the problem.
2) `That we will not be needed and we will lose jobs?`
I don't see work as something in demand for its own sake. Most people hate their jobs or do crappy jobs. What people actually worry about is that they won't get any income. And actually not even that - they worry that they will not be able to survive, or will end up homeless. If there is an improvement in production such that food, shelter, transportation, and healthcare are dirt cheap (all the stuff from the bottom of Maslow's pyramid), plus fair distribution at the social level, then I also see a way this can be a non-problem.
3) `That we will all die because of AI`
This I find more plausible, and maybe not even because of AGI but earlier, because of big social unrest during the transition period.
I see quite the opposite, and have very little hope that reduced reliance on labor will make the distribution of wealth more equitable.
Historically, advanced civilizations with better production capabilities don't necessarily do better in war if they lack "practice". Sad but true. Maybe not in 21st century, but who knows.
That, at least, is the fantasy of these people. Fortunately, LLMs don't really work, Tesla cars are still built by KUKA robots (while KUKA has a fraction of Tesla's P/E), and data centers in space are a cocaine-fueled dream.
Yes, actually, because this has been a deep vein of writing for the past 100 or more years. There's The Phools, by Stanisław Lem. There are the novels written by Boris Johnson's father that are all about depopulation. There's Aldous Huxley's Brave New World. How about Logan's Run? There has been so much writing about the automation / technology apocalypse for humans in the past 100 years that it's hard to catalog it -- much of what I have read or seen go by in this vein I've totally forgotten.
It's not remotely a surprise to see this amp up with AI.
At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus.
People have been thinking apocalyptic thoughts like these since at least Malthus's An Essay on the Principle of Population (1798). That's 227 years if you're keeping score. Probably longer; Malthus might only have been the first to write them down and publish them.
We collectively have a lot of choice on the how we deal with it part. I'm personally optimistic that people will vote in people friendly policies when it comes to it.
I agree we can kinda make the argument that abundance is soon upon us, and that humanity as a whole will embrace the ideas of equality and harmony etc etc... but still, there's a kinda uncanny dissociation if you're happily talking about horses disappearing and humans being next while you work on the product that directly causes your prediction to come true, and to happen earlier...
Yes, here's a youtube classic that put forth the same argument over a decade ago, originally titled "Humans need not apply": https://youtu.be/7Pq-S557XQU
So much money is spent on developing gambling, social media, crypto (fraud and crime enabler) and surveillance software. All of these are making people's lives worse, these companies aren't even shy about it. They want to track you, they want you to spend as much time as possible on their products, they want to make you addicted to gambling.
Just by how large these segments are, many of the people developing that software must be posting here, but I have never seen any actual reflection on it.
Sure, I guess developing software making people addicted to gambling pays the bills (and more than that), but I haven't seen even that. These industries just exist and people seem to work for them as if it was just a normal job, with zero moral implications.
In this instance, in particular, I wouldn't expect our preferences to bear any relevance.
I don't know if you are intentionally being vague and existential here. However, context matters, and "the predictive power is zero" sounds unreasonable in the face of history.
Think of humans learning that diseases were affecting us, which led to solutions like antibiotics and vaccines. It was not guaranteed, but I'm skeptical of the predictive power being zero.
It reminds me of "You maniacs! You blew it up! Goddamn you all to hell!" from the original Planet of the Apes (1968), https://youtu.be/mDLS12_a-fk?t=71
Quite ironically, the scene features a horse.
https://www.census.gov/library/visualizations/interactive/te... Look at all the professions on the bottom right: Teachers, therapists, clergy, social workers, etc. It’s not a coincidence that cruel people take top positions.
For the Romans, winning wars was the main source of elite prestige. So the Empire had to expand to accommodate winning more wars.
Today, the stock market and material wealth dominates. If elite dominance of the means of production requires the immiseration of most of the public, that's what we'll get.
That's almost 100% backwards. The Republic expanded. The Empire, not so much.
My comments being downvoted, pretty rare lately, were about never-discussed but legitimate points about AI that I validated IRL. The way AI is discussed on HN doesn't resonate with what I see IRL, to the point that I can't rule out more or less subtle manipulation of the discussions.
Not sure if it's by accident or not, but that's what we are according to today's "tech elite".
Therefore, the most profitable disposition for this dubious form of capital is to convert them into biodiesel, which can help power the Muni buses.
https://www.goodreads.com/work/quotes/55660903-patchwork-a-p...
I think the comparisons are useful enough as metaphors, though I'm wary of the analysis, because it sounds as if someone took a Yudkowsky idea and talked about it like a human, which might make a bad assumption go down more smoothly than it should. But I don't know.
It shines through that the most fervent AI Believers are also Haters of Humans.
1. Even if LLMs made everyone 10x as productive, most companies will still have more work to do than resources to assign to those tasks. The only reason to reduce headcount is to remove people who already weren’t providing much value.
2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.
I wish corporations really acted this rationally.
At least where I live, hospitals fired most secretaries and assistants to doctors a long time ago. The end result? High-paid doctors spending a significant portion of their time on administrative and bureaucratic tasks that were previously handled by those secretaries, preventing them from seeing as many patients as they otherwise would. Cost savings may look good on a spreadsheet, but really the overall efficiency of the system suffered.
If you take away the juniors, you are now asking your seniors to do that work instead which is more expensive and wasteful. The PM cannot tell the AI junior what to do for they don't know how. Then you say, hey we also want you to babysit the LLM to increase productivity, well I can't leave a task with the LLM and come back to it tomorrow. Now I am wasting two types of time.
You could actually just do that, leave an agent on a problem you would give a junior, go back on your main task and whenever you feel like it check the agent's work.
I'll contrast your experience with a well-run (from a profitability standpoint) dentist's office, they have tons of assistants and hygienists and the dentist just goes from room-to-room performing high-dollar procedures, and very little "patient care." If small dentist offices have this all figured out it seems a little strange that a massive hospital does not.
Once you get to something in the thousands or tens of thousands, you just have spreadsheets; and anything that doesn't show up in that spreadsheet might as well not exist. Furthermore, you have competing business units, each of which want to externalize their costs to other business units.
Very similar to what GP described -- when I was in a small start-up, we had an admin assistant who did most of the receipt entry and what-not for our expense reports; and we were allowed to tell the company travel agent our travel constraints and have them give us options for flights. When we were acquired by a larger company, we had to do our own expense reports, and do our own flight searches. That was almost certainly a false economy.
And then when we became a major conglomerate, at some point they merged a bunch of IT functions; so the folks in California would make a change and go home, and those of us in Europe or the UK would come in to find all the networks broken, with no way to fix it until the people in California started coming in at 4pm.
In all cases, the dollars saved is clearly visible in the spreadsheet, while the "development velocity" lost is noisy, diffuse, and hard to quantify or pin down to any particular cause.
I suppose one way to quantify that would be to have the Engineering function track time spent doing admin work and charge that to the Finance function; and time spent idle due to IT outages and charge that to the IT department. But that has its own pitfalls, no doubt.
I am going to assume that the Doctors are just working longer hours and/or aren't as attentive as they could be and so care quality declines but revenue doesn't. Overworking existing staff in order to make up for less staff is a tried and true play.
> I'll contrast your experience with a well-run (from a profitability standpoint) dentist's office, they have tons of assistants and hygienists and the dentist just goes from room-to-room performing high-dollar procedures, and very little "patient care." If small dentist offices have this all figured out it seems a little strange that a massive hospital does not.
By conflating 'Doctors' and 'Dentists' you are basically saying the equivalent of 'all Doctors' and 'Doctors of a certain specialty'. Dentists are 'Doctors for teeth' like a pediatrician is a 'Doctor for children' or an Ortho is a 'Doctor for bones'.
Teeth need maintenance, which is the time consuming part of most visits, and the Dentist has staff to do that part of it. That in itself makes the specialty not really that comparable to a lot of others.
There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.
I want to be optimistic. But it's hard to ignore what I'm doing and seeing. As far as I can tell, we haven't hit serious unemployment yet because of momentum and slow adoption.
I’m not replying to argue, I hope you are right. But I look around and can’t shake the feeling of Wile E. Coyote hanging in midair waiting for gravity to kick in.
Yes, it’s a god of the gaps situation. We don’t know what the ceiling is. We might have hit it, there might be a giant leap forward ahead, we might leap back (if there is a rug pull).
The most interesting questions are the ones that assume human equivalency.
Suppose an AI can produce like a human.
Are you ok with merging that code without human review?
Are you ok with having a codebase that is effectively a black box?
Are you ok with no human being responsible for how the codebase works, or able to take the reins if something changes?
Are you ok with being dependent on the company providing this code generation?
Are we collectively ok with the eventual loss of human skills, as our talents rust and the new generation doesn’t learn them?
Will we be ok if the well of public technical discussion LLMs are feeding from dries up?
Those are the interesting debates I think.
When was the last time you looked at the machine code your compiler was giving you? For me, doing embedded development on an architecture without a mature compiler the answer is last Friday but I expect that the vast majority of readers here never look at their machine code. We have abstraction layers that we've come to trust because they work in practice. To do our work we're dependent on the companies that develop our compilers where we can at least see the output, but also companies that make our CPUs which we couldn't debug without a huge amount of specialized equipment. So I expect that mostly people will be ok with it.
You could rephrase that as “when was the last time your compiler didn’t work as expected?”. Never in my whole career in my case. Can we expect that level of reliability?
I'm not making the argument that "the LLM is not good enough"; that would bring us back to the boring discussion of "maybe it will be".
The thing is that human language is ambiguous and subject to interpretation, so I think we will have occasionally wrong output even with perfect LLMs. That makes black-box behavior dangerous.
It's like everyone knows it is super cool but nobody has really cracked the code for what its economic value truly, truly is yet
If anything the quality has gotten worse, because the models are now so good at lying when they don’t know it’s really hard to review. Is this a safe way to make that syscall? Is the lock structuring here really deadlock safe? The model will tell you with complete confidence its code is perfect, and it’ll either be right or lying, it never says “I don’t know”.
Every time OpenAI or Anthropic or Google announce a “stratospheric leap forward” and I go back and try and find it’s the same, I become more convinced that the lying is structural somehow, that the architecture they have is not fundamentally able to capture “I need to solve the problem I’m being asked to solve” instead of “I need to produce tokens that are likely to come after these other tokens”.
The tool is incredible, I use it constantly, but only for things where truth is irrelevant, or where I can easily verify the answer. So far I have found programming, other than trivial tasks and greenfield ”write some code that does x”, much faster without LLMs
Fully agree; in fact, this literally happened to me a week ago -- ChatGPT was confidently incorrect about its simple lock structure for my multithreaded C++ program, and wrote paragraphs upon paragraphs about how it works, until I pressed it twice about a (real) possibility of some operations deadlocking, and then it folded.
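For anyone who hasn't hit this failure mode: the classic bug here is inconsistent lock ordering. A minimal, hypothetical sketch (in Python rather than the original C++, and not the actual code from that session):

```python
import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker_1():
    with lock_a:          # acquires A first...
        time.sleep(0.1)
        with lock_b:      # ...then blocks waiting for B
            pass

def worker_2():
    with lock_b:          # acquires B first...
        time.sleep(0.1)
        with lock_a:      # ...then blocks waiting for A
            pass

# Each thread holds one lock and waits forever for the other: a deadlock.
# (Running this will hang.) The fix is a single global acquisition order,
# which is exactly the kind of invariant you still have to verify by hand
# when a model insists its locking scheme is fine.
threading.Thread(target=worker_1).start()
threading.Thread(target=worker_2).start()
```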
> Every time a major announcement comes out saying so-and-so model is now a triple Ph.D programming triathlon winner, I try using it. Every time it’s the same - super fast code generation, until suddenly staggering hallucinations.
As a university assistant professor trying to keep up with AI while doing research/teaching as before, this also happens to me and I am dismayed by it. I am certain there are models out there that can solve IMO problems and generate research-grade papers, but the ones I can get easy access to as a customer routinely mess up stuff, including:
* Adding extra simplifications to a given combinatorial optimization problem, so that its dynamic programming approach works.
* Claiming some inequality is true, when upon reflection it had derived A >= B from A <= C and C <= B (see the short sketch below).
(This is all ChatGPT 5, thinking mode.)
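On that second bullet, for the record, transitivity only supports the opposite bound:

```latex
A \le C \;\wedge\; C \le B \;\Longrightarrow\; A \le B,
\qquad\text{while } A \ge B \text{ would follow only in the degenerate case } A = B = C.
```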
You could fairly counterclaim that I need to get more funding (tough) or invest much more of my time and energy to get access to models closer to what Terence Tao and other top people trying to apply AI in CS theory are currently using. But at least the models cheap enough for me to access as a private person are not on par with what the same companies claim to achieve.
- They are not making progress, currently. The elephant-in-the-room problem of hallucinations is exactly the same as it was 3 years ago or, as I said above, worse
- It's clearly possible to solve this, since we humans exist and our brains don't have this problem
There's then two possible paths: Either the hallucinations are fundamental to the current architecture of LLMs, and there's some other aspect about the human brains configuration that they've yet to replicate. Or the hallucinations will go away with better and more training.
The latter seems to be the bet everyone is making; that's why there's all these data centers being built, right? So the question is whether larger training will solve the problem, and whether there's enough training data, silicon, and electricity on earth to perform that "scale" of training.
There's 86B neurons in the human brain. Each one is a stand-alone living organism, like a biological microcontroller. It has constantly-mutating state and memory: short term through RNA and protein presence or lack thereof, long term through chromatin formation, enabling and disabling its own DNA over time, and in theory also permanent memory through DNA rewriting via TEs. Each one has a vast array of input modes - direct electrical stimulation, chemical signalling through a wide array of signaling molecules, and electrical field effects from adjacent cells.
Meanwhile, GPT-4 has 1.1T floats. No billions of interacting microcontrollers, just static floating points describing a network topology.
The complexity of the neural networks that run our minds is spectacularly higher than the simulated neural networks we're training on silicon.
That's my personal bet. I think the 86B interconnected stateful microcontrollers are so much more capable than the 1T static floating points, and the 1T static floating points are already nearly impossibly expensive to run. So I'm bearish, but of course, I don't actually know. We will see. For now all I can conclude is that the frontier model developers lie incessantly in every press release, just like their LLMs.
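Taking the figures quoted in this thread at face value (86B neurons, and the rumoured 1.1T parameters for GPT-4), the ratio is one rough way to see the gap:

```python
# Back-of-envelope using only the numbers quoted above; both are approximate
# and the GPT-4 parameter count is a rumour, not an official figure.
neurons = 86e9          # ~86 billion neurons in a human brain
parameters = 1.1e12     # ~1.1 trillion parameters attributed to GPT-4

print(f"{parameters / neurons:.0f} static parameters per stateful, living neuron")
# ~13 floats per neuron, before counting synapses, RNA/protein state,
# chromatin changes, or any of the other machinery described above.
```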
Any sources on that? Except for some big tech companies, I don't see that happening at all. While not empirical, most devs I know try to avoid it like the plague. I can't imagine that many devs actually jumped on the hype train to replace themselves...
Those folks that do churn out such apps, for them it's great & horrible long term. For folks like me, development is maybe 10% of my work, and by far the best part - creative, problem-solving, stimulating, actually learning something myself. Why would I want to mildly optimize that 10% and lose all the good stuff, while speed wouldn't even visibly improve?
To really improve speed in bigger orgs, the change would have to happen in processes, office politics, management priorities and so on. No help of llms there, if anything trend-chasing managers just introduce more chaos with negative consequences.
There were many secretaries up until the late 20th century that took dictation, either writing notes of what they were told or from a recording, then they typed it out and distributed memos. At first, there were many people typing, then later mimeograph machines took away some of those jobs, then copying machines made that faster, then printers reduced the need for the manual copying, then email reduced the need to print something out, and now instant messaging reduces email clutter and keep messages shorter.
All along that timeline there were fewer and fewer people involved, all for the valuable task of communication. While they may not have held these people in high esteem, they were critical for getting things done and scaling.
I’m not saying LLMs are perfect or will replace every job. They make mistakes, and they always will; it’s part of what they are. But, as useful as people are today, the roles we serve in will go away and be replaced by something else, even if it’s just to indicate at various times during the day what is or isn’t pleasing.
It would be amazing if LLMs could replace the role that meetings have in communication, but somehow I strongly doubt that will happen. It is a fun idea to have my AI talk with your AI so no one needs to actually communicate, but the result is more likely to create barriers to communication than to help it.
AI isn't automation. It's thinking. It automates the brain out of human jobs.
You can still get a job that requires a body. My job doesn't require a body, so I'm screwed. If you're say, a surgeon or a plumber, you're in a better place.
Why this example? One of the things automation has done is reduce and replace stevedores, the shipping equivalent of stacking shelves.
Amazon warehouses are heavily automated, almost self-stacking-shelves. At least, according to the various videos I see, I've not actually worked there myself. Yet. There's time.
> AI isn't automation. It's thinking. It automates the brain out of human jobs. You can still get a job that requires a body. My job doesn't require a body, so I'm screwed. If you're say, a surgeon or a plumber, you're in a better place.
Right up until the AI is good enough to control the robot that can do that job. Which may or may not be humanoid. (Plus side: look how long it's taking for self-driving cars, how often people think a personal anecdote of "works for me" is a valid response to "doesn't work for me").
Even before the AI gets that good, a nice boring remote-control android doing whatever manual labour could outsource the "controller" position to a human anywhere on the planet. Mental image: all the unemployed Americans protesting outside Tesla's factories when they realise the Optimus robots within are controlled remotely from people in 3rd world countries getting paid $5/day.
People that actually care about the quality of their output are a dying breed, and that death is being accelerated by this machine that produces somewhat plausible-looking output, because we're optimizing around "plausible-looking" and not "correct"
The sad thing is that for many software devs, the implementation is the fun bit.
There were more people typing than ever before? Look around you, we're all typing all day long.
1. either reading notes in shorthand, or reading something from a sheet that was already fully typed using a typewriter, or listening to recorded or live dictation
2. then typing that content out into a typewriter.
People were essentially human copying machines.
Current tech can't yet replace everything but many jobs already see the horizon or are at sunset.
The last few times this happened, the new tech, whether textile mills or computers, drove job creation as well as replacement.
This time around, some components of progress are visible, because at the end of the day people can use this tech to create wealth at unprecedented scale, but others aren't, as the tech is run by small teams at large scale and has virtually no related industries it depends on, the way, say, cars do. It's energy and GPUs.
Maybe we will all be working in GPU-related industries? But that seems like another small-team, high-scale business. Maybe a few tens of millions can be employed there?
Meanwhile, I just don't see the designer + AI job role materializing; I see corpos using AI and cutting out the middleman, while designers + AI get mostly ostracized, unable to rise, like a crab in a bucket of crabs.
_Where?_ so far the only technology to have come out widespread for this is to shove a chatbot interface into every UI that never needed it.
Nothing has been improved, no revelatory tech has come out (tools to let you chatbot faster don’t count).
If you dont see it happening around you, you're just not looking.
This doesn't sound like "creating wealth at unprecedented scale"
Oh? I sat down for a game of chess against a computer and it never showed up. I was certain it didn't show up because computers are unable to without human oversight, but tell me why I'm wrong.
Those modes of transport are all equivalent to planes for the point being made.
I (not that I'm even as good as "mediocre" at chess) cannot legally get from my current location to the USA without some other human being involved. This is because I'm not an American and would need my entry to be OKed by the humans managing the border.
I also doubt that I would be able to construct a vessel capable of crossing the Atlantic safely, possibly not even a small river. I don't even know enough to enumerate how hard that would be; I would need help making a list. Even if I knew all that I needed to, it would be much harder to do it from raw materials rather than buying pre-cut timber, steel, cloth (for a sail), etc. Even if I did it that way, I can't generate cloth fibres and wood from my body like plants do. Even if I did extrude and secrete raw materials, plants photosynthesise and I eat; living things don't spontaneously generate these products from their souls.
For arguments like this, consider the AI the way you consider Stephen Hawking: lack of motor skills isn't relevant to the rest of what they can do.
When AI gets good enough to control the robots needed to automate everything from mining the raw materials all the way up to making more robots to mine the raw materials, then not only are all jobs obsolete, we're also half a human lifetime away from a Dyson swarm.
The point is that even those things require oversight from humans. Everything humans do requires oversight from humans. How you missed it, nobody knows.
Maybe someday we'll have a robot uprising where humans can be exterminated from life and computers can continue to play chess, but that day is not today. Remove the human oversight and those computers will soon turn into lumps of scrap unable to do anything.
Sad state of affairs when not even the HN crowd understands such basic concepts about computing anymore. I guess that's what happens when one comes to tech by way of "Learn to code" movements promising a good job instead of by way of having an interest in technology.
'cause you said:
Computer chess systems, on the other hand, cannot do anything without human oversight.
The words "on the other hand" draws a contrast, suggesting that the subject of the sentence before it ("chess grandmasters") are different with regard to the task ("show up to elite tournaments"), and thus can manage without the stated limitation ("anything without human oversight").> Maybe someday we'll have a robot uprising where humans can be exterminated from life and computers can continue to play chess, but that day is not today. Remove the human oversight and those computers will soon turn into lumps of scrap unable to do anything.
OK, and? Nobody's claiming "today" is that day. Even Musk despite his implausible denials regarding Optimus being remote controlled isn't claiming that today is that day.
The message you replied to was this: https://news.ycombinator.com/item?id=46201604
The chess-playing example there was an existing example of software beating humans in a specific domain in order to demonstrate that human oversight is not a long-term solution, you can tell by the use of the words "end state", and even then hypothetical (due to "if"), as in:
If successful, the end state is full automation
There was a period where a chess AI that was in fact playing a game of chess could beat any human opponent, and yet would still lose to the combination of a human-AI team. This era has ended and now the humans just hold back the AI, we don't add anything (beyond switching it on).
Furthermore, there's nothing at all that says that an insufficiently competent AI won't wipe us out:
And as we can already observe, there's clearly nothing stopping real humans from using insufficiently competent AI due to some combination of being lazy and/or the vendors over-promising what can be delivered.
Also, we've been in a situation where the automation we have can trigger WW3 and kill 90% of the human population despite the fact that the very same automation would be imminently destroyed along with it since the peak of the Cold War, with near-misses on both US and USSR systems. Human oversight stopped it, but like I said, we can already observe lazy humans deferring to AI, so how long will that remain true?
And it doesn't even need to be that dramatic; never mind global defence stuff, just consider correlated risks: all the companies outsourcing all their decisions to the same models, even when the models' creators win a Nobel prize for creating them. That is a description of the Black–Scholes formula and its involvement in the 2008 financial crisis; sure, it didn't kill us all, but it's an illustration of the failure mode rather than of the consequences.
I know it can be hard for programmers stuck in a programming-language mindset, especially those who learned about software from "Learn to code" movements, but as this is natural language, technically it only draws what I intended for it to draw. If you wish to interpret it another way, cool. Much like in the Carly Simon song of a similar nature, it makes no difference to me.
The same will be true of every other intellectual discipline with time. It's already happening with maths and science and coding.
The one where computers don't magically run all by themselves. It's amazing how out of touch HN has become with technology. Thinking that you can throw something up into the cloud, or whatever was imagined, needing no human oversight to operate it... Unfortunately, that's not how things work in this world. "The cloud" isn't heaven, despite religious imagery suggesting otherwise. It requires legions of people to make it work.
This is the outcome of that whole "Learn to code" movement from a number of years ago, I suppose. Everyone thinks they're an expert in everything when they reach the mastery of being able to write a "Hello, World" program in their bedroom.
But do tell us what planet you are on as it sounds wonderful.
(Disclaimer: this is me trying to be optimistic in a very grim and depressing situation)
And that's why across-the-board AI-induced job losses aren't going to happen: nobody wants the economic house of cards to collapse. Corporate leaders aren't stupid enough to blow everything up because they don't want to be blown up in the process. And if they actually are stupid enough, politicians will intervene with human-protectionism measures like regulations mandating humans in the loop of major business processes.
The horse comparison ultimately doesn’t work because horses don’t vote.
Businesses need consumers when those consumers are necessary to provide something in return (e.g. labor). If I want beef and only have grass, my grass business needs people with cattle wanting my grass so that we can trade grass for beef, certainly. But if technology can provide me beef (and anything else I desire) without involving any other people, I don't need a business anymore. Businesses is just a tool to facilitate trade. No need for trade, no need for business.
If AI can take all the jobs (IMO at least a decade away for the robotics, and that's a minimum not a best-guess), the economy hasn't been destroyed, it's just doing whatever mega-projects the owners (presumably in this case the Chinese government) want it to do.
That can be all the social stability stuff they want. Which may be anything from "none at all" to whatever the Chinese equivalent is of the American traditional family in a big detached house with a white picket fence, everyone going to the local church every Sunday, people supporting whichever sports teams they prefer, etc.
I don't know Chinese culture at all (well, not beyond OSP and their e.g. retelling of Journey to the West), so I don't know what their equivalents to any of those things would be.
Can the process be modelled using game theory where the actors are greedy corporate leaders and hungry populace?
He's going to learn how to drive (and repair) a tractor but he's also going to learn how to ride a horse.
People seem to think this discussion is a binary where either agents replace everybody or they don't. It's not that simple. In aggregate, what's more likely to happen (if the promises of AI companies hold good) is large scale job losses and the remaining employees becoming the accountability sinks to bear the blame when the agent makes a mistake. AI doesn't have to replace everybody to cause widespread misery.
Computers can't play chess.
In the past a strike mattered. With robots, it may have to go on for years to matter.
This is very important yet rarely talked about. Having worked in a well-run group on a very successful product, I could see that no matter how many people were on a project there was always too much work. And always too many projects. I am no longer with the company but I can see some of the ideas talked about back then being launched now, many years later. For a complex product there is always more to do, and AI would simply accelerate development.
Instead our productivity went way above anything he could imagine, yet there was no radical shift in labor. We just instead started making billionaires by the thousand, and soon enough we can add trillionaires. He underestimated how many people were willing to designate the pursuit of wealth as the meaning of life itself.
At least since the Industrial Revolution, and probably before, the only advance that has led to shorter work weeks is unions and worker protections. Not technology.
Technology may create more surplus (food, goods, etc) but there’s no guarantee what form that surplus will reach workers as, if it does at all.
I mean, that basically just sums up how capitalism works. Profit growth is literally (even legally!) the only thing a company can care about. Everything else, like product quality, is in service to that goal.
Profit growth is based primarily on offering the product that best matches consumer wishes at the lowest price and production cost possible. That benefits both the buyer and the seller. If the buyer does not care about product quality, then you will not have any company producing quality products.
The market is just a reflection of that dynamic. And in the real world we can easily observe that: many market niches are dominated by quality products (outdoor and safety gear, professional and industrial tools…) while others tend to be dominated by non-quality (low-end fashion, toys).
And that result is not imposed by profit growth but by the average consumer preference.
You can of course disagree with those consumer preferences and not buy low-quality products; that's why you most probably also find high-quality products in any market niche.
But you cannot blame companies for that. What they sell is just the result of aggregated buyer preferences and of free-market decisions.
"There are changes in other spheres too which we must expect to come. When the accumulation of wealth is no longer of high social importance, there will be great changes in the code of morals. We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years, by which we have exalted some of the most distasteful of human qualities into the position of the highest virtues. We shall be able to afford to dare to assess the money-motive at its true value. The love of money as a possession – as distinguished from the love of money as a means to the enjoyments and realities of life – will be recognised for what it is, a somewhat disgusting morbidity, one of those semi-criminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease. All kinds of social customs and economic practices, affecting the distribution of wealth and of economic rewards and penalties, which we now maintain at all costs, however distasteful and unjust they may be in themselves, because they are tremendously useful in promoting the accumulation of capital, we shall then be free, at last, to discard."
It's rather unfortunate it only began in 1967, because I think we would see an even more extreme flip if we were able to just go back a decade or two more, and back towards Keynes' time. As productivity and wealth accumulation increased, society seems to have trended in the exact opposite direction he predicted. Or at least there's a contemporary paradox. Because I think many, if not most, younger people hold wealth accumulation with some degree of disdain yet also seek to do the exact same themselves.
In any case, in a society where wealth is seen as literally the most important aspect in life, it's not difficult to predict what follows.
[1] - https://www.heri.ucla.edu/monographs/50YearTrendsMonograph20...
---
1966 Median Household Income = $7400 [1].
51% of students in the $0-9999 bracket
Largest chunk of students (33%) in $6k-$9999 bracket.
Percent of students from families earning at least 2x median = 23%
---
2015 Median Household Income = $57k [1].
65% of students came from families earning more than $60k.
Largest chunk of students (18%) in $100k-$150k bracket.
Percent of students from families earning at least 2x median = 44%
---
So I think it's fairly safe to say that the average student at UCLA today comes from a significantly wealthier family than in 1966.
[1] - https://www.census.gov/library/publications/1967/demo/p60-05...
What we know so far though is that many of the traditional values were bound to the old society structures, based on the traditional family.
The advent of the sexual revolution, brought by the contraception pill, completely obliterated those structures, changing the family paradigm since then. Only accentuated in the last decade by social media and the change in the sexual marketplace due to dating apps.
Probably today many young people would just prioritize reputation (e.g. followers) over wealth and life philosophy, as that seems to be the trend that dictates the sexual marketplace dynamics.
Still haven't gotten rid of work for work's sake being a virtue, which explains everything else. Welfare? You don't "deserve" it. Until we solve this problem, we're more or less heading straight for feudalism.
Didn’t we also get standards of living much higher than he would ever imagine? I think blaming everything on billionaires is really misguided and shallow.
We just instead started doing Bullshit Jobs. https://en.wikipedia.org/wiki/Bullshit_Jobs
I agree with all the limitations you've written about the current state of AI and LLMs. But the fact is that the tech behind AI and LLMs never really gets worse. I also agree that just scaling and more compute will probably be a dead end, but that doesn't mean that I don't think that progress will still happen even when/if those barriers are broadly realized.
Unless you really believe human brains have some sort of "secret special sauce" (and, FWIW, I think it's possible - the ability of consciousness/sentience to arise from "dumb matter" is something that I don't think scientists have adequately explained or even really theorized), the steady progress of AI should, eventually, surpass human capabilities, and when it does, it will happen "all at once".
Not really, given that the article goes into detail about this in the first paragraph, with US data and graphs: "Then, between 1930 and 1950, 90% of the horses in the US disappeared."
The point isn't to claim that motor vehicles did not replace horses, they obviously did, but that the replacement was less "sudden" than claimed.
They have more work to do until they don't.
The number of bank tellers went up for a while after the invention of the ATM, but then it went down, because all the demand was saturated.
We still need food, farming hasn't stopped being a thing, nevertheless we went from 80-95% of us working in agriculture and fishing to about 1-5%, and even with just those percentages working in that sector we have more people over-eating than under-eating.
As this transition happened, people were unemployed, they did move to cities to find work, there were real social problems caused by this. It happened at the same time that cottage industries were getting automated, hand looms becoming power-looms, weaving becoming programmable with punch cards. This is why communism was invented when it was invented, why it became popular when it did.
And now we have fast-fashion, with clothes so fragile that they might not last one wash, and yet still spend a lower percentage of our incomes on clothes than the pre-industrial age did. Even when demand is boosted by having clothes that don't last, we still make enough to supply demand.
Lumberjacks still exist despite chainsaws, and are so efficient with them that the problem is we may run out of rainforests.
Are there any switchboard operators around any more, in the original sense? If I read this right, the BLS groups them together with "Answering Service", and I'm not sure how this other group then differs from a customer support line: https://www.bls.gov/oes/2023/may/oes432011.htm
> 2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.
This would be absolutely correct — I've made the analogy to Amdahl's law myself previously — if LLMs didn't also do so many of the other things. I mean, the linked blog post is about answering new-starter questions, which is also not the only thing people get paid to do.
Now, don't get me wrong, I accept the limitations of all the current models. I'm currently fairly skeptical that the line will continue to go up as it has been for very much longer… but "very much longer" in this case is 1-2 years, room for 2-4 doublings on the METR metric.
Also, I expect LLMs to be worse at project management than at writing code, because code quality can be improved by self-play and reading compiler errors, whereas PM has slower feedback. So I do expect "manage the AI" to be a job for much longer than "write code by hand".
But at the same time, you absolutely can use an LLM to be a PM. I bet all the PMs will be able to supply anecdotes about LLMs screwing up just like all the rest of us can, but it's still a job task that this generation of AI is still automating at the same time as all the other bits.
I'm still waiting for something that can learn and adapt itself to new tasks as well as humans can, and something that can reason symbolically about novel domains as well as we can. I've seen about enough from LLMs, and I agree with the critique that some type of breakthrough neuro-symbolic reasoning architecture will be needed. The article is right about one thing: at that moment AI will overtake us suddenly! But I doubt we will make linear progress toward that goal. It could happen in one year, five, ten, fifty, or never. In 2023 I was deeply concerned about being made obsolete by AI, but now I sleep pretty soundly knowing the status quo will more or less continue until Judgment Day, which I can't influence anyway.
So, while I don't think AGI will happen any time soon, I wonder what 'roads' we'll build to squeeze the most out of our current AI. Probably tons of power generation.
What would that look like for navigating life without AI? Living in a community similar to the Amish or Hasidic Jews that don't integrate technology in their lives as much as the average person does? That's a much more extreme lifestyle change than moving to NYC to get away from cars.
Remember when "AGI" was the weasel word because 1980s AI kept on not delivering?
To companies like Anthropic, “AGI” really means: “Liquidity event for (AI company)” - IPO, tender offer or acquisition.
Afterwards, you will see the same broken promises as the company will be subject to the expectations of Wall St and pension funds.
That's highly irrelevant because if it were otherwise, we would already be replaced. The article was talking about the future.
It only appears “simple” because you're used to seeing working engines everywhere without ever having to maintain them, but neither the previous generations nor the engineers working on modern engines would agree with you on that.
An engine performs “a simple mechanical operation” the same way an LLM performs a “simple computation”.
The question is how do our individuals, and more importantly our various social and economic systems handle it when exactly what humans can do to provide value for each other shifts rapidly, and balances of power shift rapidly.
If the benefits of AI accrue to/are captured by a very small number of people, and the costs are widely dispersed things can go very badly without strong societies that are able to mitigate the downsides and spread the upsides.
Banks used to have rooms full of bank clerks who manually did double-entry bookkeeping for all the bank's transactions. For most people, this was a very boring job, and it made bank transactions slow and expensive. In the 50's and 60's we replaced all these people with computers. An entire career of "bank clerk" vanished, and it was a net good for humanity. The cost of bank transactions came down (by a lot!), banks became more responsive and served their customers better. And the people who had to do double-entry bookkeeping all day long got to do other, probably more interesting, jobs.
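For readers who never saw the manual version: the invariant those clerks enforced is tiny, which is part of why it automated so cleanly. A minimal, purely illustrative sketch:

```python
from dataclasses import dataclass

@dataclass
class Entry:
    account: str
    debit: float = 0.0
    credit: float = 0.0

def post(ledger, entries):
    # Double-entry rule: every transaction's debits must equal its credits.
    # Clerks checked this by hand; a computer checks it in one line.
    assert abs(sum(e.debit for e in entries) - sum(e.credit for e in entries)) < 1e-9, \
        "unbalanced transaction"
    ledger.extend(entries)

ledger = []
# A customer deposits 100: the bank's cash goes up, and so does its
# liability to the customer.
post(ledger, [Entry("cash", debit=100.0),
              Entry("customer_deposits", credit=100.0)])
```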
There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way. They're boring jobs (for most people doing them) and having humans do them makes administration slow and expensive. Automating them will be a net good for humanity. Imagine if "this meeting could have been an email" actually moves to "this meeting never happened at all because the person making the decision just told the LLM and it did it".
You are right that the danger is that most of the benefits of this automation will accrue to capital, but this didn't happen with the bank clerk automation - bank customers accrued a lot of the benefits too. I suspect the same will be true with this automation - if we can create and scale organisations easier and cheaper without employing all the admin staff that we currently do, then maybe we create more agile, responsive, organisations that serve their customers better.
I don't mean to pick on your example too much. However, when I worked in financial audit, reviewing journal entries spit out from SAP was mind numbingly boring. I loved doing double-entry bookkeeping in my college courses. Modern public accounting is much, much more boring and worse work than it was before. Balancing entries is enjoyable to me. Interacting with the terrible software tools is horrific.
I guess people who would have done accounting are doing other, hopefully more interesting jobs, in the sense that the absolute number of US accountants is in steep decline due to the low pay and the highly boring work. I myself am certainly one of them as a software-engineering career switcher. But the actual work of a modern accountant has not been improved in terms of interesting tasks to do. It's also become the email + meetings + spreadsheet that you mentioned, because there wasn't much else for it to evolve into.
I'm not sure most of those organizations will have many customers left, if every white collar admin job has been automated away, and all those people are sitting unemployed with whatever little income their country's social safety net provides.
Automating away all the "boring jobs" leads to an economic collapse, unless you find another way for those people to earn their living.
Humans will continue to have certain desires far outstripping the supply we have for a long time to come.
We still don’t have cures for all diseases, personal robot chefs & maids, and an ideal house for everyone, for example. Not all have the time to socialize as much as they wish with their family and friends.
There will continue to be work for humans as long as humans provide value & deep connections beyond what automation can. The jobs could themselves become more desirable with machines automating the boring and dangerous parts, leaving humans to form deeper connections and be creatively human.
The transition period can be painful. There should be sufficient preparation and support to minimize the suffering.
Workers will need to have access to affordable and effective methods to retrain for new roles that will emerge.
“soft” skills such as empathetic communication and tact could surge in value.
Or, as Cory Doctorow argues, the machines could become tools to extract "efficiency" by helping the employer make their workers lives miserable. An example of this is Amazon and the way it treats its drivers and warehouse workers.
it's interesting how it's never your job that will be automated away in this fantasy, it's always someone else's.
> There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way.
Managing risks, can't automate it. Every project and task needs a responsibility sink.
The point about AI not having "skin" (I assume "skin in the game") is well taken. I say often that "if you've assigned an AI agent the 'A' in a RACI matrix, you're doing it wrong". Very important lesson that some company will learn publicly soon enough.
I don't disagree, though I'd put it more as "machines cannot take responsibility for decisions, so machines must not have authority to make decisions".
But we've all been in meetings where there are too many people in the room, and only one person's opinion really counts. Replacing those other people with an LLM capable of acting on the decision would be a net positive for everyone involved.
AI, at the limit, is a vampiric technology, sucking the differentiated economic value from those that can train it. What happens when there are no more hosts to donate more training-blood? This, to me, is a big problem, because a model will tend to drift from reality without more training-blood.
The owners of the tech need to reinvest in the hosts.
The problem for me is the point of the economy in the limit where robots are better, faster and cheaper than any human at any job. If the robots don’t decide we’re worth keeping around we might end up worse than horses.
But machines can experience neither pain nor pleasure.
LLMs have over 1B users and exchange over 1T tokens with us per day. We put them through all conceivable tasks and provide support for completing those tasks, and push back when the model veers off. We test LLM ideas in reality (like experiment following hypothesis) and use that information to iterate. These logs are gold for training on how to apply AI in real world.
But from the perspective of a human being, an animal, and the environment that needs love, connection, mutual generosity and care, another human being who can provide those is priceless.
I propose we break away and create our own new economy and the ultra-wealthy can stay in their fully optimised machine dominated bunkers.
Sure maybe we'll need to throw a few food rations and bags of youthful blood down there for them every once in a while, but otherwise we could live in an economy that works for humanity instead.
"Don't give yourselves to these unnatural men, machine men, with machine minds and machine hearts. You are not machines, you are not cattle, you are men. You have the love of humanity in your hearts."
Spoken 85 years ago and even more relevant today
They sure as shit won't be content to leave the rest of us alone.
That said my theory about power and privilege is that it's actually just a symptom of a deep fear of death. The reason gaining more money/power/status never lets up is because there's no amount of money/power/status that can satiate that fear, but somehow naively there's a belief that it can. I wouldn't be surprised if most people who have any amount of wealth has a terrible fear of losing it all, and to somebody whose identity is tied to that wealth, that's as good as death.
You can definitely use AI and automation to help yourself and your family/community rather than the oligarchs. You set the prompts. If AI is smart enough to do your old job, it is also smart enough to help you become independent.
I can’t help but smile at the possibility that you could be a bot.
In times past, the only people on earth who had their standard of living raised to a level that allowed them to cast their gaze upon the stars were the kings and their courts, vassals, and noblemen. As time passed we have learned to make technologies that provide enough energy slaves to the common man that everyone lives a life that a king would have envied in times past.
So the question arises as to whether AI or the pursuit of AGI provides more or less energy slaves to the common man?
AI kinda breaks this; there is a real risk that human labor is going to become almost worthless this century, and this might mean that the common man ends up worse off despite nominal economic growth.
It's really changing cultural expectations. Don't ping a human when an LLM can answer the question probably better and faster. Do ping a human for meaningful questions related to product directions / historical context.
What LLMs are killing is:
- noisy Slacks with junior folks' questions. Those are now your Gemini / ChatGPT sessions.
- tedious implementation sessions.
The vast majority of the work is still human led from what I can tell.
What I want to know when I join a company is "why" the system does what it does. Sure, give me pointers, some overview of how the code is structured, that always helps, but if you don't tell me why how am I supposed to work?
$currentCompany has the best documentation I've seen in my career. It's been spun off from a larger company, from people collaborating asynchronously and remotely whenever they had some capacity.
No matter how diligent we've been, as soon as the company started in earnest and we got people fully dedicated to it, there's been a ton of small decisions that happened during a quick call, or on a slack thread, or as a comment on a figma design.
This is the sort of "you had to be there" context the onboarding should aim to explain, and I don't see how LLMs help with that.
I think done right it is a superior onboarding experience. As a new hire, you no longer have to wait for your mentor to be available to learn some badly documented tech things. This is really empowering some of them. The lack of building human context / connections etc is real, and I don't think LLMs can meaningfully help there. Hence my skepticism for the horse analogy.
The way the average dev structures their code requires like 10x the number of lines mine does and at least 10x the amount of time to maintain... The interest on technical debt compounds like interest on normal debt.
Whenever I join a new project, within 6 months, I control/maintain all the core modules of the system and everything ends up hooked up to my config files, running according to the architecture I designed. Happened at multiple companies. The code looks for the shortest path to production and creates a moat around engineers who can make their team members' jobs easier.
IMO, it's not so different to how entrepreneurship works. But with code and processes instead of money and people as your moat. I think once AI can replace top software engineers, it will be able to replace top entrepreneurs. Scary combination. We'll probably have different things to worry about then.
1: https://www.lbec-law.com/blog/2025/04/the-majority-of-driver...
I am regularly tempted to do this (I have done this a few times), but unless I truly own the project (being the tech lead or something), I stop myself. One of the reasons is reluctance to trespass uninvited on someone else's territory of responsibility, even if they do a worse job than I could. The human cost of such a situation (to the project and ultimately to myself) is usually worse than the cost of living with the status quo. I wonder what your thoughts are on this.
6 months is also the average time it takes people like you to burn out on a project. Usually it starts with a relatively simple change/addition requested by a customer that turns into a 3-month-long refactor - "because the architecture is wrong". And we just let you do it, because we know fighting windmills is futile.
You’re assuming the LLM produces extra complexity because it’s mimicking human code. I think it’s more likely that LLMs output complex code because it requires less thought and planning, and LLMs are still bad at planning.
It's often very creative how junior devs approach problems. It's like they don't fully understand what they're doing, and the code itself is part of the exploration and brainstorming process, trying to find the solution as they write... Very different from how senior engineers approach coding, where you don't even write your first line until you have a clear high-level picture of all the parts and how they will fit together.
About the second point, I've been under the impression that because LLMs are trained on average code, they infer that the bugs and architectural flaws are desirable... So if it sees your code is poorly architected, it will generate more of that poorly architected code on top. If it sees hacks in your codebase, it will assume hacks are OK and give you more hacks.
When I use an LLM on a poorly written codebase, it does very poorly and it's hard to solve any problem or implement any feature and it keeps trying to come up with nasty hacks... Very frustrating trial and error process; eats up so many tokens.
But when I use the same LLM on one of my carefully architected side projects, it usually works extremely well, never tries to hack around a problem. It's like having good code lets you tap into a different part of its training set. It's not just because your architecture is easier to build on top, but also it follows existing coding conventions better and always addresses root causes, no hacks. Its code style looks more like that of a senior dev. You need to keep the feature requests specific and short though.
There is a strange dynamic currently at play in the software labour market where demand is so huge that the market can bear completely inefficient coders, even though the difference between a good and a bad software engineer is literally orders of magnitude.
Quite a few times I encountered programmers "in the wild" - in a sauna, on the bus etc, and overheard them talking about their "stack". You know the type, node.js in a docker container. I cannot fathom the amount of money wasted at places that employ these people.
I also project that actually, if we adopt LLMs correctly, these engineers (which I would say constitute a large percentage) will disappear. The age of useless coding and infinite demand is about to come to an end. What will remain is specialist engineer positions (base infra layer, systems, hpc, games, quirky hardware, cryptographers etc). I'm actually kind of curious what the effect on salary will be for these engineers, I can see it going both ways.
Ctrl-F 'code', 0 results
What is this comment about?
It's a confusing comment. I misinterpreted it myself too originally.
No idea why you'd want this in a normal job, but the capabilities are here.
Too much is on the line here regardless of what ultimately ends up being true or just hype.
While in the last year I’ve seen generated images go from complete slop to indistinguishable from real photos. It’s hard to know what is right around the corner and what isn’t even close.
Before someone says "but benchmarks don't reflect the real world..." please name what metric you think is meaningful if not benchmarks. Token consumption? OpenAI/Anthropic revenue?
This will never change because you can only use an LLM to generate code (or any other type of output) you already know how to produce and are expert at - because you can never trust the output.
W.r.t. code changes, especially small ones (say 50 lines spread across 5 files): if you can't get an agent to make nearly exactly the code changes you want, just faster than you, that's a you problem at this point. If it would maybe take you 15 minutes, grok-code-fast-1 can do it in 2.
If you're creating basic CRUDs, what on earth are you doing? That kind of thing should have been automated a long time ago.
It is true that LLMs make it easier to build these kinds of things without having to become a competent programmer first.
Job satisfaction and human flourishing
By those metrics, AI is getting worse and worse
AI is able to speed up the progress, to give more resources, to give the most important thing people have - time. The fact that these incredible gifts are misused (or used inefficiently) is not the problem of AI. This would be like complaining that the objective positive of increased food production is actually a negative, because people are getting fatter.
1. Is there steady progress in AI?
2. What example do you need? In every single benchmark AI is getting better and better.
3. Job satisfaction and human flourishing.
Hence my answer "AI is very satisfied in doing the job, just ask it". It came about because of the stupid comment 3, which tried to link and put blame on unrelated things (akin to referring to obesity when asked what metrics make him say that agriculture/transportation have not made progress in the last 100 years) and at the same time anthropomorphized AI. I only accepted the premise and continued answering on the same level in order to demonstrate the stupidity of their answer.
The figures for cost are wildly off to start with.
But would you rather be a horse in 1920 or 2020? Wouldn't you rather have modern medicine, better animal welfare laws, less exposure to accidents, and so on?
The only way horses conceivably have it worse is that there are fewer of them (a kind of "repugnant conclusion")...but what does that matter to an individual horse? No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion. We care more about the welfare of the 9 billion.
I have met some transhumanists and longtermists who would really like to see some orders of magnitude increase in the human population. Maybe they wouldn't say "tragedy", but they might say "burning imperative".
I also don't think it's clearly better for more beings to exist rather than fewer, but I just want to assure you that the full range of takes on population ethics definitely exists, and it's not simply a matter of straightforward common sense how many people (or horses) there ought to be.
> Then in December, Claude finally got good enough to answer some of those questions for us.
What getting high on your own supply actually looks like. These are not the types of questions most people have or need answered. It's unique to the hiring process and the nascent status of the technology. It seems insane to stretch this logic to literally any other arena.
On top of that horses were initially replaced with _stationary_ gasoline engines. Horses:Cars is an invalid view into the historical scenario.
And that really is the entire question at this point: Which domains will AI win in by a sufficient margin to be worth it?
This is an assumption for the best-case scenario, but I think you could also just take the marginal case. Steady progress builds until you get past the state of the art system, and then the switch becomes easy to justify.
AI is like that, but instead with dudes in slim fitting vests blogging about alignment
AI, faster please!
And not very long after, 93 per cent of those horses had disappeared.
I very much hope we'll get the two decades that horses did."
I'm reminded of the idiom "be careful what you wish for, as you might just get it." Rapid technological change has historically led to prosperity over the long term but not in the short term. My fear is that the pace of change this time around is so rapid that the short-term destruction will not be something that can be recovered from even over the longer term.
4,500,000 in 1959, and even an increase to 7,000,000 in 1968, largely due to an increase in the recreational horse population.
https://time.com/archive/6632231/recreation-return-of-the-ho...
So that recreational existence at the leisure of our own machinery seems like an optional future humans can hope for too.
Turns out the chart is about farm horses only as counted by the USDA not including any recreational horses. So this is more about agricultural machinery vs. horses, not passenger cars.
---
City horses (the ones replaced by cars and trucks) were nearly extinct by 1930 already.
City horses were formerly almost exclusively bred on farms but because of their practical disappearance such breeding is no longer necessary. They have declined in numbers from 3,500,000 in 1910 to a few hundred thousand in 1930.
https://www2.census.gov/library/publications/decennial/1930/...
At least currently humans do not need AI to reproduce.
Pray it’s still humans who ask these kinds of questions about AI, not the other way around.
That's what Sandy over the road (born 1932, died last year) used to hitch up every morning at 4am, when he was ten, to sled a tank of water back to the farm from the local spring.
1. we aren’t good at building cars yet,
2. they break down so often that using horses often still ends up faster,
3. we have dirt tracks and feed stations for horses but have few paved roads and are not producing enough gasoline.
I think that it's true that governments want the efficiency gains but it's false that they don't anticipate the consumption increases. Nobody is spending trillions on datacenters without knowing that demand will increase, that doesn't mean we shouldn't make them efficient.
Ambivalent??
What exactly does engine efficiency specifically have to do with horse usage? Cars like the Ford Model T entered mass production somewhere around 1908. Oh, and would you look at the horse usage graph around that date! sigh
The chess ranking graph seems to be just a linear relationship?
> This pink line, back in 2024, was a large part of my job. Answer technical questions for new hires.
>
> Claude, meanwhile, was now answering 30,000 questions a month; eight times as many questions as me & mine ever did.
So more == better. sigh. Ran any, you know, studies to see the quality of those answers? I too can consult /dev/random for answers at a rate of gigabytes per second!
> I was one of the first researchers hired at Anthropic.
Yeah. I can tell. Somebody's high on their own supply here.
So I guess we should check to see if computers are good at scaling or doing things concurrently. If not, no worries!
> Then in December, Claude finally got good enough to answer some of those questions for us.
> … Six months later, 80% of the questions I'd been being asked had disappeared.
Interesting implications for how to train juniors in a remote company, or in general:
> We find that sitting near teammates increases coding feedback by 18.3% and improves code quality. Gains are concentrated among less-tenured and younger employees, who are building human capital. However, there is a tradeoff: experienced engineers write less code when sitting near colleagues.
https://pallais.scholars.harvard.edu/sites/g/files/omnuum592...
Is it really possible to make this claim given the vast sums of money that have gone in to AI/LLM training?
Early factories were expensive, too (compared to the price of a horse), but that was never a show-stopper.
1. The release of Claude Code in February
2. The release of Opus 4.5 two weeks ago
In both of these cases, it felt like no big new unlocks were made. These releases aren’t like OpenAI’s o1, where they introduced reasoning models with entirely new capabilities, or their Pro offerings, which still feel like the smartest chatbots in the world to me.
Instead, these releases just brought a new user interface, and improved reliability. And yet these two releases mark the biggest increases in my AI usage. These releases caused the utility of AI for my work to pass thresholds where Claude Code became my default way to get LLMs to read my code, and then Opus 4.5 became my default way to make code changes.
- Radical massive multimodality. We perceive the world through many wide-band high-def channels of information. Computer perception is nowhere near. Same for ability to "mutate" the physical world, not just "read" it.
- Being able to be fine-tuned constantly (learn things, remember things) without "collapsing". Generally having a smooth transition between the context window and the weights, rather than fundamental irreconcilable difference.
These are very difficult problems. But I agree with the author that the engine is in the works and the horses should stay vigilant.
Also maybe go out for some fresh air. Maybe knowledge work will go down for humans, but plumbing and such will take much longer since we'll need dextrous robots.
plenty of charts you can look at - net productivity by virtually any metric vs real adjusted income. the examples I like are kiosks and self-checkout. who has encountered one at a place where it is cheaper than its main rival, and where that is directly attributed (by the company or otherwise) to lower prices?? in my view all it did was remove some jobs. that's the preview. that's it. you will lose jobs and you will pay more. congrats.
even with year 2020 tech you could automate most of the work that needs to be done, if our industry didn't endlessly keep disrupting itself and had a little bit of discipline.
so once ai destroys desk jobs and the creative jobs, then what? chill out? too bad anyone who has a house won't let more be built.
Compare sorting by median vs average to get a sense of the issue; https://en.wikipedia.org/wiki/List_of_countries_by_wealth_pe...
This is a recent development where the median wealth of citizens in progressively taxed nations has quickly overtaken the median wealth of USA citizens.
All it takes is a tax on the extremely wealthy and lessening taxes on the middle class… seems obvious, right? Yet things have consistently been going the other way for a long time in the USA.
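A toy illustration of that median-vs-average point (the wealth figures below are purely hypothetical, just to show the mechanism): one outlier household inflates the mean while the median barely moves.

    import statistics

    # Purely hypothetical household wealth figures, in thousands of dollars.
    # One very wealthy household dominates the mean but not the median.
    wealth = [40, 55, 60, 70, 80, 90, 110, 130, 150, 25_000]

    print(f"mean:   {statistics.mean(wealth):,.1f}k")    # 2,578.5k - pulled up by the outlier
    print(f"median: {statistics.median(wealth):,.1f}k")  # 85.0k - what a typical household holds

So two countries with the same average wealth can look completely different to their typical citizen, which is why the sort-by-median view in that table tells a different story.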
If all that fails, they have their underground bunkers on faraway islands and/or backup citizenships.
Agreed and I think this is a result of a naive belief that we humans tend to have that controlling thoughts can control reality. Politicians still live by this belief but eventually reality and lived experience does catch up. By that time all trust is long gone.
The richest of the rich have purchased islands where they can hole up.
The bunkers are in case of nuclear war or serious pandemics. Absolutely worst case last resort scenario, not just "oh I don't care if I end up there"
People usually change their behavior after some pretty horrific events. So I would predict something like that in future. For both Europe and US too.
You could tax 100% of all of the top 1%'s income (not progressively, just a flat 100% tax) and it'd cover less than double the federal government's budget deficit in the US. There would be just enough left over to pay for making the COVID-19 ACA subsidies permanent and a few other pet projects.
Of course, you can't actually tax 100% of their income. In fact, you'd need higher taxes on the top 10% than anywhere else in the West to cover the deficit, significantly expand social programs to have an impact, and lower taxes on the middle class.
It should be pointed out that Australia has higher taxes on its middle class than the US does. It tops out at 45% (plus 2% for Medicare) for anyone at $190k or above.
If you live in New York City, and you're in the top 1% of income earners (taking cash salary rather than equity options) you're looking at a federal tax rate of 37%, a state tax rate of 10.9%, and a city income tax rate of 3.876% for a total of 51.77%. Some other states have similarly high tax brackets, others are less, and others yet use other schemes like no income tax but higher sales and property taxes.
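(For what it's worth, that 51.77% is just the straight sum of the three top marginal rates quoted above; a back-of-the-envelope figure that ignores lower brackets, deductions, and any deductibility of state and local tax. A quick check:)

    # Naive sum of the quoted top marginal rates: federal + NY state + NYC city.
    # Back-of-the-envelope only; ignores lower brackets and deductions.
    federal, ny_state, nyc_city = 0.37, 0.109, 0.03876
    print(f"{federal + ny_state + nyc_city:.3%}")  # 51.776%, i.e. the ~51.77% quoted above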
Not quite so obvious when you look closer at it.
How much of the current burden is shouldered by the middle class? How much by the 1%? How does that compare to other Western nations? What measurable effect would raising this on the 1% be? What about the middle class?
Tech and AI have taken off in the US partially because they're in the domain of software, which hasn't been regulated to the point of deliberate inefficiency like other industries in the US.
(I pick this example because our regulation of insurance companies has (unintuitively) incentivized them to pay more for care. So it’s an example of poor regulation imo)
We've blundered into a system that has the worst parts of socialized health care and private health insurance without any of the benefits.
Stuff like this isn't Wall Street or Billionaires or whatever bogeyman - it's our neighbors: https://bendyimby.com/2024/04/16/the-hearing-and-the-housing...
However regulation is helpful for those already sick or with pre-existing conditions. Developed countries with well-regulated systems also have better health outcomes than the US does.
What do you mean? Several Asian cities have housing crises far worse than the US in local purchasing power, and I'd even argue that a "cheap" home in many Asian countries is going to be of a far lower quality than a "cheap" home in the US.
As it is now anyone with assets is only barely affected by inflation while those who earn a living from wages have their livelihood eroded over time covertly.
The key issue upstream is that too many good jobs are concentrated in too few places, which leads to consumerism stimulating those places and making them even more attractive. Technology, through Covid, actually gave governments a get-out-of-jail-free card by letting remote work become more mainstream, only for them to not grasp the golden egg they were given. Pivoting economies more actively toward remote work helps distribute people to other places with more affordable homes. Over time, and again slowly, those places become more attractive because people now actually live there.
Existing homeowners can still wrap themselves in the warm glow of their high house prices which only loses "real" value through inflation which people tend not to notice as much.
But we decided to try to go back to the status quo so oh well
- House prices increasing while wages are stagnant
- Home loans and increasing prices mean the people going for huge leverages on their home purchases
- Supply is essentially government-controlled and government-dependent, and building more housing is heavily politicized
- A lot of dubious money is being created, which gets converted to good money by investing it in the housing market
- Housing is genuinely difficult to build and labor and capital intensive
> The key issue upstream is that too many good jobs are concentrated in too few places
This is no longer the case with remote work on the rise. If it were, housing prices would increase faster in trendy, overpriced places, but the increase as of late has been more uniform, with places like London growing more slowly (or even depreciating, relatively speaking) compared to less in-demand places.
physical products & energy are the two things that are relevant to people's wellbeing.
right now AI is sucking up the energy & the RAM - so is it gonna translate into a net positive?
Think of it another way. It's not that these things are more expensive. It's that the average US worker simply doesn't provide anything of value. China provides the things of value now. How the government corrected for this was to flood the economy with cash. So it looks like things got more expensive, when really it's that wages reduced to match reality. US citizens selling each other lattes back and forth, producing nothing of actual value. US companies bleeding people dry with fees. The final straw was an old man uniting the world against the USA instead of against China.
If you want to know where this is going, look at Britain: the previous world super power. Britain governed far more of the earth than the USA ever did, and now look at it. Now the only thing it produces is ASBOs. I suppose it also sells weapons to dictators and provides banking to them. That is the USA's future.
If you were to buy that same house today, your mortgage would be about $5,100/month, about 6 weeks of pay.
And the reason is exactly what you're saying: the average US worker doesn't provide as much value anymore. Just as her factory job got optimized/automated, AI is going to do the same for many. Tech workers were expensive for a while and now they're not. The problem is that there seems to be less and less opportunity where one can bring value. The only true winners are the factory owners and AI providers in this scenario. The only chance anybody has right now is to cut the middleman out, start their own business, and pray it takes off.
That's just an example, but the pattern will easily repeat. One thing that came out of the post-pandemic era is that the lowest deciles saw the biggest rises in income. Consequently, things like Doordash became more expensive, and stuff like McDonald's stopped staffing as much.
This isn't some grand secret, but most Americans who post on Twitter, HN, or Reddit consider the results some kind of tragedy, though it is the natural thing that happens when people become much higher income: you can't hire many of them to do low-productivity jobs like bus a McD's table.
That's what life looks like when others get richer relative to you. You can't consume the fruits of their labor for cheap. And they will compete for you with the things that you decided to place supply controls on. The highly-educated downwardly-mobile see this most acutely, which is why you see it commonly among the educated children of the past elite.
So the young want cheap, affordable housing right in the middle of Manhattan; never going to happen.
Pretty much everything gets more expensive, with the outlier being tech, which has gotten much cheaper, mostly because the rate at which it progresses is faster than the rate at which governments can print money. But everything we need to survive, like food and housing, keeps getting more expensive. And the asset-owning class gets richer as a result.
I am not an AI sceptic.. I use it for coding. But this article is not compelling.
I don't think it applies to general human intelligence - yet.
With this setup, you would need batteries that can sustain load for weeks on end, in many parts of the world.
I'm willing to believe the hype on LLMs except that I don't see any tiny 1-senior-dev-plus-agents companies disrupting the market. Maybe it just hasn't happened "yet"... But I've been kind of wondering the same thing for most of 2025.
Glad I noticed that footnote.
Article reeks of false equivalences and incorrect transitive dependencies.
And they often do it at the expense of the rest of us
The big thing this AI boom has shown us, which we can all be thankful to have seen, is what a human in a box will eventually look like. Being in the first generation of humans able to see that is a super lucky experience to have.
Maybe it's one massive breakthrough away or maybe it's dozens away. But there is no way to predict when some massive breakthrough will occur. Ilya said 5-20, but that really just means we don't know.
Horses and cars had a clearly defined, tangible, measurable purpose: transport... they were 100% comparable as a market good, and so predicting an inflection point is very reasonable. Same with Chess, a clearly defined problem in finite space with a binary, measurable outcome. Funny how Chess AI replacing humans in general was never considered as a serious possibility by most.
Now LLMs, what is their purpose? What is the purpose of a human?
I'm not denying some legitimate yet tedious human tasks are to regurgitate text... and a fuzzy text predictor can do a fairly good job of that at less cost. Some people also think and work in terms of text prediction more often than they should (that's called bullshitting - not a coincidence).
They really are _just_ text predictors, ones trained on such a humanly incomprehensible quantity of information as to appear superficially intelligent, as far as correlation will allow. It's been 4 years now, we already knew this. The idea that LLMs are a path to AGI and will replace all human jobs is so far off the mark.
Where did they go?
> In Pursuit of Honor is a 1995 American made-for-cable Western film directed by Ken Olin. Don Johnson stars as a member of a United States Cavalry detachment refusing to slaughter its horses after being ordered to do so by General Douglas MacArthur. The movie follows the plight of the officers as they attempt to save the animals that the Army no longer needs as it modernizes toward a mechanized military.
I really doubt horses would be ambivalent about this, let alone about anything. Or maybe I'm wrong, they were in two minds: oh dear I'm at risk of being put to sleep, or maybe it could lead to a nice long retirement out on a grassy meadow. But they're in all likelihood blissfully unaware.
No one wants to say the scary potential logical conclusion of replacing the last value where humans have a competitive advantage, that being intelligence and cognition. For example, there is one future scenario of humanity where only the capital and resource holders survive; the middle and lower classes become surplus to requirements and lose any power. It's already happening slowly via inflation and higher asset prices after all - it is a very real possibility. I don't think a revolution will be possible in this scenario; with AI and robotics the rich could effectively outnumber pretty much everyone.
(A parenthetical comment explaining where he ballparked the measurements for himself, the "cheapest human labor," and Claude numbers would also have supported the argument, and some writers, especially web-focused nerd-type writers like Scott Alexander, are very good at this, but text explanations, even in parentheses, have a way of distracting readers from your main point. I only feel comfortable writing one now because my main point is completed.)
The article is a Misanthropic advertisement. The "AI" mafia feels that no one wants their products and doubles down.
They are so desperate that Pichai is now talking about data centers in space on Fox News. Next up are "AI" space lasers.
> I very much hope we'll get the two decades that horses did.
> But looking at how fast Claude is automating my job, I think we're getting a lot less.
This "our company is onto the discovery that will put you all out of work (or kill you?)" rhetoric makes me angry.
Something this powerful and disruptive (if it is such) doesn't need to be owned or controlled by a handful of companies. It makes me hope the Chinese and their open source models ultimately win.
I've seen Anthropic and OpenAI employees leaning into this rhetoric on an almost daily basis since 2023. Less so OpenAI lately, but you see it all the time from these folks. Even the top leadership.
Meanwhile Google, apart from perhaps Kilpatrick, is just silent.
Meanwhile, my own office is buried in busywork that no AI tools currently on the market will do for us, and AI entering a space sometimes increases busywork workloads. For example, when writing descriptions of publications or listings for online sales, we have to put more effort now into not sounding like it was AI-generated or we will lose sales. The AI tools for writing descriptions / generating listings are not very helpful either. (An inaccurate listing/description is a nightmare.)
I was able to help set up a client with AI tools to help him generate basically a faux website in a few hours that has lots of nice graphic design, images, etc. so that his new venture looks like a real company. Well, except for the "About Us" page that hallucinated an executive team plus a staff of half a dozen employees. So I guess work like that does get done faster now.
As the potential of AI technical agents has gone from an interesting discussion to extraordinarily obvious as to what the outcome is going to be, HN has comically shifted negative in tone on AI. They doth protest too much.
I think it's a very clear case of personal bias. The machines are rapidly coming for the lucrative software jobs. So those with an interest in protecting lucrative tech jobs are talking their book. The hollowing out of Silicon Valley is imminent, as other industrial areas before it. Maybe 10% of the existing software development jobs will remain. There's no time to form powerful unions to stop what's happening, it's already far too late.
I have noticed, however, that people who are either not programmers or who are not very good programmers report that they can derive a lot of benefit from AI tools, since now they can make simple programs and get them to work. The most common use case seems to be some kind of CRUD app. It's very understandable this seems revolutionary for people who formerly couldn't make programs at all.
For those of us who are busy trying to deliver what we've promised customers we can do, I find I get far less use out of AI tools than I wish I did. In our business we really do not have the budget to add another senior software engineer, and we don't have the spare management/mentor/team lead capacity to take on another intern or junior. So we're really positioned to be taking advantage of all these promises I keep hearing about AI, but in practical terms, it saves me at an architect or staff level maybe 10% of my time and for one of our seniors maybe 5%.
So I end up being a little dismissive when I hear that AI is going to become 80% of GDP and will be completely automating absolutely everything, when what I actually spend my day on is the same-old same-old of trying to get some vendor framework to do what I want to get some sensor data out of their equipment and deliver apps to end customers that use enough of my own infrastructure that they don't require $2,000 a month of cloud hosting services per user. (I picked that example since at one customer, that's what we were brought in to replace: that kind of cost simply doesn't scale.)
I think AI tools are great, and I use them daily and know their limits. Your view is commonly held by management or execs who don't have their boots on the ground.
I don't think you can characterise it as a sentiment of the community as a whole. While every AI thread seems to have its share of AI detractors, the usernames of the posters are becoming familiar. I think it might be more accurate to say that there is a very active subset of users with that opinion.
This might hold true for the discourse in the wider community. You see a lot of coverage about artists outraged by AI, but when I speak to artists they have a much more moderate opinion. Cautious, but intrigued. A good number of them are looking forward to a world that embraces more ambitious creativity. If AI can replicate things within a standard deviation of the mean, the abundance of that content will create an appetite for something further out.
But the temptation of easy ideas cuts both ways. "Oldsters hate change" is a blanket dismissal, and there are legitimate concerns in that body of comments.