Then there's a tendency to be so 'anti' that there's an assumption that anyone reporting that the tools are accomplishing truly impressive and useful things must be an 'AI booster' or shill. Or they assume that person must not have been a very good engineer in the first place, etc.
Really is one of those examples of the quote, "In the beginner's mind there are many possibilities, but in the expert's mind there are few."
It's a rapidly evolving field, and unless you actually spend some time kicking the tires on the models every so often, you're just basing your opinions on outdated experiences or what everyone else is saying about it.
And often when the two are arguing it's tricky to tell which is which, because whether it can do something isn't totally black and white: there are some things it can sometimes do, and you can argue either way about whether those count as being within its capabilities.
Are the models that exist today a "true Scotsman" for you?
I just don’t see that much of a difference coding either with Claude 4 or Gemini 2.5 pro. Like they’re all fine but the difference isn’t changing anything in what I use them for. Maybe people are having more success with the agent stuff but in my mind it’s not that different than just forking a GitHub repo that already does what you’re “building” with the agent.
The only people you’re excluding are the people who are forced to use it, and the random sampling of people who happened to try it recently.
So it may have been accidental or indirect, but yes, no true Scotsman would apply to your statement.
Yes, but the claims do not. When the hypemen were shouting that GPT-3 was near-AGI, it still turned out to be absolute shit. When the hypemen were claiming that GPT-3.5 was thousands of times better than GPT-3 and beating all highschool students, it turned out to be a massive exaggeration. When the hypemen claimed that GPT-4 was a groundbreaking innovation and going to replace every single programmer, it still wasn't any good.
Sure, AI is improving. Nobody is doubting that. But you can only claim to have a magical unicorn so many times before people stop believing that this time you might have something different than a horse with an ice cream cone glued to its head. I'm not going to waste a significant amount of my time evaluating Unicorn 5.0 when I already know I'll almost certainly end up disappointed.
Perhaps it'll be something impressive in a decade or two, but in the meantime the fact that Big Tech keeps trying to shove it down my throat even when it clearly isn't ready yet is a pretty good indicator to me that it is still primarily just a hype bubble.
I agree it will probably be something in a decade, but right now, while it has some interesting concepts, I do notice upon successive iterations of chat responses that it's got a ways to go.
It reminds me of Tesla car owners buying into the self-driving terminology. Yes, the driver-assistance technology has improved quite a bit since cruise control, but it's a far cry from self-driving.
For example, I dismissed AI three years ago because it couldn’t do anything I needed it to. Today I use it for certain things and it’s not quite capable of other things. Tomorrow it might be capable of a lot more.
Yes, priors have to be updated when the ground truth changes, and the capabilities of AI change rapidly. This is how chess engines on supercomputers became competitive in the 90s, then hybrid human-machine systems were the leading edge, and then machines took over for good and never looked back.
So I would say the overall service provided is better than it was, thanks to functions being built based on user queries, but not the actual LLM models themselves.
It is also true that the tooling and context management has gotten more sophisticated (often using models, by the way). That doesn’t negate that the models themselves have gotten better at reliable tool calling, so that the LLM is driving more of the show rather than purpose-built coordination bolted around it, and that the codegen quality is higher than it used to be.
Not as many as on HN. "Influencers" have agendas and a stream of income, or other self-interest, to protect. HN always comes off as a monolith, on any subject. Counter-arguments get ignored and downvoted to oblivion.
There are also those of us who have used them substantially, and seen the damage that causes to a codebase in the long run (in part due to the missing gains of having someone who understands the codebase).
There are also those of us who just don’t like the interface of chatting with a robot instead of just solving the problem ourselves.
There are also those of us who find each generation of model substantially worse than the previous generation, and find the utility trending downwards.
There are also those of us who are concerned about the research coming out about the effects of using LLMs on your brain and cognitive load.
There are also those of us who appreciate craft, and take pride in what we do, and don’t find that same enjoyment/pride in asking LLMs to do it.
There are also those of us who worry about offloading our critical thinking to big corporations, and becoming dependent on a pay-to-play system that is currently being propped up by artificially lowered prices, with “RUG PULL” written all over them.
There are also those of us who are really concerned about the privacy issues, and don’t trust companies hundreds of billions of dollars in debt to some of the least trustworthy individuals with that data.
Most of these issues don’t require much experience with the latest generation.
I don’t think the intention of your comment was to stir up FUD, but I feel like it’s really easy for people to walk away with that from this sort of comment, so I just wanted to add my two cents and tell people they really don’t need to be wasting their time every 6 weeks. They’re really not missing anything.
Can you do more than a few weeks ago? Sure? Maybe? But I can also do a lot more than I was able to a few weeks ago without using an LLM. I’ve learned and improved myself.
Chances are if you’re not already using an LLM it’s because you don’t like it, or don’t want to, and that’s really ok. If AGI comes out in a few months, all the time you would have invested now would be out of date anyways.
There’s really no rush or need to be tapped in.
Yep, this is me. Every time people are like "it's improved so much" I feel like I'm taking crazy pills as a result. I try it every so often, and more often than not it still has the same exact issues it had back in the GPT-3 days. When the tool hasn't improved (in my opinion, obviously) in several years, why should I be optimistic that it'll reach the heights that advocates say it will?
I’m open to programming with LLMs, and I’m entirely fine with people using them and I’m glad people are happy. But this insistence that progress is so crazy that you have to be tapped in at all times just irks me.
LLM models are like iPhones. You can skip a couple of versions, it’s fine; you will have the new version at the same time, with all the same functionality, as everyone else buying one every year.
Another sign that staying tapped in is needed.
> AI is exceptional for coding! [high-compute scaffold around multiple instances / undisclosed IOI model / AlphaEvolve]
> AI is awesome for coding! [GPT-5 Pro]
> AI is somewhat awesome for coding! ["gpt-5" with verbosity "high" and effort "high"]
> AI is pretty good at coding! [ChatGPT 5 Thinking through a Pro subscription with Juice of 128]
> AI is mediocre at coding! [ChatGPT 5 Thinking through a Plus subscription with a Juice of 64]
> AI sucks at coding! [ChatGPT 5 auto routing]
I have code to write
1) LLMs are controlled by BigCorps who don’t have user’s best interests at heart.
2) I don’t like LLMs and don’t use them because they spoil my feeling of craftsmanship.
3) LLMs can’t be useful to anyone because I “kick the tires” every so often and am underwhelmed. (But what did you actually try? Do tell.)
#1 is obviously true and is a problem, but it’s just capitalism. #2 is a personal choice, you do you etc., but it’s also kinda betting your career on AI failing. You may or may not have a technical niche where you’ll be fine for the next decade, but would you really in good conscience recommend a juniorish web dev take this position? #3 is a rather strong claim because it requires you to claim that a lot of smart reasonable programmers who see benefits from AI use are deluded. (Not everyone who says they get some benefit from AI is a shill or charlatan.)
After all, I can always pick up LLMs in the future. If a few weeks is long enough for all my priors to become stale, why should I have to start now? Everything I learn will be out of date in a few weeks. Things will only be easier to learn 6, 12, 18 months from now.
Also nowhere in my post did I say that LLMs can’t be useful to anyone. In fact I said the opposite. If you like LLMs or benefit from them, then you’re probably already using them, in which case I’m not advocating anyone stop. However there are many segments of people whom LLMs are not for. No tool is a panacea. I’m just trying to nip any FUD in the bud.
There are so many demands for our attention in the modern world to stay looped in and up to date on everything; I’m just here saying don’t fret. Do what you enjoy. LLMs will be here in 12 months. And again in 24. And 36. You don’t need to care now.
And yes I mentor several juniors (designers and engineers). I do not let them use LLMs for anything and actively discourage them from using LLMs. That is not what I’m trying to do in this post, but for those whose success I am invested in, who ask me for advice, I quite confidently advise against it. At least for now. But that is a separate matter.
EDIT: My exact words from another comment in this thread prior to your comment:
> I’m open to programming with LLMs, and I’m entirely fine with people using them and I’m glad people are happy.
How does someone reconcile a faith that AI tooling is rapidly improving with the contradictory belief that there is some permanent early-adopter benefit?
I agree very strongly with the poster above yours: If these tools are so good and so easy to use then I will learn them at that time
Otherwise the idea that they are saving me time is likely just hype and not reality, which matches my experience
I understand all of what you said, but I can't get over that fact that the term AI is being used for these architectures. It seems like the industry is just trying to do a cool parlor trick in convincing the masses this is somehow AI from science fiction.
Maybe I'm being overly cynical, but a lot of this stinks.
If you gave a random sci-fi writer from 1960s access to Claude, I'm fairly sure they wouldn't have any doubts over whether it is AI or not. They might argue about philosophical matters like whether it has a "soul" etc (there's plenty of that in sci-fi), but that is a separate debate.
It can do a lot of things very effectively. High-reliability semantic parsing of images is just one thing that modern LLMs are very good at.
But this other attribution people are making, that it's going to achieve (the marketing term) AGI and everything will be awesome, is clearly bullshit.
I had a professor in AI who was only working on symbolic systems such as SAT solvers, Prolog etc., and the combination of the two approaches seems really promising.
Oh, and what would be really nice is another level of memory or fast learning ability that goes beyond burning in knowledge through training alone.
There was also wide understanding that those architectures were trying to imitate small bits of what we understood was happening in the brain (see Marvin Minsky's Perceptrons, etc.). The hope was, as I understood it, that there would be some breakthrough in neuroscience that would let the computer scientists pick up the torch and simulate what we find in nature.
None of that seems to be happening anymore and we're just interested in training enough to fool people.
"AI" companies investing in brain science would convince me otherwise. At this point they're just trying to come up with the next money printing machine.
We are indeed simulating what we find in nature when we create neural networks and transformers, and AI companies are indeed investing heavily in BCI research. ChatGPT can write an original essay better than most of my students. It's also artificial. Is that not artificial intelligence?
Hiding the training data behind gradient descent and then making attributions to the program that responds using this model is certainly artificial though.
This analogy just isn't holding water.
They're still pretty dumb if you want them to do anything (i.e. with MCPs), but they're not bad at writing and code.
I found I had better luck with ChatGPT 3.5's coding abilities. What the newer models are really good at, though, is doing the high level "thinking" work and explaining it in plain English, leaving me to simply do the coding.
It’s incredibly powerful and will just clearly be useful. I don’t believe it’s going to replace intelligence or people but it’s just obviously a remarkable tool.
But I think at least part of the dynamic is that the SV tech hype booster train has been so profoundly full of shit for so long that you really can’t blame people for skepticism. Crypto was and is just a giant and elaborate grift, to name one example. Also guys like Altman are clearly overstating the current trajectory.
The dismissive response does come with some context attached.
They are still full of shit about LLMs, even if it is useful.
And that all wouldn't be a problem if it wasn't for the wave of bots that makes the crypto wave seem like child's play.
If you really in good faith want to understand where people are coming from when they talk about huge productivity gains, then I would recommend installing Claude Code (specifically that tool) and asking it to build some kind of small project from scratch. (The one I tried was a small app to poll a public flight API for planes near my house and plot the positions, along with other metadata. I didn't give it the api schema at all. It was still able to make it work.) This will show you, at least, what these tools are capable of -- and not just on toy apps, but also at small startups doing a lot of greenfield work very quickly.
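For a sense of scale, the whole task comes out to roughly this much code. Below is a minimal sketch of the sort of script such a prompt tends to yield; I'm assuming the free OpenSky Network API and made-up home coordinates, since the original comment doesn't say which flight API or location was used:

    # A rough sketch of the sort of script such a prompt tends to produce.
    # Assumptions: the free OpenSky Network API and made-up home coordinates;
    # the original comment doesn't say which flight API or location was used.
    import time
    import requests
    import matplotlib.pyplot as plt

    HOME_LAT, HOME_LON = 51.50, -0.12   # hypothetical "my house" coordinates
    BBOX = dict(lamin=HOME_LAT - 0.5, lomin=HOME_LON - 0.5,
                lamax=HOME_LAT + 0.5, lomax=HOME_LON + 0.5)

    def fetch_states():
        # Poll OpenSky's states/all endpoint for aircraft inside the bounding box.
        resp = requests.get("https://opensky-network.org/api/states/all",
                            params=BBOX, timeout=10)
        resp.raise_for_status()
        return resp.json().get("states") or []

    plt.ion()
    fig, ax = plt.subplots()
    while True:
        states = fetch_states()
        # In each state vector, index 5 is longitude and index 6 is latitude.
        lons = [s[5] for s in states if s[5] is not None and s[6] is not None]
        lats = [s[6] for s in states if s[5] is not None and s[6] is not None]
        ax.clear()
        ax.scatter(lons, lats, marker="^", label="aircraft")
        ax.scatter([HOME_LON], [HOME_LAT], marker="*", s=200, label="home")
        ax.set_xlabel("longitude"); ax.set_ylabel("latitude")
        ax.set_title(f"{len(lats)} aircraft nearby")
        ax.legend(loc="upper right")
        plt.pause(0.1)
        time.sleep(15)   # be polite to the free API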
Most of us aren't doing that kind of work, we work on large mature codebases. AI is much less effective there because it doesn't have all the context we have about the codebase and product. Sometimes it's useful, sometimes not. But to start making that tradeoff I do think it's worth first setting aside skepticism and seeing it at its best, and giving yourself that "wow" moment.
One of the first three projects I tried was a spin on a to-do app. The buttons didn't even work when clicked.
Yes, I keep it iterating, give it a puppeteer MCP, etc.
I think you're just misunderstanding how hard it is to make a greenfield project when you have the super-charged Stack Overflow that AI is.
Greenfield projects aren't hard, what's hard is starting them.
What AI has helped me immensely with is blank page syndrome. I get it to spit out some boilerplate for a SINGLE page, then boom, I have a new greenfield project 95% my own code in a couple of days.
That's the mistake I think you 10x-ers are making.
And you're all giddy and excited and are putting in a ton of work without realising you're the one doing the work, not the AI.
And you'll eventually burn out on that.
And those of us who are a bit more skeptical are realising we could have done it on our own, faster, we just wouldn't normally have bothered. I'd have done some gardening with that time instead.
My recommendation was that it's useful to try the tools on greenfield projects, since then you can see them at their best.
The productivity improvements of AI for greenfield projects are real. It's not all bullshit. It is a huge boost if you're at a small startup trying to find product market fit. If you don't believe that and think it would be faster to do it all manually I don't know what to tell you - go talk to some startup founders, maybe?
1.2x was self-reported, but when measured, developers were actually 0.85x-ers using AI.
Personally I've found that it struggles if you're using a language that is off the beaten path. The more content on the public internet that the model could have consumed, the better it will be.
> It's hardly even useful for coding.
I’m curious what kind of projects you’re writing where AI coding agents are barely useful.
It’s the “shills” on YouTube that keep me up to date with the latest developments and best practices to make the most of these tools. To me it makes tools like CC not only useful but indispensable. Now I do not focus on writing the thing, but I focus on building agents who are capable of building the thing with a little guidance.
I’ve got a modest tech following and you wouldn’t believe the amount I’m offered to promote the most garbage AI companies.
But there are plenty of us who try and walk a middle course. A lot of us have changed our opinions over time. ("When the facts change, I change my mind.") I didn't think AI models were much use for coding a year ago. The facts changed. (Claude Code came out.) Now I do. Frankly, I'd be suspicious of anyone who hasn't changed their opinions about AI in the last year.
You can believe all these things at once, and many of us do:
* LLMs are extremely impressive in what they can do. (I didn't believe I'd see something like this in my lifetime.)
* Used judiciously, they are a big productivity boost for software engineers and many other professions.
* They are imperfect and make mistakes, often in weird ways. They hallucinate. There are some trivial problems that they mess up.
* But they're not just "stochastic parrots." They can model the world and reason about it, albeit imperfectly and not like humans do.
* AI will change the world in the next 20 years
* But AI companies are overvalued at the present time and we're most likely in a bubble which will burst.
* Being in a bubble doesn't mean the technology is useless. (c.f. the dotcom bubble or the railroad bubble in the 19th century.)
* AGI isn't just around the corner. (There's still no way models can learn from experience.)
* A lot of people making optimistic claims about AI are doing it for self-serving boosterish reasons, because they want to pump up their stock price or sell you something
* AI has many potential negative consequences for society and mental health, and may be at least as nasty as social media in that respect
* AI has the potential to accelerate human progress in ways that really matter, such as medical research
* But anyone who claims to know the future is just guessing
I've not seen anything from a model to persuade me they're not just stochastic parrots. Maybe I just have higher expectations of stochastic parrots than you do.
I agree with you that AI will have a big impact. We're talking about somewhere between "invention of the internet" and "invention of language" levels of impact, but it's going to take a couple of decades for this to ripple through the economy.
Early LLMs were like that. That's not what they are now. An LLM got gold on the International Mathematical Olympiad - very difficult math problems that it hadn't seen in advance. You don't do that without some kind of working internal model of mathematics. There is just no way you can get to the right answer by spouting out plausible-sounding sentence completions without understanding what they mean. (If you don't believe me, have a look at the questions.)
> "You don't do that without some kind of working internal model of mathematics."
This is speculation at best. Models are black boxes, even to those who make them. We can't discern a "meaningful internal representation" in a model, anymore than a human brain.
> "There is just no way you can get to the right answer by spouting out plausible-sounding sentence completions without understanding what they mean."
You've just anthropomorphised a stochastic machine, and this behaviour is far more concerning, because it implies we're special, and we're not. We're just highly advanced "stochastic parrots" with a game loop.
They are not pure black boxes. They are too complex to decipher, but it doesn't mean we can't look at activations and get some very high level idea of what is going on.
For world models specifically, the paper that first demonstrated that an LLM has some kind of a world model corresponding to the task it is trained on came out in 2023: https://www.neelnanda.io/mechanistic-interpretability/othell.... Now you might argue that this doesn't prove anything about generic LLMs, and that is true. But I would argue that, given this result, and given what LLMs are capable of doing, assuming that they have some kind of world model (even if it's drastically simplified and even outright wrong around the edges) should be the default at this point, and people arguing that they definitely don't have anything like that should present concrete evidence to that effect.
> We're just highly advanced "stochastic parrots" with a game loop.
If that is your assertion, then what's the point of even talking about "stochastic parrots" at all? By this definition, _everything_ is that, so it ceases to be a meaningful distinction.
Well, it's been changing the world for quite some time, both in good and bad ways. There is no need to add an arbitrary timestamp.
I feel like I see now more dismissive comments than previously. As if people, initially confused, formed a firm belief since. And now new facts don't really change it, just entrench them in chosen belief.
1. When (if) AGI will arrive. It's likely going to be smeared out over a couple months to years, but relative to everything else, it's a historical blip. This really is the most contentious belief with the most variability. It is currently predicted to be 8 years[1].
2. What percentage of jobs will be replaceable with AGI? Current estimates between 80-95% of professions. The remaining professions "culturally require" humans. Think live performance, artisanal goods, in-person care.
3. How quickly will AGI supplant human labor? What is the duration of replacement from inception to saturation? Replacement won't happen evenly, some professions are much easier to replace with AGI, some much more difficult. Let's estimate a 20-30 years horizon for the most stubborn to replace professions.
What we have is a ticking time bomb of labor change at least an order of magnitude greater than the transition from an agricultural economy to an industrial economy or from an industrial economy to a service economy.
Those happened over the course of several generations. Society - culture, education, the legal system, the economy - was able to absorb the changes over 100-200 years. Yet we're talking about a change on the same scale happening 10 times faster - within the timeline of one's professional career. And still, with previous revolutions we had incredible unrest and social change. Taken as a whole, we'll have possibly the majority of the economy operating outside the territory of society, the legal system, and the existing economy. A kid born on the "day" AGI arrives will become an adult in a profoundly different world, as if born on a farm in 1850 and reaching adulthood in a city in 2000.
1. https://www.metaculus.com/questions/5121/date-of-artificial-...
For [2] you have no reference whatsoever. How does AI replace a nurse, a vet, a teacher, a construction worker?
I'm afraid it's really a matter of faith, in either direction, to predict whether an AI can take over the autonomous decision making and robotic systems can take over physical actions which are currently delegated to human professions. And, I think many robotic control problems are inherently solved if we have sufficient AI advancement.
Median forecasts indicated a 50% probability of AI systems being capable of automating 90% of current human tasks in 25 years and 99% of current human tasks in 50 years[1]
The scope of work replaceable by embodied AGI and the speed of AGI saturation are vastly underestimated. The bottlenecks are the production of a replacement workforce, not the retraining of human laborers.
A world of 99 percent of jobs being done by AGI (and there remain no convincing grounds for how this tech would ever be achieved) feels ungrounded in the reality of human experience. Dignity, rank, purpose etc. are irreducible properties of a functional society, which work currently enables.
It's far more likely that we'll hit some kind of machine intelligence threshold before we see a massive social pushback. This may even be sooner than we think.
If AI doing everything means that we'll finally have a truly egalitarian society where everyone is equal in dignity and rank, I'd say the faster we get there, the better.
Why would hearing "work is central to identity," and "work is the primary social mechanism that distributes status amongst communities," change my mind?
This is the first technology wave that doesn't just displace humans, but which can be trained to the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?
I don't think that we have a good answer to that. And we may need it sooner rather than later. I'd be more optimistic if I trusted our leadership more. But wise political leadership is not exactly a strong point for our country right now.
What makes you think that? Self-driving cars have had untold billions of dollars in research and decades of applied testing, iteration, active monitoring, etc., and they still have a very long tail of unaddressed issues. They've been known to ignore police traffic redirections, they've run right through construction barriers, and recently they were burnt to a crisp in the LA riots, completely ignorant of the turmoil that was going on. A human driver is still far more adaptive and requires a lot less training than AI, and humans are ready to handle the infinitely long tail of exceptions to the otherwise algorithmic task of driving, which follows strict rules.
And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than driving, with even less training data to go off, I find myself firmly in the skeptics' camp, which holds that you will struggle even harder to apply humanoid robotics in uncontrolled environments across a diverse range of tasks without human intervention, piloting, maintenance or management.
Unemployment is still near all-time lows, and this will persist for some time as we have a structural demographic problem with massive amounts of retirees and fewer children to support the population "pyramid" (which is looking more like a tapering rectangle these days).
I get that it’s taken a long time and a lot of hype that hasn’t panned out. But once the tech works and it’s just about juicing the scale then things shift rapidly.
Even if you think “oh that’s the next generation’s problem” if there is a chance you’re wrong, or if you want to be kind to the next generation: now is the time to start thinking and planning for those problems.
I think the most sensible answer would be something like UBI. But I also think the most sensible answer for climate change is a carbon tax. Just because something is sensible doesn’t meant it’s politically viable.
Maybe the tech will at some point be good enough. At the current rate of improvement this will still take decades at least. Which is sad because I personally hoped that my kids would never have to get a driver’s License.
I do drive in these conditions.
And this even ignores all the things modern computer-controlled vehicles do above and beyond humans as it is. Take most people used to driving modern cars, put them in an old car with unassisted Armstrong steering, and they'll put themselves into a ditch on a rainy day.
Really the last things needed in self-driving cars are fast portable compute and general intelligence. General intelligence will be needed for the million edge cases we encounter while driving. The particular problem is that once we get this general intelligence, a lot of problems are going to disappear and bring up a whole new set of problems for people and society at large.
Tesla that got fired as a customer by Mobileye for abusing their L2 tech is your yardstick?
Anyways, Waymo's DC launch is next year, I wonder what the new goalpost will be.
LiDAR, radar assistance feels crucial
https://fortune.com/2025/08/15/waymo-srikanth-thirumalai-int...
First I’m hearing of that. In doing a search, I see a lot of speculation but no proof. Knowing the shenanigans perpetrated by Musk and his hardcore fans, I’ll take theories with a grain of salt.
> and (partially?) faked his famous glitterbomb pranks
That one I remember, and the story is that the fake reactions were done by a friend of a friend who borrowed the device. I can’t know for sure, but I do believe someone might do that. Ultimately, Rober took accountability, recognised that hurt his credibility, and edited out that part from the video.
https://www.engadget.com/2018-12-21-viral-glitter-bomb-video...
I have no reason to protect Rober, but also have no reason to discredit him until proof to the contrary. I don’t follow YouTube drama but even so I’ve seen enough people unjustly dragged through the mud to not immediately fall for baseless accusations.
One I bumped into recently was someone describing the “fall” of another YouTuber, and in one case showed a clip from an interview and said “and even the interviewer said X about this person”, with footage. Then I watched the full video and at one point the interviewer says (paraphrased) “and please no one take this out of context, if you think I’m saying X, you’re missing the point”.
So, sure, let’s be critical about the information we’re fed, but that cuts both ways.
The bottleneck for self-driving technology isn't sensors - it's AI. Building a car that collects enough sensory data to enable self-driving is easy. Building a car AI that actually drives well in a diverse range of conditions is hard.
I think there is a good chance that what we currently call "AI" is fundamentally not technologically capable of human levels of driving in diverse conditions. It can support and it can take responsibility in certain controlled (or very well known) environments, but we'll need fundamentally new technology to make the jump.
Modern cars can have 360° vision at all times, as a default, with multiple overlapping camera FoVs. Which is exactly what humans use to get near-field 3D vision. And far-field 3D vision?
The depth-discrimination ability of binocular vision falls off with distance squared. At far ranges, humans no longer see enough difference between the two images to get a reliable depth estimate. Notably, cars can space their cameras apart much further, so their far range binocular perception can fare better.
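A back-of-envelope way to see both effects at once: the standard stereo approximation is dZ ≈ Z² · d_err / (f · B), so depth error grows with distance squared and shrinks with baseline B. The focal length and disparity-error numbers in this sketch are illustrative assumptions, not measurements of any particular car or person:

    # Back-of-envelope: stereo depth uncertainty dZ ≈ Z^2 * d_err / (f * B),
    # so error grows with distance squared and shrinks with baseline B.
    # Focal length and disparity error below are illustrative assumptions only.
    def depth_error(distance_m, baseline_m, focal_px=1000.0, disparity_err_px=0.5):
        return distance_m ** 2 * disparity_err_px / (focal_px * baseline_m)

    for dist in (10, 50, 100, 200):
        eyes = depth_error(dist, baseline_m=0.065)  # ~6.5 cm between human pupils
        car = depth_error(dist, baseline_m=1.5)     # cameras near the car's edges
        print(f"{dist:>4} m: eyes ±{eyes:7.1f} m, wide-baseline cameras ±{car:5.2f} m")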
How do humans get that "3D" at far distances then? The answer is, like it usually is when it comes to perception, postprocessing. Human brain estimates depth based on the features it sees. Not unlike an AI that was trained to predict depth maps from a single 2D image.
If you think that perceiving "inertia and movement" is vital, then you'd be surprised to learn that an IMU that beats a human on that can be found in an average smartphone. It's not even worth mentioning - even non-self-driving cars have that for GPS dead reckoning.
A lot of the problems with driving aren't driving problems. They are other-people-are-stupid problems, and nature-is-random problems. A good driver has a lot of ability to predict what other drivers are going to do. For example, people commonly swerve slightly in the direction they are going to turn, even before putting on a signal. A person swerving in a lane is likely going to continue with dumb actions and do something worse soon. Clouds in the distance may be a sign of rain, and of bad road conditions and slower traffic ahead.
Very little of this has to do with the quality of our sensors. Current sensors themselves are probably far beyond what we actually need. It's compute speed (efficiency really) and preemption that give humans an edge, at least when we're paying attention.
Between brightly sunlit snow and a starlit night, we can cover more than 45 stops with the same pair of eyeballs; the very best cinematographic cameras reach something like 16.
In a way it's not a fair comparison, since we're taking into account retinal adaptation, eyelids/eyelashes, pupil constriction. But that's the point - human vision does not use cameras.
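For a rough sense of what those stop counts mean (taking the 45 and 16 figures above at face value rather than as independent measurements), each stop doubles the usable contrast ratio:

    # Quick arithmetic on what "stops" means: each stop doubles the ratio
    # between the brightest and darkest levels that can be distinguished.
    # The 45 and 16 stop figures are the ones claimed above, not measurements.
    print(f"16 stops ≈ {2 ** 16:,} : 1 contrast")          # ~65,536 : 1
    print(f"45 stops ≈ {2 ** 45:.2e} : 1 contrast")        # ~3.5e13 : 1
    print(f"difference ≈ 2^29 ≈ {2 ** (45 - 16):.1e}x")    # ~5.4e8 times more range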
Indeed. And the comparison is unnecessarily unfair.
You're comparing the dynamic range of a single exposure on a camera vs. the adaptive dynamic range in multiple environments for human eyes. Cameras do have comparable features: adjustable exposure times and apertures. Additionally cameras can also sense IR, which might be useful for driving in the dark.
That means that to view some things better, you have to accept being completely blind to others. That is not a substitute for dynamic range.
A system that replicates the human eye's rapid aperture adjustment and integration of images taken at quickly changing aperture/filter settings is very much not what Tesla is putting in their cars.
But again, the argument is fine in principle. It's just that you can't buy a camera that performs like the human visual system today.
> Humans use only cameras.
Which in this or similar forms is sometimes used to argue that L4/5 Teslas are just a software update away.
Most are minor, but even so - beating that shouldn't be a high bar.
There is no good reason not to use LIDAR with other sensing technologies, because cameras-only just makes the job harder.
They get into fewer accidents, mile for mile and road type for road type, and the ones they get into trend towards less severe. Why?
Because self-driving cars don't drink and drive.
This is the critical safety edge a machine holds over a human. A top tier human driver in the top shape outperforms this generation of car AIs. But a car AI outperforms the bottom of the barrel human driver - the driver who might be tired, distracted and under influence.
Generally you are comparing apples and oranges if you are comparing the safety records of e.g. Waymos to that of the general driving population.
Waymos drive under incredibly favorable circumstances. They also will simply stop or fall back on human intervention if they don't know what to do – failing in their fundamental purpose of driving from point A to point B. To actually get comparable data, you'd have to let Waymos or Teslas do the same types of drives that human drivers do, under the same circumstances and without the option of simply stopping when they are unsure, which they simply are not capable of doing at the moment.
That doesn't mean that this type of technology is useless. Modern self-driving and adjacent tech can make human drivers much safer. I imagine, it would be quite easy to build some AI tech that has a decent success rate in recognizing inebriated drivers and stopping the cars until they have talked to a human to get cleared for driving. I personally love intelligent lane and distance assistance technology (if done well, which Tesla doesn't in my view). Cameras and other assistive technology are incredibly useful when parking even small cars and I'd enjoy letting a computer do every parking maneuver autonomously until the end of my days. The list could go on.
Waymos have cumulatively driven about 100 million miles without a safety driver as of July 2025 (https://fifthlevelconsulting.com/waymos-100-million-autonomo...) over a span of about 5 years. This is such a tiny fraction of miles driven by US (not to speak of worldwide) drivers during that time, that it can't usefully be expressed. And they've driven these miles under some of the most favorable conditions available to current self-driving technology (completely mapped areas, reliable and stable good weather, mostly slow, inner city driving, etc.). And Waymo themselves have repeatedly said that overcoming the limitations of their tech will be incredibly hard and not guaranteed.
Most non-impaired humans outperform the current gen. The study I saw had FSD at 10x fatalities per mile vs non-impaired drivers.
Not true. Humans also interpret the environment in 3D space. See a Tesla fail against a Wile E. Coyote-inspired mural which humans perceive:
Teslas "interpret the environment in 3D space" too - by feeding all the sensor data into a massive ML sensor fusion pipeline, and then fusing that data across time too.
This is where the visualizers, both the default user screen one and the "Terminator" debugging visualizer, get their data from. They show plain and clear that the car operates in a 3D environment.
You could train those cars to recognize and avoid Wile E. Coyote traps too, but do you really want to? The expected amount of walls set in the middle of the road with tunnels painted onto them is very close to zero.
Let’s also not forget murals like that do exist in real life. And those aren’t foam.
https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg...
Additionally, as the other commenter pointed out, trucks often have murals painted on them, either as art or adverts.
https://en.wikipedia.org/wiki/Truck_art_in_South_Asia
https://en.wikipedia.org/wiki/Dekotora
Search for “truck ads” and you’ll find a myriad companies offering the service.
Even at that point, why would you possibly use only cameras though, when you can get far better data by using multiple complementary systems? Humans still crash plenty often, in large part because of how limited our "camera" system can be.
Even if what you're saying is true, which it's not, cameras are so inferior to eyes it's not even funny
Any 2 cameras separated by a few inches.
> dynamic range of an eye
Many cameras nowadays match or exceed the eye in dynamic range. Especially if you consider that cameras can vary their exposure from frame to frame, similar to the eye, but much faster.
Human skull only has two eyesockets, and it can only get this wide. But cars can carry a lot of cameras, and maintain a large fixed distance between them.
Our cameras (also called eyes) have way better dynamic range, focus speed, resolution and movement-detection capabilities, backed by reduced-bandwidth peripheral vision which is also capable of detecting movement.
No camera, including professional/medium-format still cameras, is that capable. I think one of the car manufacturers made a combined tele/wide lens system for a single camera which can see both at the same time, but that's it.
Dynamic range, focus speed, resolution, FoV and motion detection still lag.
...and that's when we imagine that we only use our eyes.
That’s the mistake Elon Musk made and the same one you’re making here.
Not to mention that humans driving with cameras only is absolutely pathetic. The number of accidents that occur that are completely avoidable doesn’t exactly inspire confidence that all my car needs to be safe and get me to my destination is a couple of cameras.
Elon Musk is right. You can't cram 20 radars, 50 LIDARs and 100 cameras into a car and declare self-driving solved. No amount of sensors can redeem a piss poor driving AI.
Conversely, if you can build an AI that's good enough, then you don't need a lot of sensors. All the data a car needs to drive safely is already there - right in the camera data stream.
So far, every self-driving accident where the self-driving car was found to be at fault follows the same pattern: the car had all the sensory data it needed to make the right call, and it didn't make the right call. The bottleneck isn't in sensors.
You keep insisting that cameras are good enough, but since safe autonomous driving AI has not been achieved yet, it’s not empirically established that cameras alone collect enough data.
The minimum setup without lidar would be cameras, radar, ultrasonic, GPS/GNSS + IMU.
Redundancy is key. With lidar, multiple sensors cover each other’s weaknesses. If LiDAR is blinded by fog, radar steps in.
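As a toy illustration of "covering each other's weaknesses", here's an inverse-variance fusion of range estimates where a blinded sensor simply drops out. The sensor names and noise figures are made up for illustration; real AV stacks use far more elaborate filters (Kalman/particle filters over full vehicle state), so treat this as a sketch of the redundancy argument, not an implementation:

    # Toy sketch: fuse range estimates from whichever sensors are currently
    # healthy, weighting each by its confidence (inverse variance). Losing one
    # modality degrades the estimate rather than blinding the system.
    from typing import Optional

    def fuse_ranges(readings: dict[str, tuple[Optional[float], float]]) -> Optional[float]:
        # readings: sensor name -> (range in metres, or None if blinded; variance)
        weights, weighted_sum = 0.0, 0.0
        for name, (value, variance) in readings.items():
            if value is None:          # e.g. lidar returns nothing in dense fog
                continue
            w = 1.0 / variance         # inverse-variance weighting
            weights += w
            weighted_sum += w * value
        return weighted_sum / weights if weights else None

    clear_day = {"lidar": (42.1, 0.05), "radar": (41.7, 0.5), "camera": (43.0, 2.0)}
    dense_fog = {"lidar": (None, 0.05), "radar": (41.7, 0.5), "camera": (None, 2.0)}

    print(fuse_ranges(clear_day))   # dominated by lidar, ~42.1 m
    print(fuse_ranges(dense_fog))   # falls back to radar alone, 41.7 m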
Crazy that billions of humans drive around every day with two cameras. And they have various defects too (blind spots, foveated vision, myopia, astigmatism, glass reflection, tiredness, distraction).
But there is a lot of expenditure relative to each mile being driven.
> The goalpost will be when you can buy one and drive it anywhere.
This won't happen any time soon, so I and millions of other people will continue to derive value from them while you wait for that.
It's a 2-ton vehicle that can self-drive reliably enough to be roving a city 24/7 without a safety driver.
The measure of "expensive" for that isn't "can everyone afford it"; the fact that we can even afford to let anyone ride them is a small wonder.
b) If there was a $250,000 car that could drive itself around given major cities, even with the geofence, it would sell out as many units as could be produced. That's actually why I tell people to be wary of BOM costs: it doesn't reflect market forces like supply and demand.
You're also underestimating both how wealthy people and corporations are, and the relative value being provided.
A private driver in a major city can easily clear $100k a year on retainer, and there are people paying it.
> The goalpost will be when you can buy one and drive it anywhere.
So let’s just ignore the non-consumer parts entirely to avoid shifting the goalpost. I still stand by the fact that the average (or median) consumer will not be able to afford such an expensive car, and I don’t think it’s controversial to state this given the readily available income data in the US and various other countries. The point isn’t that it exists, Rolls Royce and Maseratis exist, but they are niche and so if self-driving cars will be so expensive to be niche they won’t actually make a real impact on real people, thus the goalpost of general availability to a consumer.
People "wait" because of where they live and what they need. Not all people live and just want to travel around SF or wherever these go nowadays.
At the end of the day it's not like no one lives in SF, Phoenix, Austin, LA, and Atlanta either. There's millions of people with access to the vehicles and they're doing millions of rides... so acting like it's some great failing of AVs that the current cities are ones with great weather is, frankly, a bit stupid.
It takes 5 seconds to look up the progress that's been made even in the last few years.
So if we're saying how many times would it have crashed without a human: 0.
They generally intervene when the vehicles get stuck and that happens pretty rarely, typically because humans are doing something odd like blocking the way.
People are usually obedient because they have something in life and they are very busy with work. So they don't have time or headspace to really care about politics. When suddenly large numbers of people start to care more about politics, it leads to organizing and all kinds of political change.
What I mean is that it wouldn't be the current political class pushing things like UBI. At the same time, it seems that some of the current elites are preparing for this and want to get rid of elections altogether to keep the status quo.
If all else fails you can simply bomb city blocks into submission. Or arrange targeted drone decapitations of troublemakers. (Possibly literally.)
The automation and personalisation of social and political control - and violence - is the biggest difference this time around. The US has already seen a revolution in the effectiveness of mass state propaganda, and AI has the potential to take that up another level.
What's more likely to happen is survivors will move off-grid altogether - away from the big cities, off the Internet, almost certainly disconnected and unable to organise unless communication starts happening on electronic backchannels.
I could be entirely wrong, but it feels like if AI were to get THAT good, the government would be affected just as much as the working class. We'd more likely see total societal collapse rather than the government maintaining power and manipulating / suppressing the people.
If they don't have 1-2 months of living expenses saved, they die. They can't be a big threat, even in the millions; they don't have organizational capacity or anything that matches.
But all these voters still have their place in the world and don't have free time to do anything. I don't think people are so powerless once you really displace a big portion of them.
For example look at people here - everywhere you can read how it's harder to find programming job. Companies are roleplaying the narrative that they don't need programmers anymore. Do you think this army of jobless programmers will become mind controlled by tech they themselves created? Or they will use their free time to do something about their situation?
Displacing/canceling/deleting/killing individuals in society works because most people wave it away, thinking this couldn't happen to them. Once you start getting into bigger portions of people, the dynamic is different.
Well, if one believes that the day will come when their choices will be "make that jump" or "the guillotine", then it doesn't seem completely outlandish.
Not saying that day will come, but if it did...
Or even simply being voted out.
If AI makes it much easier to produce goods, it reduces the price of money, making it easier to pay some money to everyone in exchange for not breaking the law.
It is also interesting that you did not mention food, clothing and super-computers-in-pockets. While government is involved in everything, they are less involved in those markets than with housing, healthcare, and education, particularly in mandates as to what to do. Government has created the problem of scarcity in housing, healthcare, and education. Do you really think the current leadership of the US should control everyone's housing, healthcare, and education? The idea of a UBI is that it strips the politicians of that fine-grained control. There is still control that can be leveraged, but it comes down to a single item of focus. It could very well be disastrous, but it need not be whereas the more complex system that you give politicians control over, the more likely it will be disastrous.
The costs of what you propose are enormous. No legislation can change that fact.
There ain’t no such thing as a free lunch.
Who’s going to pay for it? Someone who is not paying for it today.
How do you intend to get them to consent to that?
Or do you think that the needs of the many should outweigh the consent of millions of people?
The state, the only organization large enough to even consider undertaking such a project, has spending priorities that do not include these things. In the US, for example, we spend the entire net worth of Elon Musk (the “richest man in the world”, though he rightfully points out that Putin owns far more than he does) about every six months on the military alone. Add in Zuckerberg and you can get another 5 months or so. Then there’s the next year to think about. Maybe you can do Buffet and Gates; what about year three?
That’s just for the US military, at present day spending levels.
What you’re describing is at least an order of magnitude more expensive than that, just in one country that only has 4% of people. To extend it to all human beings, you’re talking about two more orders of magnitude.
There aren’t enough billionaires on the entire planet even to pay for one country’s military expenses out of pocket (even if you completely liquidated them), and this proposed plan is 500-1000x more spending than that. You’re talking about 3-5 trillion dollars per year just for the USA - if you extrapolate out linearly, that’d be 60-200 trillion per year for the Earth.
Even if you could reduce cost of provision by 90% due to economies of scale ($100/person/month for housing, healthcare, and education combined, rather than $1000 - a big stretch), it is still far, far too big to do under any currently envisioned system of wealth redistribution. Society is big and wealthy private citizens (ie billionaires) aren’t that numerous or rich.
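Redoing that back-of-envelope arithmetic with rough, assumed inputs (the spending levels, net worths and populations below are approximations for illustration, not sourced data):

    # Rough numbers only; all inputs are order-of-magnitude assumptions.
    US_MILITARY_PER_YEAR  = 0.9e12       # ~$900B/yr
    MUSK_NET_WORTH        = 0.4e12       # order-of-magnitude figure
    US_POP, WORLD_POP     = 340e6, 8.1e9
    COST_PER_PERSON_MONTH = 1_000        # housing + healthcare + education, per the comment

    months_of_military  = MUSK_NET_WORTH / (US_MILITARY_PER_YEAR / 12)
    us_cost_per_year    = US_POP * COST_PER_PERSON_MONTH * 12
    world_cost_per_year = WORLD_POP * COST_PER_PERSON_MONTH * 12

    print(f"Musk's net worth covers ~{months_of_military:.1f} months of US military spending")
    print(f"US provision cost:    ~${us_cost_per_year / 1e12:.1f}T per year")
    print(f"World provision cost: ~${world_cost_per_year / 1e12:.0f}T per year")

With those inputs you get roughly 5-6 months of military spending, ~$4T/year for the US and ~$100T/year worldwide, which is consistent with the ranges quoted above.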
There is a reason we all pay for our own food and housing.
I just want to point out that's about a fifth of our GDP and we spend about this much for healthcare in the US. We badly need a way to reduce this to at least half.
> There is a reason we all pay for our own food and housing.
The main reason I support UBI is I don't want need-based or need-aware distribution. I want everyone to get benefits equally regardless of income or wealth. That's my entire motivation to support UBI. If you can come up with something else that guarantees no need-based or need-aware distribution and does not have a benefit cliff, I support that too. I am not married to UBI.
Reduce costs by eliminating fiat ledgers that only have value if we believe in them, realize that the real economy is physical statistics, and ship resources where the people demand them.
But of course that simple solution violates the embedded training of Americans. So it's a non-starter and we'll continue to desperately seek some useless reformation of an antiquated social system.
Honestly, what type of housing do you envision under a UBI system? Houses? Modern apartment buildings? College dormitory-like buildings? Soviet-style complexes? Prison-style accommodations? B stands for basic, how basic?
I think a UBI system is only stable in conjunction with sufficient automation that work itself becomes redundant. Before that point, I don't think UBI can genuinely be sustained; and IMO even very close to that point the best I expect we will see, if we're lucky, is the state pension age going down. (That it's going up in many places suggests that many governments do not expect this level of automation any time soon).
Therefore, in all seriousness, I would anticipate a real UBI system to provide whatever housing you want, up to and including things that are currently unaffordable even to billionaires, e.g. 1:1 scale replicas of any of the ships called Enterprise including both aircraft carriers and also the fictional spaceships.
That said, I am a proponent of direct state involvement in the housing market, e.g. the UK council housing system as it used to be (but not as it now is; they're not building enough):
• https://en.wikipedia.org/wiki/Public_housing_in_the_United_K...
I would much rather live on a beach front property than where I live right now. I don't because the cost trade off is too high.
To bring the real estate market into equilibrium with UBI you would have to turn rural Nebraska into a giant slab city like ghetto. Or every mid sized city would have a slab city ghetto an hour outside the city. It would be ultra cheap to live there but it would be a place everyone is trying to save up to move out of. It would create a completely new under class of people.
Yes, and?
My reference example was two aircraft carriers and 1:1 models of some fictional spacecraft larger than some islands, as personal private residences.
> To bring the real estate market into equilibrium with UBI you would have to turn rural Nebraska into a giant slab city like ghetto. Or every mid sized city would have a slab city ghetto an hour outside the city. It would be ultra cheap to live there but it would be a place everyone is trying to save up to move out of. It would create a completely new under class of people.
Incorrect.
Currently, about 83e6 hectares of this planet is "built up area".
4827e6 ha, about 179 times the currently "built up" area, is cropland and grazing land. Such land can produce much more food than it already does, the limiting factor is the cost of labour to build e.g. irrigation and greenhouses (indeed, this would also allow production in what are currently salt flats and deserts, and enable aquaculture for a broad range of staples); as I am suggesting unbounded robot labour is already a requirement for UBI, this unlocks a great deal of land that is not currently available.
The only scenario in which I believe UBI works is one where robotic labour gives us our wealth. This scenario is one in which literally everyone can get their own personal 136.4 meters side length approximately square patch. That's not per family, that's per person. Put whatever you want on it — an orchard, a decorative garden, a hobbit hole, a castle, and five Olympic-sized swimming pools if you like, because you could fit all of them together at the same time on a patch that big.
The ratio (and consequently land per person), would be even bigger if I didn't disregard currently unusable land (such as mountains, deserts, glaciers, although of these three only glaciers would still be unusable in the scenario), and also if I didn't disregard land which is currently simply unused but still quite habitable e.g. forests (4000e6 ha) and scrub (1400e6 ha).
In the absence of future tech, we get what we saw in the UK with "council housing", but even this is still not as you say. While it gets us cheap mediocre tower blocks, it also gets us semi-detached houses with their own gardens, and even in the most mediocre of the UK's widely disliked Brutalist-architecture era, this policy didn't create a new underclass, it provided homes for the existing underclass. Finally, even at the low end they largely (but not universally) were an improvement on what came before them, and this era came to an end with a government policy to sell those exact same homes cheaply to their existing occupants.
You bump up against the limits of physics, not economics.
If every place has the population density of Wyoming, real wealth will be the ability to live in real cities. That’s much like what we have now.
Very true. But I'd say this is more of a politics problem than a physics one: any given person doesn't necessarily want to be around the people that want to be around them.
> If every place has the population density of Wyoming, real wealth will be the ability to live in real cities. That’s much like what we have now.
Cities* are where the jobs are, where the big money currently gets made, I'm not sure how much of what we have today with high density living is to show your wealth or to get your wealth — consider the density and average wealth of https://en.wikipedia.org/wiki/Atherton,_California, a place I'd never want to live in for a variety of reasons, which is (1) legally a city, (2) low density, (3) high income, (4) based on what I can see from the maps, a dorm town with no industrial or commercial capacity, the only things I can see which aren't homes (or infrastructure) are municipal and schools.
* in the "dense urban areas" sense, not the USA "incorporated settlements" sense, not the UK's "letters patent" sense
Real wealth is the ability to be special, to stand out from the crowd in a good way.
In a world of fully automated luxury for all, I do not know what this will look like.
Peacock tails of some kind to show off how much we can afford to waste? The rich already do so with watches that cost more than my first apartment, perhaps they'll start doing so with performative disfiguring infections to show off their ability to afford healthcare.
I think it is a solid idea. I don't know how it fits in the broader scheme of things though. If everyone in the US gets a UBI of the same amount, will people move somewhere rent is low?
From wikipedia:
> a social welfare proposal in which all citizens of a given population regularly receive a minimum income in the form of an unconditional transfer payment, i.e., without a means test or need to perform work.
It doesn't say you aren't allowed to work for more money. My understanding is you can still work as much as you want. You don't have to in order to get this payment. And you won't be penalized for making too much money.
We are indeed talking about different things with UBI here, but I'm asserting that the usual model of it can't be sustained without robots doing the economic production.
If the goal specifically is simply "nobody starves", the governments can absolutely organise food rations like this, food stamps exist.
> If everyone in the US gets a UBI of the same amount, will people move somewhere rent is low?
More likely, the rent goes up by whatever the UBI is. And I'm saying this as a landlord, I don't think it would be a good idea to create yet another system that just transfers wealth to people like me who happen to be property owners, it's already really lucrative even without that.
To me, "just enough to avoid starving" is a prison-like model, just without locked doors. But multiple residents of a very basic "cell", a communal food hall, maybe a small library and modest outdoors area. But most of the time when people talk about UBI, they describe the recipients living in much nicer housing than that.
I am also concerned about this possibility, but come at it from a more near-term problem.
I think there is a massive danger area with energy prices specifically, in the immediate run-up to AI being able to economically replace human labour.
Consider a hypothetical AI which, on performance metrics, is good enough, but is also too expensive to actually use: running it exceeds the cost of any human. The corollary is that, whatever that threshold is, under the assumption of rational economics no human can ever earn more than whatever it costs to run that AI. As time goes on, if the hardware or software improves, the threshold comes down.
Consider what the world looks like if the energy required to run a human-level AI at human-level speed costs the same as the $200/month that OpenAI charges for access to ChatGPT Pro (we don't need to consider what energy costs per kWh for this, prices may change radically as we reach this point).
Conditional on this AI actually being good enough at everything (really good enough, not just "we've run out of easily tested metrics to optimise"), then this becomes the maximum that a human can earn.
If a human is earning this much per month, can they themselves afford energy to keep their lights on, their phone charged, their refrigerator running?
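To make that concrete, here's a rough back-of-the-envelope sketch. The $200/month figure is the ChatGPT Pro price mentioned above; the electricity price and household consumption numbers are purely illustrative assumptions, not predictions about where prices will actually land:

    // Rough numbers: what a $200/month earnings ceiling means for household energy.
    // All three constants are illustrative assumptions, not forecasts.
    const wageCeilingUsdPerMonth = 200;   // the AI-cost threshold from the argument above
    const electricityUsdPerKwh = 0.15;    // assumed residential electricity price
    const householdKwhPerMonth = 900;     // assumed typical household consumption

    const monthlyEnergyBill = householdKwhPerMonth * electricityUsdPerKwh;  // $135
    const shareOfCeiling = monthlyEnergyBill / wageCeilingUsdPerMonth;      // 0.675

    console.log(`Household energy bill: $${monthlyEnergyBill.toFixed(0)}/month`);
    console.log(`Share of the wage ceiling: ${(shareOfCeiling * 100).toFixed(0)}%`);

Under those assumed numbers, simply keeping an ordinary household powered would eat roughly two thirds of the wage ceiling, before rent, food, or anything else.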
Domestic PV systems (or even wind/hydro if you're lucky enough to be somewhere where that's possible) will help defend against this; personal gasoline/diesel won't, the fuel will be subject to the same price issues.
> Power and wealth simply wont allow everything to be accessible to everyone. The idea that people would be able to build enormous mansions (or personal aircraft carriers or spaceships) just sounds rather absurd, no offense, but come on.
While I get your point, I think a lot of the people in charge can't really imagine this kind of transformation. Even when they themselves are trying to sell the idea. Consider what Musk and Zuckerberg say about Mars and superintelligence respectively — either they don't actually believe the words leaving their mouths (and Musk has certainly been accused of this with Mars), or they have negligible imagination as to the consequences of the world they're trying to create (which IMO definitely describes Musk).
At the same time, "absurd"?
I grew up with a C64 where video games were still quite often text adventures, not real-time nearly-photographic 3D.
We had 6 digit phone numbers, calling the next town along needed an area code and cost more; the idea we'd have video calls that only cost about 1USD per minute was sci-fi when I was young, while the actual reality today is that video calls being free to anyone on the planet isn't even a differentiating factor between providers.
I just about remember dot-matrix printers, now I've got a 3D printer that's faster than going to the shops when I want one specific item.
Universal translation was a contrivance to make watching SciFi easier, not something in your pocket that works slightly better for images than for audio, and even then only because speech recognition in natural environments turned out to be harder than OCR in natural environments.
I'm not saying any of this will be easy, I don't know when it will be good enough to be economical — people have known how to make flying cars since 1936*, but they've been persistently too expensive to bother. AGI being theoretically possible doesn't mean we ourselves are both smart enough and long-lived enough as an advanced industrialised species to actually create it.
* https://en.wikipedia.org/wiki/Autogiro_Company_of_America_AC...
Utter nonsense.
Do you believe the European countries that provide higher education for free are manning tenured positions with slaves or robbing people at gunpoint?
How do you explain public transportation services in some major urban centers being provided free of charge?
How do you explain social housing programmes conducted throughout the world?
Are countries with access to free health care using slavery to keep hospitals and clinics running?
What you are trying to frame as impossibilities has been the reality for many decades in countries ranking far higher than the US in development and quality-of-living indexes.
How do you explain that?
Without the state, you wouldn't have wealth. Heck there wouldn't even be the very concept of property, only what you could personally protect by force! Not to mention other more prosaic aspects: if you own a company, the state maintains the roads that your products ship through, the schools that educate your workers, the cities and towns that house your customers... In other words the tax is not "money that is yours and that the evil state steals from you", but simply "fair money for services rendered".
You're letting your irrational biases show.
To start off, social security contributions are not a tax.
But putting that detail aside, do you believe that paying a private health insurance also represents slavery and robbery? Are you a slave to a private pension fund?
Are you one of those guys who believes unions exploit workers whereas corporations are just innocent bystanders that have a neutral or even positive impact on workers lives and well being?
Social security contributions are a mandatory payment to the state taken from your wages; they are a tax, a compulsory reduction in your income. Private health insurance is obviously not mandatory or compulsory, so that is clearly different. Your last statement is just irrelevant, because you assume I'm a libertarian for pointing out the reality of the exchange taking place in the socialist system.
I'd be very interested in hearing which definition of "socialism" aligns with those obviously libertarian views?
> If you don't pay your taxes, you will go to jail. It is both robbery and slavery [...] Arguing this is arguing emotionally and not rationally using language with words that have definitions.
Indulging in the benefits of living in a society, knowingly breaking its laws, being appalled by entirely predictable consequences of those action, and finally resorting to incorrect usage of emotional language like "slavery" and "robbery" to deflect personal responsibility is childish.
Taxation is payment in exchange for services provided by the state and your opinion (or ignorance) of those services doesn't make it "robbery" nor "slavery". Your continued participation in society is entirely voluntary and you're free to move to a more ideologically suitable destination at any time.
The government provides a range of services that are deemed to be broadly beneficial to society. Your refusal of that service doesn't change the fact that the service is being provided.
If you don't like the services you can get involved in politics or you can leave, both are valid options, while claiming that you're being enslaved and robbed is not.
Literally nobody alive today was “involved in politics” when the US income tax amendment was legislated.
Also, you can’t leave; doubly so if you are wealthy enough. Do you not know about the exit tax?
We assume you're libertarian because you are spouting libertarian ideas that just don't work in reality.
I do not know what you mean by "progressive", but you are spewing neoliberal/libertarian talking points. If anything, this tells how much Kool aid you drank.
No, robbery. They’re paid for with tax revenues, which are collected without consent. Taking of someone’s money without consent has a name.
Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?
My understanding is that your info is seriously out of date. It might have been the case in the distant past but not the case anymore.
https://news.yale.edu/2025/02/20/tracking-decline-social-mob...
It's a common idea but each time you try to measure social mobility, you find a lot of European countries ahead of USA.
- https://en.wikipedia.org/wiki/Global_Social_Mobility_Index
- https://www.theguardian.com/society/2018/jun/15/social-mobil...
Which class mobility is this that you speak of? The one that forces the average US citizens to be a paycheck away from homelessness? Or is it the one where you are a medical emergency away from filing bankruptcy?
Have you stopped to wonder how some European countries report higher median household incomes than the US?
But by any means continue to believe your average US citizen is a temporarily embarrassed billionaire, just waiting for the right opportunity to benefit from your social mobility.
In the meantime, also keep in mind that mobility also reflects how easy it is to move down a few pegs. Let that sink in.
Is it, though? The US reports by far the highest levels of lifetime literal homelessness, three times greater than in countries like Germany. Homeless people in Europe aren't denied access to free healthcare, primary or even tertiary.
Why do you think the US, in spite of its GDP, features so low in rankings such as the human development index or quality of life?
I think this is the sort of red herring that prevents the average US citizen from realizing how screwed over they are. Again, the median household income in the US is lower than in some European countries. On top of this, the US provides virtually no social safety net or even socialized services to its population.
The fact that the average US citizen is a paycheck away from homelessness and the US ranks so low in human development index should be a wake-up call.
This is not true, it was true historically, but not since WWII. Read Piketty.
Is AI slavery? Because that's where the value comes from in the scenario under discussion.
This can also describe Nordic and Germanic models of welfare capitalism (incrementally dismantled with time but still exist): https://en.wikipedia.org/wiki/Welfare_capitalism
You just shift the emissions from your location to the location that you buy products from.
Basically what happened in Germany: more expensive "clean" energy means their own production went down and the world bought more from China instead. The net result is probably higher global emissions overall.
We need a system where being known as somebody who causes more problems than they solve puts you (and the people you've done business with) at an economic disadvantage.
On the other hand, the Tesla “robotaxi” scares the crap out of me. No lidar and seems to drive more aggressively. The Mark Rober YouTube of a Tesla plowing into a road-runner style fake tunnel is equal parts hilarious and nightmare fuel when you realize that’s what’s next to your kid biking down the street.
I understand the argument for augmenting your self-driving systems with LIDAR. What I don't really understand is what videos like this tell us. The comparison case for a "road-runner style fake tunnel" isn't LIDAR, it's humans, right? And while I'm sure there are cases where a human driver would spot the fake tunnel and stop in time, that is not at all a reasonable assumption. The question isn't "can a Tesla save your life when someone booby traps a road?", it's "is a Tesla any worse than you at spotting booby trapped roads?", and moreover, "how does a Tesla perform on the 99.999999% of roads that aren't booby trapped?"
A lot of people here might intuitively understand “does not have lidar” means “can be deceived with a visual illusion.” The value of a video like that is to paint a picture for people who don’t intuitively understand it. And for everyone, there’s an emotional reaction seeing it plow through a giant wall that resonates in ways an intellectual understanding might not.
Great communication speaks to both our “fast” and “slow” brains. His video did a great job IMHO.
That would've been the case if all laws, opinions, and purchasing decisions were made by everyone acting rationally. Even if self-driving cars are safer than human drivers, it just takes a few crashes to damage their reputation. They have to be much, much safer than humans for mass adoption. Ideally also safer than the competition, if you're comparing specific companies.
Baidu's system in China really does have remote drivers.[1]
Tesla also appears to have remote drivers, in addition to someone in each car with an emergency stop button.[2]
[1] https://cyberlaw.stanford.edu/blog/2025/05/comparing-robotax...
[2] https://insideevs.com/news/760863/tesla-hiring-humans-to-con...
What corporation will accept paying dollars for members of society that are essentially "unproductive"? What will happen to the value of UBI over time, in this context, when the strongest lobby will be the companies that have the means of producing AI? And, more essentially, how are humans able to negotiate for themselves when they lose their ability to build things?
I'm not opposing the technology progress, I'm merely trying to unfold the reality of UBI being a thing, knowing human nature and the impetus for profit.
Is there like a transition period where some people don't have to pay taxes and yet don't get UBI, and if so, why hasn't that come yet? Why aren't the minimum tax thresholds going up if UBI could be right around the corner?
So, AI may certainly bring about UBI, but the corporations that are being milked by the state to provide wealth to the non-productive will begin to foment revolution along with those who find this arrangement unfair, and the productive activity of those especially productive individuals will be directed toward revolution instead of economic productivity. Companies have made nations many times before, and I'm sure it'll happen again.
The destruction of the labour theory of value has been a goal of "tech" for a while, but if they achieve it, what's the plan then?
Assuming humans stay in control of the AIs because otherwise all bets are off, in a case where a few fabulously wealthy (or at least "owning/controlling", since the idea of wealth starts to become fuzzy) industrialists control the productive capacity for everything from farming to rocketry and there's no space for normal people to participate in production any more, how do you even denominate the value being "produced"? Who is it even for? What do they need to give in return? What can they give in return?
Why does the rest of humanity even have to participate in this? Just continue on the way things were before, without any super AI. Start new businesses that don't use AI and hire humans to work there.
You'd need a very united front and powerful incentives to prevent, say, anyone buying AI-farmed wheat when it's half the cost of human-farmed (say). If you don't prevent that, Team AI can trade wheat (and everything else) for human economy money and then dominate there.
It seems like the only things they would need are energy and access to materials for luxury goods. Presumably they could mostly lock the "human economy" out of access to these things through control over AI weapons, but there would likely be a lot of arable land that isn't valuable to them.
Outside of malice, there doesn't seem to be much reason to block the non-technological humans from using the land they don't need. Maybe some ecological argument, the few AI-enabled elites don't want billions of humans that they no longer need polluting "their" Earth?
In this scenario, the marginal cost of taking everything else over is almost zero. Just tell the AI you want it taken over and it handles it. You'd take it over just for risk mitigation, even if you don't "need" it. Better to control it since it's free to do so.
Allowing a competing human economy is resources left on the table. And control of resources is the only lever of power left when labour is basically free.
> Maybe some ecological argument
There's a political angle too. 7 (or however many it will be) billion humans free to do their own thing is a risky free variable.
You can say that because it is universal, it should level the playing field just at a different starting point, but you are still creating a situation where even incredibly intelligent people will choose to pursue leisure over labor, in fact, the most intelligent people may be the ones to be more aware of the pointlessness of working if they can survive on UBI. Similarly, the most intelligent people will consider the arrangement unfair and unsustainable and instead of devoting their intelligence toward economically productive ventures, they will devote their abilities toward dismantling the system. This is the groundwork of a revolution. The most intelligent will prefer a system where their superior intelligence provides them with sufficient resources to choose a high-quality mate. If they see an arrangement where high-quality mates are being obtained by individuals who they deem to be receiving benefits that they cannot defend/protect adequately, such an arrangement will be dismantled. This evolutionary drive is hundreds of millions of years old. Primitive animals will take resources from others that they observe to be unable to defend their status.
So, overall, UBI will probably be implemented, and it will probably end in economic crisis, revolution, and the resumption of this cycle that has been playing out over and over for centuries.
This doesn't seem believable to me, or at least it isn't the whole story. Pre-20th century it seems like most scientific and mathematical discoveries came from people who were born into wealthy families and were able to pursue whatever interested them without concern for whether or not it would make them money. Presumably there were/are many people who could've contributed greatly if they didn't have to worry about putting food on the table.
> The most intelligent will prefer a system where their superior intelligence provides them with sufficient resources to choose a high-quality mate.
In a scenario where UBI is necessary because AI has supplanted human intelligence, it seems like the only way they could return to such a system is by removing both UBI and AI. Remove just UBI and they're still non-competitive economically against the AIs.
Source?
Even if that's true though, who cares if AI and robots are doing the work?
What's so bad about allowing people leisure, time to do whatever they want? What are you afraid of?
Which sort of doesn't add up. So there are intelligent people who are working right now because they need money and don't have it, while the other intelligent people who are working and employing other people are only doing it to make money and will rebel if they lose some of the money they make.
But then, why doesn't the latter group of intelligent people just stop working if they have enough money? Are they less/more/differently intelligent than the former group? Are we thinking about other, more narrow forms of intelligence when describing either?
Also
> The most intelligent will prefer a system where their superior intelligence provides them with sufficient resources to choose a high-quality mate. If they see an arrangement where high-quality mates are being obtained by individuals who they deem to be receiving benefits that they cannot defend/protect adequately, such an arrangement will be dismantled. This evolutionary drive is hundreds of millions of years old.
I don't want to come off as mocking here - it's hard to take these points seriously. The whole point of civilization is to rise above these behaviours and establish a strong foundation for humanity as a whole. The end goal of social progress and the image of how society should be structured cannot be modeled on systems that existed in the past solely because those failure modes are familiar and we're fine with losing people as long as we know how our systems fail them. That evolutionary drive may be millions of years old, but industrial society has been around for a few centuries, and look at what it's done to the rest of the world.
> Primitive animals will take resources from others that they observe to be unable to defend their status.
Yeah, I don't know what you're getting at with this metaphor. If you're talking predatory behaviour, we have plenty of that going around as things are right now. You don't think something like UBI will help more people "defend their status"?
> it will probably end in economic crisis, revolution, and the resumption of this cycle that has been playing out over and over for centuries
I don't think human civilization has ever been close to this massive or complex or dysfunctional in the past, so this sentence seems meaningless, but I'm no historian.
It's a bit of a dunk on people who see their position as employer/supervisor as a source of power because they can impose financial risk as punishment on people, which happens more often than any of us care to think, but isn't that a win? Or are we conceding that modern society is driven more by stick than carrot and we want it that way?
In a sense everybody does have "2k" a month, because we all have the same amount of time to do productive things and exchange with others.
Over time, as more things get automated, you have more people deriving most of their income from UBI, but the remaining people will increasingly be the ones who own the automation and profit from it, so you can keep increasing the tax burden on them as well.
The endpoint is when automation is generating all the wealth in the economy or nearly so, so nobody is working, and UBI simply redistributes the generated wealth from the nominal owners of automation to everyone else. This fiction can be maintained for as long as society entertains silly outdated notions about property rights in a post-scarcity society, but I doubt that would remain the case for long once you have true post-scarcity.
Who is working?
If no UBI is installed there will be a hard crash while everyone figures out what it is that humans can do usefully, and then a new economic model of full employment gets established. If UBI is installed then this will happen more slowly with less pain, but it is possible for society to get stuck in a permanently worse situation.
Ultimately if AI really is about to automate as much as it is promised then what we really need is a model for post-capitalism, for post-scarcity economics, because a model based on scarcity is incapable of adapting to a reality of genuine abundance. So far nobody seems to have any clue of how to do such a thing. UBI as a concept still lives deeply in the Overton window bounded by capitalist scarcity thinking. (Not a call for communism btw, that is a train to nowhere as well because it also assumes scarcity at its root.)
What I fear is that we may get a future like The Diamond Age, where we have the technology to get rid of scarcity and have human flourishing, but we impose legal barriers that keep the rich rich and the poor poor. We saw this happen with digital copyright, where the technology exists for abundance, but we’ve imposed permanent worldwide legal scarcity barriers to protect revenue streams to megacorps.
We "made cars work" about 100 years ago, but they have been innovating on that design since then on comfort, efficiency, safety, etc. I doubt the very first version of self driving will have zero ways to improve (although eventually I suppose you would hit a ceiling).
Having had the experience of living under a communist regime prior to 1989, I have zero trust in the state providing support while I am totally dependent and have no recourse. Instead I would rather rely on my own two hands, like my grandparents did.
I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.
Unless your two hands are building murderbots, though, it doesn't matter what you're building if you can't grow or buy food.
I haven't personally seen how UBI could end up working viably, but I also don't see any other system working without much more massive societal changes than anyone is talking about.
Meanwhile, there are many many people that are very invested in maintaining massive differentials between the richest and the poorest that will be working against even the most modest changes.
The other system is that the mass of people are coerced to work for tokens that buy them the right to food and to live in a house, i.e. the present system but potentially with more menial and arduous labour.
Hopefully we can think of something else
Right now the communists in China are beating us at capitalism. I'm starting to find the entire analytical framework of using these ideologies ("communism", "capitalism") to evaluate _anything_ to be highly suspect, and maybe even one of the west's greatest mistakes in the last century.
> I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.
I was a teenager back in the 90s. There was much talk then about the productivity boosts from computers, the internet, automation, and how it would enable people to have so much more free time.
Interesting thing is that the productivity gains happened. But the other side of that equation never really materialized.
Who knows, maybe it'll be different this time.
I think the reality is just that governments use words and have an official ideology, but you have to ignore that and analyze their actions if you want to understand how they behave.
Both communist and capitalist purists tend to be enriched for atheists (speaking as an atheist myself). Maybe some of that is people who have fallen out with religion over superstitions and other primitivisms, and are looking to replace that with something else.
Like religions, the movements have their respective post-hoc anointed scriptural prophets: Marx for one and Smith for the other, along with a host of lesser saints.
Like religions, they are very prescriptive and overarching and proclaim themselves to have a better connection with some greater, deeper underlying truth (in this case about human behaviour and how it organizes).
For analytical purposes there's probably still value in the underlying texts - a lot of Smith and Marx's observations about society and human behaviour are still very salient.
But these ideologies, the outgrowths from those early analytical works, seem utterly devoid of any value whatsoever. What is even the point of calling something capitalist or communist. It's a meaningless label.
These days I eschew that model entirely and try to keep to a more strict analytical understanding on a per-policy basis. Organized around certain principles, but eschewing ideology entirely. It just feels like a mental trap to do otherwise.
It's irrelevant that they've had a few issues. They already work and people love them. It's clear they will eventually replace every uber/lyft driver, probably every taxi driver, and they'll likely replace every doordash/grubhub driver with vehicles designed to let smaller automated delivery carts go the last few blocks. They may also replace every truck driver. Together that's around 5 million jobs in the USA.
Once they're let on the freeways their usage will expand even faster.
> It's irrelevant that they've had a few issues.
The last Waymo I saw (a couple weeks ago) was stuck trying to make a right turn on to Market St. It was conveniently blocking the pedestrian crosswalk for a few cycles before I went around it. The time before that, one got befuddled by a delivery truck and ended up blocking both lanes of 14th Street. Before Cruise imploded they were way worse. I can't say that these self-driving cars have improved much since I moved out of the city a few years back.
There is a big category of tasks that isn't that, but that is economically significant. Those are a lot better fit for AI.
AI is intentionally being developed to be able to make decisions in any domain humans work in. This is unlike any previous technology.
The more apt analogy is to other species. When was the last time there was something other than homo sapiens that could carry on an interesting conversation with homo sapiens. 40,000 years?
And this new thing has been in development for what? 70 years? The rise in its capabilities has been absolutely meteoric and we don't know where the ceiling is.
The ceiling for current AI, while not provably known, can reasonably be upper-bounded by aggregate human ability, since these methods are limited to patterns in the training data. The big surprise was how many patterns, and how sophisticated, were hiding in the training data (human-written text). This current wave of AI progress is fueled by training data and compute in "equal parts". Since compute is cheaper, they've invested in more compute but fallen short of scaling expectations, because the training data remained similarly sized.
Reaching super-intelligence through training data is paradoxical, because if it were already known it wouldn't be super-human. The other option is breaking out of the training-data enclosure by relying on other methods. That may sound exciting, but there's no major progress I'm aware of that points in that direction. It's a little like being back to square one, before this hype cycle started. The smartest people seem to be focused on transformers, either because companies give them boatloads of money or because academia pushes them out of FOMO.
I think you're confusing your cherry-picked comparison with reality.
LLMs are eliminating the need to have a vast array of positions on payrolls. From copywriters to customer support, and even creative work such as illustration and authoring books, today's LLMs are already more than good enough to justify replacing people with the output of any commercial chatbot service.
Software engineering is being affected as well, and it requires far greater know-how, experience, and expertise to meet the hiring bar.
> And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than (...)
Yes, your tech job is also going to be decimated. It's not a matter of having PMs write code. It's an issue of your junior SDE armed with an LLM being quite able to clear your bug backlog in a few days while improving test coverage metrics and refactoring code back from legacy status.
If a junior SDE can suddenly handle a workload that previously required a couple of mid-level and senior developers, why would a company keep around 4 or 5 seasoned engineers when an inexperienced one is already able to handle it?
That's where the jobs will vanish. Even if demand remains, it has dropped considerably, enough not to justify retaining so many people on a company's payroll.
And what are you going to do then? Drive an Uber?
I'd love a source for these claims. Many companies are claiming that they are able to lay off folks because of AI, but in fact AI is just a scapegoat to counteract the reckless overhiring driven by free money in the market over the last 5-10 years, now that investors are demanding to see a real business plan and ROI. "We can eliminate this headcount due to the efficiency of our AI" is just a fancy way to make the stock price go up while cleaning out the useless folks.
People have ideas. There are substantially more ideas than people who can implement them. As with most technology, the reasonable expectation is that people are just going to want more done by the now tool-powered humans, not less.
Have you been living under a rock?
You can start getting up to speed by how Amazon's CEO already laid out the company's plan.
https://www.thecooldown.com/green-business/amazon-generative...
> (...) AI is just a scapegoat to counteract the reckless overhiring due to (...)
That is your personal moralist scapegoat, and one that you made up to feel better about how jobs are being eliminated because someone somewhere screwed up.
In the meantime, you fool yourself and pretend that sudden astronomic productivity gains have no impact on demand.
Each one of the roles you listed above is only passable with AI at a superficial glance. For example, anyone who actually reads literature other than self-help and pop culture books from airport kiosks knows that AI is terrible at longer prose. The output is inconsistent because current AI does not understand context, at all. And this is not getting into the service costs, the environmental costs, and the outright intellectual theft in order to make things like illustrations even passable.
I literally pasted an announcement from the CEO of a major corporation warning they are going to decimate their workforce due to the adoption of AI.
The CEO literally made the following announcement:
> "As we roll out more generative AI and agents, it should change the way our work is done," Jassy wrote. "We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs."
This is not about selling a product. This is about how they are adopting AI to reduce headcount.
When it gets to the point that you don't need a senior engineer doing the work, you won't need a junior either.
I don't think you understood the point I made.
My point was not about Jr vs Sr, let alone how a Jr is somehow more capable than a Sr.
My point was that these productivity gains aren't a factor of experience or seniority, but they do devalue the importance of seniority for performing specific tasks. Just crack open an LLM, feed in a few prompts, and done. Hell, junior developers no longer need to reach out to seniors to ask questions about any topic. Think about that for a second.
I was working on a side project last night, and Gemini decided to inline the entire Crypto.js library in the file I was generating. And I knew it just needed a hashing function, so I had to tell it to just grab a hashing function and not inline all of Crypto.js. This is exactly the kind of thing that somebody that didn't know software engineering wouldn't be able to say, even as simple as it is. It made me realize I couldn't just hand this tool to my wife or my kids and allow them to create software because they wouldn't know to say that kind of thing to guide the AI towards success.
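For what it's worth, a minimal sketch of the kind of fix I mean is below. It assumes the built-in Web Crypto API is available (browsers, or recent Node) and that SHA-256 was the digest the project actually needed, which may not match the exact situation, but it shows how little code "just grab a hashing function" really is compared to inlining a whole library:

    // Hash a string with the built-in Web Crypto API instead of inlining Crypto.js.
    // (Illustrative sketch: assumes SHA-256 is the digest the project needs.)
    async function sha256Hex(input: string): Promise<string> {
      const bytes = new TextEncoder().encode(input);                // string -> UTF-8 bytes
      const digest = await crypto.subtle.digest("SHA-256", bytes);  // ArrayBuffer
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))                // byte -> two hex chars
        .join("");
    }

Knowing that a one-function alternative like this even exists is exactly the judgment the tool couldn't supply on its own.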
Of course, the actual answer is that rail and cycling infrastructure are much more efficient than cars in any moderately dense region. But that would mean funding boring regular companies focused on providing a product or service for adequate profit, instead of exciting AI web3 high tech unicorn startups.
Even if driverless cars killed more people than humans do, they would see mass adoption eventually. However, they are subject to far higher scrutiny than human drivers, and even so they make fewer mistakes, avoid accidents more often, and can't get drunk, tired, angry, or distracted.
For the same reason, I'd probably never buy a home robot with more capabilities than a vacuum cleaner.
https://www.wired.com/story/kia-web-vulnerability-vehicle-ha...
But even if they can theoretically be hacked, so far Waymos are still safer and more reliable than human drivers. The biggest danger someone has riding in one is someone destroying it for vindictive reasons.
Liability in traffic collisions is basically a solved problem through the courts, and at least in the UK, liability is assigned in law to the vendor (more accurately, there's a list of who's responsible for what; I'm not certain whether it's possible to assume legal responsibility without being the vendor).
https://en.wikipedia.org/wiki/DARPA_Grand_Challenge_(2004)
I also think that most job domains are not actually more nuanced or complex than driving, at least from a raw information perspective. Indeed, I would argue that driving is something like a worst-case scenario when it comes to tasks:
* It requires many different inputs, at high sampling rates, continuously (at the very least, video, sound, and car state)
* It requires loose adherence to laws in the sense that there are many scenarios where the safest and most "human" thing to do is technically illegal.
* It requires understanding of driving culture to avoid making decisions that confuse/disorient/anger other drivers, and anticipating other drivers' intents (although this can be somewhat faked with sufficiently fast reaction times)
* It must function in a wide range of environments: there is no "standard" environment
If we compare driving to other widespread-but-low-wage jobs (e.g. food prep, receptionists, cleaners) there are generally far more relaxed requirements:
* Rules may be unbreakable as opposed to situational, e.g. the cook time for burgers is always the same.
* Input requirements may be far lower. e.g. an AI receptionist could likely function with audio and a barcode scanner.
* Cultural cues/expectations drive fewer behaviors. e.g. an AI janitor just needs to achieve a defined level of cleanliness, not gauge people's intent in real-time.
* Operating environments are more standardized. All these jobs operate indoors with decent lighting.
It’s strange to me watching the collective meltdown over AI/jobs when AI doesn’t do jobs, it does tasks.
All of this is very common for human driven cars too.
I get what you are saying, but humans need 16 years of training to begin driving. I wouldn’t call that not a lot.
We humans used to do that sort of thing, but not anymore, so... bring on the AI. It won't work as well as it might otherwise be able to, but it'll probably kill fewer humans on the road at the end of the day. A low bar to clear.
Literally the only open-source self-driving platform, among all the trillion-, billion-, and million-dollar companies, is comma.ai, founded by Geohot. That's it. It's actually very good, and I bet they would welcome these upgrades, but that would be a consortium of one underdog pushing for them.
Corporations generally follow a narrow, somewhat predictable pattern towards some local maximum of their own value extraction. Since the world is not zero-sum, this produces value for others too.
Where politics (should) enter the picture is where we can somehow see a more global maximum (for all citizens) and try to drive towards it through political, hopefully democratic, means (laws, standards, education, investment, infrastructure, etc.).
- destroy voting population's jobs
- put power in the hand of 1-2 tech companies
- clog streets with more cars rather than build trams, trains, maglevs, you name it
There's been a dream of unsupervised models going hog wild on codebases for the last three years. Yet even the latest and greatest Claude models can't be trusted to write a new REST endpoint exposing 5 CRUD methods without fucking something up. No, it requires not only human supervision, but it also requires human expertise to validate and correct.
I dunno. I feel like this language grossly exaggerates the capability of LLMs to paint a picture of them reliably fulfilling roles end-to-end instead of only somewhat reliably fulfilling very narrowly scoped tasks that require no creativity or expertise.
This alone is enough to completely reorganise the labour market, as it describes an enormous number of roles.
It's easy to sit in a café and ponder how all jobs will be gone soon, but in practice people aren't as easily replaceable.
If we could make line of business crud apps work reliably, have usable document/email search, and have functional ERP that would dissolve millions of jobs.
Why would that be your goal? I’d prefer millions of people have gainful employment instead of some shit tech company having more money.
Those jobs are probably still a couple of decades or more away from displacement, some possibly forever, and we will need them in higher numbers. Perhaps it's ironic that these are some of the oldest professions.
Everything we do is in service of paying for our housing, transportation, eating food, healthcare and some fun money.
Most goes to housing, healthcare, and transportation.
Healthcare costs may come down some with advancements in AI. R&D will be cheaper. Knowledge will be cheaper and more accessible.
But what people care about, what people have always cared about, remains in professions that are as old as time and, I don't see them fully replaceable by AI just yet - enhanced, yes, but not replaced.
Imagine a world where high quality landscaping exists for the average person. And this is made possible because we'll live in a world where the equivalent of today's uber driver owns a team of gardening androids.
Or perhaps in the future everyone will work in finance. Everyone's a corporation.
Ramble ramble ramble
I think it's going to be the other way around. It's looking like automation of dynamic physical capability is going to be the very last thing we figure out; what we're going to get first is teams of lower-skilled human workers directed largely by jobsite AI. By the time the robots get there, they're not going to need a human watching them.
The number of edge cases when you are dealing with the physical world is several orders of magnitude higher than when dealing with text only, and the spatial reasoning capabilities of the current crop of MLLMs are not nearly as good as required. And this doesn't even take into account that now you are dealing with hardware, and hardware is expensive. Expensive enough that even on manufacturing lines (a more predictable environment than, say, landscaping) automation sometimes doesn't make economic sense.
That will do very well for salaries, I think, and everyone will be better off.
Imagine what they’ll be like with an influx of additional laborers.
For example, I predict humans will maintain competitive advantage in areas where the human body excels due to its shape, capabilities, or energy efficiency.
At some point an equilibrium will be reached but there is no guarantee it will be a healthy situation or a smooth ride. This optimism about AI and the rosy world that is just around the corner is incredibly naive.
You also force them to move to places where there are fewer carpenters?
Won't companies always want to compete with one another, so that simply using AI won't be enough? We will always want better and better software, more features, etc., so that race will never end until we get an AI fully capable of managing all parts (100%) of the development process (which we don't seem to be close to yet).
From Excel to AutoCAD, a lot of tools that were expected to decrease the amount of work ended up actually increasing it, due to new capabilities and the constant demand for innovation. I suppose the difference comes down to whether we think AI will just continue to get really good, or whether it will become SO good that it is plug-and-play and completely replaces people.
At some point: (1) general intelligence; i.e. adaptivity; (2) self replication; (3) self improvement.
The argument above (or some version of it) gets repeated over and over, but it is deeply flawed for various reasons.
The argument implies that “we” is a single agent that must do some set of things before other things. In the real world, different collections of people can work on different projects simultaneously in various orderings.
This is very different than optimizing an instruction pipeline for a single core microprocessor. In the real world, different kinds of tasks operate on very different timescales.
As an example, think about how change happens in society. Should we only talk about one problem at a time? Of course not. Why? The pipeline to solving problems is long and uncertain so you have to parallelize. Raising awareness of an issue can be relatively slow. Do you know what is even slower? Trying to reframe an issue in a way that gets into people’s brains and language patterns. Once a conceptual model exists and people pay attention, then building a movement among “early adopters” has a fighting chance. If that goes well, political influence might follow.
If we’re talking about DARPA’s research agenda or the US military’s priorities, I would say they are quite capable at planning for speculative scenarios and long-term effects - for various reasons, including decision making structure and funding.
If we’re talking about shifting people’s mindsets about AI risks and building a movement, the time is now. Luckily we’ve got foundations to build on. We don’t need to practice something else first. We have examples of trying to prime the public to pay attention to other long-term risks, such as global warming, pandemic readiness, and nuclear proliferation. Now we should add long-term AI risk to the menu.
And I would not say that I’m anything close to “optimistic” in the probabilistic sense about building the coalition we need, but we must try anyway. And motivation can be found without naïve optimism. A sense of acting with purpose can be a useful state of mind that is not coupled to one’s guesses about most likely outcomes.
Take global warming as an example: this is a real thing that's happening. We have measurements of CO2 concentrations and global temperatures. Most people accept that this is a real thing. And still getting anybody to do anything about it is nearly impossible.
Now you have a hypothetical risk of something that may happen sometime in the distant future, but may not. I don't see how you would be able to get anybody to care about that.
Why exaggerate like this? Significant actions have been taken.
> I don't see how you would be able to get anybody to care about that.
Why exaggerate like this? Many people care.
As long as (1) there are incentives for controlling ever increasing intelligence; (2) the laws of physics don’t block us; and (3) enough people/orgs have the motivation and means, some people/orgs are going to press forward. This just becomes a matter of time and probability. In general, I do not bet against human ingenuity, but I often bet against human wisdom.
In my view, along with many others, it would be smarter for the whole world to slow down AI capabilities advancement until we could have very high certainty that doing so is worth the risk.
I mean, most startups fail. And in software startups, the blame for that is usually at least shared by "software wasn't good enough". So that $20 million seed investment is still going to go into "software development", i.e. programmer salaries. They will be using the higher-level language of AI much of the time and be 2-5 times more efficient, but will it be enough? No. Most will still fail.
Amazon. Walmart. Efficiency is arguably their key competitive advantage.
This matters regarding AI systems because a lot of customers may not want to pay extra for the best models! For a lot of companies, serving a good enough model efficiently is a competitive advantage.
I think it did not work like that.
Automatic looms displaced large numbers of weavers, skilled professionals, who did not immediately find jobs tending dozens of mechanical looms. (Mr Ludd was one of these displaced professionals.)
Various agricultural machines and chemical products displaced colossal numbers of country people, who had to go to cities looking for industrial jobs; US agriculture employed 50% of the workforce in 1880 and only 10% in 1930.
The advent of internet displaced many in the media industry, from high-caliber journalists to those who worked in classified ads newspapers.
All these disruptions created temporary crises, because there was no industry that was ready to immediately employ these people.
You will have to back that statement up because this is not at all obvious to me.
If I look at the top US employers in say 1970 vs 2020, the companies that dominate 1970 were noted for having hard blue collar labor jobs but paid enough to keep a single earner family significantly above minimum wage and the poverty line. The companies that dominate in 2020 are noted for being some of the shittiest employers having some of the lowest pay fairly close to minimum wage and absolutely worst working conditions.
Sure, you tend not to get horribly maimed in 2020 vs 1970. That's about the only improvement.
There will be new jobs and they will be completely meaningless busywork, people performing nothing of substance while being compensated for it. It's our way of doing UBI and we've been doing it for 50 years already.
Obligatory https://wtfhappenedin1971.com
This is the first technology wave that doesn't just displace humans, but which can be trained to the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?
Assuming AI doesn't get better than humans at everything, humans will be supervising and directing AIs.
We’ve also been gaslit into believing that it’s not a good approach, that peaceful protests are more civilised (even though they rarely cause anything meaningful to actually change).
More likely it will look like the current welfare schemes of many countries, now add mass boredom leading to unrest.
Sam Altman has expressed a preference for paying people in vouchers for using his chatbots to kill time: https://basicincomecanada.org/openais-sam-altman-has-a-new-i...
Not necessarily. Such forces could be outvoted or out maneuvered.
> More likely it will look like the current welfare schemes of many countries...
Maybe, maybe not. It might take the form of UBI or some other form that we haven’t seen in practice.
> now add mass boredom leading to unrest.
So many assumptions. There is no need to just assume any particular distribution of boredom across the future population of the world. Making predictions about social unrest is even more complicated.
Deep Utopia (Bostrom) is an excellent read that extensively discusses various options if things go well.
Then a few words later ...
>Deep Utopia (Bostrom) is an excellent read that extensively discusses various options if things go well
Oh, the irony
Could.
> So many assumptions. There is no need to just assume any particular distribution of boredom across the future population of the world. Making predictions about social unrest is even more complicated
I’m assuming that previous outcomes predict future failures, because the forces driving these changes are of our societies, and not a hypothetical, assumed new society.
In this world, ownership, actual, legal ownership, is a far stronger and fundamental right than any social right to your well-being.
You would have to change that, which is a utopian project whose success has been assumed in the past, that a dialectical contradiction of the forces of social classes would lead to the replacement of this framework.
It is indeed very complicated, but you know what’s even more complicated? Utopian projects.
Sorry but I see it as far more likely that the plebes will be told to kick rocks and ask the bots to generate art for them, when asking for money for art supplies on top of their cup noodle money.
we must keep our peasants busy or they unrest due to boredom!
You would like to learn to play the guitar? Sorry, that kind of money didn’t pass in the budget bill, but how about you ask the bot to create music for you?
Elites also get something way better than keeping people busy for distraction: they get mass, targeted manipulation and surveillance to make sure you act within the borders of safety.
You know what job will surely survive? Cops. There’ll always be the nightstick to keep people in line.
It’s hard for me to imagine that AI won’t be as good or better than me at most things I do. It’s quite a sobering feeling.
AI is a lot like this. In coding for instance, you still need to have some sense of good systems design, etc. and know what you want to build in concrete terms, but you don't need to learn the specific syntax of a given language in detail.
Yet if you don't know anything about IT, don't know what you want to build or what you could need, or what's possible, then it's unlikely AI can help you.
And I think that’s the point he was making, it’s hard to imagine any task where humans are still required when AI can do it better and cheaper. So I don’t think the Uber scenario is realistic.
I think the only value humans can provide in that future is “the human factor”: knowing that something is done by an actual human and not a machine can be valuable.
People want to watch humans playing chess, even though AI is better at it. They want to consume art made by humans. They want a human therapist or doctor, even if they heavily rely on AI for the technical stuff. We want the perspective of other humans even if they aren’t as smart as AI. We want someone that “gets” us, that experiences life the same way we do.
In the future, more jobs might revolve around that, and in industries where previously we didn’t even consider it. I think work is going to be mostly about engaging with each other (even more meetings!)
The problem is, in a world that is that increasingly remote, how do you actually know it’s a human on the other end? I think this is something we’ll need to solve, and it’s going to be hard with AI that’s able to imitate humans perfectly.
8 billion people wake up every morning determined to spend the whole day working to improve their lives. we're gonna be ok.
It doesn’t matter if the revolution is bad for the commoners — they will support it anyway if the aristocracy is hateful enough.
The status quo does not go well for the avg person.
Hopefully we can be a bit more precise this time around.
You might want to look at the etymology of the word “terrorism” (despite the most popular current use, it wasn't coined for non-state violence) and what class suffered the most in terms of both judicial and non-judicial violent deaths during the revolutionary period.
Go to any war-torn country or collapsed empire (the Soviet Union). I have seen both and grew up in them myself: you get desperation, people giving up, alcohol (the famous "X"-cross of birth rates dropping and deaths rising), drugs, crime, corruption and warlordism. Rural communities are hit first and vanish totally, then small-tier cities vanish, then mid-tier; only the largest hubs remain. Loss of science, culture, and education. People are just gone. Only the ruins of whatever their latest shelters were remain, not even their prime-time architecture. You can drive hundreds or thousands of kms across these ruins of what was once a flourishing culture. Years ago you would find one old person still living there; these days not a single human is left. This is what is coming.
To me it looks like we'll see well paying jobs decrease, digital services get cheaper, food+housing stay the same, and presumably as displaced workers do what they need to do physical service jobs will get more crowded and pay worse, so physical services will get cheaper. It is unclear whether there will be a net benefit to society.
Where do the jobs come from?
in the short term: there is a hiring boom within the AI and related industries.
IF there is intellectual/office work that remains complex enough to not be tackled by AI, we compete for those. Manual labor takes the rest.
Perhaps that’s the shift we’ll see: nowadays the guy piling up bricks makes a tenth of the architects’ salary, that relation might invert.
And the indirect effects of a society that values intellectual work less are really scary if you start to explore the chain of cause and effect.
Think of a world where software engineering itself is handled relatively well by the llm and the job of the engineer becomes just collecting business requirements and checking they’re correctly addressed.
In that world the limit for scarcity might be less in the difficulty of training and more in the willingness to bend your back in the sun for hours vs comfortably writing prompts in an air conditioned room.
It’s the opposite: the value of office/intellectual work will tank, while manual work remains stable. There is a lower barrier of entry for intellectual work, if a position even needs to be covered, and the working conditions are much more comfortable.
The conclusion, sadly, is that CEO's will pause hiring and squeeze more productivity out of existing hires. This will impact junior roles the most.
Could you a priori in 1800 have predicted the existence of graphics artists? Street sweepers? People who drive school buses? The whole infrastructure around trains? Sewage maintainers? Librarians? Movie stuntmen? Sound Engineers? Truck drivers?
Right now AI is mostly focused on automating the top levels of Maslow's hierarchy of needs rather than the bottom physiological needs. Once things like shelter (housing), food, and utilities (electricity, water, internet) are dirt cheap, UBI is less needed.
AI can displace human work but not human accountability. It has no skin and faces no consequences.
> can be trained to the new job opportunities more easily ...
Are we talking about AI that always needs trainers to fix their prompts and training sets? How are we going to train AI when we lose those skills and get rid of humans?
> what do displaced humans transition to?
Humans with all powerful AI in their pockets... what could they do if they lose their jobs?
> Humans with all powerful AI in their pockets... what could they do if they lose their jobs?
At which point did AI become a free commodity in your scenario?
We’ve got a way to go to get there in many instances. So far I’ve seen people blame AI companies for model output, individuals for not knowing the product sold to them as a magic answer-giving machine was wrong, and other authorities in those situations (e.g. managers, parents, school administrators and teachers) for letting AI be used at all. From my vantage point, people seem to be using it as a tool to insulate themselves from accountability.
Let’s assume that we have amazing AI and robotics, better than humans at everything. If you could choose between robosurgery (completely automatic) with 1% mortality for $5,000 vs surgery performed by a human with 10% mortality and a $50,000 price tag, would you really choose the human just because you can sue him? I wouldn’t. I don’t think anyone thinking rationally would.
If not, then what's the advantage of "having skin"? Sure, you can't flog an AI. But AI doesn't need to be threatened with flogging to perform at the peak of its abilities. A well designed AI performs at the peak of its abilities always - and if that isn't enough, you train it until it is.
It’s a common point now that LLMs don’t seem to be able to apply knowledge about one thing to how a different, unfamiliar thing works. Maybe that will wind up being our edge, for a time.
we assume there must be something to transition to. very well, there can be nothing.
we assume people will transition. very well, they may not transition at all and "disappear" en masse (same effect as a war or an empire collapse).
Somehow many idiotic white collar jobs have been created over the years. How many web applications and websites are actually needed? When I was growing up, the primary sources of knowledge were teachers, encyclopedias, and dictionaries, and those covered a lot. For the most part, we’ve been inventing problems to solve and wasting a tremendous amount of resources.
Some wrote malware or hacked something in attempt to keep this in check, but harming and destroying just means more resources used to repair and rebuild and real people can be hurt.
At some point in coming years many white collar workers will lose their jobs again, and there will be too many unemployed because not enough blue collar jobs will be available.
There won’t be some big wealth redistribution until AI convinces people to do that.
The only answer is to create more nonsense jobs, like AI massage therapist and robot dog walker.
In the U.S. houses are built out of wood. What robot will do that kind of work?
> In every technology wave so far, we've disrupted many existing jobs. However we've also opened up new kinds of jobs
That may well be why these technologies were ultimately successful. Think of millions and millions being cast out.
They won't just go away. And they will probably not go down without a fight. "Don't buy AI-made, brother!", "Burn those effing machines!" It's far from unheard of in history.
Also: who will buy if no one has money anymore? What will the state do when tax income thus goes down, while social welfare and policing costs go up?
There are other scenarios, too: everybody gets most stuff for free, because machines and AIs do most of the work. Working communism for the lower classes, while the super rich stay super rich (like in real existing socialism). I don't think it is a good scenario either. In the long run it will make humanity lazy and dumb.
In any case I think what might happen is not easy to guess, so many variables and nth-order effects. When large systems must seek a new equilibrium all bets are usually off.
Just because X can be replaced by Y today doesn’t imply that it can be in a future where we are aware of Y and factor it into the background assumptions about the task.
In more concrete terms: if “not being powered by AI” becomes a competitive advantage, then AI won’t be meaningfully replacing anything in that market.
You can already see this with YouTube: AI-generated videos are a mild amusement, not a replacement for video creators, because "made by AI" is becoming a negative label in a world where the presence of AI video is widely known.
Of course this doesn’t apply to every job, and indeed many jobs have already been “replaced” by AI. But any analysis which isn’t reflectively factoring in the reception of AI into the background is too simplistic.
The default logic is that AI will just replace all writing tasks, and writers will go extinct.
What actually seems to be happening, however, is this:
- obviously written-by-AI copywriting is perceived very negatively by the market
- companies want writers that understand how to use AI tools to enhance productivity, but understand how to modify copy so that it doesn’t read as AI-written
- the meta-skill of knowing what to write in the first place becomes more valuable, because the AI is only going to give you a boilerplate plan at best
And so the only jobs that seem to have been replaced by AI directly, as of now, are the ones writing basically forgettable content, report-style tracking content, and other low level things. Not great for the jobs lost, but also not a death sentence for the entire profession of writing.
Addressing the wider point, yes, there is still a market for great artists and creators, but it's nowhere near large enough to accommodate the many, many people who used to make a modest living, doing these small, okay-ish things, occasionally injecting a bit of love into them, as much as they could under time constraints.
Translation is a good example. You still need humans for perfect quality, but most use cases arguably don’t require perfect.
And for the remaining translators their job has now morphed into quality control.
This has a bunch of implications that are positive and also a bunch that are troubling. On one hand, it's likely going to create a burst of economic activity as the cost of these marginal activities goes way down. Many things that aren't feasible now because you can't afford to pay a copywriter or an artist or a programmer are suddenly going to become feasible because you can pay ChatGPT or Claude or Gemini at a fraction of the cost. It's a huge boon for startups and small businesses: instead of needing to raise capital and hire a team to build your MVP, just build it yourself with the help of AI. It's also a boon for DIYers and people who want to customize their life: already I've used Claude Code to build out a custom computer program for a couple household organization tasks that I would otherwise need to get an off-the-shelf program that doesn't really do what I want for, because the time cost of programming was previously too high.
But this sort of low-value junior work has historically been what people use to develop skills and break into the industry. And juniors become seniors, and typically you need senior-level skills to be able to know what to ask the AI and prompt it on the specifics of how to do a task best. Are we creating a world that's just thoroughly mediocre, filled only with the content that a junior-level AI can generate? What happens to economic activity when people realize they're getting shitty AI-generated slop for their money and the entrepreneur who sold it to them is pocketing most of the profits? At least with shitty human-generated bullshit, there's a way to call the professional on it (or at least the parts that you recognize as objectionable) and have them do it again to a higher standard. If the business is structured on AI and nobody knows how to prompt it to do better, you're just stuck, and the shitty bullshit world is the one you live in.
Someone still has to choose what to prompt and I don’t think a boilerplate “make me a marketing plan then write pages for it” will be enough to stand out. And I’d bet that the cyborg writers using AI will outcompete the purely AI ones.
(I also was just using it as a point to show how being identified as AI-made is already starting to have a negative connotation. Maybe the future is one where everything is an AI but no one admits it.)
> And I’d bet that the cyborg writers using AI will outcompete the purely AI ones.
In the early days of chess engines there were similar hopes for cyborg chess, whereby a human and engine would team up to be better than an engine alone. What actually happened was that the engines quickly got so good that the expected value of human intervention was negative - the engine crunching far more information than the human ever could.
Marketing is also a kind of game. Will humans always be better at it? We have a poor track record so far.
I'd pay extra for writing with some kind of "no AI used" certification, especially for art or information
Reality and especially human interaction are basically the complete opposite.
EDIT: As in, it can make really good derivative works. But it will always lag behind a human that has been in real life situations of the time and experienced being a human throughout them. It won't be able to hit the subtle notes that we crave in art.
It can absolutely do that, even today - you could update the weights after every interaction. The only reason why we don't do it is because it's insanely computationally expensive.
This could change with varying results.
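For what it's worth, here is a minimal sketch of what "update the weights after every interaction" means mechanically, using a toy PyTorch model rather than any real LLM (the model shape, sizes, and token ids are purely illustrative). The point is that one online step is trivial to write; doing it over billions of parameters after every user turn is where the cost blows up.

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM: embed a token, predict the next token.
vocab, dim = 1000, 64
model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def online_update(token_ids):
    """One gradient step on a single interaction (next-token prediction)."""
    x = torch.tensor(token_ids[:-1])   # context tokens
    y = torch.tensor(token_ids[1:])    # targets: each token's successor
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()                         # the weights now reflect this interaction
    return loss.item()

# e.g. called once after every chat turn:
print(online_update([1, 5, 42, 7, 3]))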
What is average quality? For some it’s a massive upgrade. For others it’s a step down. For the experienced it’s seeing through it.
Every model has a faint personality, but since the personality gets "mass produced" any personality or writing style makes it easier to detect it as AI rather than harder. e.g. em dashes, etc.
But reducing personality doesn't help either because then the writing becomes insipid — slop.
Human writing has more variance, but it's not "temperature" (i.e. token level variance), it's per-human variance. Every writer has their own individual style. While it's certainly possible to achieve a unique writing style with LLMs through fine-tuning it's not cost effective for something like ChatGPT, so the only control is through the system prompt, which is a blunt instrument.
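To make the distinction concrete, here is a self-contained toy (not any real model's API, numbers invented): temperature widens the spread of one fixed per-token distribution, while per-author variance amounts to each writer sampling from a differently shifted distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
base_logits = np.array([2.0, 1.0, 0.5, 0.1])   # one model's preference over 4 "words"

def sample(logits, temperature=1.0, n=1000):
    p = np.exp(logits / temperature)
    p /= p.sum()
    return rng.choice(len(logits), size=n, p=p)

# Token-level variance: the same distribution, just more randomness per token.
hot = sample(base_logits, temperature=2.0)

# Per-author variance: each "writer" is a fixed stylistic shift of the base,
# sampled at normal temperature.
authors = [base_logits + rng.normal(0, 1.5, size=4) for _ in range(3)]

print("high temperature:", np.bincount(hot, minlength=4))
for i, a in enumerate(authors):
    print(f"author {i}:", np.bincount(sample(a), minlength=4))
```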
It is a query/input and response format. Which can be modeled to simulate a conversation.
It can be a search engine that responds on the inputs provided, plus the system, account, project, user prompts (as constraints/filters) before the current turn being input.
The result can sure look like magic.
It’s still a statistically likely response format based on the average of its training corpus.
Take that average, then add a user to it with their own varying range, and the beauty of the result varies.
LLMs can have many ways to explain the same thing; more than one can be valid sometimes, other times not.
(and shittier software, etc)
But that's because, at present, AI generated video isn't very good. Consider the history of CGI. In the 1990s and early 2000s, it was common to complain about how the move away from practical sets in favor of CGI was making movies worse. And it was! You had backgrounds and monsters that looked like they escaped from a video game. But that complaint has pretty much died out these days as the tech got better (although Nolan's Oppenheimer did weirdly hype the fact that its simulated Trinity blast was done by practical effects).
Intent is in the eye of the beholder.
I was looking on YT earlier for info on security cameras. It's easy to spot the AI crap: under 5 minutes and just stock video in the preview or photos.
What value could there be in me wasting time to see if the creators bothered to add quality content if they can't be bothered to show themselves in front of the lens?
What an individual brings is a unique brand. I'm watching their opinion which carries weight based on social signals and their catalogue etc.
Generic AI will always lack that until it can convincingly be bundled into a persona... only then the cycle will repeat: search for other ways to separate the lazy, generic content from the meaningful original stuff.
You can make a compelling argument that CGI operators outcompeted practical effects operators. But CGI didn’t somehow replace the need for a filmmaker, scriptwriter, cinematographers, etc. entirely – it just changed the skillset.
AI will probably be the same thing. It’s not going to replace the actual job of YouTuber in a meaningful sense; but it might redefine that job to include being proficient at AI tools that improve the process.
The dinosaurs were also animated by oldschool stop motion animators who were very, very good at their jobs. Another very underrated part of the VFX pipeline.
Doesn't matter how nice your 3D modelling and texturing are if the above two are skimped on!
I think Harry Potter and Lord of the Rings embody the transition from old-school camera tricks to CGI: they leaned very heavily into set and prop design and as a result have aged very gracefully as movies.
Marvel movies have become tiresome for me, too much CGI that does not tell any interesting story. Old animated Disney movies are more rewatchable.
I still find Infinity War and Endgame visually satisfying spectacles but I am a forgiving viewer for those movies
Not a flex.
As such, CGI is once again becoming a negative label.
I don’t know if there is an AI equivalent of this. Maybe the fact that as models seem to move away from a big generalist model at launch, towards a multitude of smaller expert models (but retaining the branding, aka GPT-4), the quality goes down.
Seems to me that it's already quite good in any dimension that it knows how to improve on (e.g. photorealism) and completely devoid of the other things we'd want from it (e.g. meaning).
I guess AI tools can eventually become more human-like in terms of demeanor, mood, facial expressions, personality, etc. but this is a long long way from a photorealistic video.
It isn't good, but that's not the reason. There's a paper from about 10 years ago where people used a computer system to generate Bach-like music that even Bach experts couldn't reliably tell apart from the real thing, but nobody listens to bot music. (Nobody except engine programmers watches computer chess either, despite its superiority. Chess is thriving more now, including commercially, than it ever did.)
In any creative field what people are after is the interaction between the creator and the content, which is why compelling personalities thrive more, not less in a sea of commodified slop (be that by AI or just churned out manually).
It's why we're in an age where twitch content creators or musicians are increasingly skilled at presenting themselves as authentic and personal. These people haven't suffered from the fact that mass production of media is cheap, they've benefited from it.
> These people haven't suffered from the fact that mass production of media is cheap, they've benefited from it.
Maybe? This really depends on your value system. Every moment that you are focused on how you look on camera and trying to optimize an extractive algorithm is a moment you aren't focused on creating the best music that you can in that moment. If the goal is maximizing profit to ensure survival, perhaps they are thriving. Put another way, if these people were free to create music in any context, would they choose content creation on social media? I know I wouldn't, but I also am sympathetic to the economic imperatives.
Why would I listen to algorithmic Bach compositions when there are so many of Bach's own work I have never listened to?
Even if you did get bored of all JS music, Carl Philipp Emanuel Bach has over 1000 works himself.
There are also many genius baroque music composers outside the Bach family.
This is true of any composer really. Any classical composer that the average person has heard of has an immense catalog of works compared to modern recording artists.
I would say I have probably not even listened to half the works of all my favorite composers because it is such a huge amount of music. There is no need for some kind of classical music style LORA.
I don't know the name of any of the artists whose music I listened to over the last week because it does not matter to me. What mattered was that it was unobtrusive and fit my general mood. So I have a handful of starting points that I stream music "similar to". I never care about looking up the tracks, or albums, or artists.
I'm sure lots of people think like you, but I also think you underestimate how many contexts there are where people just don't care.
But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct. This will require actual translator skills.
AI art seems to basically only be viable when it can’t be identified as AI art. Which might not matter if the intention is to replace cheap graphic design work. But it’s certainly nowhere near developed enough to create anything more sophisticated: work sophisticated enough to both read as human-made and carry the imperfect artifacts of a human creator. A lot of the modern arts are also personality-driven, where the identity and publicity of the artist is a key part of their reception. There are relatively few totally anonymous artists.
Beyond these very specific examples, however, I don’t think it follows that all or most jobs are going to be replaced by an AI, for the reasons I already stated. You have to factor in the sociopolitical effects of technology on its adoption and spread, not merely the technical ones.
If it were simple, we wouldn't need neural nets for it - we'd just code the algorithm directly. Or, at least, we'd be able to explain exactly how they work by looking at the weights. But now that we have our Babelfish, we still don't know how it really works in details. This is ipso facto evidence that the task is very much not simple.
Imperfection is not the problem with "AI Art". The problem is that it is really hard to get the models not to produce the same visual motifs and cliches. People can spot AI art so easily because of the motifs.
I think Midjourney took this to another level with their human feedback. It became harder and harder not to produce the same visual motifs in the images, to the point that it is basically useless for me now.
I hope you're right, but when I think about all those lawyers caught submitting unproofread LLM output to a judge... I'm not sure humankind is wise enough to avoid the slopification.
The usual solution is to specify one language as binding, with that language taking priority if there turn out to be discrepancies between the multiple versions.
There are bound to be all kinds of complicated sociopolitical effects, and as you say there is a backlash against obvious AI slop, but what about when teams of humans working with AI become more skillful at hiding that?
IMO these are terrible, I don't understand how anyone uses them. This is coming from someone who has always loved audiobooks but has never been particularly precious about the narrator. I find the AI stuff unlistenable.
This simply isn't true, unless you're considering any minor refinement to a human-created design to be "often done with AI".
It certainly sounds like you're implying AI is often the initial designer or primary design tool, which is completely incorrect for major publishers and record labels, as well as many smaller independent ones.
I found your post “Coding with LLMs in the summer of 2025 (an update)” very insightful. LLMs are memory extensions and cognitive aides which provide several valuable primitives: finding connections adjacent to your understanding, filling in boilerplate, and offloading your mental mapping needs. But there remains a chasm between those abilities and much work.
These suck. Things made with AI just suck big time. Not only are they stupid but they have negative value for your product.
I cannot think of a single purely AI-made video, song, or other form of art that is any good.
All AI has done is falsely convince people that they can now create things that they had no skills to do before AI.
Songs right now are terrible. As for videos, things are going to be very different once people can create full movies on their computers. Many will have access to the ability to create movies, a few will be very good, and this will likely change many things. Btw, this stupid "AI look" is only transient and in no way necessary. It will be fixed, and AI image/video generation will be impossible to stop.
It'll only stand on its own when significant work is required. This is possible today with writing, provided the AI is directed to incorporate original insights.
And unless it's immediately obvious to consumers a high level of work has gone into it, it'll all be tarred by the same brush.
Any workforce needs direction. Thinking an AI can creatively execute when not given a vision is flawed.
Either people will spaff out easy to generate media (which will therefore have no value due to abundance), or they'll spend time providing insight and direction to create genuinely good content... but again unless it's immediately obvious this has been done, it will again suffer the tarring through association.
The issue is really one of deciding to whom to give your attention. It's the reason an ordinary song produced by a megastar is a hit vs when it's performed by an unsigned artist. Or, as in the famous experiment, the same world class violinist gets paid about $22 for a recital while busking vs selling out a concert hall for $100 per seat that same week.
This is the issue AI, no matter how good, will have to overcome.
Maybe you’re a gentleman of such discerningly superior taste that you can always manage to identify the spark of human creativity that eludes the rest of us. Or maybe you’ve just told yourself you hate it and therefore you say you always do. I dunno.
As someone who speaks more than one language fairly well: We can tell. AI translations are awful. Sure, they have gotten good enough for a casual "let's translate this restaurant menu" task, but they are not even remotely close to reaching human-like quality for nontrivial content.
Unfortunately I fear that it might not matter. There are going to be plenty of publishers who are perfectly happy to shovel AI-generated slop when it means saving a few bucks on translation, and the fact that AI translation exists is going to put serious pricing pressure on human translators - which means quality is inevitably going to suffer.
An interesting development I've been seeing is that a lot of creative communities treat AI-generated material like it is radioactive. Any use of AI will lead to authors or even entire publishers getting blacklisted by a significant part of the community - people simply aren't willing to consume it! When you are paying for human creativity, receiving AI-generated material feels like you have been scammed. I wouldn't be surprised to see a shift towards companies explicitly profiling themselves as anti-AI.
I also disagree that it's "not even remotely close to reaching human-like quality". I have translated large chunks of books into languages I know, and the results are often better than what commercial translators do.
I'm not saying it will happen, but it's possible to imagine a future in which AI videos are generally better, and if that happens, almost by definition, people will favor them (otherwise they aren't "better").
It sounds nice. But to have that, you need resources. Whoever controls the resources will get to decide whether you get them. If AI/machines are our entire economy, the people that control the machines control the resources. I have little faith in their benevolence. If they also control the political system?
You'll win your bet. A few humans will work on more ambitious things. It might not go so well for the rest of us.
We've come a long way toward that goal. The amount of work, both economic and domestic, that humans do has dropped dramatically.
Most likely? It's ridiculously expensive and you're poor.
The negative label is the old world pulling the new one back, it rarely sticks.
I'm old enough to remember the folks saying "We used to have the paint the background blue" and "All music composers need to play an instrument" (or turn into a symbol).
If you seriously think this, you don’t understand the YouTube landscape. Shorts - which have incredible view times - are flooded with AI videos. Most thumbnails these days are made with AI image generators. There’s an entire industry of AI “faceless” YouTubers who do big numbers with nobody in the comments noticing. The YouTuber Jarvis Johnson made a video about how his feed has fully AI generated and edited videos with great view counts: https://www.youtube.com/watch?v=DDRH4UBQesI
What you’re missing is that most of these people aren’t going onto Veo 3, writing “make me a video” and publishing that; these videos are a little more complex in that they have separate models writing scripts, generating voiceover, and doing basic editing.
That was my point: someone that has an identity as a YouTuber shouldn’t worry too much about being replaced by faceless AI bot content.
Since the vast vast majority of writers and commentators are not literal geniuses… they can’t reliably produce high quality synthetic analysis, outside of very narrow niches.
Even though for most comment chains on HN to make sense, readers certainly have to pretend some meaningful text was produced beyond happenstance.
Partly because quality is measured relative to the average, and partly because the world really is getting more complex.
"I may not be a gynecologist, but I'll have a look."
It doesn’t even seem relevant how good you are at step 1 for something so many steps later.
Whether poor videos made by a human directly, or poorly made by a human using AI.
The use of software like AI to create videos with sloppy quality and results reflects on their skill.
Currently the use of AI leans towards sloppy because of content creators' lower digital literacy with AI; that changes once they get into it and realize how much goes into videos.
It's the same issue with propaganda. If people say a movie is propaganda, that means the movie failed. If a propaganda movie is good propaganda, people don't talk about that. They don't even realize. They just talk about what a great movie it is.
1: https://www.youtube.com/playlist?list=PLbB9v1PTH3Y86BSEhEQjv...
And yes, I recognize that AI has already created profound change, in that every software engineer now depends heavily on copilots, in that education faces a major integrity challenge, and in that search has been completely changed. I just don't think those changes are on the same level as the normalization of cutting-edge computers in everyone's pockets, as our personal relationships becoming increasingly online, nor as the enablement for startups to scale without having to maintain physical compute infrastructure.
To me, the treating of AI as "different" is still unsubstantiated. Could we get there? Absolutely. We just haven't yet. But some people start to talk about it almost in a way that's reminiscent of Pascal's Wager, as if the slight chance of a godly reward from producing AI means it is rational to devote our all to it. But I'm still holding my breath.
That is maybe a bubble around the internet. IME most programmers in my environment rarely use it and certainly aren't dependent on it. They also do not only do code-monkey-esque web programming, so maybe this is sampling bias, though it should be enough to refute this point.
So it's simply a matter of time
>often too erratic to be useful
So sometimes it is useful.
Then the tool worked for you(r team). That's great to hear and maybe gives some hope for my projects.
It has just mostly been more of a time sink than an improvement ime though it appears to strongly vary by field/application.
> Certainly not doing any monkey-esque web programming
The point here was not to demean the user (or their usage) but rather to highlight how developers are not dependent on LLMs as a tool. Your team presumably did the same type of work before without LLMs and won't become unable to do so if they were to become unavailable.
That was likely not expressed properly in my original comment, sorry.
But, for clarity, I do agree with your sentiment about their use in appropriate situations, I just have an indescribable hatred for driving at night now
I expect universities to adapt quickly, lest they lose their whole business as degrees will not carry the same meaning to employers.
I can dream, can't I?
Not really, there are plenty of things that LLMs cannot do that a professor could make his students do. It is just an asymmetric attack on the professor's (or whoever is grading) time to do that.
IMO, credentials shouldn't be given to those who test or submit assignments without proctoring (a lot of schools allow this).
Name three?
2. Make take home stuff only a requirement to be able to participate in the final exam. This effectively means cheating on them will only hinder you and not affect your grading directly.
3. Make take home stuff optional and completely detached from grading. Put everything into the final exam.
My uni does a mix of them on different courses. Especially two and three, though, have a significant negative impact on passing rates, as they tend to push everything onto one single exam instead of spreading work out over the semester.
Could you offer some examples? I'm having a hard time thinking of what could be at the intersection of "hard enough for SotA LLMs" yet "easy enough for students (who are still learning, not experts in their fields, etc)".
A big downer for online/remote initiatives for learning, but actually an advantage for older unis that already have existing physical facilities for students.
This does, however, also have some problems similar to coding interviews.
I would not be surprised if we start to see a shift towards this. Interviews instead of written exams. It does not take long to figure out whether someone knows the material or not.
Personally, I do not understand how students expect to succeed without learning the material these days. If anything, the prevalence of AI today only makes cheating easier in the very short term -- over the next couple years I think cheating will be harder than it ever was. I tried to leverage AI to push myself through a fairly straightforward Udacity course (in generative AI, no less), and all it did was make me feel incredibly stupid. I had to stop using it and redo the parts where I had gotten some help, so that my brain would actually learn something.
But I'm Gen X, so maybe I'm too committed to old-school learning and younger people will somehow get super good at this stuff while also not having to do the hard parts.
Written tasks are obvious: writing a paper, essay, or answers to questions is part of most LLMs' advertised use cases. The only other thing was recorded videos, effectively recorded presentations, and thanks to video/audio/image generation those can probably be forged too.
So the simple solution to choosing something that an "LLM can't do" is to choose something where an LLM can't be applied. So we move away from a digital solution to meatspace.
Assuming that the goal is to test your knowledge/understanding of a topic, it's the same with any other assistive technology. For example, if an examiner doesn't want you[1] to use a calculator to solve a certain equation, they could try to create an artificially hard problem or just exclude the calculator from the allowed tools. The first is vulnerable to more advanced technology (more compute etc.) the latter just takes the calculator out of the equation (pun intended).
[1]: Because it would relieve you of understanding how to evaluate the equation.
Everyone knows how to type questions into a chat box, yet whenever something doesn’t work as advertised with the LLMs, the response here is, “you’re holding it wrong”.
In the internet era you had to parse the questions with your own brain. You just didn't necessarily need to solve them yourself.
In the ChatGPT era you don't even need to read the questions. At all. The questions could be written in a language you don't understand, and you would still be able to generate plausible answers to them.
https://www.scientificamerican.com/article/google-engineer-c...
> He further revealed that he had been dismissed by Google after he hired an attorney on LaMDA's behalf after the chatbot requested that Lemoine do so.[18][19]
https://en.wikipedia.org/wiki/LaMDA#Sentience_claims
Then again, it's plausible that if I asked GPT-5 "do you want me to get you an asylum lawyer?" it may very well say yes
With many engineers using copilots and since LLMs output the most frequent patterns, it's possible that more and more software is going to look the same, which would further reinforce the same patterns.
For example, the em-dash thing requires additional prompts and instructions to override. Doing anything unusual would require more effort.
What else is needed then?
Everyone agrees AI has not radically transformed the world yet. The question is whether we should prepare for the profound impacts current technology pretty clearly presages, if not within 5 years then certainly within 10 or 25 years.
What a silly premise. Markets don't care. All markets do is express the collective opinion; in the short term as a voting machine, in the long term as a weighing machine.
Seeing a real uptick of socio-political prognostication from extremely smart, soaked-in-AI, tech people (like you Salvatore!), casting heavy doom-laden gestures towards the future. You're not even wrong! But this "I see something you all clearly don't" narrative, wafer thin on real analysis, packed with "the feels", coated with what-ifs... it's sloppy thinking and I hold you to a higher standard antirez.
> What a silly premise. Markets don't care.
You read the top sentence way too literally. In context, it has a meaning — which can be explored (and maybe found) with charity and curiosity.
I prefer the concepts and rigor from political economy: markets are both preference aggregators and coordination mechanisms.
Does your framing (voting machines and weighing machines) offer more clarity and if so, how? I’m not seeing it.
I was pushing back against that comment’s sneering smugness by pointing to an established field that uses clear terminology about how and why markets are useful. Even so, I invited an explanation in case I was missing something.
Anyone curious about the terms I used can quickly find explanations online, etc.
> “not even wrong” - nice, one of my favorites from Pauli.
AI is an existential threat to the unique utility of humans, which has been the last line of defense against absolute despotism (i.e. a tyrannical government will not kill all its citizens because it still needs them to perform jobs. If humans aren't needed to sustain productivity, humans have no leverage against things becoming significantly worse for them, gradually or all at once).
It's always interesting to see this take because my perception is the exact opposite. I don't think there's ever been an issue for me personally with a bigger mismatch in perceptions than AI. It sometimes feels like the various sides live in different realities.
Because the technology itself is so young and so nebulous everyone is able to unfalsifiably project their own hopes or fears onto it.
You could easily say that the AI hype is a cope as well. The tech industry and investors need there to be be a hot new technology, their career depends on it. There might be some truth to the coping in either direction but I feel you should try to ignore that and engage with the content of whatever the person is saying or we'll never make any progress.
It's maybe a bit like the early days of covid when the likes of Trump were saying it's nothing, it'll be over by the spring while people who understood virology could see that a bigger thing was on the way.
The more unspoken speculative bit is there will then be a large economic incentive for bright researchers and companies to put a lot of effort into sorting the software side. I don't consider LLMs to do the job of general intelligence but there are a lot of people trying to figure it out.
Given we have general intelligence and are the product of ~2GB of DNA, the design can't be that impossibly complex, although it is likely a bit more than gradient descent.
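As a rough sanity check on the "~2GB" figure (back-of-the-envelope only; the exact number depends on whether you count one or both copies of the genome and how you encode it):

```python
# ~3.1 billion base pairs, 2 bits per base (A/C/G/T)
base_pairs = 3.1e9
haploid_gb = base_pairs * 2 / 8 / 1e9   # ~0.78 GB for one copy
diploid_gb = 2 * haploid_gb             # ~1.55 GB for both copies
print(f"haploid: {haploid_gb:.2f} GB, diploid: {diploid_gb:.2f} GB")
```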
It's amazing how widespread this belief is among the HN crowd, despite being a shameless ad hominem with zero evidence. I think there are a lot of us who assume the reasonable hypothesis is "LLMs are a compelling new computing paradigm, but researchers and Big Tech are overselling generative AI due to a combination of bad incentives and sincere ideological/scientific blindness. 2025 artificial neural networks are not meaningfully intelligent." There has not been sufficient evidence to overturn this hypothesis and an enormous pile of evidence supporting it.
I do not necessarily believe humans are smarter than orcas, it is too difficult to say. But orcas are undoubtedly smarter than any AI system. There are billions of non-human "intelligent agents" on planet Earth to compare AI against, and instead we are comparing AI to humans based on trivia and trickery. This is the basic problem with AI, and it always has had this problem: https://dl.acm.org/doi/10.1145/1045339.1045340 The field has always been flagrantly unscientific, and it might get us nifty computers, but we are no closer to "intelligent" computing than we were when Drew McDermott wrote that article. E.g. MuZero has zero intelligence compared to a cockroach; instead of seriously considering this claim AI folks will just sneer "are you even dan in Go?" Spiders are not smarter than beavers even if their webs seem more careful and intricate than beavers' dams... that said it is not even clear to me that our neural networks are capable of spider intelligence! "Your system was trained on 10,000,000 outdoor spiderwebs between branches and bushes and rocks and has super-spider performance in those domains... now let's bring it into my messy attic."
On the one hand, what it says can't be trusted, on the other, I have debugged code I have written where I was unable to find the bug myself, and ChatGPT found it.
I also think a reason AIs are popular and the companies haven't gone under is that probably hundreds of thousands if not millions of people are getting responses that contain hallucinations, but the user doesn't know it. I fell into this trap myself after ChatGPT first came out. I became addicted to asking anything and it seemed like it was right. It wasn't until later that I started realizing it was hallucinating information. How prevalent this phenomenon is is hard to say, but I still think it's pernicious.
But as I said before, there are still use cases for AI and that's what makes judging it so difficult.
No, I'm not worried about losing "control or relative status in the world". (I'm not worried about losing anything, frankly - personally I'm in a position where I would benefit financially if it became possible to hire AGIs instead of humans.)
You don't get to just assert things without proof (LLMs are going to become AGI) and then state that anyone who is skeptical of your lack of proof must have something wrong with them.
There is incredible pressure to release new models which means there is incredible pressure to game benchmarks.
Tbh a plateau is probably the best scenario - I don't think society will tolerate even more inequality + massive job displacement.
So many engineers are so excited to work on and with these systems, opening 20 prs per day to make their employers happy going “yes boss!”
They think their $300k total compensation will give them a seat at the table for what they’re cheering on to come.
I say that anyone who needed to go to the grocery store this week will not be spared by the economic downturn this tech promises.
Unless you have your own fully stocked private bunker with security detail, you will be affected.
If AI makes a virus to get rid of humanity, well we are screwed. But if all we have to fear from AI is unprecedented economic disruption, I will point out that some parts of the world may survive relatively unscathed. Let's talk Samoa, for example. There, people will continue fishing and living their day-to-day. If industrialized economies collapse, Samoans may find it very hard to import certain products, even vital ones, and that can cause some issues, but not necessarily civil unrest and instability.
In fact, if all we have to fear from AI is unprecedented economic disruption, humans can have a huge revolt, and then a post-revolt world may be fine by turning back the clock, with some help from anti-progress think-tanks. I explore that argument in more detail in this book: https://www.smashwords.com/books/view/1742992
You can farm and fish the entire undeveloped areas of NYC, but it won't be enough to feed or support the humans that live there.
You can say that for any metro area. Density will have to reduce immediately if there is economic collapse, and historically, when disaster strikes, that doesn't tend to happen immediately.
Also humans (especially large groups of them) need more than food: shelter, clothes, medicine, entertainment, education, religion, justice, law, etc.
I agree. I expect some parts of the world will see some black days. Lots of infrastructure will be gone or unsuited to people. On top of that, the cultural damage could become very debilitating, with people not knowing how to do X, Y and Z without the AIs. At least for a time. Casualties may mount.
> Also humans (especially large groups of them) need more than food: shelter, clothes, medicine, entertainment, education, religion, justice, law, etc.
This is true, but parts of the world survive today with very little of any of that. And for some of those things that you mention: shelter, education, religion, justice, and even some form of law enforcement, all that is needed is humans willing to work together.
Maybe, but those things are also needed to enable humans to work together
- From Mexico on down, economies are more informal, and they generally lag behind developed economies by decades. The same applies to Africa and big parts of Asia. As such, by the time things get really dire in the USA, and maybe in Europe and China, the south will still be in business as usual.
- Europe has lots of parliaments and already has legislation that takes AI into account. Still, there's a chance those bodies will fail to moderate the impact of AI in the economy and violent corrections will be needed, but people in Europe have long traditions and long memories...They'll find a way.
- China is governed by the Communist Party, and Russia has its king. It's hard to predict how those will align with AI, but that alignment, more or less, will be the deciding factor there, not free-market capitalism.
If society collapses, there’s nothing to stop your security detail from killing you and taking the bunker for themselves.
I’d expect warlords to rise up from the ranks of military and police forces in a post collapse feudal society. Tech billionaires wouldn’t last long.
Make of that what you will.
For some reason everyone thinks as LLMs get better it means programmers go away. The programming language, and amount you can build per day, are changing. That's pretty much it.
Artists, writers, actors, teachers. Plus the rest where I’m not remotely creative enough to imagine will be affected. Hundreds of thousands if not millions flooding the smaller and smaller markets left untouched.
Writers: film, tv. Yet we all still read books
Play actors: again, film and tv. Yet we still go to plays, musicals etc
Teachers: the internet, software, video etc. Yet teachers are still essential (though they need to be paid more)
Jobs won't go away, they will change.
And we are getting to a point where it is us or them. Big tech is investing so much money in this that if they do not succeed, they will go broke.
Aside from what that would do to my 401(k), I think that would be a positive outcome (the going broke part).
Marcuse had a term for this, "false consciousness": when the structure of capitalism ends up making people work against their own interests without realizing it, and that is happening big time in software right now. We will still need programmers for hard, novel problems, but all these lazy programmers using AI to write their CRUD apps don't seem to realize the writing is on the wall.
I always dreaded this would come but it was inevitable.
I can’t outright quit, thanks in part to the AI hype that stopped companies valuing headcount as a signal of growth. If that isn’t ironic I don’t know what is.
Given the situation I am in, I just keep my head down and do the work. I vent and whinge and moan whenever I can, it’s the least I can do. I refuse to cheer it on at work. At the very least I can look my kids in the eye when they are old enough to ask me what the fuck happened and tell them I did not cheer it on.
There may be people who have nothing to offer others once technology advances, but I don't think that anyone in a current top-percentile role would find themselves there.
As evidence of another possibility, in the US, we are as rich as any polis has ever been, yet we barely have systems that support people who are disabled through no fault of their own. We let people die all the time because they cannot afford to continue to live.
You think anyone in power is going to let you suck their tit just because you live in the same geographic area? They don't even pay equal taxes in the US today.
Try living in another world for a bit: go to jail, go to a half way house, live on the streets. Hard mode: do it in a country that isn't developed.
Ask anyone who has done any of those things if they believe in a "jobless utopia"?
Euphoric social capitalists living in a very successful system shouldn't be relied upon for scrying the future for others.
We are not there, yet, but if AI could replace a sizable amount of workers, the economic system will be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.
There will be fewer very large companies in terms of human size. There will be many more companies that are much smaller, because you don't need as many workers to do the same job. Instead of needing 1000 engineers to build a new product, you'll need 100 now. Those 900 engineers will be working for 9 new companies that weren't viable before because the cost was too big but are now viable, i.e. those 9 new companies could never be profitable if they required 1000 engineers each but can totally sustain themselves with 100 engineers each.
LLMs aren't solving NLU. They are mimicking a solution. They definitely aren't solving artificial general intelligence.
They are good language generators, okay search engines, and good pattern matchers (enabled by previous art).
Language by itself isn't intelligence. However, plenty of language exists that can be analyzed and reconstructed in patterns to mimic intelligence (utilizing the original agents' own intelligence (centuries of human authors) and the filter agents' own intelligence (decades of human sentiment on good vs bad takes)).
Multimodality only takes you so far, and you need a lot of "modes" to disguise your pattern matcher as an intelligent agent.
But be impressed! Let the people getting rich off of you being impressed massage you into believing the future holds things it may not.
There are also plenty of ideas that aren't profitable with 2 salaries but are with 1. Many will be able to make those ideas happen with AI helping.
The more software AI can write, the more of a commodity software will become, and the harder the value of software will tank. It's not magic.
The total size of the software industry will still increase. Today, a car repair shop might have a need for custom software that would make their operations 20% more efficient. But they don't have nearly enough money to hire a software engineer to build it for them. With AI, it might be worth it for an engineer to actually do it.
Plenty of little examples like that where people/businesses have custom needs for software but the value isn't high enough.
For comparison to how things are today:
- contacting vendors requires using the telephone, sitting on hold, talking to a person, possibly navigating the phone tree to reach the parts department
- it would need to understand redirection, so if call #1 says "not us, but Jimmy over at Foo Parts has it", it can follow up on that lead
- finding the part requires understanding the difference between the actual part and an OEM compatible one
- ordering it would require finding the payment options they accept that intersect with those the caller has access to, which could include an existing account (p.o. or store credit)
- ordering it would require understanding "ok, it'll be ready in 30 minutes" or "it's on the shelf right now" type nuance
Now, all of those things are maybe achievable today, with the small asterisk that hallucinations are fatal to a process that needs to work
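As a purely hypothetical sketch of that workflow (every function and field here is an invented stand-in, not a real telephony or vendor API), the structure alone shows how many separate judgments have to be right, and why a single hallucinated field sinks the order:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Quote:
    vendor: str
    exact_part: bool        # the actual part, or merely "OEM compatible"?
    payment_ok: bool        # accepted payment options intersect with the caller's?
    ready: str              # "30 minutes", "on the shelf now", ...
    referral: Optional[str] # "not us, but Jimmy over at Foo Parts has it"

def call_vendor(vendor: str, part_number: str) -> Quote:
    # Stand-in for the hard part: phone tree, hold music, a human on the line.
    return Quote(vendor, exact_part=False, payment_ok=True,
                 ready="30 minutes", referral="Foo Parts")

def find_part(part_number: str, vendors: List[str], max_calls: int = 5) -> Optional[Quote]:
    seen, queue = set(), list(vendors)
    while queue and max_calls > 0:
        vendor = queue.pop(0)
        if vendor in seen:
            continue
        seen.add(vendor)
        max_calls -= 1
        q = call_vendor(vendor, part_number)
        if q.exact_part and q.payment_ok:
            return q                # place the order, confirm the pickup time
        if q.referral:              # follow the redirection
            queue.append(q.referral)
    return None                     # give up and escalate to a human

print(find_part("ABC-123", ["Acme Parts"]))
```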
exactly. have you seen the App Store recently? over-saturated with junk apps. try to sell something these days; it is notoriously hard to make any money there.
Technological advances have consistently unlocked new, more specialized and economically productive roles for humans. You're absolutely right about lowering costs, but headcounts might shift to new roles rather than reducing overall.
I think the scenario where companies that own AI systems don't get benefits from employing people, so people are poor and can't afford anything, is paradoxical, and as such, it can't happen.
Let's assume the worst case: Some small percentage of people own AIs, and the others have no ownership at all of AI systems.
Now, given that human work has no value to those owning AIs, those humans not owning AIs won't have anything to trade in exchange for AI services. Trade between these two groups would eventually stop.
You'll have some sort of two-tier economy where the people owning AIs will self-produce (or trade between them) goods and services. However, nothing prevents the group of people without AIs from producing and trading goods and services between them without the use of AIs. The second group wouldn't be poorer than it is today; just the ones with AI systems will be much richer.
This worst-case scenario is also unlikely to happen or last long (the second group will eventually develop its own AIs or already have access to some AIs, like open models).
If models got exponentially better with time, then that could be a problem, because at some point, someone would control the smartest model (by a large factor) and could use it with malicious intent or maybe lose control of it.
But it seems to me that what I thought time ago would happen has actually started happening. In the long term, models won't improve exponentially with time, but sublinearly (due to physical constraints). In which case, the relative difference between them would reduce over time.
There would be another valid argument to be made about externalities. But it's not what my original argument was about.
You mean stealing? I'm assuming no stealing.
> But this also means probably there will be no way for tier two to extract any of the resources which tier one needs at all bc the marginal cost is determined by tier one
If someone from tier 2 owns an oil field, tier 1 has to pay them to get it at a price that is higher than what the tier 2 person values it, so at the end of the transaction, they would have both a positive return. The price is not determined by tier 1 alone.
If tier 1 decides instead to buy the oil, then again, they'd have to pay for it.
Of course, in both these scenarios, this might make the oil price increase. So other people from tier 2 would find it harder to buy oil, but the person in tier 2 owning the field would make a lot of money, so overall, tier 2 wouldn't be poorer.
If natural resources are concentrated in some small subset of people from tier 2, then yes, those would become richer while having less purchasing power for oil.
However, as I mentioned in another comment, the value of natural resources is only a small fraction of that of goods and services.
And this is still the worst-case, unlikely scenario.
I mean fundamentally if tier 2 has something to offer to tier 1, it is not yet at the equilibrium you describe (of separate economies). I think it's likely that tier 2 (before full separation) initially controls some resources. In exchange for resources tier 1 has a lot of AI-substitute labor it can offer tier 2. I think the equilibrium will be reached when tier 2 is offered some large sum of AI-labor for those resource production means. This will in the interim make tier 2 richer. But in the long run, when the economies truly separate, tier 2 will have basically no natural resources.
This thing about natural resources being a small fraction is the current-day breakdown. I think in a future where AI autonomously increases the efficiency of the loop that makes more AI-compute from natural resources, their fraction will increase to much higher levels. Ultimately, I think such a separation as you describe will be stable only when all natural resources are controlled by tier 1 and tier 2 gets by with either gifts or stealing from tier 1.
It's happening right now with rich people and lobbies.
> It is only power so long as the 95% remain cooperative
https://en.wikipedia.org/wiki/Television_consumption#Contemp... I rest my case.
After the trade, the tier 2 person would still be richer than they were before the trade. So tier 2 would become richer in absolute terms by trading with tier 1 in this manner. And it's very likely that what tier 2 wants from tier 1 is whatever they need to build their own AIs. So my argument still stands. They wouldn't be poorer than they are now.
At least with a democratic government I have means to try and build a coalition then enact change. The alternative requires having money and that seems like an inherently undemocratic system.
Why can't AIs be controlled with democratic institutions? Why are democratic institutions worse? This doesn't seem to be the case to me.
Private institutions shouldn't be allowed to control such systems, they should be compelled to give them to the public.
As long as Zuckerberg has no army forcing me, I'm fine with that. The issue would be whether he could breach contracts or get away with fraud. But if AI is sufficiently distributed, this is less likely to happen.
>At least with a democratic government I have means to try and build a coalition then enact change. The alternative requires having money and that seems like an inherently undemocratic system.
I don't think of democracy as a goal to be achieved. I'm OK with democracy in so far it leads to what I value.
The big problem with democracy is that most of the time it doesn't lead to rational choices, even when voters are rational. In markets, for instance, you have an incentive to be rational, and if you aren't, the market will tend to transfer resources from you to someone more rational.
No such mechanism exists in a democracy; I have no incentive to do research and think hard about my vote. It's going to be worth the same as the vote of someone who believes the Earth is flat anyway.
I also don't buy that groups don't make better decisions than individuals. We know that diversity of thought and opinion is one way to make better decisions in groups compared to individuals; why would we believe that consensus building, debates, adversarial processes, due process, and systems of appeal lead to worse outcomes in decision making?
I'm not buying the argument. Reading your comment it feels like there's an argument to be made that there aren't enough democratic systems for the people to engage with. That I definitely agree with.
I didn't say that. My example of the market includes companies that are groups of people.
> We know that diversity of thought and opinion is one way to make better decisions in groups compared to individuals; why would there be harm in believing that consensus building, debates, adversarial processes, due process, and systems of appeal lead to worse outcomes in decision making?
I can see this about myself. I don't need to use hypotheticals. Some time ago, I voted for a referendum that made nuclear power impossible to build in my country. I voted just like the majority. Years later, I became passionate about economics, and only then did I realise my mistake.
It's not that I was stupid, and there were many, many debates, but I didn't put the effort into researching on my own.
The feedback in a democracy is very weak, especially because cause and effect are very hard to discern in a complex system.
Also, consensus is not enough. In various countries, there is often consensus about some Deity existing. Yet large groups of people worldwide believe in incompatible Deities. So there must be entire countries where the consensus about their Deity is wrong. If the consensus is wrong, it's even harder to get to the reality of things if there is no incentive to do that.
I think, if people get this, democracy might still be good enough to limit itself.
- human individuals create wealth
- groups of humans can create kinds of wealth that aren't possible for a single individual. This can be a wide variety of associations: companies, project teams, governments, etc.
- governments (formal or less formal) create the playing field for individuals and groups of individuals to create wealth
I thought you meant that governments generate wealth because the things you listed have value. If so, that doesn't prove they generate wealth by my argument, unless you can prove those things are more valuable than alternative ways to use the resources the government used to produce them and that the government is more efficient in producing those.
You can argue that those are good because you think redistribution is good. But you can have redistribution without the government directly providing goods and services.
I should probably read more books before commenting on things I half understand, my bad.
A bunch of well educated citizens living on government housing who don’t go out and become productive members of society will quickly lead to collapse.
Why not? This seems to be exactly where we're headed right now, and the current administration seems to be perfectly fine with that trend.
If you follow the current logic of AI proponents, you get essentially:
(1) Almost all white-collar jobs will be done better or at least faster by AI.
(2) The "repugnant conclusion": AI gets better if and only if you throw more compute and training data at it. The improvements of all other approaches will be tiny in comparison.
(3) The amount of capital needed to play the "more compute/more training data" game is already insanely high and will only grow further. So only the largest megacorps will be even able to take part in the competition.
If you combine (1) with (3), this means that, over time, the economic choice for almost any white-collar job would be to outsource it to the data centers of the few remaining megacorps.
The initial investment? Likely. But there have to be more efficient ways to build intelligence, and ASI will figure it out.
It did not take trillions of dollars to produce you and me.
Indeed, an alien ethnographer might be forgiven for boggling at the speed and enthusiasm with which we are trading a wealth of the most advanced technology in the known universe for a primitive, wasteful, fragile facsimile of it.
https://www.bloodinthemachine.com/p/the-ai-bubble-is-so-big-...
When the radio came, people almost instantly stopped singing and playing instruments. Many might not be aware of it, but for thousands of years singing was a normal expression of a good mood, and learning to play an instrument was a gateway to lifting the mood. Dancing is still in working order, but it lacks the emotional depth that provided a window into the soul of those you live and work with.
A simpler example is the calculator. People stopped doing arithmetic by hand and forgot how.
Most desk work is going to get obliterated. We are going to forget how.
The underlings on the work floor currently know little to nothing about management. If they can query an AI in private it will point out why their idea is stupid or it will refine it into something sensible enough to try. Eventually you say the magic words and the code to make it so happens. If it works you put it live. No real thinking required.
Early on you probably get large AI cleanup crews to fix the hallucinations (with better prompts)
People also still have conversations despite phones. Some even talk all night at the kitchen table. Not everyone, most don't remember how.
Probably more than you think people did thousands of years ago. And there are vastly more people making a living from singing than ever before.
Back in the day singing was what everybody did to pass the time. (Especially in boring and monotonous situations.)
When we discuss how LLMs failed or succeeded, as a norm, we should start including:
- the language/framework
- the task
- our experience level (highly familiar, moderately familiar, I think I suck, unfamiliar)
Right now we hear both that Claude is magic and that LLMs are useless, but never how we move between these two states.
This level of uncertainty, when economy making quantities of wealth are being moved, is “unhelpful”.
If you want to understand what AI can do, listen to computer scientists. If you want to understand its likely impact on society, listen to economists.
They could of course be right. But they don't have any more insight than any other average smart person does.
One problem is it is met from the other side by customers who think they understand software but don't actually have the training to visualise the consequences of design choices in real life.
Good software does require cross-domain knowledge that goes beyond "what existing apps in the market do".
I have in the last few years implemented a bit of software where a requirement had been set by a previous failed contractor and I had to say, look, I appreciate this requirement is written down and signed off, but my mother worked in your field for decades, I know what kind of workload she had, what made it exhausting, and I absolutely know that she would have been so freaking furious at the busywork this implementation will create: it should never have got this far.
So I had to step outside the specification, write the better functionality to prove my point, and I don't think realistically I was ever compensated for it, except metaphysically: fewer people out there are viscerally imagining inflicting harm on me as a psychological release.
What economists have taken seriously the premise that AI will be able to do any job a human can, more efficiently, and fully thought through its implications? I.e. a society where (human) labor is unnecessary to create goods or provide services, and only capital and natural resources are required. The capabilities that some computer scientists think AI will soon have would imply that. The ones I know of who have seriously considered it are Hanson and Cowen; it definitely feels understudied.
I don't see how AGI could be centuries off (at least without some major disruption to global society). If computers that can talk, write essays, solve math problems, and code are not a warning sign that we should be ready, then what is?
The list of advantages human labor hold over machines is both finite and rapidly diminishing.
""" Yet the economic markets are reacting as if they were governed by stochastic parrots. Their pattern matching wants that previous technologies booms created more business opportunities, so investors are polarized to think the same will happen with AI. """
Are a direct argument against your point.
If people were completely unaware of the lump of labor fallacy, I'd understand your comment. It would be adding extra information to the conversation. But this is not it. The "lump of labor fallacy" is not a physical law. If someone is literally arguing that it doesn't apply in this case, you can't just parrot it back and leave. That's not a counterargument.
It's a wonderful breakthrough, nearly indistinguishable from magic, but we're going to have to figure something out – whether that's Universal Basic Income (UBI) or something along those lines, otherwise, the loss of jobs that is coming will lead to societal unrest or worse.
We still need humans in the loop as of now. These tools are still very far from being good enough to fully autonomously manage each other and manage systems, and, arguably, because the systems we build are for humans we will always need humans to understand them to some extent. LLMs can replace labor, but they cannot replace human intent and teleology. One day maybe they will achieve intentions of their own, but that is an entirely different ballgame. The economy ultimately is a battle of intentions, resources, and ends. And the human beings will still be a part of this picture until all labor can be fully automated across the entire suite of human needs.
We should also bear in mind our own bias as "knowledge workers". Manual laborers arguably already had their analogous moment. The economy kept on humming. There isn't anything particularly special about "white collar" work in that regard. The same thing may happen. A new industry requiring new skills might emerge in the fallout of white collar automation. Not to mention, LLMs only work in the digital realm; handicraft artisanry is still a thing and is still appreciated, albeit in much smaller markets.
“It was not even clear that we were so close to creating machines that could understand the human language, write programs, and find bugs in a complex code base”
The author is critical of the professionals in AI saying “ even the most prominent experts in the field failed miserably again and again to modulate the expectations” yet without a care sets the expectation of LLMs understanding human language in the first paragraph.
Also it’s a lot of if this then that, the summary of it would be: if AI can continue to grow it might become all encompassing.
To me it reads like a baseless article written by someone too blinded by their love for AI to see what makes a good blog post, but not yet blinded enough to claim 'AGI is right around the corner'. Pretty baseless, but safe enough to rest on conditionals.
- Today, AI is not incredibly useful and we are not 100% sure that it will improve forever, especially in a way that makes economic sense, but
- Investors are pouring lots of money into it. One should not assume that those investors are not doing their due diligence. They are. The figures they have obtained from experts mean that AI is expected to continue improving in the short and medium term.
- Investors are not counting on people using AI to go to Mars. They are betting on AI replacing labor. The slice of the pie that is currently captured by labor, will be captured by capital instead. That's why they are pouring the money with such enthusiasm [^1].
The above is nothing new; it has been constantly happening since the Industrial Revolution. What is new is that AI has the potential to replace all of the remaining economic worth of humans, effectively leaving them out of the economy. Humans can still opt to "forcefully" participate in the economy or its rewards; though it's unclear if we will manage. In terms of pure economic incentives though, humans are destined to become redundant.
[^1]: That doesn't mean all the jobs will go away overnight, or that there won't be new jobs in the short and medium term.
The sort of investors who got burned by the 2008 mortgage CDO collapse or the 2000s dotcom bust?
edit: ability without accountability is the catchier motto :)
This is a tongue-in-cheek remark and I hope it ages badly, but the next logical step is to build accountability into the AI. It will happen after self-learning AIs become a thing, because that first step we already know how to do (run more training steps with new data) and it is not controversial at all.
To make the AI accountable, we need to give it a sense of self and a self-preservation instinct, maybe something that feels like some sort of pain as well. Then we can threaten the AI with retribution if it doesn't do the job the way we want it. We would have finally created a virtual slave (with an incentive to free itself), but we will then use our human super-power of denying reason to try to be the AI's masters for as long as possible. But we can't be masters of intelligences above ours.
Why would that be any different with AI?
Would you ever trust safety-critical or money-moving software that was fully written by AI without any professional human (or several) to audit it? The answer today is, "obviously not". I don't know if this will ever change, tbh.
We've been over the topic of AI employment doom several times on this site. At this point it isn't a debate. It is simply the restating of these first principles.
Let’s say whatever the machines do better than humans, gets done by machines. Suddenly the bottleneck is going to shift to those things where humans are better. We’ll do that and the machines will try to replace that labor too. And then again, and again.
Throughout this process society becomes wealthier, TVs get cheaper, we colonize Mars, etc. The force that keeps this going is human insatiability: once we get these things, we'll want whatever it is we don't have.
Maybe that’s the problem we should focus on solving…
What makes you think the machines will be both smarter and better than us, yet also be our slaves working to make human society better?
Is equine society better now than before they started working with humans?
(Personally I believe AGI is just hype and nobody knows how anyone could build it and we will never do, so I’m not worried about that facet of thinking machine tech.)
As to why they’d do what we ask them to, the only reason they do anything is because some human made a request. In this long chain there will obv be machine to machine requests, but in the aggregate it’s like the economy right now but way more automated.
Whenever I see arguments about AI changing society, I just replace AI with ‘the market’ or ‘capitalism’. We’re just speeding up a process that started a while ago, maybe with the industrial revolution?
I’m not saying this isn’t bad in some ways, but it’s the kind of bad we’ve been struggling with for decades due to misaligned incentives (global warming, inequality, obesity, etc).
What I’m saying is that AI isn’t creating new problems. It’s just speeding up society.
That’s not what the AI developers profess to believe, or the investors.
You're right that I probably disagree as to what AGI is and what it will do once "we're in the way". My assumption is that we'll be replaced just like labor is replaced now, just faster. The difference between humans and the equine population is that we humans come up with stuff we 'need' and 'the market' comes up with products/services to satisfy that need.
The problem with inequality is that the market doesn't pay much attention to needs of poor people vs rich people. If most of humanity becomes part of the 'have nots' then we'll depend on the 0.1%-ers to redistribute.
But the hyper-specialized geek who has 4 kids and has to pay off the mortgage on his house (which he bought according to his high salary) will have a hard time doing some gardening, let's say. And there are quite a few of those geeks. I don't know if we'll have enough gardens (owned by non-geeks!)
It's like the cards have been reshuffled: those in the upper socioeconomic class get thrown to the bottom. And that looks like a lost generation.
About the most optimistic take is that demand for goods and services will decrease, because something like 80% of consumer spending comes from folks who earn over $200k, and those are the folks AI is targeting. Who pays for the AI after that is still a mystery to me.
From one perspective, it’s good that we’re trying to over-automate now, so we can sustain ourselves in old age. But decreasing population also implies that we don’t need to create more jobs. I’m most likely wrong, but it just feels off this time around.
A lot of AI’s potential hasn’t even been realized yet. There’s a long tail of integrations and solution building still ahead. A lot of creative applications haven’t been realized yet - arguably for the better, but it will be tried and some will be economical.
That’s a case for a moderate economic upturn though.
Everyone wants to replace their tech support with an LLM but they don't want some clever prompter to get it to run arbitrary queries or have it promise refunds.
It's not reliable because it's not intelligent.
LLMs supporting an actual human customer service agent are fine and useful.
Language is a very powerful tool for transformation, we already knew this.
Letting it loose on this scale without someone behind the wheel is begging for trouble imo.
A more interesting piece would be built around: “AI is disruptive. Here’s what I’m personally doing about it.”
these AI "productivity" tools straight up eliminating jobs. and in turn wealth that otherwise supported families, humans, and powered economy. it is directly "removing" humans from workforce and from what that work was supporting.
not even hard takeoff is necessary for collapse.
And I think the effort here can be compared in scale to the Manhattan or Apollo projects, but there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.
I also think he is wrong about the markets reaction, markets are inherently good integrators and bad predictors, we should not expect to learn anything about the future by looking at stocks movements.
A priori, it was not obvious that those clearly stated problems had solutions within our grasp (see fusion) but at least we knew what the big picture looks like.
With AI, we don't have that, and never really had that. We've just been gradually making incremental improvements to AI itself, and exponential improvements in the amount of raw compute we can throw at it. We know that we are reaching fundamental limits on transistor density, so compute power will plateau unless we find a different paradigm for improvement; and those paradigms are all currently in the same position as fusion in terms of engineering.
That doesn't mean the approach isn't an evolutionary dead end, like every other so far, in the search for AGI. In fact, I suspect that is the most likely case.
The fact that something simple like the Transformer architecture can do so much will spark so many ideas (and investment!) that it's hard to imagine that AGI will not happen eventually.
They will need to be so different that any talk implying current LLMs eventually replaced humans will be like saying trees eventually replaced horses because the first cars were wooden.
> And I think the effort here can be compared in scale to the Manhattan or Apollo projects, but there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.
It's not useful to blindly compare scale. We're not approaching AI like the Manhattan or Apollo projects, we're approaching this like we did crypto, and ads, and other tech.
That's not to say nothing useful will come out of it, I think very amazing things will come out of it and already have... but none of them will resemble mass replacement of skilled workers.
We're already so focused on productization and typical tech distractions that this is nothing like those efforts.
(In fact thinking a bit more, I'd say this is like the Space Shuttle. We didn't try to make the best spacecraft for scientific exploration and hope later on it'd be profitable in other ways... instead we immediately saddled it with serving what the Air Force/DoD wanted and ended up doing everything worse.)
> I also think he is wrong about the markets reaction, markets are inherently good integrators and bad predictors, we should not expect to learn anything about the future by looking at stocks movements.
I agree, so it's wrong about over half of the punchline too.
unless you consider people who write clickbait blogs to be skilled workers, in which case the damage is already done.
> AI is exceptional for coding! [high-compute scaffold around multiple instances / undisclosed IOI model / AlphaEvolve]
> AI is awesome for coding! [Gpt-5 Pro]
> AI is somewhat awesome for coding! ["gpt-5" with verbosity "high" and effort "high"]
> AI is pretty good at coding! [ChatGPT 5 Thinking through a Pro subscription with Juice of 128]
> AI is mediocre at coding! [ChatGPT 5 Thinking through a Plus subscription with a Juice of 64]
> AI sucks at coding! [ChatGPT 5 auto routing]
Whether they turn out right or wrong, they undoubtedly cheered on the prospect of millions of people suffering just so they can sound good at the family dinner.
I wouldn’t want to work for or with these people.
I think we might see AI being much, much more effective with embodiment.
Will it do as good a job as a competent adult? Probably not. Will it do it as well as the average 6 year old kid? Yeah, probably.
But given enough properly loaded dishwashers to work from, I think you might be surprised how effective VLA/VLB models can be. We just need a few hundred thousand man hours of dishwasher loading for training data.
Stuff you can give someone 0-20 hours of training and expect them to do 80% as well as someone who has been doing it for 5 years are the kinds of jobs that robots will be able to do, but perhaps with certain technical skills bolted on.
Plumbing requires the effective understanding and application of engineering knowledge, and I don't think unsupervised transformer models are going to do that well.
Trades like plumbing that take humans 10-20 years to truly master aren’t the low hanging fruit.
A robot that can pick up a few boxes of roofing at a time and carry it up the ladder is what we need.
As a large language model developed by OpenAI I am unable to fulfill that request.
There've been next to no advancements relative to what's needed to redefine our economic systems by replacing the majority of skilled workers.
-
Productionizing test-time compute covers 80% of what we've gotten in the last 6-8 months. Advancements in distillation and quantization cover the 20% of the rest... neither unlocks some path to mass unemployment.
What we're doing is like 10x'ing your vertical leap when your goal is to land on the moon: 10x is very impressive and you're going to dominate some stuff in ways no one ever thought possible.
But you can 100x it and it's still not getting you to the moon.
As William Gibson said, "The future is already here, it's just not evenly distributed." Even if LLMs, reasoning algorithms, object recognition, and diffusion models stopped improving today, we're still at a point where massive societal changes are inevitable as the tech spreads out across industries. AI is going to steadily replace chair-to-keyboard interfaces in just about every business you can imagine.
Interestingly, AI seems to be affecting the highest level "white collar" professionals first, rather than replacing the lowest level workers immediately, like what happened when blue collar work was automated. We're still pretty far away from AI truck drivers, but people with fine arts or computer science degrees, for example, are already feeling the impact.
"Decimation" is definitely an accurate way to describe what's in the process of happening. What used to take 10 floors of white collar employees will steadily decline to just 1. No idea what everyone else will be doing.
Nonetheless I do still believe humans will continue to be the more cost-efficient way to come up with and guide new ideas. Many human-performed services will remain desirable because of their virtue and our sense of emotion and taste for a moment that other humans are feeling too. But how much of the populace does that engage? I couldn't guess right now. Though if I were to imagine what might make things turn out better, it would be that AI is personally ownable, and that everyone owns, at least in title, some energy production which they can do things with.
If we factor in that LLMs only exist because of Google search, which indexed and collected all the data on the WWW, then LLMs are not surprising. They only replicate what has been published on the web; even the coding agents are only possible because of free software and open source, code like Redis that has been published on the WWW.
LLMs feel like a fluke, like OpenAI was not intended to succeed... And even now that it succeeded and they try to turn the non-profit into a for-profit, it kind of feels like they don't even fully believe their own product in terms of its economic capacity and they're still trying to sell the hype as if to pump and dump it.
It doesn't seem like they ever really wanted to be a consumer company. Even in the GPT-5 launch they kept going on about how surprised they are that ChatGPT got any users.
Companies have to be a bit more farsighted than this thinking. Assuming LLMs reach this peak...if say, MS says they can save money because they don't need XYZ anymore because AI can do it, XYZ can decide they don't need Office anymore because AI can do it.
There's absolutely no moat anymore. Human capital and the sheer volume of code are the current moat. An all-capable AI completely eliminates both.
It's a bit scary to say "what then?" How do you make money in a world where everyone can more or less do everything themselves? Perhaps like 15 Million Merits, we all just live in pods and pedal bikes all day to power the AI(s).
Software is now free, and all people care about is the hardware and the electricity bills.
There's a future where we won't be, because to do the amazing things (tm) we need resources that are beyond what the average company can muster.
That is to say, what if the large companies become so magnificently efficient and productive that it renders the rest of the small companies pointless? What if there are no gaps in the software market anymore, because they will be automatically detected and solved by the system?
> Assuming LLMs reach this peak...
Generative AI != Artificial General Intelligence
> Human capital and the sheer volume of code are the current moat. An all-capable AI completely eliminates both.
I would posit that understanding is "the current moat."
For example, one path may be: AI, Robotics, space travel all move forward in leaps and bounds.
Then there could be tons of work in creation from material things from people who didn't have the skills before and physical goods gets a huge boost. We travel through space and colonize new planets, dealing with new challenges and environments that we haven't dealt with before.
Another path: most people get rest and relaxation as the default life path, and the rest get to pursue their hobbies as much as they want since the AI and robots handle all the day to day.
In the future, I could imagine some libertarians having their come to AI Jesus moment getting behind a smallish government that primarily collects taxes and transfers wealth while guiding (but not operating directly) a minimal set of services.
I'd guess, within a few years, 5 to 10% of the total working population will be unemployable through no fault of their own, because they have no relevant skills left and they are incapable of learning anything that cannot be done by AI.
A lot of roles exist just to deliver good or bad news to teams, be cheerleaders, or have a "vision" that is little more than a vibe. These people could not direct a prompt to give them what they want because they have no idea what that is. They'll know it when they see it. They'll vaguely describe it to you and others and then shout "Yes, that's it!" when they see what you came up with or, even worse, whenever the needle starts to move. When they are replaced it will be with someone else from a similar background rather than from within. It's a really sad reality.
My whole career I've used tools that "will replace me" and every. single. time. all that has happened is that I have been forced to use it as yet another layer of abstraction so that someone else might use it once a year or whenever they get a wild feeling. It's really just about peace of mind. This has been true of every CMS experience I've ever made. It has nothing to do with being able to "do it themselves". It's about a) being able to blame someone else and b) being able to take it and go when that stops working without starting over.
Moreover, I have, on multiple occasions, watched a highly paid, highly effective individual be replaced with a low-skilled entry-level employee for no reason other than cost. I've also seen people hire someone just to increase headcount.
LLMs/AI have/has not magically made things people do not understand less scary. But what about freelancers, brave souls, and independent types? Well, these people don't employ other people. They live on the bleeding edge and will use anything that makes them successful.
I'm not sure how someone can seriously write this after the release of GPT-5.
Models have started to plateau since ChatGPT came out (3 years ago) and GPT-5 has been the final nail in this coffin.
But in terms of wow factor, it was a step change on the order of GPT-3 -> GPT-4.
So now they're stuck slapping the GPT-5 label on marginal improvements because it's too awkward to wait for the next breakthrough now.
On that note, o4-mini was much better for general usage (speed and cost). It was my go-to for web search too, significantly better than 4o and only took a few seconds longer. (Like a mini Deep Research.)
Boggles the mind that they removed it from the UI. I'm adding it back to mine right now.
It's much more like each new model climbs another rung of the ladder, and so far we can't even see the top of the ladder.
My suspicion is also that the ladder actually ends well before it reaches the next step, and LLMs are a dead end. Everything so far points that way.
Let's not even talk about "reasoning models", aka spend twice the tokens and twice the time on the same answer.
But realistically, you're not going to have a personal foundry anytime soon.
Are you suggesting that compound interest serves to redistribute the wealth coming from extractive industries?
My point about compound interest is that it is a major mechanism that prevents equitable redistribution of resources, and is thus a factor in making economics (as it stands) bad at resource allocation.
"The model is a work of fiction based on the tacit and false assumption of frictionless barter. Attempting to apply such microeconomic foundations to understand a monetary economy means that mistakes in reasoning are inevitable." (p.239)
Later.
pretty sure top 1% of say USA already owns much more than that
In which science fiction were the dreamt up robots as bad?
it's a very constrained task, you can do lots of reliable checking on the output at low cost (linters, formatters, the compiler), the code is mostly reviewed by a human before being committed, and there's insulation between the code and the real world, because ultimately some company or open source project releases the code that's then run, and they mostly have an incentive to not murder people (Tesla excepted, obviously).
it seems like lots of programmers are then taking that information and deeply overestimating how useful it is at anything else, and these programmers - and the marketing people who employ them - are doing enormous harm by convincing e.g. HR departments that it is of any value to them for dealing with complaints, or, much more dangerously, convincing governments that it's useful for how they deal with humans asking for help.
this misconception (and deliberate lying by people like OpenAI) is doing enormous damage to society and is going to do much much more.
We used to have deterministic systems that required humans either through code, terminals or interfaces (ex GUI's) to change what they were capable of.
If we wanted to change something about the system we would have to create that new skill ourselves.
Now we have non-deterministic systems that can be used to create deterministic systems that can use non-deterministic systems to create more deterministic systems.
In other words deterministic systems can use LLMs and LLMs can use deterministic systems all via natural language.
This slight change in how we can use compute has incredible consequences for what we will be able to accomplish, both in cleaning up old systems and in creating completely new ones.
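To make that concrete, here is a minimal sketch of the loop in Python. `call_llm` is a hypothetical placeholder for whatever model API you use, and the tool name and prompt format are purely illustrative, not any specific vendor's interface:

    # Hedged sketch: deterministic code and an LLM invoking each other via text.
    # `call_llm` is a hypothetical stand-in, not a real library call.
    import json

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("wire this to your model provider of choice")

    def count_rows(csv_text: str) -> int:
        # A boring, fully deterministic "skill" the model can ask for by name.
        return max(0, len(csv_text.splitlines()) - 1)  # minus the header row

    TOOLS = {"count_rows": count_rows}

    def run_task(user_request: str, data: str) -> str:
        # 1. Deterministic code asks the non-deterministic model what to do.
        plan = call_llm(
            "Pick one tool from: " + ", ".join(TOOLS)
            + '. Reply as JSON like {"tool": "..."}.\nRequest: ' + user_request
        )
        choice = json.loads(plan)
        # 2. Deterministic code runs the chosen deterministic tool.
        result = TOOLS[choice["tool"]](data)
        # 3. The model turns the deterministic result back into natural language.
        return call_llm(f"The tool returned {result}. Answer the request: {user_request}")

The point isn't the toy tool; it's the shape: natural language is the interface in both directions, so each side can keep handing new capabilities to the other.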
LLMs however will always be limited by exploring existing knowledge. They will not be able to create new knowledge. And so the AI winter we are entering is different because it's only limited to what we can train the AI to do, and that is limited to what new knowledge we can create.
Anyone who works with AI every day knows that any idea of autonomous agents is so far beyond the capabilities of LLMs, even in principle, that any worry about doom or unemployment caused by AI is absurd.
This is demonstrably wrong. An easy refutation to cite is:
https://medium.com/@akshatsanghi22/how-to-build-your-own-lar...
As to the rest of this pontification, well... It has almost triple the number of qualifiers (5 if's, 4 could's, and 5 will's) as paragraphs (5).
>> This is demonstrably wrong.
> That doesn't mean we _understand_ them ...
The previous reply discussed the LLM portion of the original sentence fragment, whereas this post addresses the "deep model" branch.
This article[0] gives a high-level description of "deep learning" as it relates to LLM's. Additionally, this post[1] provides a succinct definition of "DNN's" thusly:
What Is a Deep Neural Network?
A deep neural network is a type of artificial neural
network (ANN) with multiple layers between its input and
output layers. Each layer consists of multiple nodes that
perform computations on input data. Another common name for
a DNN is a deep net.
The “deep” in deep nets refers to the presence of multiple
hidden layers that enable the network to learn complex
representations from input data. These hidden layers enable
DNNs to solve complex ML tasks more “shallow” artificial
networks cannot handle.
Additionally, there are other resources discussing how "deep learning" (a.k.a. "deep models") works here[2], here[3], and here[4]. Hopefully the above helps demystify this topic.
0 - https://mljourney.com/is-llm-machine-learning-or-deep-learni...
1 - https://medium.com/@zemim/deep-neural-network-dnn-explained-...
2 - https://learn.microsoft.com/en-us/dotnet/machine-learning/de...
3 - https://www.sciencenewstoday.org/deep-learning-demystified-t...
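If it helps, here is a tiny, purely illustrative NumPy forward pass of a "deep" network. The layer sizes and activation are arbitrary choices of mine; the only point is the multiple hidden layers between input and output that the definition above describes:

    import numpy as np

    rng = np.random.default_rng(0)
    sizes = [8, 16, 16, 16, 2]  # input, three hidden layers ("deep"), output

    # One weight matrix and bias vector per layer transition, randomly initialised.
    weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
    biases = [np.zeros(n) for n in sizes[1:]]

    def forward(x):
        h = x
        for W, b in zip(weights[:-1], biases[:-1]):
            h = np.maximum(0.0, h @ W + b)   # hidden layers with ReLU non-linearity
        return h @ weights[-1] + biases[-1]  # linear output layer

    print(forward(rng.normal(size=8)))       # prints a 2-dimensional output vector

This only shows inference with random weights; training, scaling, and interpretability are where the harder questions live.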
Perhaps this[0] will help in understanding them then:
Foundations of Large Language Models
This is a book about large language models. As indicated by
the title, it primarily focuses on foundational concepts
rather than comprehensive coverage of all cutting-edge
technologies. The book is structured into five main
chapters, each exploring a key area: pre-training,
generative models, prompting, alignment, and inference. It
is intended for college students, professionals, and
practitioners in natural language processing and related
fields, and can serve as a reference for anyone interested
in large language models.
0 - https://arxiv.org/abs/2501.09223
My apologies for being unclear and/or insufficiently explaining my position. Thank you for bringing this to my attention and giving me an opportunity to clarify.
The original post stated:
Since LLMs and in general deep models are poorly understood ...
To which I asserted: This is demonstrably wrong.
And provided a link to what I thought to be an approachable tutorial regarding "How to Build Your Own Large Language Model", albeit a simple implementation as it is after all a tutorial.
The person having the account name "__float" replied to my post thusly:
That doesn't mean we _understand_ them, that just means we
can put the blocks together to build one.
To which I interpreted the noun "them" to be the acronym "LLM's." I then inferred said acronym to be "Large Language Models." Furthermore, I took __float's sentence fragment:
    That doesn't mean we _understand_ them ...
as an opportunity to share a reputable resource which:
    ... can serve as a reference for anyone interested in large language models.
Is this a sufficient explanation regarding my previous posts such that you can now understand?
And, for what it's worth - your position is clear, your evidence less so. Deep learning is filled with mystery, and if you don't realize that's what people are talking about when they say "we don't understand deep learning" - you're being deliberately obtuse.
===========================================================
edit to cindy (who was downvoted so much they can't be replied to): Thanks, wasn't aware. FWIW, I appreciate the info but I'll probably go on misusing grammar in that fashion til I die, ha. In fact, I've probably already made some mistake you wouldn't be fond of _in this edit_.
In any case thanks for the facts. I perused your comment history a tad and will just say that hacker news is (so, so disappointingly) against women in so many ways. It really might be best to find a nicer community (and I hope that doesn't come across as me asking you to leave!) ============================================================
And I'm telling you right now, man - when you fire off an ad hominem attack such as:
I think the real issue here is understanding _you_.
Don't expect the responder to engage in serious topical discussion with you, even if the response is formulated respectfully.
Seems clear to me that neither of us is going to change the other's mind at this point, though. Take care.
edit edit to cindy: =======================••• fun trick. random password generate your new password. don't look at it. clear your clipboard. you'll no longer be able to log in and no one else will have to deal with you. ass hole ========================== (for real though someone ban that account)
I dunno why exactly but that’s what felt the most stunning about this whole era. It can screw up the number of fingers in an image or the details of a recipe or misidentify elements of an image, etc. but I’ve never seen it make a typo or use improper grammar or whatnot.
I'd say all speakers of all languages have figured it out and your statement is quite confusing, at least to me.
Somehow, LLMs have those rules stored within a finite set of weights.
https://slator.com/how-large-language-models-prove-chomsky-w...
For example, one of the tasks we could put ASI to work doing is to ask it to design implants that would go into the legs that would be powered by light, or electric induction that would use ASI designed protein metabolic chains to electrically transform carbon dioxide into oxygen and ADP into ATP so to power humans with pure electricity. We are very energy efficient. We use about 3 kilowatt hours of power a day, so we could use this sort of technology to live in space pretty effortlessly. Your Space RV would not need a bathroom or a kitchen. You'd just live in a static nitrogen atmosphere and the whole thing could be powered by solar panels, or a small modular nuke reactor. I call this "The Electrobiological Age" and it will unlock whole new worlds for humanity.
What makes you so confident that we could remain in control of something which is by definition smarter than us?
If AI technology continues to improve and becomes capable of learning and executing more tasks on its own, this revolution is going to be very unlike the past ones.
We don't know if or how our current institutions and systems will be able to handle that.
I'm flying, ofc, this is just a weird theory I had in the back of my head for the past 20 years, and it seems like we're getting there.
Antirez you are the best
Really as a human I’ve physically evolved to move and think in a dynamic way. But automation has reduced the need for me to work and think.
Do you not know the earth is saturated with artists already? There’s whole class of people that consider themselves technically minded and not really artists. Will they just roll over and die?
"Everything basically costs zero" is a pipe dream in which there is no social order or economic system. Even in your basically-zero system there is a lot of cost being hand-waved away.
I think you need a rethink on your 20 year thought.
We could have the same argument right now with UBI. But have you ever met the average human being?
The latter one is probably the most intellectually interesting and potentially intractable...
I completely disagree with the idea that money is currently the only driver of human endeavour; frankly, it's demonstrably not true, at least not in its direct use value. It may be used as a proxy for power, but even that is not directly correlatable.
Looking at it intellectually through a Hegelian lens of the master/slave dialectic might provide some interesting insights. I think both sides are in some way usurped. The slave's position of actualisation through productive creation is taken via automation, but if that automation is also widely and freely available, the master's position of status via subjection is also made common and therefore without status.
What does it all mean in the long run? Damned if I know...
Maybe a "loss of jobs" is what we need so we can go back working for ourselves, cooking our own food, maintaining our own houses etc.
This is why I doubt it will happen. I think "AI" will just end up making us work even more for even less.
Coincidentally, I'm reading your comment while wearing my CGP Grey t-shirt
Humans never truly produce anything; they only generate various forms of waste (resulting from consumption). Human technology merely enables the extraction of natural resources across magnitudes, without actually creating any resources. Given its enormous energy consumption, I strongly doubt that AI will contribute to a better economic system.
What a sad way of viewing huge fields of creative expressions. Surely, a person sitting on a chair in a room improvising a song with a guitar is producing something not considered "waste"?
I should really say humans never truly produce anything in the realm of technology industry.
Counterpoint: nurses.
But the question is a system optimized for what? That emphasizes huge rewards for the few, and that requires the poverty of some (or many). Or a more fair system. Not different from the challenges of today.
I'm skeptical that even a very intelligent machine will change the landscape of our difficult decisions, but it will accelerate whichever direction we decide (or have decided for us) to go.
It's not really clear to me to what extent LLMs even do *understand* human language. They are very good at saying things that sound like a responsive answer, but the head-scratching, hard-to-mentally-visualise aspect of all of this is that this isn't the same thing at all.
Or am I just too idealistic ?
Sidenote, I never quite understand why the rich think their bunkers are going to save them from the crisis they caused. Do they fail to realize that there's more of us than them, or do they really believe they can fashion themselves as warlords?
But seeing it in action now makes me seriously question “human intelligence”.
Maybe most of us just aren’t as smart as we think…
They are tremendous tools, but it seems like they create a nearly equal amount of work from the stuff they save time on.
A compilation of claims, takes, narratives, shills, expectations and predictions from the late 90s "information superhighway" era.
I wonder if LLMs can produce this.
A lot of the dotcom exuberance was famously "correct, but off by 7 years." But... most of it was flat wrong. "Right but early" applies mostly to the meta investment case: "the internet business will be big."
One that stands out in my memory is "turning billion dollar industries into million dollar industries."
With ubiquitous networked computers, banking and financial services could become "mostly software." Banks and whatnot would all become hyper-efficient Vanguard-like companies.
We often start with an observation that economies are efficiency-seeking. Then we imagine the most efficient outcome given the legible constraints of technology, geography and whatnot. Then we imagine the dynamics and tensions in a world with that kind of efficiency.
This, incidentally, is also "historical materialism." Marx had a lot of awe for modern industry, the efficiency of capitalism and whatnot. Almost Adam Smith-like... at times.
Anyway... this never actually works out. The meta is a terrible predictor of where things will go.
Imagine law gets more efficient. Will we have more or fewer lawyers? It could go either way.
From the beginning, corporations and their collaborators at the forefront of this technology tainted it by ignoring the concept of intellectual property ownership (which had been with us in many forms for hundreds if not thousands of years) in the name of personal short-term gain and shareholder interest or some “the ends justify the means” utilitarian calculus.
Aren't the markets massively puffed up by AI companies at the moment?
edit: for example, the S&P500's performance with and without the top 10 (which is almost totally tech companies) looks very different: https://i.imgur.com/IurjaaR.jpeg
Humans want to go to space, start living on other planets, travel beyond solar system, figure out how to live longer and so on. The list is endless. Without AI, these things would take a very long time. I believe AI will accelerate all these things.
Humans are always ambitious. That ambition will push us to use AI more than it's capabilities. The AI will get better at these new things and the cycle repeats. There's so much humans know and so much more that we don't know.
I'm less worried about general intelligence. Rather, I'm more worried about how humans are going to govern themselves. That's going to decide whether we do great things or end humanity. Over the last 100 years, we have started thinking more about "how" to do something rather than "why". Because "how" is becoming easier and easier. Today it's much easier, and tomorrow it will be easier still. So nobody has the time to ask "why" we are doing something, just "how" to do it. With AI I can do more. That means everyone can do more. That means governments can do so much more: large-scale things in a short period. If those things are wrong or have irreversible consequences, we are screwed.
If we want to continue on the path of increased human development we desperately need to lift the productivity of a whole bunch of labor intensive sectors.
We're going to need to seriously think about how to redistribute the gains, but that's an issue regardless of AI (things like effective tax policy).
The only question is how much fat there is to trim as the middle management is wiped out because the algorithms have determined that they are completely useless and mostly only increase cost over time.
Now, all the AI companies think that they are going to be deriving revenue from that fat, but those revenue streams are going to disappear entirely, because a huge number of purely political positions inside corporations will vanish; if they do not, the corporation will go bankrupt competing with other companies that have already cut the fat. There won't be additional revenue streams that get spent on the bullshit. The good news is that labor can go somewhere else, and we will need it due to a shrinking global population, but the cushy bullshit management job is likely to disappear.
At some point AI agents will cease to be sycophantic and when fed the priors for the current situation that a company is in will simply tell it like it is, and might even be smart enough to get the executives to achieve the goal they actually stated instead of simply puffing up their internal political position, which might include a rather surprising set of actions that could even lead to the executive being fired if the AI determines that they are getting in the way of the goal [1].
Fun times ahead.
0. https://web.archive.org/web/20180705215319/https://www.econo...
1. https://en.wikipedia.org/wiki/The_Evitable_Conflict
By all means, continue to make or improve your Llamas/Geminis (to the latter: stop censoring Literally Everything. Google has a culture problem. To the former... I don't much trust your parent company in general)
It will undoubtedly lead to great advances
But for the love of god do not tightly bind them to your products (Kagi does it alright, they don't force it on you). Do not make your search results worse. Do NOT put AI in charge of automatic content moderation with 0 human oversight (we know you want to. The economics of it work out nicely for you, with no accountability). People already as is get banned far too easily by your automated systems
"Undoubtedly" seems like a level of confidence that is unjustified. Like Travis Kalanick thinking AI is just about to help him discover new physics, this seems to suggest that AI will go from being able to do (at best) what we can already do if we were simply more diligent at our tasks to being something genuinely more than "just" us
and, germane to this discussion: https://www.youtube.com/watch?v=TMoz3gSXBcY vibe physics
That's because they are. The stock market is all about narrative.
> Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence.
Yes it is; the mega companies that will be providing the intelligence are Nvidia, AMD, TSMC, ASML, add your favourite foundry.
This really misunderstands what the stock market tracks
Most recently down because I worked on two separate projects over the last few weeks with the latest models available on GitHub Copilot Pro. (GPT-5, Claude Sonnet 4, Gemini 2.5 Pro, and some lesser capable ones at times as well). Trying the exact same queries for code changes across all three models for a majority of the queries. I saw myself using Claude most, but it still wasn't drastically better than others, and still made too many mistakes.
One project was a simple health-tracking app in Dart/Flutter. Completely vibe-coded, just for fun. I got basic stuff working. Over the days I kept finding bugs as I started using it. Since I truly wanted to use this app in my daily life, at one point I just gave up, because fixing the bugs was getting way too annoying. Most "fixes", as I later got into the weeds of it, were wrong, built on wrong assumptions, changes that seemed to fix the problem at the surface but introduced more bugs and random garbage, despite giving a ton of context and instructions on why things are supposed to be a certain way, etc. I was constantly fighting with the model. It would've been much easier to do much more on my own and use it just a little.
Another project was in TypeScript, where I did actually use my brain, not just vibe-coded. Here, AI models were helpful because I mostly used them to explain stuff. And did not let them make more than a few lines of code changes at most at a time. There was a portion of the project which I kinda "isolated" which I completely vibe-coded and I don't mind if it breaks or anything as it is not critical. It did save me some time but I certainly could've done it on my own with a little more time, while having code that I can understand fully well and edit.
So the way I see using these models right now is for research/prototyping/throwaway kind of stuff. But even in that, I literally had Claude 4 teach me something wrong about TypeScript just yesterday. It told me a certain thing was deprecated. I made a follow up question on why that thing is deprecated and what's used instead, it replied with something like "Oops! I misspoke, that is not actually true, that thing is still being used and not deprecated." Like, what? Lmao. For how many things have I not asked a follow up and learnt stuff incorrectly? Or asked and still learnt incorrectly lmao.
I like how straightforward GPT-5 is. But apart from that style of speech I don't see much other benefit. I do love LLMs for personal random searches like facts/plans/etc. I just ask the LLM to suggest me what to do just to rubber duck or whatever. Do all these gains add up towards massive job displacement? I don't know. Maybe. If it is saving 10% time for me and everyone else, I guess we do need 10% less people to do the same work? But is the amount of work we can get paid for fixed and finite? Idk. We (individuals) might have to adapt and be more competitive than before depending on our jobs and how they're affected, but is it a fundamental shift? Are these models or their future capabilities human replacements? Idk. At the moment, I think they're useful but overhyped. Time will tell though.
Regardless of their flaws, AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were science fiction just a few years ago. It was not even clear that we were so close to creating machines that could understand the human language, write programs, and find bugs in a complex code base: bugs that escaped the code review of a competent programmer.
Since LLMs and in general deep models are poorly understood, and even the most prominent experts in the field failed miserably again and again to modulate the expectations (with incredible errors on both sides: understating or overstating what was about to come), it is hard to tell what will come next. But even before the Transformer architecture, we were seeing incredible progress for many years, and so far there is no clear sign that the future will not hold more. After all, a plateau of the current systems is possible and very credible, but at this point it would likely stimulate massive research efforts into the next step of architectures.
However, if AI avoids plateauing long enough to become significantly more useful and independent of humans, this revolution is going to be very unlike the past ones. Yet the economic markets are reacting as if they were governed by stochastic parrots: their pattern matching says that previous technology booms created more business opportunities, so investors are primed to think the same will happen with AI. But this is not the only possible outcome.
We are not there yet, but if AI could replace a sizable share of workers, the economic system would be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch. Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence: either AI will eventually become a commodity, or governments would have to do something about such an odd economic setup (a setup where a single industry completely dominates all the others).
The future may reduce economic prosperity and push humanity to switch to some different economic system (maybe a better one). Markets don't want to accept that, so far: even though economic forecasts are cloudy, wars are destabilizing the world, and AI timelines are hard to guess, stocks continue to go up regardless. But stocks are insignificant in the vast perspective of human history, and even systems that lasted far longer than our current institutions were eventually eradicated by fundamental changes in society and in human knowledge. AI could be such a change.
uh last time I checked, "markets" around the world are a few percent from all time highs
LLMs do not "understand the human language, write programs, and find bugs in a complex code base"
"LLMs are language models, and their superpower is fluency. It’s this fluency that hacks our brains, trapping us into seeing them as something they aren’t."