Before our tools: a bookkeeper spends 80% of their time on data entry and transaction categorisation, 20% on actually thinking about the numbers. After: those ratios flip. The bookkeeper is still there, still needed, but now they're doing the part that actually requires judgment.
The catch nobody talks about is the transition period. The people who were really good at the mechanical part (fast data entry, memorised category codes) suddenly find their competitive advantage has evaporated. And the people who were good at the thinking part but slow at data entry are suddenly the most valuable people in the room. That's a real disruption for real humans even if the total number of jobs stays roughly the same.
I think the "AI won't take your job" framing misses this nuance. It's not about headcount. It's about which specific skills get devalued and how quickly people can retool. In accounting at least, the answer is "slowly" because the profession moves at glacial speed.
AI reduces the penalty for weak domain context. Once the work is packaged like that, the “thinking part” becomes far easier to offshore because:
- Training time drops: you’re not teaching the whole craft, you’re teaching exception-handling around an AI-driven pipeline.
- Quality becomes more auditable because outputs can be checked with automated review layers.
- Communication overhead shrinks with fewer back-and-forth cycles when AI pre-fills and structures the work.
- Labor arbitrage expands: the limiting factor stops being “can we find someone locally who knows our messy process” and becomes “who is the cheapest person who can supervise and resolve exceptions.”
So yeah, the jobs mostly remain and some people become more valuable. But the clearing price for that labor moves toward the global minimum faster than it used to.
The impact won’t show up as “no jobs”; it is already showing up as stagnant or declining Western salaries, thinner career ladders, and more of the value captured by the firms that own the workflows rather than the people doing the work.
How many of those do you see around?
people have been saying that since 2022.
when and how. hmm??
show your work.
or is this just more hype being spewed...
This is why (personal experience) I am seeing a lot of FullStack jobs compared to specialized Backend, FE, Ops roles. AI does 90% of the job of a senior engineer (what the CEOs believe), and companies now want someone who can do the full "100", not just supply the missing "10". So that remaining 90 is now coming from an amalgamation of other responsibilities.
I would expect a lot of product engineering to specialize further into domains like healthtech, fintech, adtech, etc. While the in-the-weeds engineering will be platform, infra, and embedded systems type folks.
Real median salaries and real median wages have both been rising for the last couple of years. Maybe they would have risen faster if there were no AI, but I don't think you can say there has been a discernible impact yet.
But the job had better take fewer people, or the automation is not justified.
There's also a tradeoff between automation flexibility and cost. If you need an LLM for each transaction, your costs will be much higher than if some simple CRUD server does it.
Here's a nice example from a more physical business - sandwich making.
Start with the Nala Sandwich Bot.[1] This is a single robot arm emulating a human making sandwiches. Humans have to do all the prep, and all the cleaning. It's slow, maybe one sandwich per minute. If they have any commercial installations, they're not showing them. This is cool, but ineffective.
Next is a Raptor/JLS robotic sandwich assembly line.[2] This is a dozen robots and many conveyors assembling sandwiches. It's reasonably fast, at 100 sandwiches per minute. This system could be reconfigured to make a variety of sandwich-format food products, but it would take a fair amount of downtime and adjustment. Not new robots, just different tooling. Everything is stainless steel or food grade plastic, so it can be routinely hosed down with hot soapy water. This is modern automation. Quite practical and in wide use.
Finally, there's the Weber automated sandwich line.[3] Now this is classic single-purpose automation, like 1950s Detroit engine lines. There are barely any robots at all; it's all special purpose hardware. You get 600 or more sandwiches per minute. Not only is everything stainless or food-grade plastic, it has a built-in self-cleaning system. Staff is minimal. But changing to a product with a slightly different form factor requires major modifications and skills not normally present in the plant. Only useful if you have a market for several hundred identical sandwiches per minute.
These three examples show why automation hasn't taken over. To get the most economical production, you need extreme product standardization. Sometimes you can get this. There are food plants which turn out Oreos or Twinkies in vast quantities at low cost with consistent quality. But if you want product variations, productivity goes way, way down.
[1] https://nalarobotics.com/sandwich.html
Also, the statement “show why automation hasn’t taken over” is truly, hysterically wrong. Yeah, sure, no automation has taken over since the Industrial Revolution.
Not necessarily. Automation may also just result in higher quality output because it eliminates mistakes (less the case with "AI" automation though) and frees up time for the humans to actually quality control. This might require the people on average to be more skilled though.
Even if it only results in higher output volume, you often see demand grow because the price goes down.
They show three cases of what happened when a process was mechanized.
The "good case" was the Linotype. Typesetting became cheaper and the number of works printed went up, so printers did better.
The "medium case" was glassblowing of bottles. Bottle making was a skilled trade, with about five people working as a practiced team to make bottles. Once bottle-making was mechanized, there was no longer a need for such teams. But bottles became cheaper, so there were still a lot of bottlemakers. But they were lower paid, because tending a bottle-making machine is not a high skill job.
The "bad case" was the stone planer. The big application for planed stone was door and window lintels for brick buildings. This had been done by lots of big guys with hammers and chisels. Steam powered stone planers replaced them. Because lintels are a minor part of buildings, this didn't cause more buildings to be built, so employment in stone planing went way down.
Those are still the three basic cases. If the market size is limited by a non-price factor, higher productivity makes wages go down.
In many cases, this is a fallacy.
Much like programming, there is often an essentially infinite number of (in this case) bookkeeping tasks that need to be done. The folks employed to do them work on the top X of them. By removing a lot of the scut work, second-order tasks can be done (like verification, clarification, etc.) or can be done more thoroughly.
Source: Me. I have worked waaaay too much on cleaning up the innards of less-than-perfect accounting processes.
It IS about headcount in a lot of cases.
Systems engineering is an extremely hard computer science domain with few engineers either interested in it, or good at it.
Building dashboards is tedious and requires organizational structure to deliver on. This is the bread and butter of what agents are good at building right now. You still need organization and communication skills in your company to direct the coding agents towards the dashboard you want and need. Until you hit an implementation wall and someone needs to spend time trying to understand some of the code. At least with dashboards, you can probably just start over from scratch.
It's arguably more work to prompt an AI agent in English to assist you with hard systems problems, and the signals the agent would need to add value aren't readily available (yet?!). Plus, there's no way systems engineers would feel comfortable taking generated code at face value. So they definitely will spend the extra mental energy to read what is output.
So I don't know. I think we're going to keep marching forward, because that's what we do, but I also don't think this "vibe-coded" automated code generator phase we're in right now will ultimately last. It'll likely fall apart and the pieces we put back together will likely return us to some new kind of normal, but we'll all still need to know how to be damn good software engineers.
I've found some power use cases with LLMs, like "explore", but everyone seems misty-eyed that these coding agents can one-shot entire features. I suspect it'll be fine until it's not, and people get burned by what is essentially trusting these black boxes to barf out entire implementations leaving trails of code soup.
Worse is that junior engineers can say they're "more productive" but it's now at the expense of understanding what it is they just contributed.
So, sure, more productive, but in the same way that 2010s move fast and break things philosophy was, "more productive." This will all come back to bite us eventually.
Once you have automated extensively, all of the remaining work is cognitively demanding and doing 8 hours of that work every day is exhausting.
Good accounting teams will have more time and resources to do things like identify fraud, waste, duplicated processes, etc. They will also have time to streamline/optimize existing practices.
Good teams will earn many multiples of their cost in terms of savings or increased earnings.
There may be increased competition for the low-cost “just meet the legal compliance requirements” offerings, but any business that makes money and wants to make more will gladly spend more than the minimum for better service.
The argument might be fundamentally sound, but now we're automating the part that requires judgement. So if the accountants aren't doing the mechanical part or the judgement part, where exactly is the role going? Formalised reading of an AI provided printout?
It seems quite reasonable to predict that humans just won't be able to make a living doing anything that involves screens or thinking, and we go back to manual labour as basically what humans do.
We've presumably all seen the progress of humanoid robotics; they're currently far from emulating human manual dexterity, but in the last few years they've gotten pretty skilled at rapid locomotion. And robots will likely end up with a different skill profile at manual tasks than humans, simply due to being made of different materials via a more modular process. It could be a similar story to the rise of the practical skills of chatbots.
In theory we could produce a utopia for humans, automating all the bad labor. But I have little optimism left in my bones.
What parts of the job require judgement that is resistant to automation? What percentage of customers need that?
If the hours an accountant spends on a customer go from 4 per month to 1, do you reckon they can sustainably charge the same?
No, not necessarily. There are different kinds of automation.
Earlier in my career I sold and implemented enterprise automation solutions for large clients. Think document scanning, intelligent data extraction and indexing and automatic routing. The C-level buyers overwhelmingly had one goal: to reduce headcount. And that was almost always the result. Retraining redundant staff for other roles was rare. It was only done in contexts where retaining accumulated institutional knowledge was important and worth the expense.
Here's the thing though: to overcome objections from those staff, whom we had to interview to understand the processes we were automating, we told them your story: you aren't being replaced, you're being repurposed for higher-level work. Wouldn't it be nice if the computer did the boring and tedious parts of your job so that you can focus on more important things? Most of them were convinced. Some, particularly those who had been around the block, weren't.
Ultimately, technologies like AI will have the same impact. They aren't quite there yet, but I think it's just a matter of time.
For many businesses this is the only way to significantly reduce costs.
No society can possibly absorb that kind of disruption over such a short time.
Also, even assuming AI could completely replace lawyers: lawyers control the legislature. They may not be able to stop your local model from telling you how to do something, but they can stop you from actually doing it without a lawyer.
Oh..
That's because we prefer improved living standards over less work. If we only had to live by the standards of one century ago or more, we could likely accomplish that by working very little.
https://www.aei.org/wp-content/uploads/2019/01/cpichart2019-...
A pair of Nike Jordans or Air Maxes is often in the ~$120 range and made of far inferior materials.
Boots have never been cheaper or more accessible. The people that bring up repairable shoes don’t wear them, or buy from shit brands like Thursday, Doc Martens, or Timberland. You deserve your poor quality footwear.
That's more because we are never given the chance. We only get to keep working or fall out of the rat race and, at best, be relegated to a Big Lebowski style pariah existence.
Living quarters, transportation, healthcare, food. What were these figures in 1926, and how much work was needed to achieve them?
Is that trend still true? I can look from the 50s to 2000s and buy into it. I'm not clear it is holding true by all metrics beyond the 2000s, and especially beyond maybe the 2020s. Yes, we have better tech, but is life actually better right now? I think you could make the argument that we were in a healthier and happier society in that sweet spot from 95 - 2005 or so. At least in NA.
We've seen so much technological innovation, but cost of living has outpaced wages, division is rampant, and the technological innovations we do have have mostly been turned against us to enshittify our lives and entrap us in SaaS hell. I'd argue medical science has progressed, but also become more inaccessible, and, somehow, people believe in western medicine LESS. It does not help that we've also seen a decline in education.
So do we still prefer improving our standards of living in the current societal framework?
People talk about how socially progressive Scandinavia is but they have a shitload of petroleum resources and that money goes into social programs.
If a company's process produces waste, it should bear the entire cost of leaving the environment the way they found it rather than just pumping the waste into it. If a company's products are not reused, it should bear the cost of taking the used product back and restoring the world to the way it was before the product was built.
Of all the Scandinavian countries, only Norway has any oil resources of significance.
The Scandinavian welfare model is primarily tax-funded.
It looks like you're right and their oil trade is re-export rather than domestic production, but that's still a good bit of mineral wealth.
The racist moral panic over "welfare queens" seems to be a counter example.
Expect a real movement to reduce the number of citizens in this country. Specifically, if you can’t trace your lineage to a founding father (including for kids of German or Irish immigrants), then they want you disenfranchised.
Heritage Americans vs “hyphenated Americans”
I'm not the ownership class, this is unfair. You are the ownership class. People with money or who grew up with money are overwhelmingly left leaning.
And who exactly do you think controls these items?
The right wing here are the only people where I live with an actual viable plan for helping working people, even low class working people. The left makes deliberate choices that everyone knows will make things worse for lower class working people.
Even if we assume there are tons of jobless immigrants being ‘imported’ they would be renters, not buyers.
Generally, house pricing is primarily a supply problem. Removing immigrants will make this worse given that they are 30%+ of the construction workforce.
Last year the US voted to hand over the reins, in all branches of government, to a party whose philosophy is to slash government spending and reduce people’s dependence on the government.
To all the US futurists who are fantasizing about a post-scarcity world where we no longer work, I’d like to understand how that fits in with the current political climate.
They will hoard the resources, land, anything that is needed for people to stay alive.
There is zero actual intentional reduction of dependence, just elimination of government support.
I agree with you in principle, but I don't think it's as straightforward as your point states.
A few people
A lot of people would not choose to work for half the time as they do now because they do actually like to buy things.
Issue is that virtually no company offers that deal unless you already have notoriety or money at the level of retiring anyway.
I'm 0.6 centuries old and have never heard that said of existing tech. Human-level AI could presumably do human work by definition, but that's not the case before we get it, including now.
https://www.npr.org/2015/08/13/432122637/keynes-predicted-we...
However, most people want fruits and vegetables instead of getting rickets, goiter, and cholera from an 1800s diet. Many are even willing to work 80+ hours a week to do so.
Hence why prior technological changes that increased productivity didn't result in living lives of extended leisure, despite some predictions to that effect. Instead people kept working to raise the overall standard of living to what could be achieved when using the new tools to their fullest extent. Doing more, not doing the same with less effort. As you say, we're not animals. We can strive for better.
It's not our production capabilities that keep people hungry; it's either greed or the problem of distribution.
Automation will definitely amplify production but it'll certainly continue to make rich richer and poor, well, the same. As inequality grows, so too does the authoritarian need to control the differential.
You are in danger. Unless you estimate the odds of a breakthrough at <5%, or you already have enough money to retire, or you expect that AI will usher in enough prosperity that your job will be irrelevant, it is straight-up irresponsible to forgo making a contingency plan.
I'm baffled that so many people think that only developers are going to be hit and that we especially deserve it. If AI gets so good that you don't need people to understand code anymore, I don't know why you'd need a project manager anymore either, or a CFO, or a graphic designer, etc etc. Even the people that seem to think they're irreplaceable because they have some soft power probably aren't. Like, do VC funds really need humans making decisions in that context..?
Anyway, the practical reason why I'm not screaming in terror right now is because I think the hype machine is entirely off the rails and these things can't be trusted with real jobs. And honestly, I'm starting to wonder how much of tech and social media is just being spammed by bots and sock puppets at this point, because otherwise I don't understand why people are so excited about this hypothetical future.

Yay, bots are going to do your job for you while a small handful of business owners profit. And I guess you can use moltbot to manage your not-particularly-busy life of unemployment. Well, until you stop being able to afford the frontier models anyway, which is probably going to dash your dream of vibe coding a startup. Maybe there's a handful of winners, until there's not, because nobody can afford to buy services on a wage of zero dollars. And anyone claiming that the abundance will go to everyone needs to get their head checked.
Personally, rather than devolving into nihilism, I'd try to hedge against suffering that fate. Now is the time to invest and save money (or yesterday).
Unless you’re investing in guns, ammo, food, and a bunker. We’re talking worse unemployment than Depression-era Germany. And structurally more significant unemployment, because the people losing their jobs were formerly very high earners.
I do also think the mid-level bad outcome isn't super likely, because if AI is good enough to replace a lot of white collar jobs, I think it could replace almost all of them.
It's not the odds of the breakthrough, but the timeline. A factory worker could have correctly seen that one day automation would replace him, and yet worked his entire career in that role.
There have been a ton of predictions about software engineers, radiologists, and some other roles getting replaced in months. Those predictions have clearly been not so great.
At this point the greater risk to my career seems to be the economy tanking, as that seems to be happening and ongoing. Unfortunately, switching careers can't save you from that.
What contingencies can you really make?
Start training a physical trade, maybe.
If this is the end of SWE jobs, you'd better ride the wave. Odds are your estimate of when AI takes over is off by half a career, anyways.
I would like to know if there's some kind of inflection point, like the so-called Laffer curve for taxes, where once an economy has X% unemployment, it effectively collapses. I'd imagine it goes: recession -> depression -> systemic crisis and appears to be somewhere between 30-40% unemployment based on history.
I do. Show me any evidence that it is imminent.
> or you expect that AI will usher in enough prosperity that your job will be irrelevant
Not in my lifetime.
> it is straight-up irresponsible to forgo making a contingency plan.
No, I'm actually measuring the risk, you're acting as if the sky is falling. What's your contingency plan? Buy a subscription to the revolution?
I’ve been working on my contingency plan for a year-and-a-half now. I won’t get into what it is (nothing earth shattering) but if you haven’t been preparing, I think you’re either not paying enough attention or you’re seriously misreading where this is all going.
Yes, I said the word that none of these companies want to say in their press conferences.
Besides that, why aren't we seeing any metrics change on GitHub? With a supposed productivity increase so large that a good chunk of the workforce gets fired, we would see it somewhere.
More likely they get fired for no reason, never rehired, and the people left get burned out trying to hold it all together.
If you fail as a "higher up" you're no longer higher up. Then someone else can take your place. To the extent this does not naturally happen is evidence of petty or major corruptions within the system.
The large, overwhelming majority of my team's time is spent on combing through these tickets and making sense of them. Once we know what the ticket is even trying to say, we're usually out with the solution in a few days at most, so implementation isn't the bottleneck, nowhere near.
This scenario has been the same everywhere I've ever worked, at large, old institutions as well as fresh startups.
The day I'll start worrying is when the AI is capable of following the web of people involved to translate what the vaguely phrased ticket that's been backlogged for God knows how long actually means.
In my case the AI is actively detrimental unless I hand hold it with every single file it should look into, lest it dive into weird ancient parts of the codebase that bear no relevance to the task at hand. Letting the latest and "greatest" agents loose is just a recipe for frustration and disaster despite lots of smart people trying their hardest to make these infernal tools be of any use at all. The best I've gotten out of it was some light Vue refactoring, but even then despite AGENTS.md, RULES.md and all the other voodoo people say you should do it's a crapshoot.
What also helps is getting it to generate docs of your code so that it has a map.
This is actually how humans understand a large code base too. We don’t hold a large code base in memory — we navigate it through docs and sampling bits of code.
If the answer is that the AI cranks out code faster than the team can digest and review it and faster than you can spec out the features, what’s the point? I can see completely shifting your workflow, letting skills atrophy, adopting new dependencies, and paying new vendors if it’s boosting your final output 5 or 10x.
But if it’s a 20% speed up is it worth it?
I've been doing stuff with recent models (gemini 3, claude 4.5/6, even smaller, open models like GLM5 and Qwen3-coder-next) that was just unthinkable a few months back. Compiler stuff, including implementing optimizations, generating code to target a new, custom processor, etc. I can ask for a significant new optimization feature in our compiler before going to lunch and come back to find it implemented and tested. This is a compiler that targets a custom processor so there is also verilog code involved. We're having the AI make improvements on both the hardware and software sides - this is deep-in-the-weeds complex stuff and AI is starting to handle it with ease. There are getting to be fewer and fewer things in the ticket tracker that AI can't implement.
A few months ago I would've completely agreed with you, but the game is changing very rapidly now.
I don't agree they have solved this problem, at all, or really in any way that's actually usable.
It's hard to predict how quickly it will be solved and by whom first, but this appears to be a software engineering problem, solvable through effort, resources, and time, not a fundamental physical law that must be circumvented like a physical sciences problem. Betting that it won't be solved well enough to impact the work of today relatively quickly is betting against substantial resources and investment.
Plenty of things get substantial resources and investment and go nowhere.
Of course I could be totally wrong and it's solved in the next couple years, it's almost impossible to make these predictions either way. But I get the feeling people are underestimating what it takes to be truly intelligent, especially when efficiency is important.
Well, that is easily disproved by the fact that people have children with higher IQs than their own.
Not to say we can't create machines that far surpass our abilities on a single axis or a small set of axes.
First of all, it's not necessary for one person to build that super-intelligence all by themselves, or to understand it fully. It can be developed by a team, each of whom understands only a small part of the whole.
Secondly, it doesn't necessarily even require anybody to understand it. The way AI models are built today is by pressing "go" on a giant optimizer. We understand the inputs (data) and the optimizer machine (very expensive linear algebra) and the connective structure of the solution (transformer) but nobody fully understands the loss-minimizing solution that emerges from this process. We study these solutions empirically and are surprised by how they succeed and fail.
We may find we can keep improving the optimization machine, and tweaking the architecture, and eventually hit something with the capacity to grow beyond our own intelligence, and it's not a requirement that anyone understands how the resulting model works.
We also have many instances in nature and history of processes that follow this pattern, where one might expect to find a similar "law". Mammals can give birth to children that grow bigger than their parents. We can make metals purer than the crucible we melted them in. We can make machines more precise than the machines that made their parts. Evolution itself created human intelligence from the repeated application of very simple rules.
Yes, it seems likely to me.
It seems like the ultimate in hubris to assume we are capable of creating something we are not capable of ourselves.
So to create something that exceeds our capabilities is not a matter of hubris (as if physical laws cared about hubris anyway), it's an unambiguously ordinary occurrence.
SOTA models are superhuman in a narrow sense, in that they have solid background knowledge of pretty much any subject they've been trained on. That's great. But no, it doesn't turn your AI datacenter into "a country of geniuses".
Wonder what that means for meatspace.
Edit: Would also disagree this isn’t a physics problem. Pretty sure power required scales according to problem complexity. At a certain level of problem complexity we’re pretty much required to put enough carbon in the atmosphere to cook everyone to a crisp.
Edit 2: illustrative example, an Epic in Jira: “Design fusion reactor”
This is no different from onboarding a new member of the team, and I think OpenAI was working on that "frontier":
>We started by looking at how enterprises already scale people. They create onboarding processes. They teach institutional knowledge and internal language. They allow learning through experience and improve performance through feedback. They grant access to the right systems and set boundaries. AI coworkers need the same things.
And tribal knowledge will not be a moat once execs realize that all they need to do is prioritize documentation instead of "code velocity" as a metric (sure, any metric gets Goodharted, but LLMs are great at sifting through garbage to find the high-perplexity tokens).
>But context limitation is fundamental to the technology in its current form
This may not be the case; large enough context windows plus external scratchpads would mostly obviate the need for true in-context learning. The main issue today is that "agent harnesses" suck. The fact that Claude Code is considered good is more an indication of how bad everything else is. Tool traces read like a drunken newb brute-forcing his way through tasks. LLMs can mostly "one-shot" individual functions, but orchestrating everything is the blocker. (Yes, there's progress in METR or whatever, but I don't trust any of that, else we'd actually see the results in real-world open source projects.)
LLMs don't really know how to interact with subagents. They're generally sort of myopic even with tool calls. They'll spend 20 minutes trying to fix build issues going down a rabbit hole without stepping back to think. I think some sort of self-play might end up solving all of these things, they need to develop a "theory of mind" in the same way that humans do, to understand how to delegate and interact with the subagents they spawn. (Today a failure case is agents often don't realize subagents don't share the same context.)
Some of this is certainly in the base model and pretraining, but it needs to be brought out in the same way RL was needed for tool use.
It's a coding agent that takes a ticket from your tracker, does the work asynchronously, and replies with a pull request. It does progressively understand the codebase. There's a pre-warming step so it's already useful on the first ticket, but it gets better with each one it completes.
The agent itself is done and working well. Right now I'm building out the infrastructure to offer it as a SaaS.
If anyone wants to try it, hit me up. Email is in my profile. Website isn't live yet, but I'm putting together a waitlist.
I think it's more nuanced than that. I'd say that:
- 0% can't be implemented by AI
- but a lot of them can be implemented much faster thanks to AI
- and a lot of them get implemented slower when using AI (because the author has to fix hallucinations and revert changes that caused bugs)
As we learn to use these tools, even in their current state, they will increase productivity by some factor and reduce needs for programmers.
I have seen numerous 25-50% productivity boosts over my career. Not a single one of them reduced the overall need for programmers.
I can’t even think of one that reduced the absolute number of programmers in a specific field.
I look at my ticket tracker and I see basically 100% of it that can be done by AI. Some with assistance, because business logic is more complex or less well factored than it should be, but AI is perfectly capable of doing most of the work with a well-defined prompt.
Live stream validation results as they come in
The body doesn't give much other than the high-level motivation from the person who filed the ticket. In order to implement this, you need to have a lot of context, some of which can be discovered by grepping through the code base and some of which can't:
- What is the validation system and how does it work today?
- What sort of UX do we want? What are the specific deficiencies in the current UX that we're trying to fix?
- What prior art exists on the backend and frontend, and how much of that can/should be reused?
- Are there any scaling or load considerations that need to be accounted for?
I'll probably implement this as 2-3 PRs in a chain touching different parts of the codebase. GPT via Codex will write 80% of the code, and I'll cover the last 20% of polish. Throughout the process I'll prompt it in the right direction when it runs up against questions it can't answer, and check its assumptions about the right way to push this out. I'll make sure that the tests cover what we need them to and that the resultant UX feels good. I'll own the responsibility for covering load considerations and be on the line if anything falls over.
Does it look like software engineering from 3 years ago? Absolutely not. But it's software engineering all the same even if I'm not writing most of the code anymore.
This cyborg process is exactly how we're using AI in our organisation as well. The human in the loop understands the full context of what the feature is and what we're trying to achieve.
Funny enough, all the questions you posed are things the agent asks itself right away, and then it goes and tries to understand and validate an answer, sometimes with input from the user. But I think this planning mechanism is really critical to being able to have an AI generate an understanding, and then have it be validated by a human before beginning implementation.
And by planning I don't necessarily mean plan mode in your agent harness of choice. We use a custom /plan skill in Claude Code that orchestrates all of this using multiple agents, validation loops, and specific prompts to weed out ambiguities by asking clarifying questions using the ask user question tool.
This results in taking really fuzzy requirements and making them clear, and we automate all of this through linear but you could use your ticket tracker of choice.
There will be 3 “software” companies left. And shortly after that, society will collapse, because if AI can do that, it can do any white collar job.
For the UX, have it explore your existing repos and copy prior art from there and industry standards to come up with something workable.
Web scale issues can be inferred from the rest of the codebase. If your terraform repo has one RDS server vs. a fleet of them, multi-region, then the AI, just as well as a human, can figure out whether it needs Google Spanner level engineering or not. (Probably not.)
Bigger picture though, what's the process when a human logs an underspecified ticket and someone else picks it up and has no clue what to do with it? They're gonna go ask the person who logged the bug for their thoughts and some details beyond "hurr durr something something validation". If we're at the point where AI is able to make a public blog post shaming an open source developer for not accepting a patch, throwing questions back to you in JIRA about the details of the streaming validation system is well within its capabilities, given the right set of tools.
Ticket trackers are perfect for this. Just start with asking AI to take this unclear, ambiguous ticket and come up with a real plan for how to accomplish it. Review the plan, update your ticket system with the plan, have coworkers review it if you want.
Then when ready, kick off a session for that first phase, first PR, or the whole thing if you want.
Why aren’t there 10x the number of games on steam? Why aren’t people releasing new integrated programming language/OS/dev environments?
Why does our backlog look exactly the same as when I left for paternity leave 4 months ago?
I’m still waiting for the evidence. I still haven’t seen externally verifiable evidence that AI is a net productivity boost for the ability to ship software.
That doesn’t mean that it isn’t. It does mean that it isn’t big enough to be obvious.
I’m very closely watching every external metric I can find. Nothing yet. Just saw the Steam metrics for January. Fewer titles than January last year.
That's a sign that you have spurious problems under those tickets or you have a PM problem.
Also, a job is not a task. If your company has jobs that consist of a single task, then those jobs would definitely be gone.
Um, you do realize that "the memory" is just a text file (or a bunch of interlinked text files) written in plain English. You can write these things out yourself. This is how you use AI effectively, by playing to its strengths and not expecting it to have a crystal ball.
Take even the most unskilled labor people can think of, such as flipping a burger at a restaurant like McDonald's. In reality that job is multiple different roles mixed into one that are constantly changing. Multiple companies have experimented with machines and robots to perform this task, all with very limited success and none with any proper economics.
Let's be charitable and assume that this type of fast food worker gets paid $50,000 a year. For that job to be displaced it needs to be performed by a robot that can be acquired for a reasonable capital expenditure such as $200,000 and requires no maintenance, upkeep, or subscription fees.
This is a complete non-reality in the restaurant industry. Every piece of equipment they have cost them significant amounts and ongoing maintenance even if it's the most basic equipment such as a grill or a fryer. The reality is that they pay service technicians and professionals a lot of money to keep that equipment barely working.
Flipping burgers is WAY more demanding than I ever imagined. That's the danger of AI:
It takes jobs faster than it creates new ones, PLUS for some fields (like software development) downshifting to just about anything else is brutal and sometimes simply not doable.
Forget becoming a manager at McDonald's, or even being good at flipping burgers at the age of 40: you are competing with 20-year-olds doing sports, with amazing coordination, etc.
I have no idea what in the world you are talking about. Most 20 year olds working at McDonald’s are stoned and move at half a mile an hour whether it’s a lunch rush or it’s 2am. I worked retail for years before I finally switched full time to programming. It’s certainly not full of amazing motivated athletes with excellent coordination. You’re lucky if most of them can show up to work on time more than half the time.
As a white collar computer guy, I can waste some time on Reddit. Or go for a walk and grab coffee. Or let people know that I’m heading out for a couple of hours to go to the doctor. There are a LOT of little freedoms that you take for granted if you haven’t worked a shitty minimum wage job. Getting in trouble for punching in one minute late, not being allowed to sit down, socializing too much when you’re not on a break.
I’m pretty sure that most tech employees would just quit when encountering a manager like that
The average salary for a software developer in Oregon is $118k/yr. The average salary for an HVAC technician in Oregon is $74k/yr.
It's for sure less, but the gap is smaller than some might think. I think some markets (SF) distort the cost a bit.
People are worried about white-collar not blue-collar jobs being replaced. Robotics is obviously a whole different field from AI.
I agree, but people are conflating the two. We have seen a lot of advancements in robotics, but as of now that only makes the economics worse. We're not seeing the complexity of robots going down, and we're seeing the R&D costs going up, etc.
If it didn't make sense a few years ago to buy a crappy robot that can barely do the task because your business will never make money doing it, it probably doesn't make sense this year to buy a robot that still can't accomplish the tasks and is more expensive.
Being the hype-man that he is I assume he meant humanoid robots - I think he's being silly here, and the sentence made me roll my eyes.
It merely reflects the designer's willingness to engage in sci-fi tropes.
Any job that is predominantly done on a computer though is at risk IMO. AI might not completely take over everything, but I think we'll see way fewer humans managing/orchestrating larger and larger fleets of agents.
Instead of say 20 people doing some function, you'll have 3 or 4 prompting away to manage the agents to get the same amount of work done as 20 people did before.
So the people flipping the burgers and serving the customers will be safe, but the accountants and marketing folks won't be.
And that's probably something most people are okay with. Work that can be automated should be and humans should be spending their time on novel things instead of labor if possible.
A free society.
If you want to make the argument that singularity has occurred and that knowledge oracles are no longer needed, that's a bold claim.
If you want to make the argument it would escape our control, etc. that's a valid argument for proper controls.
If you want to make the argument that LLMs are sentient and that it's not ethical to "enslave" them, that's also a pretty bold stance currently.
Humans have been inventing technology and improving the quality of life (of our species!) for a very long time and that strategy hasn't changed IMO
Why didn't the Internet cause a massive death plague?
Eventually that will change and the role of a customer service agent will be redefined.
It will happen to you.
This is correct. This also is a lot more complex than it sounds and creates a lot of work. Cooking those products creates byproducts that must be handled.
> and the cashiers have largely been replaced by self-order terminals so that employees no longer even need to speak rudimentary English
Yet most of the customers still have to interact with an employee because "the kiosk won't let me". Want to add Mac sauce? Get the wrong order in the bag? Machine took payment but is out of receipt paper? Add up all these "edge cases" and a significant share of these "contactless" transactions involves plenty of contact!
> It will happen to you.
Any labor that can be automated should be. Humans are not supposed to spend their time doing meaningless tasks without a purpose beyond making an imaginary number go up or down.
Okay so the job of "cook" just became "grease disposal engineer"?
> Yet most of the customers still have to interact with an employee because "the kiosk won't let me"
That hasn't stopped some places I've visited from only allowing people to order from the kiosk. Literally I've said something to the person behind the counter who pointed to the iPad and when I said I wanted something else, shrugged and said we can't do that.
That is the current way the job works. The idea that even the most basic "burger flipper" job is isolated into a single dimension (flipping a burger) is false. That worker has to get supplies, prepare ingredients, stage them between cooking, dispose of waste product, etc.
> Literally I've said something to the person behind the counter who pointed to the iPad and when I said I wanted something else, shrugged and said we can't do that.
That's because corporate told them to maximize kiosk usage or because the employee was lazy. That's always going to happen. The McDonalds in Union Station DC has broken glass on the floor, because it's a shithole and the employees don't care, but it doesn't mean much else IMO.
It's a non-reality in America's extremely piss-poor restaurant industry. We have a competency crisis (the big key here) and a worker shortage that SK doesn't, and they have far higher trust in their society.
> While a highly automated McDonald’s in South Korea (or the experimental "small format" store in Texas) might look empty, the total headcount remains surprisingly similar to a standard restaurant
LLMs don't create anything new; they simply replace human computer I/O with tokens. That's it, leaving the humans who are replaced to fight for a limited number of jobs. LLMs are not creating new jobs; they only create "AI automates {insert business process}" SaaS that are themselves heavily automated. I suppose there are more datacenter jobs (for now), and maybe some new ML researcher positions, but I don't really see job growth. Are we supposed to just all go work at a datacenter or in the semiconductor industry (until they automate that too)?
You are thinking too linearly. When the price of goods and services goes down because the cost to produce those goods and services decreases, that means things are cheaper. Now that things are cheaper, we have more money to spend on other goods or services.
Who knows what industries will be created because of this alleged release of human labor.
When the refrigerator was invented we didn’t just replace an industry of shipping ice, we created new industries that relied on refrigeration. That’s creative destruction. That’s economic growth.
This is not to mention that I find the scope and scale of AI displacement to be highly dubious and built on hype.
Do you walk around with a blindfold on? Are you extremely privileged? Sounds like it. Tell this to the 25% of new college grads that have been unemployed for 12 months, or working as a barista with 100k in debt. Eventually they'll be knocking on your penthouse/mansion door.
I haven't seen the same firing sprees outside of FAANG and their wannabees.
At best, some people expect all this to work out, so they sit back unaware of the weight of these battles. Worst case is plain ol' ignorance of what's going on around them.
What DOES go up with automation is demand. Fewer farmers today than 100 years ago, but significantly more mouths to feed.
What also increases is new kinds of jobs; entirely new fields. The automobile shrank the number of buggy whip makers, but taxi drivers increased. Then the internet increased Uber drivers on top of taxi drivers.
Get ready for French Revolution v2, but global. The ruling class only exists because the working class tolerates them. This just won't work.
Datacenters are very automated. They already don't require many people, and they're going to need fewer and fewer humans in them going forward.
Semiconductor manufacturing is also very heavily automated.
The jobs of the future may be that you're a court jester for Larry Ellison, or that you do something else that's fundamentally pointless but happens to be something that a person with money wants. Companion, entertainment, errands. Now, that may sound dystopian, but on some level, so are most white collar jobs today. Microsoft employs 200k people. How many of these are directly involved in shipping money-making products - five percent? Ten? The rest is there essentially for the self-sustaining bureaucracy itself. And there's no reason for that bureaucracy to exist except the whims of people with money and power - delegation, empire-building, pet projects, etc.
And I know datacenters and semiconductor manufacturing don't employ a lot of people; that's my point. The advent of LLMs replaces more jobs than it creates.
A bunch of revolutionaries who carried a campaign of murder that ultimately had little bearing on the economic standing or job prospects of French citizens?
I'm not saying that people will be content or that there will be no revolutions in the future. There might be. But most jobs are a social construct. A relatively small fraction of employed people are essential to the well-being of mankind. For every construction guy, there are office managers, assistants to the office manager, municipal form-pushers, etc. It's not that these jobs are completely pointless, but we could do without them and the damage would be probably less than the cumulative payroll.
If AI can do 80% of your tasks but fails miserably on the remaining 20%, that doesn't mean your job is safe. It means that 80% of the people in your department can be fired and the remaining 20% handle the parts the AI can't do yet.
> The most important thing to know about labor substitution...is this: labor substitution is about comparative advantage, not absolute advantage. The question isn’t whether AI can do specific tasks that humans do. It’s whether the aggregate output of humans working with AI is inferior to what AI can produce alone: in other words, whether there is any way that the addition of a human to the production process can increase or improve the output of that process... AI can have an absolute advantage in every single task, but it would still make economic sense to combine AI with humans if the aggregate output is greater: that is to say, if humans have a comparative advantage in any step of the production process.
It might all wash out eventually, but eventually could be a long time with respect to anybody’s personal finances.
There exists some fact about the true value of AI, and then there is the capitalist reaction to new things. I'm more wary of a lemming effect by leaders than I am of AI itself.
Which is pretty much true of everything I guess. It's the short sighted and greedy humans that screw us over, not the tech itself.
Or would you just do more stuff?
I feel like most software projects have an endless backlog.
Better IDEs, programming languages, packages, frameworks, etc have increased our productivity, reduced bugs -- but rarely reduced headcount.
Ever heard of anyone migrating from PHP+jQuery to React+Node and reducing head count due to increased productivity?
I sometimes reminisce about the LAMP stack being super productive. But at the time I didn't write tests :)
The "surplus" 20% of roles retained (beyond the bare minimum) will then be devoted to developing features and implementing fixes using AI, where those features and fixes would previously not have been high enough priority to implement or fix.
When the price of implementing a feature drops, it becomes economically viable (and perhaps competitively essential) to do so -- but in this scenario, AI couldn't do _all_ the work to implement such features so that's why 40% rather than 20% of the developer roles would be retained.
The 40% of developer roles that remain will, in theory, be more efficient also because they won't be spending as much time babysitting the "lesser" developers in the 60% of the roles that were eliminated. As well, "N" in the Mythical Man Month is reduced leading to increased efficiency.
(No, I have no idea what the actual percentages would be overall, let alone in a particular environment - for example, requirements for Spotify are quite different than for Airbus/Boeing avionics software.)
"Work" isn't a finite thing. It's not like all the people in your office today had to complete 100% of their tasks, and all of them did.
"Work" is not a static thing. At least not in positions of many knowledge-worker careers.
The idea of a single day's unit of "work" being 100%, is really sophomoric.
Also, if 100% of a labor force now has 80% more time, wouldn't it behoove the company to employ the existing workforce in more revenue-generating activities? Or find a way to retain as much of the institutional knowledge as possible?
Doom, fear-mongering, and hopelessness are not a sustainable approach.
Amazon fulfillment centers are a good example of automation shrinking the role of humans. We haven't seen total headcounts go down because Amazon itself has been growing. While the human role shrinks, the total business grows and you tread water. But at some point, Amazon will not be able to grow fast enough to counterbalance the shrinking human role in the FC and total headcount will decrease until one day it disappears entirely.
I’m more worried that even if these tools do a bad job people will be too addicted to the convenience to give them up.
Example: recruiters locked into an AI arms race with applicants. The application summaries might be biased and contain hallucinations. The resumes are often copied wholesale from some chat bot or other. Nobody wins, the market continues to get worse, but nobody can stop either.
Having said that, it's hard to imagine jobs like mine (working on NP-complete problems) existing if LLMs continue advancing at the current rate, and it's hard to imagine they won't continue to accelerate, since they're writing themselves now, so the limitations of human ability are no longer a bottleneck.
E.g. once I was tasked to build a new matching algorithm for a trading platform, and upon fully understanding the specs I realized it could be interpreted as a mixed integer programming problem; the idea got shot down right away because the PM didn't understand it. There are all kinds of limiting factors once you get into the details.
--- I said that because I did the analysis and realized that if I implemented the original version, which basically is a crazy way to iteratively solve the MIP problem, it would be much harder to reason about internally, and much harder to code correctly. But obviously it keeps the broker happy (the developer is doing exactly what they said).
Software engineers work on Jira tickets, created by product managers and several layers of middle managers.
But the power of recent models is not in working on cogs, their true power is in working on the entire mechanism.
When talking about a piece of software that a company produces, I'll use the analogy of a puzzle.
A human hierarchy (read: company) works on designing the big puzzle at the top and delegating the individual pieces to human engineers. This process goes back and forth between levels in the hierarchy until the whole puzzle slowly emerges. Until recently, AI could only help on improving the pieces of the puzzle.
Latest models got really good at working on the entire puzzle - big picture and pieces.
This makes human hierarchy obsolete and a bottleneck.
The future seems to be one operator working on the entire puzzle, minus the hierarchy of people.
Of course, it's not just about the software, but streams of information - customer support, bug tickets, testing, changing customer requirements.. but all of these can be handled by AI even today. And it will only get better.
This means different things depending on which angle you look at it - yes, it will mean companies will become obsolete, but also that each employee can become a company.
Given the rest of your argument that makes no sense. Why should that one operator exist? If AI is good at big picture and the entire puzzle, I don’t see why that operator shouldn’t be automated away by the AI [company] itself?
I’m a pretty big generalist professionally. I’ve done software engineering in a broad category of fields (Game engines, SaaS, OSS, distributed systems, highly polished UX and consumer products), while also having the experience of growing and managing Product and Design teams. I’ve worn a lot of hats over the years.
In my most recent role I’m working on a net new product for the company and have basically been given full agency over this product: technical, budget, team, process, marketing, branding, and positioning.
Give someone experienced like me capital, AI, and freedom, and you absolutely can build high quality software at a pretty blinding pace.
I’m starting to get the feeling that many folks struggling with adopting or embracing AI for their job are struggling more because of their job/company than because of AI.
AI might not replace current work, but it’s already replacing future hypothetical work. Whether it can actually do the job is beside the point in the short term. The way business models work is that if there’s an option to reduce your biggest cost (labour), you’d very much give it a go first. We might see a resurgence of labour if it turns out to be all hype, but for the short to medium term there’ll be a lot of disruption.
Think we’re already seeing that in employment data in the US, as new hiring and job creation slows. A lot of that will for sure be the current economic environment, but I suspect (more so in tech-focused industries) it will also be due to tech capex in place of headcount growth.
Lest we forget, software engineers aren't exactly ordinary people: they make quite a bit above the median wage.
AI taking our jobs is scary because it will turn us into "ordinary people". And ordinary people are not ok. They're barely surviving.
Therefore, the best way to increase profit is to lower cost.
For me, the 2 main factors are:
1. whether your company's priority is growing or saving
- growing companies, especially those in steep competition, fight for talent, and AI productivity results in more hiring to outcompete
- saving companies are happy to cut jobs to improve margins, due to their monopoly or pressure from investors
2. how 'sequence-of-tasks-like' your job is
- SOTA models can easily automate long-running sequences of tasks with minimal oversight
- the more your job resembles this, the more in danger you are (customer service diffusion is just starting, but I predict this will be one of the first to be heavily disrupted)
- I'm less worried about jobs where your job is a 'role' that comes with accountability and requires you to think big picture about what tasks to do in the first place
It also argues that models have existed for years and we're yet to see significant job loss. That's true, but AI is only now crossing the threshold of being both capable and reliable enough to automate common tasks.
It's better to prepare for the disruption than to take the sink-or-swim approach we have now, hoping things will sort themselves out.
This is exactly what chess experts like Kasparov thought in the late 90s: “a grandmaster plus a computer will always beat just a computer”. This became false in less than a decade.
One of the things that drove the tech boom in the 2010s was cloud computing driving the cost of starting an internet company into the ground.
What happens when there’s software you think should exist, and you no longer need to hire a bunch of people at $150k-$250k per year to build it?
What happens when 200 out-of-work former software engineers take a look at your software and use LLMs to quickly build their own version each undercutting everyone else's prices in a race to the bottom?
There will also be far fewer positions demanding these skills. Easy access to generating code has moved the bottleneck in companies to positions & skills that are substantially harder to hire for (basically: Good Judgement); so while adding Agentic Sorcerers would increase a team's code output, it might be the wrong code. Corporate profit will keep scaling with slower-growing team sizes as companies navigate the correct next thing to build.
The self-setup here is too obvious.
This is exactly why man + machine can be much worse than just machine. A strong argument needs to address what we, as an extremely slow-operating, slow-learning, and slow-adapting species, can do that machines improving in ability and efficiency month over month and year over year will find they cannot do well, or cannot do without.
It is clear that we are going through a disruptive change, but COVID is not comparable. Job loss is likely to have statistics more comparable to the Black Plague. And sensible people are concerned it could get much worse.
I don’t have the answers, but acknowledging and facing the uncertainty head on won’t make things worse.
Here's an article:
https://history.wustl.edu/news/how-black-death-made-life-bet...
But yes, if lots of people are killed off by AI, the remaining humans might have more job security! Could that be called a "soft landing"?
Not sure if there's an analogy to make somewhere though
Maybe this is overly optimistic, but if AI starts to have negative impacts on average people comparable to the plague, it seems like there's a lot more that people can do. In medieval Europe, nobody knew what was causing the plague and nobody knew how to stop it.
On the other hand, if AI quickly replaces half of all jobs, it will be very obvious what and who caused the job loss and associated decrease in living standards. Everybody will have someone they care about affected. AI job loss would quickly eclipse all other political concerns. And at the end of the day, AI can be unplugged (barring robot armies or Elon's space-based data centers I suppose).
We can't stop OpenClaw, because humans are curious. It just takes one unleashed model with a crypto account and some way to make money for the first independent AIs to start bleeding into cyberspace.
We can't opt out of AI competition, because other individuals, organizations and nation states are not going to stop, and not going to leverage their AI if they get ahead of us.
> AI job loss would quickly eclipse all other political concerns.
True. I think this is one of only a few certainties.
1. You are not affected somehow (you've got savings, connections, aren't living paycheck to paycheck, and have food on the table).
2. You prefer to pursue no troubles in matters of complexity.
Time will tell; it's showing already.
Those who downplay it are either business owners themselves or have been continuously employed for 2+ years.
I think a lot of software engineers who _haven't_ looked for jobs in the past few years don't quite realize what the current market feels like.
As an American I found a new job last year (Staff SW), and it was falling off a log easy, for a 26% pay bump.
I know smart and capable people that have been unemployed for 6+ months now, and a few much longer. Some have been through multiple layoffs.
I am presently employed, but have looked for a job. The market is the worst I've seen in my almost 30 year career. I feel deeply for anyone who needs a new job right now. It is really bad out there.
And it's now at 80 million views! https://x.com/mattshumer_/status/2021256989876109403
It appears to have really caught the zeitgeist.
I work on this technology for my job, and while I'm very bullish, pieces like that are, as you said, slop-ish, and, as I'll say, breathless, because there are so many practical challenges to deal with standing between what is being said there and where we are now.
Capability is not evenly distributed, and that's getting people into loopy ideas of just how close we are to certain milestones. Not that it's wrong to think about those potential milestones, but I'm wary of timelines.
I just don't understand people working on improving AI. It just isn't worth the risk.
A cynical/accelerationist perspective would be: it enables you to rake in huge amounts of money, so no matter what comes next, you will be set up to endure it better than most.
I’m younger than most on this site. I see the next decades of my life being defined by a multi-generational dark age via a collapse in literacy (“you use a calculator right?”), median prosperity (the only truly functional distribution system we have figured out is labor), and loss of agency (kinda obvious). This outcome is now, as of 2026, essentially priced into the public markets and accepted as fact by most media outlets.
“It’s inevitable” is at least a hard point to argue with. “Well I’M so productive, I’m having the time of my life”, the dominant position in many online tech spaces, seems short-sighted at best.
I miss being a techno optimist, it’s much more fun. But it’s increasingly hard.
Every generation has its strains, and the internet just amplifies them because outrage is currency. Those strains are things you only start to notice as you get older, so they seem novel when, in the scheme of humanity, they're basically standard.
Fwiw, if the market had actually priced it in, it would be in freefall, since the market itself would shortly be irrelevant. We are due for a correction soon, though.
Internet discourse is a facsimile of real life and often not how real life operates in my experience.
So I see all the discourse around extremes on either end and, based on lived experience and working in the field, think there's a much neater middle ground we'll ultimately arrive at, thanks to people working very hard to land the plane, so to speak.
Did the 80 million people believe what they were reading?
Have we now transitioned to a point where we gaslight everyone for the hell of it just because we can, and call it, what, thought-provoking?
Those numbers are likely greatly exaggerated. Twitter is nowhere near where it was at its peak. You could almost call it a ghost town. Linkedin but for unhinged crypto- and AI bros.
I'm sure the metrics report 80 million views, but that's not 80 million actual individuals that cared about it. The narrative just needs these numbers to get people to buy into the hype.
https://arxiv.org/abs/2510.15061
I thought normies would have caught on to the em dash, the overuse of semicolons, the overuse of fancy quotes, the lack of exclamation marks, the "It's not X, it's Y" constructions, etc. Clearly I was wrong.
they don't care about the majority losing jobs, or even starving to death so long as they ensure a great future for themselves and the people they, supposedly, care about.
“Some of you may die, but that’s a risk I’m willing to make” - also Elon Musk, probably
I ain't paying for shit.
... for the 3rd year in a row. Feels like the new 'year of the Linux desktop'
Maybe I am wrong, but the history of business on the web says I am right. If you go back and look at why those businesses think they are successful, and if that analysis is correct, then I am.
At first, it's a pretty big energy hog and if you don't know how to work it, it might crash and burn.
After some time, the novelty wears off. More and more people begin using it because it is a massive convenience that does real work. Luddites who still walk or ride their bikes out of principle will be mocked and scoffed at.
Then the mandatory compliance will come. A government-issued license will be required to use it and track its use. This license will be tied to your identity and it will become a hard requirement for employment, citizenship, housing, loans, medical treatment, and more. Not having it will be a liability. You will be excluded from society at large if you do not comply.
Last will come the AI-integrated brain-computer interface. You won't have any choice when machine-gun-wielding Optimus robots corral you into a self-driving Tesla bus to the nearest FEMA camp to receive your Starlink-connected Neuralink N1 command-and-control chip. You will be decapitated if you refuse the mark of the beast. Rev 20:4
That's just an American thing, I've never owned a car and most people of my age I know haven't either.
Choose to die
The real world is much more resilient and stubborn. The industrial revolution indeed wiped out a lot of jobs. But it created a lot more new ones. Agriculture and food production no longer make up >90% of the economy. The utopian version of that (we all get free food) never happened. The dystopian version (we'll all starve) didn't happen either. And the Luddite version (we'll all go back to artisanal farming) didn't happen either. What happened is that well-fed laborers went to work doing completely different stuff. Subsistence farming now only exists in undeveloped countries and regions, e.g. in rural Africa.
The simple reality is that we have 8 billion people, probably growing towards 10 billion. These people are going to buy and spend stuff with their income. Whatever that is, is what the economy is and what we collectively value. If AI puts us all out of work, people aren't going to sit on their hands and go back to subsistence farming. They'll fill the time with whatever it is that they can create income with, so they can spend it on things that are valuable to them.
This notion of value is what is key. Because if AI lowers the cost of something, it simply becomes cheaper. We need a lot of valuable and scarce resources to power AI. That isn't cheap. So, there's an equilibrium of stuff that is valuable enough to automate with it that people still want to pay for by committing their valuable resources to it. Which as they become scarcer become more valuable and more interesting from an economic point of view. The economy adapts towards activity that facilitates value creation. We're opportunists. It all boils down to what we can do for each other that is valuable and interesting to us. Whatever that is, is where there will be a lot of growth.
I'm in software, I'm not worried about less work. I'm worried about handling the barrage of stuff I don't have time to do that I now need to start worrying about doing. There's no way I'm going to do any of that without AI. It's already generating more work than I can handle. This isn't frivolous stuff that I don't need, it's stuff that's valuable to my company because we can sell it to other companies who need that stuff.
That's a weird way of saying 80 million times.
There are humans who can't do any mental work that AI can't do. Those humans are not useful for mental work, and that's what can cause real AI job loss. The bar for being useful for mental work is rising rapidly.
Jobs that are easy disappear and are replaced with jobs that are no longer as easy, either requiring more mental skills (that many people don't have) or being soul-crushing manual jobs that are also getting harder constantly.
So yes, YOU are not worried, because you are privileged here.
AI will buy us some time before economic collapse, though on the bright side the environment can recover a bit, since human growth was the worst stressor.
I'm not worried about AI job loss in the programming space. I can use Claude to generate ~80% of my code precisely because I have so much experience as a developer. I intuitively know what is a simple mechanical change (that is to say, uninteresting editing of lines of code) as opposed to a major architectural decision. Claude is great at doing uninteresting things. I love it because that leaves me free to do interesting things.
You might think I'm being cocky. But I've been strongly encouraging juniors to use Claude as well, and they're not nearly as successful. When Claude suggests they do something dumb--and it DOES still suggest dumb things--they can't recognize that it's dumb. So they accept the change, then bang their head on the wall as things don't work, and Claude can't figure it out to help them.
Then there are bad developers who are really fucked by Claude. The ones who really don't understand anything. They will absolutely get destroyed as Claude leads them down rabbit holes. I have specific anecdotes about this from people I've spoken to. One had Claude delete a critical line in an nginx config for some reason, and the dev spent a week trying to resolve it. Another was tasked with a simple database maintenance script and came back two weeks later (after constant prodding by teammates for a status update) with a Claude-written reimplementation of an ORM. That developer just thought they would need another day of churning through Claude tokens to dig themselves out of an existential hole. If you can't think like a developer, these tools won't help you.
I have enough experience to review Claude's output and say "no, this doesn't make sense." Having that experience is critical, especially in what I call the "anti-Goldilocks" zone. If you're doing something precise and small-scoped, Claude will do it without issues. If you try to do something too large ("write a Facebook for dogs app") Claude will ask for more details about what you're trying to do. It's the middle ground where things are a problem: Claude tries to fill in the details when there's something just fundamentally wrong with what it's being asked.
As a concrete example, I was working on a new project and I asked Claude to implement an RPC to update a database table. It did so swimmingly, but also added a "session.commit()" line... just kind of in the middle of somewhere. It was right to do so, of course, since the transaction needed to be committed. And if this app were meant to be a prototype, sure. But anyone with experience knows that randomly doing commits in the middle of business logic code is a recipe for disaster. The issue, of course, was not having any consistent session management patterns. But a non-developer isn't going to recognize that that's an issue in the first place.
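To illustrate what I mean (with made-up SQLAlchemy-style names, not my actual codebase), here's roughly the difference between a commit buried in business logic and a session scope owned by the caller:

```python
# Illustrative sketch only - model and function names are invented.
from contextlib import contextmanager
from sqlalchemy import create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, sessionmaker

class Base(DeclarativeBase):
    pass

class User(Base):  # stand-in model for the example
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str]

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

# Roughly what Claude wrote: correct in isolation, but now this one
# function silently owns transaction policy for everyone who calls it.
def update_email_bad(session, user_id, new_email):
    user = session.get(User, user_id)
    user.email = new_email
    session.commit()  # a commit buried in the middle of business logic

# The consistent pattern: business logic never commits; the caller
# (e.g. the RPC handler) owns exactly one transaction per request.
@contextmanager
def session_scope():
    session = Session()
    try:
        yield session
        session.commit()   # commit once, at the boundary
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()

def update_email(session, user_id, new_email):
    user = session.get(User, user_id)
    user.email = new_email  # no commit here

# RPC handler:
# with session_scope() as s:
#     update_email(s, req.user_id, req.email)
```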
Or a more silly example from the same RPC: the gRPC API didn't include a database key to update. A mistake on my part. So Claude's initial implementation of the update RPC was to look at every row in the table and find ones where the non-edited fields matched. Makes... sense, in a weird roundabout way? But God help whoever ends up vibe coding something like that.
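In code, the difference was roughly this (the Item model and request fields are invented for illustration, reusing the setup from the sketch above):

```python
class Item(Base):  # hypothetical model, same Base as the previous sketch
    __tablename__ = "items"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]
    category: Mapped[str]
    price: Mapped[float]

# What Claude generated when the proto had no key field:
# scan every row and guess which one the caller meant.
def update_item_guessing(session, req):
    for item in session.query(Item).all():
        if item.name == req.name and item.category == req.category:
            item.price = req.price  # hope the unchanged fields were unique

# What the API should have allowed in the first place:
def update_item(session, req):
    item = session.get(Item, req.id)  # req.id added to the proto
    item.price = req.price
```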
These types of AI fears come from things like this in the original article:
> I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. [...] when I test it, it's usually perfect.
Which is great. How many developers are getting paid full-time to make new apps on a regular basis? Most companies, I assume, only build one app. And then they spend years and many millions of dollars working on that app. "Making a new app from scratch" is the easy part! What's hard is adding new features to that app while not breaking others, when your lines of code go from those initial tens of thousands to tens of millions.
There's something to be said about the cheapness of making new software, though. I do think one-off internal tools will become more frequent thanks to AI support. But developers are still going to be the ones driving the AI, as the article says.
That being said, the real danger today is not coming from AI itself; it's more the C-suites believing AI can just zero-shot any problem you throw at it.
I’m definitely worried about job loss as a result of the AI bubble bursting, though.
Secondly, David Oks attended the Masters School for high school, an elite private boarding school with tuition currently running 72k USD/year if you board there the whole time, and 49k USD/year if you attend just for schooling (https://en.wikipedia.org/wiki/Masters_School). I am going to generally say that people who were able to have 150k+ spent on their high school education (to say nothing of attending Oxford at 30k GBP/year in international student tuition) might just possibly be people who have enough generational family wealth that concerns like job losses seem pretty abstract, or not something to really worry about.
It's just another in a long series of articles downplaying the risks of AI job losses which, when I dig into the authors' backgrounds, are written by people who have never known any sort of financial precarity in their lives, and who are frequently involved in AI investment in some manner.
At no point have worker rights and conditions advanced without being demanded, sometimes violently. The history of maritime safety is written in blood. The robber baron era was peppered with deadly clashes such as the Homestead Strike. As a reminder, we had a private paramilitary force for the wealthy called the Pinkerton Detective Agency (despite the name, they were hired thugs) that at its peak outnumbered the US Army.
Heck, you can go back to the Black Death when there was a labor shortage to work farms and the English Crown tried to pass laws to cap wages to avoid "gouging" by peasants for their labor.
Automation could be very good for society. It could take away menial jobs so we all benefit. But this won't happen naturally because that's essentially a wealth transfer to the poor and the wealthy just won't stand for that.
No, what's going to happen is that AI specifically, and automation in general, will be used to suppress labor wages and further transfer wealth to the already wealthy. We don't need to replace everyone for this to happen. Displacing just 5% of the workforce has a massive effect on wages. The remaining 95% aren't asking for raises, and they're doing more work for the same wages as they pick up whatever the 5% was doing.
We see this exact pattern in the permanent layoff culture in tech right now. At the top you have a handful of AI researchers who command $100M+ pay packages. The vast majority are either happy to still have a job or have been laid off, possibly multiple times, and spend a ton of time going through endless interview rounds for jobs that may not even exist.
This two-tiered society is very much in our near future (IMHO).
In the Depression you had wandering hoboes who were constantly moving, seeking temporary low-paid work and a meal. This situation was so bad we got real socialist change with the New Deal.
2008 killed the entry-level job market and it has yet to recover. That's why you see so many millennials with Masters degrees and a ton of student debt working as baristas. Covid popped the tech labor bubble, something tech companies had wanted for a long time. Did you not notice that they all started doing layoffs at about the exact same time? Even when they're massively profitable?
So the author isn't worried about job loss? Delusional. We're teetering on the edge of complete societal collapse.
What happens when you have a surplus of able bodied young people who are angry and without purpose? What's the easiest way to divert all that anger and give them purpose at the same time?
People in developing nations worked around this by emigrating.