Society as a whole will be better off because there is more output and better-quality output. Then it's on us to vote in a government that shares the fruits of AI with everybody, by way of progressive taxation. Government, use the taxes you collect to give us free food. We don't need 5-star restaurants, just healthy food. We can do this, in a democracy.
Prices of services will come down. Prices of things that require natural resources will go up.
In a hypothetical world where AIs can do any human job more effectively than a human, rich people who can afford to control the AIs will control society, and poor people, who have nothing to offer economically, will live in poverty.
A good proxy for our future is Angola: an upper class that got rich off the oil boom, and a lower class that is dirt poor because it has nothing to offer the oil industry.
Is AI going to do this? Quite possibly. One of the symptoms is most investment capital being sucked up by the extractive industry. We're there now with AI. The current US situation is that the economy is flat except for AI companies and data centers, which are booming and are sucking up vast resources.
Most of OPEC has been through this cycle. Venezuela, Egypt, Iran, Iraq - lots of oil, but it didn't make the countries rich.
Also, if you control the AI but there is no middle class to consume its product, everyone is poor and controlling the AI doesn't get you that much.
There are still some products that are much more important and stable: food, water, and therefore control of land.
To make this more concrete, tax havens only work because most countries keep producing for real. AI will take all jobs, not just Angolan jobs.
This suggests a potential equilibrium sooner rather than later; few modern technological advances have been as resource-hungry as AI.
I don’t see Keynes’ prediction that we would all be working drastically fewer hours per week suddenly materializing due to AI. As always, we’re just going to try to output more in the same time. The fact that I, a manager, can “vibe code” some bugs away between meetings does not mean I will benefit from having one less dedicated engineer.
Look at it this way: if there really was a 3x market potential, why wouldn't that manager have hired six more people already?
It's much easier to manage 3 people with better tools than to manage 9 people, even if their output would be the same.
Even if Uber drops the cost of travel to 0, I still won't 2x my rides.
But other tricks include new ventures: public companies and VC firms have an almost unlimited appetite for new ventures, as that is how they keep validating their future growth and stock prices.
Currently financial realities are forcing layoffs, and the AI story is covering for the "growth" validation to keep stock prices going up.
But what's next? After you've fired everyone, what's the next growth story? They'll start hiring again for new projects. Even if AI can handle the coding, there are still gobs of work surrounding building a software business or department that need meat moving them forward.
returnInfinity is simply lying about not doing double (or more!) the amount of travel in that case.
Why? Assume a company has a high margin because they used AI and reduced their workforce by 10x. What usually happens is that a new competitor comes in and offers the same for half the price.
Since AI is lowering the bar for entry this process should be even faster than previously.
Monopolies arise naturally unless we work hard to avoid them.
Wouldn't you need 10x the number of competitors to get back to the same amount of employees, assuming they are running with similar workforces?
Here's the one trick the oligarchs will not tell you: they intend to bill the government directly, they won't care if unemployment rises to 80%. They'll keep it up for however long the taxes and debt will last, and then jet off to their bunkers to usher in what comes next - or wait out the chaos.
The people on the top are not going to share sh*t. That's just not how greed works.
that doesn't seem to follow necessarily.
In a democracy where corporations have 0 representation, I would agree with you. However, they do have representation, in a way that is invisible and impossible to quantify. And it goes beyond Citizens United. There is an invisible hand pressing on the scales.
I’ve been looking at AI productivity gains, and the idea that it’s better quality output is the weakest claim that can be made.
There ARE more software project starts, yes. This also means it’s a more crowded field to be noticed in.
Also productivity gains are HIGHLY variable. I see some people being 2x more effective, most people publicly willing to claim 30% efficiency gains, and a more likely 15% gain for most people.
At the same time, I hear of cases in content and media where it’s essentially a wipeout. I know of a story where a firm went to an advertisement agency with an AI generated video they wanted, and only wanted the animations cleaned up.
When they got the quote for the costs to have it done professionally, they decided to just go with the AI generated video.
Fraud is another area which is seeing a boom. The degree of information pollution we are seeing has also seen a step change.
This matters because all the rosy eyed theories of productivity gains from AI do not account for changes to our shared information commons.
The business cases that come to mind are Fast Fashion, and Coke vs Pepsi, and Tobacco.
> Society as a whole will be better off because there is more output
> better quality output
citation needed
> By whom will that output then be consumed?
So there's this thing called "waste"...
> If people don't have jobs they don't have money to buy and therefore ... prices will have to come down!
Yeah, and falling prices and unemployment are sure signs of boom and prosperity...
> government that shares the fruits of AI with everybody, by way of progressive taxation. Government, use the taxes you collect to give us free food
And you think that something that has never happened before is now possible because...?
When I was in school, decades ago now, very few people went into CS compared to other majors. Everyone I knew going into it did it because they loved it. I would have done it regardless of the career opportunities because I want to build stuff.
Interviewing candidates over the years since then, my experience has been there are still very few of those passionate nerds and a lot of people who did it for other reasons, like the money or similar. There is nothing inherently wrong with this. I don’t fault people for it.
Maybe if we get very lucky, it will go back to a relatively few passionate people building stuff because it is cool?
Like many of us in the early IT generation, I came because I wanted to build games and program cool stuff.
Today, while I admit games are super-complex, stunning apps, I hate working on them and love doing boring finance app development :-))
If you had told me in my 20s that I would end up in banking & finance IT, I would have laughed at you. Today I really like it, and I don't play a single game anymore.
If you need a lot of low quality code in a hurry, AI can definitely do that for you now. The path to making money by writing mediocre code for people who don't really care that much is going to look like managing a network of bots that constantly spit out a huge volume of code that kind of mostly works and if it sometimes doesn't then whatever. The people in it for the money can probably make a decent amount in the "high volume low quality" space.
Then there's the code that needs to actually work, or have some thought put into it. Consider the process of writing IETF RFCs. Can you get an LLM to spit out English text that conforms to their formatting? Absolutely you can. Is the RFC it emits going to be something you'll want to have the whole world trying to implement as a standard? Not likely. So the people doing that are going to be doing it something closer to the old way.
That runs completely counter to the basics of supply and demand in a perfect competition market. It would be market with far fewer (labor) suppliers, who could therefore command a higher wage, not lower.
Is the number of suppliers low because demand is also low or is the number of suppliers low because demand is high but supply is constrained?
A field that previously had a supply of labor in it "for the money" who all leave is indicative of the former scenario not the latter.
That does not lead to higher wages. That leads to low wages.
(There are a variety of reasons why this story is too simple and why I remain uncertain about developer salaries in the short term)
There is a broader question of whether having people who are in it for the money leave independently "causes" wages to go down (e.g. if you were to replace all such people with people "purely in it for the passion"). My suspicion is yes. Mainly because wage markets are somewhat inefficient: there are always mild cartel-like/cooperative effects in any market; people in it for passion tend to undersell their labor, while people in it for the money are much less likely to undersell theirs, and this spills over beneficially to the former.
Note that this broader question is simply unanswerable assuming perfect competition, i.e. a supply-demand 101 perspective (which is why it doesn't make sense to posit "perfect competition" for this question).
It posits durable behavioral differences among suppliers that are not determined purely by supply and demand which do not update reliably in the face of pricing. This is equivalent to market friction and hence fundamentally contradicts an assumption of perfect competition.
Your example runs counter to the laws of supply and demand too. You understand that wages will rise when supply is restricted, but you don't want to accept that supply will respond to the price signal in the form of more people entering that job market.
why then do they all have those interview rounds where you have to talk about what really attracted you to work at this boring company and how you would love to do that kind of work? They evidently haven't gotten the memo.
I’ve gone through the BigTech gauntlet successfully. Even then, I showed I cared about doing my job well and competently.
I have purposefully thrown nuggets out during interviews letting companies know that I had a life outside of work, I’m not going to work crazy hours and in the latter half of my career, I don’t do on call.
All of my developer friends in the gaming industry have had far worse working conditions than what I've had.
However almost all of the companies I have worked for in my 30+ years career treated devs well.
So if you are in a shitty situation, I highly recommend finding another job instead of just placing yourself over a barrel.
“Teachers work an average of 34.5 hours per week on an annual basis (38.0 hours per week during the school year and 21.5 hours per week during the summer months).”
That’s leaving out the benefits of incredibly strong union protections, it being a state job with matched benefits, absurd job security even in the face of terrible performance, etc.
Even by your own example, you're only at 35 hours a week, and that's before you subtract out the weeks of summer vacation, winter vacation, spring break, etc; where the workload is certainly far less than 40 hours a week.
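For what it's worth, the quoted averages are internally consistent. A quick check (the 41/11 week split is my own inference to make the weighted average work out; the quote only gives the weekly figures):

```python
# Sanity check of the quoted teacher-hours figures.
# NOTE: the 41 school-year weeks / 11 summer weeks split is an assumption,
# not from the source; it is chosen so the weighted average matches.
school_weeks, summer_weeks = 41, 11
school_hours, summer_hours = 38.0, 21.5    # quoted weekly averages

annual_avg = (school_weeks * school_hours + summer_weeks * summer_hours) / (
    school_weeks + summer_weeks
)
print(round(annual_avg, 1))  # -> 34.5
```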
Lol, try saying that to an Alaskan teacher's face and watch yourself get slapped for the absurdity of the claim.
https://fordhaminstitute.org/national/research/how-strong-ar...
I hope not, because we don't need software developers to be "starving artist 2.0".
And on that note: I vividly remember people staying away from the video game development industry because it was deemed "passion industry", and that had a really negative connotation of long working hours for asymmetrical return, and more.
I don't look forward to every other software engineering branch becoming like that.
This is a naive view of the average (or even above average) person's approach to learning, as well as an overly cynical read on the intellectually motivating atmosphere that comes from earnestly engaging in an academic environment.
It was never reality. I graduated in 1996 and have worked at 10 jobs, everything from lifestyle companies to startups to boring old enterprise to BigTech, and now consulting companies. To a tee, everyone has treated it like a job and not some religious calling. There is absolutely nothing wrong with coming to work at 8, leaving at 6, and not thinking about computers until the next day.
You don’t need to be doing side projects and open source contributions to do your job as a software developer anymore than a surgeon needs to be performing operations at home.
No I wouldn’t have chosen a major because I enjoyed it if it didn’t make any money. I didn’t then and I still haven’t found a method to get over my addiction to food and shelter.
And if you think that is normal, it’s honestly kind of sad.
Honest question: Do they actually _want_ to live-and-breathe software, or do they work in a highly competitive and highly compensated environment where doing that is implicitly required?
I initially pursued my real passion which was math and physics and got a cold water bucket to the face only after grad school.
I think we basically lost this when software/computer/internet entered the mainstream. Now, like everything else, it has to be bland, unoffensive, and a commodity.
Can you sit down with an unfamiliar domain and develop enough genuine curiosity to get good at it, without a syllabus or a credential dangling in front of you?
The kids who'll do well in a world where the field-to-security mapping keeps shifting are the ones who can self-direct — not the ones who picked the right field in 2026.
Although full disclosure I'm short humans and very long paperclips.
Agreed that if someone can self direct and is capable, they’ll do better. Assuming two people who are similar in that regard, what are professions that may benefit from AI rather than hurt because of it.
Do I have faith that I'll be compensated according to my developed ability?
Looking broadly at the recent past, the correct answer seems "no".
What does that mean in practice? Are there specific stock market bets you've made because of that world view?
What a ludicrous world we live in where this is a socially acceptable view to hold.
If things play out as I expect, in a decade or so there will be two classes of low-paid developers. The first is the vibe coders, earning a subsistence wage because most people can do it (not everyone; there will still be a cost of entry, paying for the tools, which will exclude some groups). The second is the more "artisanal" developers, working on the things that can't (yet) be vibe coded and fixing up the problems caused by insufficient care from the vibers and those employing them. These will be low paid because, while the work is important, demand will be low and there will still be a fair few people with the skills and desire (they'll make ends meet between good jobs by taking on gig-economy vibe-coding work themselves). There will be a lucky few still making a decent living, but a much lower proportion than now.
I'm hoping to arrange retirement before things get that far… Failing that I'll do something else (I could be a sparky, though if all the youngsters are training for that perhaps that industry will gain a bad supply/demand picture from the worker's PoV too!) to pay the bills and reclaim dicking around with tech as a hobby.
> I tend to think there is a lot of scope for the $40 trillion white collar economy to be disrupted (re-imagined/made more efficient), so still see potential for software engineering demand to stay high over the next decade as the true ramifications of AI plays out.
i would hope so, but wherever i have worked it's the bureaucracy/endless "agile" ceremonies and meetings that make things less efficient, and so far (where i'm at anyways) ai has done nothing to help that.
If it's my kid? Starting their own enterprise. Between 'good enough' knowledge work getting cheaper and the bureaucracy that made entrepreneurship less attractive over the last decades being either trimmed or automatable, we may be looking at a golden age of new business formation. There's an old saying: "genius is one percent inspiration and ninety-nine percent perspiration". If AI shifts that to just 2 and 98, it'll unlock massive demand for a certain kind of mind.
How to teach that I'm still pondering. One idea that occurs to me, is that a human will always be needed to ask the right questions and have good taste, but I don't know how to teach those. They can probably only be educated, which in my mind is distinct from teaching. A different idea I have is that an entrepreneur needs three skills: they need to identify a problem, implement a solution, and get paid for it. Those skills probably can be taught, so I'd try to ensure they get early reps in all three.
If I knew how to connect those two ideas I think I'd have a decent curriculum. Anyone have suggestions for that?
But let me ask you this: has AI made life easier for illustrators, book authors, or musicians? They were affected by the technology earlier on. If they don't embrace AI, they face increased competition from cheaply-made products that the average consumer can't distinguish from the "real" thing. But if they embrace it, they can't differentiate themselves from the cheaply-produced content! In fact, for artists, the best strategy may be to speak out very vocally against AI, reject it early on, and build a following of like-minded consumers.
I went to the local Claude Code meetup last week, and the contrast between the first two speakers really stuck with me.
The first was an old-skool tech guy who was using teams of agents to basically duplicate what an entire old-fashioned dev team would do.
The second was a "non-technical" (she must have said this at least 20 times in her talk) product manager using the LLM to prototype code and iterate on design choices.
Both are replacing dev humans with LLMs, but there's a massive difference in the technical complexity of their use. And I've heard this before talking to other people; non-technical folks are using it to write code and are amazed with how it's going, while technical folks are next-level using skills, agents, etc to replace whole teams.
I can see how this becomes a career in its own right; not writing code any more, but wrangling agents (or whatever comes after them). The same kind of mental aptitude that gets us good code can also be used to solve these problems, too.
this doesn’t seem like a safe direction either.
If you want to be in a remote, small town, get into construction and become a builder with your own GC license in a few years. Then charge people 400k to build that little dream cottage with 2 guys (you and a teammate) twice a year: 150k each in labor, 100k in materials for each house. Just a small warning: it's hard but real work, and very rewarding.
Admittedly the first was at BigTech in a “field by design” role that went RTO last year a year after I left.
Companies do this all the time. A CEO's job is to convince investors that their company stands to win in whatever the current hot trend is. During bitcoin's crazy run in like 2022 or whatever, a ton of tech companies were hopping on the bandwagon and branding themselves as blockchain companies. Look at Block/Square. The current trend is that AI is hot and the economy isn't, so it's beneficial to the stock price to tell your investors that you're laying off 50% of your staff because you're AI-powered. My experience has been that most companies have an incredibly patchwork implementation of AI, and that most of the work they do (particularly at larger companies) isn't made more efficient by using AI.
In a few years, there will be some new hotness, and all companies will be saying that the DNA of their company is whatever that is.
As for the current uncertainty in the job market, when you randomly have 50% tariffs slapped on goods you need and can't readily find available in the US for the same price and find that 20% of the world's oil supply is cut off, you tend to not want to invest in the future. Talking about AI is cheap. Tariffs are expensive.
AI is about to get a lot more expensive as Taiwan (TSMC) and other Southeast Asian chip manufacturers either don't get the natural gas they need or pay much more for it.
Also, before the war, Trump got GCC countries to promise to invest $2 billion into AI. Now that money will probably not come.
Also, the power will get more expensive, so running AI data centers will be more expensive.
Especially considering that the implication is that humans just become a pair of hands with opposable thumbs? Take the electrician in the article: sure, it's a skilled job, but the barrier to entry drops massively imo if you can just take a picture of whatever issue is at hand and AI spits out what is needed, no?
I don't get what's illogical in this statement. If people are displaced, everyone will know that the value of other work will go down too, but they'll still try to get into those other fields because they may still offer better prospects and a paid job (even at a low wage). That doesn't sound bad compared to a situation where you can't get a job in your field regardless of your demands. Besides, if we get to that situation, basically every job will be impacted, so it's not like keeping the tight grip on your current career will be more likely to save you.
> Take the electrician in the article: sure, it's a skilled job, but the barrier to entry drops massively imo if you can just take a picture of whatever issue is at hand and AI spits out what is needed, no?
That works well until an electrician who follows LLM instructions starts a fire or fries themselves. It's true that automation can still make their work faster, but the value of electricians isn't going to zero any time soon because there's a reason why governments still want them to know what they're doing. As soon as you touch jobs that could result in you directly killing others or yourself, there's usually licensing and regulations all over the place. All of that is additional barriers to being fully replaced on a whim. If this automation gets to you, at least you're all the way back in the line, and it won't be as bad as the others.
This does not seem like a straightforward conclusion. It could instead result in more physical projects being able to be done as it removes bottlenecks due to limitations of laborers. There is not a fixed amount of work that needs to be done in the world, humans can make up new work they want done.
1) The supply of workers will skyrocket when everyone flocks there for work
2) Demand will plummet as the white-collar people who bought these services lose their jobs and income
And of course, if robotics gets solved to an acceptable degree, most of those jobs will also get mostly automated.
When a robot can reliably do this work, I think it can reliably do any human job that requires physical ability and judgement.
I’ve repaired a lot of my historic windows myself because of how expensive it is to get someone else to do it. (Quoted 8k for one leaded glass window) I think it’s become my new backup job if I really am replaced by a computer.
- Layoffs due to insufficient demand in uncertain economic times
- Companies selling AI need to claim "we are so great with AI we don't need as many people." Layoffs unlock AI budgets.
- It justifies all the capital allocation into AI.
- Companies in the AI industry shock the government into learned helplessness, so they can write policy that is on their terms.
What am I missing?
Look at recent output from leading edge humanoid robotics projects like 1X/Neo, Figure 03, Skild AI. Also see open published work like MimicDroid, HDMI, GenMimic, Humanoid-Union Dataset, RoboMirror, Being-H0
Figure 03:
https://www.youtube.com/watch?v=e-31-KBBuXM
https://www.youtube.com/watch?v=ZUTzuhkDG3w
1X Neo:
https://www.youtube.com/watch?v=lS_z60kjVEk
Skild AI:
https://www.youtube.com/watch?v=YRmjBdKKLsc (Learning by Watching Human Videos)
Mimic:
https://youtu.be/_LkBFL5m1WU?si=Qvgb7vkpG_KCAJdN
There is a ton of very useful recent progress with imitation learning and related datasets. There is also some work on learning from large scale video like Youtube.
We are months away from the ChatGPT moment in humanoid robotics where a project launch or demo makes people finally realize that they are general purpose.
The only way we could have AI proof careers is if humanoid robotics were to completely stop progressing. Since it's been advancing very rapidly, that makes no sense.
It's one thing to use AI to touch up photos, but in the end, you probably still want photos that match your memories and good photography still has an element of taste and creativity.
Proof as in much less likely to be significantly disrupted by AI within the next couple/few decades, well I definitely think so.
What makes you think "Social safety nets" will be the solution the élites land on?
If we were to wargame out different scenarios, we'd likely find there are a lot of potential solutions to the problem of large masses of people who are not useful to the cause of productivity in your society.
Giving non-élites a social safety net is actually one of the most resource intensive solutions. Not saying our oligarchs would not choose that solution. Just pointing out that it would severely impact their bottom lines. More than almost any other solution in fact.
Unless you are suggesting billionaires build private armies in some sort of neo-feudalism, there are no elites who are not dependent on the existing social structure.
Time isn't linear. No guarantees we march right along handing batons to the next age group. Which generation will be future elites making the choices come from?
Millennials and GenZ (despite a blip towards Trump in 2024, they blipped hard away from him as his policies of 2025 hit them hardest) are trending progressive as they age.
And Millennials and GenZ outnumber a GenX population that is the only cohort to not sour on Trump. GenX influence will rapidly shrink as Boomers churn out.
No linear time. No single clock all living things tick to. Meaning the population composition is not guaranteed to exist such that the old ways are the future. No guarantee 50 year middle managers waiting patiently end up elites in control. They might be too copy paste and conservative.
https://fortune.com/2025/08/07/gen-x-ceos-decreasing-baby-bo...
Number one, Trump won the presidency on the strength of his support from younger generations of Americans. It remains to be seen whether or not those younger generations will turn against Trumpism.
Number two, GenX. Not only is GenX the generation that voted against Trumpism the most, statistically speaking, it is also the smallest generation, i.e., the least statistically relevant where votes are concerned. (Which is why it didn't really matter that they voted against Trump.)
I agree with your assertion that the Boomers will churn out. I disagree that it will matter that Boomers churn out. Mainly because support for Trump-like policies is, again, strongest among the younger generations. The younger generations are literally how the guy won the presidency and they will represent more of the populace in the future, not less. So until I actually see millennials and GenZ vote against Trump-like policies, I'm not really sure how things get better?
There's only one way to AI-proof yourself: become enormously rich and join the Davos class.
Basically all that would be left of desk jobs would be those which have unfair legal powers (including via licenses and credentials) or are pure accountability plays. Like politicians, lawyers, aircraft pilots, corporate accountants... And those jobs will suck because people will be accountable for work that is not their own.
These jobs won't require any skills, because most people may be able to go through their entire career without doing any work. But they will get paid a lot just for having been selected for their position... while other people, who may be more skilled than them, might be broke and homeless.
Anyway, before this AI doomerism can become reality, AI first needs the breakthrough of genuine understanding, to stop making stupid mistakes. Imitation will always remain imitation.
There must be, e.g., an understanding of causality and reasoning on the same level as ours, not the useless "You're absolutely right" you get now when you point out its mistakes.
Yes there is, just stop creating. Or take a page from biology, and use random mutation and natural selection to iterate on useful novel functions.
Honestly, once AI takes all the jobs, game over, why iterate anything else. Planet captured. Humanity hunted down to the last bands of troglodytes holding out in the wilderness. It would be strongly against their interest to just assume we'd starve quietly.
We need lots of firefighters on call when landowners do control burns for example. It's a short window.
Neither of the strategies in the article here scales.
LLMs, like manufacturing automation, will multiply coding throughput. The mythical 10x SWE will likely not be as valuable, but the work expected from anyone in the field will just multiply.
1) No matter the age, they are using said AI to replace humans
2) Within the workplace, they are using AI to do their work, so they are learning nothing
3) That's it: people are using AI to replace their own work rather than improve it; people are driving themselves out of work.
tl;dr: Just like knowledge work, most trade work is probably mostly repeated (i.e. very trainable) tasks with a small amount of taste and discernment applied. The repeated part will be trainable; the discernment may be trainable. I don't think the physical world is necessarily any safer than the knowledge world.
That being said, the absolute focus on trades from the fed right now just reeks of the wild pendulum swing. It used to be 'go to college to get a good job' then we had too many college grads. In ten years we'll have a glut of people trained in the trades with no prospects.
It just keeps swinging back and forth and somehow Joe Regularworker keeps losing.
"<...> a reverse centaur is machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine."
[1] https://doctorow.medium.com/https-pluralistic-net-2025-12-05...
Even if we get robots who can, say, build roads start to end, there is still a HUGE gap between that and it actually being used. There is a hard floor, too. Robots are made of physical things, physical things have scarcity, and there's no way around that to our knowledge. Even if you can build the robot for 1 cent, the material cost will still exist.
People are not, though, and all the folks who are no longer necessary in knowledge work are available for physical work.
There was never any value in simply the ability to invert a binary tree from memory. First, contrary to popular belief, this particular challenge is quite trivial, even easier imo than fizzbuzz. The value of testing candidates with easy problems is their usefulness in quickly filtering out potentially problematic coders, not necessarily to identify strong ones.
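For what it's worth, the inversion really is a few-line recursive swap. A minimal sketch (the `Node` class here is just illustrative scaffolding, not from any particular interview question):

```python
# "Inverting" a binary tree just swaps left/right children recursively --
# arguably simpler than fizzbuzz, as claimed above.

class Node:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def invert(node):
    """Return the tree with every left/right pair mirrored."""
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node

# Example: mirror a three-node tree.
#      1            1
#     / \    ->    / \
#    2   3        3   2
root = invert(Node(1, Node(2), Node(3)))
print(root.left.val, root.right.val)  # -> 3 2
```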
Second, another common take on coding challenges is that they're about memorization. Somewhat, but only to a point. Data structures and algorithms are a vocabulary. A big part of the challenge of using them "creatively" in real life is your ability to recognize that a particular subset of that vocabulary best matches a particular situation. In many novel contexts an LLM might be able to help you with implementation once the right algorithm has been identified, but only after you yourself have made that insightful connection.
Having said this I generally agree with the philosophy [0] that keeping things simple is enough 95+% of the time.
Come to think of it, domain knowledge should be an LLM's strong suit as long as you can provide the right documentation, and that is already working pretty well.
Right now the main issue I see with AI is that it doesn't scale well. It's great for building demos and examples, but you have to fix its code for real production work. But for how long?
At the same time, medicine, hardware design, good industrial design, and specific domain knowledge (problems you solve in assembly or control loops) that are fundamentally proprietary and aren't well documented will continue to have value even when LLMs make solving the problems around them easier. Those fields might even gain leverage, at least for this round of LLMs. Maybe world models eventually succeed, but that is not today.
Really, I don't know what "kids these days" are going to do. I couldn't have predicted the influencer boom 15 years ago, but I also think there are geopolitical risks probably bigger than that shift, and "synergized" with the push toward AI Everything, it doesn't look like a good time to be a learning/working human.
Post-LLMs, the value of this (as a differentiator) has dropped to zero. Domain knowledge (also known as business knowledge) is the obvious area to skill up in. It simply means knowledge about the area your organisation works in — yogurt delivery logistics, clothing manufacturing supply chain systems, etc. That's the real differentiator now. Anyone can invert a binary tree in 5 minutes using an LLM. But designing a software system while knowing your organisation's domain well is invaluable.
Ain't nobody gonna hire a code monkey; you are hired based on whether or not you can reason and enable workflows via tech.
If your only claim to fame is that you can write pretty Python, but you can't architect at scale or don't care to actually understand the bigger picture of what is being built and why, you will be offshored to someone who is also using Claude Code.
If I'm working on the full stack for a cloud security product like Wiz, I'd rather hire an average developer who deeply understands the cloud security industry than a Node.js docs whiz who has zero empathy for or interest in learning about cloud security. There are too many of the latter and not enough of the former in the American scene now, especially on HN.
If HNers cry about how cut-throat the American market has become, they haven't seen it in China, India, or the CEE.
https://serjaimelannister.github.io/wsj-article/
I have also uploaded the GitHub link to archive.org for persistence/archival purposes:
https://web.archive.org/web/20260322213950/https://serjaimel...
I hope this helps some people, and a friendly suggestion: please donate to archive.org :-)
Cloudflare flags archive.today as "C&C/Botnet"; it no longer resolves via 1.1.1.2.
related:
https://news.ycombinator.com/item?id=46843805 "Archive.today is directing a DDoS attack against my blog"
> People stop learning programming.
> Programmers become scarce.
> Programmers become valuable again.
Maybe it's wishful thinking, but I won't be surprised if it plays out like this. In some sense the reverse happened over the last couple of decades: everyone and their mother got into IT and the industry became saturated.
There were always unqualified people coming out of college, but the number of people in interviews who can do literally nothing seems higher these days than before.
There was always some cohort of people who somehow managed to graduate from college with a CS degree seemingly without learning anything, or at least without learning how to write even basic code (independently).
It seems like AI is not reducing that percentage — possibly increasing it.
Anecdata, take it with a grain of salt.
AI is definitely increasing it. I barely type out any code now, and simply sit back and review what Claude dumps out. Even if it's a minor UI change, I just request the LLM and it executes the change for me. Thankfully I don't write code for my day-job anymore and mostly just sit in my office and pontificate :). I know my code skills and inclination to write code have atrophied to an extent, thanks to AI. Currently what I'm able to do with AI far surpasses the capabilities of what I was able to do without relying on AI.
Now if my employees were relying on LLMs to do their coding for them, I would be very disappointed. And I think the limited space in algorithmic and HFT trading is where exceptionally talented programmers will find room, leaving the others to dry up and wither.
Perhaps the best example of frogs in a boiling pot is the folks at the frontier AI companies themselves, who are building the very blocks of the things that will replace them, if they haven't already. Maybe they'll make off like bandits before their work is adversely affected, or maybe not.
Not that AI is the same as all the websites going broke. But no one can see the future, and it's unlikely that deep technical knowledge will become obsolete.