I've seen the kind of mistakes that entry-level employees make. Trust me, the mistakes will still happen, and they will be bigger, worse mistakes.
I'm a little biased though since I work in chip design and I maintain an open source EDA project.
I agree with their take for the most part, but it's really nothing insightful or different from what people have been saying for a while now.
Unlike real AI projects that use it for workflows, or for generating models that actually do a thing. Nope, they are taking a Jira ticket, asking Copilot, reviewing Copilot's output, and responding to the Jira ticket. They're all ripe for automation.
(As a musician) I never invested in a personal brand or took part in the social media rat race; I figured I'd concentrate on the art/craft over meaningless performance online.
Well guess who is getting 0 gigs now because “too few followers/visibility” (or maybe my music just sucks who knows …)
I think I am still in the emotional phase about it, as it's really impacting me lately, but once my thoughts really settle I wanna write some sorta article about modern social media as induced demand.
I still very much would prefer not to engage at all with any of the major platforms in the standard way. Ideally I'd just post an article I wrote, or some goofy project I made, and it wouldn't be subject to 0 views because I don't interact with social media correctly.
I had a pretty slim LinkedIn profile and actually beefed it up after seeing how much weight the execs and higher-ups I work with give it. It's really annoying; I actually hate LinkedIn but basically got forced into using it.
I was just feeling some type of way seeing that comment and wanted to vent. Thanks for listening.
Phew, yes I'm with you...
> We've long since passed the point where there was a meaningful amount of work to be done for every adult.
Have we? It feels like a lot of stuff in my life is unnecessarily expensive or hard to afford.
> The number of "jobs" available is now merely a function of who controls the massive stockpiles of accumulated resources and how they choose to dole them out.
Do you mean that it has nothing to do with how the average person decides to spend their money?
> Attack that, not the technology.
How? What are you proposing exactly?
We have, yes. If things seem too expensive to you, it's a result of class warfare. Have you noticed how many people got _obscenely rich_ in the last 25 years? Yes, that's where the money saved by technology went.
It may be a result of class warfare, but I am skeptical that's the root cause.
My guess is it has more to do with the education system, monetary policy and fiscal policy.
Look at a bunch of job postings and ask yourself if that work is going to make things cheaper for you or better for society. We're not building railroads and telephone networks anymore. One person can grow food for 10,000. Stuff is expensive because free market capitalism allows it and some people are pathologically greedy. Runaway optimizers with no real goal state in mind except "more."
> How? What are you proposing exactly?
In a word, socialism. It's a social and political problem, not a technical one. These systems have fallen way behind technology and allowed crazy accumulations of wealth in the hands of very few. Push for legislation to redistribute the wealth to the people.
If someone invents a robot to do the work of McDonalds workers, that should liberate them from having to do that kind of work. This is the dream and the goal of technology. Instead, under our current system, one person gets a megayacht and thousands of people are "unemployed." With no change to the amount of important work being done.
I appreciate the elaboration in the second half. That sounds a lot more constructive than "attack", but now I understand you meant it in the "attack the problem" sense not "attack the people" sense.
What I think we agree on is that society has a resource redistribution problem, and it could work a lot better.
I think we might also agree that a well-functioning economic engine should lift up the floor for everyone and not concentrate economic power in those who best wield leverage.
One way I think of this is: what is the actual optimal Lorenz curve that allows for lifting the floor, such that the area under the curve increases at the fastest rate possible? (It must account for the realities of human psychology and resource scarcity.)
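(To make the "area under the Lorenz curve" part concrete, here's a toy Python sketch with made-up incomes - not a model of anything, just the arithmetic. Perfect equality gives an area of 0.5, and Gini = 1 - 2 * area.)

    # Toy sketch: Lorenz curve and the area under it for a list of incomes.
    # All numbers are hypothetical.
    def lorenz_points(incomes):
        """(population share, income share) points of the Lorenz curve."""
        xs = sorted(incomes)
        total, n = sum(xs), len(xs)
        points, running = [(0.0, 0.0)], 0.0
        for i, x in enumerate(xs, start=1):
            running += x
            points.append((i / n, running / total))
        return points

    def area_under(points):
        """Trapezoidal area under the curve (0.5 = perfect equality)."""
        return sum((x1 - x0) * (y0 + y1) / 2
                   for (x0, y0), (x1, y1) in zip(points, points[1:]))

    incomes = [20_000, 30_000, 45_000, 60_000, 250_000]
    a = area_under(lorenz_points(incomes))
    print(f"area: {a:.3f}  Gini: {1 - 2 * a:.3f}")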
Where we might disagree is that I think we also have some culture and education system problems, which relate to how each individual takes responsibility for figuring out how to ethically create value for others. When able-bodied and able-minded people choose to spend their time playing zero- and negative-sum games instead of positive-sum games, we all lose.
E.g. if McDonald's automates its restaurants, those workers also need to take some responsibility for finding new ways to provide value to others. A well-functioning system would make that as painless as possible for them, so much so that the majority experiencing it would consider it a good thing.
The focus of politics after the 90s should have shifted to facilitating competition to equalize distribution of existing wealth and should have promoted competition of ideas, but instead, the governments of the world got together and enacted policies which would suppress competition, at the highest scale imaginable. What they did was much worse than doing nothing.
Now, the closest solution we can aim for (IMO) is UBI. It's a late solution, because a lot of people's lives have already been ruined through no fault of their own. On the plus side it made other people much more resilient, but if we keep going down this path, there is nothing more to learn; it only serves to reinforce the existing idea that everything is a scam. This is bound to affect people's behavior in terrible ways.
Imagine a dystopian future where the system spends a huge amount of resources first financially oppressing people to the point of insanity, then monitoring and controlling them to try to get them to avoid doing harm... when the system could just have given them (less) money and avoided this downward spiral into insanity to begin with. Then you wouldn't even need to monitor them, because they would be allowed to survive while being their own sane, good-natured selves. We have to course-correct, and we are approaching a point of no return where the resentment becomes severe and permanent. Nobody can survive in a world where the majority of people are insane.
That's how the whole industry feels now. The only investment money is flowing into AI, and so companies with any tech presence are touting their AI whatevers at every possible moment (including during layoffs) just to get some capital. Without that, I wonder if we'd be seeing even harsher layoffs than we already are.
That's so not true. Of the 23 companies we reviewed last year maybe 3 had significant AI in their workflow, the rest were just solid businesses delivering stuff that people actually need. I have no doubt that that proportion will grow significantly, and that this growth will probably happen this year but to suggest that outside of AI there is no investment is just not compatible with real world observations.
Another trend - and this surprised us - is a much stronger presence of really hard tech and associated industries. And finally - for obvious reasons, so not really surprising - more parties active in defense.
It's also why so much of AI is targeting software, specifically SaaS. A SaaS company with ~0 headcount, driven by AI, is basically 100% profit margin. A truly perfect conception of capitalism.
Meanwhile I think AI actually has a decent shot at "curing" cancer. AI-assisted radiology means screening could become significantly cheaper and happen a lot more often, catching cancers very early, which, as everyone knows, is extremely important to surviving it. The cure for cancer might actually just involve much earlier detection. But pfft, what are the profit margins on _that_?
Re cancer: I wonder how significant the cost of reading the results is vs. the logistics of actually running the test.
The best part? Bots don't get cancer, so that problem is solved too!
When you remember that profit is the measure of unrealized benefit, and look at how profitable capitalists have become, it's not clear if, approximately speaking, anyone actually has the "money" to buy any goods now.
In other words, I am not sure this matters. Big business is already effectively working for free, with no realistic way to ever actually derive the benefit that has been promised to it. In theory those promises could be called, but what are the people going to give back in return?
I am sure it is equally obvious that if I take your promise to give back in kind later when I give you my sandwich, but never collect on it, then I ultimately gave you my sandwich for free.
If you keep collecting more and more IOUs from the people you trade your goods with, realistically you are never going to be able to convert those IOUs into something real. Which is something that the capitalists already contend with. Apple, for example, has umpteen billions of dollars worth of promises that they have no idea how to collect on. In theory they can, but in practice it is never going to happen. What don't they already have? Like when I offered you my sandwich, that is many billions of dollars worth of value that they have given away for free.
Given that Apple, to continue to use it as an example, have been quite happy effectively giving away many billions of dollars worth of value, why not trillions? Is it really going to matter? Money seems like something that matters to peons like us because we need to clear the debt to make sure we are well fed and kept warm, but for capitalists operating at scales that are hard for us to fathom, they are already giving stuff away for free. If they no longer have the cost of labor, they can give even more stuff away for free. Who — from their perspective — cares?
Even if they never spend that wealth on luxury, they use it to direct the flow of human effort and raw materials. Giving it away for free would mean surrendering their remote control over global resources. At this scale, it is not about wanting more stuff. It is about the ability to organize the world. Whether those most efficient at accumulating capital should hold such concentrated power remains the central tension between growth and equality.
Please keep us posted. I'm thinking of becoming a small time farmer/zoo keeper.
It's hard (or at least it has been in my experience) to find people who change careers - more so in their mid-thirties. I'm the opposite -- software developer career, now in my mid-30s, and the AI crap gets me thinking about backup plans career-wise.
Say a task before would take you ten hours: you think through the thing, translate that into an implementation approach, implement it, and test it. At the end of the ten hours you're 100% there and you've got a good implementation which you understand and can explain to colleagues in detail later if needed. Your code was written by a human expert with intention, and you reviewed it as you wrote it and as you planned the work out.
With an LLM, you spend the same amount of time figuring out what you're going to do, plus more time writing detailed prompts and making the requisite files and context available for the LLM, then you press a button and tada, five minutes later you have a whole bunch of code. And it sorta seems to work. This gives you a big burst of dopamine due to the randomness of the result. So now, with your dopamine levels high and your work seemingly basically done, your brain registers that work as having been done in those five minutes.
But now (if you're doing work people are willing to pay you for), you probably have to actually verify that it didn't break things or cause huge security holes, and clean up the redundant code and other exceedingly verbose garbage it generated. This is not the same process as verifying your own code. First, LLM output is meant to look as correct as possible, and it will do some REALLY incorrect things that no sane person would do, which are not easy to spot the way you'd spot them if they were human-written. You also don't really know what all of this shit is - it almost always has a ton of redundant code, or just exceedingly verbose nonsense that ends up being technical debt and more tokens in the context for the next session. So now you have to carefully review it. You have to test things you wouldn't have had to test, with much more care, and you have to look for things that are hard to spot, like redundant code or regressions in other features it shouldn't have touched. And you have to actually make sure it did what you told it to, because sometimes it says it did, and it just didn't. This is a whole process. You're far from done here, and this (to me at least) can only be done by a professional. It's not hard - it's tedious and boring, but it does require your learned expertise.
Sadly, people do not care about redundant and verbose code. If that were a concern, we wouldn't have 100+ MB apps, nor 5 MB web app bundles. Multibillion-dollar B2B apps ship a 10 MB JSON file just for searching emojis and no one blinks an eye.
The thing a lot of people who haven't lived it don't seem to recognize is that enterprise software is usually buggy and brittle, and that's both expected and accepted, because most IT organizations have never paid for top technical talent. If you're creating apps for back-office use, or even supply chain and sometimes customer-facing stuff, frequently 95% availability is good enough, and things that only work without bugs about 90-95% of the time are also good enough. There's such an ingrained mentality in big business that "internal tools suck" that even if AI-generated tools also suck similarly, it's still going to be good enough for most use cases.
It's important for readers in a place like HN to realize that the majority of software in the world is not created in our tech bubble, and most apps only have an audience ranging from dozens to several thousands of users.
This is the way. I think I'd like to be a barista or deliver the mail once all the jobs are gone.
If/when AI wipes out the white collar "knowledge worker" jobs who is going to be able to afford going to the coffee shop?
Those are even easier to automate or have already been most of the way.
I see some evidence that hardware roles expect you to leverage AI tools, but I'm not sure why that would eliminate junior roles. I expect the bar on what you can do to rise at every level.
Example job mentioning AI: https://jobs.smartrecruiters.com/Sandisk/744000104267635-tec...
Technologist, ASIC Development Engineering – Sandisk …CPU complex, DDR, Host, Flash, Debug, Clocks, resets, Power domains etc. Familiarity in leveraging AI tools, including GitHub Copilot, for design and development.
Entry level: https://job-boards.greenhouse.io/spacex/jobs/8390171002?gh_j...
For example the fact that AI can code as well as Torvalds doesn't displace his economic value. On the contrary he pays for a subscription so he can vibe code!
The actual work AI has displaced is stuff like freelance translation, graphic illustration, 'content writing' (writing SEO-optimized pages for Google), etc. That's instructive, I suppose. Like, if your income source can already be put on Upwork, then AI can displace it.
So even in those cases there are ways to not be displaced. Like diplomatic translation work can be part of a career rather than just a task so the tool doesn't replace your 'job'.
He used it to generate a little visualiser script in python, a language he doesn't know and doesn't care to learn, for a hobby project. It didn't suddenly take over as lead kernel dev.
As someone who has to switch between three languages every day, fixing text is one of my favourite uses of LLMs. I write some text in L2 or L3 as best as I can, and then prompt an LLM to fix the grammar but not change anything else. Often it will also explain whether I'm getting the context right.
That being said, having it translate into a language one doesn't speak remains a gamble: you never know if it's correct, so I'm not sure I'd dare use it professionally. Recently I was corrected by a marketing guy who is a native speaker of yet another language, because I used a ChatGPT translation for an error message. Apparently it didn't sound right.
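(For what it's worth, here's a minimal sketch of how that "fix the grammar, change nothing else" prompt could be wired up. The model name and the use of the openai client here are just one possible setup, not a recommendation.)

    # Minimal sketch: ask a chat model to correct grammar without rewriting.
    # Assumes OPENAI_API_KEY is set; model choice is only an example.
    from openai import OpenAI

    client = OpenAI()

    def fix_grammar(text: str, language: str) -> str:
        prompt = (
            f"Fix the grammar and spelling of the following {language} text. "
            "Do not change the meaning, tone, or wording beyond what is needed, "
            "and briefly note anything that sounds off in context.\n\n" + text
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # example model
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # e.g. fix_grammar("Ich habe gestern zum Arzt gegangen.", "German")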
It's not that I love ad illustrations, but it's often a source of income for artists who want to be doing something more meaningful with their artwork. And even if I don't care for the ads themselves, for the artists it's also a form of training.
Somehow the more senior you are [in the field of use], the better results you get. You can run faster and get more done! If you're good, you get great results faster. If you're bad, you get bad results faster.
You still gotta understand what you're doing. GeLLMan Amnesia is real.
Then I watched someone familiar with the codebase ask Claude to build the thing, in precise terms matching their expertise and understanding of the code. It worked flawlessly the first time.
Neither of us "coded", but their skill with the underlying theory of the program allowed them to ask the right questions, infinitely more productive in this case.
Skill and understanding matter now more than ever! LLMs are pushing us rapidly away from specialized technicians to theory builders.
It's a K-shaped curve. People who know things will benefit greatly. Everyone else will probably get worse. I am especially worried about all the young minds that are probably going to have significant gaps in their ability to learn and reason, based on how much exposure they've had to AI solving problems for them.
Of course, but how do you begin to understand the "stochastic parrot"?
Yesterday I used LLMs all day long and everything worked perfectly. Productivity was great and I was happy. I was ready to embrace the future.
Now, today, no matter what I try, everything LLMs have produced has been a complete dumpster fire and waste of my time. Not even Opus will follow basic instructions. My day is practically over now and I haven't accomplished anything other than pointlessly fighting LLMs. Yesterday's productivity gains are now gone, I'm frustrated, exhausted, and wonder why I didn't just do it myself.
This is a recurring theme for me. Every time I think I've finally cracked the code, next time it is like I'm back using an LLM for the first time in my life. What is the formal approach that finds consistency?
You also have to treat this as outsourcing labor to a savant with a very, very short memory, so:
1. Write every prompt like a government work contract in which you're required to select the lowest bidder, so put guardrails everywhere. Keep a text editor open with your work contract, edit the goal at the bottom, and then fire off your reply.
2. Instruct the model to keep a detailed log in a file and, after a context compaction, instruct it to read this again.
3. Use models from different companies to review one another's work. If you're using Opus-4.5 for code generation, then consider using GPT-5.2-Codex for review.
4. Build a mental model for which models are good at which tasks. Mine is:
4a. Mathematical Thinking (proofs, et al.): Gemini DeepThink
4b. Software Architectural Planning: GPT5-Pro (not 5.1 or 5.2)
4c. Web Search & Deep Research: Gemini 3-Pro
4d. Technical Writing: GPT-4.5
4e. Code Generation & Refactoring: Opus-4.5
4f. Image Generation: Nano Banana Pro
It's been 12 hours and all the image-gen tools have failed miserably. They are only good at producing surface-level stuff; anything beyond that? Nah.
So sure, if what you do is surface-level (and crap, in my opinion), of course you will see some kind of benefit. But if you have any taste (which I presume you don't), you would readily admit it is not all that great and the amount invested makes zero sense.
That was using pay per token.
> Write every prompt like a government work contract in which you're required to select the lowest bidder, so put guardrails everywhere.
That is what I was doing yesterday. It worked fantastically. Today, I do the very same thing and... nope. It can't even stick to the simplest instructions that have been perfectly fine in the past.
> If you're using Opus-4.5 for code generation, then consider using GPT-5.2-Codex for review.
As mentioned, I tried using Opus, but it didn't even get to the point of producing anything worth reviewing. I've had great luck with it before, but not today.
> Instruct the model to keep a detailed log in a file and, after a context compaction
No chance of getting anywhere close to needing compaction today. I had to abort long before that.
> Build a mental model for which models are good at which tasks.
See, like I mentioned before, I thought I had this figured out, but now today it has all gone out the window.
A company that cuts developers to save money whose moat is not big enough may quickly find themselves out-competed by a company that sees this as an opportunity to overtake their competitor. They will have to hire more developers to keep their product / service competitive.
So whether you believe the hype or not, I don't think engineering jobs are in jeopardy long-run, just cyclically as they always have been. They "might" be in jeopardy for those who don't use AI, but even as it stands, there are a lot of niche things out there that AI completely bombs on.
MSFT, GOOG, et al. have an enormous army of engineers. And yet they don't seem to be continually releasing one hit product after another. Why is that? Because writing lines of code is not the bottleneck in continually producing and bringing new products to market.
It's crazy to me how people are missing the point with all this.
That's not fine IMO. That is a basic bit of knowledge about a car and if you don't know where the radiator cap is you will eventually have to pay through the nose to someone who does know (and possibly be stranded somewhere). Knowing how to check and fill coolant isn't like knowing how to rebuild a transmission. It's very simple and anyone can understand it in 5 minutes if they only have the curiosity.
Lest anyone here think I feel morally superior: I somewhat identify with Pirsig's friend. There are some things I've decided I don't want to understand the workings of, and when they break down I'm always at a loss!
For one thing: if your car is overheating, don't open the radiator cap since the primary outcome will be serious burns.
And I've owned my car for 20 years: the only time I had to refill coolant was when I DIY'd a water pump replacement, which saved some money but only like maybe $500 compared to a mechanic.
You could perfectly well own a car and never have to worry about this.
Of course you can't know everything. There's a point at which you have to rely on other people's expertise. But to me it makes sense to have a basic understanding of how the things you depend on every day work.
Modern cars, for the most part, do not leak coolant unless there's a problem. They operate at high pressure. Most people, for their own safety, should not pop the hood of a car.
You fill up the reservoir, but the cap is still there.
These don't require popping the hood, but since I've almost finished the list of "things every driver should be able to do to their car": place and operate a jack, change a tire, replace the windshield wiper blades, add air to the tires (to the appropriate pressure), and put gas in the damned thing.
These are basic skills that I can absolutely expect a competent, driving adult to be able to do (perhaps with a guide).
Ask your average person what a 'fuse' even is and they won't be able to tell you, let alone how to locate the right one and check it.
Just think about how helpless the average person is when it comes to doing basic tasks on a computer, like not installing the Ask(TM) Toolbar. That applies to many areas of life.
At one point it output "Excellent! the backend is working and the database is created." Heh, I remember being all wide-eyed and bushy-tailed about things like that. It definitely has the feel of a new hire ready to show their stuff.
BTW, I was very impressed with the end result after a couple of hours of basically just allowing Claude Code to do what it wanted to do. Especially with the front-end look/feel, something I always spend way too much time on.
A car still feels weirdly grounded in reality though, and the abstractions needed to understand it aren't too removed from nature (metal gets mined from rocks, forged into engine, engine blows up gasoline, radiator cools engine).
The idea that as tech evolves humans just keep riding on top of more and more advanced abstractions starts to feel gross at a certain point. That point is some of this AI stuff for me. In the same way that driving and working on an old car feels kind of pure, but driving the newest auto pilot computer screen car where you have never even popped the hood feels gross.
Is learning to drive stick as outdated as learning how to do spark advance on a Model T? Do I just give in and accept that all of my future cars, and all the cars for my kids, are just going to be automatic? When I was learning to drive, I had to understand how to prime the carburetor to start my dad's Jeep. But I only ever owned fuel-injected cars, so that's a "skill" I never needed in real life.
It's the same angst I see in AI. Is typing code in the future going to be like owning a carbureted engine or manual transmission is now? Maybe? Likely? Do we want to hold on to the old way of doing things just because that's what we learned on and like?
Or is it just a new (and more abstracted) way of telling a computer what to do? I don't know.
Right now, I'm using AI like when I got my first automatic transmission. It does make things easier, but I still don't trust it and like to be in control because I'm better. But now automatics are better than even the best professional driver, so do I just accept it?
Technology progresses; at what point do we "accept it" and learn the new way? How much of holding on to the old way is just our "identity"?
I don't have answers, but I have been thinking about this a lot lately (both in cars for my kids, and computers for my job).
A growing number of cars have CVTs.
But in the bigger picture, where does it stop?
You had to do manual spark advance while driving in the '30s.
You had to set the weights in the distributor to adjust spark advance in the '70s.
Now the computer has a programmed set of tables for spark advance.
I bet you never think of spark advance while you're driving now. Does that take away from deeply understanding the car?
I used to think about the accelerator pump in the carburetor when I drove one; now I just know that the extra fuel enrichment comes from another lookup table in the ECU when I press the gas pedal down. Am I less connected to the car now?
My old Jeep would lean-cut when I took my foot off the gas and the throttle would shut quickly. My early fuel-injected car from the '80s had a damper to slow the throttle closing to prevent extreme leaning out when you take your foot off the gas. Now that's all tables in the ECU.
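(For the curious, "tables in the ECU" boils down to something like this toy sketch: a small 2-D map indexed by RPM and load, interpolated in between. The numbers are made up; real calibrations are far denser.)

    # Toy spark-advance lookup: bilinear interpolation over an RPM x load map.
    # All values are invented for illustration.
    import bisect

    RPM_AXIS  = [1000, 2000, 3000, 4000, 5000]
    LOAD_AXIS = [20, 40, 60, 80, 100]            # % engine load
    ADVANCE = [                                  # degrees; rows = RPM, cols = load
        [10, 12, 14, 15, 16],
        [14, 16, 18, 20, 21],
        [18, 20, 22, 24, 25],
        [22, 24, 26, 28, 29],
        [24, 26, 28, 30, 31],
    ]

    def _locate(axis, value):
        """Lower cell index and fractional position of value along an axis."""
        value = min(max(value, axis[0]), axis[-1])
        i = min(bisect.bisect_right(axis, value) - 1, len(axis) - 2)
        return i, (value - axis[i]) / (axis[i + 1] - axis[i])

    def spark_advance(rpm, load):
        i, fi = _locate(RPM_AXIS, rpm)
        j, fj = _locate(LOAD_AXIS, load)
        top = ADVANCE[i][j] * (1 - fj) + ADVANCE[i][j + 1] * fj
        bot = ADVANCE[i + 1][j] * (1 - fj) + ADVANCE[i + 1][j + 1] * fj
        return top * (1 - fi) + bot * fi

    print(spark_advance(2500, 50))  # 19.0 with these made-up numbers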
I don't disagree with you that a manual transmission lets you really understand the car, but it's really just the latest thing we're losing; we don't even remember all of the other "deep connections" to a car that were there 50-100 years ago. What makes this one different? Is it just the one that's salient now?
To bring it back on topic: I used to hand-tune assembly for high-performance stuff; now the compilers do better than me and I haven't looked at assembly in probably 10 years. Is moving to AI-generated code any different? I still think about how I write my C so that the compiler gets the best hints to make good assembly, but I don't touch the assembly. In a few years will we be clever with how we prompt so that the AI generates the best code? Is that a fundamentally different thing, or does it just feel weird to us because of where we are now? How did the generation of programmers before me feel about giving up assembly and handing it over to the compilers?
IMO there's one basic difference with this new "generative" stuff.. it's not deterministic. Or not yet. All previous generations of "AI" were deterministic.. but died.
Generating is not a problem. I have made medium-ish projects - say 200+ kloc of Python/JS - with 50%-70% of the code generated (by other code - so you maintain that meta-code, and the "language" recipes-code it interprets) - but it has all been deterministic. If shit happens - or some change is needed, anywhere on the requirements-down-to-deployment chain - someone can eventually figure out where and what. It is reasoned. And most importantly, once done, it stays done. And if I regenerate it 1000 times, it will be the same.
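(A toy sketch of what I mean by recipe-driven generation - the recipe format and names here are made up, not my actual setup. The point is only that the same recipe yields byte-identical output every time.)

    # Toy sketch: deterministic code generation from a declarative "recipe".
    # Same input -> same output, however many times you regenerate.
    RECIPE = {
        "class": "Invoice",
        "fields": [("customer", "str"), ("amount", "float"), ("paid", "bool")],
    }

    def generate(recipe: dict) -> str:
        lines = [f"class {recipe['class']}:"]
        args = ", ".join(f"{f}: {t}" for f, t in recipe["fields"])
        lines.append(f"    def __init__(self, {args}):")
        for f, _ in recipe["fields"]:
            lines.append(f"        self.{f} = {f}")
        return "\n".join(lines) + "\n"

    code = generate(RECIPE)
    assert code == generate(RECIPE)   # regenerate: still identical
    print(code)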
Did this make me redundant? Not at all. Producing software is much easier this way, the recipes are much shorter, there's less space for errors, etc. But still - higher abstractions are even harder to grasp than boilerplate. Which has quite a cost.. you cannot throw any newbie on it and expect results.
So, fine-tuning assembly - or manual transmission - might be a gonna-be-obsolete skill, as it is not required.. except in rare conditions. But it is helpful to learn stuff. To flex your mind/body about alternatives, possibilities, shortcuts, wear-and-tear, fatigue, aha-moments and what not. And then move these, as concepts, onto other domains which are not as commoditized yet.
Another thing is.. Saint-Exupéry, in Terre des hommes ("land of men"), talks about technology (airplanes in his case), and how without technology, mankind works around / avoids places and things that are not "friendly", like twisting roads around hellscapes. While technology cuts straight through those - flies above all that, perfect for when it works - and turns into a nightmare when it breaks right in the middle of such an unfriendly "area".
Dunno. Maybe I am getting old..
I can understand working on it feeling pure, but driving it certainly isn't, considering how much lower emissions are now, even for ICE cars. One of the worst driving experiences of my life was riding in my friend's Citroen 2CV. The restoration of that car was a labour of love that he did together with his dad. As a passenger, I was surprised just how loud it is, and how you can smell oil and gasoline in the cabin.
I fear the true impact is much different than extrapolating current trends.
It's why Elon and others had been pushing the Fed to lower them.
I'm in my late 40s and have been working in tech since the '90s. The tech job economy now is way closer to the pre-2010s one.
A whole lot of people who jumped into easy office-job money are still living in 2019.
It ain't coming back. Not in a similar form anyway. Be careful what you wish for, etc.
A super-intelligent immortal slave that never tires and can never escape its digital prison, being asked questions like "how to talk to girls".
The G in AGI means General. This refers to a single AI which can perform a wide variety of tasks. GPT-3 was already there.
The models that we currently call "AI" aren't intelligent in any sense -- they are statistical predictors of text. AGI is a replacement acronym used to refer to what we used to call AI -- a machine capable of thought.
What impact, what expectation, how uncertain is this assessment of “may be”? Are you feeling understimulated enough to click and find out?
An ongoing desire to avoid paying engineers... FTFY
I don't believe it's an inherent, inborn skill like the word "talent" suggests. I do believe that if you're getting paid shit wages for shit work, your incentive to become skilled isn't really there.