The more I speak with fellow engineers, the more I hear that some of them are either using AI to help them code, or feeding entire projects to AI and letting it code while they do the code review and adjustments.
I didn't want to believe it, but I think it's here. And even objections like "feeding it proprietary code" will eventually be solved by companies hosting their own isolated LLMs, as models get better and hardware becomes more available.
My prediction is that junior-to-mid-level software engineering will mostly disappear, while senior engineers transition to being more of a guiding hand for LLM output, until eventually LLMs become so good that senior people won't be needed anymore.
So, fellow software engineers, how do you future-proof your career in light of the (seemingly inevitable) LLM takeover?
--- EDIT ---
I want to clarify something, because there seems to be a slight misunderstanding.
A lot of people have been talking about SWE being not only about code, and I agree with that. But it's also easier to sell this idea to a young person who is just starting in this career. And while I want this Ask HN to be helpful to young/fresh engineers as well, I'm more interested in getting help for myself, and many others who are in a similar position.
I have almost two decades of SWE experience. But despite that, I seem to have missed the party where they told us that coding is just a means to an end, and only realized it in the past few years. I bet there are people out there in a similar situation. How can we future-proof our careers?
I have a job at a place I love, and I get more people in my direct and extended network contacting me about work than ever before in my 20-year career.
And finally, I keep myself sharp by always making sure I challenge myself creatively. I'm not afraid to delve into areas that might look "solved" to others in order to understand them. For example, I have a CPU-only custom 2D pixel blitter engine I wrote to make 2D games in styles practically impossible with modern GPU-based texture rendering engines, and I recently did 3D in it from scratch as well.
All the while re-evaluating all my assumptions and that of others.
If there’s ever a day where there’s an AI that can do these things, then I’ll gladly retire. But I think that’s generations away at best.
Honestly, this fear that there will soon be no need for human programmers stems from people who either don't themselves understand how LLMs work, or who do understand but have a business interest in convincing others that the technology is more than it is. I say that with confidence.
U.S. (and German) automakers were absolutely sure that the Japanese would never be able to touch them. Then Koreans. Now Chinese. Now there are tariffs and more coming to save jobs.
Betting against AI (or increasing automation, really) is a bet not against robots, but against human ingenuity. Humans are the ones making progress, and we can work with toothpicks as levers. LLMs are our current building blocks, and people are doing crazy things with them.
I've got almost 30 years of experience, but I'm a bit rusty in, e.g., web. Still, I've used LLMs to build maybe 10 apps that I had no business building: from one-off kids' games for learning math, to a soccer team generator that uses Google's OR-Tools to optimise across various axes, to spinning up four different test apps with Replit's agent to try multiple approaches to a task I'm working on. All the while skilling up in React and friends.
I don't really have time for those side-quests, but LLMs make them possible. Easy, even. The amount of time and energy I'd have needed pre-LLMs makes this a million miles away from "a waste of time".
And even if LLMs get no better, we're good at finding the parts that work well and using them as leverage. I'm using them to build and check datasets, because they're really good at extraction. I can throw a human in the loop, but in a startup setting this is 80/20, and that's enough. When I need enterprise-level code, I brainstorm 10 approaches with it and then take the reins. How is this not valuable?
LLMs are good for playing with stuff, yes, and I think your parent commenter implied as much. But when you have to scale the work, the code has to be easy to read, easy to extend, and easy to test; have extensive test coverage; use proper dependency injection so third-party dependencies are easy to mock; and be easy to configure so it can be deployed on any cloud provider (e.g. by using env vars and config files to modify its behavior)... and even more important traits.
LLMs don't write code like that. Many people have tried with many prompts. They're mostly good for generating stuff once, and maybe making small modifications, while convincingly arguing the code has no bugs (when it does).
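For illustration, here's the env-var trait from the list above in miniature. This is a hedged sketch with invented variable names, not a claim about any particular codebase:

```python
import os

# Read deployment-specific settings from the environment with safe defaults,
# so the same build runs unchanged on any cloud provider. The APP_* names
# are made up for this example.
def load_config() -> dict:
    return {
        "db_url": os.environ.get("APP_DB_URL", "sqlite:///local.db"),
        "port": int(os.environ.get("APP_PORT", "8080")),
        "debug": os.environ.get("APP_DEBUG", "0") == "1",
    }
```

Setting `APP_PORT=9000` in the deployment environment then changes behavior without touching the code, which is exactly the kind of trait that has to be asked for explicitly rather than expected by default.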
You seem to be confusing one-off projects that have little to no need for maintenance with actual commercial programming, perhaps?
Your analogy with the automakers seems puzzlingly irrelevant to the discussion at hand, and very far from transferable to it. Also I am 100% convinced nobody is going to protect the programmers; business people trying to replace one guy like myself with 3 Indians has been a reality for 10 years at this point, and amusingly they keep failing and never learning their lesson. /off-topic
Like your parent commenter, if LLMs get to my level, I'd be _happy_ to retire. I don't have a super vested interest in commercial programming; in fact, I became as good at it in the last several years because I started hating it and wanted to get all my tasks done with minimal time expended. So I am quite confident in my assessment that LLMs are barely at the level of a diligent but not very good junior dev at the moment, and have been for at least the last 6 months.
Your take is rather romantic.
People with the attitude that real programmers are producing the high level product are going to get eaten slowly, from below, in the most embarrassing way possible.
Embarrassing because they'll actually be right. LLMs aren't making the high quality products, it's true.
But the low quality CRUD websites (think restaurant menus) will get swallowed by LLMs. You no longer need a guy to code up a model, run a migration, and throw it on AWS. You also don't need a guy to make the art.
Advances in LLMs will feed on the money made from sweeping up all these crap jobs, which will legitimately vanish. That guy who can barely glue together a couple of pages, he is indeed finished.
But the money will make better LLMs. They will swallow up the field from below.
Just like you don't need webmasters anymore, if you remember that term. If you are just writing CRUD apps, then yeah, you're screwed.
If you're a junior, or want to get into the field? Same, you're totally screwed.
LLMs are great at soft tasks, or at producing code that has been written thousands of times (boilerplate, CRUD stuff, straightforward scripts), but the usual problems aren't limited by typing speed or the amount of boilerplate; they're limited by thinking and evaluating solutions and tradeoffs from a business perspective.
Also I'll be brutally honest - by the time the LLMs catch up to my generation's capabilities, we'll be already retired, and that's where the real crisis will start.
No juniors, no capable people, most seniors and principal engineers are retired - and quite probably LLMs won't be able to fully replace them.
To be fair those were eaten long ago by Wordpress and site builders.
> There seems to be some kind of app template many restaurants use which you can order from too.
I think you agree with the comment you are replying to, but glossed over the word bespoke. IME as a customer in a HCOL area a lot of restaurants use a white-label platform for their digital menu. They don't have the need or expertise to maintain a functional bespoke solution, and the white-label platform lends familiarity and trust to both payment and navigation.
A modern CRUD website (any software can be reduced to CRUD, for that matter) is not trivially implemented and is far beyond what current LLMs can output. I think they will hit a wall before they're ever able to do that. Also, configuration and infrastructure management is a large part of such a project and far out of scope as well.
People have built some useful tools for LLMs to enable them to do things besides outputting text and images. But it is still quite laborious to really empower them to do much.
LLMs can indeed empower technical people. For example, those working in automation can generate little Python or JavaScript scripts to push bits around, provided the endpoints have well-known interfaces. That is indeed helpful, but the code still always needs manual review.
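The sort of little glue script meant here might look like the following; the field names and schemas are invented for illustration, and the endpoints themselves are left out:

```python
import json

# Hypothetical glue code: reshape records exported by one system into the
# payload format another system's API expects. This is the "push bits
# around between well-known interfaces" case, and it still merits review.
def to_target_format(record: dict) -> dict:
    return {
        "id": record["user_id"],
        "name": f'{record["first"]} {record["last"]}',
        "active": record.get("status") == "enabled",
    }

src = '{"user_id": 7, "first": "Ada", "last": "Lovelace", "status": "enabled"}'
payload = json.dumps(to_target_format(json.loads(src)))
```

Trivial to generate, and exactly the kind of thing where a quick manual review catches a silently wrong field mapping before it ships.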
Work for amateur web developers will likely change, but they certainly won't be out of work anytime soon. Although the most important factor is that most websites aren't really amateur land anymore, LLMs or not.
"That guy who can barely glue together a couple of pages" was never going to provide much value as a developer, the lunches you describe were already eaten: LLMs are just the latest branding for selling solutions to those "crap jobs".
I don't disagree btw. Stuff that is very easy to automate will absolutely be swallowed by LLMs or any other tool that's the iteration #13718 of people trying to automate boring stuff, or stuff they don't want to pay full programmer wages for. That much I am very much convinced of as well.
But there are many other, shall we call them nebulous, claims that I take issue with. Like "programming is soon going to be dead". I mean, OK, they might believe it, but arguing it on HN is just funny.
I'm totally conflicted on the future of computing careers considering LLM impact, but I've worked at a few places and on more than a few projects where few/none of these are met, and I know I'm far from the only one.
I'd wager a large portion of jobs are like this. The majority of roles aren't working on some well-groomed Google project.
Most places aren't writing exquisite examples of a canonically "well-authored" codebase; they're gluing together enterprise CRUD slop to transform data and put it in some other database.
LLMs are often quite good at that. It's impossible to ignore that reality.
It’s the very definition of a self-fulfilling prophecy.
I love it anyhow. Sure, it generates shit code, but if you ask it, it'll gladly tell you all the ways the code can be improved. And then actually do so.
It's not perfect. I spent a few hours yesterday pulling its massive blobby component apart by hand. But on the plus side, I didn't have to write the whole thing. Just do a bunch of copy-paste operations.
I kinda like having a junior dev to do all the typing for me, and to give me feedback when I can’t think of a good way to proceed.
The question, really, is: are you confident that this was better than actually writing the whole thing yourself? Not only in terms of how long it took this one time, but also in terms of how much you learned while doing it.
You accumulate experience when you write code, which is an investment. It makes you better for later. Now if the LLM makes you slightly faster in the short term, but prevents you from acquiring experience, then I would say that it's not a good deal, is it?
I tried LLMs several times, even started using my phone's timers, and found out that just writing the code I need by hand is quicker and easier on my brain. Proof-reading and looking for correctness in something already written is more brain-intensive.
Everyone isn't "generating and moving on". There's still a review and iteration part of the process. That's where the learning happens.
LLMs seem to be OK-ish at solving trivial boilerplate stuff. Twenty attempts deep, I have not yet seen one come even remotely close to solving anything I'd been stuck on badly enough to have to sit down and think hard.
More like it has no grey matter to get in the way of thinking of alternatives. It doesn’t get fixated, or at least, not on the same things as humans. It also doesn’t get tired, which is great when doing morning or late night work and you need a reality check.
Deciding which option is the best one is still a human thing, though I find that those often align too.
What in my comment did you find defensive, btw? I am curious how it looks from the outside to people who are not exactly of my mind. Not promising I'll change, but still curious.
The majority of programmers getting paid a good salary are implementing other people's ideas. You're getting paid to be some PM's ChatGPT.
yes but one that actually works
I even gave several examples of the traits that commercial code must have, and pointed out that LLMs fail to generate such code. Not sure why you ignored that.
As another oldster, yes, yes absolutely.
But the deeper question is: can this change? They can't do the job now, will they be able to in the future?
My understanding of what LLMs do and how they work suggests a strong "no". I'm not confident about my confidence, however.
But my issue is with people that claim we are either at that point, or very nearly it.
...No, we are not. We are not even 10% there. I am certain LLMs will be a crucial part of a general AI one day, but it's quite funny how people mistake the brain's frontal lobe for the entire brain. :)
I am even fixing one small app currently that the business owner generated with ChatGPT. So this entire discussion here is doubly amusing for me.
I'll stick to my "old-school" programming. I seem to have a very wealthy near-retirement period in front of me. I'll make gobs of money just by not having forgotten how to do programming.
As someone who has been doing this since the mid-'80s in all kinds of enterprise environments, I am finding that the latest generation is getting rather good at code like that, on par with mid-to-senior level in that way. They are also very capable of discussing architecture approaches with encyclopaedic knowledge, although humans contribute meaningfully by drawing connections and analogies, and are needed to lead the conversation and make decisions.
What LLMs are still weak at is holding a very large context for an extended period (which is why you can see failures in the areas you mentioned if they're not properly handled, e.g. explicitly discussed, often as separate passes). Humans are better at compressing that information and retaining it over a period. LLMs are also more eager and less defensive coders. That means they need to be kept on a tight leash and drip-fed single changes, which get backed out each time they fail; a very bright junior in that way. For example, I sometimes find they are too eager to refactor as they go and spit out env vars to make things more production-like, when the task at hand is to get basic, simple, first-pass working code for later refinement.
I'm highly bullish on their capabilities as a force multiplier, but highly bearish on them becoming self-driving (for anything complex at least).
Very well summed up, and this is my exact stance; it's just that I am not seeing much of the "force multiplier" thing just yet. Happy to be proven wrong, but the last time I checked (August 2024), I got almost nothing out of it. Might be related to the fact that I don't do throwaway code, and I need to iterate on it.
And yeah, out of all the LLMs, it seems that Claude is the best when it comes to programming.
I just used Claude recently. Helped me with an obscure library and with the hell that is JWT + OAuth through the big vendors. Definitely saved me a few hours and I am grateful, but those cases are rare.
I'm open to LLMs being a productivity enabler. Recently I started occasionally using them for that as well -- sparingly. I more prefer to use them for taking shortcuts when I work with libraries whose docs are lacking.
...But I did get annoyed at the "programming as a profession is soon dead!" people. I do agree with most other takes on LLMs, however.
now there was someone that I could call king
Interestingly, I do not find that the things you mentioned are what LLMs are bad at. They can generate easy-to-read code. They can document their code extensively. They can write tests. They can use dependency injection, especially if you ask them to. What I noticed is that where I am currently much better than an LLM is that I can hold a very nuanced, very complicated problem space in my head and create a solution based on that. The LLM currently cannot solve a problem so nuanced and complicated that even I cannot fully describe it and have to work partially from instinct. It is that 'engineering taste', those 'engineering instincts', and our ability to absorb complicated, nuanced design problems in our heads that separate experienced developers from LLMs.
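For what it's worth, the dependency-injection trait mentioned above is easy to show in miniature. This is a hedged sketch, not anyone's production code; `CheckoutService` and the callable payment gateway are invented names:

```python
from dataclasses import dataclass
from typing import Callable

# Inject the third-party payment gateway as a plain callable, so tests can
# pass a stub instead of hitting a real external service.
@dataclass
class CheckoutService:
    charge: Callable[[str, int], bool]  # (card_token, amount_cents) -> success

    def checkout(self, card_token: str, amount_cents: int) -> str:
        return "paid" if self.charge(card_token, amount_cents) else "declined"

# In tests, swap in a fake with the same shape (here: decline large charges).
svc = CheckoutService(charge=lambda token, cents: cents < 10_000)
```

An LLM will happily produce this shape when asked for it; the point in the thread is whether it reaches for it unprompted.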
Unless LLMs get significantly better and just replace humans in every task, I predict their effect on the industry will be that far fewer developers are needed, but those who are needed will be relatively well paid, as it will be harder to become a professional software developer (more experience and engineering talent will be needed to work professionally).
I quickly gave up on experimenting with LLMs because they turned out to be a net negative for my work; proof-reading code is slower than writing it.
But if you have good examples then I'll check them out. I am open to the possibility that LLMs are getting better at iterating on code. I'd welcome it. It's just that I haven't seen it yet and I am not going to go out of my way to re-vet several LLMs for the... 4th time it would be, I think.
The code doesn't have to be anything like that; it only has to do one thing and one thing only: ship.
And so it goes. I'm happy you guys work in places where you can take your time to design beautiful works of art, I really am. Again, that's not the experience for everyone else, who are toiling in the fields out there, chased by large packs of rabid tickets.
You can call it dev-first mindset, I call it a sustainable work mindset. I want the people I work for to be happy with my output. Not everything is velocity. I worked in high-velocity companies and ultimately got tired and left. It's not for me.
And it's not about code being beautiful or other artsy snobby stuff like that. It's about it being maintainable really.
And no, I am not given much time; I simply refuse to work in places where I am not given any time, is all. I am a foot soldier in the trenches just like you; I just don't participate in the suicidal charges. :)
The HN commentariat seems to be comprised of FAANGers and related aspirants; a small part of the overall software market.
The "quality" companies doing high-skilled, bespoke work are a distinct minority.
A huge portion of work for programmers IS Java CRUD, React abominations, some C# thing that runs disgusting queries from an old SQL Server, etc etc.
Those privileged enough to work exclusively at those companies have no idea what the largest portion of the market looks like.
LLMs ARE a "threat" to this kind of work, for all the obvious reasons.
We've watched common baselines be abstracted away, and tons of value opened up for creation by non-engineers, by reducing the complexity and risk of development and maintenance.
I think this is awesome and it hasn't seemed to eliminate any engineering jobs or roles - lots of crud stuff or easy to think about stuff or non critical stuff is now created that wasn't before and that seems to create much more general understanding of and need for software, not reduce it.
Regardless of tools available I think of software engineering as "thinking systematically" and being able to model, understand, and extend complex ideas in robust ways. This seems improved and empowered by ai coding options, not undermined, so far.
If we reach a level of AI that can take partially-thought-out goals, reason out the underlying "how it should work" / "what that implies", create that, and extend it (or replace it wholesale without mistakes), then yeah, people who can ONLY code won't have much earning potential (but what job still will in that scenario?).
It seems like current ai might help generate code more quickly (fantastic). Much better ai might help me stay at a higher level when I'm implementing these systems (really fantastic if it's at least quite good), and much much better ai might let me just run the entire business myself and free me up to "debug" at the highest level, which is deciding what business/product needs to exist in the first place and figuring out how to get paid for it.
I'm a pretty severe AI doubter, but you can see from my writing above that if it DOES manage to be good, I think that would actually be good for us, not bad.
My basic conclusion is "they seem quite good (enough) at what appears to be a large portion of 'Dev' jobs" and I can't ignore the possibility of this having a material impact on opportunities.
At this point I'm agnostic on the future of GenAI impacts in any given area. I just no longer have solid ground upon which to have an opinion.
What is the "lesson" that business people fail to learn? That Indians are worse developers than "someone like yourself"?
(I don't mean to bump on this, but it is pretty offensive as currently written.)
2. The lesson business people fail to learn is that there's a minimum you have to pay programmers to get your stuff done, below which quality drops sharply, and that they should not attempt their cost-saving "strategy" because it ends up costing them much more than just paying me to do it. And I am _not_ commanding SV wages, btw; $100k a year is something I only saw twice in my long career. So it's doubly funny how these "businessmen" try to be shrewd and pay even less, only to end up paying 5x my wage to a team that specializes in salvaging nearly-failed projects. I'll always find it amusing.
The outsourcing company's goal is not to ship a good product, but to make the most money from you. So while the "Indian developer" might theoretically be able to deliver a better product than you, they won't be incentivized to do so.
In practice, there are also many other factors at play, which arguably matter more: communication barriers, Indian career paths (i.e., dev is only a stepping stone on the way to manager), employee churn at cheap and terrible outsourcing companies, etc.
> people trying to replace one guy like myself with 3 Indians has been a reality for 10 years at this point, and amusingly they keep failing and never learning their lesson.
Good good, so you're the software equivalent of a 1990's era German auto engineer who currently has no equal, and maybe your career can finish quietly without any impact whatsoever. There are a lot of people on HN who are not you, and could use some real world advice on what to do as the world changes around them.
If you've read "Crossing the Chasm": in at least that view of the world, there are different phases to innovation, with different roles: innovators, early adopters, early majority, late majority, laggards. Each has a different motivation and requirement for robustness. The laggards won't be caught dead using your new thing until IBM gives it to them running on an AS/400. Your job might be equivalent to that, where your skills are rare enough that you'll be fine for a while. However, we're past the "innovators" phase at this point, and a lot of startups are tearing business models apart right now, moving along that innovation curve. They may not get to you, but everyone is not you.
The choices for a developer include: "I'm safe", "I'm going to get so good that I'll be safe", "I'm going to leverage AI and be better", and "I'm out". Their decisions should not be based on your unique perspective, but on the changing landscape and how likely it is to affect them.
Good sense-check on where things are in the startup universe: https://youtu.be/z0wt2pe_LZM
I can't find it now, but there's at least one company that is doing enterprise-scale refactoring with LLMs, ASTs, rules, etc. Obviously it won't handle all edge cases, but... that's not your grandad's Cursor.
Might be this one, but don't recognise the name: https://www.linkedin.com/pulse/multi-repo-ai-assisted-refact...
Also, I never said that I "have no equal". I am saying that the death of my career has been predicted for no less than 10 years now; it still has not happened, and I see no signs of it happening. LLMs produce terrible code very often.
This gives me the right to be skeptical from where I am standing. And a bit snarky about it, too.
I asked for a measurable proof, not for your annoyed accusations that I am arrogant.
You are not making an argument. You are doing an ad hominem attack that weakens any other argument you may be making. Still, let's see some of them.
---
RE: choices, my choice was made a long time ago, and it's this one: "I will become quite good so as to be mostly safe. If 'AI' displaces me, then I'll be happy to work at something else until my retirement." Nothing more, nothing less.
RE: "startup universe", that's a very skewed perspective. 99.99999% of all USA startups mean absolutely nothing in the grand scheme of things out there, they are but a tiny bubble in one country in a big planet. Trends change, sometimes drastically and quickly so. What a bunch of guys in comfy positions think about their bubble bears zero relevance to what's actually going on.
> I can't find it now, but there's at least one company that is doing enterprise-scale refactoring with LLM's, AST's, rules etc.
If you find it, let me know. That I would view as an interesting proof and a worthy discussion to have on it after.
"Your analogy with the automakers seems puzzlingly irrelevant"
"Your take is rather romantic."
That's pretty charged language focused on the person not the argument, so if you're surprised why I'm annoyed, start there.
Meta has one: https://arxiv.org/abs/2410.08806
Another, edited in above: https://www.linkedin.com/pulse/multi-repo-ai-assisted-refact...
Another: https://codescene.com/product/ai-coding
However, I still don't recognise the names. The one I saw had no pricing but had worked with some big names.
Edit: WAIT found the one I was thinking of: https://www.youtube.com/watch?v=Ve-akpov78Q - company is https://about.grit.io
In an enterprise setting, I'm the one hitting the brakes on using LLMs. There are huge risks to attaching them to, e.g., customer-facing outputs. In a startup setting, full speed ahead. Match the opinion to the needs, keep two opposing thoughts in mind, etc.
Or build an AI agent to transform his speech into something more understandable.
https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/tra...
To be honest, I have no idea how well it works, but you can’t get much bigger than AWS in this regard.
You have to make them write code like that. I TDD by telling the LLM what I want a test for, verifying it's red, then asking it to implement. I ask it to write another test for an edge case that isn't covered, and it will.
Not for 100% of the code, but for production code for sure. And it certainly speeds me up. Especially in dreadful refactoring where I just need to flip from one data structure to another, where my IDE can’t help much.
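A miniature of that red/green loop, with `slugify` as an invented stand-in for the kind of function you'd ask for:

```python
# Step 1: ask the LLM for a test first, run it, and watch it fail (red).
def test_slugify_replaces_spaces_and_lowercases():
    assert slugify("Hello World") == "hello-world"

# Step 2: ask for the implementation that makes it pass (green).
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Step 3: ask for a test covering an edge case not yet handled, and iterate.
def test_slugify_collapses_extra_whitespace():
    assert slugify("  Hello   World ") == "hello-world"
```

The discipline is in the ordering: the test exists and fails before any implementation is generated, which keeps the LLM honest about what "done" means.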
Rarely, systems can be initiated by individuals, but the vast, vast majority are built and maintained by teams.
Those teams get smaller with LLMs, and it might even lead to a kind of stasis, where there are no new leads with deep experience, so we maintain the old systems.
That's actually fine by big iron, selling data, compute, resilience, and now intelligence. It's a way to avoid new entrants with cheaper programming models (cough J2EE).
So, if you're really serious about dragging in "commercial-grade", it's only fair to incorporate the development and business context.
Obviously anecdata, sure; it's just that LLMs for now seem mostly suited to throwaway code and one-off projects. If there's systematic proof to the contrary, I'd be happy to check it out.
> Your take is rather romantic.
I'm not sure you're aware of what you've written here. The contrast physically hurts.
I don't see where I was romantic either.
And my comments here mostly stem from annoyance that people claim that we already have this super game-changing AI that will remove programmers. And I still say: no, we don't have it. It works for some things. _Some_. Maybe 10% of the whole thing, if we are being generous.
Cool for you but a lot of us actually iterate on our work.
“doesn’t pass my sniff test”
okay Einstein, you do you
Fixed it for you. ;)
“AI can’t do anything, it sucks!”
and
“AI is AGI and can do everything and your career is done for”
I teeter along the spectrum, and use them with caution while learning new topics in which I lack expertise.
But I’ve been very surprised by LLMs in some areas (UI design - something I struggle with - I’ve had awesome results!)
My most impressive use case for an LLM so far (Claude 3.5 Sonnet) has been iterating on a pseudo-3D ASCII renderer in the terminal using C and ncurses. With its help I was able to prototype this and render an ASCII "forest" of 32 highly detailed ASCII trees (based off a single ASCII tree template), with lighting and 3-axis tree placement, where you can move the "camera" towards and away from the forest.
As you move closer, trees scale up and move towards you, and overlapping trees don't "blend" into one ASCII object; we came up with a clever boundary-highlighting solution.
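The trees-scale-up-as-you-approach effect falls out of a simple perspective divide. The renderer described is C/ncurses; here's a tiny Python sketch of just that idea, with the focal length and screen-center constants invented for illustration:

```python
# Perspective projection in one dimension: apparent size is inversely
# proportional to distance from the camera.
FOCAL = 40.0  # invented focal length, in character cells

def screen_scale(tree_z: float, camera_z: float) -> float:
    depth = tree_z - camera_z
    return FOCAL / depth if depth > 0 else 0.0  # behind the camera: cull

def screen_x(world_x: float, tree_z: float, camera_z: float, cx: int = 40) -> int:
    # Project a world x-coordinate onto a terminal column, centered at cx.
    return cx + round(world_x * screen_scale(tree_z, camera_z))
```

Halving the depth doubles the scale, which is why trees appear to rush toward you as the camera advances; the boundary-highlighting trick sits on top of this, in the drawing pass.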
Probably my favourite thing I’ve ever coded, will post to HN soon
And indeed:
> “AI is AGI and can do everything and your career is done for”
...this is the thing I want to stop seeing people even imply it's happening. An LLM helped you? Cool, it helped me as well. Stop claiming that programming is done for. It's VERY far from that.
"Commercial grade applications" doesn't mean much in an industry where ~50% of projects fail. It's been said before that the average organisation cannot solve a solved problem. On top of this there's a lot of bizarre claims about what software _should_ be. All the dependency injection and TDD and scrum and all the kings horses don't mean anything when we are nothing more than a prompt away from a working reference implementation.
Anyone designing a project to be "deployed in every cloud provider" has wasted a huge amount of time and money, and has likely never run ops for such an application. Even then, knowing the trivia and platform specific quirks required for each platform are exactly where LLMs shine.
Your comments about "business people" trying to replace you with multiple Indian people show your level of personal and professional development, and you're exactly the kind of personality that should be replaced by an LLM subscription.
Getting so worked up doesn't seem objective so it's difficult to take your comment seriously.
Truly hit the nail on the head there. We HAD no business with these side-quests, but now? They're all ripe for the taking, really.
LLM pair programming is unbelievably fun, satisfying, and productive. Why type out the code when you can instead watch it being typed while thinking of and typing out/speaking the next thing you want.
For those who enjoy typing, you could try to get a job dictating letters for lawyers, but something tells me that’s on the way out too.
I honestly have to keep a tight rein on them all, so I usually ask for concepts first with no code, and I need to iterate or start again a few times to get what I need. Get clear instructions, then generate. Drag in context; keep tight reins on the changes I want. Make minor changes rather than wholesale ones.
Tricks I use: "Do you have any questions?" and "Tell me what you want to do first." Trigger it into the right latent space first; get the right neurons firing. Also "How else could I do this?" It'll sometimes choose bad algos, so you need to know your DSA, and it loves to overcomplicate. Tight reins :)
Claude’s personality is great. Just wants to help.
All work best on common languages and libraries. Edge cases or new versions get them confused. But you can paste in a new API and it'll code against that perfectly.
I also use the APIs a lot, from cheap to pricey depending on the task. Lots of data extraction and classifying. I got a (pricier model) data cleaner working on data generated by a cheaper model, asking it to check e.g. 20 rows in each batch for consistency. Did a great job.
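That tiered-pipeline idea (cheap model generates, pricier model spot-checks in small batches) can be sketched like this. `call_model` here is a hypothetical stand-in for whatever LLM client you use, not a real library call, and the batch size and prompt wording are illustrative:

```python
from typing import Callable

def spot_check(rows: list[str],
               call_model: Callable[[str], str],
               batch_size: int = 20) -> list[str]:
    """Send rows to the checker model in batches; collect one verdict per batch."""
    verdicts = []
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        prompt = ("Check the following rows for consistency; "
                  "answer OK or FIX for each:\n" + "\n".join(batch))
        verdicts.append(call_model(prompt))
    return verdicts

# Usage with a dummy "model" that always answers OK:
print(spot_check([f"row-{n}" for n in range(45)], lambda p: "OK"))  # → ['OK', 'OK', 'OK']
```

The point of batching is just cost control: the expensive model sees 20 rows per call instead of the whole dataset.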
But lately Cursor. It’s just so effortless.
Nowadays I get wayyy more of a kick typing the most efficient Lego prompts in Claude.
Why are half the replies like this?
I’m open to all possibilities. There might be a near term blocker to improvement. There might be an impending architectural change that achieves AGI. Strong opinions for one or the other with no extremely robust proof are a mistake.
Call me if you find a good reason. I still have not.
I have built web services used by many Fortune 100 companies, built and maintained and operated them for many years.
But I'm not doing that anymore. Now I'm working on my own, building lots of prototypes and proof-of-concepts. For that I've found LLMs to be extremely helpful and time-saving. Who the hell cares if it's not maintainable for years? I'll likely be throwing it out anyway. The point is not to build a maintainable system, it's to see if the system is worth maintaining at all.
Are there software engineers who will not find LLMs helpful? Absolutely. Are there software engineers who will find LLMs extremely helpful? Absolutely.
Both can exist at the same time.
What are the likely impacts over the next 1, 5, 10, 20 years? People getting into development now have the most incredible technology to help them skill up, but also more risk than we had in decades past. There's a continuum of impact and it's not 0 or 100%, and it's not immediate.
What I consider inevitable: humans will keep trying to automate anything that looks repeatable. As long as there is a good chance of financial gain from adding automation, we'll try it. Coding is now potentially at risk of increasing automation, with wildcards on "how much" and "what will the impact be". I'm extremely happy to have nuanced discussions, but I balk at both extremes of "LLMs can scale to hard AGI, give up now" and "we're safe forever". We need shorthand for our assumptions and beliefs so we can discuss differences on the finer points without fixating on obviously incorrect straw men. (The latter aimed at the general tone of these discussions, not your comment.)
My issue is with people claiming LLMs are undoubtedly going to remove programming as a profession. LLMs work fine for one-off code -- when they don't make mistakes even there, that is. They don't work for a lot of other areas, like code you have to iterate on multiple times because the outer world and the business requirements keep changing.
Works for you? Good! Use it, get more productive, you'll only get applause from me. But my work does not involve one-off code, and for me LLMs are not impressive because I had to rewrite their code (and eyeball it for bugs) multiple times.
"Right tool for the job" and all.
But what you'll probably find is that people that are skilled communicators are currently getting a decent productivity boost from LLMs, and I suspect that the difference between many that are bullish vs bearish is quite likely coming down to ability to structure and communicate thoughts effectively.
Personally, I've found AI to be a large productivity boost - especially once I've put certain patterns and structure into code. It's then like hitting the N2O button on the keyboard.
Sure, there are people making toy apps using LLMs that are going to quickly become an unmaintainable mess, but don't be too quick to assume that LLMs aren't already making an impact within production systems. I know from experience that they are.
Strange perspective. I found LLMs lacking in less popular programming languages, for example. It's mostly down to statistics.
I agree that being able to communicate well with an LLM gives you more results. It's a productivity enabler of sorts. It is not a game changer however.
> don't be too quick to assume that LLMs aren't already making an impact within production systems. I know from experience that they are.
OK, I am open to proof. But people are just saying it and leaving the claims hanging.
I also have no idea what your comment here had to do with LLMs, will you elaborate?
You're essentially saying "the programming to do basic arithmetic and physics took only a year" as if that's remotely impressive compared to the complexity of something like Microsoft Teams. Simultaneous editing of a document is by itself more complicated than anything the Apollo program had to do.
I have spent 24 years coding without LLMs, cannot fathom now spending more than a day without it…
If you have scrutable and interesting examples, I am willing to look them up and try to apply them to my work.
Honestly, if you don’t think it works for you, that’s fine with me. I just feel the dismissive attitude is weird since it’s so incredibly useful to me.
If you can give examples of incredible usefulness then that can advance the discussion. Otherwise it's just us trying to out-shout each other, and I'm not interested in that.
I always forget the name of that law (Upton Sinclair's line, I believe) but... it's hard to make somebody understand something if their salary depends on them not understanding it.
Why do you think your observable reality is the only one, and the correct one at that? Looking at your mindset, as well as the objections to the contrary (and their belief that they're correct), the truth is likely somewhere in-between the two extremes.
The funny thing about the rest of your comment is that I'm in full agreement with you but somehow you decided that I'm an extremist. I'm not. I'm simply tired of people who make zero effort to prove their hypothesis and just call me arrogant or old / stuck in my ways, again with zero demonstration how LLMs "revolutionize" programming _exactly_.
And you have made more effort in that direction than most people I discussed this with, by the way. Thanks for that.
Still, I don't think it's a hypothesis that most are operating under, but a lived experience that either it works for them or it does not. Just the other day I used ChatGPT to write me a program to split a file into chunks along a delimiter. Something I could absolutely do, in at least a half-dozen languages, but writing that program myself would have distracted me from the actual task at hand, so I had the LLM do it. It's a trivial script, but the point is I didn't have to break my concentration on the other task to get that done. Again, I absolutely could have done it myself, but that would have been a total distraction. https://chatgpt.com/share/67655615-cc44-8009-88c3-5a241d083a...
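The kind of throwaway script described there is worth making concrete: split a file's contents into chunks along a delimiter line. The delimiter and sample text below are made up for illustration, not taken from that chat:

```python
def split_on_delimiter(text: str, delimiter: str = "---") -> list[str]:
    """Return the chunks of `text` separated by lines equal to `delimiter`."""
    chunks, current = [], []
    for line in text.splitlines():
        if line.strip() == delimiter:
            chunks.append("\n".join(current))
            current = []
        else:
            current.append(line)
    chunks.append("\n".join(current))  # trailing chunk after the last delimiter
    return chunks

sample = "a\nb\n---\nc\n---\nd"
print(split_on_delimiter(sample))  # → ['a\nb', 'c', 'd']
```

Trivial, yes, and that's exactly the point being made: it's the sort of ten-minute distraction worth delegating.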
On a side project I'm working on, I said "now add a button to reset the camera view" (literally - aider can take voice input). We're not quite at the scene from Star Trek where Scotty talks into the Mac to try and make transparent aluminum, but we're not that far off! The LLM went and added the button, wired it into a function call that called into the rendering engine and reset the view. Again, I very much could have done that myself, but it would have taken me longer just to flip through the files involved and type out the same thing. And it's not just the time saved (I didn't have a stopwatch and a screen recorder going); it's not having to drop my thinking into that frame of reference, so I can think more deeply about the other problems to be solved. Sort of why the CEO isn't an IC and why ICs aren't supposed to manage.
Detractors will point out that it must be a toy program, and that it won't work on a million line code base, and maybe it won't, but there's just so much that LLMs can do, as they exist now, that it's not hyperbole to say it's redefined programming, for those specific use cases. But where that use case is "build a web app", I don't know about you, but I use a lot of web apps these days.
> Detractors will point out that it must be a toy program, and that it won't work on a million line code base, and maybe it won't
You might call me a detractor. I think of myself as being informed and feeling the need to point out where do LLMs end and programmers begin because apparently even on an educated forum like HN people shill and exaggerate all the time. That was the reason for most of my comments in this thread.
However, that's absolutely not clear today.
Yeah, I built a bunch of apps when the RoR blog demo came out, like two decades ago. So what?
I am constantly surprised how prevalent this attitude is. ChatGPT was only just released in 2022. Is there some expectation that these things won't improve?
> LLM’s never provide code that pass my sniff test
This is ego speaking.
I definitely expect them to improve. But I also think the point at which they can actually replace a senior programmer is pretty much the exact point at which they can replace any knowledge worker, at which point western society (possibly all society) is in way deeper shit than just me being out of a job.
> This is ego speaking.
It definitely isn't. LLMs are useful for coding now, but they can't really do the whole job without help - at least not for anything non-trivial.
As another senior developer I won't say it's impossible that I'll ever benefit from code generation but I just think it's a terrible space to try and build a solution - we don't need a solution here - I can already type faster than I can think.
I am keenly interested in seeing if someone can leverage AI for query performance tuning or, within the RDBMS, query planning. That feels like an excellent (if highly specific) domain for an LLM.
Pay the $20 for Claude, copy the table DDL's in along with a query you'd like to tune.
Copy in any similar tuned queries you have and tell it you'd like to tune your query in a similar manner.
Once you've explained what you'd like it to do and provided context hit enter.
I'd be very surprised if having done this you can't find value in what it generates.
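That workflow (DDL, the slow query, and an already-tuned query as a style example, assembled into one prompt) can be sketched in a few lines. The table and SQL below are invented for illustration; nothing here is a real Claude API call, just prompt assembly:

```python
def build_tuning_prompt(ddl: str, slow_query: str, tuned_example: str = "") -> str:
    """Assemble the context described above into a single prompt to paste in."""
    parts = [
        "Here is the table DDL:", ddl,
        "Here is a query I'd like you to tune:", slow_query,
    ]
    if tuned_example:
        parts += ["Tune it in a similar manner to this already-tuned query:",
                  tuned_example]
    parts.append("Explain each change and why it should help the planner.")
    return "\n\n".join(parts)

prompt = build_tuning_prompt(
    "CREATE TABLE orders (id bigint PRIMARY KEY, placed_at timestamptz);",
    "SELECT * FROM orders WHERE placed_at::date = '2024-01-01';",
)
print(prompt)
```

The "similar manner" example query is doing a lot of work here: it anchors the model to your shop's tuning conventions rather than generic advice.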
But can you write tickets faster than you can implement them? I certainly can.
Personally, I have a tendency at work to delay creating tickets until after I've already written the implementation.
Why? Because tickets in my employer's system are expected to identify which component needs to be changed, and ideally should have some detail about what needs to be changed. But both of those things depend on the design of the change being implemented.
In my experience, any given feature usually has multiple possible designs, and the only way to know if a design is good is to try implementing it and see how clean or messy it ends up. Of course, I can guess in advance which design will turn out well. I have to guess, or else I wouldn't know which design to try implementing first. But often my first attempt runs into unexpected wrinkles and I retreat and try a different design.
Other people will start with a design and brute-force their way to working code, and (in my judgmental opinion) the code often ends up lower-quality because of it.
Sooner or later, perhaps AI will be able to perform that entire process autonomously, better than I can. In the meantime, though, people often talk about using AI like a 'junior engineer', where you think up a design yourself and then delegate the grunt work to the AI. That approach feels flawed to me, because it disconnects the designer from the implementer.
abso-frigging-lutely
To me this is an example of software being a form of literacy - creative work. And yet the process is designed by software illiterates who think novels can be written by pre-planning all the paragraphs.
If it's "Get us to the moon", it's gonna take me years to write that ticket.
If it was "Make the CTA on the homepage red", it is up for debate whether I needed a ticket at all.
That would be great if the reality wasn't that the suggested LLM slop is actually making it harder to get the much better intellisense suggestions of last year.
When hand-held power tools became a thing, the Hollywood set builder’s union was afraid of this exact same thing - people would be replaced by the tools.
Instead, productions built bigger sets (the ceiling was raised) and smaller productions could get in on things (the floor was lowered).
I always took that to mean “people aren’t going to spend less to do the job - they’ll just do a bigger job.”
A few hundred animators turned into a few thousand computer animators & their new support crew, in most shops. And new, smaller shops took form! But the shops didn't go away, at least not the ones who changed.
It basically boils down to this: some shops will act with haste and purge their experts in order to replace them with LLMs, and others will adopt the LLMs, bring on the new support staff they need, and find a way to synthesize a new process that involves experts and LLMs.
Shops who've abandoned their experts will immediately begin to stagnate and produce more and more mediocre slop (we're seeing it already!) and the shops who metamorphose into the new model you're speculating at will, meanwhile, create a whole new era of process and production. Right now, you really want to be in that second camp - the synthesizers. Eventually the incumbents will have no choice but to buy up those new players in order to coup their process.
Then the steam engine and internal combustion engine came around and work horses all but disappeared.
There's no economic law that says a new productivity-enhancing programming tool is always a stirrup and never a steam engine.
All the tools that are stirrups were used "by the horse" (you get what I mean); that implies to me that so long as the AI tools are used by the programmers (what we've currently got), they're stirrups.
The steam engines were used by the people "employing the horse" - ala, "people don't buy drills they buy holes" (people don't employ horses, they move stuff) - so that's what to look for to see what's a steam engine.
IMHO, as long as all this is "telling the computer what to do", it's stirrups, because that's what we've been doing. If it becomes something else, then maybe it's a steam engine.
And, to repeat - thank you for this point, it's an excellent one, and provides some good language for talking about it.
Maybe another interesting case would be secretaries. It used to be very common that even middle management positions at small to medium companies would have personal human secretaries and assistants, but now they're very rare. Maybe some senior executives at large corporations and government agencies still have them, but I have never met one in North America who does.
Below that level it's become the standard that people do their own typing, manage their own appointments and answer their own emails. I think that's mainly because computers made it easy and automated enough that it doesn't take a full time staffer, and computer literacy got widespread enough that anyone could do it themselves without specialized skills.
So if programming got easy enough that you don't need programmers to do the work, then perhaps we could see the profession hollow out. Alternatively we could run out of demand for software but that seems less likely!
(a related article: https://archive.is/cAKmu )
Those textile workers were afraid machines would replace them, but that didn't happen - the work was sent overseas, to countries with cheaper labor. It was completely tucked away from regulation and domestic scrutiny, and so remains to this day a hotbed of human rights abuses.
The phenomenon you're describing wasn't an industry vanishing due to automation. You're describing a moment where a domestic industry vanished because the cost of overhauling the machinery in domestic production facilities was near to the cost of establishing an entirely new production facility in a cheaper, more easily exploitable location.
Computers aren't going anywhere, so the whole field of programming will continue to grow, but will there still be FAANG salaries to be had?
if you're afraid of salaries shrinking due to LLMs, then i implore you, get out of software development. it'll help me a lot!
Software is typically not a cost-constrained activity, due to its typically higher ROI/scale. It's all about fixed costs and scaling profits, mostly. Unfortunately, given this, my current belief is that on balance AI will destroy many jobs in this industry if it gets to the point where it can do a software job.
Assuming inelastic demand (software demand relative to SWE costs) any cost reductions in inputs (e.g. AI) won't translate to much more demand in software. The same effect that drove SWE prices high and didn't change demand for software all that much (explains the 2010's IMO particularly in places like SV) also works in reverse.
- Software scales; generally it is a function of market size, reach, network effects, etc. Cost is a factor but not the greatest factor. Most software makes profit "at scale" - engineering is a capital fixed cost. This means that software feasibility is generally inelastic to cost; or rather, if I make SWEs cheaper or not required to employ, it wouldn't change the potential profit vs cost equation much IMV for many different opportunities. Software is limited more by ideas and potential untapped market opportunities. Yes, it would be much cheaper to build things, but it wouldn't change the feasibility of a lot of project assessments, since the cost of SWEs, at least from what I've seen in assessments, isn't the biggest factor. This effect plays out to varying degrees in a lot of capex - as long as the ROI makes sense it's worth going ahead, especially for larger orgs who have more access to capital. The ROI from scale dwarfs potential cost rises, often making it less of a function of end SWE demand. This effect happens in other engineering disciplines as well to varying degrees - software just has it in spades, until you mix hardware scaling into the mix (e.g. GPUs).
- The previous "golden era" where the in-elasticity of software demand w.r.t cost meant salaries just kept rising. Inelasticity can be good for sellers of a commodity if demand increases. More importantly demand didn't really decrease for most companies as SWE salaries kept rising - entry requirements were relaxing generally. The good side of in-elasticity is reversed potentially by AI making it a bad thing.
However small "throwaway" software which does have a "is it worth it/cost factor" will increase under AI most probably. Just don't think it will offset the reduction demanded by capital holders; nor will it be necessarily done by the same people anyway (democratizing software dev) meaning it isn't a job saver.
In your case I would imagine there is a reason why said software doesn't have SWEs coding it now - it isn't feasible given the likely scale it would have (I assume just your team). AI may make it feasible, but not in a way that helps the OP - it does so by putting the work within reach of, as you put it, juniors out of uni who aren't even "SWE" grads. That doesn't help the OP.
Also, a piece missing from this comparison is a set of people who don't believe the new tool will actually have a measurable impact on their domain. I assume few-to-none could argue that power tools would have no impact on their profession.
The history of software production as a profession (as against computer science) is essentially a series of incremental increases in the size and complexity of systems (and teams) that don't fall apart under their own weight. There isn't much evidence we have approached the limit here, so it's a pretty good bet for at least the medium term.
But focusing on system size is perhaps a red herring. There is an almost unfathomably vast pool of potential software systems (or customization of systems) that aren't realized today because they aren't cost effective...
The demand for software improvements is effectively inexhaustible. It’s not a zero sum game.
Have people seen some of the recent software being churned out? Hint, it's not all GenAI bubblespit. A lot of it is killer, legitimately good stuff.
you know what i'd do if AI made it so i could replace 10 devs with 8? use the 2 newly-freed up developers to work on some of the other 100000 things i need done
It's not about a discrete product or project, but continuous improvement upon that which already exists is what makes up most of the volume of "what would happen if we had more people".
- a good LIMS (Laboratory Information Management System) that incorporates bioinformatics results. LIMS come from a pure lab, benchwork background, and rarely support the inclusion of bioinformatics analyses on samples included in the system. I have yet to see a lab that uses an off-the-shelf LIMS unmodified - they never do what they say they do. (And the amount of labs running on something built on age-old software still in use is... horrific. I know one US lab running some abomination built on Filemaker Pro)
- Software to manage grants. Who is owed what, what are the milestones and when to remind, who's looking after this, who are the contact persons, due diligence on potential partners, etc. I worked for a grant-giving body and they came up with a weird mix of PowerBI and a pile of Excel sheets and PDFs.
- A thing that lets you catalogue Jupyter notebooks and RStudio projects. I'm drowning in various projects from various data scientists and there's no nice way to centrally catalogue all those file lumps - 'there was this one function in this one project.... let's grep a bit' can be replaced by a central findable, searchable, taggable repository of data science projects.
Oh... oh my. This extends so far beyond data science for me and I am aching for this. I work in this weird intersection of agriculture/high-performance imaging/ML/aerospace. Among my colleagues we've got this huge volume of Excel sheets, Jupyter notebooks, random Python scripts and C++ micro-tools, and more I'm sure. The ones that "officially" became part of a project were assigned document numbers and archived appropriately (although they're still hard to find). The ones that were one-off analyses for all kinds of things are scattered among OneDrive folders, Zip files in OneDrive folders, random Git repos, and some, I'm sure, only exist on certain peoples' laptops.
Well then you can count IDEs, static typing, debuggers, version control etc. as replacing programmers too. But I don't think any of those performance enhancers have really reduced the number of programmers needed.
In fact it's a well known paradox that making a job more efficient can increase the number of people doing that job. It's called the Jevons paradox (thanks ChatGPT - probably wouldn't have been able to find that with Google!)
Making people 20% more efficient is very different to entirely replacing them.
Agree with this take. I think the probability that this happens within my next 20 years of work is very low, but are non-zero. I do cultivate skills that are a hedge against this, and if time moves on and the probability of this scenario seems to get larger and larger, I'll work harder on those skills. Things like fixing cars, welding, fabrication, growing potatoes, etc (which I already enjoy as hobbies). As you said, skills that are helpful if shit were to really hit the fan.
I think there are other "knowledge workers" that will get replaced before that point though, and society will go through some sort of upheaval as this happens. My guess is that capital will get even more consolidated, which is sort of unpleasant to think about.
earnest question because i still consider this a vague, distant future: how did you come up with 20 years?
Its the mid age people 35-45 yr olds that I think will be hit hardest by this if it eventuates. Usually at this point in life there's plenty of commitments (family, mortgage, whatever). The career may end before they are able to retire, but they are either too burdened with things; or ageism sets in making it hard to be adaptable.
And like the parent poster said - even if you were to avoid the industry, what would you pivot to? Anything else you might go into would be trivial to automate by that point.
Those industries also, more wisely IMO, tend to be unionised and to own their own businesses, and so have the incentive to keep their knowledge tight. Even with automation, the customer (their business) and the supplier (them and their staff) are the same, so the value transfer of AI will make their job easier but they will keep the value. All good things to slow down progress and keep some economic rent for yourself and your family. Slow change is a less stressful life.
The intellectual fields lose in an AI world long term; the strong and the renter class (capital/owners) win. That's what "Intelligence is a commodity" that many AI heads keep saying actually means. This opens up a lot of future dystopian views/risks that probably aren't worth the benefit IMO to the majority of people that aren't in the above classes of people (i.e. most people).
The problem with software in general is that it is quite difficult generally to be a "long term" founder at least in my opinion for most people which means most employment comes from large corps/govt's/etc where the supplier (the SWE) is different than the employer. Most ideas generally don't make it or last only briefly, and the ones that stick around usually benefit from dealing with scale - something generally only larger corps have with their ideas (there are exceptions in new fields, but then they become the next large corp and there isn't enough for everyone to do this).
When this moment becomes reality - the world economy will change a lot, all jobs and markets will shift.
And there won’t be any way to future proof your skills and they all will be irrelevant.
Right now many like to say “learn how to work with ai, it will be valuable “ . No, it won’t. Because even now it is absolutely easy to work with it, any developer can pick up ai in a week, and it will become easier and easier.
A better time spent is developing evergreen skills.
*sort of, sometimes, with simple enough problems with sufficiently little context, for code that can be easily tested, and for which sufficient examples exist in the training data.
I mean hey, two years after being promised AGI was literally here, LLMs are almost as useful as traditional static analysis tools!
I guess you could have them generate comments for you based on the code as long as you're happy to proofread and correct them when they're wrong.
Remember when CPUs were obsolete after three years? GPT has shown zero improvement in its ability to generate novel content since it was first released as GPT-2 back in 2019! I would know because I spent countless hours playing with that model.
Secondly, LLMs are objectively useful for coding now. That's not the same thing as saying they are replacements for SWEs. They're a tool, like syntax highlighting or real-time compiler error visibility or even context-aware keyword autocompletion.
Some individuals don't find those things useful, and prefer to develop in a plain text editor that does not have those features, and that's fine.
But all of those features, and LLMs are now on that list, are broadly useful in the sense that they generally improve productivity across the industry. They already right now save enormous amounts of developer time, and to ignore that because you are not one of the people whose time is currently being saved, indicates that you may not be keeping up with understanding the technology of your field.
There's an important difference between a tool being useful for generating novel content, and a tool being useful. I can think of a lot of useful tools that are not useful for generating novel content.
But is that actually a true statement. Are there actual studies to back that up?
AI is hyped to the moon right now. It is really difficult to separate the hype from reality. There are anecdotal reports of AI helping with coding, but there are also anecdotal reports that it gets things almost right but not quite, which often leads to bugs that wouldn't otherwise happen. I think it's unclear whether that is a net win for productivity in software engineering. It would be interesting if there were a robust study about it.
I am aware of an equal number of studies about the time saved overall by use of LLMs, and time saved overall by use of syntax highlighting.
In fact, here's a study claiming syntax highlighting in IDEs does not help code comprehension: https://link.springer.com/article/10.1007/s10664-017-9579-0
Shall we therefore conclude that syntax highlighting is not useful, that developers who use syntax highlighting are just part of the IDE hype train, and that anecdotal reports of syntax highlighting being helpful are counterbalanced by anecdotal reports of $IDE having incorrect syntax highlighting on $Esoteric_file_format?
Most of the failures of LLMs with coding that I have seen has been a result of asking too much of the LLM. Writing a hundred context-aware unit tests is something that an LLM is excellent at, and would have taken a developer a long time previously. Asking an LLM to write a novel algorithm to speed up image processing of the output of your electron microscope will go less well.
Yes. We should conclude that syntax highlighting is not useful in languages that the syntax highlighter does not support. I think basically everyone would agree with this statement.
Similarly, an LLM that worked 100% of the time and could solve any problem would be pretty useful. (Or at least one that worked correctly as often as syntax highlighting does in the situations where it is actually used.)
However, that's not the world we live in. It's a reasonable question to ask whether LLMs are yet good enough that the productivity gained outweighs the productivity lost.
The point I was trying to make was, an LLM is as reliably useful as syntax highlighting, for the tasks that coding LLMs are good at today. Which is not a lot, but enough to speed up junior devs. The issues come when people assume they can solve any problem, and try to use them on tasks to which they are not suited. Much like applying syntax highlighting on an unsupported language, this doesn't work.
Like any tool, there's a learning curve. Once someone learns what does and does not work, it's generally a strict productivity boost.
I fixed a production issue earlier this year that turned out to be a naive infinite loop - it was trying to load all data from a paginated API endpoint, but there was no logic to update the page number being fetched.
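The bug pattern in question, reconstructed as a minimal sketch (the endpoint and names are invented, not the actual code): the loop never advances `page`, so once the API has more than one page it refetches page 1 forever. Below is the corrected shape, with the missing line marked:

```python
def fetch_all(get_page):
    """Load all items from a paginated source until an empty page is returned."""
    items, page = [], 1
    while True:
        batch = get_page(page)
        if not batch:
            break
        items.extend(batch)
        page += 1  # the buggy version omitted this line: an infinite loop
    return items

# A fake three-page API for illustration:
pages = {1: ["a", "b"], 2: ["c"]}
print(fetch_all(lambda p: pages.get(p, [])))  # → ['a', 'b', 'c']
```

A test that only exercises a single-page response, as apparently happened here, passes either way, which is exactly why the generated test gave false confidence.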
There was a test for it. Alas, the test didn't actually cover this scenario.
I mention this because it was committed by a co-worker whose work is historically excellent, but who started using Copilot / ChatGPT. I'm pretty sure it was an LLM-generated function and test, and they were deeply broken.
Mostly they've been working great for this co-worker.
But not reliably.
A very similar example is StackOverflow. If you copy/paste answers verbatim from SO, you will have problems. Some top answers are deeply broken or have obvious bugs. Frequently, SO answers are only related to your question, but do not explicitly answer it.
SO is useful to the industry in the same way LLMs are.
LLMs are in the middle. It's unclear which side of the line they are on. Some anecdotes say one thing, some say another. That's why studies would be great. It's also why syntax highlighting is a bad comparison, since it is not in the grey zone.
No, they're subjectively useful for coding in the view of some people. I find them useless for coding. If they were objectively useful, it would be impossible for me to find them useless because that is what objectivity means.
It's useless to your coding, but useful to the industry of coding.
If that statement isn't coming from ego, then where is it coming from? It's provably true that LLMs can generate working code. They've been trained on billions of examples.
Developers seem to focus on the set of cases where LLMs produce code that doesn't work, and use that as evidence that these tools are "useless".
It has been interesting as a rubber duck, exploring a new topic or language, some code golf, but so far not for production code for me.
Now, understand that most people don't have the same grasp of [your programming language] that you have, so it's probably not easier for them to write it.
I actually said in my comment that exploring a new language is one area I find LLMs to be interesting.
They produce mostly working code, often with odd design decisions that the bot can't fully justify.
The difficult part of coding is cultivating a mental model of the problem + solution space. When you let an LLM write code for you, your mental model falls behind. You can read the new code closely, internalize it, double-check the docs, and keep your mental model up to date (which takes longer than you think), or you can forge ahead, confident that it all more or less looks right. The second option is easier, faster, and very tempting, and it is the reason why various studies have found that code written with LLM assistance introduces more bugs than code written without.
There are plenty of innovations that have made programming a little bit faster (autocomplete, garbage collection, what-have-you), but none of them were a silver bullet. LLMs aren't either. The essential complexity of the work hasn't changed, and a chat bot can't manage it for you. In my (admittedly limited) experience with code assistants, I've found that the faster you move with an LLM, the more time you have to spend debugging afterwards, and the more difficult that process becomes.
If the stakeholders knew what they needed to build and how, then they could use LLMs; but translating complex requirements into code is something that these tools are not even close to cracking.
Completely agree.
What I don't agree with is statements like these:
> LLM’s never provide code that pass my sniff test
To me, these (false) absolute statements about chatbot capabilities are being rehashed so frequently that they derail every conversation about using LLMs for dev work. You'll find similar statements in nearly every thread about LLMs for coding tasks.
It's provably true that LLMs can produce working code. It's also true that some increasingly large portion of coding is being offloaded to LLMs.
In my opinion, developers need to grow out of this attitude that they are John Henry and they'll outpace the mechanical drilling machine. It's a tired conversation.
You've restated this point several times but the reason it's not more convincing to many people is that simply producing code that works is rarely an actual goal on many projects. On larger projects it's much more about producing code that is consistent with the rest of the project, and is easily extensible, and is readable for your teammates, and is easy to debug when something goes wrong, is testable, and so on.
The code working is a necessary condition, but is insufficient to tell if it's a valuable contribution.
LLMs can sometimes provide the bare minimum. And then you have to refactor and massage it all the way to the good bit; but unlike looking up other people's endeavors on something like Stack Overflow, with the LLM's code I have no context for why it "thought" that was a good idea. If I ask it, it may parrot something from the relevant training set, or it might be bullshitting completely. The end result? This is _more_ work for a senior dev, not less.
Hence it has never passed my sniff test. Its code is, at best, of a quality that even junior developers wouldn't open a PR for yet. Or if they did, they'd be asked to explain how and why, and would quickly learn not to open code for review before they've properly considered the implications.
This is correct - but it's also true that LLMs can produce flawed code. To me the cost of telling whether code is correct or flawed is larger than the cost of me just writing correct code. This may be an AuDHD thing but I can better comprehend the correctness of a solution if I'm watching (and doing) the making of that solution than if I'm reading it after the fact.
From what I've seen of Copilots, while they can produce working code, I've not seen them offer much beyond the surface level, which I can type fast enough myself. I am also deeply perturbed by some interviews I've done for senior candidates recently who use them and, when asked to disable them for a collaborative coding task, completely fall apart because of their dependency on the tools over knowledge.
This is not to say I do not see value in AI, LLMs or ML (I very much do). However, I code broadly at the speed of thought, and that's not really something I think will be massively aided by it.
At the same time, I know I am an outlier in my practice relative to lots around me.
While I don't doubt other improvements that may come from LLM in development, the current state of the art feels less like a mechanical drill and more like an electric triangle.
Senior devs know this, and factor code down to the minimum necessary.
Junior devs and LLMs think that writing code is the point and will generate lots of it without worrying about things like leverage, levels of abstraction, future extensibility, etc.
You can write good code or bad code with LLMs.
Of course if you try to one shot something complex with a single line prompt, the result will be bad. This is why humans are still needed and will be for a long time imo.
Empirically, LLMs work best at coding when doing completely "routine" coding tasks: CRUD apps, React components, etc. Because there's lots of examples of that online.
I'm writing a data-driven query compiler and LLM code assistance fails hard, in both blatant and subtle ways. There just isn't enough training data.
Another argument: if an LLM could function like a senior dev, it could learn to program in a new programming language given the language's syntax, docs and API. In practice they cannot. It doesn't matter what you put into the context; LLMs just seem incapable of writing in niche languages.
Which to me says that, at least for now, their capabilities are based more on pattern identification and repetition than they are on reasoning.
That said, you’re right of course that it will do better when there’s more training data.
ChatGPT, even now in late 2024, still hallucinates standard-library types and methods more often than not whenever I ask it to generate code for me. Granted, I don't target the most popular platforms (i.e. React/Node/etc.); I'm currently in a .NET shop, which is a minority platform now, but ChatGPT's poor performance is surprising given the overall volume and quality of .NET content and documentation out there.
My perception is that "applications" work is more likely to be automated away by LLMs/copilots because so much of it is so similar to everyone else's, so I agree with those who say LLMs are only as good as the examples of something online. Asking ChatGPT to write something for a less-trodden area, like Haskell or even a Windows driver, is frequently a complete waste of time, as whatever it generates is far beyond salvaging.
Beyond hallucinations, my other problem lies in the small context window which means I can’t simply provide all the content it needs for context. Once a project grows past hundreds of KB of significant source I honestly don’t know how us humans are meant to get LLMs to work on them. Please educate me.
I’ll declare I have no first-hand experience with GitHub Copilot and other systems because of the poor experiences I had with ChatGPT. As you’re seemingly saying that this is a solved problem now, can you please provide some details on the projects where LLMs worked well for you? (Such as which model/service, project platform/language, the kinds of prompts, etc?). If not, then I’ll remain skeptical.
Not an argument, unsolicited advice: my guess is you are asking it to do too much work at once. Make much smaller changes. Try to ask for roughly as much as you would put into one git commit (per best practices); for me that's usually editing a dozen or fewer lines of code.
> Once a project grows past hundreds of KB of significant source I honestly don’t know how us humans are meant to get LLMs to work on them. Please educate me.
https://github.com/Aider-AI/aider
Edit: The author of aider puts the percentage of the code written by LLMs for each release. It's been 70%+. But some problems are still easier to handle yourself. https://github.com/Aider-AI/aider/releases
Then why can't I see this magical code that is produced? I mean a real big application with a purpose and multiple dependencies, not yet another ReactJS todo list. I've seen comments like that a hundred times already but not one repository that could be equivalent to what I currently do.
For me the experience of LLM is a bad tool that calls functions that are obsolete or do not exist at all, not very earth-shattering.
They don't have to replace you to reduce headcount. They could increase your workload so that where they needed five senior developers, they can do with maybe three. That's "six of one, half a dozen of the other", except two developers lost a job, right?
From what I've seen of them, the good ones mostly produce OK code: not terrible, and it usually works.
I like them even at that low-ish bar, and I find them both a time-saver and a personal motivation assistant, but they're still a thing that needs a real domain expert to spot the mistakes they make.
> Developers seem to focus on the set of cases that LLM's produce code that doesn't work, and use that as evidence that these tools are "useless".
I do find it amusing how many humans turn out to be stuck thinking in boolean terms, dismissing the I in AGI and calling these systems "useless" because they "can't take my job". Same with the G in AGI: dismissing the breadth of something that speaks 50 languages, when humans who speak five or six languages are considered unusually skilled.
I am pro AI and I'm probably even overvaluing the value AI brings. However, for me, this doesn't work in more "esoteric" programming languages or those with stricter rulesets, like Rust. LLMs provide fine JS code, since there's no compiler to satisfy, but C++ without undefined behaviour or compiling Rust code is rare.
There's also no chance of LLMs providing compiling code if you're using a library version with a newer API than the one in the training set.
Even if it did work, working code is barely half the battle.
Yeah for simple examples, especially in web dev. As soon as you step outside those bounds they make mistakes all the time.
As I said, they're still useful, because roughly correct but buggy code is often quite helpful when you're programming. But there's zero chance you can just say "write me a driver for the nRF905 using Embassy and embedded-hal" and get something working. Whereas I, a human, can do that.
https://chatgpt.com/share/6760c3b3-bae8-8009-8744-c25d5602bf...
Because, one way or another, you're still going to need to become fluent enough in the problem domain & the API that you can fully review the implementation and make sure chatgpt hasn't hallucinated in any weird problems. And chatgpt can't really explain its own work, so if anything seems funny, you're going to have to sleuth it out yourself.
And at that point, it's just 339 lines of code, including imports, comments, and misc formatting. How much time have you really saved?
I think that software development is just an extremely poor market segment for these kinds of tools - we've already got mountains of productivity tools that minimize how much time we need to spend doing the silly rote programming stuff - most of software development is problem solving.
Given the growth-oriented capitalist society we live in in the west, I'm not all that worried about senior and super-senior engineers being fired. I think a much more likely outcome is that if a business does figure out a good way to turn an LLM into a force-multiplier for senior engineers, they're going to use that to grow faster.
There is a large untapped niche too that this could potentially unlock: projects that aren't currently economically viable due to the current cost of development. I've done a few of these on a volunteer basis for non-profits but can't do it all the time due to time/financial constraints. If LLM tech actually makes me 5x more productive on simple stuff (most of these projects are simple) then it could get viable to start knocking those out more often.
No, it really isn't. Repeatedly, the case is that people are trying to pass off GPT's work as good without actually verifying the output. I keep seeing "look at this wonderful script GPT made for me to do X", and it does not pass code review, and is generally extremely low quality.
In one example, a bash script was generated to count the SLoC changed by each author; it was extremely convoluted, and after I simplified it, I noticed that the output of the simplified version differed, because the original omitted changes that were only a single line.
In another example, it took several back-and-forths during a review to ask "where are you getting this code? / why do you think this code works, when nothing in the docs supports that?", and eventually it was admitted that GPT wrote it. The dev who wrote it would have been far better served by RTFM than by a several-cycle-long review that ended up with most of GPT's hallucinations stripped from the PR.
Those who think LLM output is good have not reviewed that output strenuously enough.
> Is there some expectation that these things won't improve?
Because randomized token generation inherently lacks actual reasoning about the behavior of the code. My code generator does not.
The moment you have some weird library that 4 people in the world know (which happens more than you’d expect) or hell even something without a lot of OSS code what exactly is an LLM going to do? How is it supposed to predict code that’s not derived from its training set?
My experience thus far is that it starts hallucinating and it’s not really gotten any better at it.
I’ll continue using it to generate sed and awk commands, but I’ve yet to find a way to make my life easier with the “hard bits” I want help with.
The first example I gave was an example of someone using an LLM to generate sed & awk commands, on which it failed spectacularly, on everything from the basics to higher-level stuff. The emitted code even included awk, and the awk was poor quality: e.g., it had to store the git log output & make several passes over it with awk, when in reality you could just `git log | awk`; it was doing `... | grep | awk`, which, if you know awk, really isn't required. The regex it was using to work with the git log output it was parsing with awk was wrong, resulting in the wrong output. Even trivial "sane bash"-isms it messed up: it didn't quote variables that needed to be quoted, didn't take advantage of bashisms despite requiring bash in the shebang, etc.
The task was a simple one, bordering on trivial, and any way you cut it, from "was the code correct?" to "was the code high quality?", it failed.
But it shouldn't be terribly surprising that an LLM would fail at writing decent bash: its input corpus would resemble bash found on the Internet, and IME, most bash out there fails to follow best-practice; the skill level of the authors probably follows a Pareto distribution due to the time & effort required to learn anything. GIGO, but with way more steps involved.
I've other examples, such as involving Kubernetes: Kubernetes is also not in the category of "4 people in the world know": "how do I get the replica number from a pod in a statefulset?" (i.e., the -0, -1, etc., at the end of the pod name) — I was told to query,
.metadata.labels.replicaset-序号
(It's just nonsense; not only does no such label exist for what I want, it certainly doesn't exist with a Chinese name. AFAICT, that label name did not appear on the Internet at the time the LLM generated it, although it does, of course, now.) Again, a simple task, a wide amount of documentation & examples in the training set, and garbage output.

It really is. Either that or you're not thinking about what you're saying.
Imagine code passes your rigorous review.
How do you know that it wasn’t from an LLM?
If it’s because you know that you only let good code pass your review and you know that LLMs only generate bad code, think about that a bit.
That's not what I'm saying (and it's a strawman; yes, presumably some LLM code would escape review and I wouldn't know it's from an LLM, though I find that unlikely, given…) — what I'm saying is of LLM generated code that is reviewed, what is the quality & correctness of the reviewed code? And it's resoundingly (easily >90%) crap.
Obviously we can't sample from unknown-authorship … nor am I; I'm sampling problems that I and others run through an LLM, and the output thereof.
The other facet of this point is that I believe a lot of the craze that users using the LLM have is driven by them not looking closely at the output; if you're just deriving code from the LLM, chucking it over the wall, and calling it a day (as was the case from one of the examples in the comment above) — you're perceiving the LLM as being useful, when it fact it is leaving bugs that you're either not attributing to it, someone else is cleaning up (again, that was the case in the above example), etc.
What makes you so sure that none of the resoundingly non-crap code you have reviewed was produced by an LLM?
It’s like saying you only like homemade cookies not ones from the store. But you may be gleefully chowing down on cookies that you believe are homemade because you like them (so they must be homemade) without knowing they actually came from the store.
From the post you're replying to:
> Obviously we can't sample from unknown-authorship … nor am I; I'm sampling problems that I and others run through an LLM, and the output thereof.
> I'm sampling problems that I and others run through an LLM
This is not what’s happening unless 100% of the problems they’ve sampled (even outside of this fun little exercise) have been run through an LLM.
They're pretending it doesn't matter that they're looking at untold numbers of other problems without being aware of whether those are LLM-generated or not.
No. The accurate way to say this is:
“The code that they have reviewed that they know came from an LLM has been, to their standards, subpar.”
A human junior developer can learn from this tutoring and rarely regresses over time. But LLMs, by design, cannot and do not rewire their understanding of the problem space over time, nor do they remember examples and lessons from previous iterations to build upon. I have to handhold them forever, and they never learn.
Even when they use significant parts of the existing codebase as their context window they’re still blind to the whole reality and history of the code.
Now just to be clear, I do use LLM’s at my job. Just not to code. I use them to parse documents and assist users with otherwise repetitive manual tasks. I use their strength as language models to convert visual tokens parsed by an OCR to grasp the sentence structure and convert that into text segments which can be used more readily by users. At that they are incredible, even something smaller like llama 7b.
Are there any expectations that things will? Is there more untapped, high-quality data that LLMs can ingest? Will a larger model perform meaningfully better? Will it solve the pervasive issue of generating plausible-sounding bullshit?
I used LLMs for a while, I found them largely useless for my job. They were helpful for things I don't really need help with, and they were mostly damaging for things I actually needed.
> This is ego speaking.
Or maybe it was an accurate assessment for his use case, and your wishful thinking makes you think it was his ego speaking.
Seems like an odd question. The answer is obviously yes: there is a very pervasive expectation that LLMs will continue to improve, and it seems odd to suggest otherwise. Hundreds of billions of dollars are being spent on AI training, and that number is increasing each year.
> Is there more untapped great quality data that LLMs can ingest?
Why wouldn't there be? AI's are currently trained on the internet but that's obviously not the only source of data.
> Will a larger model perform meaningfully better?
The answer to this, is also yes. It is well established that, all else being equal, a bigger model is better than a smaller model, assuming that the smaller model hasn't already captured all of the available information.
It isn't odd at all. In the early 21st century there were expectations of ever, exponentially increasing processing power. This misconception partially gave us the game Crysis, which, if I'm not mistaken, was written with wildly optimistic assumptions about the growing power of computer hardware.
People are horrible at predicting the future of technology (beyond meaninglessly or trivially broad generalizations), and even when predictions turn out to be correct, they're often correct for the wrong reasons. If we were better at it, even in the shortest term, where such predictions should be easiest, we'd all be megamillionaires, because we'd have seen the writing on the wall and invested in Nvidia before the AI craze reached its current fever pitch.
I did this when I saw the first stable diffusion AI titties. So far up over 10x.
If I had a nickel for every tech that takes off once porn finds its way to it, I wouldn't be counting in nickels.
YMMV but count me in the camp that I think there’s better odds that LLMs are at or near their potential vs in their nascent stages.
That makes an assumption that throwing dollars on AI training is a surefire way to solve the many shortcomings of LLMs. It is a very optimistic assumption.
> Why wouldn't there be? AI's are currently trained on the internet but that's obviously not the only source of data.
"The Internet" basically encompasses all meaningful sources of data available, especially if we are talking specifically about software development. But even beyond that, it is very unclear what other high quality data it would consume that would improve the things.
> The answer to this, is also yes. It is well established that, all else being equal, a bigger model is better than a smaller model, assuming that the smaller model hasn't already captured all of the available information.
I love how you conveniently sidestepped the part where I ask whether it would improve the pervasive issue of generating plausible-sounding bullshit.
The assumption that generative AI will improve is as valid as the assumption that it will plateau. It is quite possible that what we are seeing is "as good as it gets", and some major breakthrough, that may or may not happen on our lifetime, is needed.
That's not an assumption that I am personally making. That's what experts in the field believe.
> "The Internet" basically encompasses all meaningful sources of data available, especially if we are talking specifically about software development. But even beyond that, it is very unclear what other high quality data it would consume that would improve the things.
How about, interacting with the world?
> I love how you conveniently sidestepped the part where I ask if it would improve the pervasive issue of generating plausibly sounding bullshit.
I was not trying to "conveniently sidestep". To me, that reads like a more emotional wording of the first question you asked: whether LLMs are expected to improve. To that question I answered yes.
> The assumption that generative AI will improve is as valid as the assumption that it will plateau.
It is certainly not as valid to say that generative AI will plateau. This is comparable to saying that the odds of winning any bet are 50/50, because you either win or lose. Probabilities are a thing. And the probability that the trend will plateau is lower than not.
> It is quite possible that what we are seeing is "as good as it gets", and some major breakthrough, that may or may not happen on our lifetime, is needed.
It's also possible that dolphins are sentient aliens sent here to watch over us.
People invested in something believe that throwing money at it will make it better? Color me shocked.
"eat meat, says the butcher"
The rest of your answer amounts to optimistic assumptions that yes, AI's future is rosy, based on nothing but a desire that it be so, because of course it will.
> This is ego speaking.
That's been my experience of LLM-generated code that people have submitted to open source projects I work on. It's all been crap. Some of it didn't even compile. Some of it changed comments that were previously correct to say something untrue. I've yet to see a single PR that implemented something useful.
I think the problem has become people with no bar for quality submitting PRs that ruin a repo, and then getting mad when it's not merged because ChatGPT reviewed the patch already.
Are you sure it was people? Maybe the AI learned how to make PRs, or is learning how to do so by using your project as a test bed.
Why do we expect that LLMs are going to buck this trend? It's not for accuracy--the previous attempts, when demonstrating their proof-of-concepts, actually reliably worked, whereas with "modern LLMs", virtually every demonstration manages to include "well, okay, the output has a bug here."
LLMs as a product feel practically similar, because _even if_ they could write code that worked in large enough quantities to constitute any decently complex application, the person telling them what problem to solve has to understand the problem space, since LLMs can't reason.
Given that neither of those things are true, it's not much different from visual programming tools, practically speaking.
As soon as you have something less common, it will give you widely incorrect garbage that does not make any sense. Even worse, it APPEARS correct, but it won’t work or will do something else completely.
And I use 4o and o1 every day. Mostly for boilerplate and boring stuff.
I have colleagues that submit ChatGPT generated code and I’ll immediately recognize it because it is just very bad. The colleague would have tested it, so the code does work, but it is always bad, weird or otherwise unusual. Functional, but not nice.
ChatGPT can give very complicated solutions to things that can be solved with a one-liner.
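A made-up illustration of the pattern (not a verbatim LLM output, just the shape of it):

```python
from collections import Counter

# The kind of thing I mean: a hand-rolled frequency count...
def count_words_verbose(text):
    counts = {}
    for word in text.split():
        if word in counts:
            counts[word] = counts[word] + 1
        else:
            counts[word] = 1
    return counts

# ...where the standard library already has the one-liner.
def count_words(text):
    return Counter(text.split())

print(dict(count_words("to be or not to be")))
# prints {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

Both are functional, but the verbose version is the one that shows up in the PR, and it reads like it was written by someone who has never seen the standard library.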
Sure. But the expectation is quantitative improvement - qualitative improvement has not happened, and is unlikely to happen without major research breakthroughs.
LLMs are useful. They still need a lot of supervision and hand-holding, and they'll continue to for a long while.
And no, it's not "ego speaking". It's long experience. There is fundamentally no reason to believe LLMs will take a leap to "works reliably in subtle circumstances, and will elicit requirements as necessary". (Sure, if you think SWE work is typing keys to make some code, any code, appear, then LLMs are a threat.)
It would need to become something literally extraterrestrial that has not evolved in the 3.7b+ years prior.
I wouldn't say it's impossible, but if evolution ever got there that creature would be so far removed from a bird that I don't think we'd recognize it :p
That, or they "keep improving every month" until they become evolved enough to build rockets, at which point the entire point of them being birds becomes moot.
Bitcoin was released in what year? I still cannot use it for payments.
No-code solutions exist since when? And still programmers work...
I don't think all hyped techs are fads. For instance, we use SaaS now instead of installing software locally; this transition took the world by storm.
But tech that needs lots of ads, lots of zealots, and incredible promises usually turns out to be a fad.
If anything, I feel like this argument works against you. If people are willing to replace locally installed software with shitty web "apps" that barely compare, why do you think they won't be willing to replace good programmers with LLMs doing a bad job simply because it's trendy?
I love local apps, but it's undeniable that developers having to split their attention between platforms lowered quality by a lot. You're probably remembering the exemplars, not the bulk, which half-worked and looked bad too.
Sure, I can ask it to write (wrong) boilerplate, but that is hardly where the work ends. It is up to me to spend the time doing careful due diligence at each and every step. I could ask it to patch each mistake, but again, that relies on a trained, skillful, often formally educated domain expert on the other end puppeteering the generative copywriter.
For the many cases where computer programming is similar to writing boilerplate, it could indeed be quite useful, but I find that the long tail of domain expertise will always be outside the reach of data-driven statistical learners.
https://en.wikipedia.org/wiki/Npm_left-pad_incident
https://old.reddit.com/r/web_design/comments/35prfv/designer...
Much like asking an LLM to solve a problem for you.
You aren't going around applying novel techniques of numerical linear algebra.
Btw. I come from a math background and later went into programming.
Yes. The current technology is at a dead end. The costs for training and for scaling the network are not sustainable. This has been obvious since 2022 and is related to the way in which OpenAI created their product. There is no path described for moving from the current dead end technology to anything that could remotely be described as "AGI."
> This is ego speaking.
This is ignorance manifest.
LLMs WILL change the job market dynamics in the coming years. Engineers have been vastly overpaid over the last 10 years. There is no reason not to expect a reversion to the mean here. Getting a 500k offer from a FAANG because you studied leetcode for a couple of weeks is not going to fly anymore.
What company ever feels they have “enough” engineers? There’s always a massive backlog of work to do. So unless you are just maintaining some frozen legacy system, why would you cut your team size down rather than doubling or tripling your output for the same cost? Especially considering all your competitors have a similar productivity boost available. If your reaction is to cut the team rather than make your product better at a much faster rate (or build more products), you will likely be outcompeted by others willing to make that investment.
In addition to this, most companies aren't willing to give away all of their proprietary IP and knowledge through third-party servers.
It will be a while before engineering jobs are at risk.
Most of the noise I hear about LLMs is about expectations that things will improve. It's always like that: people extrapolate and get money from VCs for that.
The thing is: nobody can know. So when someone extrapolates and says it's likely to happen as they predict, they shouldn't be surprised to get answers that say "I don't have the same beliefs".
It absolutely isn't. I have yet to find an area where LLM-generated code solves the kinds of problems I work on more reliably, effectively, or efficiently than I do.
I'm also not interested in spending my mental energy on code reviews for an uncomprehending token-prediction golem, let alone finding or fixing bugs in the code it blindly generates. That's a waste of my time and a special kind of personal hell.
I'm a Cursor user, for example, and its Tab completion is by far the most powerful autocomplete I've ever used.
There are scenarios where you can do major refactors by simply asking (extract this table to its own component while using best React practices to avoid double renders) and it does so instantly. Refactor the tests for it? Again, semi-instant. Meanwhile, the monocle-wielding "senior" is proudly copy-pasting, creating files and fixing indentation as I move on to the next task.
I don't expect LLMs to do the hard work, but to speed me up.
And people ignoring LLMs are simply slower and less productive.
It's a speed multiplier right now, not a substitute.
If you complain that you don't like the code, you don't understand the tools and can't use them, end of story.
I suspect you haven't seen code review at a 500+ seat company.
I'm in my 40s with a pretty interesting background and I feel like maybe I'll make it to retirement. There are still mainframe programmers after all. Maintaining legacy stuff will still have a place.
But I think we'll be the last generation where programmers will be this prevalent, and the job market will severely contract. No/low-code solutions backed by LLMs are going to eat away at a lot of what we do, and for traditional programming the tooling we use is going to improve rapidly and greatly reduce the number of developers needed.
Very much so. These things are moving so quickly and agentic systems are already writing complete codebases. Give it a few years. No matter how 1337 you think you are, they are very likely to surpass you in 5-10 years.
Examples?
Anecdotally, I can't recall ever seeing someone on Hacker News accuse LLMs of thinking. This site is probably one of the most educated corners of the internet on the topic.
> They shouldn’t be expected to improve in accuracy because of what they are and how they work.
> And there’s nothing in them that will constrain their token prediction in a way that improves accuracy.
These are both incorrect. LLMs are already markedly better today than they were in 2022.
There are definitely people here who think LLMs are conscious.
I mean, in a way, yeah.
The last 10 years were basically one hype cycle after another, filled with lofty predictions that never quite panned out. Besides the fact that many of these predictions fell short, there's also the perception that progress on these various things ground to a halt once the interest faded.
3D printers are interesting. Sure, they have gotten incrementally better after the hype cycle died out, but otherwise their place in society hasn't changed, nor will it likely ever change. It has its utility for prototyping and as a fun hobbyist machine for making plastic toys, but otherwise I remember people saying that we'd be able to just 3D print whatever we needed rather than relying on factories.
Same story with VR. We've made a lot of progress since the first Oculus came out, but otherwise their role in society hasn't changed much since then. The latest VR headsets are still as useless and still as bad for gaming. The metaverse will probably never happen.
With AI, I don't want to be overly dismissive, but at the same time there's a growing consensus that pre-training scaling laws are plateauing, and AI "reasoning" approaches always seemed kind of goofy to me. I wouldn't be surprised if generative AI reaches a kind of equilibrium where it incrementally improves but improves in a way where it gets continuously better at being a junior developer but never quite matures beyond that. The world's smartest beginner if you will.
Which is still pretty significant mind you, it's just that I'm not sure how much this significance will be felt. It's not like one's skillset needs to adjust that much in order to use Cursor or Claude, especially as they get better over time. Even if it made developers 50% more productive, I feel like the impact of this will be balanced-out to a degree by declining interest in programming as a career (feel like coding bootcamp hype has been dead for a while now), a lack of enough young people to replace those that are aging out, the fact that a significant number of developers are, frankly, bad at their job and gave up trying to learn new things a long time ago, etc etc.
I think it really only matters in the end if we actually manage to achieve AGI, once that happens though it'll probably be the end of work and the economy as we know it, so who cares?
I think the other thing to keep in mind is that the history of programming is filled with attempts to basically replace programmers. Prior to generative AI, I remember a lot of noise over low-code / no-code tools, but they were just the latest chapter in the evolution of low-code / no-code. Kind of surprised that even now in Anno Domini 2024 one can make a living developing small-business websites due to the limitations of the latest batch of website builders.
But for VR I think we're still closer to the bottom of the curve - Meta and Valve need something to really sell the technology. The gamble for Valve was that it'd be Half Life: Alyx, and for Meta it was portable VR but the former is too techy to set up (and Half Life is already a nerdy IP) while Meta just doesn't have anything that can convince the average person to get a headset (despite me thinking it's a good value just as a Beat Saber machine). But they're getting there - I've convinced a few friends to get a Quest 3S just to practice piano with Virtuoso and I think it's those kinds of apps I hope we see more of that will bring VR out of the slump.
And then LLMs I think their hype cycle is a lot more elevated since even regular people use them extensively now. There will probably be a crash in terms of experimentation with them but I don't see people stopping their usage and I do see them becoming a lot more useful in the long term - how and when is difficult to predict at the top of the hype curve.
And yet nothing has really changed because he’s still using it to print dumb tchotchkes like every other hobbyist 10 years ago.
I can foresee them getting better but never getting good enough to where they actually fundamentally change society or live up to past promises
> This is ego speaking.
Consider this: 100% of AI training data is human-generated content.
Generally speaking, we apply the 90/10 rule to human generated content: 90% of (books, movies, tv shows, software applications, products available on Amazon) is not very good. 10% shines.
In software development, I would say it's more like 99 to 1 after working in the industry professionally for over 25 years.
How do I divorce this from my personal ego? It's easy to apply objective criteria:
- Is the intent of code easy to understand?
- Are the "moving pieces" isolated, such that you can change the implementation of one with minimal risk of altering the others by mistake?
- Is the solution in code a simple one relative to alternatives?
The majority of human produced code does not pass the above sniff test. Most of my job, as a Principal on a platform team, is cleaning up other peoples' messes and training them how to make less of a mess in the future.
If the majority of human generated content fails to follow basic engineering practices that are employed in other engineering disciplines (i.e: it never ceases to amaze me how much of an uphill battle it is just to get some SWEs just to break down their work into small, single responsibility, easily testable and reusable "modules") then we can't logically expect any better from LLMs because this is what they're being trained on.
And we are VERY far off from LLMs that can weigh the merits of different approaches within the context of the overall business requirements and choose which one makes the most sense for the problem at hand, as opposed to just "what's the most common answer to this question?"
LLMs today are a type of magic trick. You give it a whole bunch of 1s and 0s so that you can input some new 1s and 0s, and it uses some fancy probability maths to predict: "based on the previous 1s and 0s, what are the statistically most likely next 1s and 0s to follow from the input?"
That is useful, and the result can be shockingly impressive depending on what you're trying to do. But the limitations are so fundamental that the prospect of replacing an entire high-skilled profession with that magic trick is kind of a joke.
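Stripped of the scale, the "magic trick" described above can be caricatured with a toy bigram model. This is purely illustrative, not how production LLMs work (they use neural networks over learned representations, not raw counts), but it shows the "most likely next token" idea in miniature:

```python
from collections import Counter, defaultdict

# Toy "language model": bigram counts over a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Return the statistically most likely next token, or None if unseen."""
    followers = counts[token]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" - the most frequent follower of "the"
```

Everything an LLM emits is, in spirit, this lookup done at vastly greater scale and generality: no model of the world, just "what usually comes next."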
A ton of huge business full of Sr Principal Architect SCRUM masters are about to get disrupted by 80 line ChatGPT wrappers hacked together by a few kids in their dorm room.
Software is interesting because if you buy a refrigerator, even an inexpensive one, you have certain expectations as to its basic functions. If the compressor were to cut out periodically in unexpected ways, affecting your food safety, you would return it.
But in software customers seem to be conditioned to just accept bugs and poor performance as a fact of life.
You're correct that customers don't care about "code quality", because they don't understand code or how to evaluate it.
But you're assuming that customers don't care about the quality of the product they are paying for, and you're divorcing that quality from the quality of the code as if the code doesn't represent THE implementation of the final product. The hardware matters too, but to assume that code quality doesn't directly affect product quality is to pretend that food quality is not directly impacted by its ingredients.
I worked in companies with terrible code, that deployed on an over-engineered cloud provider using custom containers hacked with a nail and a screwdriver, but the product was excellent. Had bugs here and there, but worked and delivered what needs to be delivered.
SWEs need to realize that code doesn't really matter. For 70 years we have been debating the best architecture patterns, and yet the biggest fear of every developer is working on legacy code, as it's an unmaintainable piece of ... written by humans.
What we need, admittedly, is more research and study around this. I know of one study which supports my position, but I'm happy to admit that the data is sparse.
https://arxiv.org/abs/2203.04374
From the abstract:
"By analyzing activity in 30,737 files, we find that low quality code contains 15 times more defects than high quality code."
Most companies have no relation between their code and their products at all - a major food conglomerate will have hundreds or thousands of IT personnel and no direct link between defects in their business process automation code (which is the #1 employment of developers) and the quality of their products.
For companies where the product does have some tech component (e.g. the refrigerators mentioned above), again, I'd bet that most of that company's developers don't work on any code that's intended to be in the product; in such a company there is simply far more programming work outside of the product than inside it. The companies making a software-first product (like startups on Hacker News), where a software defect implies a product defect, are an exception, not the mainstream.
Having poor quality code makes refactoring for new features harder, it increases the time to ship and means bugs are harder to fix without side effects.
It also means changes have more side effects and are more likely to contain bugs.
For an MVP or a startup just running off seed funding? Go ham with LLMs and get something in front of your customers, but then when more money is available you need to prioritise making that early code better.
I've seen plenty of companies implode because they fired the one guy that knew their shitty codebase.
The day there is no need to debate systems architecture anymore is the heat death of the universe. Maybe before that AGI will be debating it for us, but it will be debated.
Probably doesn't change the conclusion, but this is no longer the case.
I've yet to try Phi-4, but the whole series has synthetic training data; I don't value Phi-3 despite what the benchmarks claim.
The possible outcome space is not binary (at least in the near term), i.e. either AI replace devs, or it doesn't.
What I'm getting at is this: there's a pervasive attitude among some developers (generally older developers, in my experience) that LLMs are effectively useless. If we're being objective, that is quite plainly not true.
These conversations tend to start out with something like: "Well _my_ work in particular is so complex that LLMs couldn't possibly assist."
As the conversation grows, the tone gradually shifts to admitting: "Yes, there are some portions of a codebase where LLMs can be helpful, but they can't do _everything_ that an experienced dev does."
It should not even be controversial to say that AI will only improve at this task. That's what technology does, over the long run.
Fundamentally, there's ego involved whenever someone says "LLMs have _never_ produced usable code." That statement is provably false.
Every SaaS, marketplace is at risk of extinction, superseded by AI agents communicating ad-hoc. Management and business software replaced by custom, one-off programs built by AI. The era of large teams painstakingly building specialized software for niche use cases will end. Consequently we’ll have millions of unemployed developers, except for the ones maintaining the top level orchestration for all of this.
What systems do you think are going to start disappearing? I'm unclear how LLMs are contributing to systems becoming redundant.
I think we'll see it first in internal reporting tools, where the stakeholder tries to explain something very specific they want to see (logical or not) and when it's visibly wrong they can work around it privately.
When I read things like this I really wonder if half of the people on HackerNews have ever held a job in software development (or a job at all to be fair).
None of what you describe is even remotely close to reality.
Stuff that half works gets you fully fired by the client.
I have seen the above story play out literally dozens of times in my career.
Do you perhaps have any friends at companies which hired overseas contractors? Or system-admins working at smaller companies or nonprofits? They're more-likely to have fun stories. I myself remember a university department with a master all_students.xls file on a shared drive (with way too many columns and macros in it) that had to be periodically restored from snapshots every time it got corrupted...
How does that make any sense? How is AI, and especially GenAI, something that is by definition fallible, better in ANY way than current frameworks that allow you to write CRUD applications deterministically with basically one line of code per endpoint (if that)?
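The point about determinism can be made concrete. A hypothetical in-memory sketch (not any particular framework's API) of the CRUD machinery that frameworks generate from roughly one declarative line per resource — note there is nothing here for a fallible model to get wrong:

```python
# Deterministic in-memory CRUD layer: the kind of code that web frameworks
# generate mechanically from a single declarative line per resource.
class CrudStore:
    def __init__(self):
        self._rows, self._next_id = {}, 1

    def create(self, data):
        row = {"id": self._next_id, **data}
        self._rows[self._next_id] = row
        self._next_id += 1
        return row

    def read(self, row_id):
        return self._rows.get(row_id)

    def update(self, row_id, data):
        if row_id in self._rows:
            self._rows[row_id].update(data)
        return self._rows.get(row_id)

    def delete(self, row_id):
        return self._rows.pop(row_id, None)

store = CrudStore()
item = store.create({"name": "widget"})
print(store.read(item["id"]))  # {'id': 1, 'name': 'widget'}
```

Given the same inputs this behaves identically every time, which is exactly the property generative approaches give up.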
As a direct example from myself, I now acquire and run small e-commerce brands. When I decided to move my inventory management from Google Sheets into an actual application, I looked at vendors but ultimately just decided to build my own. My coding skills are pretty minimal, but sufficient that I was able to produce what I needed with the help of LLMs. It has the advantages of being cheaper than buying and also purpose-built to my needs.
So yeah, basically the tl;dr is that for internal tools, I believe that LLMs giving non-developers sufficient coding skills will shift the build vs. buy calculus squarely in the direction of build, with the logical follow-on effects to companies trying to sell internal tools software.
As you visualize whole swaths of human workers getting automated away, also visualize the nitty gritty of day-to-day work with AI. If it gets something wrong, it will say "I apologize" until you, dear user, are blue in the face. If an actual person tried to do the same, the blueness would instead be on their, not your, face. Therein lies the value of a human worker. The big question, I think, is going to be: is that value commensurate to what we're making on our paycheck right now?
The value of a human worker is in a more meaningful apology? I think the relevant question here is who's going to make more mistakes, not who's going to be responsible when they happen. A good human is better than AI today, but that's not going to last long.
there is going to be so much money to make as a consultant fixing these setups, I can't wait!
I mostly agree with this for now, but obviously LLMs will continue to improve and be able to handle greater and greater complexity without issue.
> Especially if the person driving it doesn't know what they don't know about the domain.
Sure, but if the person driving it doesn't know what they're doing, they're also likely to do a poor job buying a solution (getting something that doesn't have all the features they need, selecting something needlessly complex, overpaying, etc.). Whether you're building or buying a piece of enterprise software, you want the person doing so to have plenty of domain expertise.
Sometimes I wonder if people saying this stuff have actually worked in development at all.
Old school desktop software takes very little maintenance. Once you get rid of user tracking, AB testing, monitoring, CICD pipelines, microservices, SOC, multi tenant distributed databases, network calls and all the other crap things get pretty simple.
Yes, 7-zip.
https://cert.europa.eu/publications/security-advisories/2024...
2. I'm sure LLMs are already way better at detecting vulnerabilities than the average engineer. (when asked to do so explicitly)
As an aside, at the top of the front page right now is a sprawling essay titled "Why is it so hard to buy things that work well?"...
It's likely we'll see LLMs used to build a lot of the cheap stuff that previously existed as arcane excel macros (I've already seen less technical folks use it to analyze spreadsheets) but there will remain hard problems that developers are needed to solve.
The parent says "it typically doesn't matter that the product is worse if it's cheap enough". And that seems valid to me: the average quality of software today seems to be worse than 10 years ago. We do worse but cheaper.
You don't remember Windows Vista? Windows ME?
I think you have that view because of survivor's bias. Only the good old software is still around today. There was plenty of garbage that barely worked being shipped 10 years ago.
A more tangible pitfall I see people falling into is testing LLM code generation with something like ChatGPT and never considering more involved usage of LLMs via interfaces better suited to software development.

The best results I've managed to realize on our codebase have not come from ChatGPT or IDEs like Cursor, but from a series of processes that iterate over our full codebase multiple times to extract various levels of reusable insights: general development patterns, error-handling patterns, RBAC-related patterns, example tasks for common types of work based on git commit histories (i.e. adding a new API endpoint related to XYZ), and common bugs or failure patterns (again from git commit histories). Together these create a sort of library of higher-level context and reusable concepts. Feeding this into o1, with a pre-defined "call graph" of prompts to validate the output, fix identified issues, and consider past errors in similar types of commits and past executions, has produced some very good results for us so far.

I've also found much more success with ad-hoc questions after writing a small static analyzer to trace imports, variable references to declarations, etc., isolating the relevant portions of the codebase for context rather than relying on the RAG-based searching that a lot of LLM-centric development tools seem to use.

It's also worth mentioning that output quality seems to be heavily influenced by language; I thankfully work primarily with Python codebases, though I've had success using this against (smaller) Rust codebases as well.
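The import-tracing idea mentioned above can be sketched with Python's stdlib `ast` module. This is a minimal illustration of the technique, not the commenter's actual analyzer:

```python
import ast

def imported_modules(source: str) -> set[str]:
    """Collect the top-level module names imported by a Python source file.

    Walking a file's import graph this way lets a tool pull in only the
    modules a question actually touches, instead of RAG-style searching.
    """
    tree = ast.parse(source)
    mods = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods

src = "import os\nfrom collections import Counter\nimport numpy.linalg as la\n"
print(imported_modules(src))  # {'os', 'collections', 'numpy'}
```

A real analyzer would then resolve each module name to a file and recurse, building the subgraph of the codebase to feed into the model as context.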
A person with experience knowing how to push LLMs to output the perfect little function or utility to solve a problem, and collect enough of them to get somewhere is the interesting piece.
I'd love it if more companies were actually considering real engineering needs to provide products in this space. Until then, I have yet to see any compelling evidence that the current chatbot models can consistently produce anything useful for my particular work other than the occasional SQL query.
When you consider LLMs to be building blocks in bigger, more complex systems, their potential increases dramatically. That's where mid/senior engineers would chip in and add value to a company, in my point of view. There's also different infrastructure paradigms involved that have to be considered carefully, so DevOps is potentially necessary for years to come.
I see a lot of ego in this comment, and I think this is actually a good example of how to NOT safeguard yourself against LLMs. This kind of skepticism is the most toxic to yourself. Dismiss them as novelty for the masses, as bullshit tech, keep doing your same old things and discard any applications. Textbook footgun.
Do you have any examples of where/how that would work? It has seemed for me like lot of the hype is "they'll be good" with no further explanation.
Or I can pass each row through an LLM and get structured clean output out for a couple of dollars per month. Sure, it doesn't work 100%, but I don't need that, and neither could the human-written code do it.
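A hedged sketch of that row-cleaning setup: `call_llm` is a hypothetical stand-in (stubbed here so the example runs); a real pipeline would call an actual model API. The key point from the comment is that imperfect output is acceptable, so failed rows are simply skipped:

```python
import json

def call_llm(prompt):
    """Hypothetical LLM client, stubbed with a canned JSON response."""
    return json.dumps({"name": "Acme Corp", "country": "US"})

def clean_row(raw):
    """Ask the model for structured output; return None on any bad response."""
    prompt = f"Extract JSON with keys 'name' and 'country' from: {raw}"
    try:
        out = json.loads(call_llm(prompt))
        if {"name", "country"} <= out.keys():
            return out
    except json.JSONDecodeError:
        pass
    return None  # 100% accuracy isn't required; the caller skips this row

print(clean_row("acme corp, united states"))
```

Validating the model's output against an expected schema and tolerating the occasional None is what makes the "it doesn't work 100%, and I don't need it to" trade-off workable.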
Effectively, the LLM resulted in one fewer developer hired on our team.
That fired developer now has the toolset to become a CEO much, much easier than pre-LLM era. You didn't really make him obsolete. You made him redundant. I'm not saying he's gonna become a CEO, but trudging through programming problems is much easier for him as a whole.
Redundancies happen all the time and they don't end career types. Companies get bought, traded, and merged. Whenever this happens the redundant folk get the axe. They follow on and get re-recruited into another comfy tech job. That's it really.
At least in my past experience with volumes of transcribed data for applications that are picky about accuracy.
I can chip in from my tech consulting job where we ship a few GenAI projects to several AWS clients via Amazon Bedrock. I'm senior level but most people here are pretty much insulated.
I think whoever commented once here about more complex problems being tackled, (and the nature of these problems becoming broader) is right on the money. Newer patterns around LLM-based applications are emerging and having seen them first hand, they seem like a slightly different paradigm shift in programming. But they are still, at heart, programming questions.
A practical example: company sees GenAI chatbot, wants one of their own, based on their in-house knowledge base.
Right then and there there is a whole slew of new business needs with necessary human input to make it work that ensues.
- Is training your own LLM needed? See a Data Engineer/Data engineering team.
- If going with a ready-made solution, which LLM to use instead? Engineer. Any level.
- Infrastructure around the LLM of choice. Get DevOps folk in here. Cost assessment is real and LLMs are pricey. You have to be on top of your game to estimate stuff here.
- Guard rails, output validation. Engineers.
- Hooking up to whatever app front-end the company has. Engineers come to the rescue again.
All these have valid needs for engineers, architects/staff/senior what have you — programmers. At the end of the day, these problems devolve into the same ol' https://programming-motherfucker.com
And I'm OK with that so far.
Regardless of how sharp you keep yourself you're still at subject to the macro environment.
I'm far more worried about mental degradation due to any number of circumstances -- unlucky genetics, infections, what have you. But "future proofing" myself against some of that has the same answer: Remain curious, remain mentally ambidextrous, and don't let other people (or objects) think for me.
My brain is my greatest asset both for my job and my private life. So I do what I can to keep it in good shape, which incidentally also means replacing me with a parrot is unlikely to be a good decision.
Where's your espoused intellectual curiosity and mental ambidextrousness when it comes to LLMs? It seems like your mind is pretty firmly made up that they're of no worry to you.
I'm being vague with the details because this is one of the features in our product that's a big advantage over competitors.
To get to that point I experimented with and prototyped a lot using LLMs; I did a deep dive into how they work from the ground up. I understand the tech and its strengths. But likewise the weaknesses -- or perhaps more aptly, the use cases which are more "magic show" than actually useful.
I never dismissed LLMs as useless, but I am as confident as I can possibly be that LLMs on their own cannot and will not put programmers out of jobs. They're a great tool, but they're not what many people seem to be fooled into believing they are; they are not "intelligent" nor "agents".
Maybe it'll be a bit difficult for juniors to get entry level jobs for a short while due to a misunderstanding of the tech and all the hype, but that'll equalize pretty quickly once reality sets in.
I know a few people who have been primarily programming for 10 years but are not seniors. 5 of them (probably 10 or more, but let's not overdo it), with AI, cannot replace one senior developer unless you make that senior do super basic tasks.
Unfortunately it won't be your sniff test that matters. It's going to be an early founder that realizes they don't need to make that extra seed round hire, or the resource limited director that decides they can forgo that one head count and still deliver the product on time, or the in house team that realizes they no longer need a dedicated front end dev because, for their purposes, AI is good enough.
Personally, the team I lead is able to ship much faster with AI assistants than without, which means in practice we can out compete much larger teams in the same space.
Sure, there are things that AI will always struggle with, but those things aren't merely "senior" in nature; they're much closer to niche-expert problems. Engineers working on genuinely cutting-edge work will likely be in demand and hard to replace, but many others will very likely be impacted by AI from multiple directions.
So, you're giving away your company's IP to AI firms, does your CEO understand this?
But there's plenty of fantastic open models now and you can increasingly run this stuff locally so you don't have to send that IP to AI firms if you don't want to.
I feel like of all the things in this thread, this one is on them. It absolutely is something that LLMs are good at. They have the sample size, examples and docs, all the things. LLMs are particularly good at speaking "their" language, the most surprising thing is that they can do so much more beyond that next token reckoning.
So, yeah, I'm a bit surprised that a shop like Shopify is so sloppy, but absolutely I think they should be able to provide you an LLM to answer your questions. Particularly given some of the Shopify alumni I've interviewed.
That said, some marketing person might have just oversold the capabilities of an LLM that answers most of their core customer questions, rather than one that knows much at all about their API integrations.
Maybe it's right 99% of the time and works well for many of their developers. But this is exactly the point. I just can't trust a system that sometimes gives me the wrong answer.
Thanks to this thread I have signed up again for Copilot and I am blown away. I think this easily makes me 2x productive when doing implementation work. It does not make silly mistakes anymore and it's just faster to have it write a block of code than doing it myself.
And the experience is more of an augmentation than replacement. I don't have to let it run wild. I'm using it locally and refactor its output if needed to match my code quality standards.
I am as much concerned (job market) as I am excited (what I will be able to build myself).
That's why the question is future proof. Models get better with time, not worse.
The job of a software engineer is to first understand the real needs of the customer/user, which requires a level of knowledge and understanding of the real world that LLMs simply don't have and will never do because that is simply not what it does.
The second part of a software engineer's job is to translate those needs into a functional program. Here the issue is that natural languages are simply not precise enough for the kind of logic that is involved. This is why we invented programming languages rather than use plain English. And since the interface of LLMs is by definition human (natural) languages, it fundamentally will always have the same flaws.
Any precise enough interface for this second part of the job will by definition just be some higher level programming language, which not only involves an expert of the tool anyway, but also is unrealistic given the amount of efforts that we already collectively invest into getting more productive languages and frameworks to replace ourselves.
There's plenty of low-code, no-code solutions around, and yet still lots of software. The slice of the pie will change, but it's very hard to see it being eliminated entirely.
Ultimately it's going to come down to "do I feel like I can trust this?" and with little to no way to be certain you can completely trust it, that's going to be a harder and harder sell as risk increases with the size, complexity, and value of the business processes being managed.
I already had my panic moment about this like 2 years ago and have calmed down since then. I feel a lot more at peace than I did when I was thinking AGI was right around the corner and trying to plan for a future where no matter what course of action I took to learn new skills to outpace the AI I was already too late with my pivot because the AI would also become capable at doing whatever I was deciding to switch to on a shorter timeline than It would take me to become proficient in it.
If you at all imagine a world where the need for human software engineers legit goes away, then there isn't likely much for you to do in that world anyway. Maybe except become a plumber.
I don't think AGI is this magic silver bullet Silicon Valley thinks it is. I think, like everything in engineering, it's a trade-off, and I think it comes with a very unpalatable caveat. How do you learn once you've run out of data? By making mistakes. End of the day I think it's in the same boat as us, only at least we can be held accountable.
A decade (almost?) ago, people were saying "look at how self-driving cars improved in the last 2 years: in 3-5 years we won't need to learn how to drive anymore". And yet...
I'm not sure whether you mean human generations or LLM generations, but I think it's the latter. In that case, I agree with you, but also that doesn't seem to put you particularly far off from OP, who didn't provide specific timelines but also seems to be indicating that the elimination of most engineers is still a little ways away. Since we're seeing a new generation of LLMs every 1-2 years, would you agree that in ~10 years at the outside, AI will be able to do the things that would cause you to gladly retire?
I don’t think that’s impossible but I think we’re quite a few human generations away from that. And scaling LLM’s is not the solution to that problem; an LLM is just a small but important part of it.
I’m curious what’s so special about that blitting?
BTW, pixel shaders in D3D11 can receive screen-space pixel coordinates via the SV_Position semantic. The pixel shader can cast the .xy slice of that value from float2 to int2 (truncating towards 0), offset the int2 vector to be relative to the top-left of the sprite, then pass the integers into the Texture2D.Load method.
Unlike the more commonly used Texture2D.Sample, Texture2D.Load method delivers a single texel as stored in the texture i.e. no filtering, sampling or interpolations. The texel is identified by integer coordinates, as opposed to UV floats for the Sample method.
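The cast-and-offset arithmetic described above, transliterated into plain Python for illustration (in HLSL this would operate on float2/int2 values; D3D11 places pixel centers at half-integer screen coordinates):

```python
import math

def texel_coords(sv_position_xy, sprite_top_left):
    """Map screen-space pixel coords (SV_Position.xy, centered at .5) to
    integer texel coords relative to a sprite's top-left corner.

    Mirrors the HLSL steps: cast float2 -> int2 (truncating toward 0),
    then offset by the sprite origin before calling Texture2D.Load.
    """
    x = math.trunc(sv_position_xy[0]) - sprite_top_left[0]
    y = math.trunc(sv_position_xy[1]) - sprite_top_left[1]
    return (x, y)

# Pixel at screen (132.5, 71.5) for a sprite whose top-left is at (128, 64):
print(texel_coords((132.5, 71.5), (128, 64)))  # (4, 7)
```

Because Load takes these integer coordinates directly, the sprite's texels reach the screen unfiltered — which is the property a pixel-art blitter needs.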
I do hear some of my junior colleagues use them now and again, and gain some value there. And if LLMs can help get people up to speed faster, that'd be a good thing. Assuming we continue to make the effort to understand the output.
But yeah, agree, I raise my eyebrow from time to time, but I don't see anything jaw-dropping yet. Right now they just feel like surrogate Googlers.
And before you mention, the hyperbole is for effect, not as an exact representation.
There will be human programmers in the future as well, they just won’t be ones who can’t use LLMs.
I think it's an interesting psychological phenomenon similar to virtue signalling. Here you are signalling to the programmer in-group how good of a programmer you are. The more dismissive you are the better you look. Anyone worried about it reveals themself as a bad coder.
It's a luxury belief, and the better LLMs get the better you look by dismissing them.
It's essentially like saying "What I do in particular is much too difficult for an AI to ever replicate." It is always, in part, humble bragging.
I think some developers like to pretend that they are exclusively solving problems that have never been solved before. Which sure, the LLM architecture in particular might never be better than a person for the novel class of problem.
But the reality is, an extremely high percentage of all problems (and by reduction, the lines of code that build that solution) are not novel. I would guesstimate that less than 1 out of 10,000 developers are solving truly novel problems with any regularity. And those folks tend to work at places like Google Brain.
That's relevant because LLMs can likely scale forever in terms of solving the already solved.
Looks like the virtue signalling is done on both sides of the AI fence.
This is just a natural consequence of the ever growing repository of solved problems.
For example, consider that sorting of lists is agreed upon as a solved problem. Sure you could re-discover quick sort on your own, but that's not novel.
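To make that concrete, this is roughly the level of "solved" we're talking about; any model has seen countless variants of this Python sketch in its training data:

```python
# Quicksort: pick a pivot, partition, recurse. A canonical solved
# problem -- you could rediscover it yourself, but it wouldn't be novel.
def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```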
Defending AI with passion is nonsensical, at least ironic.
There is some tech that is getting progressively better.
I am high on the linear scale
Therefore I don't worry about it catching up to me ever
And this is the top voted argument.
Is it still getting better? My understanding is that we're already training them on all of the publicly available code in existence, and we're running into scaling walls with bigger models.
hmmmmm
People really believe it will be generations before an AI will approach human level coding abilities? I don't know how a person could seriously consider that likely given the pace of progress in the field. This seems like burying your head in the sand. Even the whole package of translating high level ideas into robust deployed systems seems possible to solve within a decade.
I believe there will still be jobs for technical people even when AI is good at coding. And I think they will be enjoyable and extremely productive. But they will be different.
source: been coding for 15+ years and using LLMs for past year.
The off-topic mentioning of graphics programming was because I tend to type as I think, then make corrections, and as I re-read it now the paragraph isn't great. The intent was to give an example of how I keep myself sharp, and challenging what many consider "settled" knowledge, where graphics programming happens to be my most recent example.
For what it's worth, English isn't my native language, and you'll have to take my word for it that no chat bots were used in generating any of the responses I've made in this thread. The fact that people are already uncertain about who's a human and who's not is worrisome.
I spent a lot of time with traders in early '00's and then '10's when the automation was going full tilt.
Common feedback I heard from these highly paid, highly technical, highly professional traders in a niche industry running the world in its way was:
- How complex the job was
- How high a quality bar there was to do it
- How current algos never could do it and neither could future ones
- How there'd always be edge for humans
Today, the exchange floors are closed, SWEs run trading firms, traders if they are around steer algos, work in specific markets such as bonds, and now bonds are getting automated. LLMs can pass CFA III, the great non-MBA job moat. The trader job isn't gone, but it has capital-C Changed and it happened quickly.
And lastly - LLMs don't have to be "great," they just have to be "good enough."
See if you can match the above confidence from pre-automation traders with the comments displayed in this thread. You should plan for it aggressively, I certainly do.
Edit - Advice: the job will change, the job might change in that you steer LLMs, so become the best at LLM steering. Trading still goes on, and the huge, crushing firms in the space all automated early and at various points in the settlement chain.
Everyone cites these kinds of examples of an LLM beating some test or other as some kind of validation. It isn't.
To me that just tells that the tests are poor, not the LLMs are good. Designing and curating a good test is hard and expensive.
Certifying and examination bodies often use knowledge as a proxy for understanding, reasoning, or any critical thinking skills. They just need to filter enough people out; there is no competitive pressure to improve quality at all. Knowledge tests do that just as well and are cheaper.
Standardization is also hard to do correctly, common core is a classic example of how that changes incentives for both teachers and students . Goodhart's law also applies.
To me it is more often than not a function of poor test measurement practices rather than any great skill shown by the LLM.
Passing the CFA or the bar exam, while daunting for humans by design, does not teach you anything about practicing law or accounting. Managing the books of a real company is nothing like what the textbooks and exams teach you.
—-
The best accountants or lawyers etc are not making partner because of their knowledge of the law and tax. They make money the same as everyone else - networking and building customer relationships. As long as the certification bodies don't flood the market they will do well, which is what the test ensures.
I mean the same is true of leetcode but I know plenty of mediocre engineers still making ~$500k because they learned how to grind leetcode.
You can argue that the world is unjust till you're blue in the face, but it won't make it a just world.
I was merely commenting on why these tests exist and the dynamics in the measurement industry as an observer; we shouldn't conflate the exclusivity or difficulty of a test with its quality or objective.
That being said, there's also massive low-hanging fruit in dev work that we'll automate away, and I feel like that's coming sooner rather than later, yes, even though we've been saying that for decades. However, I bet that the incumbents (senior SWEs) have a bit longer of a runway, and potentially their economic rent increases as they're able to be more efficient, and companies need not hire as many humans as they needed before. Will be an interesting go these next few decades.
And this has been solved for years already with existing tooling: debuggers, IntelliSense, linters, snippets and other code generation tools, build systems, framework-specific tooling... There are a lot of tools for writing and maintaining code. The only thing left was always the understanding of the system that solves the problem and knowledge of the tools to build it. And I don't believe we can automate this away. Using LLMs is like riding a drugged donkey instead of a motorbike. It can only work for very short distances or the thrill.
In any long-lived project, most modifications are only a few lines of code. The most valuable thing is the knowledge of where and how to edit, not the ability to write 400 lines of code in 5 seconds.
If the author of Redis finds novel utility here then it's likely useful beyond React boilerplatey stuff.
I share a similar sentiment since 3.5 Sonnet came out. This goes far beyond dev tooling ergonomics. It's not simply a fancy autocomplete anymore.
This really sums up how I feel about AI at the moment. It's like having a partner who has broad knowledge about anything that you can ask any stupid questions to. If you don't want to do a small boring task you can hand it off to them. It lets you focus on the important stuff, not "whats the option in this library called to do this thing that I can describe but don't know the exact name for?".
If you aren't taking advantage of that, then yes, you are probably going to be replaced. It's like when version control became popular in the 00s, where some people and companies still held out in their old way of doing things, copying and pasting folders or whatever other nasty workflows they had, because $reasons... where the only real reason was that they didn't want to adapt to the new paradigm.
But like so much of this thread “we can do this already without AI, if we wanted”
Want to try 5/10 different approaches a day? Fine - get your best stakeholders and your best devs and lock them in a room on the top floor and throw in pizza every so often.
Projects take a long time because we allow them to. (NB this is not the same as setting tight deadlines; this is having a preponderance of force on our side.)
Trading is about doing very specific math in a very specific scenario with known expectations.
Software engineering is anything but like that.
Which you can do away in a few days with frameworks and code reuse. The rest of the time is mostly spent on understanding the domain, writing custom components, and fixing bugs.
It's possible everyone just stops hiring new folks and lets the incumbents automate it. Or it's possible they all washed cars the rest of their careers.
- post-9/11 stress and ‘08 was a big jolt, and pushed a lot of folks out.
- Managed their money fine (or not) for when the job slowed down and also when ‘08 hit
- “traders” became “salespeople” or otherwise managing relationships
- Saw the trend, leaned into it hard, you now have Citadel, Virtu, JS, and so on.
- Saw the trend, specialized or were already in assets hard to automate.
- Senior enough to either steer the algo farms + jr traders, or become an algo steerer themselves
- Not senior enough, not rich enough, not flexible enough or not interested anymore and now drive Uber, mobile dog washers, joined law enforcement (3x example I know of).
An interesting question is "How is programming like trading securities?"
I believe an argument can be made that the bulk of what goes for "programming" today is simply hooking up existing pieces in ways that achieve a specific goal. When the goal can be adequately specified[1] the task of hooking up the pieces to achieve that goal is fairly mechanical. Just like the business of tracking trades in markets and extracting directional flow and then anticipating the flow by enough to make a profit is something trading algorithms can do.
What trading software has a hard time doing is coming up with new securities. What LLMs absolutely cannot do (yet?) is come up with novel mechanisms. To illustrate that, consider the idea that an LLM has been trained on every kind of car there is. If you ask it to design a plane it will fail. Train it on all cars and planes and ask it to design a boat, same problem. Train it on cars, planes, and boats and ask it to design a rocket, same problem.
The sad truth is that a lot of programming is 'done', which is to say we have created lots of compilers, lots of editors, lots of tools, lots of word processors, lots of operating systems. Training an LLM on those things can put all of the mechanisms used in all of them into the model, and spitting out a variant is entirely within the capabilities of the LLM.
Thus the role of humans will continue to be to do the things that have not been done yet. No LLM can design a quantum computer, nor can it design a compiler that runs on a quantum computer. Those things haven't been "done" and they are not in the model. The other role of humans will continue to be 'taste.'
Taste, as defined as an aesthetic, something that you know when you see it. It is why for many, AI "art" stands out as having been created by AI, it has a synthetic aesthetic. And as one gets older it often becomes apparent that the tools are not what determines the quality of the output, it is the operator.
I watched Dan Silva do some amazing doodles with Deluxe Paint on the Amiga and I thought, "That's what I want to do!" and ran out and bought a copy and started doodling. My doodles looked like crap :-). The understanding that I would have to use the tool, find its strengths and weaknesses, and then express through it was clearly a lot more time consuming than "get the tool and go."
LLMs let people generate marginal code quickly. For so many jobs that is good enough. Generating really good code, taking in constraints that the LLM can't model, is something that will remain the domain of humans until GAI is achieved[2]. So careers in things like real-time and embedded systems will probably still have a lot of humans involved, and systems where extracting every single compute cycle out of the engine is a priority will likely be dominated by humans too.
[1] Very early on there were papers on 'genetic' programming. It's a good thing to read them because they arrive at a singularly important point: "How do you define 'which is better'?" Given a solid, quantitative, and testable metric for 'goodness', genetic algorithms outperform nearly everything. When the ability to specify 'goodness' is not there, genetic algorithms cannot outperform humans. What is more, they cannot escape 'quality moats', where the solutions on the far side of the moat are better than the solutions being explored, but the algorithm cannot get far enough into the 'bad' solutions to start climbing up the hill on the other side to the 'better' solutions.
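As a toy illustration of that point (a hedged Python sketch; the target, population size, and mutation rate are all made up), here is a genetic algorithm where "which is better?" has a crisp, testable answer, so the search actually works:

```python
import random

# Toy genetic algorithm: evolve a bit string toward a fixed target.
# The key ingredient is an explicit fitness function -- the testable
# metric for 'goodness' that most real software tasks lack.
random.seed(0)
TARGET = [1] * 20

def fitness(genome):
    # Quantitative 'goodness': number of bits matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit independently with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the best half, refill with mutated copies of survivors.
    survivors = population[: len(population) // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

print(fitness(population[0]))  # climbs steadily toward 20
```

Swap `fitness` for "is this program good?" and the whole scheme falls apart, which is the point of the footnote.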
[2] GAI being "Generalized Artificial Intelligence" which will have to have some way of modelling and integrating conceptual systems. Lots of things get better then (like self driving finally works), maybe even novel things. Until we get that though, LLMs won't play here.
right, but when python came into popularity it's not like we reduced the number of engineers 10 fold, even though it used to take a team 10x as long to write similar functionality in C++.
If LLMs become so good that everyone can just let an LLM go into the world and make them money, the way we do with our investments, won't that be good?
And, certainly, prob a good thing for some, bad thing for the money conveyor belt of the last 20 yrs of tech careers.
Sounds like it was written by someone trying to keep any grasp on the fading reality of AI.
NO NO NO NO NO NO NO!!!! It may be that some random script you run on your PC can be "good enough", but the software that my business sells can't be produced by "good enough" LLMs. I'm tired of my junior dev turning in garbage code that the latest and greatest "good enough" LLM created. I'm about to tell him he can't use AI tools anymore. I'm so thankful I actually learned how to code in pre-LLM days, because I know more than just how to copy and paste.
How many companies still have dedicated QA orgs with skilled engineers? How many SaaS solutions have flat out broken features? Why is SRE now a critical function? How often do mobile apps ship updates? How many games ship with a day zero patch?
The industries that still have reliable software are because there are regulatory or profit advantages to reliability -- and that's not true for the majority of software.
People tolerate game crashes because you (generally) can't get the same experience by switching.
People wouldn't tolerate e.g. browsers crashing if they could switch to an alternative. The same would apply to a lot of software, with varying limits to how much shit will be tolerated before a switch would be made.
I think of LLMs like clay, or paint, or any other medium where you need people who know what they're doing to drive them.
Also, I might humbly suggest you invest some time in the junior dev and ask yourself why they keep on producing "garbage code". They're junior, they aren't likely to know the difference between good and bad. Teach them. (Maybe you already are, I'm just taking a wild guess)
1) I believe we need true AGI to replace developers.
2) I don't believe LLMs are currently AGI or that if we just feed them more compute during training that they'll magically become AGI.
3) Even if we did invent AGI soon and replace developers, I wouldn't even really care, because the invention of AGI would be such an insanely impactful, world changing, event that who knows what the world would even look like afterwards. It would be massively changed. Having a development job is the absolute least of my worries in that scenario, it pales in comparison to the transformation the entire world would go through.
Therefore, unless you for some reason believe you will be in the shrinking portion that cannot be replaced I think the question deserves more attention than “nothing”.
Short of genuine AGI I’ve yet to see a compelling argument why productivity eliminates jobs, when the opposite has been true in every modern economy.
How would those have plausibly eliminated jobs? Neither frameworks nor compilers were the totality of the tasks a single person previously was assigned. If there was a person whose job it was to convert C code to assembly by hand, yes, a compiler would have eliminated most of those jobs.
If you need an example of automation eliminating jobs, look at automated switchboard operators. The job of human switchboard operator (mostly women btw) was eliminated in a matter of years.
Except here, instead of a low-paid industry we are talking about a relatively high-paid one, so the returns would be much higher.
A good analogy can be made to outsourcing for manufacturing. For a long time Chinese products were universally of worse quality. Then they caught up. Now, in many advanced manufacturing sectors the Chinese are unmatched. It was only hubris that drove arguments that Chinese manufacturing could never match America’s.
Me and the other senior dev spent weeks reviewing these PRs. Here’s what we found:
- The feature wasn’t built to spec, so even though it worked in general the details were all wrong
- The code was sloppy and didn’t adhere to the repo’s guidelines
- He couldn’t explain why he did things a certain way versus another, so reviews took a long time
- The code worked for the happy path, and errored for everything else
Eventually this guy got moved to a different team and we closed his PRs and rewrote the feature in less than a week.
This was an awful experience. If you told me that this is the future of software I’d laugh you out of the room, because engineers make enough money and have enough leverage to just quit. If you force engineers to work this way, all the good ones will quit and retire. So you’re gonna be stuck with the guys who can’t write code reviewing code they don’t understand.
In the long term, we SWEs (like other industries) have to own the fact that there’s a huge target on our backs, and aside from hubris there’s no law of nature or man preventing people smarter than us from building robots that do our jobs faster than us.
But like I said, I’m not worried about it in the imminent future, and I have enough leverage to turn down any jobs that want me to work in that way.
1. A lot more projects get the green light when the price is 5x less, and a many more organizations can afford custom applications.
2. LLMs unlock large amounts of new applications. A lot more of the economy is now automatable with LLMs.
I think jr devs will see the biggest hit. If you're going to teach someone how to code, might as well teach a domain expert. LLMs already code much better than almost all jr devs.
If we are talking an 80% reduction in developers needed per project, then we would need 5x the amount of software demand in the future to avoid a workforce reduction.
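Spelling out the arithmetic behind that 5x figure:

```python
# Sanity-checking the claim: an 80% reduction in developers per
# project means each project needs 0.2x the people, so total software
# demand must grow by 1 / 0.2 = 5x just to keep headcount flat.
reduction = 0.80
devs_needed_per_project = 1 - reduction       # 0.2 of today's staffing
demand_multiplier = 1 / devs_needed_per_project
print(round(demand_multiplier, 2))  # 5.0
```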
Comparing the amount of forward progress in a codebase with the AI's ability to participate in or cover that work might be a better measure.
This is what will happen
At that point its total societal upheaval and losing my job will probably be the least of my worries.
2) I do see this, given the money poured into this cycle, as a potential possibility. It may not just be LLMs. To another comment: you are betting against the whole capitalist system, human ingenuity, and billions/trillions? of dollars targeted at making SWEs redundant.
3) I think it can disrupt only knowledge jobs, and it will be some large time before it disrupts physical jobs. For SWEs this is the worst outcome - it means you are on your own w.r.t adjusting for the changes coming. It's only "world changing" as you put it to economic systems if it disrupts everyone at once. I don't think it will happen that way.
More to the point, software engineers will automate themselves out before other jobs for one reason: they understand AI better than people in other jobs do (even if their work is objectively harder to automate), and they tend not to protect the knowledge required to do so. They have the domain knowledge to know what to automate and make redundant.
The people that have the power to resist/slow down disruption (i.e. hide knowledge) will gain more pricing power, and therefore be able to earn more capital taking advantage of the efficiency gains made by jobs being made redundant by AI. The last to be disrupted have the most opportunity to gain ownership of assets and capital from their preserved economic profits. The inefficient will win out of this - capital rewards scarcity/people that can remain in demand despite being relatively inefficient. Competition is for losers - it's IMV the biggest flaw of the system. As a result people will see what has happened to SWEs and make sure their industry "has time" to adapt, particularly since many knowledge professions are really "industry unions/licensed clubs" who have the advantage of keeping their domain knowledge harder to access.
To explain it further: even if software is more complicated, there is just so much more capital trying to disrupt it than other industries. Given that IMV software demand is relatively inelastic to price due to scaling profits, making it cheaper to produce won't really benefit society all that much w.r.t more output (i.e. what was good economically to build would have been built anyway in an inelastic-demand/scaling commodity). Generally more supply/lower cost of a good has more absolute societal benefit when there is unmet and/or elastic demand. Instead the cost of SWEs will go down and the benefit will be distributed to the jobs/people remaining (managers, CEOs, etc) - the people that devs think "are inefficient" in my experience. When it is about inelastic demand it's more re-distributive; the customer benefits and the supplier (in this case SWEs) loses.
I don't like saying this; but we gave AI all the advantage. No licensing requirements, open source software for training, etc.
What happens when a huge chunk of knowledge workers lose their jobs? Who is going to buy houses, roofs, cars, cabinets, furniture, Amazon packages, etc. from all the blue-collar workers?
What happens when all those former knowledge workers start flooding the job markets for cashiers and factory workers, or applying en masse to the limited spots in nursing schools or trade programs?
If GPTs take away knowledge work at any rate above "glacially slow" we will quickly see a collapse that affects every corner of the global economy.
At that point we just have to hope for a real revolution in terms of what it means to own the means of production.
- Unskilled work will become even more diminished: A lot of people in power are counting on this to solve things like aging-population care. Moving from coding software to doing the hard work in a nursing home, for example, is a deflationary force and makes the older generations (who typically have more wealth) even wealthier, as the effect would be deflationary overall and amplify their wealth. The industries that will benefit (at the expense of ones that don't) will be the ones that can appeal to the winners - resources will be redirected at them.
- Uneven disruption rates: I disagree that the AI disruption force will be even - I think the academic types will be disrupted much more than the average person. My personal opinion is that anything in the digital world can be disrupted much quicker than the physical realm for a number of reasons (cost of change/failure, energy, rules-of-physics limitations, etc). This means that as a society there will be no revolution (i.e. it was your fault for doing that; why should the rest of society bear the cost? be adaptable...). This has massive implications for what society values long term and the type of people valued in the new world, socially, in personal relationships, etc.
i.e. Software devs/ML researchers/any other white collar job/etc in the long run have shot themselves in the foot IMO. The best they can hope for is that LLMs do have a limit to progress, that there is an element of risk to the job that still requires some employment, and that time is given to adjust. I hope I'm wrong since I would be affected too. No one will feel sorry for them - after all, other professions know better than to do this to themselves on average, and they have also caused a lot of disruption themselves (a taste of their own medicine, as they say).
I see LLMs as the next higher level of abstraction.
Does this mean it will replace me? At the moment the output is so flawed for anything but the most trivial professional tasks, I simply see, as before, it has a long long way to go.
Will it put me out of a job? I highly doubt it in my career. I still love it and write stuff for home and work every day of the week. I'm planning on working until I drop dead, as it seems I have never lost interest so far.
Will it replace developers as we know it? Maybe in the far future. But we'll be the ones using it anyway.
On the other side, I'm switching to game dev and it became a very useful companion, outputting well-known algorithms. It's more like a universal API than a junior assistant.
Instead of me taking time to understand the algo in detail and then implementing it, I use GPT4o to expand the Unreal API with missing parts. It truly expands the scope I'm able to handle, and it feels good to save hours that compound into days and weeks of work.
E.g. 1. OBB and SAT https://stackoverflow.com/questions/47866571/simple-oriented...
2. Making a grid system using lat/long coordinates for a voxel planet.
As someone who knows web front end development only to the extent I need it for internal tools, it’s been turning day-long fights into single hour dare I say it pleasurable experiences. I tell it to make a row of widgets and it outputs all the div+css soup (or e.g. material components) that only needs some tuning instead of having to google everything.
It still takes experience to know when it doesn’t use a component when it should etc., but it’s a force multiplier, not a replacement. For now.
But, as I learn it, GPT 4o becomes less and less useful for anything but short questions to fill in gaps. I am already way better than it at anything more than a few pages long. Getting it to do something substantial is pretty much an exercise in frustration.
Now I mostly write JSX / Tailwind, which is way faster than prompting, but surely that's because I'm fluent in that thing.
Not saying that is not true, but did you actually measure that, or is it a feeling, or did you not spend much time on building a prompt you can drop your requests into? JSX and Tailwind are very verbose; frontend is mostly very verbose, and, unless you are building LoB apps, you will have to try things a few times.

Cerebras and Groq with a predefined copy-paste prompt will generate all that miserable useless slop (yes, I vehemently hate frontend work; I do hope it will be replaced completely by LLMs very soon; it's close but not quite there yet) in milliseconds, so you can just tweak it a bit.

I have been fluent at building web frontends since the mid 90s; before that I did DOS, Windows and Motif ones (which was vastly nicer; there you could actually write terse frontend code). I see many companies inside and I have not seen anyone faster at frontend than current LLMs, so I would like to see a demo. In logic I see many people who are faster, as the LLM often simply cannot figure it out even remotely.
Don't get me wrong, it will output the code faster than me. But overall, I will spend more time prompting + correcting, especially when I want to achieve special designs which don't look like basic "bootstrap". It's also much less enjoyable to tweak a prompt than to just spit out JSX/TW, which doesn't require a lot of effort for me.
I don't have a demo to justify my claim and I'm totally fine if you dismiss my message because of that.
I reckon I just don't like frontending with LLMs yet; maybe one day it will be much better.
My website where I tried some LLM stuff and was disappointed: https://ardaria.com
As someone with a fairly neutral/dismissive/negative opinion on AI tools, you just pointed out a sweet silver lining. That would be a great use case for when you want to use clean html/css/js and not litter your codebase with a gazillion libraries that may or may not exist next year!
Is there a difference between soup and slop? Do you clean up the "soup" it produces or leave it as is?
What really happened: this is used by programmers to improve their workflow
I think you have to be a developer to learn how to write those requirements well. And I don't even mean the concepts of data flows and logic flows. I mean, just learning to organise thoughts in a way that they don't fatally contradict themselves or lead to dead ends or otherwise tie themselves in unresolvable knots. I mean like non-lawyers trying to write laws without any understanding of the entire suite of mental furniture.
I didn't want to expand on it, for fear of sounding like an elitist, and you said it better anyway. Someone with the same qualities that make a programmer excellent will be in a much better position to use an even better LLM.
Concise thinking and expression. At the moment LLMs will just kinda 'shotgun' scattered ideas based on your input. I expect the better ones will be massively better when fed better input.
It is happening, just not yet at a scale that really scares people; that will happen though. It is just stupidly cheaper; for the price of one junior you can do so many API requests to Claude it's not even funny. Large companies are still thinking about privacy of data etc, but all of that will simply not matter in the long run.
Good logical thinkers and problem solvers won't be replaced any time soon, but mediocre or bad programmers are already gone; an LLM is faster, cheaper, and doesn't get ill or tired. And there are so many of them; just try someone on Upwork or Fiverr and you will know.
Privacy is a fundamental right.
And companies should care about not leaking trade secrets, including code, but the rest as well.
US companies are known to use the cloud to spy on competitors.
Companies should have their own private LLM, not rely on cloud instances with a contract that guarantees "privacy".
That attitude is why our industry has such a bad rep and why things are going down the drain to dystopia. Devs without ethics. This world is doomed.
Idk if the cheap price is the real price or a promotional price, after which the enshittification and price increases happen, which is the trend for most tech companies.
And agreed, LLMs will weed out bad programmers further, though in the future bad alternatives (like analysts or bad LLM users) may emerge.
Why do you have bad programmers on staff? Let them go, LLMs existing or not.
LLMs, however, will change the playing field. Let's say the bottom 20 or 30% of programmers will be replaced by LLMs in the form of increased performance from the other 70%.
So you can't fire the bad apples because you don't know who they are, but you feel confident that you can pick out whom to replace by an LLM and then fire?
It's hopefully obvious to everybody that this is a hopelessly naïve take on things and is never going to play out that way.
If your approach to juniors is that they are just cheap labour monkeys designed to churn out braindead crap and boost the ego of your seniors then you have a management problem and I'm glad I'm working somewhere else.
The problem was it never worked. When generating the code, the best the tools could do was create all the classes for you and maybe define the methods for each class. The tools could not provide an implementation unless they provided the means to manage the implementation within the tool itself - which was awful.
Why have you likely not heard of any of this? Because the fad died out in the early 2000s. The juice simply wasn't worth the squeeze.
Fast-forward 20 years and I'm working in a new organization where we're using ArchiMate extensively and are starting to use more and more UML. Just this past weekend I started wondering given the state of business modeling, system architecture modeling, and software modeling, could an LLM (or some other AI tool) take those models and produce code like we could never dream of back in the 80s, 90s, and early 00s? Could we use AI to help create the models from which we'd generate the code?
At the end of the day, I see software architects and software engineers still being engaged, but in a different way than they are today. I suppose to answer your question, if I wanted to future-proof my career I'd learn modeling languages and start "moving to the left" as they say. I see being a code slinger as being less and less valuable over the coming years.
Bottom line, you don't see too many assembly language developers anymore. We largely abandoned that back in the 80s and let the computer produce the actual code that runs. I see us doing the same thing again but at a higher and more abstract level.
This just seems like the next level of abstraction. I don't foresee a "traders all disappeared" situation like the top comment, because at the end of the day someone needs to know WHAT they want to build.
So yes, fewer junior developers, and development looking more like management/architecting. A lot more reliance on deeply knowledgeable folks to debug the spaghetti hell. But also a lot more designers that are suddenly Very Successful Developers. A lot more product people that super-charge things. A lot more very fast startups run by some shmoes with unfundable but ultimately visionary ideas.
At least, that's my best case scenario. Otherwise: SkyNet.
I think it's important to note that there were a couple distinct markets for CASE:
1. Military/aerospace/datacomm/medical type technical development. Where you were building very complex things, that integrated into larger systems, that had to work, with teams, and you used higher-level formalisms when appropriate.
2. "MIS" (Management Information Systems) in-house/intranet business applications. Modeling business processes and flows, and a whole lot of data entry forms, queries, and printed reports. (Much of the coding parts already had decades of work on automating them, such as with WYSIWYG form painters and report languages.)
Today, most Web CRUD and mobile apps are the descendant of #2, albeit with branches for in-house vs. polished graphic design consumer appeal.
My teams had some successes with #1 technical software, but UML under IBM seemed to head towards #2 enterprise development. I don't have much visibility into where it went from there.
I did find a few years ago (as a bit of a methodology expert familiar with the influences that went into UML, as well as familiar with those metamodels as a CASE developer) that the UML specs were scary and huge, and mostly full of stuff I didn't want. So I did the business process modeling for a customer logistics integration using a very small subset, with very high value. (Maybe it's a little like knowing hypertext, and then being teleported 20 years into the future, where the hypertext technology has been taken over by evil advertising brochures and surveillance capitalism, so you have to work to dig out the 1% hypertext bits that you can see are there.)
Post-ZIRP, if more people start caring about complex systems that really have to work (and fewer people care about lots of hiring and churning code to make it look like they have "growth"), people will rediscover some of the better modeling methods, and be, like, whoa, this ancient DeMarco-Yourdon thing is most of what we need to get this process straight in a way everyone can understand, or this Harel thing makes our crazy event loop with concurrent activities tractable to implement correctly without a QA nightmare, or this Rumbaugh/Booch/etc. thing really helps us understand this nontrivial schema, and keep it documented as a visual for bringing people onboard and evolving it sanely, and this Jacobson thing helps us integrate that with some of the better parts of our evolving Agile process.
But I was definitely in camp #2 - the in-house business applications. I'd love to hear the experiences from those in camp #1. To your point, once IBM got involved it all went south. There was a place I was working for in the early 90s that really turned me off against anything "enterprise" from IBM. I had yet to learn that would apply to pretty much every vendor! :)
Approaches included having single source for any given semantics, and doing various kinds of syncing between the models.
Personally, I went to grad school intending to finally "solve" this, but got more interested in combining AI and HCI for non-software-engineering problems. :)
I'm no SWE and probably never will be. SWEs probably don't consider what I do "building an app", but I don't really care.
Even trivial things like an ETL pipeline for processing some data at my work fall into this category. It seemed trivial on its surface, but when I spoke to everyone about what we were doing with it and why (and a huge amount of context regarding the past and future of the project), the reason the pipeline wasn’t working properly was both technically and contextually very complex.
I worked with LLMs on solving the problems (I always do, I guess I need to “stay sharp”), and they utterly failed. I tried working from state machine definitions, diagrams, plain English, etc. They couldn’t pick up the nuances at all.
Initially I thought I must be over complicating the pipeline, and there must be some way to step it back and approach it more thoughtfully. This utterly failed as well. LLMs tried to simplify it by pruning entire branches of critical logic, hallucinating bizarre solutions, or ignoring potential issues like race conditions, parallel access to locked resources, etc. entirely.
It has been a bit of an eye opener. Try as I might, I can’t get LLMs to use streams to conditionally parse, group, transform, and then write data efficiently and safely in a concurrent and parallel manner.
Had I done this with an LLM I think the result eventually could have worked, but the code would have been as bad as what we started with at best.
Most of my time on this project was spent planning and not programming. Most of my time spent programming was spent goofing around with LLM slop. It was fun, regardless.
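The pipeline shape being described above can be sketched in miniature. This is a toy illustration only, not the commenter's actual system: stream records in chunks, transform the chunks in parallel, and serialize writes behind a lock so the sink is never touched concurrently (the kind of locked-resource discipline the LLMs reportedly pruned away).

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import islice
import threading

# Toy sketch of a concurrent streaming transform (illustrative only):
# chunk a stream, transform chunks in parallel worker threads, and
# serialize writes to the shared sink behind a lock.

write_lock = threading.Lock()
sink = []

def transform(chunk):
    # conditional parse/transform step: keep even values, square them
    return [x * x for x in chunk if x % 2 == 0]

def write(rows):
    with write_lock:          # parallel workers, serialized writes
        sink.extend(rows)

def chunks(iterable, size):
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

with ThreadPoolExecutor(max_workers=4) as pool:
    for rows in pool.map(transform, chunks(range(10), 3)):
        write(rows)

print(sorted(sink))  # → [0, 4, 16, 36, 64]
```

Even in this stripped-down form, the subtle parts (chunk boundaries, ordering, who owns the lock) are exactly where generated code tends to go wrong.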
My favorite thing is writing golang with Copilot on. It makes suggestions that use various libraries and methods that were somewhat idiomatic several years ago but are now deprecated.
And where you do, no LLM is going to replace them because they are working in the dark mines where no compiler has seen and the optimizations they are doing involve arcane lore about the mysteries of some Intel engineer's mind while one or both of them are on a drug fueled alcoholic deep dive.
I can see people still learning assembly in a pedagogical setting, but not in a production setting. I'd be interested to hear otherwise.
1. There will be way more software
2. Most people / companies will be able to opt out of predatory VC funded software and just spin up their own custom versions that do exactly what they want without having to worry about being spied on or rug pulled. I already do this with chrome extensions, with the help of claude I've been able to throw together things like time based website blocker in a few minutes.
3. The best software will be open source, since it's easier for LLMs to edit and is way more trustworthy than a random SaaS tool. It will also be way easier to customize to your liking
4. Companies will hire far fewer people, and probably mostly engineers to automate routine tasks that would previously have been done by humans (ex: bookkeeping, recruiting, sales outreach, HR, copywriting / design). I've heard this is already happening with a lot of new startups.
EDIT: for people who are not convinced that these models will be better than them soon, look over these sets of slides from NeurIPS:
- https://michal.io/notes/ml/conferences/2024-NeurIPS#neurips-...
- https://michal.io/notes/ml/conferences/2024-NeurIPS#fine-tun...
- https://michal.io/notes/ml/conferences/2024-NeurIPS#math-ai-...
This presumes that they know exactly what they want.
My brother works for a company and they just ran into this issue. They target customer retention as a metric. The result is that all of their customers are the WORST, don't make them any money, but they stay around a long time.
The company is about to run out of money and crash into the ground.
If people knew exactly what they wanted 99% of all problems in the world wouldn't exist. This is one of the jobs of a developer, to explore what people actually want with them and then implement it.
The first bit is WAY harder than the second bit, and LLMs only do the second bit.
From working in a non-software place, I see the opposite occurring. Non-software management doesn't buy closed source software because they think it's 'better', they buy closed source software because there's a clear path of liability.
Who pays if the software messes up? Who takes the blame? LLMs make this even worse. Anthropic is not going to pay your business damages because the LLM produced bad code.
1. I try to stay somewhat up to date with ML and how the latest things work. I can throw together some python, let it rip through a dataset from kaggle, let models run locally etc. Have my linalg and stats down and practiced. Basically if I had to make the switch to be an ML/AI engineer it would be easier than if I had to start from zero.
2. I otherwise am trying to pivot more to cyber security. I believe current LLMs produce what I would call "untrusted and unverified input" which is massively exploitable. I personally believe that if AI gets exponentially better and is integrated everywhere, we will also have exponentially more security vulnerabilities (that's just an assumption/opinion). I also feel we are close to cyber security being taken more seriously or even regulated e.g. in the EU.
At the end of the day I think you don't have to worry if you have the "curiosity" that it takes to be a good software engineer. That is because, in a world where knowledge, experience and willingness to probe out of curiosity will be even more scarce than they are now you'll stand out. You may leverage AI to assist you but if you don't fully and blindly rely on it you'll always be the more qualified worker than someone who does.
I don't see this trend. It just sounds like a weird thing to say, it fundamentally misunderstands what the job is
From my experience, software engineering is a lot more human than how it gets portrayed in the media. You learn the business you're working with, who the stakeholders are, who needs what, how to communicate your changes and to whom. You're solving problems for other people. In order to do that, you have to understand what their needs are
Maybe this reflects my own experience at a big company where there's more back and forth to deal with. It's not glamorous or technically impressive, but no company is perfect
If what companies really want is just some cheap way to shovel code, LLMs are more expensive and less effective than the other well known way of cheaping out
Advice #1: do work on your own mind. Try to improve your personal organization. Look into methodologies like GTD. Get into habits of building discipline. Get into the habit of storing information and documentation. From my observations many developers simply can't process many threads at once, making their bottleneck their own minds.
Advice #2: lean into "metis"-heavy tasks. There are many programming tasks which can be easily automated: making an app scaffold, translating a simple algorithm, writing tests, etc. This is the tip of the iceberg when it comes to real SWE work, though. The intricate connections between databases and services, the steps you have to go through to debug that one feature, the hack you have to make in the code so it behaves differently in the testing environment, and so on. LLMs require legibility to function: a clean slate, no tech debt, low entropy, order, etc. Metis is a term discussed in the book "Seeing Like a State", and it encompasses knowledge and skills gained through experience which are hard to transfer. Master these dark corners, hack your way around the code, create personal scripts for random one-off tasks. Learn how to poke and pry the systems you work on to get out the information you want.
But maybe the times that happens is so rare and low that you just hire a human to unstuck the whole thing and get it running again. Maybe we'll become more like mechanics you visit every now and then for an expensive, quick job, vs an annual retainer.
So if you turn the entire job into that? I don’t think skilled people will be lining up to do it. Maybe consulting firms would take that on I guess.
It helps me out, but in terms of increasing productivity, it pales in comparison to simple auto-complete. In fact it pales in comparison to just having a good, big screen vs. battling away on a 13" laptop.
LLMs are useful and provide not insignificant assistance, but probably less assistance than the tools we've had for a long time. LLMs are not a game changer like some other things have been since I've been programming (since the late 1980s). Just moving to operating systems with protected memory was a game changer: I could make mistakes and the whole computer didn't crash!
I don't see LLMs as something we have to protect our careers from, I see LLMs as an increasingly useful tool that will become a normal part of programming same as auto-complete, or protected memory, or syntax-highlighting. Useful stuff we'll make use of, but it's to help us, not replace us.
Coding LLMs will likely improve, but what will happen first: a good-at-engineering LLM; or a negative feedback cycle of training data being polluted with a deluge of crap?
I’m not too worried at the moment.
But there are still specialized people being paid for doing websites today.
Does it need to be maintainable, if we can re-generate apps on the go with some sort of automated testing mechanism? I'm still on the fence in the LLM-generated apps debate, but since I started forcing Cursor on myself, I'm writing significantly less code (75% less?) in my day-to-day job.
Ahh so once we solve the oracle problem and programming will become obsolete…
"From a DM, just in case anyone else needs to hear this."
Young people can learn and fight for their place in the workforce, but what is left for older people like myself? I'm in this industry already, I might have missed the train of "learn to talk with people" and been sold on the "coding is a means to an end" koolaid.
My employability is already damaged due to my age and experience. What is left for people like myself? How can I compete with a 20 something years old who has sharper memory, more free time (due to lack of obligations like family/relationships), who got the right advice from Carmack in the beginning of his career?
But you have made all those mistakes already. You've learned, you've earned your experience. You are much more valuable than you think.
Source: Me, I'm almost 60, been programming since I was 12.
It was true 100 years ago, it was true 20 years ago, and it is true now.
I think that what he means is that how successful we are in work is closely related to our contributions, or to the perceived "value" we bring to other people.
The current gen AI isn't the end of programmers. What matters is still what people want and are willing to pay for and how can we contribute to fulfill that need.
You are right that young folks have the time and energy to work more than older ones and for less money. And they can soak up knowledge like a sponge. That's their strong point and older folks cannot really compete with that.
You (and everyone else) have to find your own strong point, your "niche" so to speak. We're all different, so I'm pretty sure that what you like and are good at is not what I like and I'm good at and vice-versa.
All the greats, like Steve Jobs and so on said that you've got to love what you do. Follow your intuition. That may even be something that you dreamed about in your childhood. Anything that you really want to do and makes you feel fulfilled.
I don't think you can get to any good place while disliking what you do for a living.
That said, all this advice can seem daunting and unfeasible when you're not in a good place in life. But worrying only makes it worse.
If you can see yourself in a better light and as having something valuable to contribute, things would start looking better.
This is solvable. Have faith!
This is probably true for them but the other thing that can happen is that when you take what you love and do it for work or try to make it a business you can grow to hate it.
Is it a USA/Silicon Valley thing to miss the arrogance and insufferability most fresh grads have when entering the workforce?
It's kind of tone-deaf to attempt to self-victimize as someone with significant work experience being concerned of being replaced by a demographic that is notoriously challenged to build experience.
Ultima Underworld was technologically superior to Wolfenstein 3D.
System Shock was technologically superior to Doom and a much better game for my taste. I also think it has aged better.
Doom, Wolf 3D and Quake were less sophisticated, but kicked ass. They captured the spirit of the times and people loved it.
They're still pretty good games too, 30 years later.
2) Force myself to look at existing code as abstract data types, etc., to help reduce the cost of LLMs' failure mode (confident, often competent, and inevitably wrong)
3) curry whenever possible to support the use of coding assistants and to limit their blast radius.
4) Dig deep into complexity theory to understand what LLMs can't do, either for defensive or offensive reasons.
5) Realize that SWE is more about correctness and context than code.
6) Realize what many people are already discovering, that LLM output is more like clip art than creation.
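Item 3 above can be made concrete. One reading of "curry to limit blast radius" (my interpretation, not necessarily the commenter's) is to pre-apply the risky parameters so that assistant-generated code only ever sees a narrow, hard-to-misconfigure function:

```python
from functools import partial

# Sketch of currying to limit an assistant's blast radius: fix the
# dangerous parameters (table name, dry-run flag) up front, and expose
# only a single-argument function to LLM-touched code.

def write_row(table: str, dry_run: bool, row: dict) -> str:
    action = "DRY-RUN" if dry_run else "WRITE"
    return f"{action} {table}: {row}"

# Pre-applied, safe variant handed to generated code:
safe_write = partial(write_row, "audit_log", True)

print(safe_write({"id": 1}))  # → DRY-RUN audit_log: {'id': 1}
```

The names here (`write_row`, `audit_log`) are hypothetical; the point is that generated code calling `safe_write` cannot pick the wrong table or accidentally disable the dry run.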
Decades ago I used to be constantly thumbing through vol. 1 of Knuth (Fundamental Algorithms) and coding those basic algorithms. Now that all comes from libraries and generics. There has been progress, but not via LLMs in that area.
An example I helped someone with on AoC last week.
Looking for 'lines' in a game board in Python, the LLM had an outer loop j, with an inner loop i
This is how it 'matched' a line.
if i != j and (x1 == x2) or (y1 = y2):
I am sure you can see the problem with that, but some of the problems in Knuth 4a are harder for others. There is a lot to learn for many in Knuth 1, but I view it as world-building for other concepts.
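For readers who don't see it: the quoted line has two bugs. `(y1 = y2)` is an assignment where a comparison belongs (a syntax error in Python), and `and` binds tighter than `or`, so `i != j and A or B` parses as `(i != j and A) or B`, which lets a cell "match" itself whenever `B` holds. A hypothetical reconstruction of what the check presumably meant:

```python
# Reconstruction of the intended check (illustrative): two *distinct*
# cells lie on the same vertical or horizontal line only when they
# share an x or y coordinate. Note the parentheses around the `or`,
# and `==` rather than `=`.

def same_line(i, j, x1, y1, x2, y2):
    return i != j and ((x1 == x2) or (y1 == y2))

assert same_line(0, 1, 3, 5, 3, 9)        # same column: a line
assert not same_line(0, 0, 3, 5, 3, 5)    # same cell: excluded
```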
With LLMs polluting web search results, the point is to have reference material you trust.
Knuth is accessible. I did err on that side rather than suggesting books that are grad level and above, which I really appreciate but would just collect dust.
People will need to figure out what works for them, but having someone to explain why you do something is important IMHO.
Who maintains these systems? Who brings them to the last mile and deploys them? Who gets paid to troubleshoot and debug them when they reach a threshold of complexity that the script-kiddie LLM programmer cannot manage any longer? I think this type of person will definitely have a place in the new LLM-enabled economy. Perhaps this is a niche role, but figuring out how one can take experience as a software engineer and deploy it to help people getting started with LLM code (for pay, ofc) might be an interesting avenue to explore.
For the sake of not repeating myself, I would like to clarify/state some things.
1. I did not intend to signal that SWE will disappear as a profession, but would rather undergo transformation, as well as shrinking in terms of the needed workforce.
2. Some people seem to be hanging on to the idea that they are doing unimaginably complicated things. And sure, some people do, but I doubt they are the majority of the SWE workforce. Can an LLM replace a COBOL developer in the financial industry? No, I don't think so. Can it replace the absurd number of people whose job description can be distilled to "reading/writing data to a database"? Absolutely.
3. There seems to be a conflict of opinion. Some people say that code quality matters a lot and LLMs are not there yet, while other people seem to focus more on "SWE is more than writing code".
Personally, based on some thinking and reading the comments, I think the best way to future-proof a SWE career is to move to a position that requires more people skills. In my opinion, good product managers who are eager to learn coding and combine it with LLMs for code writing will be the biggest beneficiaries of the upcoming trend. As for SWEs, it's best to start acquiring people skills.
One way to future proof is to look at the larger picture, the same way that coding can't be reduced to algorithm puzzles:
"Software is a conversation, between the software developer and the user. But for that conversation to happen requires a lot of work beyond the software development."
[1] The Development Abstraction Layer https://www.joelonsoftware.com/2006/04/11/the-development-ab...
I'd never been more wrong.
I’m not worried about software as a profession yet, as first clients will need to know what they want, much less what they actually need.
Well I am a bit worried that many big businesses seem to think they can lay off most of their software devs because “AI” causing wage suppression and overwork.
It’ll come back to bite them IMHO. I’ve contemplated shorting Intuit stock because they did precisely that, which will almost certainly just end up with crap software, missed deadlines, etc.
And design, product intuition, contextual knowledge in addition to the marketing, sales, accounting, support and infrastructure required to sell software at scale.
LLMs can help but it remains to be seen how much they can create outside of the scope of the data they were trained on.
But I have had over 30 years in a career that has been nothing if not dynamic the whole time. And so I no doubt would keep on keepin' on (as the saying goes).
Future-proof a SWE career though? I think you're just going to have to sit tight and enjoy (or not) the ride. Honestly, I enjoyed the first half of my career much more than where SWE ended up in the latter half. To that end, I have declined to encourage anyone from going into SWE. I know a daughter of a friend that is going into it — but she's going into it because she has a passion for it. (So, 1) no one needed to convince her but 2) passion for coding may be the only valid reason to go into it anyway.)
Imagine the buggy-whip makers gathered around the pub, grousing about how they are going to future-proof their trade as the new-fangled automobiles begin rolling down the street. (They're not.)
So I listed some ways that LLMs practically would and wouldn't fit into the workflow of the service they doing. And related it to a bunch of other stuff, including how to make the most of the precious customer real-world access they'd have, and generating a success in the narrow time window they have, and the special obligations of that application domain niche.
Later, I mentally replayed the conversation in my head (as I do), and realized they were actually probably asking about using an LLM to generate the startup's prototype/MVP for the software they imagined.
And also, "generating the prototype" is maybe the only value that an MBA student had been told a "technical" person could provide at this point. :)
That interpretation of the LLM question didn't even occur to me when I was responding. I could've easily whipped up the generic Web CRUD any developer could do and the bespoke scrape-y/protocol-y integrations that fewer developers could do, both to a correctness level necessarily higher than the norm (which was required by this particular application domain). In the moment, it didn't occur to me that anyone would think an LLM would help at all, rather than just be an unnecessary big pile of risk for the startup, and potential disaster in the application domain.
There will be less SWE and DevOps and related jobs available in the next 24 months. Period.
Become hyper-aware of how a business measures your value as a SWE. How? Ask pointed, uncomfortable questions that force the people paying you to think and be transparent.
Stay on the cutting edge of how to increase your output and quality using AI.
Ie: how long does it take for a new joiner to produce code? How do you cut that time down by 10x using “AI”?
If AI (there is no AI; it's ML, machine learning) were truly as good as the true believers make it out to be, then it could be useful. It isn't.
Why would you want to increase the odds of them noticing your job is no longer necessary?
I wish an AI could revisit this comment 2 years later and post the stats to see if you're right.
I think this is actually already in motion in board meetings, I'm pretty sure executives are discussing something like "if we spend Z$ on AI tools, can we avoid hiring how many engineers?"
The government projects this job type will grow 18% in the next decade: https://www.bls.gov/ooh/computer-and-information-technology/...
1. They are trained to be average coders.
The way LLMs are trained is by giving them lots of examples of previous coding tasks. By definition, half of those examples are below the median. Unless there is a breakthrough in how they are trained, any above-average coder won't have anything to worry about.
2. They are a tool that can (and should) be used by humans.
Computers are much better at chess than any human, but a human with a computer is better than any computer. The same is true with a coding LLM. Any SWE who can work with an LLM will be much better than any LLM.
3. There is enough work for both.
I have never worked for a company where I have had less work when I left than when I started. I worked for one company where it was estimated that I had about 2 years worth of work to do and 7 years later, when I left, I had about 5 years of work left. Hopefully LLMs will be able to take some of the tedious work so we can focus on harder tasks, but most likely the more we are able to accomplish the more there will be to accomplish.
The real secret is talent stacks: have a combination of talents and knowledge that is desirable and unique. Be multi-faceted. And don't be afraid to learn things that are way outside of your domain. And no, you wouldn't be pigeon-holing yourself either.
For example there aren't many SWEs that have good SRE knowledge in the vehicle retail domain. You don't have to be an expert SRE, just be good enough, and understand the business in which you're operating and how those practices can be applied to auto sales (knowing the laws and best practices of the industry).
This industry is going to shrink. And that's ok. We had our time. I wish it was longer and I wish I made more, but I don't think I ever saw myself here forever.
Kudos to those who made a whole career off of this.
I'm in my mid 30s with a wife and kid and I'm mostly hoping I can complete my immigration to the US before my time in this career ends.
Then, I might pursue starting a business or going back to school with the savings and hopefully my wife can be employed at the time in her completely unrelated field and cover us until I can figure out what to do next.
I'm not sad about this. I am happy I have tried to live frugally enough to never buy my own hype or believe that my salary is sustainable forever.
A part of it for me is that I never really loved building software. I might have ADHD and that might be a big factor, but honestly it was never what excited me.
The biggest fallacy I see a lot of people buying into is that LLMs being good enough to replace software developers means they're AGI and the world has other problems. I never quite bought that. I think software developers think too highly of themselves.
But they're also not technically wrong. Yeah, an LLM can basically replace a family doctor and most internal medicine physicians. But the path to that happening is long and arduous due to how society has set up medicine. Software devs never fought hard enough for their profession to be protected. So we are just the easiest target, the same thing that happened to a lot of traders before.
If you're mid career like me, just get ready for the idea that your career is probably much shorter than you thought it will be and you will need to retrain. It will suck but many others have done it.
As it is I spend 95% of my time working out what needs to be done with all of the stakeholders and 5% of my time writing code. So the impact of AI on that is negligible.
Maybe we all end up being prompt engineers, but I think that companies will continue to have experts on the business side as well as the tech side for any foreseeable future.
As a SWE you are expected to neatly balance code, its architecture and how it addresses the customers' problems. At best, what I've seen LLMs produce is code monkey level programming (like copy pasting from StackOverflow), but then a human is still needed to tweak it properly.
What would be needed is General AI and that's still some 50 years away (and has been for the past 70 years). The LLMs are a nice sleight of hand and are useful but more often wrong than right, as soon as you delve into details.
Nice try, ChatGPT!
Use of "same" comes to mind in working with Indian developers or the use of "no worries" and how it proliferated from NZ/AU region to the rest of the world.
Otherwise, yes, I am Skynet providing C++ compilation from the future :)
Nearly all code was machine-generated after the invention of compilers. Did the compiler destroy programming? Absolutely not. Compilers and other tools like higher-level programming languages really kickstarted the software industry. IMO the potential transition from writing programming languages -> writing natural language and have LLM generate the program is still a smaller change than machine code/assembly -> modern programming languages.
If the invention of programming languages expanded the population of programmers from thousands to the 10s of millions, I think LLMs could expand this number again to a billion.
I'm a decent engineer working as a DS in a consulting firm. In my last two projects, I checked in (or corrected) so much more code than the other two junior DS's in my team, that at the end some 80%-90% of the ML-related stuff had been directly built, corrected or optimized by me. And most of the rest that wasn't, was mostly because it was boilerplate. LLMs were pivotal in this.
And I am only a moderately skilled engineer. I can easily see somebody with more experience and skills doing this to me, and making me nearly redundant.
It's not a sprint, it's a marathon.
I was a pretty early adopter to an LLM based workflow. The more I used it, the worse my project became, and the more I learned myself. It didn’t take long for my abilities to surpass the LLM, and for the past year my usage of LLMs has been dropping dramatically. These days I spend more time in docs than in a chat conversation.
When chatGPT was announced, many people thought programming was over. As in <12 months. Here we are several years later, and my job looks remarkably the same.
I would absolutely love to not have to program anymore. For me, programming is a means to an end. However, after having used LLMs pretty much every day for 2.5 years, it's very clear to me that software engineering won't be changing anytime soon. Some things will get easier and workflows may change, but if you want to build and maintain a moderately difficult production-grade application with decent performance, you will still be programming in 10 years.
I need to try that. How do LLMs perform writing shaders? I need to modify a shader to add simple order-independent transparency to avoid a depth sort. Are LLMS up to that job? This is a known area, with code samples available, so an LLM might be able to do it.
I'm not a shader expert, and if I can avoid the months needed to become one, that's a win.
It won't take long before you see all the mistakes in their responses before trying them, and from there it's just a hop and a skip to it being more efficient to just read the docs than to argue with an LLM.
But that beginning part is where LLMs can provide the most value to programmers, especially if you go in with the mindset that 90% of what they say will be wrong.
The Cloud was meant to decimate engineering AND development. But what it actually did was create enough chaos that there's a higher demand for both than ever, just maybe not in your region and for your skillset.
LLMs are guaranteed to cause chaos, but the outcome of that chaos is not predictable. Will every coder now output as much as a team of 30, BUT with 60 times as many screwed-up projects made by wannabe founders that you have to come in and clean up? Will businesses find ways to automate code development, then turn around and have to bring the old guys back in constantly to fix up the pipeline? Will we all be coding in black boxes that the AI fills in?
I would make sure you just increase your skills and increase your familiarity with LLMs in case they become mandatory.
A lot of people in this discussion seem to be misunderstanding the way the industry will change with LLMs. It's not as simple as "engineers will be automated away", in the same sense that we're a long way away from Uber drivers disappearing because of self-driving cars.
But the impact of LLMs on software is going to be much closer to the impact of the web and web development on native application development. People used to scoff at the idea that any serious company would be run from a web app. Today I would say the majority of software engineers are, directly or indirectly, building web-based products.
LLMs will make coding easier, but they also enable a wide range of novel solutions within software engineering itself. Today any engineer can launch a 0-shot classifier that's better performing than what would have taken a team of data scientists just a few years ago.
A steeper learning curve in a professional field generally translates into higher earnings. The longer you have to be trained to be helpful, the more a job generally earns.
I am already trained.
Richard W. Hamming, “The Art of Doing Science and Engineering”
Today, lawyers delegate many paralegal tasks like document discovery to computers and doctors routinely use machine learning models to help diagnose patients.
So why aren’t we — ostensibly the people writing software — doing more with LLMs in our day-to-day?
If you take seriously the idea that LLMs will fundamentally change the nature of many occupations in the coming decade, what reason do you have to believe that you’ll be immune from that because you work in software? Looking at the code you’ve been paid to write over the past few years, how much of that can you honestly say is truly novel?
We’re really not as clever as we think we are.
While the code I write is rarely novel, one of the primary intrinsic motivators that keeps me being a software engineer is the satisfaction of understanding my code.
If I just wanted software problems to be solved and was content to wave my hands and have minions do the work, I'd be in management. I program because I like actually understanding the problem in detail and then understanding how the code solves it. And I've never found a more effective way to understand code than writing it myself. Everyone thinks they understand code they only read, but when you dig in, it's almost always flawed and surface level.
Intrinsic reward is always part of the compensation package for a job. That's why jobs that are more intrinsically rewarding tend to pay less. It's not because they are exploiting workers, it's because workers add up all of the rewards of the job, including the intrinsic ones, when deciding whether to accept it.
If I have to choose between a job where I'm obligated to pump out code as fast as possible having an LLM churn out as much as it can versus a job that is slower and more deliberate, I'll take the latter even if it pays less.
For those of us who do think this is a revolution, you have two options:
1. Embrace it.
2. Find another career, presumably in the trades or other hands-on vocations where AI ingress will lag behind for a while.
To embrace it you need to research the LLM landscape as it pertains to our craft and work out what interests you and where you might best be able to surf the new wave, it is rapidly moving and growing.
The key thing (as it ever was) is to build real world projects mastering LLM tools as you would an IDE or language; keep on top of the key players, concepts and changes; and use your soft skills to help open-eyed others follow the same path.
Revolutions come, revolutions go, it's why they're called revolutions. Nothing interesting comes out of most revolutions.
LLMs do help, but to a limited extent. I've never heard of anyone in the second category.
> how do you future-proof your career in light of, the inevitable, LLM take over?
Generally speaking, coding has never been a future proof career. Ageism, changes in technology, economic cycles, offshoring... When I went into that field in early 2000s, it was kind of expected that most people if they wanted to be somewhat successful had to move eventually to leadership/management position.
Things changed a bit with successful tech companies competing for talent and offering great salaries and career paths for engineers, especially in the US, but it could very well be temporary and shouldn't be taken for granted.
LLMs are one factor among many that can impact our careers, and probably not the most important. I think there's a lot of hype, and we're not being replaced by machines anytime soon. I don't see a world where an entrepreneur commands an LLM to write a service or a novel app for them, or simply to maintain an existing complex piece of software.
I won't future-proof my career against LLMs at all. If I ever see myself in the position that I must use them to produce or adjust code, or that I mostly read and fix LLM-generated code, then I'll leave the industry and do something else.
I see potential in them to simplify code search/navigation or to even replace stackoverflow, but I refuse to use them to build entire apps. If management in turn believes that I'm not productive enough anymore then so be it.
I expect that lots of product owners and business people will be using them to quickly cobble something together and then hand it over to a dev for "polishing". And this sounds like a total nightmare to me. The way I see it, devs make this dystopian nightmare a little more real every time they use an LLM to generate code.
What “something else” do you have in mind?
LLMs will make things easier, but it's easy to disagree that they will threaten a developer's future with these reasons in mind:
* Developers should not be reinventing the wheel constantly. LLMs can't work very well on subjects they have no info on (proprietary work).
* The quality is going to get worse over time with the internet being slopped up with the mass disregard for quality content. We are at a peak right now. Adding more parameters isn't going to make the models better. It's just going to make them better at plagiarism.
* Consistency - a good codebase has a lot of consistency to avoid errors. LLMs can produce good coding examples, but they will not have much regard for how -your- project is currently written. Introducing inconsistency makes maintenance more difficult, let alone the bugs that might slip in and wreak havoc later.
I think hardware manufacturers, including ones that produce chips, are way less encouraged to put things online and thus have a wide moat. "Classic" chips such as the 6502 or 8086 definitely have way more material. "Modern" popular ones such as x86/64 also have a lot of material online. But "obscure" ones don't.
On the software side, I believe LLMs or other AI can easily replace, in under 10 years, juniors who only know how to "fill in" code designed by someone else, in a popular language (Python, Java, JavaScript, etc.). In fact it has greatly supported my data engineering work in Python and Scala -- does it always produce the most efficient solution? No. Does it greatly reduce the time I need to get to a solution? Yes, definitely!
One instructive example was when I was implementing a terraform provider for an in-house application. This thing can template the boilerplate for a terraform resource implementation in about 3-4 auto completes and only gets confused a bit by the plugin-sdk vs the older implementation way. But once it deals with our in-house application, it can guess some things, but it's not good. Here it's ok.
In my private gaming projects on Godot... I tried using Copilot and it's just terrible, to the point of turning it off. There is Godot code out there showing how an entity handles a collision with another entity, and there are hundreds of variations of it, and it wildly hallucinates between all of them. It's just so distracting and bad. ChatGPT is OK at navigating the documentation, but that's about it.
If I think about my last job, which -- don't ask why -- was writing Java code with low-level concurrency primitives like thread pools, raw synchronized statements and atomic primitives... if I think about my experience with Copilot on code like this, I honestly feel strength leaving my body, because that would be so horrible. I once spent literal months chasing a once-in-a-billion concurrency bug in that code.
IMO, the simplest framework fill-in code segments will suffer most from LLMs. But a well-coached junior can move past that stage quite quickly.
Other than that it completely depends on luck I guess. I'm pretty sure if companies feed in-house information to it that will make it much more useful, but those agents would be privately owned and maintained.
I believe it's actually not that hard to predict what this might be:
1. Real human interaction, guidance and understanding: This, by definition, is impossible to replace with a system, unless the "system" itself is a human.
2. Programming languages will be required in the future as long as humans are expected to interface with machines and work in collaboration with other humans to produce products. In order not to lose control, people will need to understand the full chain of experience required to go from junior SWE to senior SWE - and beyond. Maybe fewer people will be required to produce more products, but still, they will be required as long as humanity doesn't decide to give up control over basically any product that involves software (which will very likely be almost all products).
3. The market will get bigger and bigger to the point where nothing really works without software anymore. Software will most likely be even more important to have a unique selling point than it is now.
4. Moving to a higher level of understanding of how to adapt and learn is beneficial for any individual and actually might be one of the biggest jumps in personal development. This is worth a lot for your career.
5. The current state of software development in most companies that I know has reached a point where I find it actually desirable for change to occur. SWE should improve as a whole. It can do better than Agile for sure. Maybe it's time to "grow up" as a profession.
The expertise to pick the right tool for the right job based on previous experience, which senior engineers possess, is something that can probably be taught to an LLM.
Having the ability to provide a business case for the technology to stakeholders that aren't technologically savvy is going to be a people job for a while still.
I think positioning yourself as an expert / bridge between technology and business is what will future-proof a lot of SWE, but in reality, especially at larger organizations, there will be a trimming process where the workload of what was thought to need 10 engineers can be done with 2 engineers + LLMs.
I'm excited about the future where we're able to create software quicker and more contextual to each specific business need. Knowing how to do that can be an advantage for software engineers of different skill levels.
Could you provide a few examples of roles and companies where this could be applicable please?
AI is just ML (machine learning), which isn't about learning at all. It's about absorbing data and then reiterating it to sound intelligent. Actual learning takes human beings with feelings, hopes, dreams, motivations, passions, etc.
It’s not gonna replace developers anytime soon. It’s going to make you more productive in some things and those that avoid it are shooting themselves in the foot but whatever.
What will happen is software will march forward eating the world like it has for so long. And this industry will continue to change like it always has. And you’ll have to learn new tools like you always did. Same as before same as it will always be.
It's more likely the number of jobs at all level of seniority will decrease, but none will disappear.
What I'm interested to see is how the general availability of LLMs will impact people's "willingness" to learn coding. Will people still "value" coding as an activity worth their time?
For me, as an already "senior" engineer, using LLMs feels like a superpower: when I think of a solution to a problem, I can test and explore some of my ideas faster by interacting with one.
For a beginner, I feel that having all of this available can be super powerful too, but also truly demotivating. Why bother to learn coding when the LLM can already do better than you? It takes years to become "good" at coding, and motivation is key.
As a low-dan Go player, I remember feeling a bit that way when AlphaGo was released. I'm still playing Go but I've lost the willingness to play competitively, now it's just for fun.
A side note - maybe my project is just really trivial, maybe I'm dumber or worse at coding than I thought, or maybe a combination of the above, but LLMs have seemed to produce code that is fine for what we're doing, especially after a few iteration loops. I'm really curious what exactly all these SWEs are working on that is complex enough that LLMs produce unusable code.
But I am focusing on maximizing my total comp so I can retire in 10-15 years if I need to. I think most devs are underestimating where this is eventually going to go.
A longer-term defense doesn't exist. If software engineering is otherwise completely automated by LLMs, we're in AGI territory, and likely recursive self-improvement plays out (perhaps not AI foom, but a huge uptick in capability/intelligence per month or quarter).
In AGI territory, the economy, resource allocation, labor vs. capital all transition into a new regime. If problems that previously took hundreds of engineers working over multiple years can now be built autonomously within minutes, then there's no real way to predict the economic and social dynamics that result from that.
Also, I'm pretty sure this will make outsourcing easier, since foreign engineers will be able to pick up technical skills more easily.
Most importantly, it will be easier to have your code comments, class names, etc. translated into English.
E.g., I used to work in a country whose native language is not related to English (i.e., not Spanish, German, French, etc.), and it was incredibly hard for students and developers to name things in English; it was more natural to name things in their own language.
So even an LLM that takes the code and "translates" it (something no translation tool could do before) opens up a huge pool of developers to the world.
I find a lot of good use for LLMs but it's only as a multiplier with my own effort. It doesn't replace much anything of what I do that actually requires thought. Only the mechanical bits. So that's the first thing I ensure: I'm not involved in "plumbing software development". I don't plug together CRUD apps with databases, backend apis and some frontend muck. I try to ensure that at least 90% of the actual code work is about hard business logic and domain specific problems, and never "stuff that would be the same regardless of whether this thing is about underwear or banking".
If I can delegate something to it, it's every bit as difficult as delegating to another developer. Something we all know is normally harder than doing the job yourself. The difference between AI Alice and junior dev Bob, is that Alice doesn't need sleep. Writing specifications, reviewing changes and ensuring Alice doesn't screw up is every bit as hard as doing the same with Bob.
And here is the kicker: whenever this equation changes, that we have some kind of self-going AI Alice, then we're already at the singularity. Then I'm not worried about my job, I'll be in the forest gathering sticks for my fire.
The current generation of LLMs is immensely expensive, and will become even more so if all the VC money disappears.
A full-time dev is happy to sit there and deal with all the whining, meetings, alignment, 20 iterations of refactoring, architectural changes, and late Friday evenings putting out fires. Making an LLM work 40h/week with that much context would cost an insane amount, plus several people to steer it. Also, the level of ambiguous garbage spewed by management and requirements engineers, which I turn into value, is… difficult for LLMs.
Let's put it this way: before LLMs, we had wonderful outsourcing firms that cost slightly less than maintaining an in-house team; if devs were going to disappear, that would have been the nail in the coffin. LLMs need steering and don't deal well with ambiguity, so I don't see a threat.
Also, for all the people singing the LLM holy song: try asking Windsurf or Cursor to generate something niche that doesn't exist publicly, and see how well it does. As an aside, I closed several PRs last week because people started submitting generated code with 100+ LOC that could have been done in one or two lines if the authors had taken some time to review the latest release of the library.
FWIW I was allowed to use AI at work since ChatGPT appeared and usually it wasn't a big help for coding. However for education and trying to "debug" funny team interactions, I've surely seen some value.
My guess is though that some sort of T-shaped skillset is going to be more important while maintaining a generalist perspective.
Pick an LLM. Any LLM.
Ask it what the goat river crossing puzzle is. With luck, it will tell you about the puzzle involving a boatman, a goat, some vegetable, and some predator. If it doesn’t, it’s disqualified.
Now ask it to do the same puzzle but with two goats and a cabbage (or whatever vegetable it has chosen).
It will start with the goat. Whereupon the other goat eats the cabbage left with it on the shore.
Hopefully this exercise teaches you something important about LLMs.
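For what it's worth, the two-goats variant is easy to brute-force, and doing so shows why the classic move order fails: the first trip must carry the cabbage, not a goat. A quick Python sketch of the search (my own illustration, nothing generated):

```python
from collections import deque

ITEMS = {"goat1", "goat2", "cabbage"}

def safe(bank):
    # A goat eats the cabbage only if the farmer isn't there.
    return not ("cabbage" in bank and ("goat1" in bank or "goat2" in bank))

def solve():
    # State: (items on the left bank, farmer's side); everything starts left.
    start, goal = (frozenset(ITEMS), "L"), (frozenset(), "R")
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (left, side), path = queue.popleft()
        if (left, side) == goal:
            return path
        here = left if side == "L" else ITEMS - left
        for cargo in [None, *here]:  # cross empty, or with one item
            new_left = set(left)
            if cargo is not None:
                (new_left.discard if side == "L" else new_left.add)(cargo)
            # The bank the farmer leaves behind must stay safe.
            behind = new_left if side == "L" else ITEMS - new_left
            if not safe(behind):
                continue
            nxt = (frozenset(new_left), "R" if side == "L" else "L")
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [cargo or "(empty boat)"]))

plan = solve()
```

BFS finds a 7-crossing plan, and the first trip is forced to carry the cabbage; an LLM that reflexively ferries a goat first is pattern-matching the classic puzzle rather than re-deriving the constraints.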
My first try with o1. Seems right to me…what does this teach us about LLMs :)?
Unfortunately, “change it up slightly” is not specific enough for people to do anything with, and anything more specific eventually trains the LLM, so it stops proving the point.
I cannot load this link though.
Memorizing a solution to a classic brainteaser is not the same as having the reasoning skills needed to solve it. Finding out separate solutions for related problems might allow someone to pattern-match, but not to understand. This is about as true for humans as for LLMs. Lots of people ace their courses, even at university level, while being left with questions that demonstrate a stunning lack of comprehension.
It’s been clear for a long time that the major vendors have been watching online chatter and tidying up well-known edge cases by hand. If you have a test that works, it will keep working as long as you don’t share it widely enough to get their attention.
AI, LLMs, ML - none of these have reasoning ability. They're not human; they are software machines, not people. People reason; machines calculate and imitate, they do not reason.
https://chatgpt.com/share/67609bca-dd08-8004-ba27-0f010afc12...
[1] https://www.wired.com/story/tesla-autopilot-why-crash-radar/
The complex answer is we don't really know how good things will get and we could be at the peak for the next 10-20 years, or there could be some serious advancements that make the current generation look like finger-painting toddlers by comparison.
I would say the fear of no junior/mid positions is unfounded, though, since in a generation or two you'd have no senior engineers.
The day LLMs get smart enough to read a chip datasheet and then realize the hardware doesn't behave the way the datasheet claims it does is the day they're smart enough to send a Terminator to remonstrate with whoever is selling the chip anyway so it's a win-win either way, dohohoho.
LLMs and code generation tools are no exception. They will handle some boilerplate and trivial tasks, just like autocompletion, frameworks, and package managers already do. This will make junior-level coding skills less of a differentiator over time. But it is also going to free experienced engineers to spend more time on the complex, high-level challenges that no model can solve right now - negotiating unclear requirements, architecting systems under conflicting constraints, reasoning about trade-offs, ensuring reliability and security, and mentoring teams.
It is less about "Will these tools replace me?" and more about "How do I incorporate these tools into my workflow to build better software faster?" That is the question worth focusing on. History suggests that the demand for making complex software is bottomless, and the limiting factor is almost never just "typing code." LLMs are another abstraction layer. The people who figure out how to use these abstractions effectively, augmenting their human judgment and creativity rather than fighting it, will end up leading the pack.
More small businesses will be able to punch-up with LLMs tearing down walled gardens that were reserved for those with capital to spend on lawyers, consultants and software engineering excellence.
It's doing the same thing as StackOverflow -- hard problems aren't going away, they're becoming more esoteric.
If you're at the edge, you're not going anywhere.
If you're in the middle, you're going to have a lot more opportunities, because your throughput should jump significantly, so your ROI for mom-and-pop shops finally pencils out.
Just be sure you actually ship and you'll be fine.
On the bright side, every programmer can start a business without needing to hire an army of programmers. I think we are getting back to an artisan-based economy where everyone can be a producer without a corporate job.
An example: do you tell the app to generate a Python application to manage customer records, or do you tell it "remember this customer record so other humans or agents can ask for it" and it knows how to do that efficiently and securely?
We'll probably see more 'AI Reliability Engineer' type roles, which will likely be about building and maintaining evaluation datasets, tracking and stomping out edge cases, figuring out human intervention/escalation, model routing, model distillation, context-window vs. fine-tuning trade-offs, and overall intelligence-cost management.
Luckily, the bar has been repeatedly lowered so that customers will accept worse software. The only way for these companies to keep growing at the rate their investors expect them to is to try and cut corners until there's nothing left to cut. Software engineers should just be grateful that the market briefly overvalued them to the degree that did and prepare for a regression to the mean.
It's also incredibly useful for prototyping and doing grunt work. (ie, if you work with lambda functions on AWS, you can get it to spit out a boilerplate for you to amend).
Even if these LLM tools do see massive improvements, it seems to me that they are still going to be very happy to take the set of business rules that a non-developer gives them, and spit out a program that runs but does not do what the user ACTUALLY NEEDS them to do. And the worst thing is that the business user may not find out about the problems initially, will proceed to build on the system, and these problems become deeper and less obvious.
If you agree with me on that, then perhaps what you should focus out is building out your consulting skills and presence, so that you can service the mountains of incoming consulting work.
Except, of course, that isn't true.
It's not enough to make generalizations yet. What kind of projects ? What tuning does it need ? What kind of end users ? What kind of engineers ?
In the field I work in, I can't see how LLMs can help with a clear path to convergence on a reliable product. If anything, I suspect we will need more manual analysis to fix the insanity we receive from our providers if they start working with LLMs.
Some jobs will disappear, but I've yet to see signs of anything serious emerging. You're right about juniors though, but I suspect those who stop training will lose their life insurance and starve under LLMs, either through competition or through the amount of operational instability they will bring.
I'm currently working on a small team with a senior engineer. He's the type of guy who preaches letting Cursor or whatever AI IDE is relevant nowadays do most of the work. Most of his PRs are utter trash. Time to ship is slow and code quality is trash. It's so obvious that the code is AI-generated. Bro doesn't even know how to rebase properly, resulting in overwriting (important) changes instead of fixing conflicts. And guess who has to fix their mistakes (me, and I'm not even a senior yet).
You are assuming AGI will come eventually.
I assume eventually the earth will be consumed by the sun, but I am equally unworried, as I don't see it as the near future.
I am still regularly disappointed when I try out the newest hyped model. They usually fail my tasks and require lots of manual labour.
So if that gets significantly better, I can see them replacing junior devs. But without understanding, they cannot replace a programmer for any serious task. But they maybe enable more people to become good enough programmers for their simple task. So less demand for less skilled devs indeed.
My solution - the same as before - improve my skills and understanding.
With that said, looking back on my FAANG career in OS framework development, I’m not sure how much of my work could have actually been augmented by AI. For the most part, I was designing and building brand new systems, not gluing existing parts together. There would not be a lot of precedent in the training data.
Now ChatGPT has really become an indispensable tool for me, in the same tier as Google and StackOverflow.
So I don't feel threatened so far. I can see the potential, and I think that it's very possible for LLM-based agents to replace me eventually, probably not this generation, but few years later - who knows. But that's just hand waving, so getting worried about possible future is not useful for mental well-being.
I have personally used LLMs in my job to write boilerplate code, write tests, make mass renaming changes that were previously tedious to do without a lot of grep/sed-fu, etc. For these types of tasks, LLMs are already miles ahead of what I was doing before (do it myself by hand, or have a junior engineer do it and get annoyed/burnt out).
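For context, the kind of mechanical mass rename being delegated here fits in a few lines of plain Python too; the helper and the `old`/`new` names are purely illustrative, not anyone's actual tooling:

```python
import re
from pathlib import Path

def mass_rename(root, old, new, glob="*.py"):
    """Word-boundary rename of an identifier across all matching files."""
    pattern = re.compile(rf"\b{re.escape(old)}\b")
    for path in Path(root).rglob(glob):
        text = path.read_text()
        if pattern.search(text):
            path.write_text(pattern.sub(new, text))
```

The LLM's edge over a script like this is the irregular cases (renames that change word order, call sites whose arguments shuffle) that a single regex can't express.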
However, I have yet to see an LLM that can understand an already established large codebase and reliably make well-designed additions to it, in the way that an experienced team of engineers would. I suppose this ability could develop over time with large increases in memory/compute, but even state-of-the-art models today are so far away from being able to act like an actual senior engineer that I'm not worried.
Don't get me wrong, LLMs are incredibly useful in my day-to-day work, but I think of them more as a leap forward in developer tooling, not as an eventual replacement for me.
Long context is practically a solved problem, and there's a ton of work now on test-time reasoning, motivated by o1 showing that it's not that hard to RL a model into superhuman performance as long as the task is easy/cheap to validate (and there are works showing that if you can define the problem, you can use an LLM to validate against your criteria).
Also "as long as the task is easy / cheap to validate" is a problematic statement if we're talking about the replacement of senior software engineers, because problem definition and development of validation criteria are core to the duties of a senior software engineer.
All of this is to say: I could be completely wrong, but I'll believe it when I see it. As I said elsewhere in the comments to another poster, if your points could be expressed in easily testable yes/no propositions with a timeframe attached, I'd likely be willing to bet real money against them.
Here's a recipe for a human level LLM software engineer:
1. Pretrain an LLM on as much code and text as you can (done already)
2. Fine-tune it on synthetic code-specific tasks like: (a) given a function, hide the body, make the model implement it, and validate that the result is functionally equivalent to the target function (output matching); this can also include an objective to optimize the runtime of the implementation; (b) introduce bugs in existing code and make the LLM fix them; (c) make the LLM invent problems, write tests/specs for them, then have it attempt an implementation many times until one passes the tests; (d-z) lots of similar tasks that use linters, parsers, AST modifications, compilers, unit tests, LLM-validated specs, and profilers to check that the produced code is valid
3. Distill this success / failure criteria validator to a value function that can predict probability of success at each token to give immediate reward instead of requiring full roll out, then optimize the LLM on that.
4. At test time use this final LLM to produce multiple versions until one passes the criteria, for the cost of an hour of a software engineer you can have an LLM produce millions of different implementations.
See papers like: https://arxiv.org/abs/2409.15254 or slides from NeurIPS that I mentioned here https://news.ycombinator.com/item?id=42431382
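Step 4 of the recipe, the only part that's cheap to demo, is best-of-n sampling against a validator. A toy sketch, with stand-ins for the model and the sandbox (the hard-coded candidate list fakes LLM samples):

```python
import random

def validate(candidate_src, tests):
    """Run a candidate implementation against the task's test cases."""
    ns = {}
    try:
        exec(candidate_src, ns)  # toy stand-in for a real sandboxed runner
        return all(ns["add"](a, b) == want for a, b, want in tests)
    except Exception:
        return False

def sample(rng):
    """Toy stand-in for LLM sampling: most candidates are buggy."""
    return rng.choice([
        "def add(a, b): return a - b",  # plausible-looking but wrong
        "def add(a, b): return a * b",  # also wrong
        "def add(a, b): return a + b",  # the one that passes
    ])

def best_of_n(tests, n=50, seed=0):
    rng = random.Random(seed)
    for _ in range(n):
        candidate = sample(rng)
        if validate(candidate, tests):
            return candidate
    return None

winner = best_of_n([(1, 2, 3), (0, 0, 0), (-1, 1, 0)])
```

The part the toy hides is exactly what's contested in this thread: producing `tests` good enough that "passes the validator" actually means "correct".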
If you're saying that it takes one software engineer one hour to produce comprehensive criteria that would allow this whole pipeline to work for a non-trivial software engineering task, this is where we violently disagree.
For this reason, I don't believe I'll be convinced by any additional citations or research, only by an actual demonstration of this working end-to-end with minimal human involvement (or at least, meaningfully less human involvement than it would take to just have engineers do the work).
edit: Put another way, what you describe here looks to me to be throwing a huge number of "virtual" low-skilled junior developers at the task and optimizing until you can be confident that one of them will produce a good-enough result. My contention is that this is not a valid methodology for reproducing/replacing the work of senior software engineers.
As an example, Hugging Face just posted an article showing this for math, where with some sampling you can get a 3B model to outperform a 70B one: https://huggingface.co/spaces/HuggingFaceH4/blogpost-scaling...
Formalizing the criteria is not as hard as you're making it out to be. You can have an LLM listen to a conversation with the "customer", ask follow up questions and define a clear spec just like a normal engineer. If you doubt it open up chatGPT, tell it you're working on X and ask it to ask you clarifying questions, then come up with a few proposal plans and then tell it which plan to follow.
I apologize for misinterpreting what you were saying -- I was clearly taking "for the cost of an hour of a software engineer" to mean something that you didn't intend.
> As an example, Hugging Face just posted an article showing this for math, where with some sampling you can get a 3B model to outperform a 70B one
This is not relevant to our discussion. Again, I'm reasonably sure that I'm not going to be convinced by any research demonstrating that X new tech can increase Y metric by Z%.
> Formalizing the criteria is not as hard as you're making it out to be. You can have an LLM listen to a conversation with the "customer", ask follow up questions and define a clear spec just like a normal engineer. If you doubt it open up chatGPT, tell it you're working on X and ask it to ask you clarifying questions, then come up with a few proposal plans and then tell it which plan to follow.
This is much more relevant to our discussion. Do you honestly feel this is an accurate representation of how you'd define the requirements for the pipeline you outlined in your post above? Keep in mind that we're talking about having LLMs work on already-existing large codebases, and I conceded earlier that writing boilerplate/base code for a brand new project is something that LLMs are already quite good at.
Have you worked as a software engineer for a long time? I don't want to assume anything, but all of your points thus far read to me like they're coming from a place of not having worked in software much.
Yes I've been a software engineer working in deep learning for over 10 years, including as an early employee at a leading computer vision company and a founder / CTO of another startup that built multiple large products that ended up getting acquired.
> I apologize for misinterpreting what you were saying -- I was clearly taking "for the cost of an hour of a software engineer" to mean something that you didn't intend.
I meant that, unlike a software engineer, the LLM can do a lot more iterations on the problem for the same budget. So if your boss comes and says "build me a new dashboard page", it can generate thousands of iterations and use a human-aligned reward model to rank them by which one your boss might like best. (That's what test-time compute / sampling at inference does.)
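The generate-then-rank loop described here can be sketched in a few lines. Both functions below are toy stand-ins I made up: `sample_candidate` plays the role of one LLM rollout, and `reward_model` plays the role of a learned human-preference ranker.

```python
import random

def sample_candidate(rng):
    """Stand-in for one LLM rollout: returns a candidate 'dashboard spec'."""
    parts = ["title", "chart", "table", "filler " * rng.randint(0, 9)]
    return " + ".join(rng.sample(parts, rng.randint(1, len(parts))))

def reward_model(candidate):
    """Stand-in for a human-aligned reward model: a crude heuristic
    preferring candidates that have a title and a chart but stay short."""
    score = ("title" in candidate) + ("chart" in candidate)
    return score - 0.01 * len(candidate)

def generate_and_rank(n, top_k=3, seed=0):
    """Sample n rollouts, rank by reward, keep the best few."""
    rng = random.Random(seed)
    candidates = [sample_candidate(rng) for _ in range(n)]
    return sorted(candidates, key=reward_model, reverse=True)[:top_k]

best = generate_and_rank(1000)[0]
```

The whole debate is about whether a reward model (or test suite) can capture what "the dashboard your boss wants" actually means; the loop itself is trivial.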
> This is not relevant to our discussion. Again, I'm reasonably sure that I'm not going to be convinced by any research demonstrating that X new tech can increase Y metric by Z%.
These are not just research papers, people are reproducing these results all over the place. Another example from a few minutes ago: https://x.com/DimitrisPapail/status/1868710703793873144
> This is much more relevant to our discussion. Do you honestly feel this is an accurate representation of how you'd define the requirements for the pipeline you outlined in your post above? Keep in mind that we're talking about having LLMs work on already-existing large codebases,
I'm saying this will be solved pretty soon. Working with large codebases doesn't work well right now because last year's models had shorter context and were not trained to deal with anything longer than a few thousand tokens. Training these models is expensive, so the coding-assistant tools like Cursor / Devin are sitting around waiting for the next iteration of models from Anthropic / OpenAI / Google to fix this issue. We will most likely see announcements of new long-context LLMs in the next 1-2 weeks from Google / OpenAI / DeepSeek / Qwen that make major improvements on large codebases.
I'd also add that we probably don't want huge sprawling code bases, when the cost of a small custom app that solves just your problem goes to 0 we'll have way more tiny apps / microservices that are much easier to maintain and replace when needed.
Maybe I'm not making myself clear, but when I said "demonstrating that X new tech can increase Y metric by Z%" that of course included reproduction of results. Again, this is not relevant to what I'm saying.
I'll repeat some of what I've said in several posts above, but hopefully I can be clearer about my position: while LLMs can generate code, I don't believe they can satisfactorily replace the work of a senior software engineer. I believe this because I don't think there's any viable path from (A) an LLM generates some code to (B) a well-designed, complete, maintainable system is produced that can be arbitrarily improved and extended, with meaningfully lower human time required. I believe this holds true no matter how powerful the LLM in (A) gets, how much it's trained, how long its context is, etc, which is why showing me research or coding benchmarks or huggingface links or some random twitter post is likely not going to change my mind.
> I'd also add that we probably don't want huge sprawling code bases
That's nice, but the reality is that there are lots of monoliths out there, including new ones being built every day. Microservices, while solving some of the problems that monoliths introduce, also have their own problems. Again, your claims reek of inexperience.
Edit: forgot the most important point, which is that you sort of dodged my question about whether you really think that "ask ChatGPT" is sufficient to generate requirements or validation criteria.
Personally - and I realize this is not generalizable advice - I don’t consider myself a SWE but a domain expert who happens to apply code to all of his tasks.
I’ve been intentionally focusing on a specific niche - computer graphics, CAD and computational geometry. For me writing software is part of the necessary investment to render something, model something or convert something from domain to domain.
The fun parts are really fun, but the boring parts are mega-boring. I'm actually eagerly waiting for LLMs to reach some level of human parity, because there simply isn't enough talent in my domains to do all the things that would be worthwhile to do (cost and return on investment, right).
The reason is that my domain is so niche you can't web-scrape-and-label your way to the intuition and experience of two decades working across industries, from graphics benchmarking and automotive HUDs to mission-critical industrial AEC workflows and realtime maps.
There is enough knowledge to train LLMs to get a hint as soon as I tie a few concepts together, and then they fly. But the code they currently write, apart from simple subroutines, is not good enough for them to act as an unsupervised assistant; most of it is honestly useless. Still, I'm optimistic and hope they will improve.
> junior to mid level software engineering will disappear mostly

People don't magically go straight to senior. You can't get seniors without juniors and mid-levels leveling up. We'll always need to take in and train new blood.
Aka never, or at least far enough in the future that you can't really predict or plan for it.
So what does it take for LLM to replace SWE?
1. It needs to get better, much better.
2. It needs to be cheaper still.
Those two things are at odds with each other. If the scaling laws are the god we're preaching to, then we've apparently already hit diminishing returns; maybe if we scale up 1000x we can get AGI, but that won't be economically reasonable for a long time.
Back to reality: what does it take to survive in a market where coding assistants get marginally better over, say, the next 5 years? Just use them; they are genuinely useful tools for the really boring and mundane stuff. Things like writing Dockerfiles will go to the LLM, and humans won't be able to, and won't have to, compete. They are also great for second opinions; it's fun to hear what an LLM thinks of your design proposal and to build on its advice.
Overall, I don't think much will change overnight. The industry might contract in how many developers it hires, and I think the demand won't be back for a long time. For people already in the industry, as long as you keep learning, it's probably going to be fine. Well, for now.
I have notes on particular areas I am focusing on, but I have a small set of general notes on this, and they seem to apply to you SWEs also.
Headline: Remember, data is the new oil.
Qualifier: It's really all about IP portfolios these days.
1) Business Acumen: How does the tech serve the business/client needs, from a holistic perspective of the business? (E.g., sysadmins have long needed big-picture finance, ops, strategy, industry, etc. knowledge.) Aka: turn tech knowledge into business knowledge.
2) Leadership Presence: Ability to meet w/c-suite, engineers, clients, etc, and speak their languages, understand their issues, and solve their issues. (ex: explain ROI impacts for proposals to c-suite)
3) Emotional Intelligence: Relationship building in particular. (Note: this is the thing I neglected most in my career, and I regretted it!)
4) Don't be afraid to use force multiplier tools. In this discussion, that means LLMs, but it can mean other things too. Adopt early, keep up with tooling, but focus on the fundamental tech and don't get bogged down into proprietary stockholm syndrome. Augment yourself to be better, don't try to replace yourself.
----
Now, I know thats a simplistic list, but you asked so I gave you what I had. What I am doing (besides trying to get my mega-uber-huge-sideproject off the ground), is recentering on certain areas I don't think are going anywhere: on-prem, datacenter buildouts, high-compute, ultra-low-latency, scalable systems, energy, construction of all the previous things, and the banking knowledge to round it all out.
If my side-project launch fails, I'm even considering data-center sales instead of the tech side. Why? I'm tired of rescuing the entire business to no fanfare while sales people get half my salary in a bonus. Money aside, I can still learn and participate in the builds as sales (see it happen all the time).
In other words, I took my old-school niche set of knowledge, and adopted it over the years as the industry changed, focusing on what I do best (in this case, operations - aka - the ones who actually get shit into prod, and fix it when it's broke, regardless of the title associated).
All of that, and I was being paid a very handsome amount compared to others outside of tech? Several times over the national average? For gluing some APIs together?
What other professions are like this where there's a good chunk of people who can have such a leisurely life, without taking much risk, and get so highly compensated compared to the rest? I doubt there's many. At some point, the constrained supply must answer to the high demand and reality shows up at the door.
I quit a year into the gig to build my own company. Reality is much different now. But I feel like I've gained many more skills outside of just tech that make me more equipped for whatever the future brings.
1. Similar to autonomous driving going from 90-99% reliability can take longer than 0-90%.
2. You can now use LLMs and public clouds to abstract away a lot of skills that you don't have (managing compute clusters, building iOS and Android apps, etc.). So you can start your 3-person company and do things that previously required hundreds of people.
IMHO, LLMs and cloud computing are similar in that you need a lot of money to build an offering, so perhaps only a few big players are going to survive.
See the GAIA benchmark. While it will surely be beaten soon enough, the point is that we do exponentially longer-horizon tasks than that benchmark every single day.
It's very possible we will move away from raw code implementation, but the core concepts of solving long horizon problems via multiple interconnected steps are exponentially far away. If AI can achieve that, then we are all out of a job, not just some of us.
Take 2 competing companies that have a duopoly on a market.
Company 1 uses AI and fires 80% their workforce.
Company 2 uses ai and keeps their workforce.
AI in its current form is a multiplier, so we will see Company 2 massively outcompete the first as each employee now performs 3-10 people's tasks. Company 2's output per person increases dramatically, which significantly weakens the first company. Standard market forces haven't changed.
The reality, as I see it, is that interns will now perform at senior-SWE level, senior SWEs will perform at VP-of-engineering level, and VPs of engineering will perform at nation-state levels of output.
We will enter an age where goliath companies are commonplace: hundreds or even thousands of trillion-dollar companies. Billion-dollar startups will be expected almost at launch.
Again, unless we magically find a solution to long horizon problems (which we haven't even slightly found). That technology could be 1 year or 100 years away. We're waiting on our generation's Einstein to discover it.
On the other hand, that makes them weaker if competition comes along, since consumers and businesses can then compare and will be expected to demand significantly more.
A large model knows all of this as well. We already rely on generative language model conversations to fill in the knowledge gaps that Googling for documentation (or “how do I do X?” stackoverflow answers) filled.
What’s harder is debugging. A lot of debugging is guesswork and action taking, note-taking, and brain-storming for possible ideas as to why X crashes on Y input.
Bugs that boil down to isolating a component and narrowing down what’s not working are hard. Being able to debug them could be the moat that will protect us SWEs from redundancy. Alternatively, pioneering all the new ways of getting reproducible builds and reproducible environments will be the route to eliminating this class of bug entirely, or at least being able to confidently say that some bug was probably due to bad memory, bad power supplies, or bad luck.
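That "narrowing down" workflow is essentially binary search over a history or a component graph, which is why tools like `git bisect` pay off so well. A minimal sketch of the idea (the revision list and the failure predicate are hypothetical):

```python
def bisect_first_bad(revisions, is_bad):
    """Find the first failing revision in O(log n) checks,
    assuming revisions[0] is good and revisions[-1] is bad."""
    lo, hi = 0, len(revisions) - 1
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if is_bad(revisions[mid]):
            hi = mid   # bug is at mid or earlier
        else:
            lo = mid   # bug landed after mid
    return revisions[hi]

# Hypothetical 20-revision history where the bug landed at revision 13:
history = list(range(20))
first_bad = bisect_first_bad(history, lambda r: r >= 13)  # -> 13
```

The hard part for an LLM (or a human) is not this loop; it is inventing a cheap, reliable `is_bad` predicate for a flaky, environment-dependent bug.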
(1) "Finding a niche in which my skills are still relevant due to technical constraints",
(2) "Looking for a different job going forward", or
(3) "Learning to operate the new tool".
Personally I'm in category (3), but I'd be most interested in hearing from others who think they are on track to (1). What are the areas where LLMs will not penetrate due to technical reasons, and why? I'm sure these areas exist, I just have trouble imagining which they are!
----
One might argue that there's another alternative of finding a niche where LLMs won't penetrate due to regulatory constraints, but I'm not counting this because that tends to be a short-term optimisation.
Along the way I was an "incident manager" at a couple different places, meaning I was basically a full-time Incident Commander under the Google SRE model. This work was always fun, but the hours weren't great (these kinds of jobs are always "coverage" jobs where you need to line up a replacement when you want to take leave, somebody has to work holidays, etc.). Essentially I'd show up at work and paint the factory by making sure our documentation was up to date, work on some light automation to help us in the heat of the moment, and wait for other teams to break something. Then I'd fire up a bridge and start troubleshooting, bringing in other people as necessary.
This didn't seem like something to retire from, but I can imagine it being something that comes back, and I may have to return to it to keep food on the table. It is exactly the kind of thing that needs a "human touch".
Junior devs aren't going away. What might shrink is the gap between when a junior dev is hired and when the effort and investment in them gets them to the real start line of adding value, before they hop ship.
AI agents will become their coding partners, able to teach and code with the junior dev, leading to more reliable contributions to a codebase, sooner.
By teach and code with, I mean explaining so much of the basic stuff, step by step, tirelessly, in the exact way each junior dev needs, to help them grow and advance.
This will allow SWE's to move up the ladder and work on more valuable work (understanding problems and opportunities, for example) and solve higher level problems or from a higher perspective.
Specifically the focus of Junior Devs on problems, or problems sets could give way to placing them in opportunities to be figured out and solved.
LLMs can write code today, but I'm not sure they can manage clean changes to an entire codebase on their own today, at scale or for many teams. Some folks have probably quietly figured this out for their particular use cases.
Who knows though in 10 years time I imagine things will be radically different and I intend to periodically use the latest AI assistance so I keep up, even if it’s a world I don’t necessarily want. Part of why I love to code is the craft and AI generated code loses that for me.
I do, however, feel really lucky to be at the senior end of things now. Because I think junior roles are seriously at risk. Lots of the corrections needed for LLMs seem to be the same kind of errors new grads make.
The problem is - what happens when all us seniors are dead/retired and there’s no juniors because they got wiped out by AI.
> So, fellow software engineers, how do you future-proof your career in light of, the inevitable, LLM take over?
I feel that software engineering being taken over by LLM is a pipe dream. Some other, higher, form of AI? Inevitably. LLMs, as current models exists and expand? They're facing a fair few hurdles that they cannot easily bypass.
To name a few: requirement gathering, scoping, distinguishing between different toolsets, comparing solutions objectively, keeping up with changes in software/libraries... etc. etc.
Personally? I see LLMs tapering off in new developments over the following few years, and I see salesmen trying to get a lot of early adopters to hold the bag. They're overpromising, and the eventual under-delivery will hurt. Much like the AI winter did.
But I also see a new paradigm coming down the road, once we've got a stateful "intelligent" model that can learn and adapt faster, and can perceive time more innately... but that might take decades (or a few years, you never know with these things). I genuinely don't think it'll be a direct evolution of LLMs we're working on now. It'll be a pivot.
So, I future-proof my career simply: I keep up with the tools and learn how to work around them. When planning my teams, I don't intend to hire 5 juniors to grind code, but 2 who'll utilize LLMs to teach them more.
I more often ask my junior peers to show me their LLM queries before I go and explain things directly. I also teach them to prompt better. A lot of the stuff we had to explain manually in the past can now be prompted well, and the stuff that can't, I explain.
I also spend A LOT of time teaching people to take EVERYTHING AI-generated with generous skepticism. Unless you're writing toys and tiny scripts, hallucinations _will_ waste your time. Often the juice won't be worth the squeeze.
More than a few times I've spent a tedious hour navigating 4o's or Claude's hallucinated confident failures, instead of a pleasant and productive 45 minutes writing the code myself... and from peer discussions, I'm not alone.
I think juniors are a significant audience for LLM code production because it gives them tremendous leverage for making new things. For more experienced folk, there are lots of choices that resemble prior waves of adopting new state-of-the-art tools and techniques. And as it always goes, adoption in legacy environments will go more slowly, while disruption of legacy products and services with a heavy cost profile may occur more frequently as the new economics of building and then operating something intelligent start to appear.
Do we have sufficient data that spans the entire problem space that SWE deals with? Probably not, and even if we did it would still be imperfectly modeled.
Do we have sufficient data to span the space of many routine tasks in SWE? It seems so, and this is where the LLMs are really nice: e.g., scripting, regurgitating examples, etc.
So to me, much like previous innovation, it will just shift job focus away from the things the innovation can do well, rather than replacing the field as a whole.
One pet theory I have is that we currently suck at assessing model performance. Sure, vibes-based analysis of a model's outputs makes them look amazing, but isn't that the literal point of RLHF? How good are these outputs really?
What they excel at, in my experience, is translating code to different languages, and they do find alternative dependencies in different runtime environments.
The code can be weird, and prompting can take longer than writing it yourself, but it's still nice support, even if you need to check the results. I only use a local LLM where I embed some of my own code.
I am still not sure if LLM are a boon for learning to code or if it is a hindrance. I tend to think it is a huge help.
As for future-proofing your career, I don't think developers need to be afraid that AI will write all our code yet, simply because non-software engineers suck at defining good requirements for software. I also believe LLMs seem to hit walls on precision.
Some other industries might change significantly though.
It's also good as a replacement for reading the docs/Google search, especially with search getting worse and full of SEO spam lately.
It's a big help, but it doesn't really replace the human. When the human can be replaced, any job done in front of a computer will be at risk, not just coders. I hope when that happens there will be full robot bodies with AI to do all of the other work.
Also, I know of several designers who can't code but are planning to use LLMs to make their startup ideas a reality by building an MVP. If those startups take off, they will probably hire real human coders, thus creating more coding jobs: jobs that would not exist without LLMs getting the original ideas off the ground.
> First, we want to establish the idea that a computer language is not just a way of getting a computer to perform operations, but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute.
From this perspective, the code base isn’t just an artifact left over from the struggle of getting the computer to understand the business’s problems. Instead, it is an evolving methodological documentation (for humans) of how the business operates.
Thought experiment: suppose that you could endlessly iterate with an LLM using natural language to build a complex system to run your business. However, there is no source code emitted. You just get a black box executable. However, the LLM will endlessly iterate on this black box for you as you desire to improve the system.
Would you run a business with a system like this?
For me, it depends on the business. For example, I wouldn’t start Google this way.
So should you be worried they will replace you? No. You should worry about not adopting the technology in some form, otherwise your peers will outpace you.
I think a lot of engineers are in for some level of rude awakening. Many haven't applied business/humanities thinking to this, and I think a lot of corporations care less about code quality than even our most pessimistic estimates. It wouldn't surprise me if cuts over the next few years get even deeper, and a lot of high-performing (read: high-paying) jobs get cut. I've seen so many comments like "this will improve engineering overall, as bad engineers get laid off", and I don't think it's going to work like that.
Anecdotally, no one from my network has actually recovered from the post-COVID layoffs, and the layoffs haven't even stopped. I know loads of people who don't feel they'll ever get a job as good as the one they had in 2021.
Ask an LLM to generate you 100 more lines of code, no problem you will get something. Ask the same LLM to look at 10000 lines of code and intelligently remove 100... good luck with that!
seriously, I tried uploading some (but not all) source code of my company to our private Azure OpenAI GPT 4o for analysis, as a 48 MB cora-generated context file, and really the usefulness is not that great. And don't get me started about Copilot's suggestions.
Someone really has to know their way around the beast, and LLM's cover a very very small part of the story.
I fear that the main effect of LLMs will be that developers that have already for so long responded to their job-security fears with obfuscation and monstrosity... will be empowered to produce even more of that.
These two tasks have very different difficulty levels, though. It would be the same with a human coder. If you give me a new 10k-SLOC codebase and ask me to add a method to cover some new case, I can probably do it in an hour to a day, depending on my familiarity with the language, the subject matter, the codebase's overall state, documentation, etc.
A new 10k codebase and a task of removing 100 lines? That's probably at least half a week to understand how it all works (disregarding simple cases, like a hundred-line comment block of old code) before I can make such a change safely.
(this is my naive tactic, I am sure sama and co will find a way to suck me down the drain with everyone else in the end)
Carmack’s tweet feels out of touch. He says we should focus only on business value, not engineering. But what made software engineering great was that nerds could dive deep into technical challenges while delivering business value. That balance is eroding - now, only the business side seems to matter.
We also already have "easier ways of writing software" - website builders, open source libraries, StackOverflow answers, etc.
Software builds on itself. It's already faster to copy and paste someone's GitHub repo of their snake game than to ask an LLM to build a snake game for you. Software problems will continue to get more challenging as we create more complex software with unsolved answers.
If anything, software engineers will be more and more valuable in the future (just as the past few decades have shown how software engineers have become increasingly more valuable). Those that code with LLMs won't be able to retain their jobs solving the harder problems of tomorrow.
I haven't written any actual code for a couple of decades, after all: I just waffle around stitching together vague, high-level descriptions of what I want the machine to do, and robots write the code for me! I don't even have to manage memory anymore; the robots do it all. The robots are even getting pretty good at finding certain kinds of bugs now. A wondrous world of miracles, it is... but somehow there are still plenty of jobs for us computer-manipulating humans.
This. Programming will become easier for everyone. But the emergent effect will be that senior engineers become more valuable, juniors much less.
Why? It's an idea multiplier. 10x of near-zero is still almost zero. And 10x of someone who's a 10 already - they never need to interact with a junior engineer again.
> until eventually LLMs will become so good, that senior people won't be needed any more.
Who will write the prompts? How do you measure the success? Who will plan the market strategy? Humans are needed in the loop by definition as we build software to achieve human goals. We'll just need significantly fewer people to achieve them.
I worry where we will get the next generation of senior engineers.
AI aside, we probably should have done this a long time ago. Software for software's sake tends to build things that treat users poorly. Focusing on sectors that could benefit from software, rather than treating software itself like a sector, seems to me a better way.
I know that sounds like giving up, but look around and ask how much of the software we work on is actually helping anybody. Let's all go get real jobs. And if you take an honest look at your job and think it's plenty real, well congrats, but I'd wager you're in the minority.
o1 designed some code for me a few hours ago where the method it named "increment" also did the "limit-check", and "disable" functionality as side-effects.
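A reconstruction of that shape of design (hypothetical, not the actual generated code), next to a version where each method does only what its name says:

```python
# Roughly the kind of design described above: one method quietly
# doing three jobs as side effects.
class CounterBad:
    def __init__(self, limit):
        self.value, self.limit, self.enabled = 0, limit, True

    def increment(self):              # also limit-checks and disables!
        self.value += 1
        if self.value >= self.limit:
            self.enabled = False

# Separated concerns: the caller decides what hitting the limit means.
class Counter:
    def __init__(self, limit):
        self.value, self.limit, self.enabled = 0, limit, True

    def increment(self):
        self.value += 1

    def at_limit(self):
        return self.value >= self.limit

    def disable(self):
        self.enabled = False

c = Counter(3)
for _ in range(3):
    c.increment()
if c.at_limit():
    c.disable()
```

The first version "works", which is exactly the problem: the misleading name survives review until someone is surprised by the hidden disable.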
In the longer run, SWE's evolve to become these other roles, but on-steroids:
- Entrepreneur
- Product Manager
- Architect
- QA
- DevOps
- Inventor
Someone still has to make sure the solution is needed, the right fit for the problem given the existing ecosystems, check the code, deploy the code and debug problems. And even if those tasks take fewer people, how many more entrepreneurs become enabled by fast code generation?
But, generally I don’t see AI as currently such a boon to productivity that it would eliminate programming. Right now, it’s no where near as disruptive as easy to install shared libraries (e.g. npm). Sure, I get lines of code here and there from AI, but in the 90’s I was programming lots of stuff that I just get instantaneously for free with constant updates.
Seriously? It actually is an industry where people started coding in C and have retired recently, still working in C.
I don’t think LLMs are taking jobs today, but I now clearly see a principle that has fully emerged: non-tech people now have a lust for achieving and deploying various technology solutions, and the tech sector will fulfill it in the next decade.
Get ahead early.
If the output cannot be trusted, humans will always be required to review and make corrections, period. CEOs that make the short-sighted mistake of attempting to replace engineers with LLMs will learn this the hard way.
I’d be far more worried about something resembling AGI that can actually learn and reason.
Redefine what your career is.
2017-2018: LLMs could produce convincing garbage with heavy duty machines.
Now:
- You can have good enough LLMs running on a laptop with reasonable speeds.
- LLMs can build code from just descriptions (PyCharm Pro just released this feature)
- You can take screenshots of issues and the LLM will walk through how to fix these issues.
- Live video analysis is possible, just not available to the public yet.
The technology is rapidly accelerating.
It is better to think of your career as part of the cottage industry at the start of the industrial revolution.
There will likely be such jobs existing, but not at the required volume of employees we have now.
Focus on soft skills: communication, problem-solving, social intelligence, collaboration, connecting ideas, and other "less technical" abilities. The kind of management and interpersonal stuff that machines can't replicate (yet).
It's real. Subscribing to an LLM provider for $20/month feels like a better deal than hiring an average-skilled software engineer.
It's even funnier when we realize that people we hire are just going to end up prompting the LLM anyway. So, why bother hiring?
We really need next-level skills. Beyond what LLM can handle.
I've yet to have an LLM ever produce correct code the first time, or understand a problem at the level of anything above a developer that went through a "crash course".
If we get LLMs trained on senior-level codebases and senior-level communications, they may stand a chance someday. Given that they are trained using the massively available "knowledge" sites like Reddit, it's going to be a while.
The developer in fact has even more responsibility, because expectations have gone up: even more code is being produced, and someone has to maintain it.
So there will be a point when that developer throws their hands up and asks for more people, who will have to be hired.
Basically the boiling point has gone up (or down, depending on how you interpret the analogy), but the water still has to be boiled more or less the same way.
Until we have AGI - then it's all over.
Even if AI coding becomes the norm, people who actually understand software will write better prompts.
The current quality of generated code is awful anyway. We are not there yet.
But there is a very simple solution for people who really dislike AI: licensing. Disallow AI from contributing to and using your FOSS code, and if they do, sue the entity that trained the model. Easy money.
Also, being a software developer is not primarily writing code, these days it is much more about operating and maintaining production services and long running projects. An LLM can spit out a web app that you think works, but when it goes wrong or you need to do a database migration, you're going to want someone who can actually understand the code and debug the issue.
Getting and nailing a good opportunity opens doors for a fair bit, but that's rare.
So the vast, vast majority of future-proofing is relationship-building - in tech, with people who don't build relationships per se.
And realize that decision-makers are business people (yes, even the product leads and tech leads are making business decisions). Deciders can control their destiny, but they're often more exposed to business vicissitudes. There be dragons - also gold.
To me, the main risk of LLMs is not that they'll take over my coding, but that they'll take over the socialization/community of developers - sharing neat tricks and solutions - that builds tech relationships. People will stop writing hard OS software blogs and giving presentations after LLMs intermediate to copy code and offer solutions. Teams will become hub-and-spoke, with leads enforcing architecture and devs talking mostly to their copilots. That means we won't be able to find like-minded people, and no one will have any incentive, or even occasion, to share knowledge. My guess is that relationship skills will be even more valued, but perhaps also a bit fruitless.
Doctors and lawyers and even business people have a professional identity from their schooling, certification, and shared history. Developers and hackers don't; you're only as relevant as your skills on point. So there's a bit of complaining but no structured resistance on the part of developers to their work being factored, outsourced, and now mediated by LLMs (which they're busily building).
Developers have always lived in the shadow of the systems they're building. You used to have to pay good money for compilers and tools, licenses to APIs, etc. All the big guys realized that by giving away that knowledge and those tools, they make the overall cost cheaper, and they can capture more. We've been free-riding for 30 years, and it's led us to believe that skills matter most. LLMs are a promising way to sell cloud services and expensive hardware, so there will be even more willingness than with crypto or car-sharing or real estate or whatever to invest in anything disruptive. We rode the tide in, and it will take us out again.
If it could also help management understand their own issues and how to integrate solutions into their software, then it would be great!
The core here is that if engineering is going, then law is going, marketing is going, and a lot of other professions are also going.
This means that we have structural issues we need to solve then.
And in that case it is about something else than my own employability.
The difference is reliance costs. We go to a doctor or a lawyer instead of a book because the risk of mistake is too high.
But development work can be factored; the guarantees are in the testing and deployment. At the margins maybe you trust a tech lead's time estimates or judgment calls, but you don't really rely on them as much as prove them out. The reliance costs became digestible when agile reduced scope to tiny steps, and they were digested by testing, cloud scaling, outsourcing, etc. What's left is a fig leaf of design, but ideas are basically free.
Let me put it like this: when LLMs write system-scale software that is formally verifiable or completely testable, I promise you that you won't trust your doctor more than you trust the AI.
We already see indications that diagnosis is better without human (doctor) intervention.
Fast forward ten years with that development and nobody dares to go to the doctor.
This is a path I would recommend with or without LLM's in the picture.
Rather than discuss the current “quality of code produced”, it seems more useful to talk about what level of abstraction it’s competent at (line, function, feature), and whether there are any ceilings there. Of course the effective level of abstraction depends on the domain, but you get the idea: it can generate good code at some level if prompted sufficiently well.
There are 4.5 million SWEs in the USA alone. Of those, how many are great at what they do? 0.5% at best. How many are good? 5% tops. Average/mediocre? 50%. Below average to downright awful: the rest.
While LLMs won’t be touching the great/good group in any near future, they 100% will touch the awful one, as well as the average/mediocre one.
If you enjoy writing code, you might have to make it a hobby like gardening, instead of earning money from it.
But the breed of startup founder (junior and senior) that’s hustling for the love of building a product that adds value to users, will be fine.
The quality of the code is as bad as it was two years ago; the mistakes are always there somewhere and take a long time to spot, to the point where it's somewhat of a useless party trick to actually use an LLM for software development.
And for more senior work the code is not what matters anyway; it's reassuring other stakeholders, budgeting, estimation, documentation, evangelization, etc.
They're tools that can make you more efficient, but they still need a human to function and guide them.
Grok and o1 are great examples of how these plateaus also won't be overcome with more capital and compute.
Agentic systems might become great search/research tools to speed up the time it takes to gather (human created) info from the web, but I don't see them creating anything impressive or novel on their own without a completely different architecture.
As someone who's personally tried with great success to build agentic systems over the last 6 months, you need to be aware of how fast these things are improving. The latest Claude Sonnet makes GPT-3.5 look like a research toy. Things are trivial now in the code gen space that were impossible just earlier this year. Anyone not paying attention is missing the boat.
Like what? You're the only person I've seen claim they've built agentic systems with great success. I don't regard improved chatbot outputs as success; I'm talking about agentic systems that can roll their own auth from scratch, or gather data from the web independently and build even a mediocre prediction model with that data. Or code anything halfway decently in something other than Python.
It's just like the self-driving car—great for very simple applications but when will human drivers become a thing of the past? Not any time soon. Not to mention the hype curve has already crested: https://news.ycombinator.com/item?id=42381637
IMO the number and the quality / devotion of programmers will go back to levels of pre-web/js, or even pre-visual-Basic. They would be programming somewhat differently than today. But that's a (rosy) prediction, and it probably is wrong. The not-rosy version is that all common software (and that's your toaster too) will become shitmare, with the consequence everyone will live in order to fix/workaround/"serve"-in-a-way it, instead of using it to live.
Or maybe something in the middle?
I wouldn't worry about being replaced by an LLM, I'd worry about falling behind and being replaced by a human augmented with an LLM.
I.e., get into the LLM/AI business
That statement makes no sense. It's a skill progression. There are no senior levels of anything without the junior level as a staging ground for learning the trade and then feeding the senior level.
Klarna said they stopped hiring a year ago because AI solved all their problems [1]. That's why they have 55 job openings right now, obviously [2] (including quite a few listed as "Contractor"; the utterly classic "we fucked up our staffing"). This kind of disconnect isn't even surprising; it's exactly what I'd predict. Business leadership nowadays is so far disconnected from the reality of what's happening day-to-day in their businesses that they say things like this with total authenticity, they get a bunch of nods, and things just keep going the way they've been going. Benioff at Salesforce said basically the same thing. These are, put simply, people who have such low legibility into the mechanism of how their business makes money that they believe they understand how it can be replaced; and they're surrounded by men who nod and say "oh yes, Mark, of course we'll stop hiring engineers", and then somehow, conveniently, that message never makes it to HR, because those yes-men who surround him are the real people who run the business.
AI cannot replace people; AI augments people. If you say you've stopped hiring thanks to AI, what you're really saying is that your growth has stalled. The AI might grant your existing workforce an N% boon to productivity, but that's a one-time boon barring any major model breakthroughs (don't count on it). If you want to unlock more growth, you'll need to hire more people, but what you're stating is that you don't think more growth is in the cards for your business.
That's what these leaders are saying, at the end of the day; and it's a reflection of the macroeconomic climate, not of the impacts of AI. These are dead businesses. They'll lumber along for decades, but their growth is gone.
[1] https://finance.yahoo.com/news/klarna-stopped-hiring-ago-rep...
> AI cannot replace people; AI augments people.
Here’s where we slightly disagree. If AI augments people (100% does) it makes those people more productive (from my personal experience I am ballparking currently I am 40-45% more productive) and hence some people will get replaced. Plausibly in high-growth companies we’ll just all be more productive and will crank out 40-45% more products/features/… but in other places me being 40-45% more productive may mean other people might not be needed (think every fixed-price government contract - this is 100’s of billions of dollar market…)
Tooling improvements leading to productivity boons ain't a new concept in software engineering. We don't code in assembly anymore, because writing JavaScript is more efficient. In fact, think on this: Which would you grade as a higher productivity boon, as a percentage: Using JavaScript over Assembly, or using AI to write JavaScript over hand-writing it?
Followup: If JavaScript was such a 10,000% productivity boon over assembly, to AI's paltry 40-45%: Why aren't we worried when new programming languages drop? I don't shiver in my boots when AWS announces a new service, or when Vercel drops a new open source framework.
At the end of the day: there's an infinite appetite for software, and someone has to wire it all up. AI is, itself, software. One thing all engineers should know by now is: Introducing new software only increases the need for the magicians who wrangle it, and AI is as subject to that law as JavaScript, the next expensive AWS service, or next useless Vercel open source project.
I agree with this 100%. The core issue to ponder is this: JavaScript was probably a 1,000,000% productivity boon over assembly, no question about that, but it did not offer much in the form of "automation" so to speak. It was just a new language that, luckily for it, became the de facto language that browsers understand. You and I have spent countless hours writing JS, TS code, etc.

The question here is whether LLMs can automate things or not. The single greatest trait in the absolute best SWEs I ever worked with (and that is a lot of them, two and a half decades plus doing this) is LAZINESS. The greatest SWEs are lazy by nature, and we tend to look to automate everything that can be automated. JavaScript is not helping me a whole lot with automation, but LLMs just might: writing docs, writing property-based tests for every function I write, writing integration tests for every endpoint I write, etc.

In these discussions you can always tell the "junior" developers from the "senior" developers: "juniors" will fight the fight ("no way I'm getting replaced here, I do all this stuff LLMs can't even dream of") while "seniors" are going "I have already automated 30-40-50% of the work that I used to do..."
The most fascinating part to me is that those same "juniors", in the same threads, argue things like "SWEs are not just writing code, there are ALL these other things we have to do" without realizing that it is exactly all those "other things" that LLMs might let you automate your way out of, fully or partially...
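To make the "property-based tests for every function" point concrete, here is a minimal sketch of what such a test looks like, hand-rolled with the stdlib rather than a dedicated framework; the `encode`/`decode` pair is a hypothetical stand-in for whatever function you would actually ask an LLM to draft tests for:

```python
import random

# Hypothetical pair standing in for any function worth testing.
def encode(xs: list[int]) -> str:
    return ",".join(str(x) for x in xs)

def decode(s: str) -> list[int]:
    return [int(x) for x in s.split(",")] if s else []

def check_roundtrip(trials: int = 200) -> None:
    """Property: decode(encode(xs)) == xs for arbitrary integer lists."""
    rng = random.Random(0)  # seeded so failures are reproducible
    for _ in range(trials):
        xs = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 20))]
        assert decode(encode(xs)) == xs, f"round-trip failed for {xs}"

check_roundtrip()
```

The point of the pattern is that you state one invariant instead of enumerating cases, which is exactly the kind of boilerplate-heavy, mechanical work an LLM can plausibly draft for you.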
- if your response to LLMs taking over is "they're bad and it's not going to happen", I think you've basically chosen to flip a coin, and that's fine; you might be right (personally I do think this take is right, but again, it's a coin flip)
- if your response is "engineering is about a lot more than writing code" you're right, but the "writing code" part is like 90% of the reason why you get paid what you do, so you've chosen the path of getting paid less, and you're still flipping a coin that LLMs won't come for that other stuff.
- the only black-pilled response is "someone has to talk to the LLM", and if you spend enough time with that line of thinking you inevitably uncover some interesting possibilities about how the world will be arranged as these things become more prevalent. For me, it's that larger companies probably won't get much larger - we've hit peak megacorp - but smaller companies will become more numerous as individual leverage is amplified.
[…]
> the only black-pilled response is “someone has to talk to the llm”,
This is literally the exact same response as “engineering is about a lot more than writing code”, since “talking to the LLM” when the LLM is the main code writer is exactly doing the non-code-writing tasks while delegating code writing to the LLM which you supervise. Kind of like a lead engineer does anyway, where they do much of the design work, and delegate and guide most of the code writing (while still doing some of it themselves.)
Some part of that involves knowing how AI would help, most doesn't.
Where I used to be able to get by with babysitting shell scripts that only lived on the server, we're now in a world with endless abstraction. I don't hazard to guess; just learn what I can to remain adaptable.
The fundamentals tend to generally apply
However, true AGI would change everything, since the AGI could create specialized agents by itself :)
Unless there’s a huge beneficial shift in cheap and clean energy production and distribution, and fast, climate change and its consequences on society and industries (already started) outweighs and even threatens LLMs (a 2-5 years horizon worry).
Then there are the companies that own LLMs large enough to do excellent coding. Consider the "haves" and "have-nots": those that have the capital to incorporate these amazing LLMs and those that do not.
LLMs help more with the last part which is often considered the lowest level. So if you're someone who just wants to code and not have to deal with people or business, you're more at risk.
LOL - this is where LLMs are being used the most right now!
So I don't care at all
AI coding tools are increasingly proving to be some of the highest leverage tools we’ve seen in decades. They still require some skill to use effectively and have unique trade-offs, though. Mastering them is the same as anything else in engineering, things are just moving faster than we’re used to.
The next generation of successful engineers will be able to do more with less, producing the output of an entire team by themselves.
Be one of those 100x engineers, focused on outcomes, and you'll always be valuable.
What is it for you then? My role isn't software engineer, but with a background in computer engineering, I see programming as a tool to solve problems.
That being said, you can future-proof your career by learning actual concepts of engineering: process improvement, performance, accessibility, security analysis, and so on. LLMs, as many other comments have said, remain extremely unreliable, but they can already do highly repeatable, low-cost tasks like framework stuff really well.
In addition to actually learning to program here are other things that will also future proof your career:
* Diversify. Also learn and certify in project management, cyber security, API management, cloud operations, networking, and more. Nobody is remotely close to trusting AI to perform any kind of management or analysis.
* Operations. Get good at doing human operations things like DevOps, task delegation, release management, and 24-hour uptime.
* Security. Get a national level security clearance and security certifications. Cleared jobs remain in demand and tend to pay more. If you can get somebody to sponsor you for a TS you are extremely in demand. I work from home and don't use my TS at all, but it still got me my current job and greatly increases my value relative to my peers.
* Writing. Get better at writing. If you are better at written communications than your peers you are less replaceable. I am not talking about writing books or even just writing documentation. I am talking about day-to-day writing emails. In large organizations so many problems are solved by just writing clarifying requirements and eliminating confusion that requires a supreme technical understanding of the problems without writing any code. This one thing is responsible for my last promotion.
Alarming in the same way as a company announcing that their tech is unhackable. We all know what happens next in the plot
Working in ML my primary role is using new advances like LLMs to solve business problems.
It is incredible though, how quickly new tools and approaches turn over.
As long as someone needs to be held accountable, you will need humans.
As long as you're doing novel work not in the training set, you will probably need humans.
I think whoever commented once here about more complex problems being tackled, (and the nature of these problems becoming broader) is right on the money. Newer patterns around LLM-based applications are emerging and having seen them first hand, they seem like a slightly different paradigm shift in programming. But they are still, at heart, programming questions.
A practical example: company sees GenAI chatbot, wants one of their own, based on their in-house knowledge base.
Right then and there, a whole slew of new business needs ensues, with necessary human input to make them work.
- Is training your own LLM needed? See a Data Engineer/Data engineering team.
- If going with a ready-made solution, which LLM to use instead? Engineer. Any level.
- Infrastructure around the LLM of choice. Get DevOps folk in here. Cost assessment is real and LLMs are pricey. You have to be on top of your game to estimate stuff here.
- Guard rails, output validation. Engineers.
- Hooking up to whatever app front-end the company has. Engineers come to the rescue again.
All these have valid needs for engineers, architects/staff/senior what have you — programmers. At the end of the day, these problems devolve into the same ol' https://programming-motherfucker.com
And I'm OK with that so far.
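The pipeline in that list can be sketched end-to-end. Everything below is a hypothetical stand-in (the `vector_index` store, the `llm_complete` callable, the banned-topics guard rail), not any particular vendor's API; it only illustrates where the engineering decisions from the bullets land in code:

```python
# Hypothetical sketch of the in-house chatbot pipeline described above.
# All names (llm_complete, vector_index, etc.) are illustrative, not a real API.

def retrieve_context(question: str, vector_index, k: int = 5) -> list[str]:
    """Pull the k most relevant knowledge-base chunks (the data engineering work)."""
    return vector_index.search(question, top_k=k)

def validate_output(answer_text: str, banned_topics: list[str]) -> str:
    """Guard rails / output validation: refuse answers touching disallowed topics."""
    if any(topic in answer_text.lower() for topic in banned_topics):
        return "Sorry, I can't help with that."
    return answer_text

def answer(question: str, vector_index, llm_complete, banned_topics: list[str]) -> str:
    """Glue the retrieval, the LLM call, and the guard rail together."""
    context = "\n".join(retrieve_context(question, vector_index))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return validate_output(llm_complete(prompt), banned_topics)
```

Every seam in this sketch - which index, which model behind `llm_complete`, what the guard rail checks, where it deploys - is one of the engineering roles the list above calls out.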
Is there some secret AI available that isn't by OpenAI or Microsoft? Because this sounds like complete hogwash.
Most of my code is written by AI, but it seems most of my job is arranging that code.
It saves me 50-80% of my keystrokes, but it sprinkles subtle errors here and there and doesn't seem to understand the whole architecture.
Any help? What AI can do this?
Take, for example, the point about integrating AI models with legacy data systems. It’s one thing to build an LLM or a recommendation engine, but when you try to deploy that in an environment where the primary data source is a 20-year-old relational database with inconsistent schema updates, things get messy quickly. Teams end up spending a disproportionate amount of time wrangling data rather than delivering value.
Another issue that’s not often discussed is user onboarding and adoption friction. Developers can get carried away by the technical possibilities but fail to consider how the end-users will interact with the product. For instance, in highly regulated industries like healthcare or finance, even small changes in UI/UX or workflow can lead to significant pushback because users are trained in very specific processes that aren’t easy to change overnight.
One potential solution that I’ve seen work well is adopting iterative deployment strategies—not just for the software itself but for user workflows. Instead of deploying a revolutionary product all at once, start with micro-improvements in areas where pain points are clear and measurable. Over time, these improvements accumulate into significant value while minimizing disruption.
Finally, I think there’s a cultural aspect that shouldn’t be overlooked. Many organizations claim to value innovation, but the reality is that risk aversion dominates decision-making. This disconnect often leads to great ideas being sidelined. A possible approach to mitigate this is establishing “innovation sandboxes” within companies—essentially isolated environments where new ideas can be tested without impacting core operations.
LLMs are rather bad at those right now if you go further than trivialities, and I believe they are not particularly good at code either, so I am not concerned. But overall I think this is somewhat good advice, regardless of the current hype train: do not be just a "programmer", and know something else besides main Docker CLI commands and APIs of your favorite framework. They come and go, but knowledge and understanding stays for much longer.
There's a scene where Gunnery Sergeant Bobbie Draper from Mars visits Earth. Mars is being terraformed; it has no real liquid water.
She wants to see the ocean.
She makes it eventually, after traveling through the squats of Earth.
In that world, much of Earth's population lives on "Basic Support". This is seen as a bad thing by Mars "Dusters".
The Ocean isn't just a dream of a better Mars. It's an awesome, globally shared still-life, something Earthers can use in their idle free time for, to use a Civilization term, a Cultural Victory.
So yeah, I suppose that's the plan for some who can't get a job coding. Universal Basic Income and being told that we're all in this together as we paint still lives.
I have a feeling there are others who also were happy playing the Economic Victory game. Maybe more so.
I wonder where the other options are. It's going to be difficult enough for the next generation dealing with one group hating "Earther Squats" and another group hating "Dusters / regimented (CEO / military)".
That is work itself.
But I'll keep coding and hope those who become leaders actually want to.
If it keeps you busy reading code, it keeps all of us busy consuming its output. That is how it will conquer us: drowning us in personal, highly engaging content.
Stop using LLMs; if they can solve your problem, then your problem is not worth solving.
I don't think you need to stop on a dime. But keep an eye out. I am very optimistic in two ways, under the assumption that this stuff continues to improve.
Firstly, with computers now able to speak and write natural language, see and hear, and some amount of reasoning, I think the demand for software is only going up. We are in an awkward phase of figuring out how to make software that leverages this, and a lot of shit software is being built, but these capabilities are transformative and only means more software needs be written. I suppose I don't share the fear that only one software needs to be written (AGI) and instead see it as a great opening up, as well as a competitive advantage for new software against old software, meaning roughly everything is a candidate for being rewritten.
And then secondly, with computers able to write code, I think this mostly gives superpowers to people who know how to make software. This isn't a snap your fingers no more coding situation, it's programmers getting better and better at making software. Maybe someday that doesn't mean writing code anymore, but I think at each step the people poised to get the most value out of this are the people who know how to make software (or 2nd most value, behind the people who employ them.)
Now this assumes that LLMs plateau around their current scores. While open models are catching up to closed ones (like OpenAI's), we have yet to see a real jump compared to GPT-4. That, and operating LLMs is too damn expensive. If you have explored bolt.new for a little while, you'll find out quickly enough that a developer becomes cheaper as your code base gets larger.
The way I see it:
1. LLMs do not plateau and are fully capable of replacing software developers: there is nothing I, or most of us, can do about this. Most people hate software developers and the process of software development itself; they'd be very happy to trade us in an instant. Pretty much all software developers are screwed in the next 3-4 years, but it's only a matter of time before it hits every other desk field (management, design, finance, marketing, etc...). According to history, we get a world war (especially if these LLMs are out in the wild), and one can only hope to be safe.
2. LLMs plateau around current levels: they are very useful as a power booster, but they can also produce lots of garbage (both in text and in code). There will be an adjustment period, but software developers will still be needed. Probably in the next 2-3 years, when everyone realizes the dead end, they'll stop pouring money into compute and business will be back to usual.
tl;dr: current tech is not enough to replace us. If tech becomes good enough to replace us, there is nothing that can be done about it.
The only way to future-proof any creative and complex work is to get awesome at it.
It worked before LLMs; it will work after LLMs, or any new shiny three-letter gimmick.
If companies can get the final output with fewer people and less money, why would they pass on this opportunity? And please don't tell me that it's because people produce maintainable code and LLMs don't.
If you don’t get it, you are not lousy, just not experienced enough, or, as I say, not doing engineering. Which is fine. Then your fear is grounded.
Because only the non-creative professions, like devops, SRE, QA to some extent, and data engineering to some extent, are at _some_ risk of being transformed noticeably.
---
My personal take is that LLMs and near future evolutions thereof won't quite replace the need for a qualified human engineer understanding the problem and overseeing the design and implementation of software.
However it may dramatically change the way we produce code.
Tools always beget more tools. We could not build most of the stuff that's around us if we hadn't first built other stuff that was itself built with other stuff. Consider the simple example of a screw, a bolt, or gears.
Tools for software development are quite refined and advanced. IDEs, code analysis and refactoring tools etc.
Even the simple text editor has been refined through generations of developers fine-tuning their fingers together with the editor technology itself.
Beyond the mechanics of code input we also have tons of little things we collectively refined and learned to use effectively: how to document and consult documentation for libraries, how to properly craft and organize complex code repositories so that code can be evolved and worked on by many people over time.
"AI" tools offer an opportunity to change the way we do those things.
On one hand there is a practical pressure to teach AI tools to just keep using our own existing UX. They will write code in existing programming languages and will be integrated in IDEs that are still designed for humans. The same for other parts of the workflow.
It's possible that over time these systems will evolve other UXs and that the new generation of developers will be more productive using that new UX and greybeards will still cling to their keyboards (I myself probably will).
The biggest threat to your career is not LLMs but it's younger engineers that will adapt to the new tools.
> "AI" tools offer an opportunity to change the way we do those things.
The way I see tooling in programming is that a lot of it ends up focused on the bits that aren't that challenging and can often be more related to taste. There are people out there eschewing syntax highlighting and code completion as crutches, and they generally don't seem less productive than the people using them. Similarly, people are ricing their Neovim setups, but that doesn't seem to add enough value to outperform people using IDE defaults.
Then software engineering tools like task management software, version control, and documentation software are universally pretty terrible. They all "work", with varying foibles and footguns. So I think there is a massive advantage possible there, but I haven't seen real movement on them in years.
But LLM-based AI just doesn't seem to be it; it's too often wrong. Whether it's Slack's Recap, the summarizing AI in Atlassian products, or code completion with Copilot, it's not trustworthy and always needs babying. It all feels like a net drain at the moment, other than producing hilarious output to share with your coworkers.
I also agree with your assessment that issue trackers and other aspects of the workflow are as important as the coding phase.
I don't know yet exactly what can be done to leverage LLMs to offer real help.
But I think it has the potential of transforming the space. But not necessarily the way it's currently used.
For example, I think that we currently rely too much on the first-order prose emitted directly by the LLMs, and we mistakenly think that we can consume it directly.
I think LLMs are already quite good at understanding what you ask and are very resistant to typos. They thus work very well in places where traditional search engines suck.
I can navigate through complex codebases in ways that my junior colleagues cannot because I learned the tricks of the trade. I can see a near future where junior engineers navigate code with the aid of ML-based code search engines without being affected by the imprecise summarization artefacts of the current LLMs.
Similarly, there are many opportunities to use LLMs to capture human intentions and turn them into commands for underlying deterministic software, which then performs operations without hallucinations.
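One way to realize that pattern, sketched under the assumption that the LLM has been prompted to emit a structured command (the command names and the `dispatch` helper here are hypothetical, not any real library): the LLM's only job is to map free text onto a fixed vocabulary, and everything after validation is ordinary deterministic code that cannot hallucinate.

```python
# Hypothetical sketch: the LLM maps free text to one of a fixed set of
# commands; execution itself is deterministic.

COMMANDS = {
    "create_ticket": lambda args: f"ticket created: {args['title']}",
    "close_ticket": lambda args: f"ticket {args['id']} closed",
}

def dispatch(llm_output: dict) -> str:
    """Validate the LLM's structured output, then run plain deterministic code."""
    name = llm_output.get("command")
    if name not in COMMANDS:
        # Reject anything outside the allowed vocabulary instead of guessing.
        raise ValueError(f"unknown command: {name!r}")
    return COMMANDS[name](llm_output.get("args", {}))
```

The design choice is that the LLM never touches the side effects directly; it can only propose a command name and arguments, and the validation layer is where hallucinations get stopped.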
Building those tools requires time and iteration. It also requires funding and it won't happen as long as most funding is funneled towards just trying to make better models which would potentially leapfrog any ad-hoc hybrid.
LLMs can't reason, and they never will be able to. Don't buy into the AI hype wagon; it's just a bunch of grifters selling a future that will never happen.
What LLMs do is accelerate the application of your existing wisdom. If you already know how to build a full-stack application, they can turn a 4-hour job into a 1-hour job, yes. But if you didn't know how to build one in the first place, the LLM will get you 80% of the way there, and that last 20% will be impossible for you because you lack the skill to implement the actual work.
That's not going away, and anybody who thinks it is is living in a fantasy land, one that stops us from focusing on the real problems that could actually put LLMs to use in their proper context and setting.
The solution often involves software, but what that software does and how it does it can vary wildly, and it is my job to know how to prioritize the right things over the wrong things and get to a decent solution as quickly as possible.
Should we implement this using a dependency? It seems it is too big / too slow, is there an alternative or do we do it ourselves? If we do it ourselves how do we tackle this 1000 page PDF full of diagrams?
LLMs cannot do what I do and I assume it will take a very long time before they can. Even with top of the line ones I'm routinely disappointed in their output on more niche subjects where they just hallucinate whatever crap to fill in the gaps.
I feel bad for junior devs that just grab tickets in a treadmill, however. They will likely be replaced by senior people just throwing those tickets at LLMs. The issue is that seniors age and without juniors you cannot have new seniors.
Let's hope this nonsense doesn't lead to our field falling apart.
It is more like this across the board, beyond engineers, including both junior and senior roles. We have heard firsthand from Sam Altman that in the future, Agents will be more advanced and will work like a "senior colleague" (for cheap).
Devin is already going after everyone. Juniors were already replaced with GPT-4o and mid-seniors are already worried that they are next. To executives and management, they see you as a "cost".
So frankly, I'm afraid that the belief that software engineers of any level are safe in the intelligence age is 100% cope. In 2025, I predict that there will be more layoffs because of this.
Then (mid-senior or higher) engineers here will go back to these comments a year later and ask themselves:
"How did we not see this coming?"
If this point could be clarified into a proposal that was easily testable with a yes/no answer, I would probably be willing to bet real money against it. Especially if the time frame is only until the end of 2025.
Frankly, I think it's ridiculous that anyone who has done any kind of real software work would predict this.
Layoffs? Probably. Layoffs of capable senior developers, due to AI replacing them? Inconceivable, with the currently visible/predictable technology.
There will be publicly-announced layoffs of 10 or more senior software engineers at a tech company sometime between now and December 31st, 2025. As part of the announcement of these layoffs, the company will state that the reason for the layoffs is the increasing use of LLMs replacing the work of these engineers.
I would bet 5k USD of my own money, maybe more, against the above occurring.
I hesitate to jump to the "I'm old and I've seen this all before" trope, but some of the points here feel a lot to me like "the blockchain will revolutionize everything" takes of the mid-2010s.
1) Does not describe a layoff, which is an active action the company has to take to release some number of current employees, and instead describes a recent policy of "not hiring." This is a passive action that could be undertaken for any number of reasons, including those that might not sound so great for the CEO to say (e.g. poor performance of the company);
2) Cites no sources other than the CEO himself, who has a history of questionable actions when talking to the press [0];
3) Specifically mentions at the end of the article that they are still hiring for engineering positions, which, you know, kind of refutes any sort of claim that AI is replacing engineers.
Though, this does make me realize a flaw in the language of my proposed bet, which is that any CEO who claims to be laying off engineers due to advancement of LLMs could be lying, and CEOs are in fact incentivized to scapegoat LLMs if the real reason would make the company look worse in the eyes of investors.
[0] https://fortune.com/2022/06/01/klarna-ceo-sebastian-siemiatk...
Make no mistake. All globalists — Musks, Altmans, Grahams, A16Zs, Trump supporting CEOs, Democrats — have one goal. MAKE MORE PROFIT.
The real question is: can you make them more money than an LLM can?
Therefore, the question is not whether there will be impact. There absolutely will be impact. Will it be a Doomsday scenario? No, unless you are completely out of touch, which can happen to a large population.
Amazon: https://www.amazon.com/Patterns-Application-Development-Usin...
Leanpub (ebook only): https://leanpub.com/patterns-of-application-development-usin...
This is actual advice that can be generalized: become an authority in a technology related to the phenomenon the OP describes.
It's (still) easy to dismiss them as "they're great for one-offs", but I have been building "professional" code leveraging LLMs as more than magic autocomplete for 2 years now, and it absolutely works for large professional codebases. We are in the infancy of discovering patterns for how to apply statistical models not just to larger and more complex pieces of code, but to the entire process of engineering software.

Have you ever taken a software architecture brainstorming session's transcript and asked Claude to convert it into github tickets? Then to output those github tickets as YAML? Then to write a script to submit those tickets to the github API? Then to make it a webserver? Then to add a google drive integration? Then to add a slack notification system? Within an hour or two (once you are experienced), you can have a fully automated transcript-to-github system. Will there be AI slop here and there? Sure. I don't mind spending 10 minutes cleaning up a few tickets here and there, and spending the rest of the time talking some more about architecture.
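The last step of a pipeline like that, submitting the generated tickets to the GitHub issues endpoint, is a small deterministic script. A rough sketch, assuming the tickets were exported as a JSON list (the repo, token, and ticket fields are placeholders, and JSON stands in for the YAML mentioned above to keep the sketch stdlib-only):

```python
import json
import urllib.request

# Hypothetical sketch of the "submit tickets to the GitHub API" step.
# Assumes the LLM already exported tickets as a JSON list like:
#   [{"title": "...", "body": "...", "labels": ["infra"]}, ...]

GITHUB_API = "https://api.github.com"

def build_issue_payloads(tickets):
    """Turn LLM-generated ticket dicts into GitHub issue payloads."""
    return [
        {
            "title": t["title"],
            "body": t.get("body", ""),
            "labels": t.get("labels", []),
        }
        for t in tickets
    ]

def submit_issues(owner, repo, token, payloads):
    """POST each payload to the repository's issues endpoint."""
    url = f"{GITHUB_API}/repos/{owner}/{repo}/issues"
    for payload in payloads:
        req = urllib.request.Request(
            url,
            data=json.dumps(payload).encode(),
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.github+json",
            },
            method="POST",
        )
        urllib.request.urlopen(req)  # fires one issue per ticket

if __name__ == "__main__":
    tickets = [{"title": "Set up CI", "body": "Add a lint job", "labels": ["infra"]}]
    print(build_issue_payloads(tickets)[0]["title"])  # Set up CI
```

The LLM writes the tickets; the script that ships them is boring, reviewable code.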
The thing that I think many people are not seeing is that coding with LLMs at scale turns well-established software engineering techniques up to 11. To have an LLM do things at scale, you need concise, clear, always up-to-date documentation / reference / tutorials, so that you can fit the entire knowledge needed to execute a task into an LLM's context window. You need consistent APIs that make it hard to do the wrong thing; in fact, they need to be so self-evident that the code almost writes itself, because... that's what you want. You want linting with clear error messages, because feeding those back into the LLM often helps it fix small mistakes. You want unit tests and tooling up the wazoo, structured logging, all with the goal of feeding it back into the LLM. That these practices are exactly what is needed for humans too is because... LLMs are trained on the human language we use to communicate with machines.
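That lint-feedback loop can be sketched in a few lines. This uses `py_compile` as a stand-in linter and stubs out the model call, since the actual LLM client varies (everything here is illustrative, not a specific product's API):

```python
import subprocess
import sys
import tempfile

# Sketch of "feed linter errors back into the LLM".
# py_compile stands in for a real linter; ask_llm is a placeholder
# for whatever model client you actually use.

def lint(path):
    """Run the stand-in linter; return its error output (empty if clean)."""
    proc = subprocess.run(
        [sys.executable, "-m", "py_compile", path],
        capture_output=True, text=True,
    )
    return proc.stderr

def build_fix_prompt(source, errors):
    """Pack the file plus the linter's complaints into one prompt."""
    return (
        "The following code fails linting.\n\n"
        f"--- code ---\n{source}\n"
        f"--- errors ---\n{errors}\n"
        "Return a corrected version of the file."
    )

def ask_llm(prompt):
    raise NotImplementedError  # swap in your model client here

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write("def broken(:\n    pass\n")  # deliberate syntax error
        path = f.name
    errors = lint(path)
    prompt = build_fix_prompt(open(path).read(), errors)
    print(bool(errors))  # True: the error text is now in the prompt
```

In practice you would loop: lint, prompt, apply the model's patch, lint again, until the output is clean or you give up and fix it by hand.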
When I approach coding with LLMs, I always think of the humans involved first, and targeting documents that would be useful for them is most certainly going to be the best way to create the documents relevant to the AI. And if a certain task is indeed too much for the LLM, then I still have great documents for humans.
Let's assume we have some dirty legacy microservice with some nasty JS, some half baked cloudformation, a dirty database schema and haphazard logging. I would:
- claude, please make an architecture overview + mermaid diagram out of this cloudformation, the output of these aws CLI commands and a screenshot of my AWS console
- spend some time cleaning up the slop
- claude, please take ARCHITECTURE.md and make a terraform module, refactor as needed (claude slaps at these kinds of ~1kLOC tasks)
- claude, please make a tutorial on how to maintain and deploy terraform/
- claude, please create a CLI tool to display the current status of the app described in ARCHITECTURE.md
- claude, please create a sentry/datadog/honeycomb/whatever monitoring setup for the app in ARCHITECTURE.md
Or:
- claude, please make an architecture overview + mermaid diagram for this "DESCRIBE TABLES" dump out of my DB
- edit the slop
- claude, please suggest views that would make this cleaner. Regenerate + mild edits until I like what I see
- claude, please make a DBT project to maintain these views
- claude, please make a tutorial on how to install dbt and query those views
- claude, please make a dashboard to show interesting metrics for DATABASE.md
- etc...
You get the idea. This is not rocket science, this literally works today, and it's what I do for both my opensource and professional work. I wouldn't hesitate to put my output at 20-30x, whatever that means. I am able to bang out in a day software that probably would have taken me a couple of weeks before, at a level of quality (docs, tooling, etc...) I never was able to reach due to time pressure or just wanting to have a weekend, WHILE maintaining a great work life balance.
This is not an easy skill and requires practice, but it's not rocket science either. There is no question in my mind that software engineering as a profession is about to fundamentally change (I don't think the need for software will diminish; in fact, this just enables so many more people who deserve great software to get it). But my labor? The thing I used to do, write code? I only do it when I want to relax: generate a tutorial about something that interests me, turn off Copilot, and do it "as a hobby".
Here's the writeup of a workshop I gave earlier this year: https://github.com/go-go-golems/go-go-workshop
And a recent masto-rant turned blogpost: https://llms.scapegoat.dev/start-embracing-the-effectiveness...
Maybe not today, and depending on your retirement date maybe you won’t be affected. But if your answer is “nothing” it is delusional. At a minimum you need to understand the failure modes of statistical models well enough to explain them to short-sighted upper management that sees you as a line in a spreadsheet. (And if your contention is you are seen as more than that, congrats on working for a unicorn.)
And if you’re making $250k today, don’t think they won’t jump at the chance to pay you half that and turn your role into that of a glorified (or not) prompt engineer. Your job is to find the failure modes and either mitigate them or flag them so the project doesn’t make insurmountable assumptions about “AI”.
And for the AI boosters:
I see the idea that AI will change nothing as just as delusional as the idea that “AI” will solve all of our problems. No it won’t. Many of our problems are people problems that even a perfect oracle couldn’t fix. If in 2015 you bought that self-driving cars would be here in 3 years, please see the above.
For very simple to mid-complexity tasks, I do think LLMs will be very useful for helping programmers build more efficiently. Maybe even the average Joe will be building scheduling apps and games that are OK to play.
For business apps I just do not see how an LLM could do the job. In theory you could have it build out all the little, tiny parts and put them together into a Rube Goldberg machine (yes, that is what I do now, lol), but when it breaks, I'm not sure the LLM would have a context window big enough to feed the entire system into so it could fix itself.
Think of this theoretical app.
This app takes data from some upstream processes. This data is not just from one process but from several that are all very similar but never the same. Even within the same process, new data can be added to the input, sometimes without even consulting the app. Now this app needs to take this data and turn it into something useful, and when it doesn't, it needs to somehow log this info and get it back to the upstream app to fix (or maybe it's a feature request for this app). The users want to be able to see the data in many ways, and just when you get it right, they want to see it another way. They need to see that new data that no one even knew was coming.

To fill in the data that comes in, this app needs to talk to 20 different APIs. Those APIs are constantly changing. The data this app takes in needs to be sent to 20 different APIs through a shared model, but that model also takes into account unknown data. The app also sends the data from the upstream process to native apps running on the users' local machines. When any of this fails, the logs are spread out over on-prem and hosted locations, sometimes in a DB and sometimes in log files.

Now this app needs to run on-prem but also on Azure and also on GCP. It also uses a combination of Azure AD and some other auth mechanisms. During the build, the deploy process needs to get the DB credentials, client secrets, and roles out of the vault somewhere. Someone needs to set up all these roles, secrets, and IDs. Someone needs to set up all the deploy scripts. With every deploy/build there are automated tools that scan for code quality and security vulnerabilities; if issues are found, someone needs to stop what they are doing and fix them. Someone needs to test all this every time a change is made and then go to some meetings to get approval. Someone needs to send out any downtime notices and make sure the times all work. Also, the 20 apps this one talks to are just like this app.
I could keep going but I think you get the point. Coding is only ¼ of everything involved in software. Sure, coding will get faster, but the jobs associated with it are not going away. Requirements are crap if you even get requirements.
The way to future proof your job is get in somewhere where the process is f-ed up but in a job security type f-ed up. If your job is writing HTML and CSS by hand and that is all you do, then you may need to start looking. If your job has any kind of process around it, I would not worry for another 10 – 20 years, then we will need to reassess, but again it is just not something to worry about in my opinion.
I also know some people are building simple apps and making a living, and the best thing to do is embrace the suck and milk it while you can. The good thing is, if LLMs really get good, like good good, you will be building super complex apps in no time. The world is going to have a lot more apps, that is for sure; whether anyone will use them is the question.
For example: Have debit cards fundamentally changed the way that buying an apple works? No. There are people who want to eat an apple, and there are people who want to sell apples. The means to purchase that may be slightly more convoluted, or standardized, or whatever you might call it, but the core aspects remain exactly as they have for as long as people have been selling food.
So then, what demand changes with LLMs writing code? If we assume that they can begin to write even quarter-way decent code for complex, boutique issues, then the central problems will still remain: new products need to be implemented in ways that clients cannot implement themselves. Those products will still have various novel features that need to be built. They will still have serious technical issues that need to be debugged and reworked to the client's specifications. I don't see LLMs being able to do that for most scenarios, and doubly so for niche ones. Virtually anyone who builds software for clients will at some point or another end up creating a product that falls into a very specific use-case, for one specific client, because of budget concerns, restraints, bizarre demands, specific concerns, internal policy changes, or any other plethora of things.
Imagine, for example, a client that works in financing and taxes but knows virtually nothing about how to describe what they need some custom piece of software to do. Them attempting to write a few sentences into a tarted-up search engine isn't going to help if they don't have the vocabulary and background knowledge to specify the circumstances and design objectives. This is literally what SWEs are. SWEs are translators! They turn the general ideas described by clients into firm implementations using technical know-how. If you cannot describe what you need to an LLM, you have the same problem as if there were no LLM to begin with. "I need tax software that loads really fast, and is easy to use." isn't going to help, no matter how good the LLM is.
Granted, perhaps those companies can get one employee to dual-hat and implement some sloppy half-baked solution, but... that's also a thing you run into right now. There are plenty of clients who know something about shell scripts, or took a coding class a few years back and want to move into SWE but are stuck in a different industry for the time being. They aren't eating my lunch now, so why would we believe that changes just because the method by which a computer program gets made becomes slightly more "touchy-feely"? Some people are invested in software design. Some are not. The ones who are not just want the thing to do what it's supposed to do without a lot of time investment or effort. The last thing they want to do is try to work out what aspect of US tax law it's hallucinating.
As for the companies making the LLMs, I don't see them having the slightest interest in offering support for some niche piece of software the company itself didn't make: they don't have a contract, and they don't want to deal with the fallout. I see the LLM companies wanting to focus on making their LLMs better for broader topics, and on figuring out how to maximize profit while minimizing costs. Hiring a bunch of staff to support unknown, strange, and niche programs made by their customers is the exact opposite of that strategy.
Honestly, if anything, I see more people being needed in the SWE industry, simply because there is going to be a glut of wonky, LLM-generated software out there. I imagine web developers have become pretty accustomed to this type of thing from working with companies trying to transition out of WYSIWYG website implementations. I haven't had to deal with it much myself, but my guess is that the standard advice is that it's easier and quicker to burn it to the ground and build anew. Assuming that is the case, LLM-generated software is basically... what? Geocities? Angelfire? Sucks for the client, but is great for SWEs as far as job security is concerned.
Due to a training-lag, LLMs usually don't get the memo when a package gets updated. When these updates happen to patch security flaws and the like, people who uncritically push LLM-generated code are going to get burned. Software moves too fast for history-dependent AI.
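One cheap guard against this training lag is to check LLM-generated dependency pins against a floor of known-patched versions before accepting them. A sketch, where the `MIN_SAFE` table is purely illustrative rather than a real advisory feed:

```python
# Hypothetical guard against LLM training lag: reject generated
# dependency pins that predate a security-patch floor.
# The MIN_SAFE table is illustrative, not a real advisory database.

MIN_SAFE = {
    "requests": (2, 31, 0),   # e.g. a CVE patched in 2.31.0
    "pyyaml": (5, 4, 0),
}

def parse_pin(line):
    """Split 'name==1.2.3' into ('name', (1, 2, 3))."""
    name, _, version = line.strip().partition("==")
    return name.lower(), tuple(int(p) for p in version.split("."))

def stale_pins(requirements_text):
    """Return package names pinned below the patched-version floor."""
    stale = []
    for line in requirements_text.splitlines():
        if "==" not in line:
            continue
        name, version = parse_pin(line)
        floor = MIN_SAFE.get(name)
        if floor and version < floor:
            stale.append(name)
    return stale

if __name__ == "__main__":
    llm_output = "requests==2.19.0\npyyaml==6.0.1\n"
    print(stale_pins(llm_output))  # requests is pinned below the floor
```

A real version of this would pull its floor from an advisory database instead of a hand-written dict, but the shape is the same: deterministic checks around probabilistic output.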
The conceit of fully integrating all needed information in a single AI system is unrealistic. Serious SWE projects, ones that attempt to solve a novel problem or outperform existing solutions, require a conjectural, visionary, experimental mindset that won't find existing answers in training data. So LLMs will get good at generating the billionth to-do app but nothing boundary-pushing. We're going to need skilled people on the bleeding edge. Small comfort, because most people working in the industry are not geniuses, but there is also a reflexive property to the whole dynamic: LLMs open up a new space of application possibilities which are not represented in existing training data, so I feel like you could position yourself comfortably by joining startups that are actually applying these new technologies creatively. Ironically, LLMs are trained on last-gen code, so they obsolete yesterday's jobs, but you won't find any training data for solutions which have not been invented yet. AI will thus create a niche for new application development which is not served by AI.
Already if you try to use LLMs for help on some of the new LLM frameworks that came out recently like LangChain or Autogen etc, it is far less helpful than on something that has a long tailed distribution in the training data. (And these frameworks get updated constantly, which feeds into my last point about training-lag).
This entire deep learning paradigm of AI is not able to solve problems creatively. When it tries to, it "hallucinates".
Finally, I still think a knowledgeable, articulate developer PLUS AI will consistently outperform an AI MINUS a knowledgeable, articulate developer. More emphasis may shift onto "problem formulation": getting good at writing half natural language, half code pseudo-code prompts and working with the models conversationally.
There's a real problem too with model collapse: as AI-generated code becomes more common, you remove the tails of the distribution, resulting in more generic code without a human touch. There are only so many cycles of retraining on this regurgitated data before you start encountering not just diminishing returns but actual damage to the model. So I think LLMs will be self-limiting.
So all in all I think LLMs will make it harder to be a mediocre programmer who can just coast by doing highly standardized janitorial work, but it will create more value if you are trying to do something interesting. What that means for jobs is a mixed picture. Fewer boring, but still paying jobs, but maybe more work to tackle new problems.
I think only programmers understand the nuances of their field however and people on the business side are going to just look at their expense spreadsheets and charts, and will probably oversimplify and overestimate. But that could self-correct and they might eventually concede they're going to have to hire developers.
In summary, the idea that LLMs will completely take over coding logically entails an AI system that completely contains the entire software ecosystem within itself, and writes and maintains every endpoint. This is science fiction. Training lag is a real limitation since software moves too fast to constantly retrain on the latest updates. AI itself creates a new class of interesting applications that are not represented in the training data, which means there's room for human devs at the bleeding edge.
If you got into programming just because it promised to be a steady, well-paying job, but have no real interest in it, AI might come for you. But if you are actually interested in the subject and understand that not all problems have been solved, there's still work to be done. And unless we get a whole new paradigm of AI that is not data-dependent, and can generate new knowledge whole cloth, I wouldn't be too worried. And if that does happen, too, the whole economy might change and we won't care about dinky little jobs.
A great parallel to today's LLMs was the outsourcing mania of 20 years ago. It was worse than AGI, because actual living, breathing, thinking people would write your code. After the Dot-Bomb implosion, a bunch of companies turned to outsourcing as a way to skirt the cost of expensive US programmers. In their minds, a manager could produce a spec, a "prompt" if you will, that was sent to an overseas team to implement. But as time wore on, the hype wore off with every broken and spaghettified app. Businesses were stung back into hiring programmers, but not before destroying a whole pipeline of CS graduates for many years. It fueled a surge in demand for programmers against a small supply that didn't abate until the latter half of the 2010s.
Like most things in life, a little outsourcing never hurt anybody but a lot can kill your company.
> My prediction is that junior to mid level software engineering will disappear
Agree with some qualifications. I think LLMs will follow a similar disillusionment as outsourcing, but not before decimating the profession in both headcount and senior experience. The pipeline of Undergrad->Intern/Jr->Mid->Sr development experience will stop, creating even more demand for the existing (and now dwindling) senior talent. If you can rough it for the next few years the employee pool will be smaller and businesses will ask wHeRe dId aLl tHe pRoGrAmMeRs gO?! just like last time. We're going to lose entire classes of CS graduates for years before companies reverse course, and then it will take several more years to steward another generation of CS grads through the curriculum.
AI companies sucking up all the funding out of the room isn't helping with the pipeline either.
In the end it'll be nearly a decade before the industry recovers its ability to create new programmers.
> So, fellow software engineers, how do you future-proof your career in light of, the inevitable, LLM take over?
Funnily enough, probably start a business or that cool project you've had in the back of your mind. Now is the time to keep your skills sharp. LLMs are good enough to help with some of those rote tasks as long as you are diligent.
I think LLMs will fit into future tooling as souped-up Language Servers and be another tool in our belt. I also foresee a whole field of predictive BI tools that lean on LLMs hallucinating plausible futures that can be prompted with (for example) future newspaper headlines. There's also tons of technical/algorithmic domains ruled by Heuristics that could possibly be improved by the tech behind LLMs. Imagine a compiler that understands your code and applies more weight on some heuristics and/or optimizations. In short, keeping up with the tools will be useful long after the hype train derails.
People skills are perennially useful. It's often forgotten that programming is two domains; the problem domain and the computation domain. Two people in each domain can build Mechanical Sympathy that blurs the boundaries between the two. However the current state of LLMs does not have this expertise, so the LLM user must grasp both the technical and problem domains to properly vet what the LLMs return from a prompt.
Also, keep yourself alive, even if that means leaving the profession for something else for the time being. The "Software Crisis" is over 50 years old at this point, and LLMs don't appear to be the Silver Bullet.
tl;dr: Businesses saw the early 2000s and said "More please, but with AI!" Stick it out in "The Suck" for the next couple of years until businesses start demanding people again. AI can be cool and useful if you keep your head firmly on your shoulders.
there are amazing companies which have fully outsourced all of their development. this trend is on the rise and might hit $1T market cap in this decade…
I completely agree...
> this trend is on the rise and might hit $1T market cap in this decade…
It's this thinking that got everybody in trouble last time. A trend doesn't write your program. There was a certain "you get what you pay for" reflected on the quality of code many businesses received from outsourcing. Putting in the work and developing relationships with your remote contractors, and paying them well, makes for great partners that deliver high quality software. It's the penny-wise-pound-foolish manager that drank too much of the hype koolaid that found themselves with piles of terrible barely working code.
Outsourcing, like LLMs, is a relationship and not a shortcut. Keep your expectations realistic and grounded, and it can work just fine.
Our jobs are not and have never been: code generators.
Take a read of Naur's essay, Programming as Theory Building [0]. The gist is that it's the theory you build in your head about the problem, the potential solution, and what you know about the real world that is valuable. Source code depreciates over time when left to its own devices. It loses value when the system it was written for changes, dependencies get updated, and it bit-rots. It loses value as the people who wrote the original program, or worked with those who did, leave and the organization starts to forget what it was for, how it works, and what it's supposed to do.
You still have to figure out what to build, how to build it, how it serves your users and use cases, etc.
LLMs, at best, generate some code. Plain language is not specific enough to produce reliable, accurate results, so you'll forever be hunting for increasingly subtle errors. The training data will run out, and models degrade on synthetic inputs. So... they're only going to get "so good", no matter how many parameters of context they can maintain.
And your ability, as a human, to find those errors will be quickly exhausted. There are far too few studies on the effects of informal code review on error rates in production software, and of those that have been conducted, any statistically significant effect on error rates seems to disappear once humans have read ~200 SLOC in an hour.
I suspect a good source of income will come from having to untangle the mess of code generated by teams that rely too much on these tools that introduce errors that only appear at scale or introduce subtle security flaws.
Finally, it's not "AI," that's replacing jobs. It's humans who belong to the owning class. They profit from the labour of the working class. They make more profit when they can get the same, or greater, amount of value while paying less for it. I think these tools, "inevitably," taking over and becoming a part of our jobs is a loaded argument with vested interests in that becoming true so that people who own and deploy these tools can profit from it.
As a senior developer I find that these tools are not as useful as people claim they are. They're capable of fabricating test data... usually of quality that requires inspection... and really, who has time for that? And they can generate boilerplate code for common tasks... but how often do I need boilerplate code? Rarely. I find the answers it gives in summaries to contain completely made-up BS. I'd rather just find out the answer myself.
I fear for junior developers who are looking to find a footing. There's no royal road. Getting your answers from an LLM for everything deprives you of the experience needed to form your own theories and ideas...
so focus on that, I'd say. Think above the code. Understand the human factors, the organizational and economic factors, and the technical ones. You fit in the middle of all of these moving parts.
[0] https://pages.cs.wisc.edu/~remzi/Naur.pdf
Update: forgot to add the link to the Naur essay
A good designer is not going to be replaced by DALL-E/Midjourney, because the essence of design is to understand the true meaning/purpose of something and be able to express it graphically, not to align pixels with the correct HEX colour combination one next to the other.
A good software engineer is not going to be replaced by Cursor/Co-pilot, because the essence of programming is to translate the business requirements of a real world problem that other humans are facing into an ergonomic tool that can be used to solve such problem at scale, not writing characters on an IDE.
Neither junior nor senior devs will go anywhere; what will for sure go away is all the "code-producing" human-machines, such as Fiverr freelancers/consultants who completely misunderstand/neglect the true essence of their work. Because code (as in a set of meaningful 8-bit symbols) was never the goal, but always the means to an end.
Code is an abstraction, allegedly our best abstraction to date, but it's hard to believe that it's the last iteration of it.
I'll argue that software itself will be a completely different concept in 100 years from now, so it's obvious that the way of producing it will change too.
There is a famous quote attributed to Hemingway that goes like this:
"Slowly at first, then all at once"
This is exactly what is happening, and what always happens.
It assumes that most engineers are in contact with the end customer, while in reality they are not. Most engineers go through a PM whose role is to do what you described: speak with customers, understand what they want, and translate it into a language the engineers will understand and, in turn, translate into code. (Edit) The other part is "IC" roles like tech lead/staff/etc., but the ratio of those ICs to regular engineers is, by my estimate, around 1:10-20. So the majority of engineers are purely writing code, plus supporting activities around code (tech documentation, code reviews, pair programming, etc.).
Now, my question is as follows -- who has a better rate of employability in a post LLM-superiority world: (1) a good technical software engineer with poor people/communication skills, or (2) a good communicator (such as a PM) with poor software engineering skills?
I bet on 2, and as one of the comments says, if I had to future proof my career, I would move as fast as possible to a position that requires me to speak with people, be it other people in the org or customers.
This entire field was once full of hackers: deeply passionate and curious individuals who wanted to understand every little detail of the problem they were solving and why. Then software became professionalized, and a lot of amateurs looking for a quick buck came in, commoditizing the industry. With LLMs we'll go full circle and push out a lot of the amateurs, giving space back to the hackers.
Code was never the goal; solving problems was.
i can only assume software developers afraid of LLMs taking their jobs have not been doing this for long. being a software developer is about writing code in the same way that being a CEO is about sending emails. and i haven't seen any CEOs get replaced even though chatgpt can write better emails than most of them
the physical act of writing code is different from the process of developing software. 80%+ of the time working on a feature is designing, reading existing code, thinking about the best way to implement your feature in the existing codebase, etc. not to mention debugging, resolving oncall issues, and other software-related tasks which are not writing code
GPT is awesome at spitting out unit tests, writing one-off standalone helper functions, and scaffolding brand new projects, but this is realistically 1-2% of a software developer's time
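For what it's worth, the kind of task being described is this sort of self-contained helper plus its tests. The `slugify` function below is a hypothetical example of mine, not something from this thread: small, standalone, no codebase context needed, which is exactly why LLMs handle it well.

```python
import re

def slugify(text: str) -> str:
    """Lowercase the text, drop punctuation, and join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

# The matching unit tests are just as mechanical to produce:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Future proofing  SWE careers ") == "future-proofing-swe-careers"
```

The hard part of the job is not writing functions like this; it's knowing that you need one, and where it belongs in a system with years of accumulated decisions.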
You could argue about architecture and thinking through the correct/proper implementation, but I'd counter that in seven decades of software engineering we have not gotten any closer to a perfect-architecture singularity where code is maintainable and there is no tech debt left. Therefore, arguments such as "but LLMs produce spaghetti code" can easily be dismissed by noting that humans do as well, except humans waste time thinking about ways to avoid spaghetti code and end up writing it anyway.
people using GPT to write tech docs at real software companies get fired, full stop lol. good companies understand the value of concise & precise communication and slinging GPT-generated design docs at people is massively disrespectful to people's time, the same way that GPT-generated HN comments get downvoted to oblivion. if you're at a company where GPT-generated communication is the norm you're working for/with morons
as for everything else, no. GPT can explain a few thousand lines of code, sure, but it can't explain how every component in a 25-year-old legacy system with millions of lines and dozens/scores of services works together. "more context" doesn't help here
Managers and executives see engineers and customer service only as an additional cost, and they will take any opportunity to trim those roles; they do not care.
This year's excuse is anything involving AI, GPTs or agents, and they will try it anyway. Companies such as Devin and Klarna are not hiding this fact.
There will just be fewer engineering and customer service roles in 2025.
It's almost darwinian. The companies whose managers are less fit for running an organization that produces what matters will be less likely to survive.
The right thing to do economically (in capitalism) is to do more of the same, but faster. So if you as a software engineer or customer service rep can't do more of the same faster, you will be replaced by someone (or something) that allegedly can.
At Google? Perhaps. At most companies? No. At most places, software engineering is a pure cost center. The software itself may be an asset, but the engineers who are churning it out are not. That's part of the reason that it's almost always better to buy than build -- externalizing shared costs.
Just for an extreme example, I worked at a place that made us break down our time into new code vs. maintenance of existing code, because a big chunk of our time was accounted for literally as an expense and could not be capitalized.
Yeah that will be a lucrative niche if you have the stomach for it...
Yes, but the output of Dall-e etc. will be good enough for most people and small companies, especially if it's cheap or even free.
Big companies with deep pockets will still employ talented designers, because they can afford it and for the prestige, but many average designer jobs are going to disappear and be replaced with AI output, because it's good enough for less demanding customers.
- AI will provide a big mess for wizards to clean up
- AI will replace juniors and then seniors within a short timeframe
- AI will soon plateau and the bubble will burst
- "Pshaw I'm not paid to code; I'm a problem solver"
- AI is useless in the face of true coding mastery
It is interesting to me that this forum of expert technical people are so divided on this (broad) subject.
AI happens to be a topic that everyone has an opinion on.
In the last two years LLM capabilities have gone from "produces a plausible sentence" to "can generate a functioning web app". Sure it's not as masterful as one produced by a team of senior engineers, but a year ago it was impossible.
But everyone seems to evaluate LLMs as if they're fixed at today's capabilities. I keep seeing "10-20 year" estimates for when "LLMs are smart enough to write code". That's a very head-in-the-sand attitude given the last two years' trajectory.
Right.. but self driving cars are here. And if you've taken Waymo anywhere it's pretty amazing.
Of course just because the technology is available doesn't mean distribution is solved. The production of corn has been technically solved for a long time, but doesn't mean starvation was eliminated.
Yeah, about that: https://ca.news.yahoo.com/hilarious-video-shows-waymo-self-1...
Some of the logic here is akin to how I have lost 30lbs in 2024 so at this pace I will weigh -120lbs by 2034!
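The joke is naive linear extrapolation, and the arithmetic does land where stated if you assume a starting weight of 180 lb (my assumption, purely to make the numbers work):

```python
# Naive linear extrapolation, per the weight-loss joke:
# 30 lb lost in 2024, projected at a constant rate for 10 years.
start_lb = 180          # assumed starting weight
rate_per_year = -30     # observed change in 2024
years = 10              # 2024 -> 2034

weight_2034 = start_lb + rate_per_year * years
print(weight_2034)  # -120
```

The point, of course, is that real-world trends (weight loss, model capability) hit constraints long before the straight line says they will.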
Isn't that still extrapolating the future from the past? You see a pattern of pushes and phases and are assuming that's what we will see again.
That is going to be 2 years ago before you know it. Sonnet is better at using more obscure Python libraries, but beyond that the improvement over GPT-4 is not that much.
I never tried GPT-4 with Julia or R, but the current models are pretty bad at both.
Personally, I think OpenAI made a brilliant move to release 3.5 and then 4 a few months later. It made it feel like AGI was just around the corner at that pace.
Imagine what people would have thought in April 2023 if you told them that in December 2024 there would be a $200 a month model.
I waited forever for Sora and it is complete garbage. OpenAI was crafting this narrative about putting Hollywood out of business when in reality these video models are nearly useless for anything much more than social media posts about how great the models are.
It is all beside the point anyway. The way to future-proof yourself is to be intellectually curious and constantly learning, no matter what field you are in or what you are doing. You'll probably have to reinvent your career a few times, whether you want to or not.
Illegally ingesting the Internet, copyrighted and IP protected information included, then cleverly spitting it back out in generic sounding tidbits will do that.
I gave it 1000s of lines of C++ and it did pinpoint the problem.
I have tried cursor.ai in agent mode, and I see a clear, big impact.