Ask HN: SWEs how do you future-proof your career in light of LLMs?
516 points
5 days ago
| 234 comments
LLMs are becoming a part of software engineering careers.

The more I speak with fellow engineers, the more I hear that some of them are either using AI to help them code, or feeding entire projects to AI and letting it write the code while they do code review and adjustments.

I didn't want to believe it, but I think it's here. And even objections like "you'd be feeding it proprietary code" will eventually be solved by companies hosting their own isolated LLMs, as the models get better and hardware becomes more available.

My prediction is that junior-to-mid-level software engineering will mostly disappear, while senior engineers will transition into a guiding hand for LLM output, until eventually LLMs become so good that senior people won't be needed any more.

So, fellow software engineers, how do you future-proof your career in light of the, in my view inevitable, LLM takeover?

--- EDIT ---

I want to clarify something, because there seems to be a slight misunderstanding.

A lot of people have been saying that SWE is not only about code, and I agree with that. But it's also easier to sell that idea to a young person who is just starting out in this career. And while I want this Ask HN to be helpful to young/fresh engineers as well, I'm more interested in getting help for myself and the many others who are in a similar position.

I have almost two decades of SWE experience. But despite that, I seem to have missed the party where they told us that "coding is just a means to an end", and only realized it in the past few years. I bet there are people out there who are in a similar situation. How can we future-proof our careers?

simianparrot
4 days ago
[-]
Nothing, because I'm a senior and LLMs never provide code that passes my sniff test, so using them remains a waste of time.

I have a job at a place I love, and I get more people in my direct and extended network contacting me about work than ever before in my 20-year career.

And finally, I keep myself sharp by always making sure I challenge myself creatively. I'm not afraid to delve into areas that might look "solved" to others in order to understand them. For example, I have a CPU-only custom 2D pixel blitter engine I wrote to make 2D games in styles practically impossible with modern GPU-based texture rendering engines, and I recently did 3D in it from scratch as well.

All the while re-evaluating all my assumptions and those of others.

If there’s ever a day where there’s an AI that can do these things, then I’ll gladly retire. But I think that’s generations away at best.

Honestly, this fear that there will soon be no need for human programmers stems either from people who don't themselves understand how LLMs work, or from people who do but have a business interest in convincing others that the technology is more than it is. I say that with confidence.

reply
richardw
4 days ago
[-]
For those less confident:

U.S. (and German) automakers were absolutely sure that the Japanese would never be able to touch them. Then Koreans. Now Chinese. Now there are tariffs and more coming to save jobs.

Betting against AI (or increasing automation, really) is a bet not against robots, but against human ingenuity. Humans are the ones making progress, and we can work with toothpicks as levers. LLMs are our current building blocks, and people are doing crazy things with them.

I've got almost 30 years of experience, but I'm a bit rusty in, e.g., web development. Yet I've used LLMs to build maybe 10 apps that I had no business building, from one-off kids' games for learning math, to a soccer team generator that uses Google's OR-Tools to optimise across various axes, to spinning up four different test apps with Replit's agent to try multiple approaches to a task I'm working on. All the while skilling up in React and friends.
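(For the curious, the core of that team generator was roughly a CP-SAT assignment model. This is an illustrative sketch only; the player names, skill numbers and variable names are made up, not the actual app code.)

    # Rough sketch: balance players across teams with OR-Tools CP-SAT.
    from ortools.sat.python import cp_model

    players = {"Ann": 7, "Bob": 5, "Cai": 9, "Dee": 4, "Eli": 6, "Fay": 8}
    num_teams = 2

    model = cp_model.CpModel()
    # assign[p, t] is 1 iff player p is on team t.
    assign = {(p, t): model.NewBoolVar(f"{p}_t{t}")
              for p in players for t in range(num_teams)}

    # Every player plays on exactly one team.
    for p in players:
        model.Add(sum(assign[p, t] for t in range(num_teams)) == 1)

    # Total skill per team; minimise the gap between strongest and weakest team.
    max_skill = sum(players.values())
    totals = []
    for t in range(num_teams):
        total = model.NewIntVar(0, max_skill, f"total_{t}")
        model.Add(total == sum(players[p] * assign[p, t] for p in players))
        totals.append(total)
    hi = model.NewIntVar(0, max_skill, "hi")
    lo = model.NewIntVar(0, max_skill, "lo")
    model.AddMaxEquality(hi, totals)
    model.AddMinEquality(lo, totals)
    model.Minimize(hi - lo)

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        for t in range(num_teams):
            team = [p for p in players if solver.Value(assign[p, t])]
            print(f"Team {t}: {team} (total skill {solver.Value(totals[t])})")

The real app adds more axes (positions, friendships, etc.) as extra constraints and objective terms, but the skeleton is the same.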

I don't really have time for those side-quests, but LLMs make them possible. Easy, even. The amount of time and energy I'd have needed pre-LLMs to do this puts it a million miles away from "a waste of time".

And even if LLMs get no better, we're good at finding the parts that work well and using them as leverage. I'm using them to build and check datasets, because they're really good at extraction. I can throw a human in the loop, but in a startup setting this is 80/20 and that's enough. When I need enterprise-level code, I brainstorm 10 approaches with the LLM and then take the reins. How is this not valuable?

reply
pdimitar
4 days ago
[-]
In other words, you have built exactly zero commercial-grade applications of the kind we working programmers build every day.

LLMs are good for playing with stuff, yes, and that has been implied by your parent commenter as well I think. But when you have to scale the work, then the code has to be easy to read, easy to extend, easy to test, have extensive test coverage, have proper dependency injection / be easy to mock the 3rd party dependencies, be easy to configure so it can be deployed in every cloud provider (i.e. by using env vars and config files for modifying its behavior)... and even more important traits.
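To make that concrete, here is a tiny illustrative sketch (not from any real codebase; all the names are invented) of the kind of structure I mean: configuration read from env vars, and a storage dependency injected so it can be mocked in tests and swapped per cloud provider.

    # Illustrative only: env-var config plus constructor-injected storage.
    import os
    from dataclasses import dataclass
    from typing import Protocol


    @dataclass
    class Config:
        db_url: str
        bucket: str

        @classmethod
        def from_env(cls) -> "Config":
            # Env vars keep deployment concerns out of the code itself.
            return cls(
                db_url=os.environ.get("APP_DB_URL", "sqlite:///local.db"),
                bucket=os.environ.get("APP_BUCKET", "local-bucket"),
            )


    class BlobStore(Protocol):
        def put(self, key: str, data: bytes) -> None: ...


    class ReportService:
        def __init__(self, store: BlobStore, config: Config) -> None:
            # The third-party client is injected, not constructed here,
            # so tests can pass a fake without touching a real cloud.
            self._store = store
            self._config = config

        def publish(self, name: str, body: bytes) -> str:
            key = f"{self._config.bucket}/{name}"
            self._store.put(key, body)
            return key


    class FakeStore:
        def __init__(self) -> None:
            self.saved: dict[str, bytes] = {}

        def put(self, key: str, data: bytes) -> None:
            self.saved[key] = data


    if __name__ == "__main__":
        service = ReportService(FakeStore(), Config.from_env())
        print(service.publish("q1.txt", b"quarterly numbers"))

Trivial on its own, but multiply it across a real codebase and it's exactly the discipline I rarely see in generated code.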

LLMs don't write code like that. Many people have tried with many prompts. They're mostly good for generating stuff once and maybe doing small modifications, while convincingly arguing the code has no bugs (when it does).

You seem to confuse one-off projects that have little to no need for maintenance with actual commercial programming, perhaps?

Your analogy with the automakers seems puzzlingly irrelevant to the discussion at hand, and very far from transferable to it. Also I am 100% convinced nobody is going to protect the programmers; business people trying to replace one guy like myself with 3 Indians has been a reality for 10 years at this point, and amusingly they keep failing and never learning their lesson. /off-topic

Like your parent commenter, if LLMs get to my level, I'd be _happy_ to retire. I don't have a super vested interest in commercial programming; in fact, I only became as good at it in the last several years because I started hating it and wanted to get all my tasks done with minimal time expended. So I am quite confident in my assessment that LLMs are barely at the level of a diligent but not very good junior dev at the moment, and have been for at least the last 6 months.

Your take is rather romantic.

reply
lordnacho
4 days ago
[-]
We're gonna get mini-milled.

People with the attitude that real programmers are producing the high level product are going to get eaten slowly, from below, in the most embarrassing way possible.

Embarrassing because they'll actually be right. LLMs aren't making the high quality products, it's true.

But the low quality CRUD websites (think restaurant menus) will get swallowed by LLMs. You no longer need a guy to code up a model, run a migration, and throw it on AWS. You also don't need a guy to make the art.

Advances in LLMs will feed on the money made from sweeping up all these crap jobs, which will legitimately vanish. That guy who can barely glue together a couple of pages, he is indeed finished.

But the money will make better LLMs. They will swallow up the field from below.

reply
Xelbair
4 days ago
[-]
>But the low quality CRUD websites (think restaurant menus) will get swallowed by LLMs. You no longer need a guy to code up a model, run a migration, and throw it on AWS. You also don't need a guy to make the art.

Just like you don't need webmasters any more, if you remember that term. If you are just writing CRUD apps, then yeah, you're screwed.

If you're a junior, or want to get into the field? Same, you're totally screwed.

LLMs are great at soft tasks, or at producing code that has been written a thousand times - boilerplate, CRUD stuff, straightforward scripts - but the usual problems aren't limited by typing speed, nor by the amount of boilerplate, but by thinking and evaluating solutions and tradeoffs from a business perspective.

Also, I'll be brutally honest - by the time LLMs catch up to my generation's capabilities, we'll already be retired, and that's when the real crisis will start.

No juniors, no capable people, most seniors and principal engineers are retired - and quite probably LLMs won't be able to fully replace them.

reply
dukeyukey
4 days ago
[-]
> But the low quality CRUD websites (think restaurant menus) will get swallowed by LLMs. You no longer need a guy to code up a model, run a migration, and throw it on AWS. You also don't need a guy to make the art.

To be fair those were eaten long ago by Wordpress and site builders.

reply
Macha
4 days ago
[-]
And even Facebook. I think the last time bespoke restaurant menu websites were a viable business was around 2010.
reply
actionfromafar
4 days ago
[-]
That doesn't match my area at all. Most restaurants have menu websites, an app, or both. Some have a basic website with a phone number and a link to an app. There seems to be some kind of app template many restaurants use which you can order from too.
reply
jhanschoo
4 days ago
[-]
> I think the last time bespoke restaurant menu websites

> There seems to be some kind of app template many restaurants use which you can order from too.

I think you agree with the comment you are replying to, but glossed over the word bespoke. IME as a customer in a HCOL area a lot of restaurants use a white-label platform for their digital menu. They don't have the need or expertise to maintain a functional bespoke solution, and the white-label platform lends familiarity and trust to both payment and navigation.

reply
nicksergeant
3 days ago
[-]
Have you personally tried? I have a business doing exactly that and have three restaurants paying between $300-400/month for a restaurant website -- and we don't even integrate directly with their POS / menu providers (Heartland).
reply
raxxorraxor
3 days ago
[-]
I don't think they will vanish at all.

A modern CRUD website (any software can be reduced to CRUD, for that matter) is not trivially implemented and is far beyond what current LLMs can output. I think they will hit a wall before they are ever able to do that. Also, configuration and infrastructure management are a large part of such a project and far out of scope as well.

People have built some useful tools to enable LLMs to do things besides outputting text and images. But it is still quite laborious to really empower them to do anything.

LLMs can indeed empower technical people. For example, those working in automation can generate little Python or JavaScript scripts to push bits around, provided the endpoints have well-known interfaces. That is indeed helpful, but the code still always needs manual review.

Work for amateur web developers will likely change, but they certainly won't be out of work anytime soon. Although the most important factor is that most websites aren't really amateur land anymore, LLM or not.

reply
intelVISA
4 days ago
[-]
Arguably you never really needed a SWE for those use cases; SWEs are for bespoke systems with specific constraints or esoteric platforms.

"That guy who can barely glue together a couple of pages" was never going to provide much value as a developer, the lunches you describe were already eaten: LLMs are just the latest branding for selling solutions to those "crap jobs".

reply
0xdeadbeefbabe
4 days ago
[-]
Who checks to see if the LLM swallowed it or not?
reply
pdimitar
4 days ago
[-]
Yeah, and it's always in the future, right? ;)

I don't disagree btw. Stuff that is very easy to automate will absolutely be swallowed by LLMs or any other tool that's the iteration #13718 of people trying to automate boring stuff, or stuff they don't want to pay full programmer wages for. That much I am very much convinced of as well.

But there are many other, shall we call them rather nebulous, claims that I take issue with. Like "programming is soon going to be dead". I mean, OK, they might believe it, but arguing it on HN is just funny.

reply
nyarlathotep_
4 days ago
[-]
> LLMs are good for playing with stuff, yes, and that has been implied by your parent commenter as well I think. But when you have to scale the work, then the code has to be easy to read, easy to extend, easy to test, have extensive test coverage, have proper dependency injection / be easy to mock the 3rd party dependencies, be easy to configure so it can be deployed in every cloud provider (i.e. by using env vars and config files for modifying its behavior)... and even more important traits.

I'm totally conflicted on the future of computing careers considering LLM impact, but I've worked at a few places and on more than a few projects where few/none of these are met, and I know I'm far from the only one.

I'd wager a large portion of jobs are like this. Majority of roles aren't working on some well-groomed Google project.

Most places aren't writing exquisite examples of a canonically "well-authored" codebase; they're gluing together enterprise CRUD slop to transform data and put it in some other database.

LLMs are often quite good at that. It's impossible to ignore that reality.

reply
rob74
4 days ago
[-]
If there is one job I don't want to do, it's being responsible for a big heap of enterprise CRUD slop generated and glued together by LLMs. If they can write it, they should learn to maintain it too!
reply
Aeolun
4 days ago
[-]
LLMs are good at things a lot of people do, because a lot of people do them, and there are tons of examples.

It’s the very definition of a self-fulfilling prophecy.

reply
UltraSane
4 days ago
[-]
Yes. LLMs are great at generating Python code but not so great at generating APL code.
reply
pdimitar
4 days ago
[-]
Absolutely. I agree with your take. It's just that I prefer to work in places where products are iterated on and not made once, tweaked twice, and then thrown out. There LLMs are for the moment not very interesting because you have to correct like 50% of the code they generate, ultimately wasting more time and brain energy than writing the thing yourself in the first place.
reply
Aeolun
4 days ago
[-]
Very defensive.

I love it anyhow. Sure, it generates shit code, but if you ask it it’ll gladly tell you all the ways it can be improved. And then actually do so.

It's not perfect. I spent a few hours yesterday pulling its massive blobby component apart by hand. But on the plus side, I didn't have to write the whole thing. Just do a bunch of copy-paste operations.

I kinda like having a junior dev to do all the typing for me, and to give me feedback when I can’t think of a good way to proceed.

reply
palata
4 days ago
[-]
> I spent a few hours yesterday pulling its massive blobby component apart by hand. But on the plus side, I didn't have to write the whole thing.

The question, really, is: are you confident that this was better than actually writing the whole thing yourself? Not only in terms of how long it took this one time, but also in terms of how much you learned while doing it.

You accumulate experience when you write code, which is an investment. It makes you better for later. Now if the LLM makes you slightly faster in the short term, but prevents you from acquiring experience, then I would say that it's not a good deal, is it?

reply
pdimitar
4 days ago
[-]
Those are the right questions indeed, thank you for being one of the commenters who looks past the niche need of "I want to generate this and move along".

I tried LLMs several times, even started using my phone's timers, and found out that just writing the code I need by hand is quicker and easier on my brain. Proof-reading and looking for correctness in something already written is more brain-intensive.

reply
askafriend
3 days ago
[-]
Asking questions and thinking critically about the answers is how I learn. The information is better structured with LLMs.

Not everyone is "generating and moving on". There's still a review and iteration part of the process. That's where the learning happens.

reply
olivermuty
4 days ago
[-]
If an LLM can give you feedback on a way to proceed it sounds more like you might be the junior? :P

LLMs seem to be OK-ish at solving trivial boilerplate stuff. Twenty attempts deep, I have not yet seen one even remotely solve anything I've been stuck on long enough to have to sit down and think hard.

reply
whoistraitor
4 days ago
[-]
Curious: what kind of problem domain do you work on? I use LLMs every day on pretty hard problems and they are always a net positive. But I imagine they're not well trained on material related to your work if you don't find them useful.
reply
relaxing
4 days ago
[-]
What kind of problem domain do you work on?
reply
Aeolun
4 days ago
[-]
> If an LLM can give you feedback on a way to proceed it sounds more like you might be the junior? :P

More like it has no grey matter to get in the way of thinking of alternatives. It doesn’t get fixated, or at least, not on the same things as humans. It also doesn’t get tired, which is great when doing morning or late night work and you need a reality check.

Deciding which option is the best one is still a human thing, though I find that those often align too.

reply
pdimitar
4 days ago
[-]
I would not mind having a junior type out the code, or so I thought for a while. But in the case of those of us who do deep work it simply turned out that proof-reading and correcting the generated code is more difficult than writing it out in the first place.

What in my comment did you find defensive, btw? I am curious how it looks from the outside to people who are not exactly of my mind. I'm not promising I'll change, but I'm still curious.

reply
peab
4 days ago
[-]
"In other words, you have built exactly zero commercial-grade applications that us the working programmers work on building every day."

The majority of programmers getting paid a good salary are implementing other people's ideas. You're getting paid to be some PM's ChatGPT.

reply
anhner
4 days ago
[-]
> You're getting paid to be some PM's ChatGPT

yes but one that actually works

reply
frontfor
4 days ago
[-]
You’re completely missing the point. The point being made isn’t about “are we implementing someone else’s idea”. It’s about the complexity and trade-offs and tough calls we have to make in a production system.
reply
pdimitar
4 days ago
[-]
I never said we are working on new ideas only -- that's impossible.

I even gave several examples of the traits that commercial code must have, and LLMs fail to generate such code. Not sure why you ignored that.

reply
PaulDavisThe1st
4 days ago
[-]
> LLMs fail to generate such code

As another oldster, yes, yes absolutely.

But the deeper question is: can this change? They can't do the job now, will they be able to in the future?

My understanding of what LLM's do/how they work suggests a strong "no". I'm not confident about my confidence, however.

reply
pdimitar
4 days ago
[-]
I am sure it can change. Ultimately, programming as we know it will be completely lost; we are going to hand-wave and command computers inside us or inside the walls of our houses. This absolutely will happen.

But my issue is with people that claim we are either at that point, or very nearly it.

...No, we are not. We are not even 10% there. I am certain LLMs will be a crucial part of such a general AI one day, but it's quite funny how people mistake the brain's frontal lobe for the entire brain. :)

reply
datadrivenangel
4 days ago
[-]
There will be plenty of jobs that consist of coming by to read the logs across six systems when the LLM applications break and can't fix themselves.
reply
pdimitar
4 days ago
[-]
Yep, quite correct.

I am even fixing one small app currently that the business owner generated with ChatGPT. So this entire discussion here is doubly amusing for me.

reply
soco
4 days ago
[-]
Just like last year we were fixing the PowerApps-generated apps (or any other low/no-code apps) that citizen developers slapped together.
reply
harimau777
4 days ago
[-]
The question is how well those jobs will pay. That seems like something that might not be able to demand a living wage.
reply
nosbo
4 days ago
[-]
If everyone prompts code, fewer people will actually know what they're doing?
reply
pdimitar
4 days ago
[-]
That's the part many LLM proponents don't get, or hand-wave away. For now LLMs can produce okay one-off / throwaway code by having been fed StackOverflow and Reddit. What happens 5 years down the road when half the programmers are actually prompters?

I'll stick to my "old-school" programming. I seem to have a very wealthy near-retirement period in front of me. I'll make gobs of money just by not having forgotten how to do programming.

reply
pdimitar
4 days ago
[-]
If it can't demand a living wage, then senior programmers will not be doing it, leading to this software remaining defective. What _that_ will lead to, we have no idea, because we don't know what kind of software it will be.
reply
hamandcheese
4 days ago
[-]
Really? Excellent debugging is something I associate with higher-paid engineers.
reply
dreamfactored
4 days ago
[-]
> LLMs don't write code like that

As someone who has been doing this since the mid '80s in all kinds of enterprise environments, I am finding that the latest generation is getting rather good at code like that, on par with mid-to-senior level in that way. They are also very capable of discussing architecture approaches with encyclopaedic knowledge, although humans contribute meaningfully by drawing connections and analogies, and are needed to lead the conversation and make decisions.

What LLMs are still weak at is holding a very large context for an extended period (which is why you can see failures in the areas you mentioned if they're not properly handled, e.g. explicitly discussed, often as separate passes). Humans are better at compressing that information and retaining it over a period. LLMs are also more eager and less defensive coders. That means they need to be kept on a tight leash and drip-fed single changes which get backed out each time they fail - so a very bright junior in that way. For example, I'm sometimes finding that they are too eager to refactor as they go and spit out env vars to make things more production-like, when the task at hand is to get basic and simple first-pass working code for later refinement.

I'm highly bullish on their capabilities as a force multiplier, but highly bearish on them becoming self-driving (for anything complex at least).

reply
pdimitar
4 days ago
[-]
> I'm highly bullish on their capabilities as a force multiplier, but highly bearish on them becoming self-driving (for anything complex at least).

Very well summed up, and this is my exact stance; it's just that I am not seeing much of the "force multiplier" thing just yet. Happy to be proven wrong, but the last time I checked (August 2024) I didn't get almost anything out of it. Might be related to the fact that I don't do throwaway code, and I need to iterate on it.

reply
jvanveen
19 hours ago
[-]
Recently I used Cursor/Claude Sonnet to port ~30k lines of EOL LiveScript/Hyperscript to TypeScript/JSX in less than 2 weeks. That would have taken at least several months otherwise. Definitely a force multiplier for this kind of repetitive work.
reply
pdimitar
19 hours ago
[-]
A shame that you probably can't open-source this; that would have been a hugely impressive case study.

And yeah, out of all the LLMs, it seems that Claude is the best when it comes to programming.

reply
ta12653421
2 days ago
[-]
I don't know how you are using them. They sped up my development around 8-10x; things I wouldn't have done earlier I'm doing now and letting the AI do, like writing boilerplate etc. Just great!
reply
pdimitar
2 days ago
[-]
8-10x?! That's quite weird if you ask me.

I just used Claude recently. Helped me with an obscure library and with the hell that is JWT + OAuth through the big vendors. Definitely saved me a few hours and I am grateful, but those cases are rare.

reply
ta12653421
5 hours ago
[-]
I developed things which I would never have started without AI, because I could see upfront that there would be a lot of mundane/exhausting tasks.
reply
pdimitar
5 hours ago
[-]
Amusingly, I did something similar lately, though I still believe my hand-written Elixir code is better. :)

I'm open to LLMs being a productivity enabler. Recently I started occasionally using them for that as well -- sparingly. I mostly prefer to use them for taking shortcuts when I work with libraries whose docs are lacking.

...But I did get annoyed at the "programming as a profession is soon dead!" people. I do agree with most other takes on LLMs, however.

reply
dreamfactored
3 days ago
[-]
Sonnet 3.5 v2 was released in October. Most people just use that. It's the first and only one that can do a passable front end as well.
reply
pdimitar
3 days ago
[-]
Thank you, I'll check it out.
reply
mediumsmart
4 days ago
[-]
That sounds like they need a Unix-style plug-in pipe approach: create small modules that each do one thing well, handing their result to the next module without caring where it goes or where it came from, while the mother LLM oversees only the black-box connecting pipes, with the complex end result as a collateral divine conception.

now there was someone that I could call king

reply
nadam
4 days ago
[-]
"But when you have to scale the work, then the code has to be easy to read, easy to extend, easy to test, have extensive test coverage, have proper dependency injection / be easy to mock the 3rd party dependencies, be easy to configure so it can be deployed in every cloud provider (i.e. by using env vars and config files for modifying its behavior).."

Interestingly, I do not find that the stuff you mentioned is what LLMs are bad at. They can generate easy-to-read code. They can document their code extensively. They can write tests. They can use dependency injection, especially if you ask them to. What I noticed is that where I am currently much better than an LLM is that I can still hold a very nuanced, very complicated problem space in my head and create a solution based on that. The LLM currently cannot solve a problem so nuanced and complicated that even I cannot fully describe it and have to work partially from instinct. It is the 'engineering taste' or 'engineering instincts', and our ability to absorb complicated, nuanced design problems in our heads, that separate experienced developers from LLMs.

Unless LLMs get significantly better and just replace humans in every task, I predict that LLMs' effect on the industry will be that far fewer developers will be needed, but those who are needed will be relatively 'well paid', as it will be harder to become a professional software developer (more experience and engineering talent will be needed to work professionally).

reply
pdimitar
4 days ago
[-]
If you say so then OK, I am not going to claim you are imagining it. But some proof would be nice.

I have quickly given up on experimenting with LLMs because they turned out to be net negative for my work -- proof-reading code is slower than writing it.

But if you have good examples then I'll check them out. I am open to the possibility that LLMs are getting better at iterating on code. I'd welcome it. It's just that I haven't seen it yet and I am not going to go out of my way to re-vet several LLMs for the... 4th time it would be, I think.

reply
GeoAtreides
4 days ago
[-]
> But when you have to scale the work, then the code has to be easy to read, easy to extend, easy to test, have extensive test coverage, have proper dependency injection / be easy to mock the 3rd party dependencies, be easy to configure so it can be deployed in every cloud provider

the code doesn't have to be anything like that, it only has to do one thing and one thing only: ship

reply
NorthTheRock
4 days ago
[-]
But in a world of subscription based services, it has to ship more than once. And as soon as that's a requirement, all of the above applies, and the LLM model breaks down.
reply
pdimitar
4 days ago
[-]
If and only if it's a one-off. I already addressed this in my comment that you are replying to, and you and several others happily ignored it. Really no idea why.
reply
GeoAtreides
4 days ago
[-]
I'm sorry, but both you and /u/NorthTheRock have a dev-first mindset. Or a Google work-of-art-codebase mindset. Or a bespoke little dev boutique mindset. Something like that. For the vast, vast majority of software devs it doesn't work that way. The way it works is: a PM says we need this out ASAP, then we need this feature, this bug, hurry up, close your tickets, no, a clean codebase and unit tests are not needed, just get this out, the client is complaining.

And so it goes. I'm happy you guys work in places where you can take your time to design beautiful works of art, I really am. Again, that's not the experience for everyone else, who are toiling in the fields out there, chased by large packs of rabid tickets.

reply
pdimitar
4 days ago
[-]
I am aware that the way I work is not the only one, of course, but I am also not so sure about your "vast, vast majority" thing.

You can call it dev-first mindset, I call it a sustainable work mindset. I want the people I work for to be happy with my output. Not everything is velocity. I worked in high-velocity companies and ultimately got tired and left. It's not for me.

And it's not about code being beautiful or other artsy snobby stuff like that. It's about it being maintainable really.

And no, I am not given much time either; I simply refuse to work in places where I am not given any time, is all. I am a foot soldier in the trenches just like you, I just don't participate in the suicidal charges. :)

reply
nyarlathotep_
4 days ago
[-]
Thank you for saying this; I commented something similar.

The HN commentariat seems to be composed of FAANGers and related aspirants; a small part of the overall software market.

The "quality" companies doing high-skilled, bespoke work are a distinct minority.

A huge portion of work for programmers IS Java CRUD, React abominations, some C# thing that runs disgusting queries from an old SQL Server, etc etc.

Those privileged enough to work exclusively at those companies have no idea what the largest portion of the market looks like.

LLMs ARE a "threat" to this kind of work, for all the obvious reasons.

reply
collingreen
3 days ago
[-]
Are they more of a threat than all the no-code tools, spreadsheets, BI tools, and Salesforce?

We've watched common baselines be abstracted away and tons of value be opened up to creation by non-engineers, by reducing the complexity and risk of development and maintenance.

I think this is awesome, and it hasn't seemed to eliminate any engineering jobs or roles - lots of CRUD stuff, easy-to-think-about stuff, or non-critical stuff is now created that wasn't before, and that seems to create much more general understanding of and need for software, not reduce it.

Regardless of the tools available, I think of software engineering as "thinking systematically": being able to model, understand, and extend complex ideas in robust ways. This seems improved and empowered by AI coding options so far, not undermined.

If we reach a level of AI that can take partially thought-out goals, reason out the underlying "how it should work" / "what that implies", create that, and be able to extend it (or replace it wholesale without mistakes), then yeah, people who can ONLY code won't have much earning potential (but what job still will in that scenario?).

It seems like current AI might help generate code more quickly (fantastic). Much better AI might help me stay at a higher level when I'm implementing these systems (really fantastic if it's at least quite good), and much, much better AI might let me just run the entire business myself and free me up to "debug" at the highest level, which is deciding what business/product needs to exist in the first place and figuring out how to get paid for it.

I'm a pretty severe AI doubter, but you can see from my writing above that if it DOES manage to be good, I think that would actually be good for us, not bad.

reply
nyarlathotep_
3 days ago
[-]
I don't know what to believe, long or short term; I'm just speculating based on what I perceive to be a majority of software job opportunities and hypothetical immediate impacts.

My basic conclusion is "they seem quite good (enough) at what appears to be a large portion of 'Dev' jobs" and I can't ignore the possibility of this having a material impact on opportunities.

At this point I'm agnostic on the future of GenAI impacts in any given area. I just no longer have solid ground upon which to have an opinion.

reply
edanm
4 days ago
[-]
> business people trying to replace one guy like myself with 3 Indians has been a reality for 10 years at this point, and amusingly they keep failing and never learning their lesson. /off-topic

What is the "lesson" that business people fail to learn? That Indians are worse developers than "someone like yourself"?

(I don't mean to bump on this, but it is pretty offensive as currently written.)

reply
pdimitar
4 days ago
[-]
1. The Indians were given as an example of absolutely terrible outsourcing agencies' dev employees. Call it racist or offensive if you like; to me it's a statistical observation, and I will offer no excuses; my brain is working properly and is seeing patterns. I have also met amazingly good Indian devs, for what it's worth, but most have been, yes, terrible. There's a link between very low-quality outsourcing agencies and Indian devs. I did not produce or fabricate this reality; it's just there.

2. The lesson business people fail to learn is that there's a minimum you have to pay programmers to get your stuff done, below which the quality drops sharply, and that they should not attempt their cost-saving "strategy" because it ends up costing them much more than just paying me to do it. And I am _not_ commanding SV wages, btw; $100k a year is something I only saw twice in my long career. So it's doubly funny how these "businessmen" try to be shrewd and pay even less, only to end up paying 5x my wage to a team that specializes in salvaging nearly-failed projects. I'll always find it amusing.

reply
stefs
4 days ago
[-]
Not OP, but not necessarily. The general reason is not that Indian developers are worse per se; in my opinion it's more about the business structure. The "replacement Indian" is usually not a coworker at the same company, but an employee at an outsourcing company.

The outsourcing company's goal is not to ship a good product, but to make the most money from you. So while the "Indian developer" might theoretically be able to deliver a better product than you, they won't be incentivized to do so.

In practice, there are also many other factors at play - which arguably might matter more - like communication barriers, Indian career paths (i.e. dev is only a stepping stone on the way to manager), employee churn at cheap and terrible outsourcing companies, etc.

reply
richardw
4 days ago
[-]
The analogy is that Japanese cars were initially utterly uncompetitive, and the arrogance of those who benefited from the status quo meant they were unable to adapt when change came. Humans improved those cars, humans are currently improving models and their use. Cheap toys move up the food chain. Find the direction of change and align yourself with it.

> people trying to replace one guy like myself with 3 Indians has been a reality for 10 years at this point, and amusingly they keep failing and never learning their lesson.

Good good, so you're the software equivalent of a 1990's era German auto engineer who currently has no equal, and maybe your career can finish quietly without any impact whatsoever. There are a lot of people on HN who are not you, and could use some real world advice on what to do as the world changes around them.

If you've read "Crossing the Chasm", in at least that view of the world there are different phases to innovation, with different roles: innovators, early adopters, early majority, late majority, laggards. Each has a different motivation and requirement for robustness. The laggards won't be caught dead using your new thing until IBM gives it to them running on an AS/400. Your job might be equivalent to that, where your skills are rare enough that you'll be fine for a while. However, we're past the "innovators" phase at this point, and a lot of startups are tearing business models apart right now as they move along that innovation curve. They may not get to you, but not everyone is you.

The choices for a developer include: "I'm safe", "I'm going to get so good that I'll be safe", "I'm going to leverage AI and be better", and "I'm out". Their decisions should not be based on your unique perspective, but on the changing landscape and how likely it is to affect them.

Good sense-check on where things are in the startup universe: https://youtu.be/z0wt2pe_LZM

I can't find it now, but there's at least one company that is doing enterprise-scale refactoring with LLMs, ASTs, rules, etc. Obviously it won't handle all edge cases, but... that's not your grandad's Cursor.

Might be this one, but don't recognise the name: https://www.linkedin.com/pulse/multi-repo-ai-assisted-refact...

reply
pdimitar
4 days ago
[-]
You are assuming I am arrogant because I don't view the current LLMs as good coders. That's a faulty assumption so your argument starts with a logical mistake.

Also I never said that I "have no equal". I am saying that the death of my career has been predicted for no less than 10 years now and it still has not happened, and I see no signs of it happening; LLMs produce terrible code very often.

This gives me the right to be skeptical from where I am standing. And a bit snarky about it, too.

I asked for a measurable proof, not for your annoyed accusations that I am arrogant.

You are not making an argument. You are doing an ad hominem attack that weakens any other argument you may be making. Still, let's see some of them.

---

RE: choices, my choice has been made long time ago and it's this one: "I will become quite good so as to be mostly safe. If 'AI' displaces me then I'll be happy to work something else until my retirement". Nothing more, nothing less.

RE: "startup universe", that's a very skewed perspective. 99.99999% of all USA startups mean absolutely nothing in the grand scheme of things out there, they are but a tiny bubble in one country in a big planet. Trends change, sometimes drastically and quickly so. What a bunch of guys in comfy positions think about their bubble bears zero relevance to what's actually going on.

> I can't find it now, but there's at least one company that is doing enterprise-scale refactoring with LLM's, AST's, rules etc.

If you find it, let me know. That I would view as an interesting proof and a worthy discussion to have on it after.

reply
richardw
4 days ago
[-]
"You seem to confuse"

"Your analogy with the automakers seems puzzlingly irrelevant"

"Your take is rather romantic."

That's pretty charged language focused on the person not the argument, so if you're surprised why I'm annoyed, start there.

Meta has one: https://arxiv.org/abs/2410.08806

Another, edited in above: https://www.linkedin.com/pulse/multi-repo-ai-assisted-refact...

Another: https://codescene.com/product/ai-coding

However, I still don't recognise the names. The one I saw had no pricing but had worked with some big names.

Edit: WAIT found the one I was thinking of: https://www.youtube.com/watch?v=Ve-akpov78Q - company is https://about.grit.io

In an enterprise setting, I'm the one hitting the brakes on using LLM's. There are huge risks to attaching them to e.g. customer-facing outputs. In a startup setting, full speed ahead. Match the opinion to the needs, keep two opposing thoughts in mind, etc.

reply
pdimitar
4 days ago
[-]
Thanks for the links, I'll check them out.
reply
relaxing
4 days ago
[-]
Too bad the Grit.io guy doesn’t use his considerable resources to learn how to enunciate clearly.

Or build an AI agent to transform his speech into something more understandable.

reply
Aeolun
4 days ago
[-]
> If you find it, let me know. That I would view as an interesting proof and a worthy discussion to have on it after.

https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/tra...

To be honest, I have no idea how well it works, but you can’t get much bigger than AWS in this regard.

reply
pdimitar
4 days ago
[-]
Thanks. I'll absolutely check this out.
reply
tinodb
1 day ago
[-]
> LLMs don't write code like that. Many people have tried with many prompts. They're mostly good for generating stuff once and maybe doing small modifications, while convincingly arguing the code has no bugs (when it does).

You have to make them write code like that. I TDD by telling the LLM what I want a test for, verifying it is red, then asking it to implement. Then I ask it to write another test for an edge case that isn't covered, and it will.
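Roughly the shape of it, as an illustrative pytest example rather than real project code (the module and `slugify` function here are hypothetical): I ask for the failing test first, confirm it is red because nothing is implemented yet, and only then let the LLM write the implementation.

    # Illustrative red tests written first; `mysite.text.slugify` is a
    # hypothetical function that does not exist yet, so this run is red.
    # The LLM is then asked to implement it, and afterwards to add tests
    # for edge cases that are still uncovered.
    import pytest

    from mysite.text import slugify


    def test_slugify_lowercases_and_joins_with_dashes():
        assert slugify("Hello World") == "hello-world"


    def test_slugify_strips_characters_that_are_not_url_safe():
        assert slugify("Rock & Roll!") == "rock-roll"


    def test_slugify_rejects_blank_input():
        with pytest.raises(ValueError):
            slugify("   ")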

Not for 100% of the code, but for production code for sure. And it certainly speeds me up. Especially in dreadful refactoring where I just need to flip from one data structure to another, where my IDE can’t help much.

reply
w10-1
4 days ago
[-]
I think you're talking about the difference between coding and writing systems, which means you're talking about teams, not individuals.

Rarely, systems can be initiated by individuals, but the vast, vast majority are built and maintained by teams.

Those teams get smaller with LLMs, and it might even lead to a kind of stasis, where there are no new leads with deep experience, so we maintain the old systems.

That's actually fine by big iron, selling data, compute, resilience, and now intelligence. It's a way to avoid new entrants with cheaper programming models (cough J2EE).

So, if you're really serious about dragging in "commercial-grade", it's only fair to incorporate the development and business context.

reply
pdimitar
4 days ago
[-]
I have not seen any proof so far that LLMs can enable teams. A former colleague and I had to fix subtly broken LLM code several times, leading to the "author" being reprimanded for wasting three people's time.

Obviously anecdata, sure, it's just that LLMs for now seem mostly suited for throwaway code / one-off projects. If there's systemic proof for the contrary I'd be happy to check it out.

reply
baq
4 days ago
[-]
> LLMs are barely at the level of a diligent but not very good junior dev at the moment, and have been for at least the last 6 months.

> Your take is rather romantic.

I'm not sure you're aware of what you've written here. The contrast physically hurts.

reply
pdimitar
4 days ago
[-]
Apparently I'm not. Elaborate?
reply
baq
4 days ago
[-]
Current-generation LLMs have been in development for approximately as long as it takes a college to ingest a high schooler and pump out a non-horrible junior software developer. The pace of progress has slowed down, but if we get a further 50% improvement in 200% of the time, it's still you who is being the romantic, not the OP.
reply
pdimitar
4 days ago
[-]
I honestly don't follow your angle at all. Mind telling me your complete takeaway? You are kind of throwing just bits and pieces at me and I followed too many sub-threads to keep full context.

I don't see where I was romantic either.

reply
felideon
4 days ago
[-]
Not the OP, but I imagine it has to do with the "LLMs will improve over time" trope. You said "and have been for at least the last 6 months" and it's confusing what you meant, or what you expected should have happened in the past 6 months.
reply
pdimitar
4 days ago
[-]
I am simply qualifying my claims with the fact that they might not be super up to date is all.

And my comments here mostly stem from annoyance that people claim that we already have this super game-changing AI that will remove programmers. And I still say: no, we don't have it. It works for some things. _Some_. Maybe 10% of the whole thing, if we are being generous.

reply
throwawayffffas
4 days ago
[-]
The main takeaway is that LLMs can't code to a professional level yet, but with improvement they probably will. It doesn't even have to be LLMs; the coding part of our job will eventually be automated to a much larger degree than it is now.
reply
pdimitar
4 days ago
[-]
Anything might happen in the future. My issue is with people claiming we're already in this future, and their only proof is "I generated this one-off NextJS application with LLMs".

Cool for you but a lot of us actually iterate on our work.

reply
palata
4 days ago
[-]
Or not. The point is that we don't know.
reply
shenbomo
4 days ago
[-]
Did you think computers and ARPANET came out as commercial-grade applications, and not as something "good for playing with stuff"?
reply
pdimitar
4 days ago
[-]
Of course they came from playing. But did you have people claiming we have a general AI while basic programming building blocks were still on the operating table?
reply
shenbomo
3 days ago
[-]
Then what was the dot-com bubble?
reply
purple-leafy
3 days ago
[-]
There are some very narrow minded, snarky people here,

“doesn’t pass my sniff test”

okay Einstein, you do you

reply
pdimitar
3 days ago
[-]
Surely, you actually mean those who extrapolate from a few niche tasks to "OMG the programmers are toast"?

Fixed it for you. ;)

reply
purple-leafy
2 days ago
[-]
Sorry I don’t mean to be snarky :) I think there is a happy middle ground between

“AI can’t do anything, it sucks!”

and

“AI is AGI and can do everything and your career is done for”

I teeter along the spectrum, and use with caution while learning new topics without expertise.

But I’ve been very surprised by LLMs in some areas (UI design - something I struggle with - I’ve had awesome results!)

My most impressive use case for an LLM so far (Claude 3.5 Sonnet) has been iterating on a pseudo-3D ASCII renderer in the terminal using C and ncurses. With the LLM's help I was able to prototype it and render an ASCII "forest" of 32 highly detailed ASCII trees (based off a single ASCII tree template), with lighting and 3-axis tree placement, where you can move the "camera" towards and away from the forest.

As you move closer, trees scale up and move towards you, and overlapping trees don't "blend" into one ASCII object; we came up with a clever boundary-highlighting solution.
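The scaling part is just simple perspective projection. Here's a tiny standalone Python sketch of the principle (nothing like the actual C/ncurses code; the sprite and numbers are invented for illustration):

    # "Trees scale up as the camera approaches": a sprite is resampled
    # nearest-neighbour at a size proportional to focal / distance.
    TREE = [
        "  ^  ",
        " ^^^ ",
        "^^^^^",
        "  |  ",
    ]

    def scaled(sprite, scale):
        """Nearest-neighbour resample of an ASCII sprite by `scale`."""
        h, w = len(sprite), len(sprite[0])
        out_h, out_w = max(1, round(h * scale)), max(1, round(w * scale))
        return [
            "".join(sprite[int(y * h / out_h)][int(x * w / out_w)]
                    for x in range(out_w))
            for y in range(out_h)
        ]

    if __name__ == "__main__":
        focal = 8.0
        for camera_z in (16.0, 8.0, 4.0):   # camera moving toward the tree
            scale = focal / camera_z        # closer => larger on screen
            print(f"distance {camera_z}:")
            print("\n".join(scaled(TREE, scale)))
            print()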

Probably my favourite thing I’ve ever coded, will post to HN soon

reply
pdimitar
2 days ago
[-]
I absolutely agree that it's a spectrum. I don't deny the usefulness of some LLMs (and I have used Claude 3.5 lately with great success -- it helped me iterate on some very annoying code involving a very niche library that I would have needed days to properly decipher myself). I got annoyed by the grandiose claims though, so I likely got a bit worked up.

And indeed:

> “AI is AGI and can do everything and your career is done for”

...this is the thing I want to stop seeing people even imply it's happening. An LLM helped you? Cool, it helped me as well. Stop claiming that programming is done for. It's VERY far from that.

reply
rkachowski
4 days ago
[-]
This is quite an absurd, racist and naive take.

"Commercial grade applications" doesn't mean much in an industry where ~50% of projects fail. It's been said before that the average organisation cannot solve a solved problem. On top of this there's a lot of bizarre claims about what software _should_ be. All the dependency injection and TDD and scrum and all the kings horses don't mean anything when we are nothing more than a prompt away from a working reference implementation.

Anyone designing a project to be "deployed in every cloud provider" has wasted a huge amount of time and money, and has likely never run ops for such an application. Even then, knowing the trivia and platform-specific quirks required for each platform is exactly where LLMs shine.

Your comments about "business people" trying to replace you with multiple Indian people show your level of personal and professional development, and you're exactly the kind of personality that should be replaced by an LLM subscription.

reply
pdimitar
4 days ago
[-]
And yet, zero proof again.

Getting so worked up doesn't seem objective so it's difficult to take your comment seriously.

reply
nidnogg
4 days ago
[-]
As a senior-ish programmer who struggled a lot with algorithmic thinking in college, I find it really awe-inspiring.

Truly hit the nail on the head there. We HAD no business with these side-quests, but now? They're all ripe for the taking, really.

reply
cadamsau
4 days ago
[-]
One hundred per cent this.

LLM pair programming is unbelievably fun, satisfying, and productive. Why type out the code when you can instead watch it being typed while thinking of, and typing out or speaking, the next thing you want?

For those who enjoy typing, you could try to get a job taking dictation for lawyers, but something tells me that's on the way out too.

reply
not_kurt_godel
4 days ago
[-]
My experience is it's been unbelievably fun until I actually try to run what it writes and/or add some non-trivial functionality. At that point it becomes unbelievably un-fun and frustrating as the LLM insists on producing code that doesn't give the outputs it says it does.
reply
adamredwoods
4 days ago
[-]
I've never found "LLM pair-programming" to be fun. I enjoy engaging my brain and coding on my own. Copilot and its suggestions become distracting after a point. I'm sure there are several use cases, but for me it's a tool that sometimes gets in the way (I usually turn off suggestions).
reply
wyclif
4 days ago
[-]
What do you prefer to use for LLM pair programming?
reply
richardw
3 days ago
[-]
Claude 70%. ChatGPT o1 for anything that needs more iterations, Cursor for local autocomplete, tested Gemini for concepts and it seemed solid. Replit when I want it to generate everything, from setting up a DB etc for any quick projects. But it’s a bit annoying and drives into a ditch a lot.

I honestly have to keep a tight rein on them all, so I usually ask for concepts first with no code, and need to iterate or start again a few times to get what I need. Get clear instructions, then generate. Drag in context, tight reins on changes I want. Make minor changes rather than wholesale.

Tricks I use. “Do you have any questions?” And “tell me what you want to do first.” Trigger it into the right latent space first, get the right neurons firing. Also “how else could I do this”. It’ll sometimes choose bad algo’s so you need to know your DSA, and it loves to overcomplicate. Tight reins :)

Claude’s personality is great. Just wants to help.

All work best on common languages and libraries. Edge cases or new versions get them confused. But you can paste in a new api and it’ll code against that perfectly.

I also use the APIs a lot, from cheap to pricey depending on the task. Lots of data extraction and classifying. I got a data cleaner (pricier model) working on data generated by a cheaper model, asking it to check e.g. 20 rows in each batch for consistency. It did a great job.
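A stripped-down sketch of that cleaning pattern (the model name, prompt and row format are placeholders, and real code needs retries and schema validation on top):

    # Illustrative batch-consistency checker: a pricier model reviews rows
    # that a cheaper model produced, ~20 rows at a time. Model name and the
    # row format are placeholders, not the production setup.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def check_batch(rows):
        prompt = (
            "You are reviewing extracted data for consistency. Return a JSON "
            'array of objects like {"id": ..., "ok": true/false, "issue": "..."} '
            "covering every row below.\n\n" + json.dumps(rows, ensure_ascii=False)
        )
        resp = client.chat.completions.create(
            model="gpt-4o",  # the "pricier" reviewing model
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return json.loads(resp.choices[0].message.content)

    def check_dataset(rows, batch_size=20):
        findings = []
        for i in range(0, len(rows), batch_size):
            findings.extend(check_batch(rows[i:i + batch_size]))
        return findings

Same idea works with any of the vendor SDKs; the important part is batching, pinning temperature to 0, and validating the returned JSON before trusting it.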

reply
cadamsau
4 days ago
[-]
Claude and pasting code back and forth for simple things. But I’d like to try Claude with a few MCP servers so it can directly modify a repo.

But lately Cursor. It’s just so effortless.

reply
traverseda
4 days ago
[-]
Copilot and aider-chat
reply
nidnogg
4 days ago
[-]
Yeah I had my fair share of pride around typing super fast back in college, but the algorithms were super annoying to think through.

Nowadays I get wayyy more of a kick typing the most efficient Lego prompts in Claude.

reply
jayd16
4 days ago
[-]
"Other people were wrong about something else so that invalidates your argument"

Why are half the replies like this?

reply
richardw
4 days ago
[-]
Because what is shared is overconfidence in the face of a system that has humble beginnings but many people trying to improve it. People have a natural bias against massive changes, and assume the status quo is fixed.

I’m open to all possibilities. There might be a near term blocker to improvement. There might be an impending architectural change that achieves AGI. Strong opinions for one or the other with no extremely robust proof are a mistake.

reply
pdimitar
4 days ago
[-]
The cognitive dissonance of seemingly educated people defending the LLMs when it comes to writing code is my top mystery for the entirety of 2024.

Call me if you find a good reason. I still have not.

reply
shiveenp
4 days ago
[-]
In my opinion, people who harp on about how LLMs have been a game changer for them reveal themselves as never actually having built anything sophisticated enough that a team of engineers can work on and extend for years.
reply
cle
4 days ago
[-]
This back and forth is so tiring.

I have built web services used by many Fortune 100 companies, built and maintained and operated them for many years.

But I'm not doing that anymore. Now I'm working on my own, building lots of prototypes and proofs of concept. For that I've found LLMs to be extremely helpful and time-saving. Who the hell cares if it's not maintainable for years? I'll likely be throwing it out anyway. The point is not to build a maintainable system, it's to see if the system is worth maintaining at all.

Are there software engineers who will not find LLMs helpful? Absolutely. Are there software engineers who will find LLMs extremely helpful? Absolutely.

Both can exist at the same time.

reply
RealityVoid
4 days ago
[-]
I agree with you, and I don't think OP disagrees either. The point of contention is the inevitable and immediate death of programming as a profession.
reply
richardw
4 days ago
[-]
Surely nobody has that binary a view?

What are the likely impacts over the next 1, 5, 10, 20 years. People getting into development now have the most incredible technology to help them skill up, but also more risk than we had in decades past. There's a continuum of impact and it's not 0 or 100%, and it's not immediate.

What I consider inevitable: humans will keep trying to automate anything that looks repeatable. As long as there is a good chance of financial gain from adding automation, we'll try it. Coding is now potentially at risk of increasing automation, with wildcards on "how much" and "what will the impact be". I'm extremely happy to have nuanced discussions, but I balk at both extremes of "LLMs can scale to hard AGI, give up now" and "we're safe forever". We need shorthand for our assumptions and beliefs so we can discuss differences on the finer points without fixating on obviously incorrect straw men. (The latter aimed at the general tone of these discussions, not your comment.)

reply
pdimitar
4 days ago
[-]
And I'll keep telling you that I never said I'm safe forever.
reply
pdimitar
4 days ago
[-]
And I never said both can't exist at the same time. Are you certain you are not the one fighting straw men and are tiring yourself with the imagined extreme dichotomy?

My issue is with people claiming LLMs are undoubtedly going to remove programming as a profession. LLMs work fine for one-off code -- when they don't make mistakes even there, that is. They don't work for a lot of other areas, like code you have to iterate on multiple times because the outer world and the business requirements keep changing.

Works for you? Good! Use it, get more productive, you'll only get applause from me. But my work does not involve one-off code, and for me LLMs are not impressive because I had to rewrite their code (and eyeball it for bugs) multiple times.

"Right tool for the job" and all.

reply
LouisSayers
3 days ago
[-]
"Game changer" maybe.

But what you'll probably find is that people that are skilled communicators are currently getting a decent productivity boost from LLMs, and I suspect that the difference between many that are bullish vs bearish is quite likely coming down to ability to structure and communicate thoughts effectively.

Personally, I've found AI to be a large productivity boost - especially once I've put certain patterns and structure into code. It's then like hitting the N2O button on the keyboard.

Sure, there are people making toy apps using LLMs that are going to quickly become an unmaintainable mess, but don't be too quick to assume that LLMs aren't already making an impact within production systems. I know from experience that they are.

reply
pdimitar
1 day ago
[-]
> I suspect that the difference between many that are bullish vs bearish is quite likely coming down to ability to structure and communicate thoughts effectively.

Strange perspective. I found LLMs lacking in less popular programming languages, for example. It's mostly down to statistics.

I agree that being able to communicate well with an LLM gets you better results. It's a productivity enabler of sorts. It is not a game changer, however.

> don't be too quick to assume that LLMs aren't already making an impact within production systems. I know from experience that they are.

OK, I am open to proof. But people are just saying it and leaving the claims hanging.

reply
pdimitar
4 days ago
[-]
Yep, as cynical and demeaning as that must sound to them, I am arriving at the same conclusion.
reply
richardw
4 days ago
[-]
My last project made millions for the bank I was working at within the first 2 years and is now a case study at one of our extremely large vendors, one you have definitely heard of. I conceptualised it, designed it, wrote the most important code. My boss said my contribution would last decades. You persist in making statements about people in the discussion when you know nothing about their context aside from one opinion on one issue. Focus on the argument, not the people.
reply
pdimitar
4 days ago
[-]
You're the one who claimed that I'm arrogant and pulled that out of thin air. Take your own advice.

I also have no idea what your comment here had to do with LLMs, will you elaborate?

reply
Me001
4 days ago
[-]
All the programming for the Apollo Program took less than a year, and Microsoft Teams has been in development for decades; obviously they are better than NASA programmers.
reply
KPGv2
4 days ago
[-]
The programming for the NASA program was very simple; the logistics of the mission, which had nothing to do with programming, were the complex part.

You're essentially saying "the programming to do basic arithmetic and physics took only a year" as if that's remotely impressive compared to the complexity of something like Microsoft Teams. Simultaneous editing of a document by itself is more complicated than anything the Apollo program had to do.

reply
hi_hi
4 days ago
[-]
I want to not like this comment, but I think you are right! There's a reason people like to say your watch has more compute power than the computers it took to put man on the moon.
reply
bdangubic
4 days ago
[-]
but that's the thing, right? it is not "seemingly": there are A LOT of highly educated people here telling you LLMs are doing amazing shit for them - perhaps a wise response should be "lemme go back and see whether it can also become part of my own toolbox…"

I have spent 24 years coding without LLMs; now I cannot fathom spending more than a day without them…

reply
pdimitar
4 days ago
[-]
I have tried them a few times but they were only good at generating a few snippets.

If you have scrutable and interesting examples, I am willing to look them up and try to apply them to my work.

reply
wizzwizz4
4 days ago
[-]
Which needs does it fulfil that you weren't able to fulfil yourself? (Advance prediction: these are needs that would be fulfilled better by improvements to conventional tooling.)
reply
jayd16
4 days ago
[-]
The short answer is they're trying to sell it.
reply
pdimitar
4 days ago
[-]
Agreed, but I can't imagine that everybody here on HN who's defending LLMs so... misguidedly and obviously ignoring observable reality... is financially interested in LLMs' success, can they?
reply
Aeolun
4 days ago
[-]
I’m financially interested in Anthropic being successful, since it means their prices are more likely to go down, or their models to get (even) better.

Honestly, if you don’t think it works for you, that’s fine with me. I just feel the dismissive attitude is weird since it’s so incredibly useful to me.

reply
Macha
4 days ago
[-]
Given they're a VC backed company rushing to claim market share and apparently selling well below cost, it's not clear that that will be the case. Compare the ride share companies for one case where prices went up once they were successful.
reply
pdimitar
4 days ago
[-]
Why is it weird, exactly? I don't write throwaway projects so the mostly one-off nature of LLM code generation is not useful to me. And I'm not the only one either.

If you can give examples of incredible usefulness then that can advance the discussion. Otherwise it's just us trying to out-shout each other, and I'm not interested in that.

reply
jprete
4 days ago
[-]
It's a message board frequented by extremely tech-involved people. I'd expect the vast majority of people here to have some financial interest in LLMs - big-tech equity, direct employment, AI startup, AI influencer, or whatever.
reply
pdimitar
4 days ago
[-]
Yeah, very likely. It's the new gold rush, and they are commanding wages that make me drool (and also make me want to howl in pain and agony and envy but hey, let's not mention the obvious, shall we?).

I always forget the name of that law but... it's hard to make somebody understand something if their salary depends on them not understanding it.

reply
Der_Einzige
3 days ago
[-]
For similar reasons, I can confidently say that your disliking of LLMs is sour grapes.
reply
pdimitar
3 days ago
[-]
I could take you seriously if you didn't make elementary English mistakes. Also think what you like, makes zero difference to me.
reply
baner2020
4 days ago
[-]
Conflict of interest perhaps
reply
pdimitar
4 days ago
[-]
Yep. Still makes me wonder if they are not seeing the truth but just refusing to admit it.
reply
Philadelphia
4 days ago
[-]
It’s a line by Upton Sinclair: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
reply
fragmede
4 days ago
[-]
Orrrrr, simpler than imagining a vast conspiracy: your observations just don't match theirs. If you're writing, say, C# with some esoteric libraries using CoPilot, it's easy to see it as glorified auto-complete that hallucinates to the point of being unusable, because there's not enough training data. If you're using Claude with Aider to write a webpage using NextJS, you'll see it as a revolution in programming because of how much of that is on Stack Overflow. The other side of it is just how much the code needs to work vs how much it has to look good. If you're used to engineering the most beautifully organized code before shipping once, vs shipping some gross hacks you're ashamed of, where the absolute quality of the code is secondary to it working and passing tests, then the generated code having an extra variable or crap that doesn't get used isn't as big an indictment of LLM-assisted programming as you believe it to be.

Why do you think your observable reality is the only one, and the correct one at that? Looking at your mindset, as well as the objections to the contrary (and their belief that they're correct), the truth is likely somewhere in-between the two extremes.

reply
pdimitar
4 days ago
[-]
Where did I imply conspiracy? People have been known to turn a blind eye to criticism towards stuff they like ever since... forever.

The funny thing about the rest of your comment is that I'm in full agreement with you but somehow you decided that I'm an extremist. I'm not. I'm simply tired of people who make zero effort to prove their hypothesis and just call me arrogant or old / stuck in my ways, again with zero demonstration of how _exactly_ LLMs "revolutionize" programming.

And you have made more effort in that direction than most people I discussed this with, by the way. Thanks for that.

reply
fragmede
1 day ago
[-]
You said you couldn't imagine a conspiracy, and I was responding to that. As far as zero demonstration goes, simonw has a bunch of detailed examples at: https://simonwillison.net/tags/ai-assisted-programming/ or maybe https://simonwillison.net/tags/claude-artifacts/, but the proof is in the pudding, as they say, so setting aside some time and $20 to install Aider and get it working w/ Claude, and then building a web app, is the best way to experience either the second coming or an overhyped letdown (or somewhere in the middle).

Still, I don't think it's a hypothesis that most are operating under, but a lived experience that either it works for them or it does not. Just the other day I used ChatGPT to write me a program to split a file into chunks along a delimiter. Something I could absolutely do, in at least a half-dozen languages, but writing that program myself would have distracted me from the actual task at hand, so I had the LLM do it. It's a trivial script, but the point is I didn't have to break my concentration on the other task to get that done. Again, I absolutely could have done it myself, but that would have been a total distraction. https://chatgpt.com/share/67655615-cc44-8009-88c3-5a241d083a...
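For a sense of scale, the whole thing was on the order of this (a from-memory sketch, not the actual script; the delimiter and output naming are made up for illustration):

    import sys

    def split_file(path, delimiter="---"):
        """Write each delimiter-separated chunk of `path` to its own numbered file."""
        with open(path) as f:
            chunks = f.read().split(delimiter)
        for i, chunk in enumerate(chunks):
            with open(f"{path}.part{i}", "w") as out:
                out.write(chunk)

    if __name__ == "__main__":
        split_file(sys.argv[1])

Trivial, like I said, but it's exactly the kind of thing I'd rather not context-switch for.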

On a side project I'm working on, I said "now add a button to reset the camera view" (literally; Aider can take voice input). We're not quite at the scene from Star Trek where Scotty talks into the Mac to try and make transparent aluminum, but we're not that far off! The LLM went and added the button, wired it into a function call that called into the rendering engine and reset the view. Again, I very much could have done that myself, but it would have taken me longer just to flip through the files involved and type out the same thing. It's not just the time saved (I didn't have a stopwatch and a screen recorder running); it's also not having to drop my thinking into that frame of reference, so I can think more deeply about the other problems to be solved. Sort of why a CEO isn't an IC and why ICs aren't supposed to manage.

Detractors will point out that it must be a toy program, and that it won't work on a million line code base, and maybe it won't, but there's just so much that LLMs can do, as they exist now, that it's not hyperbole to say it's redefined programming, for those specific use cases. But where that use case is "build a web app", I don't know about you, but I use a lot of web apps these days.

reply
pdimitar
1 day ago
[-]
These are the kind of the informed takes that I love. Thank you.

> Detractors will point out that it must be a toy program, and that it won't work on a million line code base, and maybe it won't

You might call me a detractor. I think of myself as being informed and feeling the need to point out where LLMs end and programmers begin, because apparently even on an educated forum like HN people shill and exaggerate all the time. That was the reason for most of my comments in this thread.

reply
richardw
4 days ago
[-]
And it’s the short wrong answer. I get absolutely zero benefit if you or anyone else uses it. This is not for you or me, it’s for young people who don’t know what to do. I’m a person who uses a few LLMs, and I do so because I see the risks and want to participate rather than have change imposed on me.
reply
pdimitar
4 days ago
[-]
OK, do that. Seems like a case of FOMO to me but it might very well turn out that you're right and I'm wrong.

However, that's absolutely not clear today.

reply
sigmarule
4 days ago
[-]
Genuine question: how hard have you tried to find a good reason?
reply
pdimitar
4 days ago
[-]
Obviously not very hard. And looking at the blatant and unproven claims on HN gave me the view that the proponents are not interested in giving proof; they simply want to silence anyone who disagrees that LLMs are useful for programming.
reply
c0redump
4 days ago
[-]
Because they don’t have an actual argument.
reply
Timber-6539
4 days ago
[-]
This is no different from creating a to-do app with an LLM and proclaiming all developers are up for replacement. Demos are not what makes LLMs good, let alone useful.
reply
dustingetz
4 days ago
[-]
quantum computers still can’t factor any number larger than 21
reply
apwell23
4 days ago
[-]
> 've got almost 30 years experience but I'm a bit rusty in e.g. web. But I've used LLM's to build maybe 10 apps that I had no business building, from one-off kids games to learn math,

Yea, I built a bunch of apps when the RoR blog demo came out, like 2 decades ago. So what?

reply
rybosworld
4 days ago
[-]
> Nothing because I’m a senior and LLM’s never provide code that pass my sniff test, and it remains a waste of time.

I am constantly surprised how prevalent this attitude is. ChatGPT was only just released in 2022. Is there some expectation that these things won't improve?

> LLM’s never provide code that pass my sniff test

This is ego speaking.

reply
IshKebab
4 days ago
[-]
> Is there some expectation that these things won't improve?

I definitely expect them to improve. But I also think the point at which they can actually replace a senior programmer is pretty much the exact point at which they can replace any knowledge worker, at which point western society (possibly all society) is in way deeper shit than just me being out of a job.

> This is ego speaking.

It definitely isn't. LLMs are useful for coding now, but they can't really do the whole job without help - at least not for anything non-trivial.

reply
munk-a
4 days ago
[-]
Intellisense style systems were a huge feature leap when they gained wider language support and reliability. LLMs are yet another step forward for intellisense and the effort of comprehending the code you're altering. I don't think I will ever benefit from code generation in a serious setting (it's excellent for prototyping) simply due to the fact that it's solving the easy problem (write some code) while creating a larger problem (figure out if the code that was generated is correct).

As another senior developer I won't say it's impossible that I'll ever benefit from code generation but I just think it's a terrible space to try and build a solution - we don't need a solution here - I can already type faster than I can think.

I am keenly interested in seeing if someone can leverage AI for query performance tuning or, within the RDBMS, query planning. That feels like an excellent (if highly specific) domain for an LLM.

reply
LouisSayers
3 days ago
[-]
> I am keenly interested in seeing if someone can leverage AI for query performance tuning or, within the RDBMS, query planning. That feels like an excellent (if highly specific) domain for an LLM.

Pay the $20 for Claude, then copy in the table DDLs along with a query you'd like to tune.

Copy in any similar tuned queries you have and tell it you'd like to tune your query in a similar manner.

Once you've explained what you'd like it to do and provided context, hit enter.

I'd be very surprised if having done this you can't find value in what it generates.
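To make that concrete, here's a rough sketch of assembling that kind of prompt (the table and query are invented purely for illustration; you'd paste the printed text into Claude):

    # Hypothetical example: bundle the DDL and a slow query into one prompt for Claude.
    ddl = """
    CREATE TABLE orders (
        id          BIGINT PRIMARY KEY,
        customer_id BIGINT NOT NULL,
        created_at  TIMESTAMPTZ NOT NULL
    );
    CREATE INDEX idx_orders_customer ON orders (customer_id);
    """

    slow_query = """
    SELECT customer_id, COUNT(*) AS order_count
    FROM orders
    WHERE created_at > now() - interval '30 days'
    GROUP BY customer_id
    ORDER BY order_count DESC;
    """

    prompt = (
        "Here is the schema for the relevant tables:\n" + ddl
        + "\nPlease tune this query in a similar style to my existing queries, "
        + "explaining any index or rewrite suggestions:\n" + slow_query
    )
    print(prompt)  # paste the output into Claude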

reply
tlarkworthy
4 days ago
[-]
> I can already type faster than I can think.

But can you write tickets faster than you can implement them? I certainly can.

reply
comex
4 days ago
[-]
> But can you write tickets faster than you can implement them? I certainly can.

Personally, I have a tendency at work to delay creating tickets until after I've already written the implementation.

Why? Because tickets in my employer's system are expected to identify which component needs to be changed, and ideally should have some detail about what needs to be changed. But both of those things depend on the design of the change being implemented.

In my experience, any given feature usually has multiple possible designs, and the only way to know if a design is good is to try implementing it and see how clean or messy it ends up. Of course, I can guess in advance which design will turn out well. I have to guess, or else I wouldn't know which design to try implementing first. But often my first attempt runs into unexpected wrinkles and I retreat and try a different design.

Other people will start with a design and brute-force their way to working code, and (in my judgmental opinion) the code often ends up lower-quality because of it.

Sooner or later, perhaps AI will be able to perform that entire process autonomously, better than I can. In the meantime, though, people often talk about using AI like a 'junior engineer', where you think up a design yourself and then delegate the grunt work to the AI. That approach feels flawed to me, because it disconnects the designer from the implementer.

reply
lifeisstillgood
4 days ago
[-]
>>> delay creating tickets until after I've already written the implementation. Why? Because tickets in my employer's system are expected to identify which component needs to be changed,

abso-frigging-lutely

To me this is an example of software being a form of literacy - creative work. And yet the process is designed by software illiterates who think novels can be written by pre-planning all the paragraphs.

reply
vinnymac
4 days ago
[-]
Depends on the ticket.

If it's "Get us to the moon", it's gonna take me years to write that ticket.

If it was "Make the CTA on the homepage red", it is up for debate whether I needed a ticket at all.

reply
colecut
4 days ago
[-]
Based on current methods of productivity scoring, I'd try to make 2 tickets for that...
reply
jayd16
4 days ago
[-]
> LLMs are yet another step forward for intellisense

That would be great if the reality wasn't that the suggested LLM slop is actually making it harder to get the much better intellisense suggestions of last year.

reply
PeterisP
4 days ago
[-]
If an LLM (or any other tool) makes it so that a team of 8 can get the same results in the same time as it used to take a team of 10, then I would count that as "replaced 2 programmers" - even if there's no particular person whose whole job has been replaced, that's not a meaningful practical difference; replacing a significant fraction of every programmer's job has the same outcomes and impacts as replacing a significant fraction of programmers.
reply
RangerScience
4 days ago
[-]
Fav anecdote from ages ago:

When hand-held power tools became a thing, the Hollywood set builder’s union was afraid of this exact same thing - people would be replaced by the tools.

Instead, productions built bigger sets (the ceiling was raised) and smaller productions could get in on things (the floor was lowered).

I always took that to mean “people aren’t going to spend less to do the job - they’ll just do a bigger job.”

reply
esmevane
4 days ago
[-]
It's always played out like this in software, by the way. Famously, animation shops hoped to save money on production by switching over to computer rendered cartoons. What happened instead is that a whole new industry took shape, and brought along with it entire cottage industries of support workers. Server farms required IT, renders required more advanced chips, some kinds of animation required entirely new rendering techniques in the software, etc.

A few hundred animators turned into a few thousand computer animators & their new support crew, in most shops. And new, smaller shops took form! But the shops didn't go away, at least not the ones who changed.

It basically boils down to this: some shops will act with haste and purge their experts in order to replace them with LLMs, and others will adopt the LLMs, bring on the new support staff they need, and find a way to synthesize a new process that involves experts and LLMs.

Shops who've abandoned their experts will immediately begin to stagnate and produce more and more mediocre slop (we're seeing it already!) and the shops who metamorphose into the new model you're speculating at will, meanwhile, create a whole new era of process and production. Right now, you really want to be in that second camp - the synthesizers. Eventually the incumbents will have no choice but to buy up those new players in order to coup their process.

reply
rendaw
4 days ago
[-]
And 3D animation still requires hand animation! Nobody starts with 3D animation; the senior animators are doing storyboards and keyframes which they _then_ use as a guide for 3D animation.
reply
mitthrowaway2
4 days ago
[-]
The saddle, the stirrup, the horseshoe, the wagon, the plough, and the drawbar all enhanced the productivity of horses and we only ended up employing more of them.

Then the steam engine and internal combustion engine came around and work horses all but disappeared.

There's no economic law that says a new productivity-enhancing programming tool is always a stirrup and never a steam engine.

reply
RangerScience
3 days ago
[-]
I think you raise an excellent point, and can use your point to figure out how it could apply in this case.

All the tools that are stirrups were used "by the horse" (you get what I mean); that implies to me that so long as the AI tools are used by the programmers (what we've currently got), they're stirrups.

The steam engines were used by the people "employing the horse" - ala, "people don't buy drills they buy holes" (people don't employ horses, they move stuff) - so that's what to look for to see what's a steam engine.

IMHO, as long as all this is "telling the computer what to do", it's stirrups, because that's what we've been doing. If it becomes something else, then maybe it's a steam engine.

And, to repeat - thank you for this point, it's an excellent one, and provides some good language for talking about it.

reply
mitthrowaway2
3 days ago
[-]
Thanks for the warm feedback!

Maybe another interesting case would be secretaries. It used to be very common that even middle management positions at small to medium companies would have personal human secretaries and assistants, but now they're very rare. Maybe some senior executives at large corporations and government agencies still have them, but I have never met one in North America who does.

Below that level it's become the standard that people do their own typing, manage their own appointments and answer their own emails. I think that's mainly because computers made it easy and automated enough that it doesn't take a full time staffer, and computer literacy got widespread enough that anyone could do it themselves without specialized skills.

So if programming got easy enough that you don't need programmers to do the work, then perhaps we could see the profession hollow out. Alternatively we could run out of demand for software but that seems less likely!

(a related article: https://archive.is/cAKmu )

reply
karaterobot
4 days ago
[-]
Another anecdote: when mechanical looms became a thing, textile workers were afraid that the new tools would replace them, and they were right.
reply
esmevane
4 days ago
[-]
Oh my, no. Fabrics and things made from fabrics remain largely produced by human workers.

Those textile workers were afraid machines would replace them, but that didn't happen - the work was sent overseas, to countries with cheaper labor. It was completely tucked away from regulation and domestic scrutiny, and so remains to this day a hotbed of human rights abuses.

The phenomenon you're describing wasn't an industry vanishing due to automation. You're describing a moment where a domestic industry vanished because the cost of overhauling the machinery in domestic production facilities was near to the cost of establishing an entirely new production facility in a cheaper, more easily exploitable location.

reply
karaterobot
2 days ago
[-]
I think you're talking about people who sew garments, not people who create textile fabrics. In any case, the 19th century British textile workers we all know I'm talking about really did lose their jobs, and those jobs did not return.
reply
hnthrowaway6543
4 days ago
[-]
430 million people currently work in textiles[0]; how big was the industry before mechanical looms?

[0] https://www.uniformmarket.com/statistics/global-apparel-indu...

reply
fragmede
4 days ago
[-]
How many worked in America and Western Europe and were paid a living wage in their respective countries, and how many of those 430 million people currently work making textiles in America and Western Europe today, and how many are lower paid positions in poorer countries, under worse living conditions? (Like being locked into the sweatshop, and bathroom breaks being regulated?)

Computers aren't going anywhere, so the whole field of programming will continue to grow, but will there still be FAANG salaries to be had?

reply
jayd16
4 days ago
[-]
That's a different topic entirely.
reply
madmask
4 days ago
[-]
I think this is THE point.
reply
hnthrowaway6543
4 days ago
[-]
I heard the same shit when people were talking about outsourcing to India after the dotcom bubble burst: programmer salaries would cap out at $60k because of international competition.

If you're afraid of salaries shrinking due to LLMs, then I implore you, get out of software development. It'll help me a lot!

reply
throw234234234
4 days ago
[-]
It solely depends on whether more software being built is being constrained by feasibility/cost or a lack of commercial opportunities.

Software is typically not a cost-constrained activity due to its higher ROI/scale. It's mostly about fixed costs and scaling profits. Unfortunately, given this, my current belief is that on balance AI will destroy many jobs in this industry if it gets to the point where it can do a software job.

Assuming inelastic demand (software demand relative to SWE costs) any cost reductions in inputs (e.g. AI) won't translate to much more demand in software. The same effect that drove SWE prices high and didn't change demand for software all that much (explains the 2010's IMO particularly in places like SV) also works in reverse.

reply
robwwilliams
4 days ago
[-]
Is there a good reason to assume inelastic demand? In my field, biomedical research, I see huge untapped areas in need of much more and better core programming services. Instead we "rely" way too much on grad students (not even junior SWEs) who know a bit of Python.
reply
throw234234234
4 days ago
[-]
Depends on the niche (software is broad) but I think as a whole for the large software efforts employing a good chunk of the market - I think so. Two main reasons for thinking this way:

- Software scales; generally it is a function of market size, reach, network effects, etc. Cost is a factor but not the greatest factor. Most software makes profit "at scale" - engineering is a fixed capital cost. This means that software feasibility is generally inelastic to cost; or rather, if SWEs were cheaper or not required, it wouldn't change the potential profit-vs-cost equation much, in my view, for many different opportunities. Software is limited more by ideas and potential untapped market opportunities. Yes, it would be much cheaper to build things, but it wouldn't change the feasibility of a lot of project assessments, since the cost of SWEs, at least from what I've seen in assessments, isn't the biggest factor. This effect plays out to varying degrees in a lot of capex - as long as the ROI makes sense it's worth going ahead, especially for larger orgs who have more access to capital. The ROI from scale often dwarfs potential cost rises, making it less of a function of end SWE demand. This effect happens in other engineering disciplines as well to varying degrees - software just has it in spades, until you mix hardware scaling into the mix (e.g. GPUs).

- The previous "golden era", where the inelasticity of software demand w.r.t. cost meant salaries just kept rising. Inelasticity can be good for sellers of a commodity if demand increases. More importantly, demand didn't really decrease for most companies as SWE salaries kept rising - entry requirements were generally relaxing. That good side of inelasticity is potentially reversed by AI, making it a bad thing.

However, small "throwaway" software, which does have an "is it worth it" cost factor, will most probably increase under AI. I just don't think it will offset the reduction demanded by capital holders; nor will it necessarily be done by the same people anyway (democratizing software dev), meaning it isn't a job saver.

In your case I would imagine there is a reason why said software doesn't have SWEs coding it now - it isn't feasible given the likely scale it would have (I assume just your team). AI may make it feasible, but not in a way that helps the OP: it does so by making it feasible for, as you put it, grad students who aren't even junior SWEs. That doesn't help the OP.

reply
kbelder
4 days ago
[-]
And that was good. Workers being replaced by tools is good for society, although temporarily disruptive.
reply
sigmarule
4 days ago
[-]
This could very well prove to be the case in software engineering, but also could very well not; what is the equivalent of "larger sets" in our domain, and is that something that is even preferable to begin with? Should we build larger codebases just because we _can_? I'd say likely not, while it does make sense to build larger/more elaborate movie sets because they could.

Also, a piece missing from this comparison is a set of people who don't believe the new tool will actually have a measurable impact on their domain. I assume few-to-none could argue that power tools would have no impact on their profession.

reply
ska
4 days ago
[-]
> Should we build larger codebases just because we _can_?

The history of software production as a profession (as against computer science) is essentially a series of incremental increases in the size and complexity of systems (and teams) that don't fall apart under their own weight. There isn't much evidence we have approached the limit here, so it's a pretty good bet for at least the medium term.

But focusing on system size is perhaps a red herring. There is an almost unfathomably vast pool of potential software systems (or customization of systems) that aren't realized today because they aren't cost effective...

reply
danenania
4 days ago
[-]
Have you ever worked on a product in production use without a long backlog of features/improvements/tests/refactors/optimizations desired by users, managers, engineers, and everyone else involved in any way with the project?

The demand for software improvements is effectively inexhaustible. It’s not a zero sum game.

reply
nidnogg
4 days ago
[-]
This is a real thing. LLMs are tools, not humans. They truly do bring interesting, bigger problems.

Have people seen some of the recent software being churned out? Hint, it's not all GenAI bubblespit. A lot of it is killer, legitimately good stuff.

reply
pixeltechie
4 days ago
[-]
This is a good example of what could happen to software development as a whole. In my experience large companies tend to buy software more often than make it. AI could drastically change the "make or buy" decision in favour of make, because you need fewer developers to create a perfectly tailored solution that directly fits the needs of the company. So "make" becomes affordable and more attractive.
reply
F-W-M
4 days ago
[-]
I work for a large european power utility. We are moving away from buying to in-house development. LLMs have nothing to do with it.
reply
hn_throwaway_99
4 days ago
[-]
That's actually not accurate. See Jevons paradox, https://en.m.wikipedia.org/wiki/Jevons_paradox. In the short term, LLMs should have the effect of making programmers more productive, which means more customers will end up demanding software that was previously uneconomic to build (this is not theoretical - e.g. I work with some non-profits who would love a comprehensive software solution, they simply can't afford it, or the risk, at present).
reply
hnthrowaway6543
4 days ago
[-]
Yes, this. The backlog of software that needs to be built is fucking enormous.

You know what I'd do if AI made it so I could replace 10 devs with 8? Use the 2 newly-freed-up developers to work on some of the other 100000 things I need done.

reply
ConspiracyFact
4 days ago
[-]
I'm casting about for project ideas. What are some things that you think need to be built but haven't?
reply
zdragnar
4 days ago
[-]
Every company that builds software has dozens of things they want but can't prioritize because they're too busy building other things.

It's not about a discrete product or project, but continuous improvement upon that which already exists is what makes up most of the volume of "what would happen if we had more people".

reply
a_bonobo
4 days ago
[-]
some ideas from my own work:

- a good LIMS (Laboratory Information Management System) that incorporates bioinformatics results. LIMS come from a pure lab, benchwork background, and rarely support the inclusion of bioinformatics analyses on samples included in the system. I have yet to see a lab that uses an off-the-shelf LIMS unmodified - they never do what they say they do. (And the number of labs still running on something built on age-old software is... horrific. I know one US lab running some abomination built on FileMaker Pro.)

- Software to manage grants. Who is owed what, what the milestones are and when to send reminders, who's looking after this, who the contact persons are, due diligence on potential partners, etc. I worked for a grant-giving body and they came up with a weird mix of PowerBI and a pile of Excel sheets and PDFs.

- A thing that lets you catalogue Jupyter notebooks and Rstudio projects. I'm drowning in various projects from various data scientists and there's no nice way to centrally catalogue all those file lumps - 'there was this one function in this one project.... let's grep a bit' can be replaced by a central findable, searchable, taggable repository of data science projects.

reply
tonyarkles
4 days ago
[-]
> A thing that lets you catalogue Jupyter notebooks and Rstudio projects. I'm drowning in various projects from various data scientists and there's no nice way to centrally catalogue all those file lumps - 'there was this one function in this one project.... let's grep a bit' can be replaced by a central findable, searchable, taggable repository of data science projects.

Oh... oh my. This extends so far beyond data science for me and I am aching for this. I work in this weird intersection of agriculture/high-performance imaging/ML/aerospace. Among my colleagues we've got this huge volume of Excel sheets, Jupyter notebooks, random Python scripts and C++ micro-tools, and more I'm sure. The ones that "officially" became part of a project were assigned document numbers and archived appropriately (although they're still hard to find). The ones that were one-off analyses for all kinds of things are scattered among OneDrive folders, Zip files in OneDrive folders, random Git repos, and some, I'm sure, only exist on certain peoples' laptops.

reply
mindok
4 days ago
[-]
Ha - I’m on the other side of the grant application process and used an LLM to make a tool to describe the project, track milestones and sub-contractors, generate a costed project plan, and generate all the other responses that need to be self-consistent in the grant application process.
reply
hadlock
4 days ago
[-]
A consumer-friendly front end to the NOAA/NWS website, and adequate documentation for their APIs, would be a nice start. Weather.com, AccuWeather and Wunderground exist, but they're all buggy and choked with ads. If I want to track daily rainfall in my area vs local water reservoir levels vs local water table readings, I can do that - all the information exists - but I can't conveniently link it together. The Fed has a great website where you can build all sorts of tables and graphs of current and historical economic market data, but environmental/weather data is left out in the cold right now.
reply
robwwilliams
4 days ago
[-]
Ditto for most academic biomedical research: we desperately need more high quality customized code. Instead we have either nothing or Python/R code written by a grad student or postdoc—code that dies a quick death.
reply
IshKebab
4 days ago
[-]
> then I would count that as "replaced 2 programmers"

Well then you can count IDEs, static typing, debuggers, version control etc. as replacing programmers too. But I don't think any of those performance enhancers have really reduced the number of programmers needed.

In fact it's a well known paradox that making a job more efficient can increase the number of people doing that job. It's called the Jevons paradox (thanks ChatGPT - probably wouldn't have been able to find that with Google!)

Making people 20% more efficient is very different to entirely replacing them.

reply
fragmede
4 days ago
[-]
I know it's popular to hate on Google, but a link to Wikipedia is the first result I get for a Google search of "efficiency paradox".
reply
akircher
4 days ago
[-]
As a founder, I think that this viewpoint misses the reality of a fixed budget. If I can make my team of 8 as productive as 10 with LLMs then I will. But that doesn’t mean that without LLMs I could afford to hire 2 more engineers. And in fact if LLMs make my startup successful then it could create more jobs in the future.
reply
rozap
4 days ago
[-]
> I definitely expect them to improve. But I also think the point at which they can actually replace a senior programmer is pretty much the exact point at which they can replace any knowledge worker, at which point western society (possibly all society) is in way deeper shit than just me being out of a job.

Agree with this take. I think the probability that this happens within my next 20 years of work is very low, but non-zero. I do cultivate skills that are a hedge against this, and if time moves on and the probability of this scenario seems to get larger and larger, I'll work harder on those skills. Things like fixing cars, welding, fabrication, growing potatoes, etc. (which I already enjoy as hobbies). As you said, skills that are helpful if shit were to really hit the fan.

I think there are other "knowledge workers" that will get replaced before that point though, and society will go through some sort of upheaval as this happens. My guess is that capital will get even more consolidated, which is sort of unpleasant to think about.

reply
boogieknite
4 days ago
[-]
I'm also expecting this outcome. My thought is that once devs are completely replaced by ML, we're finally going to have to adapt society to raise the floor, because we can no longer outcompete automation. I'm ready to embrace this, but it seems like there is no plan for mass retirement and we're going to need one. If something as complicated as app and system dev is automated, pretty much anything can be.

Earnest question, because I still consider this a vague, distant future: how did you come up with 20 years?

reply
rozap
4 days ago
[-]
That's around, give or take a few years, when I plan to retire. Don't care a whole lot what happens after that point :)
reply
throw234234234
3 days ago
[-]
I think if you are either young or bound to retire in the next 5-10 years there's less worry. If you are young you can pivot easier (avoid the industry); if you are old you can just retire with enough wealth behind you.

It's the mid-career people, the 35-45 year olds, that I think will be hit hardest by this if it eventuates. Usually at this point in life there are plenty of commitments (family, mortgage, whatever). The career may end before they are able to retire, but they are either too burdened with things, or ageism sets in, making it hard to be adaptable.

reply
rozap
3 days ago
[-]
I think you can look at any cohort (except for the people already at retirement age) and see a way that they're screwed. I'm 34 and I agree that you're right, but the 22 year old fresh out of college with a CS degree might have just gotten a degree in a field that is going to be turned inside out, in an economy that might be confused as we adapt. It's just too big of a hypothetical change to say who will get the most screwed, imo.

And like the parent poster said - even if you were to avoid the industry, what would you pivot to? Anything else you might go into would be trivial to automate by that point.

reply
throw234234234
3 days ago
[-]
I think trades and blue collar work, where the experience learnt (the moat) is in the physical realm (not math/theory/process but dexterity and hands-on knowledge) and where iteration/brute force costs too much money or causes too much risk, is the place to be. If I get a building wrong, then unlike software where I can just "rebuild", it costs a lot of money in the real world and carries risk, like say "environmental damage". It means rapid iteration and development just doesn't happen, which limits the exponential growth of AI in that area compared to the digital world. Sure, in a lifetime we may have robots, etc., but it will be a LOT more expensive to get there, it will happen a lot slower, and people can adjust. LLMs being 90% right is just not good enough - the cost of failure is too high and accountability for failure needs to exist.

Those industries also, more wisely IMO, tend to be unionised or to own their own businesses, and so have the incentive to keep their knowledge tight. Even with automation, the customer (their business) and the supplier (them and their staff) are the same, so AI will make their job easier but they will keep the value. All good things to slow down progress and keep some economic rent for yourself and your family. Slow change is a less stressful life.

The intellectual fields lose in an AI world long term; the strong and the rentier class (capital/owners) win. That's what "intelligence is a commodity", the phrase many AI heads keep repeating, actually means. This opens up a lot of future dystopian views/risks that probably aren't worth the benefit, IMO, to the majority of people who aren't in the above classes (i.e. most people).

The problem with software in general is that it is quite difficult, at least in my opinion, for most people to be a "long term" founder, which means most employment comes from large corps/govts/etc. where the supplier (the SWE) is different from the employer. Most ideas generally don't make it or last only briefly, and the ones that stick around usually benefit from dealing with scale - something generally only larger corps have (there are exceptions in new fields, but then they become the next large corp, and there isn't enough room for everyone to do this).

reply
sungho_
4 days ago
[-]
What if anti-aging technology is developed and you can live up to 1000 years?
reply
rozap
3 days ago
[-]
I certainly am not in the 0.5% rich enough to be able to afford such a thing. If it does get developed (it won't in my lifetime), there's absolutely no way in hell it will be accessible to anyone but the ultra rich.
reply
sleepybrett
4 days ago
[-]
One side thing I want to point out: there is a lot of talk about LLMs not being able to replace a senior engineer... the implication being that they can replace juniors. How exactly do you get senior engineers when you've destroyed the entire market for junior engineers?
reply
risyachka
4 days ago
[-]
This.

When this moment becomes reality, the world economy will change a lot; all jobs and markets will shift.

And there won't be any way to future-proof your skills; they will all be irrelevant.

Right now many like to say "learn how to work with AI, it will be valuable". No, it won't, because even now it is absolutely easy to work with; any developer can pick up AI in a week, and it will become easier and easier.

Time is better spent developing evergreen skills.

reply
dingnuts
4 days ago
[-]
> LLMs are useful for coding now

*sort of, sometimes, with simple enough problems with sufficiently little context, for code that can be easily tested, and for which sufficient examples exist in the training data.

I mean hey, two years after being promised AGI was literally here, LLMs are almost as useful as traditional static analysis tools!

I guess you could have them generate comments for you based on the code as long as you're happy to proofread and correct them when they're wrong.

Remember when CPUs were obsolete after three years? GPT has shown zero improvement in its ability to generate novel content since it was first released as GPT2 almost ten years ago! I would know because I spent countless hours playing with that model.

reply
margalabargala
4 days ago
[-]
Firstly, GPT-2 was released in 2019. Five years is not "almost ten years".

Secondly, LLMs are objectively useful for coding now. That's not the same thing as saying they are replacements for SWEs. They're a tool, like syntax highlighting or real-time compiler error visibility or even context-aware keyword autocompletion.

Some individuals don't find those things useful, and prefer to develop in a plain text editor that does not have those features, and that's fine.

But all of those features, and LLMs are now on that list, are broadly useful in the sense that they generally improve productivity across the industry. They already right now save enormous amounts of developer time, and to ignore that because you are not one of the people whose time is currently being saved, indicates that you may not be keeping up with understanding the technology of your field.

There's an important difference between a tool being useful for generating novel content, and a tool being useful. I can think of a lot of useful tools that are not useful for generating novel content.

reply
bawolff
4 days ago
[-]
> are broadly useful in the sense that they generally improve productivity across the industry. They already right now save enormous amounts of developer time,

But is that actually a true statement? Are there actual studies to back that up?

AI is hyped to the moon right now. It is really difficult to separate the hype from reality. There are anecdotal reports of AI helping with coding, but there are also anecdotal reports that it gets things almost right but not quite, which often leads to bugs that wouldn't otherwise happen. I think it's unclear whether that is a net win for productivity in software engineering. It would be interesting if there were a robust study about it.

reply
margalabargala
4 days ago
[-]
> Are there actual studies to back that up?

I am aware of an equal number of studies about the time saved overall by use of LLMs, and time saved overall by use of syntax highlighting.

In fact, here's a study claiming syntax highlighting in IDEs does not help code comprehension: https://link.springer.com/article/10.1007/s10664-017-9579-0

Shall we therefore conclude that syntax highlighting is not useful, that developers who use syntax highlighting are just part of the IDE hype train, and that anecdotal reports of syntax highlighting being helpful are counterbalanced by anecdotal reports of $IDE having incorrect syntax highlighting on $Esoteric_file_format?

Most of the failures of LLMs with coding that I have seen has been a result of asking too much of the LLM. Writing a hundred context-aware unit tests is something that an LLM is excellent at, and would have taken a developer a long time previously. Asking an LLM to write a novel algorithm to speed up image processing of the output of your electron microscope will go less well.

reply
bawolff
4 days ago
[-]
> Shall we therefore conclude that syntax highlighting is not useful, that developers who use syntax highlighting are just part of the IDE hype train, and that anecdotal reports of syntax highlighting being helpful are counterbalanced by anecdotal reports of $IDE having incorrect syntax highlighitng on $Esoteric_file_format?

Yes. We should conclude that syntax highlighting is not useful in languages that the syntax highlighter does not support. I think basically everyone would agree with this statement.

Similarly, an LLM that worked 100% of the time and could solve any problem would be pretty useful. (Or at least one that worked correctly as often as syntax highlighting does in the situations where it is actually used.)

However, that's not the world we live in. It's a reasonable question to ask whether LLMs are yet good enough that the productivity gained outweighs the productivity lost.

reply
margalabargala
4 days ago
[-]
Your stance feels somewhat contradictory. A syntax highlighter is not useful in languages it does not support, therefore an LLM must be able to solve any problem to be useful?

The point I was trying to make was, an LLM is as reliably useful as syntax highlighting, for the tasks that coding LLMs are good at today. Which is not a lot, but enough to speed up junior devs. The issues come when people assume they can solve any problem, and try to use them on tasks to which they are not suited. Much like applying syntax highlighting on an unsupported language, this doesn't work.

Like any tool, there's a learning curve. Once someone learns what does and does not work, it's generally a strict productivity boost.

reply
NateEag
4 days ago
[-]
The problem is that there are no tasks that LLMs are reliably good at. I believe that's what OP is getting at.

I fixed a production issue earlier this year that turned out to be a naive infinite loop - it was trying to load all data from a paginated API endpoint, but there was no logic to update the page number being fetched.

There was a test for it. Alas, the test didn't actually cover this scenario.
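The shape of it was roughly this (a reconstructed sketch, not the actual code; the function name and endpoint are made up):

    import requests

    def fetch_all_items(base_url):
        """Load every record from a paginated endpoint."""
        items = []
        page = 1
        while True:
            batch = requests.get(f"{base_url}/items", params={"page": page}).json()
            if not batch:
                break
            items.extend(batch)
            # Bug: `page` is never incremented, so the same page is
            # requested forever and the loop never terminates.
        return items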

I mention this because it was committed by a co-worker whose work is historically excellent, but who started using Copilot / ChatGPT. I'm pretty sure it was an LLM-generated function and test, and they were deeply broken.

Mostly they've been working great for this co-worker.

But not reliably.

reply
margalabargala
3 days ago
[-]
I understand that, the point I'm making is that reliability is not a requirement for utility. One does not need to be reliable to be reliably useful :)

A very similar example is StackOverflow. If you copy/paste answers verbatim from SO, you will have problems. Some top answers are deeply broken or have obvious bugs. Frequently, SO answers are only related to your question, but do not explicitly answer it.

SO is useful to the industry in the same way LLMs are.

reply
bawolff
3 days ago
[-]
Sure, there is a range. If it works 100% of the time it's clearly useful. If it works 0% then it clearly isn't.

LLMs are in the middle. It's unclear which side of the line they are on. Some anecdotes say one thing, some say another. That's why studies would be great. It's also why syntax highlighting is a bad comparison, since that is not in the grey zone.

reply
bdangubic
4 days ago
[-]
Exactly. Many SWEs are currently fighting this fight of "oh it is not good enough bla bla…". On my team (50-ish people) you would not last longer than 3 months if you tried to do your work "manually" like we did before; several have tried, and they are no longer around. I believe SWEs fighting LLMs are doing themselves a huge disservice; they should be full-on embracing them and trying to figure out how to use them more effectively. Just like any other tool, it is as good as the user of the tool :)
reply
bigstrat2003
4 days ago
[-]
> Secondly, LLMs are objectively useful for coding now.

No, they're subjectively useful for coding in the view of some people. I find them useless for coding. If they were objectively useful, it would be impossible for me to find them useless because that is what objectivity means.

reply
margalabargala
4 days ago
[-]
I do believe you stopped reading my comment at that quote, because I spent the remainder of my comment making the same distinction you did...

It's useless to your coding, but useful to the industry of coding.

reply
rybosworld
4 days ago
[-]
> LLM’s never provide code that pass my sniff test

If that statement isn't coming from ego, then where is it coming from? It's provably true that LLMs can generate working code. They've been trained on billions of examples.

Developers seem to focus on the set of cases where LLMs produce code that doesn't work, and use that as evidence that these tools are "useless".

reply
oops
4 days ago
[-]
My experience so far has been: if I know what I want well enough to explain it to an LLM then it’s been easier for me to just write the code. Iterating on prompts, reading and understanding the LLM’s code, validating that it works and fixing bugs is still time consuming.

It has been interesting as a rubber duck, exploring a new topic or language, some code golf, but so far not for production code for me.

reply
qup
4 days ago
[-]
Okay, but as soon as you need to do the same thing in [programming language you don't know], then it's not easier for you to write the code anymore, even though you understand the problem domain just as well.

Now, understand that most people don't have the same grasp of [your programming language] that you have, so it's probably not easier for them to write it.

reply
oops
1 day ago
[-]
I don’t disagree with anything you said and I don’t think anything you said disagrees with my original comment :)

I actually said in my comment that exploring a new language is one area I find LLMs to be interesting.

reply
bccdee
4 days ago
[-]
> It's provably true that LLM's can generate working code

They produce mostly working code, often with odd design decisions that the bot can't fully justify.

The difficult part of coding is cultivating a mental model of the problem + solution space. When you let an LLM write code for you, your mental model falls behind. You can read the new code closely, internalize it, double-check the docs, and keep your mental model up to date (which takes longer than you think), or you can forge ahead, confident that it all more or less looks right. The second option is easier, faster, and very tempting, and it is the reason why various studies have found that code written with LLM assistance introduces more bugs than code written without.

There are plenty of innovations that have made programming a little bit faster (autocomplete, garbage collection, what-have-you), but none of them were a silver bullet. LLMs aren't either. The essential complexity of the work hasn't changed, and a chat bot can't manage it for you. In my (admittedly limited) experience with code assistants, I've found that the faster you move with an LLM, the more time you have to spend debugging afterwards, and the more difficult that process becomes.

reply
hmillison
4 days ago
[-]
there's a lot more involved in senior dev work beyond producing code that works.

if the stakeholders knew what they needed to build and how, then they could use LLMs, but translating complex requirements into code is something that these tools are not even close to cracking.

reply
rybosworld
4 days ago
[-]
> there's a lot more involved in senior dev work beyond producing code that works.

Completely agree.

What I don't agree with is statements like these:

> LLM’s never provide code that pass my sniff test

To me, these (false) absolutes about chatbot capabilities are rehashed so frequently that they derail every conversation about using LLMs for dev work. You'll find similar statements in nearly every thread about LLMs for coding tasks.

It's provably true that LLMs can produce working code. It's also true that an increasingly large portion of coding is being offloaded to LLMs.

In my opinion, developers need to grow out of this attitude that they are John Henry and they'll outpace the mechanical drilling machine. It's a tired conversation.

reply
rurp
4 days ago
[-]
> It's provably true that LLM's can produce working code.

You've restated this point several times but the reason it's not more convincing to many people is that simply producing code that works is rarely an actual goal on many projects. On larger projects it's much more about producing code that is consistent with the rest of the project, and is easily extensible, and is readable for your teammates, and is easy to debug when something goes wrong, is testable, and so on.

The code working is a necessary condition, but is insufficient to tell if it's a valuable contribution.

reply
simianparrot
4 days ago
[-]
The code working is the bare minimum. The code being right for the project and context is the basic expectation. The code being _good_ at solving its intended problem is the desired outcome, which is a combination of tradeoffs between performance, readability, ease of refactoring later, modularity, etc.

LLM's can sometimes provide the bare minimum. And then you have to refactor and massage it all the way to the good bit, but unlike looking up other people's endeavors on something like Stack Overflow, with the LLM's code I have no context why it "thought" that was a good idea. If I ask it, it may parrot something from the relevant training set, or it might be bullshitting completely. The end result? This is _more_ work for a senior dev, not less.

Hence why it has never passed my sniff test. Its code is at best the quality of code even junior developers wouldn't open a PR for yet. Or if they did they'd be asked to explain how and why and quickly learn to not open the code for review before they've properly considered the implications.

reply
munk-a
4 days ago
[-]
> It's provably true that LLM's can produce working code.

This is correct - but it's also true that LLMs can produce flawed code. To me the cost of telling whether code is correct or flawed is larger than the cost of me just writing correct code. This may be an AuDHD thing but I can better comprehend the correctness of a solution if I'm watching (and doing) the making of that solution than if I'm reading it after the fact.

reply
nrawe
4 days ago
[-]
As a developer, while I do embrace intellisense, I don't copy/paste code, because I find typing it out is a fast path to reflection and finding issues early. Copilot seems to be no better than mindlessly copy/pasting from StackOverflow.

From what I've seen of Copilots, while they can produce working code, I've not seen much that they offer beyond the surface level, which is fast enough for me to type myself. I am also deeply perturbed by some interviews I've done recently with senior candidates who are using them and, when asked to disable them for a collaborative coding task, completely fall apart because of their dependency on them in place of knowledge.

This is not to say I do not see value in AI, LLMs or ML (I very much do). However, I code broadly at the speed of thought, and that's not really something I think will be massively aided by it.

At the same time, I know I am an outlier in my practice relative to lots around me.

While I don't doubt other improvements that may come from LLM in development, the current state of the art feels less like a mechanical drill and more like an electric triangle.

reply
lukev
4 days ago
[-]
Code is a liability, not an asset. It is a necessary evil to create functional software.

Senior devs know this, and factor code down to the minimum necessary.

Junior devs and LLMs think that writing code is the point and will generate lots of it without worrying about things like leverage, levels of abstraction, future extensibility, etc.

reply
danenania
4 days ago
[-]
LLMs can be prompted to write code that considers these things.

You can write good code or bad code with LLMs.

reply
skydhash
4 days ago
[-]
The code itself, whether good or bad, is a liability. Just like a car is a liability: in a perfect world you'd teleport yourself to your destination, but instead you have to drive. And because of that, roads and gas stations have to be built, you have to take care of the car, etc. It's all a huge pain. The code you write, you will have to document, maintain, extend, refactor, relearn, and a bunch of other activities. So you do your best to only have the bare minimum to take care of. Anything else is just future trouble.
reply
danenania
4 days ago
[-]
Sure, I don’t dispute any of that. But it's not a given that using LLMs means you’re going to have unnecessary code. They can even help to reduce the amount of code. You just have to be detailed in your prompting about what you do and don’t want, and work through multiple iterations until the result is good.

Of course if you try to one shot something complex with a single line prompt, the result will be bad. This is why humans are still needed and will be for a long time imo.

reply
lukev
4 days ago
[-]
I'm not sure that's true. An LLM can code because it is trained on existing code.

Empirically, LLMs work best at coding when doing completely "routine" coding tasks: CRUD apps, React components, etc. Because there's lots of examples of that online.

I'm writing a data-driven query compiler and LLM code assistance fails hard, in both blatant and subtle ways. There just isn't enough training data.

Another argument: if an LLM could function like a senior dev, it could learn to program in new programming languages given the language's syntax, docs and API. In practice they cannot. It doesn't matter what you put into the context; LLMs just seem incapable of writing in niche languages.

Which to me says that, at least for now, their capabilities are based more on pattern identification and repetition than they are on reasoning.

reply
danenania
4 days ago
[-]
Have you tried new languages or niche languages with claude sonnet 3.5? I think if you give it docs with enough examples, it might do ok. Examples are crucial. I’ve seen it do well with CLI flags and arguments when given docs, which is a somewhat similar challenge.

That said, you’re right of course that it will do better when there’s more training data.

reply
DaiPlusPlus
4 days ago
[-]
> It's provably true that LLM's can produce working code

ChatGPT, even now in late 2024, still hallucinates standard-library types and methods more often than not whenever I ask it to generate code for me. Granted, I don't target the most popular platforms (e.g. React/Node/etc.; I'm currently in a .NET shop, which is a minority platform now), but ChatGPT's poor performance is surprising given the overall volume and quality of .NET content and documentation out there.

My perception is that "applications" work is more likely to be automated away by LLMs/copilots because so much of it is so similar to everyone else's, so I agree with those who say LLMs are only as good as the examples of something available online. Asking ChatGPT to write something for a less-trodden area, like Haskell or even a Windows driver, is frequently a complete waste of time, as whatever it generates is far beyond salvaging.

Beyond hallucinations, my other problem lies in the small context window, which means I can't simply provide all the content it needs for context. Once a project grows past hundreds of KB of significant source, I honestly don't know how we humans are meant to get LLMs to work on them. Please educate me.

I’ll declare I have no first-hand experience with GitHub Copilot and other systems because of the poor experiences I had with ChatGPT. As you’re seemingly saying that this is a solved problem now, can you please provide some details on the projects where LLMs worked well for you? (Such as which model/service, project platform/language, the kinds of prompts, etc?). If not, then I’ll remain skeptical.

reply
qup
4 days ago
[-]
> still hallucinates standard-library types and methods more-often-than-not whenever I ask it to generate code for me

Not an argument, unsolicited advice: my guess is you are asking it to do too much work at once. Make much smaller changes. Try to ask for roughly as much as you would put into one git commit (per best practices); for me that's usually editing a dozen or fewer lines of code.

> Once a project grows past hundreds of KB of significant source I honestly don’t know how us humans are meant to get LLMs to work on them. Please educate me.

https://github.com/Aider-AI/aider

Edit: The author of aider puts the percentage of the code written by LLMs for each release. It's been 70%+. But some problems are still easier to handle yourself. https://github.com/Aider-AI/aider/releases

reply
DaiPlusPlus
3 days ago
[-]
Thank you for your response - I've asked these questions before in other contexts but never had a reply, so pretty much any online discussion about LLMs feels like I'm surrounded by people role-playing being on LinkedIn.
reply
JTyQZSnP3cQGa8B
3 days ago
[-]
> It's provably true that LLM's can produce working code

Then why can't I see this magical code that is produced? I mean a real big application with a purpose and multiple dependencies, not yet another ReactJS todo list. I've seen comments like that a hundred times already but not one repository that could be equivalent to what I currently do.

For me, the LLM experience has been a bad tool that calls functions which are obsolete or don't exist at all; not very earth-shattering.

reply
pooper
4 days ago
[-]
> if the stakeholders knew how to do what they needed to build and how, then they could use LLMs, but translating complex requirements into code is something that these tools are not even close to cracking.

They don't have to replace you to reduce headcount. They could increase your workload so that where they needed five senior developers, they can do with maybe three. That's six of one, half a dozen of the other, because either way two developers lost a job, right?

reply
n4r9
4 days ago
[-]
Yeah. Code that works is a fraction of the aim. You also want code that a good junior can read and debug in the midst of a production issue, is robust against new or updated requirements, has at least as good performance as the competitors, and uses appropriate libraries in a sparse manner. You also need to be able to state when a requirement would loosen the conceptual cohesion of the code, and to push back on requirements that can already be achieved in just as easy a way.
reply
ben_w
4 days ago
[-]
> It's provably true that LLM's can generate working code.

What I've seen of them, the good ones mostly produce OK code. Not terrible, usually works.

I like them even at that low-ish bar, and I find them to be both a time-saver and a personal motivation assistant, but they're still a thing that needs a real domain expert to spot the mistakes they make.

> Developers seem to focus on the set of cases that LLM's produce code that doesn't work, and use that as evidence that these tools are "useless".

I do find it amusing how many humans turned out to be stuck thinking in boolean terms, dismissing the I in AGI, calling them "useless" because they "can't take my job". Same with the G in AGI, dismissing the breadth of something that speaks 50 languages when humans who speak five or six languages are considered unusually skilled.

reply
j-krieger
4 days ago
[-]
> If that statement isn't coming from ego, then where is it coming from? It's provably true that LLM's can generate working code. They've been trained on billions of examples.

I am pro AI and I'm probably even overvaluing the value AI brings. However, for me, this doesn't work in more "esoteric" programming languages or those with stricter rulesets like Rust. LLMs provide fine JS code, since there's no compiler to satisfy, but C++ without undefined behaviour or Rust code that compiles is rare.

There's also no chance of LLMs providing compiling code if you're using a library version with a newer API than the one in the training set.

reply
mplanchard
4 days ago
[-]
Working code some subset of the time, for popular languages. It’s not good at Rust nor at other smaller languages. Tried to ask it for some help with Janet and it was hopelessly wrong, even with prompting to try to get it to correct its mistakes.

Even if it did work, working code is barely half the battle.

reply
BoxOfRain
4 days ago
[-]
I'd say ChatGPT at least is a fair bit better at Python than it is at Scala which seems to match your experience.
reply
IshKebab
4 days ago
[-]
> It's provably true that LLM's can generate working code.

Yeah for simple examples, especially in web dev. As soon as you step outside those bounds they make mistakes all the time.

As I said, they're still useful, because roughly correct but buggy code is often quite helpful when you're programming. But there's zero chance you can just say "write me a driver for the nRF905 using Embassy and embedded-hal" and get something working. Whereas I, a human, can do that.

reply
fragmede
4 days ago
[-]
The question is, how long would it take you to get as far as this chat does when starting from scratch?

https://chatgpt.com/share/6760c3b3-bae8-8009-8744-c25d5602bf...

reply
bccdee
4 days ago
[-]
How confident are you in the correctness of that code?

Because, one way or another, you're still going to need to become fluent enough in the problem domain & the API that you can fully review the implementation and make sure chatgpt hasn't hallucinated in any weird problems. And chatgpt can't really explain its own work, so if anything seems funny, you're going to have to sleuth it out yourself.

And at that point, it's just 339 lines of code, including imports, comments, and misc formatting. How much time have you really saved?

reply
IshKebab
4 days ago
[-]
Yeah now actually check the nRF905 documentation. You'll find it has basically made everything up.
reply
JKCalhoun
4 days ago
[-]
I imagine though they might replace 3 out of 4 senior programmers (keep one around to sanity check the AI).
reply
munk-a
4 days ago
[-]
That's the same figuring a lot of business folks had when considering off-shoring in the early 2000s - those companies ended up hiring twice as many senior programmers to sanity check and correct the code they got back. The same story can be heard from companies that fired their expensive seniors to hire twice as many juniors at a quarter the price.

I think that software development is just an extremely poor market segment for these kinds of tools - we've already got mountains of productivity tools that minimize how much time we need to spend doing the silly rote programming stuff - most of software development is problem solving.

reply
mrbungie
4 days ago
[-]
Oof, the times I've heard something like that with X tech.
reply
tonyarkles
4 days ago
[-]
Heh, UML is going to save us! The business people can just write the requirements and the code will write itself! /s

Given the growth-oriented capitalist society we live in in the west, I'm not all that worried about senior and super-senior engineers being fired. I think a much more likely outcome is that if a business does figure out a good way to turn an LLM into a force-multiplier for senior engineers, they're going to use that to grow faster.

There is a large untapped niche too that this could potentially unlock: projects that aren't currently economically viable due to the current cost of development. I've done a few of these on a volunteer basis for non-profits but can't do it all the time due to time/financial constraints. If LLM tech actually makes me 5x more productive on simple stuff (most of these projects are simple) then it could get viable to start knocking those out more often.

reply
deathanatos
4 days ago
[-]
> This is ego speaking.

No, it really isn't. Repeatedly, the case is that people are trying to pass off GPT's work as good without actually verifying the output. I keep seeing "look at this wonderful script GPT made for me to do X", and it does not pass code review, and is generally extremely low quality.

In one example, a bash script was generated to count the number of SLoC changed by author; it was extremely convoluted, and after I simplified it, I noticed that the output of the simplified version differed, because the original was omitting changes that were only a single line.

In another example it took several back & forths during a review to ask "where are you getting this code? / why do you think this code works, when nothing in the docs supports that?" and after several back and forths, it was admitted that GPT wrote it. The dev who wrote it would have been far better served RTFM, than a several cycle long review that ended up with most of GPT's hallucinations being stripped from the PR.

Those who think LLM's output is good have not reviewed the output strenuously enough.

> Is there some expectation that these things won't improve?

Because randomized token generation inherently lacks actual reasoning about the behavior of the code. My code generator does not.

reply
AlphaSite
4 days ago
[-]
I think fundamentally if all you do is glue together popular OSS libraries in well understood way, then yes. You may be replaced. But really you probably could be replaced by a Wordpress plugin at that point.

The moment you have some weird library that 4 people in the world know (which happens more than you’d expect) or hell even something without a lot of OSS code what exactly is an LLM going to do? How is it supposed to predict code that’s not derived from its training set?

My experience thus far is that it starts hallucinating and it’s not really gotten any better at it.

I’ll continue using it to generate sed and awk commands, but I’ve yet to find a way to make my life easier with the “hard bits” I want help with.

reply
deathanatos
4 days ago
[-]
> I’ll continue using it to generate sed and awk commands,

The first example I gave was an example of someone using an LLM to generate sed & awk commands, on which it failed spectacularly, on everything from the basics to higher-level stuff. The emitted code even included awk, and the awk was poor quality: e.g., it had to store the git log output & make several passes over it with awk, when in reality, you could just `git log | awk`; it was doing `... | grep | awk` which … if you know awk, really isn't required. The regex it was using to work with the git log output it was parsing with awk was wrong, resulting in the wrong output. Even trivial "sane bash"-isms, it messed up: didn't quote variables that needed to be quoted, didn't take advantage of bashisms even though requiring bash in the shebang, etc.
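
For what it's worth, the whole task fits in a single pass. A minimal sketch (not the script under review; "changed" here means additions plus deletions as reported by --numstat, and binary files, shown as "-", are skipped):

  git log --numstat | awk '
    /^Author: /  { author = substr($0, 9) }       # e.g. "Jane Doe <jane@example.com>"
    /^[0-9]/     { changed[author] += $1 + $2 }   # numstat lines: added, deleted, path
    END          { for (a in changed) printf "%8d  %s\n", changed[a], a }
  ' | sort -rn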

The task was a simple one, bordering on trivial, and any way you cut it, from "was the code correct?" to "was the code high quality?", it failed.

But it shouldn't be terribly surprising that an LLM would fail at writing decent bash: its input corpus would resemble bash found on the Internet, and IME, most bash out there fails to follow best-practice; the skill level of the authors probably follows a Pareto distribution due to the time & effort required to learn anything. GIGO, but with way more steps involved.

I've other examples, such as involving Kubernetes: Kubernetes is also not in the category of "4 people in the world know": "how do I get the replica number from a pod in a statefulset?" (i.e., the -0, -1, etc., at the end of the pod name) — I was told to query,

  .metadata.labels.replicaset-序号
(It's just nonsense; not only does no such label exist for what I want, it certainly doesn't exist with a Chinese name. AFAICT, that label name did not appear on the Internet at the time the LLM generated it, although it does, of course, now.) Again, simple task, wide amount of documentation & examples in the training set, and garbage output.
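
For comparison, a minimal sketch of the sane answer (the pod name here is hypothetical): the replica number of a StatefulSet pod is simply the numeric suffix of its name, so no label lookup is needed.

  POD_NAME="web-2"                  # e.g. from $HOSTNAME or the Downward API (metadata.name)
  ORDINAL="${POD_NAME##*-}"         # strip everything up to the last '-'
  echo "replica index: ${ORDINAL}"  # -> replica index: 2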
reply
valval
4 days ago
[-]
You ask what the LLM is going to do. It’s going to swallow the entire code base in context and allow any developer to join those 4 people in generating production grade code.
reply
paulcole
4 days ago
[-]
> No, it really isn't

It really is. Either that or you’re not thinking about what you’re saying.

Imagine code passes your rigorous review.

How do you know that it wasn’t from an LLM?

If it’s because you know that you only let good code pass your review and you know that LLMs only generate bad code, think about that a bit.

reply
deathanatos
4 days ago
[-]
> Imagine code passes your rigorous review. How do you know that it wasn’t from an LLM?

That's not what I'm saying (and it's a strawman; yes, presumably some LLM code would escape review and I wouldn't know it's from an LLM, though I find that unlikely, given…) — what I'm saying is: of the LLM-generated code that is reviewed, what is its quality & correctness? And it's resoundingly (easily >90%) crap.

Obviously we can't sample from unknown-authorship … nor am I; I'm sampling problems that I and others run through an LLM, and the output thereof.

The other facet of this point is that I believe a lot of the craze among users of LLMs is driven by them not looking closely at the output; if you're just deriving code from the LLM, chucking it over the wall, and calling it a day (as was the case in one of the examples in the comment above) — you're perceiving the LLM as being useful, when in fact it is leaving bugs that you're either not attributing to it, someone else is cleaning up (again, that was the case in the above example), etc.

reply
paulcole
4 days ago
[-]
> what I'm saying is of LLM generated code that is reviewed, what is the quality & correctness of the reviewed code? And it's resoundingly (easily >90%) crap.

What makes you so sure that none of the resoundingly non-crap code that you have reviewed was produced by an LLM?

It’s like saying you only like homemade cookies not ones from the store. But you may be gleefully chowing down on cookies that you believe are homemade because you like them (so they must be homemade) without knowing they actually came from the store.

reply
whtsthmttrmn
3 days ago
[-]
> What makes you so sure that none of the resoundingly non-crap code that you have reviewed was produced by an LLM?

From the post you're replying to:

> Obviously we can't sample from unknown-authorship … nor am I; I'm sampling problems that I and others run through an LLM, and the output thereof.

reply
paulcole
3 days ago
[-]
Yes, believe it or not I’m able to read.

> I'm sampling problems that I and others run through an LLM

This is not what’s happening unless 100% of the problems they’ve sampled (even outside of this fun little exercise) have been run through an LLM.

They’re pretending like it doesn’t matter that they’re looking at untold numbers of other problems and are not aware whether those are LLM generated or not.

reply
whtsthmttrmn
3 days ago
[-]
It sounds like that's what's happening. The LLM code that they have reviewed has been, to their standards, subpar.
reply
paulcole
3 days ago
[-]
> The LLM code that they have reviewed has been, to their standards, subpar.

No. The accurate way to say this is:

“The code that they have reviewed that they know came from an LLM has been, to their standards, subpar.”

reply
simianparrot
4 days ago
[-]
At my job I review a lot of code, and I write code as well. The only type of developer an LLM’s output comes close to is a fresh junior usually straight out of university in their first real development job, with little practical experience in a professional code-shipping landscape. And the majority of those juniors improve drastically within a few weeks or months, with handholding only at the very start and then less and less guidance. This is because I teach them to reason about their choices and approaches, to question assumptions, and thus they learn quickly that programming rarely has one solution to a problem, and that the context matters so much in determining the way forward.

A human junior developer can learn from this tutoring and rarely regresses over time. But the LLMs, by design, cannot and do not rewire their understanding of the problem space over time, nor do they remember examples and lessons from previous iterations to build upon. I have to handhold them forever, and they never learn.

Even when they use significant parts of the existing codebase as their context window they’re still blind to the whole reality and history of the code.

Now just to be clear, I do use LLM’s at my job. Just not to code. I use them to parse documents and assist users with otherwise repetitive manual tasks. I use their strength as language models to convert visual tokens parsed by an OCR to grasp the sentence structure and convert that into text segments which can be used more readily by users. At that they are incredible, even something smaller like llama 7b.

reply
Derbasti
4 days ago
[-]
That's a good observation. How would you teach an LLM the processes and conduct in your company? I suppose you'd need to replace code reviews and tutoring with prompt engineering. And hope that the next software update to the LLM won't invalidate your prompts.
reply
sigmarule
4 days ago
[-]
It's not very different from documentation, except it's not used for learning but rather for immediate application, i.e. it's something you must include with each prompt/interaction (and you likely need a way to determine which bits are relevant, to reduce token count, depending on the size of the main prompt). In fact, this is probably one of the more important aspects of adapting LLMs for real-world use within real team/development settings, and one that people don't do. If you provide clear and comprehensive descriptions of codebase patterns, pitfalls, practices, etc., the latest models are good at adhering to them.

It sounds difficult and open-ended, as this requires content beyond the scope of typical (or even sensible) internal documentation, but much of this content is captured in internal docs, discussions, tickets, PR comments, and git commit histories, and guess what's pretty great at extracting high-level insights from these types of inputs?
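
As a rough sketch of what "include it with each prompt" can mean in practice (everything here is hypothetical: `llm` stands in for whatever CLI or API wrapper your team uses, and the paths and $FEATURE variable are made up):

  {
    echo "Follow these team conventions strictly:"
    cat docs/conventions.md
    echo "Relevant background from ADRs and past discussions:"
    grep -rl "$FEATURE" docs/adr | xargs cat      # crude keyword-based relevance filter
    echo "Task:"
    cat task.md
  } | llm > suggestion.md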
reply
surgical_fire
4 days ago
[-]
> I am constantly surprised how prevalent this attitude is. ChatGPT was only just released in 2022. Is there some expectation that these things won't improve?

Is there any expectation that things will? Is there more untapped, great-quality data that LLMs can ingest? Will a larger model perform meaningfully better? Will it solve the pervasive issue of generating plausible-sounding bullshit?

I used LLMs for a while, I found them largely useless for my job. They were helpful for things I don't really need help with, and they were mostly damaging for things I actually needed.

> This is ego speaking.

Or maybe it was an accurate assessment for his use case, and your wishful thinking makes you think it was his ego speaking.

reply
rybosworld
4 days ago
[-]
> Is there any expectations that things will?

Seems like an odd question. The answer is obviously yes: there is a very pervasive expectation that LLMs will continue to improve, and it seems odd to suggest otherwise. There are hundreds of billions of dollars being spent on AI training, and that number is increasing each year.

> Is there more untapped great quality data that LLMs can ingest?

Why wouldn't there be? AI's are currently trained on the internet but that's obviously not the only source of data.

> Will a larger model perform meaningfully better?

The answer to this is also yes. It is well established that, all else being equal, a bigger model is better than a smaller model, assuming that the smaller model hasn't already captured all of the available information.

reply
tobias3
4 days ago
[-]
We recently had a few submissions about this topic, most recently Ilya's talk. Further improvement will be a research-type problem. This trend has been clear for a while already, but is reaching the mainstream now. The billions of dollars being spent go into scaling existing technology. If it doesn't scale anymore and becomes a research problem again, rational companies will not continue to invest in this area (at least without the usual research arrangements).
reply
globnomulous
4 days ago
[-]
> There is a very pervasive expectation that LLM's will continue to improve, and it seems odd to suggest otherwise. There is hundreds of billions of dollars being spent on AI training and that number is increasing each year.

It isn't odd at all. In the early 21st century there were expectations of ever-increasing, exponential growth in processing power. This misconception partially gave us the game Crysis, which, if I'm not mistaken, was written with wildly optimistic assumptions about the growing power of computer hardware.

People are horrible at predicting the future of technology (beyond meaningless or trivially broad generalizations), and even when predictions turn out to be correct, they're often correct for the wrong reasons. If we were better at it, even in the shortest term, where such predictions should be the easiest, we'd all be megamillionaires, because we'd have seen the writing on the wall and invested in Nvidia before the AI craze reached its current fever pitch.

reply
throwing_away
4 days ago
[-]
> because we'd have seen the writing on the wall and invested in Nvidia before the AI craze reached its current fever pitch.

I did this when I saw the first stable diffusion AI titties. So far up over 10x.

If I had a nickel for every tech that takes off once porn finds its way to it, I wouldn't be counting in nickels.

reply
esafak
4 days ago
[-]
What's the logic behind that? Porn has had nothing to do with the current AI boom.
reply
barrell
4 days ago
[-]
If all of this is true, then I would expect LLMs today to be in a whole different league of LLMs from two years ago. Personally I find them less helpful and worse at programming and creative writing related tasks.

YMMV but count me in the camp that I think there’s better odds that LLMs are at or near their potential vs in their nascent stages.

reply
paulcole
4 days ago
[-]
It’s not worth arguing with anyone who thinks LLMs are a fad. I just tell them that they’re right and get back to using LLMs myself. I’ll take the edge while I can.
reply
surgical_fire
4 days ago
[-]
> The answer is obviously yes: There is a very pervasive expectation that LLM's will continue to improve, and it seems odd to suggest otherwise. There is hundreds of billions of dollars being spent on AI training and that number is increasing each year.

That makes the assumption that throwing dollars at AI training is a surefire way to solve the many shortcomings of LLMs. It is a very optimistic assumption.

> Why wouldn't there be? AI's are currently trained on the internet but that's obviously not the only source of data.

"The Internet" basically encompasses all meaningful sources of data available, especially if we are talking specifically about software development. But even beyond that, it is very unclear what other high quality data it would consume that would improve the things.

> The answer to this, is also yes. It is well established that, all else being equal, a bigger model is better than a smaller model, assuming that the smaller model hasn't already captured all of the available information.

I love how you conveniently sidestepped the part where I asked whether it would improve the pervasive issue of generating plausible-sounding bullshit.

The assumption that generative AI will improve is as valid as the assumption that it will plateau. It is quite possible that what we are seeing is "as good as it gets", and that some major breakthrough, which may or may not happen in our lifetime, is needed.

reply
rybosworld
4 days ago
[-]
> That makes an assumption that throwing dollars on AI training is a surefire way to solve the many shortcomings of LLMs. It is a very optimistic assumption.

That's not an assumption that I am personally making. That's what experts in the field believe.

> "The Internet" basically encompasses all meaningful sources of data available, especially if we are talking specifically about software development. But even beyond that, it is very unclear what other high quality data it would consume that would improve the things.

How about, interacting with the world?

> I love how you conveniently sidestepped the part where I ask if it would improve the pervasive issue of generating plausibly sounding bullshit.

I was not trying to "conveniently sidestep". To me, that reads like a more emotional wording of the first question you asked, which is whether LLMs are expected to improve. To that question I answered yes.

> The assumption that generative AI will improve is as valid as the assumption that it will plateau.

It is certainly not as valid to say that generative AI will plateau. This is comparable to saying that the odds of winning any bet are 50/50, because you either win or lose. Probabilities are a thing. And the probability that the trend will plateau is lower than not.

> It is quite possible that what we are seeing is "as good as it gets", and some major breakthrough, that may or may not happen on our lifetime, is needed.

It's also possible that dolphins are sentient aliens sent here to watch over us.

reply
surgical_fire
4 days ago
[-]
> That's not an assumption that I am personally making. That's what experts in the field believe.

People invested in something believe that throwing money at it will make it better? Color me shocked.

"eat meat, says the butcher"

The rest of your answer amounts to optimistic assumptions that yes, the AI future is rosy, based on nothing but a desire that it will be, because of course it will.

reply
nicoburns
4 days ago
[-]
> > LLM’s never provide code that pass my sniff test

> This is ego speaking.

That's been my experience of LLM-generated code that people have submitted to open source projects I work on. It's all been crap. Some of it didn't even compile. Some of it changed comments that were previously correct to say something untrue. I've yet to see a single PR that implemented something useful.

reply
mh-
4 days ago
[-]
Isn't this a kind of survivor bias? You wouldn't know if you approved (undisclosed) LLM-generated code that was good..
reply
talldayo
4 days ago
[-]
Good LLM-generated code comes from good developers that can provide detailed requests, filter through wrong/misleading/cruft results, and test a final working copy. If your personal bar for PRs is high enough then it won't matter if you use AI or not because you won't contribute shitty results.

I think the problem has become people with no bar for quality submitting PRs that ruin a repo, and then getting mad when it's not merged because ChatGPT reviewed the patch already.

reply
constantcrying
4 days ago
[-]
Terrible software engineers sent you terrible code. A good software engineer will obviously send you good-quality code; whether they used AI for that is impossible to tell on your end.
reply
cies
4 days ago
[-]
> LLM-generated code that people have submitted to open source projects I work on

Are you sure it was people? Maybe the AI learned how to make PRs, or is learning how to do so by using your project as a test bed.

reply
jcranmer
4 days ago
[-]
This is at least the third time in my life that we've seen a loudly heralded, purported end-of-programming technology. The previous two times both ended up being damp squibs that barely merit footnotes in the history of computing.

Why do we expect that LLMs are going to buck this trend? It's not for accuracy--the previous attempts, when demonstrating their proofs-of-concept, actually reliably worked, whereas with "modern LLMs", virtually every demonstration manages to include "well, okay, the output has a bug here."

reply
simianparrot
4 days ago
[-]
I do seem to vaguely remember a time when there was a fair amount of noise proclaiming "visual programming is making dedicated programmers obsolete." I think the implication was that now everybody's boss could just make the software themselves or something.

LLMs as a product feel practically similar, because _even if_ they could write code that worked in large enough quantities to constitute any decently complex application, the person telling them what problem to solve has to understand the problem space, since the LLMs can't reason.

Given that neither of those things are true, it's not much different from visual programming tools, practically speaking.

reply
nickd2001
4 days ago
[-]
This is a feeling I have too. However, compared to visual programming it's perhaps harder to dismiss? With visual programming it's pretty obvious you can't effectively (or even at all) step through a UML diagram with a debugger to find a problem, and the code that gets generated from diagrams is obviously cr*p, so the illusion doesn't last long.

Whereas AI-generated code kind of looks OK, you can indeed debug it, and it's not necessarily more complex or worse than the hideously over-complex systems human teams create. Especially if the human team is mismanaged: e.g. the original devs left, some got burnt out and became unproductive, others had to cut corners and take on tech debt to hit unrealistic deadlines, other bits got outsourced to another country where the devs themselves are great but perhaps there's a language barrier, or simply geographical distance means requirements and domain understanding got lost somewhere in the mix.

So I suppose I sit on the fence: AI-generated code may be terrible, but is it worse than what we were making anyway? ;) In the future there are probably going to be companies run by "unwise people" that generate massive amounts of their codebase, then find themselves in a hole when no one working at the company understands the code at all (whereas in the past perhaps they could hire back a dev they laid off, on large contractor rates of pay, to save the day). It seems inevitable that one day the news will be of some high-profile company failure and/or tanking of stock caused by a company basically not knowing what it was doing due to AI-generated code.
reply
tptacek
4 days ago
[-]
I wonder how programmers of the time felt when spreadsheets got popular.
reply
NikkiA
2 hours ago
[-]
We used spreadsheets too, and found them useful, but not very threatening
reply
tobyhinloopen
4 days ago
[-]
LLMs are great for simple, common, boring things.

As soon as you have something less common, it will give you wildly incorrect garbage that does not make any sense. Even worse, it APPEARS correct, but it won't work or will do something else completely.

And I use 4o and o1 every day. Mostly for boilerplate and boring stuff.

I have colleagues that submit ChatGPT generated code and I’ll immediately recognize it because it is just very bad. The colleague would have tested it, so the code does work, but it is always bad, weird or otherwise unusual. Functional, but not nice.

ChatGPT can give very complicated solutions to things that can be solved with a one-liner.

reply
groby_b
4 days ago
[-]
> Is there some expectation that these things won't improve?

Sure. But the expectation is quantitative improvement - qualitative improvement has not happened, and is unlikely to happen without major research breakthroughs.

LLMs are useful. They still need a lot of supervision & hand holding, and they'll continue to for a long while

And no, it's not "ego speaking". It's long experience. There is fundamentally no reason to believe LLMs will take a leap to "works reliably in subtle circumstances, and will elicit requirements as necessary". (Sure, if you think SWE work is typing keys and making some code, any code, appear, then LLMs are a threat.)

reply
deegles
4 days ago
[-]
LLMs as they currently exist will never yield a true, actually-sentient AI. Maybe they will get better in some ways, but it's like asking if a bird will ever fly to the moon. Something else is needed.
reply
mlboss
4 days ago
[-]
A bird can fly to the moon if it keeps on improving every month.
reply
VeejayRampay
4 days ago
[-]
It literally cannot, though, unless it becomes some other form of life that doesn't need oxygen. That's the whole thing with this analogy; it's ironically well suited to the discourse.
reply
simianparrot
4 days ago
[-]
It would need to:

* Survive without oxygen
* Propel itself without using wings
* Store enough energy for the long trip and the upkeep of the below
* Have insulation that can handle the radiation and temperature fluctuations
* Have a form of skin that can withstand the pressure change from earth to space and then the moon
* Have eyes that don't pop while in space

It would need to become something literally extraterrestrial that has not evolved in the 3.7b+ years prior.

I wouldn't say it's impossible, but if evolution ever got there that creature would be so far removed from a bird that I don't think we'd recognize it :p

reply
lloeki
4 days ago
[-]
Ha I didn't even count oxygen in the mix; it could hold its breath maybe? j/k, what's for sure is that I don't see how a biological entity could ever achieve escape velocity from self-powered flight, because, well, physics.

That, or they "keep improving every month" til they become evolved enough to build rockets, at which point the entire point of them being birds becomes moot.

reply
cies
4 days ago
[-]
> ChatGPT was only just released in 2022.

Bitcoin was released in what year? I still cannot use it for payments.

No-code solutions have existed since when? And still programmers work...

I don't think all hyped tech is a fad. For instance: we use SaaS now instead of installing software locally. This transition took the world by storm.

But tech that needs lots of ads, lots of zealots, and incredible promises: that is usually a fad.

reply
gabriel-uribe
4 days ago
[-]
Cryptocurrency more broadly can be used to send vast amounts of money, anywhere in the world for near-zero fees regardless of sovereignty. And Bitcoin is still faster and cheaper than shipping gold around.
reply
bigstrat2003
4 days ago
[-]
> For instance: we use SaaS now instead of installing software locally.

If anything, I feel like this argument works against you. If people are willing to replace locally installed software with shitty web "apps" that barely compare, why do you think they won't be willing to replace good programmers with LLMs doing a bad job simply because it's trendy?

reply
achierius
4 days ago
[-]
SaaS has generally been better than the local apps it replaced, particularly when you factor in 'portability'.

I love local apps but it's undeniable that developers having to split their attention between platforms lowered quality by a lot. You're probably remembering the exemplars, not the bulk which half-worked and looked bad too

reply
cies
4 days ago
[-]
Just look at the job market: how many programmers work in SaaS now compared to 2000?
reply
sitzkrieg
4 days ago
[-]
Ego? LLMs goof on basic math and can't even generate code for many non-public things. They're not useful to me whatsoever.
reply
vitorsr
4 days ago
[-]
This... for my most important use case (applied numerical algorithms) it is in fact worse than not useful, it is negative value - even for the code of widely available methods.

Sure, I can ask it to write (wrong) boilerplate, but that is hardly where the work ends. It is up to me to spend the time doing careful due diligence at each and every step. I could ask it to patch each mistake but, again, that relies on a trained, skillful, often formally educated domain expert on the other end puppeteering the generative copywriter.

For the many cases where computer programming is similar to writing boilerplate, it could indeed be quite useful, but I find the long tail of domain expertise will always be outside the reach of data-driven statistical learners.

reply
agilob
4 days ago
[-]
LLMs aren't supposed to do basic math, but be chat agents. Wolfram Alpha can't do chat.
reply
simianparrot
4 days ago
[-]
Math is a major part of programming. In fact programming without math is impossible. And if you go all the way down to bare metal it’s all math. We are shifting bits through incredibly complex abstractions.
reply
agilob
4 days ago
[-]
No, math is a major part of writing good code, but when was the last time you saw somebody put effort into writing an O(n) algorithm? 99% of programming is "import sort from sort; sort.sortThisReallyQuick". Programming is mostly writing code that just compiles and eventually gives correct results (and has bugs). You can do a lot of programming just by copy-pasting results from Stack Overflow.

https://en.wikipedia.org/wiki/Npm_left-pad_incident

https://old.reddit.com/r/web_design/comments/35prfv/designer...

https://www.youtube.com/watch?v=GC-0tCy4P1U

reply
simianparrot
4 days ago
[-]
In any real-world application you'll sooner or later run into optimization challenges where, if you don't understand the fundamentals, googling "fastly do the thing" won't help you ;)

Much like asking an LLM to solve a problem for you.

reply
constantcrying
4 days ago
[-]
Most optimization you do as a software engineer is about figuring out what you do not need to do, no math involved.

You aren't going around applying novel techniques of numerical linear algebra.

reply
digging
4 days ago
[-]
No, math can describe programming, but that's also true of everything. You wouldn't say "Playing basketball without math is impossible" even though it's technically true because you don't need to know or use math to play basketball. You also can do a ton of programming without knowing any math, although you will more than likely need to learn arithmetic to write enterprise code.
reply
constantcrying
4 days ago
[-]
Has never been true in my experience, almost all programming can be done without any mathematics. Of course very specific things do require mathematics, but they are uncommon and often solved by specialists.

Btw. I come from a math background and later went into programming.

reply
akira2501
4 days ago
[-]
> Is there some expectation that these things won't improve?

Yes. The current technology is at a dead end. The costs for training and for scaling the network are not sustainable. This has been obvious since 2022 and is related to the way in which OpenAI created their product. There is no path described for moving from the current dead end technology to anything that could remotely be described as "AGI."

> This is ego speaking.

This is ignorance manifest.

reply
code_for_monkey
4 days ago
[-]
I agree with you tbh, and it also just misses something huge that doesn't get brought up: it's not about your sniff test, it's about your boss's sniff test. Are you making 300k a year? That's 300 thousand reasons to replace you for a short-term boost in profit, and companies love doing that.
reply
simianparrot
4 days ago
[-]
I'm in a leadership role and one of the primary parties responsible for hiring. So code passing my sniff test is kind of important.
reply
fragmede
4 days ago
[-]
To bring it back to the question that spawned the ask: One thing to future proof a career as a SWE is to get into business and leadership/management as well as learning how to program. If you're indispensable to the business for other reasons, they'll keep you around.
reply
code_for_monkey
4 days ago
[-]
In that case the conversation is still kind of missing the forest for the trees: yes, your career will be safer if you get into management, but that's not going to work for everyone. Mathematically, it can't; not every engineer can go be a manager. What I'm interested in is the stories of the people who aren't going to make it.
reply
gosub100
4 days ago
[-]
The reason it's so good at "rewrite this C program in Python" is because it was trained on a huge corpus of code at GitHub. There is no such corpus of examples of more abstract commands, thus a limited amount by which it can improve.
reply
goalonetwo
4 days ago
[-]
100% and even if they might not "replace" a senior or staff SWE, they can make their job significantly easier which means that instead of requiring 5 Senior or Staff you will only need two.

LLMs WILL change the job market dynamics in the coming years. Engineers have been vastly overpaid over the last 10 years. There is no reason not to expect a reversion to the mean here. Getting a 500k offer from a FAANG because you studied leetcode for a couple of weeks is not going to fly anymore.

reply
danenania
4 days ago
[-]
“which means that instead of requiring 5 Senior or Staff you will only need two”

What company ever feels they have “enough” engineers? There’s always a massive backlog of work to do. So unless you are just maintaining some frozen legacy system, why would you cut your team size down rather than doubling or tripling your output for the same cost? Especially considering all your competitors have a similar productivity boost available. If your reaction is to cut the team rather than make your product better at a much faster rate (or build more products), you will likely be outcompeted by others willing to make that investment.

reply
quantadev
4 days ago
[-]
Ever since ChatGPT 3.5, AI coding has been astoundingly good. Like you, I'm totally baffled by developers who say it's not. And I'm not just unskilled and unable to judge; I have 35 years of experience! I've been shocked and impressed from day one by how good AI is. I've even seen GitHub Copilot do things that almost resemble mind-reading insofar as predicting what I'm about to type next. It predicts some things it should NOT be able to predict unless it's seeing into the future or into parallel universes or something! And I'm only half joking when I speculate that!
reply
whtsthmttrmn
3 days ago
[-]
This just shows that time spent is not the same as wisdom.
reply
quantadev
3 days ago
[-]
Even ChatGPT 3.5 had "wisdom", so you're definitely wrong.
reply
jazz9k
4 days ago
[-]
Have you ever used LLMs to generate code? It's not good enough yet.

In addition to this, most companies aren't willing to give away all of their proprietary IP and knowledge through 3rd-party servers.

It will be a while before engineering jobs are at risk.

reply
palata
4 days ago
[-]
> Is there some expectation that these things won't improve?

Most of the noise I hear about LLMs is about expectations that things will improve. It's always like that: people extrapolate and get money from VCs for that.

The thing is: nobody can know. So when someone extrapolates and says it's likely to happen as they predict, they shouldn't be surprised to get answers that say "I don't have the same beliefs".

reply
globnomulous
4 days ago
[-]
> This is ego speaking.

It absolutely isn't. I have yet to find an area where LLM-generated code solves the kinds of problems I work on more reliably, effectively, or efficiently than I do.

I'm also not interested in spending my mental energy on code reviews for an uncomprehending token-prediction golem, let alone finding or fixing bugs in the code it blindly generates. That's a waste of my time and a special kind of personal hell.

reply
epolanski
4 days ago
[-]
Comments like these make me wonder whether we live in the same worlds.

I'm a Cursor user, for example, and its Tab completion is by far the most powerful autocomplete I've ever used.

There are scenarios where you can do some major refactors by simply asking (extract this table to its own component while using best React practices to avoid double renders) and it does so instantly. Refactor the tests for it? Again, semi-instant. Meanwhile the monocle-wielding "senior" is proudly copy-pasting, creating files and fixing indentation as I move on to the next task.

I don't expect LLMs to do the hard work, but to speed me up.

And people ignoring LLMs are simply slower and less productive.

It's a speed multiplier right now, not a substitute.

If you complain that you don't like the code, you don't understand the tools and can't use them, end of story.

reply
mplanchard
4 days ago
[-]
Sounds like you’re working in react. The models are orders of magnitude worse at less common frameworks and/or languages. React is so boilerplate-heavy that it’s almost a perfect match for LLMs. That said, every large react codebase I’ve worked in inevitably became really complex and had a ton of little quirks and tweaks to deal with unexpected re-renders and such. I avoid the FE when I can these days, but I’d be fairly surprised if LLMs generated anything other than typical performance-footgun-heavy react
reply
phist_mcgee
3 days ago
[-]
Not in my experience, claude seems great at writing idiomatic and performant react. You can even highlight code and ask it to spot performance issues and it will tell you where you can improve rendering efficiency etc.
reply
mplanchard
2 days ago
[-]
That's interesting. I haven't tried it, and my day job moved on from React some time ago. Most of what I do these days is Rust, which all the models I've tried are pretty miserable at.
reply
talldayo
4 days ago
[-]
> This is ego speaking.

I suspect you haven't seen code review at a 500+ seat company.

reply
tomcar288
3 days ago
[-]
When people's entire livelihoods are threatened, you're going to see some defensive reactions.
reply
JeremyNT
4 days ago
[-]
Yeah I feel like this is just the beginning.

I'm in my 40s with a pretty interesting background and I feel like maybe I'll make it to retirement. There are still mainframe programmers after all. Maintaining legacy stuff will still have a place.

But I think we'll be the last generation where programmers will be this prevalent, and the job market will severely contract. No-code/low-code solutions backed by LLMs are going to eat away at a lot of what we do, and for traditional programming the tooling we use is going to improve rapidly and greatly reduce the number of developers needed.

reply
shakezooola
4 days ago
[-]
>This is ego speaking.

Very much so. These things are moving so quickly and agentic systems are already writing complete codebases. Give it a few years. No matter how 1337 you think you are, they are very likely to surpass you in 5-10 years.

reply
achierius
4 days ago
[-]
> agentic systems are already writing complete codebases

Examples?

reply
eschaton
4 days ago
[-]
They shouldn’t be expected to improve in accuracy because of what they are and how they work. Contrary to what the average HackerNews seems to believe, LLMs don’t “think,” they just predict. And there’s nothing in them that will constrain their token prediction in a way that improves accuracy.
reply
cozzyd
4 days ago
[-]
If anything, they may regress due to being trained on lower-quality input.
reply
rybosworld
4 days ago
[-]
> Contrary to what the average HackerNews seems to believe, LLMs don’t “think,” they just predict.

Anecdotally, I can't recall ever seeing someone on HackerNews accuse LLM's of thinking. This site is probably one of the most educated corners of the internet on the topic.

> They shouldn’t be expected to improve in accuracy because of what they are and how they work.

> And there’s nothing in them that will constrain their token prediction in a way that improves accuracy.

These are both incorrect. LLMs are already considerably better today than they were in 2022.

reply
reshlo
4 days ago
[-]
> I can’t recall ever seeing someone on HackerNews accuse LLMs of thinking.

There are definitely people here who think LLMs are conscious.

https://news.ycombinator.com/item?id=41959163

reply
Bjorkbat
4 days ago
[-]
> I am constantly surprised how prevalent this attitude is. ChatGPT was only just released in 2022. Is there some expectation that these things won't improve?

I mean, in a way, yeah.

The last 10 years were basically one hype cycle after another, filled with lofty predictions that never quite panned out. Besides the fact that many of these predictions kind of fell short, there's also the perception that progress on these various things kind of ground to a halt once the interest faded.

3D printers are interesting. Sure, they have gotten incrementally better after the hype cycle died out, but otherwise their place in society hasn't changed, nor will it likely ever change. It has its utility for prototyping and as a fun hobbyist machine for making plastic toys, but otherwise I remember people saying that we'd be able to just 3D print whatever we needed rather than relying on factories.

Same story with VR. We've made a lot of progress since the first Oculus came out, but otherwise their role in society hasn't changed much since then. The latest VR headsets are still as useless and still as bad for gaming. The metaverse will probably never happen.

With AI, I don't want to be overly dismissive, but at the same time there's a growing consensus that pre-training scaling laws are plateauing, and AI "reasoning" approaches always seemed kind of goofy to me. I wouldn't be surprised if generative AI reaches a kind of equilibrium where it incrementally improves but improves in a way where it gets continuously better at being a junior developer but never quite matures beyond that. The world's smartest beginner if you will.

Which is still pretty significant mind you, it's just that I'm not sure how much this significance will be felt. It's not like one's skillset needs to adjust that much in order to use Cursor or Claude, especially as they get better over time. Even if it made developers 50% more productive, I feel like the impact of this will be balanced-out to a degree by declining interest in programming as a career (feel like coding bootcamp hype has been dead for a while now), a lack of enough young people to replace those that are aging out, the fact that a significant number of developers are, frankly, bad at their job and gave up trying to learn new things a long time ago, etc etc.

I think it really only matters in the end if we actually manage to achieve AGI, once that happens though it'll probably be the end of work and the economy as we know it, so who cares?

I think the other thing to keep in mind is that the history of programming is filled with attempts to basically replace programmers. Prior to generative AI, I remember a lot of noise over low-code / no-code tools, but they were just the latest chapter in the evolution of low-code / no-code. Kind of surprised that even now in Anno Domini 2024 one can make a living developing small-business websites due to the limitations of the latest batch of website builders.

reply
76SlashDolphin
4 days ago
[-]
The funny thing about 3D printers is that they're making a bit of a comeback. The early ones managed to get the capabilities right - you can print some really impressive things on the older Creality printers, but it required fiddling, several hours of building the bloody things, clogged extruders, manual bed leveling and all sorts of technical hurdles. A very motivated techy person will persevere and solve them, and will be rewarded with a very useful machine. The other 99.99% of people won't, and will either drop it the moment there's an issue or will hear from others that they require a lot of fiddling and never buy one.

If things ever get more complicated than "I see it on a website and I click a few buttons" (incl. maintenance) then it's too complicated to gain mass adoption... which is exactly what newer 3D printers are fixing - the Bambu Lab A1 Mini is £170, prints like a champ, is fairly quiet, requires next to no setup, takes up way less space than the old Enders and needs almost no maintenance. It's almost grandma-proof. Oh, and to further entice your grandma to print, it comes with all sorts of useful knick-knacks that you can start printing immediately after setup. On the hype cycle curve I think 3D printers are almost out of their slump now that we have models that have ironed out the kinks.

But for VR I think we're still closer to the bottom of the curve - Meta and Valve need something to really sell the technology. The gamble for Valve was that it'd be Half-Life: Alyx, and for Meta it was portable VR, but the former is too techy to set up (and Half-Life is already a nerdy IP) while Meta just doesn't have anything that can convince the average person to get a headset (despite me thinking it's a good value just as a Beat Saber machine). But they're getting there - I've convinced a few friends to get a Quest 3S just to practice piano with Virtuoso, and I think it's those kinds of apps, which I hope we see more of, that will bring VR out of the slump.

As for LLMs, I think their hype cycle is a lot more elevated, since even regular people use them extensively now. There will probably be a crash in terms of experimentation with them, but I don't see people stopping their usage, and I do see them becoming a lot more useful in the long term - how and when is difficult to predict at the top of the hype curve.

reply
Bjorkbat
4 days ago
[-]
Like I said, 3D printers have gotten incrementally better. Buddy of mine makes high-quality prints on one that’s way cheaper than what I owned back in the day.

And yet nothing has really changed because he’s still using it to print dumb tchotchkes like every other hobbyist 10 years ago.

I can foresee them getting better, but never getting good enough to actually fundamentally change society or live up to past promises.

reply
gspencley
4 days ago
[-]
> > LLM’s never provide code that pass my sniff test

> This is ego speaking.

Consider this, 100% of AI training data is human-generated content.

Generally speaking, we apply the 90/10 rule to human generated content: 90% of (books, movies, tv shows, software applications, products available on Amazon) is not very good. 10% shines.

In software development, I would say it's more like 99 to 1 after working in the industry professionally for over 25 years.

How do I divorce this from my personal ego? It's easy to apply objective criteria:

- Is the intent of code easy to understand?

- Are the "moving pieces" isolated, such that you can change the implementation of one with minimal risk of altering the others by mistake?

- Is the solution in code a simple one relative to alternatives?

The majority of human produced code does not pass the above sniff test. Most of my job, as a Principal on a platform team, is cleaning up other peoples' messes and training them how to make less of a mess in the future.

If the majority of human-generated content fails to follow basic engineering practices that are employed in other engineering disciplines (i.e. it never ceases to amaze me how much of an uphill battle it is just to get some SWEs to break down their work into small, single-responsibility, easily testable and reusable "modules"), then we can't logically expect any better from LLMs, because this is what they're being trained on.

And we are VERY far off from LLMs that can weigh the merits of different approaches within the context of the overall business requirements and choose which one makes the most sense for the problem at hand, as opposed to just "what's the most common answer to this question?"

LLMs today are a type of magic trick. You give it a whole bunch of 1s and 0s so that you can input some new 1s and 0s, and it can use some fancy probability maths to predict "based on the previous 1s and 0s, what are the statistically most likely next 1s and 0s to follow from the input?"

That is useful, and the result can be shockingly impressive depending on what you're trying to do. But the limitations are so severe that the prospect of replacing an entire high-skilled profession with that magic trick is kind of a joke.

reply
m_ke
4 days ago
[-]
Your customers don't care how your code smells, as long as it solves their problem and doesn't cost an arm and a leg.

A ton of huge businesses full of Sr. Principal Architect SCRUM masters are about to get disrupted by 80-line ChatGPT wrappers hacked together by a few kids in their dorm room.

reply
gspencley
4 days ago
[-]
> Your customers don't care how your code smells, as long as it solves their problem and doesn't cost an arm and a leg.

Software is interesting because if you buy a refrigerator, even an inexpensive one, you have certain expectations as to its basic functions. If the compressor were to cut out periodically in unexpected ways, affecting your food safety, you would return it.

But in software customers seem to be conditioned to just accept bugs and poor performance as a fact of life.

You're correct that customers don't care about "code quality", because they don't understand code or how to evaluate it.

But you're assuming that customers don't care about the quality of the product they are paying for, and you're divorcing that quality from the quality of the code as if the code doesn't represent THE implementation of the final product. The hardware matters too, but to assume that code quality doesn't directly affect product quality is to pretend that food quality is not directly impacted by its ingredients.

reply
throwaway_43793
4 days ago
[-]
Code quality does not affect final product quality IMHO.

I worked in companies with terrible code, deployed on an over-engineered cloud provider using custom containers hacked together with a nail and a screwdriver, but the product was excellent. It had bugs here and there, but it worked and delivered what needed to be delivered.

SWEs need to realize that code doesn't really matter. For 70 years we have been debating the best architecture patterns, and yet the biggest fear of every developer is working on legacy code, as it's an unmaintainable piece of ... written by humans.

reply
gspencley
4 days ago
[-]
> Code quality does not affect final product quality IMHO.

What we need, admittedly, is more research and study around this. I know of one study which supports my position, but I'm happy to admit that the data is sparse.

https://arxiv.org/abs/2203.04374

From the abstract:

"By analyzing activity in 30,737 files, we find that low quality code contains 15 times more defects than high quality code."

reply
PeterisP
4 days ago
[-]
The parent's point isn't that shitty code doesn't have defects, but rather that there's usually a big gap between the code (and any defects in that code) and the actual service or product that's being provided.

Most companies have no relation between their code and their products at all - a major food conglomerate will have hundreds or thousands of IT personnel and no direct link between defects in their business process automation code (which is the #1 employment of developers) and the quality of their products.

For companies where the product does have some tech component (e.g. the refrigerators mentioned above), again, I'd bet that most of those companies' developers don't work on any code that's intended to be in the product; in such a company there is simply far more programming work outside of the product than inside it. The companies making a software-first product (like startups on Hacker News), where a software defect implies a product defect, are an exception, not the mainstream.

reply
dbetteridge
4 days ago
[-]
It doesn't matter... until it does.

Having poor-quality code makes refactoring for new features harder; it increases the time to ship and means bugs are harder to fix without side effects.

It also means changes have more side effects and are more likely to contain bugs.

For an MVP or a startup just running off seed funding? Go ham with LLMs and get something in front of your customers, but then when more money is available you need to prioritise making that early code better.

reply
fzeroracer
4 days ago
[-]
Code quality absolutely does matter, because when everything is on fire and your service is down and no one is able to fix it customers WILL notice.

I've seen plenty of companies implode because they fired the one guy that knew their shitty codebase.

reply
simianparrot
4 days ago
[-]
Much like science in general, these topics are never -- and can never be -- considered settled. Hence why we still experiment with and iterate on architectural patterns, because reality is ever-changing. The real world from whence we get our input to produce desired output is always changing and evolving, and thus so are the software requirements.

The day there is no need to debate systems architecture anymore is the heat death of the universe. Maybe before that AGI will be debating it for us, but it will be debated.

reply
ben_w
4 days ago
[-]
> Consider this, 100% of AI training data is human-generated content.

Probably doesn't change the conclusion, but this is no longer the case.

I've yet to try Phi-4, but the whole series has synthetic training data; I don't value Phi-3 despite what the benchmarks claim.

reply
rybosworld
4 days ago
[-]
> That is useful, and the result can be shockingly impressive depending on what you're trying to do. But the limitations are so severe that the prospect of replacing an entire high-skilled profession with that magic trick is kind of a joke.

The possible outcome space is not binary (at least in the near term), i.e. either AI replaces devs, or it doesn't.

What I'm getting at is this: There's a pervasive attitude among some developers (generally older developers, in my experience) that LLM's are effectively useless. If we're being objective, that is quite plainly not true.

These conversations tend to start out with something like: "Well _my_ work in particular is so complex that LLM's couldn't possibly assist."

As the conversation grows, the tone gradually changes to admitting: "Yes there are some portions of a codebase where LLM's can be helpful, but they can't do _everything_ that an experienced dev does."

It should not even be controversial to say that AI will only improve at this task. That's what technology does over the long run.

Fundamentally, there's ego involved whenever someone says "LLM's have _never_ produced usable code." That statement is provably false.

reply
ricardobeat
4 days ago
[-]
That’s short term thinking in my opinion. LLMs will not replace developers by writing better code: it’s the systems we work on that will start disappearing.

Every SaaS and marketplace is at risk of extinction, superseded by AI agents communicating ad hoc. Management and business software will be replaced by custom, one-off programs built by AI. The era of large teams painstakingly building specialized software for niche use cases will end. Consequently we'll have millions of unemployed developers, except for the ones maintaining the top-level orchestration for all of this.

reply
dimgl
4 days ago
[-]
> most of the actual systems we work on will simply start disappearing.

What systems do you think are going to start disappearing? I'm unclear how LLMs are contributing to systems becoming redundant.

reply
Terr_
4 days ago
[-]
Not parent poster, but I imagine it will be a bit like the horror stories of companies (ab)using spreadsheets in lieu of a proper program or database: They will use an LLM to get half-working stuff "for free" and consider it a bargain, especially if the detectable failures can be spot-fixed by an intern doing data-entry.

I think we'll see it first in internal reporting tools, where the stakeholder tries to explain something very specific they want to see (logical or not) and when it's visibly wrong they can work around it privately.

reply
iLoveOncall
4 days ago
[-]
> Not parent poster, but I imagine it will be a bit like the horror stories of companies (ab)using spreadsheets in lieu of a proper program or database: They will use an LLM to get half-working stuff "for free" and consider it a bargain, especially if the detectable failures can be spot-fixed by an intern doing data-entry.

When I read things like this I really wonder if half of the people on HackerNews have ever held a job in software development (or a job at all to be fair).

None of what you describe is even remotely close to reality.

Stuff that half works gets you fully fired by the client.

reply
angoragoats
4 days ago
[-]
Are you sure you read the post you’re quoting correctly? They’re talking about companies that don’t have custom software at all, and cobble together half-working buggy spreadsheets and call the problem solved. Then the developers (either FT or contract) have to come in and build the tool that kills the spreadsheet, once it gets too unwieldy.

I have seen the above story play out literally dozens of times in my career.

reply
Terr_
4 days ago
[-]
If you can't even believe that kludgy-shit exists out there in the world, then it sounds like you've led a sheltered existence climbing within a very large engineering department of a well-funded company.

Do you perhaps have any friends at companies which hired overseas contractors? Or system-admins working at smaller companies or nonprofits? They're more-likely to have fun stories. I myself remember a university department with a master all_students.xls file on a shared drive (with way too many columns and macros in it) that had to be periodically restored from snapshots every time it got corrupted...

reply
rqtwteye
4 days ago
[-]
I think a lot of CRUD apps will disappear. A lot of the infrastructure may also be done by AI instead of some dude writing tons of YAML code.
reply
betaby
4 days ago
[-]
The infrastructure is not 'some dude writing tons of YAML code'.
reply
munk-a
4 days ago
[-]
CRUD apps should already be disappearing. You should be using a framework that auto-generates the boilerplate stuff.
reply
iLoveOncall
4 days ago
[-]
> I think a lot of CRUD apps will disappear

How does that make any sense? How is AI, and especially GenAI, something that is by definition fallible, better in ANY way than current frameworks that allow you to write CRUD applications deterministically with basically one line of code per endpoint (if that)?
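For a sense of what I mean, here's a rough sketch with Django REST Framework (an illustrative choice on my part; the Item model is hypothetical). A serializer, a viewset, and one router.register() call generate the full set of list/retrieve/create/update/delete endpoints, deterministically:

    # Rough sketch with Django REST Framework (illustrative framework choice;
    # `Item` is a hypothetical existing Django model).
    from rest_framework import routers, serializers, viewsets
    from myapp.models import Item

    class ItemSerializer(serializers.ModelSerializer):
        class Meta:
            model = Item
            fields = "__all__"

    class ItemViewSet(viewsets.ModelViewSet):
        queryset = Item.objects.all()
        serializer_class = ItemSerializer

    router = routers.DefaultRouter()
    router.register(r"items", ItemViewSet)  # this one line wires up all the CRUD routes
    urlpatterns = router.urls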

reply
idopmstuff
4 days ago
[-]
Recovering enterprise SaaS PM here. I don't necessarily know that a lot of enterprise SaaS will disappear, but I do think that a lot of the companies that build it will go out of business as their customers start to build more of their internal systems with LLMs vs. buy from an existing vendor. This is probably more true at the SMB level for now than actual enterprise, both for technical and internal politics reasons, but I expect it to spread.

As a direct example from myself, I now acquire and run small e-commerce brands. When I decided to move my inventory management from Google Sheets into an actual application, I looked at vendors but ultimately just decided to build my own. My coding skills are pretty minimal, but sufficient that I was able to produce what I needed with the help of LLMs. It has the advantages of being cheaper than buying and also purpose-built to my needs.

So yeah, basically the tl;dr is that for internal tools, I believe that LLMs giving non-developers sufficient coding skills will shift the build vs. buy calculus squarely in the direction of build, with the logical follow-on effects to companies trying to sell internal tools software.

reply
achrono
4 days ago
[-]
Long-time enterprise SaaS PM here, and sorry, this does not make any sense. The SMB segment is likely to be the least exposed to AI, to software in general, and to the concept of DIY software through AI.

As you visualize whole swaths of human workers getting automated away, also visualize the nitty gritty of day-to-day work with AI. If it gets something wrong, it will say "I apologize" until you, dear user, are blue in the face. If an actual person tried to do the same, the blueness would instead be on their, not your, face. Therein lies the value of a human worker. The big question, I think, is going to be: is that value commensurate to what we're making on our paycheck right now?

reply
idopmstuff
2 days ago
[-]
> If it gets something wrong, it will say "I apologize" until you, dear user, are blue in the face. If an actual person tried to do the same, the blueness would instead be on their, not your, face. Therein lies the value of a human worker.

The value of a human worker is in a more meaningful apology? I think the relevant question here is who's going to make more mistakes, not who's going to be responsible when they happen. A good human is better than AI today, but that's not going to last long.

reply
dingnuts
4 days ago
[-]
> go out of business as their customers start to build more of their internal systems with LLMs vs. buy from an existing vendor.

there is going to be so much money to make as a consultant fixing these setups, I can't wait!

reply
idopmstuff
2 days ago
[-]
There is absolutely going to be a golden window of opportunity in which a person who understands LLMs can sell zero-effort, custom-crafted software to SMBs at the most insane margins of any business ever.
reply
latentsea
4 days ago
[-]
For trivial setups this might work, but for anything sufficiently complex that actually hits on real complexity in the domain, it's hard to see any LLM doing an adequate job. Especially if the person driving it doesn't know what they don't know about the domain.
reply
idopmstuff
2 days ago
[-]
> For trivial setups this might work, but for anything sufficiently complex that actually hits on real complexity in the domain, it's hard to see any LLM doing an adequate job.

I mostly agree with this for now, but obviously LLMs will continue to improve and be able to handle greater and greater complexity without issue.

> Especially if the person driving it doesn't know what they don't know about the domain.

Sure, but if the person driving it doesn't know what they're doing, they're also likely to do a poor job buying a solution (getting something that doesn't have all the features they need, selecting something needlessly complex, overpaying, etc.). Whether you're building or buying a piece of enterprise software, you want the person doing so to have plenty of domain expertise.

reply
ricardobeat
3 days ago
[-]
Amazing to see this comment downvoted. You're spot on, and I even think the feasible use cases will quickly move from internal tools to real line of business software. People are in denial or really have no idea what's coming.
reply
asdev
4 days ago
[-]
You do realize that these so-called "one-off" AI programs would need to be maintained? Most people paying for SaaS are paying for the support/maintenance rather than the features, which AI can't handle. No one will want to replace a SaaS they depend on with a poorly generated variant that they then have to maintain themselves.
reply
mlinhares
4 days ago
[-]
Nah, you only write it and it runs by itself forever in the AI cloud.

Sometimes I wonder if people saying this stuff have actually worked in development at all.

reply
ricardobeat
3 days ago
[-]
Thinking of it in terms of code is why the idea sounds ridiculous. AI won't make you a Django app and deploy it to the cloud (though you can also do that); systems will be built from pipelines and automation, integrations, and just-in-time UI or conversational interfaces. Similar to the no-code platforms of today.
reply
m_ke
4 days ago
[-]
Most people don’t want cloud-hosted subscription software; we do it that way because VCs love vendor lock-in and recurring revenue.

Old-school desktop software takes very little maintenance. Once you get rid of user tracking, A/B testing, monitoring, CI/CD pipelines, microservices, SOC, multi-tenant distributed databases, network calls and all the other crap, things get pretty simple.

reply
raincole
4 days ago
[-]
There was a vulnerability found in 7-Zip in November 2024.

Yes, 7-zip.

https://cert.europa.eu/publications/security-advisories/2024...

reply
m_ke
4 days ago
[-]
1. If everyone is running custom-written software that's not shared with anyone else, it will be really hard to find vulnerabilities in it.

2. I'm sure LLMs are already way better at detecting vulnerabilities than the average engineer (when asked to do so explicitly).

reply
elforce002
4 days ago
[-]
You can go further: which business will bet its entire existence, let alone its finances, on an "AI"? (Companies are literally writing "don't rely on X LLM outputs as medical, legal, financial, or other professional advice".)
reply
luddite2309
4 days ago
[-]
This is a fascinating comment, because it shows such a mis-reading of the history and point of technology (on a tech forum). Technological progress always leads to loss of skilled labor like your own, usually resulting in lower quality (but higher profits and often lower prices). Of COURSE an LLM won't be able to do work as well as you, just as industrial textile manufacturing could not, and still does not, produce the quality of work of 19th century cottage industry weavers; that was in fact one of their main complaints.

As an aside, at the top of the front page right now is a sprawling essay titled "Why is it so hard to buy things that work well?"...

reply
packetlost
4 days ago
[-]
This is a take that shows a complete lack of understanding of what software engineering is actually about.
reply
munk-a
4 days ago
[-]
The truth is somewhere in the middle. Do you remember the early 2000s boom of web developers that built custom websites for clients ranging from e-commerce sites to pizza restaurants? Those folks have found new work as the pressure from one-size fits all CMS providers (like Squarespace) and much stronger frameworks for simple front-ends (like node) have squeezed that market down to just businesses that actually need complex custom solutions and reduced the number of people required to maintain those.

It's likely we'll see LLMs used to build a lot of the cheap stuff that previously existed as arcane Excel macros (I've already seen less technical folks use them to analyze spreadsheets), but there will remain hard problems that developers are needed to solve.

reply
brink
4 days ago
[-]
Comparing an LLM to an industrial textile machine is laughable, because one is consistent and reliable while the other is not.
reply
palata
4 days ago
[-]
It's laughable only if you think that the only reasonable metric is "consistent and reliable".

The parent says "it typically doesn't matter that the product is worse if it's cheap enough". And that seems valid to me: the average quality of software today seems to be worse than 10 years ago. We do worse but cheaper.

reply
brink
3 days ago
[-]
> And that seems valid to me: the average quality of software today seems to be worse than 10 years ago.

You don't remember Windows Vista? Windows ME?

I think you have that view because of survivor's bias. Only the good old software is still around today. There was plenty of garbage that barely worked being shipped 10 years ago.

reply
palata
3 days ago
[-]
Let's think about this: smartphones are orders of magnitude faster today than they were 10 years ago, and yet websites load slower on average.
reply
sigmarule
4 days ago
[-]
My perspective is that if you are unable to find ways to improve your own workflows, productivity, output quality, or any other meaningful metric using the current SOTA LLM models, you should consider the possibility that it is a personal failure at least as much as you consider the possibility that it is a failure of the models.

A more tangible pitfall I see people falling into is testing LLM code generation using something like ChatGPT and not considering more involved usage of LLMs via interfaces more suited for software development. The best results I've managed to realize on our codebase have not been with ChatGPT or IDEs like Cursor, but with a series of processes that iterate over our full codebase multiple times to extract various levels of reusable insights: general development patterns, error-handling patterns, RBAC-related patterns, example tasks for common types of work based on git commit histories (i.e. adding a new API endpoint related to XYZ), common bugs or failure patterns (again by looking through git commit histories), all of which forms a sort of library of higher-level context and reusable concepts. Feeding this into o1, with a pre-defined "call graph" of prompts to validate the output, fix identified issues, and consider past errors in similar commits and past executions, has produced some very good results for us so far.

I've also found much more success with ad-hoc questions after writing a small static analyzer to trace imports, variable references->declarations, etc., to isolate the portions of the codebase to use for context, rather than the RAG-based searching that a lot of LLM-centric development tools seem to use. It's also worth mentioning that output quality seems to be very much influenced by language; I thankfully primarily work with Python codebases, though I've had success using it against (smaller) Rust codebases as well.
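For a rough flavor of the import-tracing part, here's a toy sketch of the idea using Python's ast module (not our actual analyzer; it ignores packages and relative imports):

    # Toy sketch: trace a file's local imports to decide which project files to
    # hand the model as context, instead of RAG-style retrieval.
    import ast
    from pathlib import Path

    def local_imports(py_file: Path, project_root: Path) -> set[Path]:
        """Return project files that `py_file` imports directly."""
        tree = ast.parse(py_file.read_text(), filename=str(py_file))
        found: set[Path] = set()
        for node in ast.walk(tree):
            names = []
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            for name in names:
                candidate = project_root / (name.replace(".", "/") + ".py")
                if candidate.exists():
                    found.add(candidate)
        return found

    def context_for(entry: Path, project_root: Path, max_files: int = 20) -> list[Path]:
        """Breadth-first walk of local imports, producing a bounded list of files."""
        seen = [entry]
        queue = [entry]
        while queue and len(seen) < max_files:
            for dep in sorted(local_imports(queue.pop(0), project_root)):
                if dep not in seen and len(seen) < max_files:
                    seen.append(dep)
                    queue.append(dep)
        return seen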

reply
j45
4 days ago
[-]
Sometimes, if it's as much work to set up and keep the tooling running as it would be to just write the code, it's worth thinking about the tradeoffs.

The interesting piece is a person with enough experience to know how to push LLMs into outputting the perfect little function or utility for a problem, and to collect enough of those to get somewhere.

reply
mplanchard
3 days ago
[-]
This sounds nice, but it also sounds like a ton of work to set up that we don't have time for. Local models that don't require us to send our codebase to Microsoft or OpenAI would be something I'm sure we'd be willing to try out.

I'd love it if more companies were actually considering real engineering needs to provide products in this space. Until then, I have yet to see any compelling evidence that the current chatbot models can consistently produce anything useful for my particular work other than the occasional SQL query.

reply
nidnogg
4 days ago
[-]
LLMs are not necessarily a waste of time like you mention, as their application isn't limited to generating algorithms like you're used to.

When you consider LLMs to be building blocks in bigger, more complex systems, their potential increases dramatically. That's where mid/senior engineers would chip in and add value to a company, in my point of view. There's also different infrastructure paradigms involved that have to be considered carefully, so DevOps is potentially necessary for years to come.

I see a lot of ego in this comment, and I think this is actually a good example of how to NOT safeguard yourself against LLMs. This kind of skepticism is the most toxic to yourself. Dismiss them as novelty for the masses, as bullshit tech, keep doing your same old things and discard any applications. Textbook footgun.

reply
NorthTheRock
4 days ago
[-]
> When you consider LLMs to be building blocks in bigger, more complex systems, their potential increases dramatically.

Do you have any examples of where/how that would work? It has seemed to me like a lot of the hype is "they'll be good" with no further explanation.

reply
curl-up
4 days ago
[-]
I pull messy data from a remote source (think OCR-ed invoices for example), and need to clean it up. Every day I get around 1k new rows. The way in which it's messed up changes frequently, and while I don't care about it being 100% correct, any piece of code (relying on rules, regex, heuristics and other such stuff) would break in a couple of weeks. This means I need at least a part time developer on my team, costing me a couple of thousands per month.

Or I can pass each row through an LLM and get structured clean output out for a couple of dollars per month. Sure, it doesn't work 100%, but I don't need that, and neither could the human-written code do it.
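For a sense of how little code that takes, a minimal sketch (assuming the OpenAI Python client; the model name and the invoice fields are illustrative, not my actual setup):

    # Per-row cleanup sketch: ask the model for structured JSON and parse it.
    # Model name and field names are illustrative.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPT = (
        "Extract the following fields from this OCR'd invoice row and return JSON "
        "with keys vendor, date (ISO 8601), total (number), currency. "
        "Use null for anything you cannot determine.\n\nRow: {row}"
    )

    def clean_row(raw_row: str) -> dict:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": PROMPT.format(row=raw_row)}],
            response_format={"type": "json_object"},  # force valid JSON
            temperature=0,
        )
        return json.loads(response.choices[0].message.content)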

Effectively, the LLM resulted in one less developer hired on our team.

reply
nidnogg
4 days ago
[-]
It resulted in one less developer, but you're still using that tool, right? Didn't a human (you) point the LLM at this problem and think this through?

That fired developer now has the toolset to become a CEO much, much more easily than in the pre-LLM era. You didn't really make him obsolete. You made him redundant. I'm not saying he's going to become a CEO, but trudging through programming problems is much easier for him as a whole.

Redundancies happen all the time and they don't end career types. Companies get bought, traded, and merged. Whenever this happens the redundant folk get the axe. They follow on and get re-recruited into another comfy tech job. That's it really.

reply
defrost
4 days ago
[-]
Human or LLM, the trick with messy inputs from scanned sources is having robust sanity combs that look for obvious FUBARs, plus a means by which end data users can review the asserted values against the original raw image sources (and flag them for review / alteration).

At least in my past experience with volumes of transcribed data for applications that are picky about accuracy.
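A rough sketch of what those sanity combs can look like for the invoice rows discussed above (field names and thresholds made up for illustration):

    # Cheap deterministic checks over extracted rows; anything flagged goes to a
    # human review queue alongside the original scanned image.
    from datetime import date

    def sanity_check(row: dict) -> list[str]:
        """Return human-readable reasons this row needs manual review."""
        flags = []
        if not row.get("vendor"):
            flags.append("missing vendor")
        total = row.get("total")
        try:
            plausible = 0 < float(total) < 1_000_000
        except (TypeError, ValueError):
            plausible = False
        if not plausible:
            flags.append(f"implausible total: {total!r}")
        try:
            if date.fromisoformat(row.get("date") or "") > date.today():
                flags.append("date is in the future")
        except ValueError:
            flags.append(f"unparseable date: {row.get('date')!r}")
        if row.get("currency") not in {"USD", "EUR", "GBP", None}:
            flags.append(f"unexpected currency: {row.get('currency')!r}")
        return flags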

reply
nidnogg
4 days ago
[-]
I do! I posted this a while down below but I guess the way the algorithm here works it got super deprioritized. Full repost:

I can chip in from my tech consulting job where we ship a few GenAI projects to several AWS clients via Amazon Bedrock. I'm senior level but most people here are pretty much insulated.

I think whoever commented here about more complex problems being tackled (and the nature of these problems becoming broader) is right on the money. Newer patterns around LLM-based applications are emerging, and having seen them first hand, they feel like a slight paradigm shift in programming. But they are still, at heart, programming questions.

A practical example: company sees GenAI chatbot, wants one of their own, based on their in-house knowledge base.

Right then and there there is a whole slew of new business needs with necessary human input to make it work that ensues.

- Is training your own LLM needed? See a Data Engineer/Data engineering team.

- If going with a ready-made solution, which LLM to use instead? Engineer. Any level.

- Infrastructure around the LLM of choice. Get DevOps folk in here. Cost assessment is real and LLMs are pricey. You have to be on top of your game to estimate stuff here.

- Guard rails, output validation. Engineers.

- Hooking up to whatever app front-end the company has. Engineers come to the rescue again.

All these have valid needs for engineers, architects/staff/senior what have you — programmers. At the end of the day, these problems devolve into the same ol' https://programming-motherfucker.com

And I'm OK with that so far.

reply
ksdnjweusdnkl21
4 days ago
[-]
Hard to believe anyone is getting contacted more now than in 2020. But I agree with the general sentiment. I'll do nothing and if I get replaced then I get replaced and switch to woodworking or something. But if LLMs do not pan out then I'll be ahead of all the people who wasted their time with that.
reply
th0ma5
4 days ago
[-]
The sanest comment here.
reply
jensensbutton
4 days ago
[-]
The question isn't about what you'll do when you're replaced by an LLM, it's what you're doing to future proof your job. There is a difference. The risk to hedge against is the productivity boost brought by LLMs resulting in a drop in the needs for new software engineers. This will put pressure on jobs (simply don't need as many as we used to so we're cutting 15%) AND wages (more engineers looking for fewer jobs with a larger part of their utility being commoditized).

Regardless of how sharp you keep yourself, you're still subject to the macro environment.

reply
simianparrot
4 days ago
[-]
I'm future proofing my job by ensuring I remain someone whose brain is tuned to solving complex problems, and to do that most effectively I find ways to keep being engaged in both the fundamentals of programming (as already mentioned) and the higher-level aspects: Teaching others (which in turn teaches me new things) and being in leadership roles where I can make real architectural choices in terms of what hardware to run our software on.

I'm far more worried about mental degradation due to any number of circumstances -- unlucky genetics, infections, what have you. But "future proofing" myself against some of that has the same answer: Remain curious, remain mentally ambidextrous, and don't let other people (or objects) think for me.

My brain is my greatest asset both for my job and my private life. So I do what I can to keep it in good shape, which incidentally also means replacing me with a parrot is unlikely to be a good decision.

reply
paulcole
3 days ago
[-]
Nobody who's been replaced by a parrot thought they were going to get replaced by a parrot.

Where's your espoused intellectual curiosity and mental ambidextrousness when it comes to LLMs? It seems like your mind is pretty firmly made up that they're of no worry to you.

reply
simianparrot
2 days ago
[-]
In another comment I briefly mention one use case I have been implementing with LLM's: https://news.ycombinator.com/item?id=42434886

I'm being vague with the details because this is one of the features in our product that's a big advantage over competitors.

To get to that point I experimented with and prototyped a lot using LLM's; I did a deep dive into how they work from the ground up. I understand the tech and its strengths. But likewise the weaknesses -- or perhaps more aptly, the use cases which are more "magic show" than actually useful.

I never dismissed LLM's as useless, but I am as confident as I can possibly be that LLM's on their own cannot and will not put programmers out of jobs. They're a great tool, but they're not what many people seem to be fooled into believing that they are; they are not "intelligent" nor "agents".

Maybe it'll be a bit difficult for juniors to get entry level jobs for a short while due to a misunderstanding of the tech and all the hype, but that'll equalize pretty quickly once reality sets in.

reply
luckylion
4 days ago
[-]
Are you though? Until the AI-augmented developer provides better code at lower cost, I'm not seeing it. Senior developers aren't paid well because they can write code very fast; it's because they can make good decisions and deliver projects that not only work, but can be maintained and built upon for years to come.

I know a few people who have been primarily programming for 10 years but are not seniors. 5 of them (probably 10 or more, but let's not overdo it), with AI, cannot replace one senior developer unless you make that senior do super basic tasks.

reply
crystal_revenge
4 days ago
[-]
> never provide code that pass my sniff test

Unfortunately it won't be your sniff test that matters. It's going to be an early founder that realizes they don't need to make that extra seed round hire, or the resource limited director that decides they can forgo that one head count and still deliver the product on time, or the in house team that realizes they no longer need a dedicated front end dev because, for their purposes, AI is good enough.

Personally, the team I lead is able to ship much faster with AI assistants than without, which means in practice we can out compete much larger teams in the same space.

Sure, there are things that AI will always struggle with, but those things aren't merely "senior" in nature; they're much closer to niche expert types of problems. Engineers working on generally cutting-edge work will likely be in demand and hard to replace, but many others will very likely be impacted by AI from multiple directions.

reply
irunmyownemail
4 days ago
[-]
"Personally, the team I lead is able to ship much faster with AI assistants than without, which means in practice we can out compete much larger teams in the same space."

So, you're giving away your company's IP to AI firms, does your CEO understand this?

reply
crystal_revenge
4 days ago
[-]
Appreciate the snark, but yes my CEO and I did have a good chat about that. My team's work is 100% open source, so if anyone wants to borrow that code I'm more than happy to share! (obviously not from a pseudonymous account, but I don't mind the API sniffing).

But there's plenty of fantastic open models now and you can increasingly run this stuff locally so you don't have to send that IP to AI firms if you don't want to.

reply
dreamfactored
3 days ago
[-]
IP is a legal construct not a technical one. If it's protected by law it's protected by law.
reply
shafyy
4 days ago
[-]
Completely agree. The other day I was trying to find something in the Shopify API docs (an API I'm not familiar with), and I decided to try their chatbot assistant thingy. I thought, well, if an LLM is good at something, it's probably finding information in a very narrow field like this. However, it kept telling me things that were plain wrong; I could easily compare its answers against the docs right next to it. It would have been faster just reading the docs.
reply
Quarrel
4 days ago
[-]
> I thought, well, if an LLM is good at something, it's probably at finding information in a very narrow field like this.

I feel like, of all the things in this thread, this one is on them. It absolutely is something that LLMs are good at. They have the sample size, the examples and docs, all the things. LLMs are particularly good at speaking "their" language; the most surprising thing is that they can do so much more beyond that next-token reckoning.

So, yeah, I'm a bit surprised that a shop like Shopify is so sloppy, but absolutely I think they should be able to provide you an LLM to answer your questions. Particularly given some of the Shopify alumni I've interviewed.

That said, some marketing person might have just oversold the capabilities of an LLM that answers most of their core customer questions, rather than one that knows much at all about their API integrations.

reply
shafyy
4 days ago
[-]
Another perspective is: Given that Shopify should have the capabilities to be good at this and their AI assistant is still sucking ass, it's not possible to make a better product with the current technology.

Maybe it's right 99% of the time and works well for many of their developers. But this is exactly the point. I just can't trust a system that sometimes gives me the wrong answer.

reply
throwaway_36924
3 days ago
[-]
I have been an AI denier for the past 2 years. I genuinely wanted to try it out, but the experience I got from Copilot in early 2023 was just terrible. I have been using ChatGPT for some programming questions for all that time, but it was making mistakes, and the lack of editor integration meant it didn't seem like it would make me more productive.

Thanks to this thread I have signed up again for Copilot and I am blown away. I think this easily makes me 2x productive when doing implementation work. It does not make silly mistakes anymore and it's just faster to have it write a block of code than doing it myself.

And the experience is more of an augmentation than replacement. I don't have to let it run wild. I'm using it locally and refactor its output if needed to match my code quality standards.

I am as much concerned (job market) as I am excited (what I will be able to build myself).

reply
TZubiri
4 days ago
[-]
"Nothing because I’m a senior and LLM’s never provide code that pass my sniff test, and it remains a waste of time"

That's why the question is future proof. Models get better with time, not worse.

reply
Seb-C
4 days ago
[-]
LLMs are not the right tool for the job, so even if some marginal bits can be improved, it fundamentally cannot "get better".

The job of a software engineer is to first understand the real needs of the customer/user, which requires a level of knowledge and understanding of the real world that LLMs simply don't have and never will, because that is simply not what they do.

The second part of a software engineer's job is to translate those needs into a functional program. Here the issue is that natural languages are simply not precise enough for the kind of logic that is involved. This is why we invented programming languages rather than use plain English. And since the interface of LLMs is by definition human (natural) languages, it fundamentally will always have the same flaws.

Any precise enough interface for this second part of the job will by definition just be some higher-level programming language, which not only requires an expert in the tool anyway, but is also unrealistic given the amount of effort we already collectively invest in more productive languages and frameworks to replace ourselves.

reply
latentsea
4 days ago
[-]
If the last-mile problems of things like autonomous vehicles have been anything to go by, it seems the last mile problems of entrusting your entire business operations to complete black box software, or software written by a novices talking to complete black box, will be infinitely worse.

There's plenty of low-code, no-code solutions around, and yet still lots of software. The slice of the pie will change, but it's very hard to see it being eliminated entirely.

Ultimately it's going to come down to "do I feel like I can trust this?" and with little to no way to be certain you can completely trust it, that's going to be a harder and harder sell as risk increases with the size, complexity, and value of the business processes being managed.

reply
janalsncm
4 days ago
[-]
Even if seniors still do the last mile, that’s a significant reduction from the full commute they were paid for previously. Are you saying seniors should concede this?
reply
latentsea
4 days ago
[-]
No, I think there's cases in which if you can't do the last mile, you aren't entrusted to do the first either.
reply
Tadpole9181
4 days ago
[-]
It's astounding that basically every response you give is just "it can't happen to me". The actual answer to the OP's question for you is a resounding: "nothing".
reply
latentsea
4 days ago
[-]
More or less. I'm just going to enjoy the ride of what becomes of the industry until I either retire or just go do something else with my life.

I already had my panic moment about this like 2 years ago and have calmed down since then. I feel a lot more at peace than I did when I thought AGI was right around the corner and was trying to plan for a future where, no matter what course of action I took to learn new skills and outpace the AI, I was already too late with my pivot, because the AI would become capable of doing whatever I was switching to on a shorter timeline than it would take me to become proficient in it.

If you at all imagine a world where the need for human software engineers legit goes away, then there isn't likely much for you to do in that world anyway. Maybe except become a plumber.

I don't think AGI is this magic silver bullet Silicon Valley thinks it is. I think, like everything in engineering, it's a trade-off, and I think it comes with a very unpalatable caveat. How do you learn once you've run out of data? By making mistakes. End of the day I think it's in the same boat as us, only at least we can be held accountable.

reply
dreamfactored
3 days ago
[-]
Compilers weren't trusted in the early days either.
reply
layer8
4 days ago
[-]
Models don’t get better just by time passing. The specific reasons for why they’ve been getting better don’t necessarily look like they’ll extend indefinitely into the future.
reply
palata
4 days ago
[-]
> Models get better with time, not worse

A decade (almost?) ago, people were saying "look at how self-driving cars improved in the last 2 years: in 3-5 years we won't need to learn how to drive anymore". And yet...

reply
idopmstuff
4 days ago
[-]
> But I think that’s generations away at best.

I'm not sure whether you mean human generations or LLM generations, but I think it's the latter. In that case, I agree with you, but also that doesn't seem to put you particularly far off from OP, who didn't provide specific timelines but also seems to be indicating that the elimination of most engineers is still a little ways away. Since we're seeing a new generation of LLMs every 1-2 years, would you agree that in ~10 years at the outside, AI will be able to do the things that would cause you to gladly retire?

reply
simianparrot
4 days ago
[-]
I mean human generations because to do system architecture, design and development well you need something that can at least match an average human brain in reasoning, logic and learning plasticity.

I don’t think that’s impossible but I think we’re quite a few human generations away from that. And scaling LLM’s is not the solution to that problem; an LLM is just a small but important part of it.

reply
munk-a
4 days ago
[-]
I'd be cautious as describing anything in tech as human generations away because we're only about a single human generation into a lot of this industry existing.
reply
simianparrot
4 days ago
[-]
I understand the hesitancy to do so, but the thing is, what's changed with computers since the inception of the technology is speed and size. The fundamentals are still the same. Programming today is a lot more convenient than programming via punch cards, but it's a matter of abstractions + scale + speed. Given this I'm pretty confident when I say that, but obviously not 100%.
reply
curl-up
4 days ago
[-]
Could you give an example of an AI capability that would change your mind on this, even slightly? "Sniff test" is rather subjective, and job replacement rarely happens with machines doing something better on the exact same axis of performance as humans. E.g., cars wouldn't have passed the coachman's "sniff test". Or to use a current topic - fast food self-order touchscreens don't pass the customer service people sniff test.
reply
Const-me
4 days ago
[-]
> CPU-only custom 2D pixel blitter engine I wrote to make 2D games in styles practically impossible with modern GPU-based texture rendering engines

I’m curious what’s so special about that blitting?

BTW, pixel shaders in D3D11 can receive screen-space pixel coordinates in SV_Position semantic. The pixel shader can cast .xy slice of that value from float2 to int2 (truncating towards 0), offset the int2 vector to be relative to the top-left of the sprite, then pass the integers into Texture2D.Load method.

Unlike the more commonly used Texture2D.Sample, Texture2D.Load method delivers a single texel as stored in the texture i.e. no filtering, sampling or interpolations. The texel is identified by integer coordinates, as opposed to UV floats for the Sample method.

reply
jf22
4 days ago
[-]
I'd challenge the assertion that LLMs never pass whatever a "sniff test" is to you. The code they produce is looking more and more like working production code.
reply
rdrsss
4 days ago
[-]
+1 to this sentiment for now. I give them a try every 6 months or so to see how they've advanced, and for pure code generation, in my workflow, I don't find them very useful yet. For parsing large sets of documentation, though, not bad. They haven't crept their way into my usual research loop just yet, but I could see that becoming a thing.

I do hear some of my junior colleagues use them now and again, and gain some value there. And if llm's can help get people up to speed faster that'd be a good thing. Assuming we continue to make the effort to understand the output.

But yeah, agree, I raise my eyebrow from time to time, but I don't see anything jaw-dropping yet. Right now they just feel like surrogate Googlers.

reply
goddamnyouryan
23 hours ago
[-]
Tell me more about this 2D pixel blitter engine
reply
rakkhi
4 days ago
[-]
Agree with this. SWEs don't need to do anything because LLMs will just make expert engineers more productive. More detail on my arguments here: https://rakkhi.substack.com/p/economics-of-large-language-mo...
reply
og2023
3 days ago
[-]
Seems like nobody noticed the "delve" part, which is a very dark irony here.
reply
dyauspitr
4 days ago
[-]
This is denial. LLMs already write code at a senior level. Couple that with solid tests and a human code reviewer and we’re already at 5x the story points per sprint at the startup I work at.
reply
whtsthmttrmn
3 days ago
[-]
I'm sure that startup is a great indication of things to come. Bound for long term growth.
reply
dyauspitr
3 days ago
[-]
It’s solid, efficient, bug free code. What else matters?
reply
phist_mcgee
3 days ago
[-]
Yap yap yap. Keep living in denial mate.
reply
whtsthmttrmn
3 days ago
[-]
I'm not living in denial, I think LLMs are extremely useful and have huge potential. But people that are all "omg my startup did this and we reduced our devs by 150% so it must be the end-all tech!" are just as insufferable as the "nope the tech is bad and won't do anything at all" crowd.

And before you mention, the hyperbole is for effect, not as an exact representation.

reply
valval
4 days ago
[-]
Another post where I get to tell a skeptic that they’re failing at using these products.

There will be human programmers in the future as well, they just won’t be ones who can’t use LLMs.

reply
swishman
4 days ago
[-]
The arrogance of comments like this is amazing.

I think it's an interesting psychological phenomenon similar to virtue signalling. Here you are signalling to the programmer in-group how good of a programmer you are. The more dismissive you are the better you look. Anyone worried about it reveals themself as a bad coder.

It's a luxury belief, and the better LLMs get the better you look by dismissing them.

reply
rybosworld
4 days ago
[-]
This is spot on.

It's essentially like saying "What I do in particular is much too difficult for an AI to ever replicate." It is always, in part, humble bragging.

I think some developers like to pretend that they are exclusively solving problems that have never been solved before. Which sure, the LLM architecture in particular might never be better than a person for the novel class of problem.

But the reality is, an extremely high percentage of all problems (and by reduction, the lines of code that build that solution) are not novel. I would guesstimate that less than 1 out of 10,000 developers are solving truly novel problems with any regularity. And those folks tend to work at places like Google Brain.

That's relevant because LLM's can likely scale forever in terms of solving the already solved.

reply
whtsthmttrmn
3 days ago
[-]
> I would guesstimate that less than 1 out of 10,000 developers are solving truly novel problems with any regularity. And those folks tend to work at places like Google Brain.

Looks like the virtue signalling is done on both sides of the AI fence.

reply
rybosworld
3 days ago
[-]
Not sure how you equate that statement with virtue signaling.

This is just a natural consequence of the ever growing repository of solved problems.

For example, consider that sorting of lists is agreed upon as a solved problem. Sure you could re-discover quick sort on your own, but that's not novel.

reply
raincole
4 days ago
[-]
How is it humble bragging when there isn't "humble" part?
reply
rybosworld
4 days ago
[-]
Fair point
reply
irunmyownemail
4 days ago
[-]
"The arrogance of comments like this is amazing."

Defending AI with passion is nonsensical, or at the very least ironic.

reply
arisAlexis
4 days ago
[-]
So your argument is:

There is some tech that is getting progressively better.

I am high on the linear scale

Therefore I don't worry about it ever catching up to me.

And this is the top voted argument.

reply
NorthTheRock
4 days ago
[-]
> is getting progressively better

Is it still getting better? My understanding is that we're already training them on all of the publicly available code in existence, and we're running into scaling walls with bigger models.

reply
arisAlexis
4 days ago
[-]
So you think that was it? Amazing comforting thoughts here
reply
whtsthmttrmn
3 days ago
[-]
Considering the entire purpose of the original post is asking people what they're doing, why do you have such a problem with the top voted argument? If the top voted argument was in favour of this tech taking jobs, would you feel better?
reply
markerdmann
4 days ago
[-]
isn't "delve" a classic tell of gpt-generated output? i'm pretty sure simianparrot is just trolling us. :-)
reply
tigershark
4 days ago
[-]
Yeah... generations. I really hope it doesn't end like the New York Times article saying that human flight was at best hundreds of years away, a few weeks before the Wright brothers' flight.
reply
atomsatomsatoms
3 days ago
[-]
"delve"

hmmmmm

reply
modeless
4 days ago
[-]
> If there’s ever a day where there’s an AI that can do these things, then I’ll gladly retire. But I think that’s generations away at best.

People really believe it will be generations before an AI will approach human level coding abilities? I don't know how a person could seriously consider that likely given the pace of progress in the field. This seems like burying your head in the sand. Even the whole package of translating high level ideas into robust deployed systems seems possible to solve within a decade.

I believe there will still be jobs for technical people even when AI is good at coding. And I think they will be enjoyable and extremely productive. But they will be different.

reply
handzhiev
4 days ago
[-]
I've heard similar statements about human translation - and look where the translators are now
reply
jejeyyy77
3 days ago
[-]
head meet sand lol

source: been coding for 15+ years and using LLMs for past year.

reply
planb
4 days ago
[-]
This is satire, right? The completely off-topic mentioning of your graphics programming skills, the overconfidence, the phrase "delve into"... Let me guess: The prompt was "Write a typical HN response to this post"
reply
simianparrot
4 days ago
[-]
This made me chuckle. But at the same time a bit uneasy because this could mean my own way of expressing myself online might've been impacted by reading too much AI drivel whilst trying to find signal in the noisy internet landscape...

The off-topic mentioning of graphics programming was because I tend to type as I think, then make corrections, and as I re-read it now the paragraph isn't great. The intent was to give an example of how I keep myself sharp, and challenging what many consider "settled" knowledge, where graphics programming happens to be my most recent example.

For what it's worth, English isn't my native language, and you'll have to take my word for it that no chat bots were used in generating any of the responses I've made in this thread. The fact that people are already uncertain about who's a human and who's not is worrisome.

reply
dreamfactored
3 days ago
[-]
I'm certain that humans are picking up AI dialect. Very hard to prove though and will become harder.
reply
simianparrot
1 day ago
[-]
Either that or it's this simple: AI dialect is already the amalgamation of the most popular internet dialects for its training set. I used to read a lot of Reddit years ago now. It's more likely my dialect is influenced by that in subtle ways when typing online in English, and AI is parroting this for obvious reasons.
reply
dogman144
4 days ago
[-]
The last fairly technical career to get surprisingly and fully automated in the way this post displays concern about - trading.

I spent a lot of time with traders in early '00's and then '10's when the automation was going full tilt.

Common feedback I heard from these highly paid, highly technical, highly professional traders in a niche indusry running the world in its way was:

- How complex the job was

- How high a quality bar there was to do it

- How current algos never could do it and neither could future ones

- How there'd always be edge for humans

Today, the exchange floors are closed, SWEs run trading firms, and traders, if they are around, steer algos or work in specific markets such as bonds - and now bonds are getting automated too. LLMs can pass CFA III, the great non-MBA job moat. The trader job isn't gone, but it has capital-C Changed, and it happened quickly.

And lastly - LLMs don't have to be "great," they just have to be "good enough."

See if you can match the above confidence from pre-automation traders with the comments displayed in this thread. You should plan for it aggressively, I certainly do.

Edit - Advice: the job will change, the job might change in that you steer LLMs, so become the best at LLM steering. Trading still goes on, and the huge, crushing firms in the space all automated early and at various points in the settlement chain.

reply
manquer
4 days ago
[-]
> LLMs can pass CFA III.

Everyone cites these kinds of examples, of an LLM beating some test or other, as some kind of validation. It isn't.

To me that just says that the tests are poor, not that the LLMs are good. Designing and curating a good test is hard and expensive.

Certifying and examination bodies often use knowledge as a proxy for understanding, reasoning, or any critical-thinking skills. They just need to filter enough people out; there is no competitive pressure to improve quality at all. Knowledge tests do that just as well and are cheaper.

Standardization is also hard to do correctly; Common Core is a classic example of how it changes incentives for both teachers and students. Goodhart's law also applies.

To me it is more often than not a function of poor test measurement practices rather than any great skill shown by the LLM.

Passing the CFA or the bar exam, while daunting for humans by design, does not teach you anything about practicing law or accounting. Managing the books of a real company is nothing like what the textbooks and exams teach you.

—-

The best accountants or lawyers etc. are not making partner because of their knowledge of the law and tax. They make money the same way as everyone else - networking and building customer relationships. As long as the certification bodies don't flood the market they will do well, and that is what the test ensures.

reply
crystal_revenge
4 days ago
[-]
> To me that just tells that the tests are poor, not the LLMs are good.

I mean the same is true of leetcode but I know plenty of mediocre engineers still making ~$500k because they learned how to grind leetcode.

You can argue that the world is unjust till you're blue in the face, but it won't make it a just world.

reply
manquer
4 days ago
[-]
There are influencers making many multiples of that for doing far less. Monetary returns have rarely, if ever, reflected skill, social value, or talent in capitalist economies; this has always been the case, so I'm not sure how that is relevant.

I was merely commenting, as an observer, on why these tests exist and the dynamics of the measurement industry; we shouldn't conflate the exclusivity or difficulty of a test with its quality or objective.

reply
disconcision
4 days ago
[-]
Sure, but if companies find that LLM performance on tests is less correlated with actual job performance than human test performance, then the test might not be a useful metric to inform automation decisions.
reply
oangemangut
4 days ago
[-]
Having also worked on desks in the 00s and early 10s, I think a big difference here is that what trading meant really changed; much of what traders did went away with innovations in speed. Speed and algos became the way to trade, neither of which humans can do. While SWE became significantly more important on trading desks, you still have researchers, quants, portfolio analysts, etc. that spend their working days developing new algos, new market opportunities, ways to minimize TCOS, etc.

That being said, there's also a massive amount of low-hanging fruit in dev work that we'll automate away, and I feel like that's coming sooner rather than later, yes, even though we've been saying that for decades. However, I bet that the incumbents (senior SWEs) have a bit longer of a runway, and potentially their economic rent increases as they're able to be more efficient and companies need not hire as many humans as they needed before. Will be an interesting go these next few decades.

reply
skydhash
4 days ago
[-]
> That being said, there's also a massive low hanging fruit in dev work that we'll automate away

And this has been solved for years already with existing tooling: debuggers, IntelliSense, linters, snippets and other code-generation tools, build systems, framework-specific tooling... There are a lot of tools for writing and maintaining code. The only thing left was always the understanding of the system that solves the problem and knowledge of the tools to build it. And I don't believe we can automate this away. Using LLMs is like riding a drugged donkey instead of a motorbike. It can only work for very short distances or for the thrill.

In any long-lived project, most modifications are only a few lines of code. The most valuable thing is the knowledge of where and how to edit, not the ability to write 400 lines of code in 5 seconds.

reply
brandall10
4 days ago
[-]
"And now, at the end of 2024, I’m finally seeing incredible results in the field, things that looked like sci-fi a few years ago are now possible: Claude AI is my reasoning / editor / coding partner lately. I’m able to accomplish a lot more than I was able to do in the past. I often do more work because of AI, but I do better work."

https://antirez.com/news/144

If the author of Redis finds novel utility here then it's likely useful beyond React boilerplatey stuff.

I share a similar sentiment since 3.5 Sonnet came out. This goes far beyond dev tooling ergonomics. It's not simply a fancy autocomplete anymore.

reply
fy20
4 days ago
[-]
"AI didn’t replace me, AI accelerated me or improved me with feedback about my work"

This really sums up how I feel about AI at the moment. It's like having a partner with broad knowledge about anything, who you can ask any stupid question. If you don't want to do a small boring task you can hand it off to them. It lets you focus on the important stuff, not "what's the option in this library called to do this thing that I can describe but don't know the exact name for?".

If you aren't taking advantage of that, then yes, you are probably going to be replaced. It's like when version control became popular in the 00s, where some people and companies still held out in their old way of doing things, copying and pasting folders or whatever other nasty workflows they had, because $reasons... where the only real reason was that they didn't want to adapt to the new paradigm.

reply
jvanderbot
4 days ago
[-]
This actually surfaces a much more likely scenario: That it's not our jobs that are automated, but a designed-from-scratch automated sw/eng job that just replaces our jobs because it's faster and better. It's quite possible all our thinking is just required because we can only write one prototype at a time. If you could generate 10 attempts / day, until you have a stakeholder say "good enough", you wouldn't need much in the way of requirements, testing, thinking, design, etc.
reply
lifeisstillgood
4 days ago
[-]
This is interesting - yes of course true

But like so much of this thread “we can do this already without AI, if we wanted”

Want to try 5/10 different approaches a day? Fine - get your best stakeholders and your best devs and lock them in a room on the top floor and throw in pizza every so often.

Projects take a long time because we allow them to. (NB: this is not the same as setting tight deadlines; this is having a preponderance of force on our side.)

reply
datavirtue
4 days ago
[-]
We need that reduction in demand for workers, though. Backfilling is not going to be a thing for a population in decline.
reply
epolanski
4 days ago
[-]
I don't see it.

Trading is about doing very specific math in a very specific scenario with known expectations.

Software engineering is anything but like that.

reply
grugagag
4 days ago
[-]
Yes, software engineering is different in many areas, but today a lot of it is CRUD and plumbing. While SW engineering will not die it will certainly transform a lot; quite possibly there will be fewer generalists than today and more specialized branches will pop up - or maybe being a generalist will require one to be familiar with many new areas. Likely the code we write today will go the same way writing assembly code went, and sure, it will not completely disappear but...
reply
skydhash
4 days ago
[-]
> software engineering is different in many areas but today a lot of it is CRUD and plumbing

Which you can knock out in a few days with frameworks and code reuse. The rest of the time is mostly spent on understanding the domain, writing custom components, and fixing bugs.

reply
jvanderbot
4 days ago
[-]
The most insightful thing here would have been to learn how those traders survived, adapted, or moved on.

It's possible everyone just stops hiring new folks and lets the incumbents automate it. Or it's possible they all washed cars the rest of their careers.

reply
dweinus
4 days ago
[-]
I knew a bunch of them. Most of them moved on: retired or started over in new careers. It hit them hard. Maybe not so hard because trading was a lucrative career, but most of us don't have that kind of dough to fall back on.
reply
dogman144
3 days ago
[-]
+1 this. Saw it first hand, it was ugly. Post-9/11 stress and ‘08 cleared out a lot of others. Interestingly, I’ve seen some of them surface in crypto.
reply
dogman144
3 days ago
[-]
To answer what I saw, some blend of this:

- Post-9/11 stress and '08 were big jolts that pushed a lot of folks out.

- Managed their money fine (or not) for when the job slowed down and also when ‘08 hit

- “traders” became “salespeople” or otherwise managing relationships

- Saw the trend, leaned into it hard, you now have Citadel, Virtu, JS, and so on.

- Saw the trend, specialized or were already in assets hard to automate.

- Senior enough to either steer the algo farms + jr traders, or become an algo steerer themselves

- Not senior enough, not rich enough, not flexible enough, or not interested anymore, and now drive for Uber, run mobile dog-washing businesses, or have joined law enforcement (three examples I know of).

reply
ChuckMcM
4 days ago
[-]
I like this comment, it is exceptionally insightful.

An interesting question is "How is programming like trading securities?"

I believe an argument can be made that the bulk of what goes for "programming" today is simply hooking up existing pieces in ways that achieve a specific goal. When the goal can be adequately specified[1] the task of hooking up the pieces to achieve that goal is fairly mechanical. Just like the business of tracking trades in markets and extracting directional flow and then anticipating the flow by enough to make a profit is something trading algorithms can do.

What trading software has a hard time doing is coming up with new securities. What LLMs absolutely cannot do (yet?) is come up with novel mechanisms. To illustrate that, consider the idea that an LLM has been trained on every kind of car there is. If you ask it to design a plane it will fail. Train it on all cars and planes and ask it to design a boat, same problem. Train it on cars, planes, and boats and ask it to design a rocket, same problem.

The sad truth is that a lot of programming is 'done' , which is to say we have created lots of compilers, lots of editors, lots of tools, lots of word processors, lots of operating systems. Training an LLM on those things can put all of the mechanisms used in all of them into the model, and spitting out a variant is entirely within the capabilities of the LLM.

Thus the role of humans will continue to be to do the things that have not been done yet. No LLM can design a quantum computer, nor can it design a compiler that runs on a quantum computer. Those things haven't been "done" and they are not in the model. The other role of humans will continue to be 'taste.'

Taste, as defined as an aesthetic, something that you know when you see it. It is why for many, AI "art" stands out as having been created by AI, it has a synthetic aesthetic. And as one gets older it often becomes apparent that the tools are not what determines the quality of the output, it is the operator.

I watched Dan Silva do some amazing doodles with Deluxe Paint on the Amiga and I thought, "That's what I want to do!" and ran out and bought a copy and started doodling. My doodles looked like crap :-). The understanding that I would have to use the tool, find its strengths and weaknesses, and then express through it was clearly a lot more time consuming than "get the tool and go."

LLMs let people generate marginal code quickly. For so many jobs that is good enough. Generating really good code, taking in constraints that the LLM can't model, is something that will remain the domain of humans until GAI is achieved[2]. So careers in things like real-time and embedded systems will probably still have a lot of humans involved, and systems where extracting every single compute cycle out of the engine is a priority will likely be dominated by humans too.

[1] Very early on there were papers on 'genetic' programming. It's a good thing to read them because they arrive at a singularly important point: "How do you define 'which is better'?" For a solid, qualitative, and testable metric for 'goodness', genetic algorithms outperform nearly everything. When the ability to specify 'goodness' is not there, genetic algorithms cannot outperform humans. What is more, they cannot escape 'quality moats', where the solutions on the far side of the moat are better than the solutions being explored, but the algorithm cannot get far enough into the 'bad' solutions to start climbing up the hill on the other side toward the 'better' solutions.
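
To make [1] concrete, here's a minimal illustrative sketch of a genetic algorithm on a toy problem of my own choosing (Python; counting 1-bits, not taken from any of those papers). Every step below - selection, survival, stopping - is driven purely by the fitness function; if you can't write that function, the loop has nothing to climb toward:

    import random

    # Toy "which is better" metric: number of 1-bits in a fixed-length genome.
    # Everything below is steered by this single function.
    def fitness(genome):
        return sum(genome)

    def mutate(genome, rate=0.02):
        return [bit ^ 1 if random.random() < rate else bit for bit in genome]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def evolve(pop_size=50, genome_len=64, generations=200):
        population = [[random.randint(0, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the fitter half...
            population.sort(key=fitness, reverse=True)
            parents = population[:pop_size // 2]
            # ...and refill with mutated offspring of random parents.
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    print(fitness(evolve()))  # approaches 64, because this metric is easy to optimize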

[2] GAI being "Generalized Artificial Intelligence" which will have to have some way of modelling and integrating conceptual systems. Lots of things get better then (like self driving finally works), maybe even novel things. Until we get that though, LLMs won't play here.

reply
VirusNewbie
3 days ago
[-]
>I believe an argument can be made that the bulk of what goes for "programming" today is simply hooking up existing pieces in ways that achieve a specific goal. When the goal can be adequately specified[1] the task of hooking up the pieces to achieve that goal is fairly mechanical. Just like the business of tracking trades in markets and extracting directional flow and then anticipating the flow by enough to make a profit is something trading algorithms can do.

Right, but when Python became popular it's not like we reduced the number of engineers 10-fold, even though it used to take a team 10x as long to write similar functionality in C++.

reply
erik_seaberg
3 days ago
[-]
Software demand skyrocketed because of the WWW, which came out in 1991 just before Python (although Perl, slightly more mature, saw more use in the early days).
reply
aprilthird2021
4 days ago
[-]
Okay, but one thing you kinda miss is that trading (e.g. investing) is still one of the largest ways for people to make money. Even passively investing in ETFs is extremely lucrative.

If LLMs become so good that everyone can just let an LLM go into the world and make them money, the way we do with our investments, won't that be good?

reply
parentheses
21 hours ago
[-]
The financial markets are mostly a zero-sum game. This approach would not work.
reply
dogman144
3 days ago
[-]
you are missing that trading in what I’m talking about != “e.g investing.”

And, certainly, prob a good thing for some, bad thing for the money conveyor belt of the last 20 yrs of tech careers.

reply
aprilthird2021
3 days ago
[-]
I am not missing that. I understand the difference. I'm saying the economic engine behind trading is still good (investing). So while people don't do the trading as much (machines do it), the economic rewards are still there and we can still capture them. The same may be true in a potential future where software becomes automated.
reply
spaceman_2020
4 days ago
[-]
The key difference between trading and coding is that code often underpins non-critical operations - think of all the CRUD apps in small businesses - and there is no money involved, at least not directly.
reply
irunmyownemail
4 days ago
[-]
"See if you can match the above confidence from pre-automation traders with the comments displayed in this thread. You should plan for it aggressively, I certainly do."

Sounds like it was written by someone trying to keep any grasp on the fading reality of AI.

reply
9cb14c1ec0
4 days ago
[-]
> LLMs don't have to be "great," they just have to be "good enough."

NO NO NO NO NO NO NO!!!! It may be that some random script you run on your PC can be "good enough", but the software that my business sells can't be produced by "good enough" LLMs. I'm tired of my junior dev turning in garbage code that the latest and greatest "good enough" LLM created. I'm about to tell him he can't use AI tools anymore. I'm so thankful I actually learned how to code in pre-LLM days, because I know more than just how to copy and paste.

reply
wewtyflakes
4 days ago
[-]
You're fighting the tide with a broom.
reply
9cb14c1ec0
4 days ago
[-]
No, I'm fighting for the software I sell my customers to be actually reliable.
reply
wewtyflakes
4 days ago
[-]
I would not hire someone who eschews LLMs as a tool. That does not mean I would accept someone mindlessly shoving its code through review, as it is a tool, not a wholesale solution.
reply
ethbr1
4 days ago
[-]
That ship seems to have sailed at the same time boxed software went extinct.

How many companies still have dedicated QA orgs with skilled engineers? How many SaaS solutions have flat out broken features? Why is SRE now a critical function? How often do mobile apps ship updates? How many games ship with a day zero patch?

The industries that still have reliable software are because there are regulatory or profit advantages to reliability -- and that's not true for the majority of software.

reply
Panzer04
4 days ago
[-]
Not sure I agree. Where it matters people will prefer and buy products that "just work" compared to something unreliable.

People tolerate game crashes because you (generally) can't get the same experience by switching.

People wouldn't tolerate e.g. browsers crashing if they could switch to an alternative. The same would apply to a lot of software, with varying limits to how much shit will be tolerated before a switch would be made.

reply
krmboya
4 days ago
[-]
So the majority of software we have now is an unreliable pile of sh*t. Seems to check out, given how often I need to restart my browser to keep memory usage under control.
reply
hi_hi
4 days ago
[-]
I don't think the problem is the LLM.

I think of LLMs like clay, or paint, or any other medium where you need people who know what they're doing to drive them.

Also, I might humbly suggest you invest some time in the junior dev and ask yourself why they keep on producing "garbage code". They're junior, they aren't likely to know the difference between good and bad. Teach them. (Maybe you already are, I'm just taking a wild guess)

reply
dogman144
3 days ago
[-]
You might care about that but do you think your sales team does?
reply
thegrim33
4 days ago
[-]
I don't worry about it, because:

1) I believe we need true AGI to replace developers.

2) I don't believe LLMs are currently AGI or that if we just feed them more compute during training that they'll magically become AGI.

3) Even if we did invent AGI soon and replace developers, I wouldn't even really care, because the invention of AGI would be such an insanely impactful, world changing, event that who knows what the world would even look like afterwards. It would be massively changed. Having a development job is the absolute least of my worries in that scenario, it pales in comparison to the transformation the entire world would go through.

reply
janalsncm
4 days ago
[-]
To replace all developers, we need AGI yes. To replace many developers? No. If one developer can do the same work as 5 could previously, unless the amount of work expands then 4 developers are going to be looking for a job.

Therefore, unless you for some reason believe you will be in the shrinking portion that cannot be replaced I think the question deserves more attention than “nothing”.

reply
simplyluke
4 days ago
[-]
Frameworks, compilers, and countless other developments in computing massively expanded the efficiency of programmers and that only expanded the field.

Short of genuine AGI I’ve yet to see a compelling argument why productivity eliminates jobs, when the opposite has been true in every modern economy.

reply
janalsncm
4 days ago
[-]
> Frameworks, compilers, and countless other developments in computing

How would those have plausibly eliminated jobs? Neither frameworks nor compilers were the totality of the tasks a single person previously was assigned. If there was a person whose job it was to convert C code to assembly by hand, yes, a compiler would have eliminated most of those jobs.

If you need an example of automation eliminating jobs, look at automated switchboard operators. The job of human switchboard operator (mostly women btw) was eliminated in a matter of years.

Except here, instead of a low-paid industry we are talking about a relatively high-paid one, so the returns would be much higher.

A good analogy can be made to outsourcing for manufacturing. For a long time Chinese products were universally of worse quality. Then they caught up. Now, in many advanced manufacturing sectors the Chinese are unmatched. It was only hubris that drove arguments that Chinese manufacturing could never match America’s.

reply
amrocha
4 days ago
[-]
I have a coworker who I suspect has been using LLMs to write most of his code for him. He wrote multiple PRs with thousands of lines of code over a month.

The other senior dev and I spent weeks reviewing these PRs. Here's what we found:

- The feature wasn’t built to spec, so even though it worked in general the details were all wrong

- The code was sloppy and didn't adhere to the repo's guidelines

- He couldn’t explain why he did things a certain way versus another, so reviews took a long time

- The code worked for the happy path, and errored for everything else

Eventually this guy got moved to a different team and we closed his PRs and rewrote the feature in less than a week.

This was an awful experience. If you told me that this is the future of software I’d laugh you out of the room, because engineers make enough money and have enough leverage to just quit. If you force engineers to work this way, all the good ones will quit and retire. So you’re gonna be stuck with the guys who can’t write code reviewing code they don’t understand.

reply
janalsncm
4 days ago
[-]
In the short term, I share your opinion. LLMs have automated the creation of slop that resembles code.

In the long term, we SWEs (like other industries) have to own the fact that there’s a huge target on our backs, and aside from hubris there’s no law of nature or man preventing people smarter than us from building robots that do our jobs faster than us.

reply
amrocha
3 days ago
[-]
That’s called class consciousness, and I agree, and I think most people have already realized companies are not their friend after the last two years of layoffs.

But like I said, I’m not worried about it in the imminent future, and I have enough leverage to turn down any jobs that want me to work in that way.

reply
JamesBarney
4 days ago
[-]
I think not only is this possible, it's likely for two reasons.

1. A lot more projects get the green light when the price is 5x less, and many more organizations can afford custom applications.

2. LLMs unlock large amounts of new applications. A lot more of the economy is now automatable with LLMs.

I think jr devs will see the biggest hit. If you're going to teach someone how to code, you might as well teach a domain expert. LLMs already code much better than almost all jr devs.

reply
natemwilson
4 days ago
[-]
It’s my belief that humanity has an effectively infinite capacity for software and code. We can always recursively explore deeper complexity.
reply
janalsncm
4 days ago
[-]
As we are able to automate more and more of the creation process, we may be able to create an infinite amount of software. However, what sustains high wages for our industry is a constrained supply of people with an ability to create good software.
reply
tqi
4 days ago
[-]
I'm not sure it is as simple as that - Induced Demand might be enough to keep the pool of human-hours steady. What that does to wages though, who can say...
reply
janalsncm
4 days ago
[-]
Well it certainly depends on how much induced demand there is, and how much automation can multiply developer productivity.

If we are talking an 80% reduction in developers needed per project, then we would need 5x the amount of software demand in the future to avoid a workforce reduction.

reply
j45
4 days ago
[-]
I think counting the number of devs might not be the best way to go, considering not all teams are equally capable, nor every person in them equally skilled, and in enterprises some people are inevitably hiding in a project or team.

Comparing only the amount of forward progress in a codebase, and AI's ability to participate in or cover for it, might be better.

reply
throwawayffffas
4 days ago
[-]
That's a weird way to look at it. If one developer can do what 5 could do before, that doesn't mean I will be looking for a job, it means I will be doing 5 times more work.
reply
Valord
3 days ago
[-]
5x throughput/output not 5x work
reply
fire_lake
4 days ago
[-]
> unless the amount of work expands

This is what will happen

reply
akira2501
4 days ago
[-]
Even if AGI suddenly appears we will most likely have an energy feed and efficiency problem with it. These scaling problems are just not on the common roadmap at all and people forget how much effort typically has to be spent here before a new technology can take over.
reply
plantwallshoe
3 days ago
[-]
This is where I’m at too. By the time we fully automate software engineering, we definitely will have already automated marketing, sales, customer success, accountants, office administrators, program managers, lawyers, etc.

At that point it's total societal upheaval, and losing my job will probably be the least of my worries.

reply
throw234234234
4 days ago
[-]
1) Maybe.

2) I do see this, given the money poured into this cycle, as a potential possibility. It may not just be LLMs. As another comment put it, you are betting against the whole capitalist system, human ingenuity, and billions (trillions?) of dollars targeted at making SWEs redundant.

3) I think it can disrupt only knowledge jobs, and it will be a long time before it disrupts physical jobs. For SWEs this is the worst outcome - it means you are on your own w.r.t. adjusting for the changes coming. It's only "world changing", as you put it, to economic systems if it disrupts everyone at once. I don't think it will happen that way.

More to the point, software engineers will automate themselves out before other jobs for one reason - they understand AI better than people in other jobs do (even if objectively their work is harder to automate), and they tend not to protect the knowledge required to do it. They have the domain knowledge to know what to automate/make redundant.

The people that have the power to resist/slow down disruption (i.e. hide knowledge) will gain more pricing power, and therefore be able to earn more capital, taking advantage of the efficiency gains made by jobs being made redundant by AI. The last to be disrupted have the most opportunity to gain ownership of assets and capital from their preserved economic profits. The inefficient will win out of this - capital rewards scarcity, i.e. people who can remain in demand despite being relatively inefficient. Competition is for losers - it's IMV the biggest flaw of the system. As a result people will see what has happened to SWEs and make sure their own industry "has time" to adapt, particularly since many knowledge professions are really "industry unions/licensed clubs" that have the advantage of keeping their domain knowledge harder to access.

To explain it further: even if software is more complicated, there is just so much more capital trying to disrupt it than other industries. Given that (IMV) software demand is relatively inelastic to price due to scaling profits, making it cheaper to produce won't really benefit society all that much w.r.t. more output (i.e. what was economically good to build would have been built anyway in an inelastic-demand/scaling commodity). Generally more supply/lower cost of a good has more absolute societal benefit when there is unmet and/or elastic demand. Instead, the cost of SWEs will go down and the benefit will be distributed to the jobs/people remaining (managers, CEOs, etc.) - the people that devs think "are inefficient", in my experience. When it is about inelastic demand it is more redistributive: the customer benefits and the supplier (in this case SWEs) loses.

I don't like saying this; but we gave AI all the advantage. No licensing requirements, open source software for training, etc.

reply
flustercan
3 days ago
[-]
> I think it can disrupt only knowledge jobs

What happens when a huge chunk of knowledge workers lose their job? Who is going to buy houses, roofs, cars, cabinets, furniture, amazon packages, etc. from all the blue-collar workers?

What happens when all those former knowledge workers start flooding the job markets for cashiers and factory workers, or applying en masse to the limited spots in nursing schools or trade programs?

If GPTs take away knowledge work at any rate above "glacially slow" we will quickly see a collapse that affects every corner of the global economy.

At that point we just have to hope for a real revolution in terms of what it means to own the means of production.

reply
throw234234234
3 days ago
[-]
- The people that are left and the people that it doesn't happen to straight away: i.e. the people who still have something "scarce". That's what capitalism and/or any system that rations resources based on price/supply/demand does. This includes people in things slower to automate (think trades, licensed work, etc.) and other economic resources (landowners, capital owners, general ownership of assets). Inequality will widen and the people winning will buy the houses, cars, etc. The businesses pitching to the poor/middle classes might disappear. Even if you are disrupted eventually, being slower to disrupt gives you relatively more command of income/rent/etc. than the ones disrupted before you, giving you a chance to transition that income into capital/land/real assets which will remain scarce. Time is money. AI is a real asset holder's dream.

- Unskilled work will become even more diminished: a lot of people in power are counting on this to solve things like aging-population care, etc. Moving from coding software to doing the hard work in a nursing home, for example, is a deflationary force and makes the older generations (who typically have more wealth) even wealthier, as the effect would be deflationary overall and amplify their wealth. The industries that benefit (at the expense of ones that don't) will be the ones that can appeal to the winners - resources will be redirected at them.

- Uneven disruption rates: I disagree that the AI disruption force will be even - I think the academic types will be disrupted much more than the average person. My personal opinion is that anything in the digital world can be disrupted much more quickly than the physical realm, for a number of reasons (cost of change/failure, energy, rules-of-physics limitations, etc.). This means that as a society there will be no revolution (i.e. "it was your fault for doing that; why should the rest of society bear the cost? be adaptable..."). This has massive implications for what society values long term and the type of people valued in the new world, socially, in personal relationships, etc.

i.e. Software devs/ML researchers/any other white-collar job in the long run have shot themselves in the foot IMO. The best they can hope for is that LLMs do have a limit to their progress, that there is an element of risk to the job that still requires some employment, and that time is given to adjust. I hope I'm wrong since I would be affected too. No one will feel sorry for them - after all, other professions know better than to do this to themselves on average, and they have also caused a lot of disruption themselves (a taste of their own medicine, as they say).

reply
mianos
4 days ago
[-]
I am 61, and I have been a full-time developer since I was about 19. I have lost count of the 'next things to replace developers' over the years. Many of them showed promise. Many of them continue to be developed. Frameworks with higher and higher levels of abstraction.

I see LLMs as the next higher level of abstraction.

Does this mean it will replace me? At the moment the output is so flawed for anything but the most trivial professional tasks that I simply see, as before, it has a long, long way to go.

Will it put me out of a job? I highly doubt it in my career. I still love it and write stuff for home and work every day of the week. I'm planning on working until I drop dead, as it seems I have never lost interest so far.

Will it replace developers as we know it? Maybe in the far future. But we'll be the ones using it anyway.

reply
jb_briant
4 days ago
[-]
I have a decade of experience writing web code professionally, and in my experience LLMs are a true waste of time for web work.

On the other hand, I'm switching to game dev and it has become a very useful companion, outputting well-known algorithms. It's more like a universal API than a junior assistant.

Instead of taking the time to understand the algo in detail and then implementing it, I use GPT-4o to expand the Unreal API with missing parts. It truly expands the scope I'm able to handle, and it feels good to save hours that compound into days and weeks of work.

Eg. 1. OBB and SAT (a rough sketch of the overlap test is below) https://stackoverflow.com/questions/47866571/simple-oriented...

2. Making a grid system using lat/long coordinates for a voxel planet.
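
For the curious, the SAT check in example 1 boils down to roughly this - an illustrative Python sketch rather than the actual Unreal C++, with names invented for the example, just to show the shape of what the LLM fills in:

    def edge_normals(poly):
        # Perpendicular of each edge of a convex polygon given as [(x, y), ...].
        n = len(poly)
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            yield (-(y2 - y1), x2 - x1)

    def project(poly, axis):
        dots = [x * axis[0] + y * axis[1] for x, y in poly]
        return min(dots), max(dots)

    def sat_overlap(a, b):
        # Two convex polygons (e.g. OBBs) overlap iff no edge normal of
        # either polygon separates their projections.
        for axis in list(edge_normals(a)) + list(edge_normals(b)):
            a_min, a_max = project(a, axis)
            b_min, b_max = project(b, axis)
            if a_max < b_min or b_max < a_min:
                return False  # found a separating axis
        return True

    # Two overlapping boxes:
    print(sat_overlap([(0, 0), (2, 0), (2, 2), (0, 2)],
                      [(1, 1), (3, 1), (3, 3), (1, 3)]))  # True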

reply
baq
4 days ago
[-]
> I have a decade of experience writing web code professionally, in my experience, LLM is a true waste of time regarding web.

As someone who knows web front-end development only to the extent I need it for internal tools, it's been turning day-long fights into single-hour, dare I say pleasurable, experiences. I tell it to make a row of widgets and it outputs all the div+css soup (or e.g. Material components) that only needs some tuning, instead of me having to google everything.

It still takes experience to know when it doesn’t use a component when it should etc., but it’s a force multiplier, not a replacement. For now.

reply
Scarblac
4 days ago
[-]
That's also my experience, the AI is very helpful for doing common things with languages you're not great at. Makes it easier to pick up a few extra tools to use now and then.
reply
mianos
4 days ago
[-]
This I agree with. It is awesome when you don't know something. I have been doing some TypeScript recently and it's not something I have used professionally in anger before.

But, as I learn it, GPT 4o becomes less and less useful for anything but short questions to fill in gaps. I am already way better than it at anything more than a few pages long. Getting it to do something substantial is pretty much an exercise in frustration.

reply
jb_briant
4 days ago
[-]
I can totally understand it! I would have loved to have access to an LLM back then. Especially since I started learning just after the <table> era, but we didn't have "flex" yet. It was true garbage...

Now I mostly write JSX / Tailwind, which is way faster than prompting, but surely because I'm fluent in it.

reply
anonzzzies
4 days ago
[-]
> Now I mostly JSX / tailwind, which is way faster than prompting,

Not saying that is not true, but did you actually measure that, or is it a feeling, or did you just not spend much time on getting a prompt you can plop your requests into? JSX and Tailwind are very verbose; frontend is mostly very verbose, and, unless you are building LoB apps, you will have to try things a few times. Cerebras and Groq with a predefined copy-paste prompt will generate all that miserable useless slop (yes, I vehemently hate frontend work; I do hope it will be replaced very soon by LLMs completely; it's close but not quite there yet) in milliseconds so you can just tweak it a bit. I have been building web frontends since the mid 90s; before that I did DOS, Windows and Motif ones (which was vastly nicer; there you could actually write terse frontend code). I see many companies from the inside and I have not seen anyone faster at frontend than current LLMs, so I would like to see a demo. In logic I see many people who are faster, as the LLM often simply cannot figure it out even remotely.

reply
jb_briant
4 days ago
[-]
I prompt a lot every day for various reasons, including, but not exclusively, coding. Because JSX/TW is indeed very verbose, it requires a lot of accuracy: there are so many places where you have to be just perfect, and this is something LLMs are inherently incapable of.

Don't get me wrong, it will output the code faster than me. But overall I will spend more time prompting + correcting, especially when I want to achieve special designs which don't look like basic "Bootstrap". It's also much less enjoyable to tweak a prompt than to just spit out JSX/TW, which doesn't require a lot of effort for me.

I don't have a demo to justify my claim and I'm totally fine if you dismiss my message because of that.

I reckon I just don't like front-ending with LLMs yet; maybe one day it will be much better.

My website where I tried some LLM stuff and was disappointed: https://ardaria.com

reply
WWLink
4 days ago
[-]
> I tell it to make a row of widgets and it outputs all the div+css soup (or e.g. material components) that only needs some tuning instead of having to google everything.

As someone with a fairly neutral/dismissive/negative opinion on AI tools, you just pointed out a sweet silver lining. That would be a great use case for when you want to use clean html/css/js and not litter your codebase with a gazillion libraries that may or may not exist next year!

reply
zelphirkalt
4 days ago
[-]
Given how bad most DOM trees look out there (div soup, thousands of unneeded elements, very deep nesting, direct styling, HTML element hacks, not using semantically appropriate elements, wrong nesting of elements, and probably more), I would be surprised if LLMs produce non-div-soup, responsive HTML and CSS the way I would write it myself. Some day I should test whether such an LLM is able to output a good HTML document with guidance, or whether the sheer number of bad websites out there has forever poisoned the learned statistics.
reply
oneeyedpigeon
4 days ago
[-]
> it outputs all the div+css soup

Is there a difference between soup and slop? Do you clean up the "soup" it produces or leave it as is?

reply
baq
4 days ago
[-]
It’s soup in the sense you need it for the layout to work in the browser. I clean up when I know there’s a component for what I need instead of a div+span+input+button or whatever use case is being solved. I mostly tweak font sizes, paddings and margins, sometimes rearrange stuff if the layout turns out broken (usually stuff just doesn’t fit the container neatly, don’t remember an instance of broken everything).
reply
Copyrighted
4 days ago
[-]
Same here. Really liking asking it Unreal C++ trivia.
reply
ipnon
4 days ago
[-]
Yes, and if you look back on the history of the tech industry, each time programmers were supposedly on the verge of replacement was an excellent time to start programming.
reply
fendy3002
4 days ago
[-]
What people expect: this will replace programmers

What really happened: this is used by programmers to improve their workflow

reply
Aeolun
4 days ago
[-]
The best time was 20 years ago. The second best time is now :)
reply
mianos
4 days ago
[-]
It was awesome 20 years before that, and the 10s in the middle. It's great now too.
reply
euroderf
4 days ago
[-]
Let's say we get an AI that can take well-written requirements and cough up an app.

I think you have to be a developer to learn how to write those requirements well. And I don't even mean the concepts of data flows and logic flows. I mean, just learning to organise thoughts in a way that they don't fatally contradict themselves or lead to dead ends or otherwise tie themselves in unresolvable knots. I mean like non-lawyers trying to write laws without any understanding of the entire suite of mental furniture.

reply
mianos
4 days ago
[-]
That is exactly what I mean when I said "we'll be the ones using it".

I didn't want to expand it, for fear of sounding like an elitist, and you said it better anyway. The same things that make a programmer excellent will be in a much better position to use an even better LLM.

Concise thinking and expression. At the moment LLMs will just kinda 'shotgun' scattered ideas based on your input. I expect the better ones will be massively better when fed better input.

reply
anonzzzies
4 days ago
[-]
I have seen this happening since the 70s. My father made a tool to replace programmers. It did not, and it disillusioned him greatly. It sold very well though. But this time is different: I have often said that I thought ChatGPT-like AI was at least 50 years out, but it's here, and it does replace programmers every day. I see many companies from the inside (I have a troubleshooting company; we get called to fix very urgent stuff, and we only do that, so we hop around a lot) and many are starting to replace outsourced teams with, for instance, our product (which was a side project for me to see if it could be done; it can). But also just shrinking teams and giving them Claude or GPT to replace the bad people.

It is happening, just not yet at a scale that really scares people; that will happen though. It is just stupidly cheaper: for the price of one junior you can make so many API requests to Claude it's not even funny. Large companies are still thinking about privacy of data etc., but all of that will simply not matter in the long run.

Good logical thinkers and problem solvers won't be replaced any time soon, but mediocre or bad programmers are already gone; an LLM is faster, cheaper, and doesn't get ill or tired. And there are so many of them - just try someone on Upwork or Fiverr and you will know.

reply
zoobab
4 days ago
[-]
"Large companies are still thinking about privacy of data etc but all of that will simply not matter in the long run."

Privacy is a fundamental right.

And companies should care about not leaking trade secrets, including code, but the rest as well.

US companies are known to use the cloud to spy on competitors.

Companies should have their own private LLM, not rely on cloud instances with a contract that guarantees "privacy".

reply
anonzzzies
4 days ago
[-]
I know and agree: it's strange how many companies and people just step over and trample on their own privacy. Or export private info to other countries. But what I mean here is that, good or bad, money always wins. Unfortunately also over fundamental rights. That sucks, but it still happens and will keep happening.
reply
tugu77
4 days ago
[-]
> Large companies are still thinking about privacy of data etc but all of that will simply not matter in the long run.

That attitude is why our industry has such a bad rep and why things are going down the drain to dystopia. Devs without ethics. This world is doomed.

reply
anonzzzies
4 days ago
[-]
It definitely is doomed. But somehow we might survive.
reply
fendy3002
4 days ago
[-]
> It is just stupidly cheaper; for the price of one junior you can do so many api requests to claude it's not even funny

Idk if the cheap price is really the price, or a promotional price that will be followed by enshittification and price increases, which is the trend for most tech companies.

And agreed, LLMs will weed out bad programmers further, though in the future bad alternatives (like analysts or bad LLM users) may emerge.

reply
tugu77
4 days ago
[-]
"For the price of a cab you can take so many ubers it's not even funny". Yeah, until the price dumping stops and workers demand better conditions and you start comparing apples to apples.

Why do you have bad programmers on staff? Let them go, LLMs existing or not.

reply
fendy3002
4 days ago
[-]
Bad programmers aren't a black-and-white situation; there's a wide range of programmer quality out there, across different areas too (analysis, programming, DB design, debugging, etc.). And as standard, companies need to adjust cost and requirements, while those are affected by supply and demand too. Moreover, most people cannot measure programmer quality easily, and bad programmers have a delayed effect too. So it's not as simple as not hiring bad programmers.

LLMs, however, will change the playing field. Let's say the bottom 20 or 30% of programmers will be replaced by LLMs, in the form of increased performance by the other 70%.

reply
tugu77
4 days ago
[-]
> Moreover, common people cannot measure programmers quality as easy

So you can't fire the bad apples because you don't know who they are, but you feel confident that you can pick out whom to replace by an LLM and then fire?

It's hopefully obvious to everybody that this is a hopelessly naïve take on things and is never going to play out that way.

reply
anonzzzies
4 days ago
[-]
People hire teams from some countries because they are cheap and they are terrible. LLMs are not good but better than many of these and cheaper. It's not really changing much in code quality or process, just a lot cheaper.
reply
tugu77
4 days ago
[-]
Companies like that deserve to die out since the quality they bring to the market is horrendous either way.
reply
anonzzzies
4 days ago
[-]
I agree with all the remarks you made; but how old are you? It all seems pretty naive. The world unfortunately is only about money and nothing else; it is terrible, but that's how it is. Better worlds are eradicated by this reality all the time.
reply
anonzzzies
4 days ago
[-]
It is by far most companies. But yes, I agree.
reply
smusamashah
4 days ago
[-]
Which tool did he make?
reply
anonzzzies
4 days ago
[-]
I do not want to dox myself but if you were in enterprise IT in the 80s in europe you will have heard of it or used it.
reply
pandemic_region
4 days ago
[-]
Hey man, thanks for posting this. There's about ten years between us. I too hope that they will need to pry a keyboard from my cold dead hands.
reply
ido
4 days ago
[-]
I'm only 41 but that's long enough to have also seen this happen a few times (got my first job as a professional developer at age 18). I've also dabbled in using copilot and chatgpt and I find at most they're a boost to an experienced developer- they're not enough to make a novice a replacement for a senior.
reply
bee_rider
4 days ago
[-]
I think the concern is that they might become good enough for a senior to not need a novice. At that point where do the seniors come from?
reply
baq
4 days ago
[-]
We’re here already. Nobody wants to hire junior devs anymore. Seniors are still a couple phone calls away from new positions.
reply
ido
4 days ago
[-]
I got my first job in 2001, and it was like that then as well (maybe even worse). It got better when the market picked up again. I'm confident there still won't be "enough developers" when that happens, just like there's never "enough electricity" despite the fact that the power grid keeps getting expanded all the time - people find a use for even more.
reply
eru
4 days ago
[-]
If developers become more productive (both juniors and seniors), there will be more demand for all of them.
reply
mantas
3 days ago
[-]
Or a few will be productive enough to cover everything. Compare the number of bobcat operators vs ditch diggers.
reply
eru
3 days ago
[-]
There's an almost unlimited amount of demand for people who can write software or automate things, if you can lower buck-per-bang enough.
reply
ido
4 days ago
[-]
When the market heats up again companies will bite the bullet and hire juniors when they can’t get enough seniors.
reply
tugu77
4 days ago
[-]
That "need" is not as a helper but a cheap way to train the next gen senior which by the time they are senior know the ins and outs of your company's tech so well that they will be hard to replace by an external hire.

If your approach to juniors is that they are just cheap labour monkeys designed to churn out braindead crap and boost the ego of your seniors then you have a management problem and I'm glad I'm working somewhere else.

reply
taylodl
5 days ago
[-]
Back in the late 80s and early 90s there was a craze called CASE - Computer-Aided Software Engineering. The idea was humans really suck at writing code, but we're really good at modeling and creating specifications. Tools like Rational Rose arose during this era, as did Booch notation which eventually became part of UML.

The problem was it never worked. When generating the code, the best the tools could do was create all the classes for you and maybe define the methods for the class. The tools could not provide an implementation unless it provided the means to manage the implementation within the tool itself - which was awful.

Why have you likely not heard of any of this? Because the fad died out in the early 2000s. The juice simply wasn't worth the squeeze.

Fast-forward 20 years and I'm working in a new organization where we're using ArchiMate extensively and are starting to use more and more UML. Just this past weekend I started wondering given the state of business modeling, system architecture modeling, and software modeling, could an LLM (or some other AI tool) take those models and produce code like we could never dream of back in the 80s, 90s, and early 00s? Could we use AI to help create the models from which we'd generate the code?

At the end of the day, I see software architects and software engineers still being engaged, but in a different way than they are today. I suppose to answer your question, if I wanted to future-proof my career I'd learn modeling languages and start "moving to the left" as they say. I see being a code slinger as being less and less valuable over the coming years.

Bottom line, you don't see too many assembly language developers anymore. We largely abandoned that back in the 80s and let the computer produce the actual code that runs. I see us doing the same thing again but at a higher and more abstract level.

reply
lubujackson
4 days ago
[-]
This is more or less my take. I came in on Web 1.0 when "real" programmers were coding in C++ and I was mucking around with Perl and PHP.

This just seems like the next level of abstraction. I don't foresee a "traders all disappeared" situation like the top comment, because at the end of the day someone needs to know WHAT they want to build.

So yes, fewer junior developers, and development looking more like management/architecting. A lot more reliance on deeply knowledgeable folks to debug the spaghetti hell. But also a lot more designers that are suddenly Very Successful Developers. A lot more product people that super-charge things. A lot more very fast startups run by some shmoes with unfundable but ultimately visionary ideas.

At least, that's my best case scenario. Otherwise: SkyNet.

reply
Zababa
4 days ago
[-]
Here's to Yudkowsky's 84th law.
reply
neilv
4 days ago
[-]
I worked on CASE, and generally agree with this.

I think it's important to note that there were a couple distinct markets for CASE:

1. Military/aerospace/datacomm/medical type technical development. Where you were building very complex things, that integrated into larger systems, that had to work, with teams, and you used higher-level formalisms when appropriate.

2. "MIS" (Management Information Systems) in-house/intranet business applications. Modeling business processes and flows, and a whole lot of data entry forms, queries, and printed reports. (Much of the coding parts already had decades of work on automating them, such as with WYSIWYG form painters and report languages.)

Today, most Web CRUD and mobile apps are the descendant of #2, albeit with branches for in-house vs. polished graphic design consumer appeal.

My teams had some successes with #1 technical software, but UML under IBM seemed to head towards #2 enterprise development. I don't have much visibility into where it went from there.

I did find a few years ago (as a bit of a methodology expert familiar with the influences that went into UML, as well as familiar with those metamodels as a CASE developer) that the UML specs were scary and huge, and mostly full of stuff I didn't want. So I did the business process modeling for a customer logistics integration using a very small subset, with very high value. (Maybe it's a little like knowing hypertext, and then being teleported 20 years into the future, where the hypertext technology has been taken over by evil advertising brochures and surveillance capitalism, so you have to work to dig out the 1% hypertext bits that you can see are there.)

Post-ZIRP, if more people start caring about complex systems that really have to work (and fewer people care about lots of hiring and churning code to make it look like they have "growth"), people will rediscover some of the better modeling methods, and be, like, whoa, this ancient DeMarco-Yourdon thing is most of what we need to get this process straight in a way everyone can understand, or this Harel thing makes our crazy event loop with concurrent activities tractable to implement correctly without a QA nightmare, or this Rumbaugh/Booch/etc. thing really helps us understand this nontrivial schema, and keep it documented as a visual for bringing people onboard and evolving it sanely, and this Jacobson thing helps us integrate that with some of the better parts of our evolving Agile process.

reply
taylodl
4 days ago
[-]
As I recall, the biggest problem from the last go-around was the models and implementation were two different sets of artifacts and therefore were guaranteed to diverge. If we move to a modern incarnation where the AI is generating the implementation from the models and humans are no longer doing that task, then it may work as the models will now be the only existing set of artifacts.

But I was definitely in camp #2 - the in-house business applications. I'd love to hear the experiences from those in camp #1. To your point, once IBM got involved it all went south. There was a place I was working for in the early 90s that really turned me off against anything "enterprise" from IBM. I had yet to learn that would apply to pretty much every vendor! :)

reply
neilv
4 days ago
[-]
FWIW, CASE developers knew pretty early on that the separate artifacts diverging was a problem, or even the problem, and a lot of work was on trying to solve exactly that.

Approaches included having single source for any given semantics, and doing various kinds of syncing between the models.

Personally, I went to grad school intending to finally "solve" this, but got more interested in combining AI and HCI for non-software-engineering problems. :)

reply
627467
4 days ago
[-]
It's interesting you say this because, in my current process of learning to build apps for myself, I first try to build mermaid diagrams aided by an LLM. And when I'm happy, I then ask it to generate the code for me based on these diagrams.

I'm no SWE and probably never will be. SWEs probably don't consider what I do "building an app", but I don't really care.

reply
aprilthird2021
4 days ago
[-]
Diagramming out what needs to be built is often what some of the highest paid programmers do all day
reply
steve_adams_86
4 days ago
[-]
This is what keeps crossing my mind.

Even trivial things like an ETL pipeline for processing some data at my work fall into this category. It seemed trivial on its surface, but when I spoke to everyone about what we were doing with it and why (and a huge amount of context regarding the past and future of the project), the reason the pipeline wasn’t working properly was both technically and contextually very complex.

I worked with LLMs on solving the problems (I always do, I guess I need to “stay sharp”), and they utterly failed. I tried working from state machine definitions, diagrams, plain English, etc. They couldn’t pick up the nuances at all.

Initially I thought I must be over complicating the pipeline, and there must be some way to step it back and approach it more thoughtfully. This utterly failed as well. LLMs tried to simplify it by pruning entire branches of critical logic, hallucinating bizarre solutions, or ignoring potential issues like race conditions, parallel access to locked resources, etc. entirely.

It has been a bit of an eye opener. Try as I might, I can’t get LLMs to use streams to conditionally parse, group, transform, and then write data efficiently and safely in a concurrent and parallel manner.
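
To give a sense of the shape of it - a heavily simplified, illustrative Python sketch, not our actual code, with record fields invented for the example - the skeleton I kept trying to get out of them looked roughly like: conditionally parse, group, transform with bounded concurrency, then write behind a lock.

    import asyncio, json

    async def parse(lines):
        # Raw lines -> parsed records, dropping the ones we don't care about.
        for line in lines:
            record = json.loads(line)
            if record.get("kind") == "event":
                yield record

    async def transform(record, sem):
        # Bounded concurrency: at most N transforms in flight at once.
        async with sem:
            await asyncio.sleep(0)  # stand-in for real I/O
            return {**record, "value": record["value"] * 2}

    async def write(sink, rows, lock):
        # The sink is a shared resource, so serialize access to it.
        async with lock:
            sink.extend(rows)

    async def run(lines, sink):
        sem, lock = asyncio.Semaphore(8), asyncio.Lock()
        groups = {}
        async for record in parse(lines):
            groups.setdefault(record["group"], []).append(record)
        for _, records in groups.items():
            rows = await asyncio.gather(*(transform(r, sem) for r in records))
            await write(sink, rows, lock)

    sink = []
    lines = ['{"kind": "event", "group": "a", "value": 1}',
             '{"kind": "noise"}']
    asyncio.run(run(lines, sink))
    print(sink)  # [{'kind': 'event', 'group': 'a', 'value': 2}]

The real pipeline has far more branches and shared resources than that, which is exactly where the LLMs fell over.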

Had I done this with an LLM I think the result eventually could have worked, but the code would have been as bad as what we started with at best.

Most of my time on this project was spent planning and not programming. Most of my time spent programming was spent goofing around with LLM slop. It was fun, regardless.

reply
sleepybrett
4 days ago
[-]
Those LLMs all learned to code by devouring GitHub. How much good code is on GitHub, and how much terrible code?

My favorite thing is writing Golang with Copilot on. It makes suggestions that use various libraries and methods that were somewhat idiomatic several years ago but are now deprecated.

reply
indrora
4 days ago
[-]
> Bottom line, you don't see too many assembly language developers anymore.

And where you do, no LLM is going to replace them because they are working in the dark mines where no compiler has seen and the optimizations they are doing involve arcane lore about the mysteries of some Intel engineer's mind while one or both of them are on a drug fueled alcoholic deep dive.

reply
taylodl
4 days ago
[-]
Out of curiosity, who does assembly language programming these days? Back in the 90s the compilers had learned all our best tricks. Now with multiple instruction pipelines and out-of-order instruction processing and registers to be managed, can humans still write better optimized assembly than a compiler? Is the juice worth that squeeze?

I can see people still learning assembly in a pedagogical setting, but not in a production setting. I'd be interested to hear otherwise.

reply
indrora
2 days ago
[-]
Assembly is still relatively popular in the spaces of very low level Operating System work, exploit engineering, and embedded systems where you need cycle-accurate timing.
reply
gjadi
2 days ago
[-]
FFmpeg is famous for using asm. For example: https://news.ycombinator.com/item?id=42041301
reply
m_ke
5 days ago
[-]
I've been thinking about this a bunch and here's what I think will happen as cost of writing software approaches 0:

1. There will be way more software

2. Most people / companies will be able to opt out of predatory VC-funded software and just spin up their own custom versions that do exactly what they want without having to worry about being spied on or rug-pulled. I already do this with Chrome extensions: with the help of Claude I've been able to throw together things like a time-based website blocker in a few minutes.

3. The best software will be open source, since it's easier for LLMs to edit and is way more trustworthy than a random SaaS tool. It will also be way easier to customize to your liking

4. Companies will hire far fewer people, probably mostly engineers, to automate routine tasks that would previously have been done by humans (ex: bookkeeping, recruiting, sales outreach, HR, copywriting / design). I've heard this is already happening with a lot of new startups.

EDIT: for people who are not convinced that these models will be better than them soon, look over these sets of slides from NeurIPS:

- https://michal.io/notes/ml/conferences/2024-NeurIPS#neurips-...

- https://michal.io/notes/ml/conferences/2024-NeurIPS#fine-tun...

- https://michal.io/notes/ml/conferences/2024-NeurIPS#math-ai-...

reply
from-nibly
4 days ago
[-]
> that do exactly what they want

This presumes that they know exactly what they want.

My brother works for a company and they just ran into this issue. They target customer retention as a metric. The result is that all of their customers are the WORST, don't make them any money, but they stay around a long time.

The company is about to run out of money and crash into the ground.

If people knew exactly what they wanted 99% of all problems in the world wouldn't exist. This is one of the jobs of a developer, to explore what people actually want with them and then implement it.

The first bit is WAY harder than the second bit, and LLMs only do the second bit.

reply
cweld510
4 days ago
[-]
Sure, but without an LLM, measuring customer retention might require sending a request over to your data scientist, because they're the one who knows how to make dashboards; then they have to balance it against their other work, so who knows when it gets done. You can do this sort of thing faster with an LLM, and the communication cost will be lower. So even if you choose the wrong statistic, you can get it built sooner, find out sooner that it's wrong, and hopefully course-correct sooner as well.
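
To be concrete, the statistic itself is often only a few lines of pandas once somebody sits down to write it; here's a minimal sketch with made-up order data (the columns and numbers are purely illustrative):

    import pandas as pd

    # hypothetical orders: one row per customer purchase per month
    orders = pd.DataFrame({
        "customer_id": [1, 1, 2, 3, 3, 3],
        "month": ["2024-01", "2024-02", "2024-01", "2024-01", "2024-02", "2024-03"],
    })

    # set of active customers per month
    active = orders.groupby("month")["customer_id"].apply(set)

    # share of January customers who came back in February
    retained = len(active["2024-01"] & active["2024-02"]) / len(active["2024-01"])
    print(f"Jan -> Feb retention: {retained:.0%}")  # 67%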
reply
sleepybrett
4 days ago
[-]
Except how do you know that the LLM is actually telling you things that are factual and not hallucinating numbers?
reply
a_bonobo
4 days ago
[-]
>3. The best software will be open source, since it's easier for LLMs to edit and is way more trustworthy than a random SaaS tool. It will also be way easier to customize to your liking

From working in a non-software place, I see the opposite occurring. Non-software management doesn't buy closed-source software because they think it's 'better'; they buy closed-source software because there's a clear path of liability.

Who pays if the software messes up? Who takes the blame? LLMs make this even worse. Anthropic is not going to pay your business damages because the LLM produced bad code.

reply
brodouevencode
5 days ago
[-]
Good points - my company has already committed to #2
reply
ThrowawayR2
5 days ago
[-]
What's the equivalent of @justsayinmice for NeurIPS papers? A lot of things in papers don't pan out in the real world.
reply
m_ke
5 days ago
[-]
There's a lot of work showing that we can reliably get to or above human level performance on tasks where it's easy to sample at scale and the solution is cheap to verify.
reply
sureglymop
4 days ago
[-]
As a junior dev, I do two conscious things to make sure I'll still be relevant for the workforce in the future.

1. I try to stay somewhat up to date with ML and how the latest things work. I can throw together some python, let it rip through a dataset from kaggle, let models run locally etc. Have my linalg and stats down and practiced. Basically if I had to make the switch to be an ML/AI engineer it would be easier than if I had to start from zero.

2. I otherwise am trying to pivot more to cyber security. I believe current LLMs produce what I would call "untrusted and unverified input" which is massively exploitable. I personally believe that if AI gets exponentially better and is integrated everywhere, we will also have exponentially more security vulnerabilities (that's just an assumption/opinion). I also feel we are close to cyber security being taken more seriously or even regulated e.g. in the EU.

At the end of the day, I think you don't have to worry if you have the "curiosity" that it takes to be a good software engineer. That is because, in a world where knowledge, experience, and the willingness to probe out of curiosity are even more scarce than they are now, you'll stand out. You may leverage AI to assist you, but as long as you don't rely on it fully and blindly, you'll always be a more qualified worker than someone who does.

reply
HeikoKemp
3 days ago
[-]
I'm shocked that I had to dig this deep in the comments to see someone mention cybersecurity. Same as you, seeing this trend, I'm doubling down on security. As more businesses "hack away" at their projects, it's going to be a big party; I'm sure black hats are thrilled right now. Will LLMs be able to secure their own code? I'm not so sure. Even human-written code is exploitable, and that's their source for training.
reply
atemerev
4 days ago
[-]
You are the smart one; I hope everything will work for you!
reply
matrix87
4 days ago
[-]
> The more I speak with fellow engineers, the more I hear that some of them are either using AI to help them code, or feed entire projects to AI and let the AI code, while they do code review and adjustments.

I don't see this trend. It just sounds like a weird thing to say; it fundamentally misunderstands what the job is.

From my experience, software engineering is a lot more human than how it gets portrayed in the media. You learn the business you're working with, who the stakeholders are, who needs what, how to communicate your changes and to whom. You're solving problems for other people. In order to do that, you have to understand what their needs are

Maybe this reflects my own experience at a big company where there's more back and forth to deal with. It's not glamorous or technically impressive, but no company is perfect

If what companies really want is just some cheap way to shovel code, LLMs are more expensive and less effective than the other well-known way of cheaping out.

reply
uludag
4 days ago
[-]
Firstly, as many commenters have mentioned, I don't see AI taking jobs en masse. They simply aren't accurate enough, and they tend to generate more code faster, which ends up needing more maintenance.

Advice #1: do work on your own mind. Try to improve your personal organization. Look into methodologies like GTD. Get into habits of building discipline. Get into the habit of storing information and documentation. From my observations many developers simply can't process many threads at once, making their bottleneck their own minds.

Advice #2: lean into "metis"-heavy tasks. There are many programming tasks which can be easily automated: making an app scaffold, translating a simple algorithm, writing tests, etc. This is the tip of the iceberg when it comes to real SWE work, though. The intricate connections between databases and services, the steps you have to go through to debug that one feature, the hack you have to make so the code behaves differently in the testing environment, and so on. LLMs require legibility to function: a clean slate, no tech debt, low entropy, order, etc. Metis is a term discussed in the book "Seeing Like a State"; it encompasses knowledge and skills gained through experience that are hard to transfer. Master these dark corners, hack your way around the code, create personal scripts for random one-off tasks. Learn how to poke and pry the systems you work on to get out the information you want.

reply
aprilthird2021
4 days ago
[-]
Yep, massive LOC means massive maintenance. Maybe the LLMs can maintain their own code? I'm skeptical. I feel like they can easily code themselves into an unmaintainable corner.

But maybe that happens so rarely that you just hire a human to get the whole thing unstuck and running again. Maybe we'll become more like mechanics you visit every now and then for an expensive, quick job, rather than keeping on an annual retainer.

reply
amrocha
4 days ago
[-]
I don’t think I’ve ever heard of a developer who enjoys refactoring a messy, bug-ridden codebase. This is the reason rewrites happen: nobody wants to touch the old code with a ten-foot pole, and it’s easier to just rewrite it all.

So if you turn the entire job into that? I don’t think skilled people will be lining up to do it. Maybe consulting firms would take that on I guess.

reply
aprilthird2021
3 days ago
[-]
You can't just rewrite some of the most profitable software systems out there. Banks, HFTs, FAANG: these cannot just be rewritten.
reply
amrocha
2 days ago
[-]
Sure, that’s why they pay a lot and hire quality engineers to maintain their codebases, and why they wouldn't be OK with AI slop.
reply
gt0
4 days ago
[-]
I use Copilot a bit, and it can be really, really good.

It helps me out, but in terms of increasing productivity, it pales in comparison to simple auto-complete. In fact it pales in comparison to just having a good, big screen vs. battling away on a 13" laptop.

LLMs are useful and provide not insignificant assistance, but probably less assistance than the tools we've had for a long time. LLMs are not a game changer like some other things have been since I've been programming (since the late 1980s). Just moving to operating systems with protected memory was a game changer: I could make mistakes and the whole computer didn't crash!

I don't see LLMs as something we have to protect our careers from, I see LLMs as an increasingly useful tool that will become a normal part of programming same as auto-complete, or protected memory, or syntax-highlighting. Useful stuff we'll make use of, but it's to help us, not replace us.

reply
Xophmeister
4 days ago
[-]
My anecdata shows people who have no/limited experience in software engineering are suddenly able to produce “software”. That is, code of limited engineering value. It technically works, but is ultimately an unmaintainable, intractable Heath Robinson monstrosity.

Coding LLMs will likely improve, but what will happen first: a good-at-engineering LLM; or a negative feedback cycle of training data being polluted with a deluge of crap?

I’m not too worried at the moment.

reply
shiveenp
4 days ago
[-]
LLMs will have the same effect that outsourcing had on tech jobs, i.e. some effect, but not a meaningful one for people who truly know what they’re doing and can command the money for quality that distinguishes them from random text generator (AI) slop.
reply
bhaak
4 days ago
[-]
Something similar happened when Rails showed up. Lots of people were suddenly able to build more complex websites than ever before.

But there are still specialized people being paid for doing websites today.

reply
SoftTalker
4 days ago
[-]
My god, sudden flashback of our CTO doing Rails code after hours and showing us how easily and quickly he was building stuff. We called that period of time the "Ruby Derailment"
reply
sokoloff
4 days ago
[-]
I can imagine a world, not far from today, where business experts can create working programs similar in complexity to what they do with Excel today, but in domains outside of "just spreadsheets". Excel is the most used no-code/low-code environment by far and I think we could easily see that same level of complexity [mostly low] be accessible to a lot more people.
reply
layer8
4 days ago
[-]
I don’t quite buy the Excel analogy, because the business experts do understand the Excel formulas that they write, and thus can maintain them and reason about them. The same wouldn’t be the case with programs written by LLMs.
reply
tokioyoyo
4 days ago
[-]
> is ultimately an unmaintainable

Does it need to be maintainable, if we can re-generate apps on the go with some sort of automated testing mechanism? I'm still on the fence about the whole LLM-generated apps debate, but since I started forcing Cursor on myself, I'm writing significantly less code (75% less?) in my day-to-day job.

reply
davemp
4 days ago
[-]
> if we can re-generate apps on the go with some sort of automated testing mechanism?

Ahh, so once we solve the oracle problem, programming will become obsolete…

reply
indigoabstract
5 days ago
[-]
I remember John Carmack talking about this last year. Seems like it's still pretty good advice more than a year later:

"From a DM, just in case anyone else needs to hear this."

https://x.com/ID_AA_Carmack/status/1637087219591659520

reply
throwaway_43793
5 days ago
[-]
It's good advice indeed. But there is a slight problem with it.

Young people can learn and fight for their place in the workforce, but what is left for older people like myself? I'm in this industry already, I might have missed the train of "learn to talk with people" and been sold on the "coding is a means to an end" koolaid.

My employability is already damaged due to my age and experience. What is left for people like myself? How can I compete with a 20-something-year-old who has a sharper memory, more free time (due to lack of obligations like family/relationships), and who got the right advice from Carmack at the beginning of his career?

reply
Rotundo
4 days ago
[-]
The 20-year-old is, maybe, just like you at that age: eager and smart, but lacking experience. Making bad decisions, bad designs, bad implementations left and right. Just like you did, way back when.

But you have made all those mistakes already. You've learned, you've earned your experience. You are much more valuable than you think.

Source: Me, I'm almost 60, been programming since I was 12.

reply
throwaway_43793
4 days ago
[-]
I think the idea of meritocracy has died in me. I wish I could be rewarded for my knowledge and expertise, but it seems that capitalism, as in maximizing profit, has won above everything else.
reply
atemerev
4 days ago
[-]
You are rewarded for something that is useful to the market, i.e. to other people (useful enough so they agree to pay you money for it). If something you know is no longer useful, you will not be rewarded.

It was true 100 years ago, it was true 20 years ago, and it is true now.

reply
extr
4 days ago
[-]
?? Not sure what you mean. Carmack's advice is not specific to any particular point in your career. You can enact the principle he's talking about just as much with 30 YOE as you can with 2. It's actually easier advice to follow for older people than younger, since they have seen more of the world and probably have a better sense of where the "rough edges" are. Despite what you see on twitter and HN and YC batches, most successful companies are started by people in their 40s.
reply
indigoabstract
4 days ago
[-]
It's good advice, but not easy to follow, since knowing what to do and doing it are very different things.

I think that what he means is that how successful we are in work is closely related to our contributions, or to the perceived "value" we bring to other people.

The current gen AI isn't the end of programmers. What matters is still what people want and are willing to pay for and how can we contribute to fulfill that need.

You are right that young folks have the time and energy to work more than older ones and for less money. And they can soak up knowledge like a sponge. That's their strong point and older folks cannot really compete with that.

You (and everyone else) have to find your own strong point, your "niche" so to speak. We're all different, so I'm pretty sure that what you like and are good at is not what I like and I'm good at and vice-versa.

All the greats, like Steve Jobs and so on said that you've got to love what you do. Follow your intuition. That may even be something that you dreamed about in your childhood. Anything that you really want to do and makes you feel fulfilled.

I don't think you can get to any good place while disliking what you do for a living.

That said, all this advice can seem daunting and unfeasible when you're not in a good place in life. But worrying only makes it worse.

If you can see yourself in a better light and as having something valuable to contribute, things would start looking better.

This is solvable. Have faith!

reply
SoftTalker
4 days ago
[-]
> All the greats, like Steve Jobs and so on said that you've got to love what you do.

This is probably true for them, but the other thing that can happen is that when you take what you love and do it for work, or try to make it a business, you can grow to hate it.

reply
indigoabstract
4 days ago
[-]
I guess it also depends on how much you love your work. If there wasn't that much interest in the first place, I suppose you can grow to hate it in time. If that happens, maybe there's something else you'd rather do instead?
reply
antegamisou
4 days ago
[-]
> How can I compete with a 20-something-year-old who has a sharper memory, more free time (due to lack of obligations like family/relationships),

Is it a USA/Silicon Valley thing to miss the arrogance and insufferability most fresh grads have when entering the workforce?

It's kind of tone-deaf to self-victimize as someone with significant work experience who is concerned about being replaced by a demographic that is notoriously challenged in building experience.

reply
randall
5 days ago
[-]
This is by far the best advice I've seen.
reply
archagon
4 days ago
[-]
Except I suspect that Carmack would not be where he is today without a burning intellectual draw to programming in particular.
reply
yodsanklai
4 days ago
[-]
Exactly... I read "Masters of Doom" and Carmack didn't strike me as the product guy who cares about people's needs. He was more like a coding machine.
reply
indigoabstract
4 days ago
[-]
Yet, they were able to find a market for their products. He knew both how to code and what to code.

Ultima Underworld was technologically superior to Wolfenstein 3D.

System Shock was technologically superior to Doom and a much better game for my taste. I also think it has aged better.

Doom, Wolf 3D and Quake were less sophisticated, but kicked ass. They captured the spirit of the times and people loved it.

They're still pretty good games too, 30 years later.

reply
watt
4 days ago
[-]
In "Rocket Jump: Quake and the Golden Age of First-Person Shooters" id guys figure out that their product is the cutting-edge graphics, and being first, and are able to pull that off for a while. Their games were not great, but the product was idTech engines. With Rage however (id Tech 5) the streak ran cold.
reply
nyrikki
4 days ago
[-]
1) Have books like 'The Art of Computer Programming' on my shelf, as AI seems to propagate solutions that are closer to code golf than to robustness, due to coverage in the corpus.

2) Force myself to look at existing code as abstract data types, etc., to help reduce the cost of LLMs' failure mode (confident, often competent, and inevitably wrong).

3) Curry whenever possible to support the use of coding assistants and to limit their blast radius (see the sketch after this list).

4) Dig deep into complexity theory to understand what LLMs can't do, either for defensive or offensive reasons.

5) Realize that SWE is more about correctness and context than code.

6) Realize what many people are already discovering, that LLM output is more like clip art than creation.
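
On 3), a minimal sketch of what currying buys here (the alerting scenario and names are invented for illustration): pre-bind the environment-specific, easy-to-get-wrong arguments in code the assistant never touches, so generated code only ever deals with a narrow, single-purpose function.

    from functools import partial

    def send_alert(webhook_url, channel, severity, message):
        # stand-in for the real side-effecting call
        print(f"[{severity}] #{channel} via {webhook_url}: {message}")

    # the dangerous, environment-specific arguments are bound once, by a human
    page_oncall = partial(send_alert, "https://example.invalid/hook", "oncall", "high")

    # assistant-written code only sees the narrow remainder
    page_oncall("disk usage above 90% on db-1")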

reply
Animats
4 days ago
[-]
> 1) Have books like 'The Art of Computer Programming' on my shelf,

Decades ago I used to be constantly thumbing through vol. 1 of Knuth (Fundamental Algorithms) and coding those basic algorithms. Now that all comes from libraries and generics. There has been progress, but not via LLMs in that area.

reply
nyrikki
3 days ago
[-]
That is part of the problem: LLM coding assistants often write out by hand what a human would use a core library for, pulling the library in only when required, like importing deque in Python to do a BFS.
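
(For reference, the core-library idiom I mean is only a few lines; a minimal sketch with a toy graph:)

    from collections import deque

    def bfs(start, neighbors):
        # visit everything reachable from start, breadth-first
        seen, queue, order = {start}, deque([start]), []
        while queue:
            node = queue.popleft()
            order.append(node)
            for nxt in neighbors(node):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return order

    graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    print(bfs("a", graph.get))  # ['a', 'b', 'c', 'd']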

An example I helped someone with on AoC last week.

Looking for 'lines' in a game board in Python, the LLM had an outer loop j, with an inner loop i

This is how it 'matched' a line.

    if i != j and (x1 == x2) or (y1 = y2):

I am sure you can see the problem with that, but some of the problems in Knuth 4a are harder for others.
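
(For anyone who doesn't spot it: "and" binds tighter than "or", so the i != j guard only applies to the first comparison, and "y1 = y2" isn't even a valid expression in that position. Presumably the intended check was something like the following.)

    if i != j and (x1 == x2 or y1 == y2):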

There is a lot to learn for many in Knuth 1, but I view it as world building for other concepts.

With LLMs polluting web search results, the point is to have reference material you trust.

Knuth is accessible; I erred on that side rather than suggesting grad-level-and-above books that I really appreciate but that would just collect dust.

People will need to figure out what works for them, but having someone to explain why you do something is important IMHO.

reply
johanam
5 days ago
[-]
I think in some sense the opposite could occur, where it democratizes access to becoming a sort of pseudo-junior-software engineer. In the sense that a lot more people are going to be generating code and bespoke little software systems for their own ends and purposes. I could imagine this resulting in a Cambrian Explosion of small software systems. Like @m_ke says, there will be way more software.

Who maintains these systems? Who brings them to the last mile and deploys them? Who gets paid to troubleshoot and debug them when they reach a threshold of complexity that the script-kiddie LLM programmer cannot manage any longer? I think this type of person will definitely have a place in the new LLM-enabled economy. Perhaps this is a niche role, but figuring out how one can take experience as a software engineer and deploy it to help people getting started with LLM code (for pay, ofc) might be an interesting avenue to explore.

reply
askonomm
4 days ago
[-]
I tend to agree. I also think that the vast majority of code out there is quite frankly pretty bad, and all that LLMs do is learn from it, so while I agree that LLMs will help make a lot more software, I doubt they would increase the general quality in any significant way, and thus there will always be a need for people who can do actual programming, as opposed to just prompting, to fix complex problems. That said, I'm not sure I want my future career to be swimming in endless piles of LLM copy-paste spaghetti. Maybe it's high time to pursue a new degree. Hmm.
reply
throwaway_43793
4 days ago
[-]
This blew up way more than I expected. Thanks everyone for the comments, I read almost all of them.

For the sake of not repeating myself, I would like to clarify/state some things.

1. I did not intend to signal that SWE will disappear as a profession, but rather that it will undergo a transformation, as well as shrink in terms of the workforce needed.

2. Some people seem to be hanging on to the idea that they are doing unimaginably complicated things. And sure, some do, but I doubt they are the majority of the SWE workforce. Can an LLM replace a COBOL developer in the financial industry? No, I don't think so. Can it replace the absurd number of people whose job description can be distilled to "reading/writing data to a database"? Absolutely.

3. There seem to be conflicting opinions. Some people say that code quality matters a lot and LLMs are not there yet, while others focus more on "SWE is more than writing code".

Personally, based on some thinking and reading the comments, I think the best way to future-proof a SWE career is to move to a position that requires more people skills. In my opinion, good product managers who are eager to learn coding and to use LLMs for code writing will be the biggest beneficiaries of the upcoming trend. As for SWEs, it's best to start acquiring people skills.

reply
whiplash451
4 days ago
[-]
Moving from an engineering role to a product role is a massive career change. I don't think this is reasonable advice, even with the current technological revolution, unless you have a deep, personal interest in product management.
reply
throwaway_43793
4 days ago
[-]
Not necessarily product. You can become a manager, or a technical lead.
reply
whiplash451
3 days ago
[-]
Same. You don't move to a management role unless you have a real calling for it.
reply
light_triad
4 days ago
[-]
There's a great Joel Spolsky post about developers starting businesses and realising that there's a bunch of "business stuff" that was abstracted away at big companies. [1]

One way to future proof is to look at the larger picture, the same way that coding can't be reduced to algorithm puzzles:

"Software is a conversation, between the software developer and the user. But for that conversation to happen requires a lot of work beyond the software development."

[1] The Development Abstraction Layer https://www.joelonsoftware.com/2006/04/11/the-development-ab...

reply
aprilthird2021
4 days ago
[-]
Yep, I learned this the hard way. I thought meeting customers, selling them on an automated, cheaper version of their current crufty manual system would be so easy I'd barely need to try.

I'd never been more wrong.

reply
SoftTalker
4 days ago
[-]
There are always people vested in the current way of doing things. If they happen to be influential, you will have to swim upstream no matter how much better your solution is. Companies are ultimately people, not in the Citizens United sense but in that they are made by people, staffed by people, and you will be dealing with people and all their foibles.
reply
mbm
4 days ago
[-]
Beautiful article, thanks for sharing. Miss his writing.
reply
dkyc
4 days ago
[-]
But conversations are exactly LLMs' strength?
reply
elcritch
4 days ago
[-]
It looks like it, but LLMs still lack critical reasoning, by and large. So if a client tells them something nonsensical or asks for something nonsensical, they won't reason their way out of it.

I’m not worried about software as a profession yet, as clients will first need to know what they want, much less what they actually need.

Well, I am a bit worried that many big businesses seem to think they can lay off most of their software devs because of "AI", causing wage suppression and overwork.

It’ll come back to bite them IMHO. I’ve contemplated shorting Intuit stock because they did precisely that, which will almost certainly just end up with crap software, missed deadlines, etc.

reply
light_triad
4 days ago
[-]
True but I think Spolsky meant it more as a metaphor for understanding users' psychology. Knowledge workers need empathy and creativity to solve important problems.

And design, product intuition, contextual knowledge in addition to the marketing, sales, accounting, support and infrastructure required to sell software at scale.

LLMs can help but it remains to be seen how much they can create outside of the scope of the data they were trained on.

reply
JKCalhoun
4 days ago
[-]
I no longer have skin in the game since I retired a few years back.

But I have had over 30 years in a career that has been nothing if not dynamic the whole time. And so I no doubt would keep on keepin' on (as the saying goes).

Future-proof a SWE career though? I think you're just going to have to sit tight and enjoy (or not) the ride. Honestly, I enjoyed the first half of my career much more than where SWE ended up in the latter half. To that end, I have declined to encourage anyone from going into SWE. I know a daughter of a friend that is going into it — but she's going into it because she has a passion for it. (So, 1) no one needed to convince her but 2) passion for coding may be the only valid reason to go into it anyway.)

Imagine the buggy-whip makers gathered around the pub, grousing about how they are going to future-proof their trade as the new-fangled automobiles begin rolling down the street. (They're not.)

reply
neilv
4 days ago
[-]
I was advising this MBA student's nascent startup (with the idea that I might come on as technical cofounder once they graduate), and they asked whether LLMs would help.

So I listed some ways that LLMs practically would and wouldn't fit into the workflow of the service they were building, and related it to a bunch of other stuff, including how to make the most of the precious real-world customer access they'd have, generating a success in the narrow time window they have, and the special obligations of that application domain niche.

Later, I mentally replayed the conversation in my head (as I do), and realized they were actually probably asking about using an LLM to generate the startup's prototype/MVP for the software they imagined.

And also, "generating the prototype" is maybe the only value that an MBA student had been told a "technical" person could provide at this point. :)

That interpretation of the LLM question didn't even occur to me when I was responding. I could've easily whipped up the generic Web CRUD any developer could do and the bespoke scrape-y/protocol-y integrations that fewer developers could do, both to a correctness level necessarily higher than the norm (which was required by this particular application domain). In the moment, it didn't occur to me that anyone would think an LLM would help at all, rather than just be an unnecessary big pile of risk for the startup, and potential disaster in the application domain.

reply
holografix
4 days ago
[-]
Beware of the myopia and gatekeeping displayed in this thread.

There will be fewer SWE, DevOps, and related jobs available in the next 24 months. Period.

Become hyper-aware of how a business measures your value as a SWE. How? Ask pointed, uncomfortable questions that force the people paying you to think and be transparent.

Stay on the cutting edge of how to increase your output and quality using AI.

Ie: how long does it take for a new joiner to produce code? How do you cut that time down by 10x using “AI”?

reply
irunmyownemail
4 days ago
[-]
"Stay on the cutting edge of how to increase your output and quality using AI."

If AI (there is no AI, it's ML, machine learning) were truly as good as the true believers want to see it as, then it could be useful. It isn't.

reply
zahlman
4 days ago
[-]
>Become hyper-aware of how a business measures your value as a SWE. How? Ask pointed, uncomfortable questions that force the people paying you to think and be transparent.

Why would you want to increase the odds of them noticing your job is no longer necessary?

reply
aprilthird2021
4 days ago
[-]
> There will be fewer SWE, DevOps, and related jobs available in the next 24 months. Period.

I wish an AI could revisit this comment 2 years later and post the stats to see if you're right.

reply
tumetab1
4 days ago
[-]
My personal estimate is that this will be noticeable in the first six months of 2025 in US big tech organizations.

I think this is actually already in motion in board meetings; I'm pretty sure executives are discussing something like "if we spend $Z on AI tools, how many engineering hires can we avoid?"

reply
aprilthird2021
3 days ago
[-]
Okay, I'll come back in 6 months and we can assess this honestly.

The government projects this job type will grow 18% in the next decade: https://www.bls.gov/ooh/computer-and-information-technology/...

reply
shiveenp
4 days ago
[-]
Explain to me how you think a sophisticated random text generator will take all the jobs. Especially since it's trained on an existing corpus of data on the internet created by, guess who, existing engineers. I'm genuinely curious about the backing for these "trust me bro" style statements.
reply
manesioz
4 days ago
[-]
If you think that state-of-the-art LLMs are "random text generators" I don't know what to say to you.
reply
UziTech
13 hours ago
[-]
A couple reasons why I am not scared of AI taking my job:

1. They are trained to be average coders.

The way LLMs are trained is by giving them lots of examples of previous coding tasks. By definition, half of those examples are below average. Unless there is a breakthrough in how they are trained, any above-average coder won't have anything to worry about.

2. They are a tool that can (and should) be used by humans.

Computers are much better at chess than any human, but a human with a computer is better than any computer. The same is true with a coding LLM. Any SWE who can work with an LLM will be much better than any LLM.

3. There is enough work for both.

I have never worked for a company where I have had less work when I left than when I started. I worked for one company where it was estimated that I had about 2 years worth of work to do and 7 years later, when I left, I had about 5 years of work left. Hopefully LLMs will be able to take some of the tedious work so we can focus on harder tasks, but most likely the more we are able to accomplish the more there will be to accomplish.

reply
brodouevencode
5 days ago
[-]
LLMs will just write code without you having to go copy-pasta from SO.

The real secret is talent stacks: have a combination of talents and knowledge that is desirable and unique. Be multi-faceted. And don't be afraid to learn things that are way outside of your domain. And no, you wouldn't be pigeon-holing yourself either.

For example there aren't many SWEs that have good SRE knowledge in the vehicle retail domain. You don't have to be an expert SRE, just be good enough, and understand the business in which you're operating and how those practices can be applied to auto sales (knowing the laws and best practices of the industry).

reply
w-hn
4 days ago
[-]
I have been unemployed for almost a year now (it started with a full division layoff, and then no willingness or motivation to look for work at the time). Seeing the way AI can do most of the native app development code I wrote (which is what I did), I am losing almost any motivation to even try now. But I have been sleeping the best since college (where I slept awesome), and I have been working out, watching lots of theatre and cinema, playing lots of sports (two of them almost daily), reading a lot of literature, listening to lots of podcasts. I guess I will just wait for my savings to run dry and then see what options I'd have, if any. I know the standard thing to do and say is "up-skill", "change with the times", etc., and I am sure those have merit, but I just feel I am done with the constant catch-up, kind of checked out. I don't give a fuck anymore maybe, or I do and I am too demoralised to confront it.
reply
throwaway123198
16 minutes ago
[-]
Honestly this is my plan. I'm waiting for my time in this career to run out. In the meantime I'm trying to aggressively save whatever way I can.

This industry is going to shrink. And that's ok. We had our time. I wish it was longer and I wish I made more, but I don't think I ever saw myself here forever.

Kudos to those who made a whole career off of this.

I'm in my mid 30s with a wife and kid and I'm mostly hoping I can complete my immigration to the US before my time in this career ends.

Then, I might pursue starting a business or going back to school with the savings and hopefully my wife can be employed at the time in her completely unrelated field and cover us until I can figure out what to do next.

I'm not sad about this. I am happy I have tried to live frugally enough to never buy my own hype or believe that my salary is sustainable forever.

A part of it for me is that I never really loved building software. I might have ADHD and that might be a big factor, but honestly it was never what excited me.

The biggest fallacy I see a lot of people buying into is that LLMs being good enough to replace software developers means they're AGI and the world has other problems. I never quite bought that. I think software developers think too highly of themselves.

But they're also not technically wrong. Ya, an LLM can basically replace a family doctor and most internal medicine physicians. But the path to that happening is long and arduous due to how society has set up medicine. Software devs never fought hard enough for their profession to be protected. So we are just the easiest target, the same thing that happened to a lot of traders before.

If you're mid career like me, just get ready for the idea that your career is probably much shorter than you thought it will be and you will need to retrain. It will suck but many others have done it.

reply
sleazebreeze
4 days ago
[-]
I don't quite get this. How can you be relaxed when you have no plan and are just waiting for your savings to run out?
reply
patrulek
4 days ago
[-]
Maybe he has enough savings to live on for a few years?
reply
crowcountry
3 days ago
[-]
I love this comment. AI can't enjoy a book for you. Enjoy your time off.
reply
busterarm
5 days ago
[-]
Most organizations don't move that fast. Certainly not fast enough to need this kind of velocity.

As it is I spend 95% of my time working out what needs to be done with all of the stakeholders and 5% of my time writing code. So the impact of AI on that is negligible.

reply
marpstar
5 days ago
[-]
This is consistent with my experience. We're far from a business analyst or product engineer being able to prompt an LLM to write the software themselves. It's their job to know the domain, not the technical details.

Maybe we all end up being prompt engineers, but I think that companies will continue to have experts on the business side as well as the tech side for any foreseeable future.

reply
prerok
4 days ago
[-]
As others have stated, I don't think we have anything to worry about.

As a SWE you are expected to neatly balance code, its architecture and how it addresses the customers' problems. At best, what I've seen LLMs produce is code monkey level programming (like copy pasting from StackOverflow), but then a human is still needed to tweak it properly.

What would be needed is General AI and that's still some 50 years away (and has been for the past 70 years). The LLMs are a nice sleight of hand and are useful but more often wrong than right, as soon as you delve into details.

reply
orr94
3 days ago
[-]
> delve

Nice try, ChatGPT!

reply
prerok
1 day ago
[-]
Hehe, yeah, I like the word and I am not a native English speaker. Working with people across the globe I find it interesting how some uses are more common in some parts than in others.

Use of "same" comes to mind in working with Indian developers or the use of "no worries" and how it proliferated from NZ/AU region to the rest of the world.

Otherwise, yes, I am Skynet providing C++ compilation from the future :)

reply
crazylogger
4 days ago
[-]
When doing X becomes cheaper with the invention of new tools, X is now more profitable and humanity tends to do much more of it.

Nearly all code was machine-generated after the invention of compilers. Did the compiler destroy programming? Absolutely not. Compilers and other tools like higher-level programming languages really kickstarted the software industry. IMO the potential transition from writing programming languages -> writing natural language and having an LLM generate the program is still a smaller change than machine code/assembly -> modern programming languages.

If the invention of programming languages expanded the population of programmers from thousands to the 10s of millions, I think LLMs could expand this number again to a billion.

reply
finebalance
5 days ago
[-]
Not a clue.

I'm a decent engineer working as a DS in a consulting firm. In my last two projects, I checked in (or corrected) so much more code than the other two junior DSs on my team that in the end some 80-90% of the ML-related stuff had been directly built, corrected, or optimized by me. Most of the rest that wasn't was boilerplate. LLMs were pivotal in this.

And I am only a moderately skilled engineer. I can easily see somebody with more experience and skills doing this to me, and making me nearly redundant.

reply
busterarm
5 days ago
[-]
You're making the mistake of overvaluing volume of work output. In engineering, difference of perspective is valuable. You want more skilled eyeballs on the problem. You won't be redundant just as your slower coworkers aren't now.

It's not a sprint, it's a marathon.

reply
markus_zhang
4 days ago
[-]
Most businesses don't really need exceptionally good solutions. Something that works is fine. I'm pretty sure AI can do at least 50% of my coding work. It's not going to replace me right now, but it's there in the foreseeable future, especially when companies realize they can have a setup like 1 good PM + a couple of seniors + a bunch of AI agents instead of 1 good PM + a few seniors + a bunch of juniors.
reply
TechDebtDevin
4 days ago
[-]
Once again, this seems to only apply to Python / ML SWEs. Try to get any of these models to write decent Rust, Go or C boilerplate.
reply
finebalance
4 days ago
[-]
I can't speak to Rust, Go or C, but for me LLMs have greatly accelerated the process of learning and building projects in Julia.
reply
selimthegrim
4 days ago
[-]
Can you give some more specific examples? I am currently learning Julia…
reply
mbm
4 days ago
[-]
Have been writing Rust servers with Cursor. Very enjoyable.
reply
barrell
4 days ago
[-]
Build something with an LLM outside of your comfort zone.

I was a pretty early adopter to an LLM based workflow. The more I used it, the worse my project became, and the more I learned myself. It didn’t take long for my abilities to surpass the LLM, and for the past year my usage of LLMs has been dropping dramatically. These days I spend more time in docs than in a chat conversation.

When chatGPT was announced, many people thought programming was over. As in <12 months. Here we are several years later, and my job looks remarkably the same.

I would absolutely love to not have to program anymore. For me, programming is a means to an end. However, after having used LLMs pretty much everyday for 2.5 years, it’s very clear to me that software engineering won’t be changing anytime soon. Some things will get easier and workflows may change, but if you want to build and maintain a moderately difficult production grade application with decent performance, you will still be programming in 10 years

reply
Animats
4 days ago
[-]
> Build something with an LLM outside of your comfort zone.

I need to try that. How do LLMs perform at writing shaders? I need to modify a shader to add simple order-independent transparency to avoid a depth sort. Are LLMs up to that job? This is a known area, with code samples available, so an LLM might be able to do it.

I'm not a shader expert, and if I can avoid the months needed to become one, that's a win.

reply
barrell
4 days ago
[-]
I'm sure LLMs can do it, and I'm sure they will be riddled with mistakes! Especially if you're not competent enough in an area to grok docs (I don't mean that in a condescending way — I would definitely not be competent enough to grok shader docs) then LLMs can be a great way to learn.

It won't take long before you start seeing the mistakes in their responses without even trying them, and from there it's just a hop and a skip to it being more efficient to just read the docs than argue with an LLM.

But that beginning part is where LLMs can provide the most value to programmers. Especially if you go in with the mindset that 90% of what it says will be wrong

reply
protocolture
4 days ago
[-]
I dont think the prediction game is worthwhile.

The Cloud was meant to decimate engineering AND development. But what it did was create enough chaos that there's a higher demand for both than ever, just maybe not in your region or for your skillset.

LLMs are guaranteed to cause chaos, but the outcome of that chaos is not predictable. Will every coder now output the same as a team of 30, BUT there will be 60 times as many screwed-up projects made by wannabe founders that you have to come in and clean up? Will businesses find ways to automate code development and then turn around and have to bring the old guys back in constantly to fix up the pipeline? Will we all be coding in black boxes that the AI fills in?

I would make sure you just increase your skills and increase your familiarity with LLMs in case they become mandatory.

reply
throwaway123198
6 minutes ago
[-]
This isn't true. The cloud definitely changed the market for sysadmins and network engineers completely. Some folks managed to retrain, but many were left out post dot-com bubble.
reply
crystal_revenge
4 days ago
[-]
My solution has been to work with LLMs and to be on the side of the industry trying to replace the other. I switched focus fairly early in the "AI hype" era, mainly because I thought it looked like a lot of fun to play with LLMs. After a few years I realized I'm quite a bit ahead of my former coworkers who stayed still. I've worked on both the product end and closer to the hardware, and as more and more friends ask for help on problems I've realized I do in fact have a lot of understanding of this space.

A lot of people in this discussion seem to be misunderstanding the way the industry will change with LLMs. It's not as simple as "engineers will be automated away", in the same sense that we're a long way away from Uber drivers disappearing because of self-driving cars.

But the impact of LLMs on software is going to be much closer to the impact of the web and web development on native application development. People used to scoff at the idea that any serious company would be run from a web app. Today I would say the majority of software engineers are, directly or indirectly, building web-based products.

LLMs will make coding easier, but they also enable a wide range of novel solutions within software engineering itself. Today any engineer can launch a 0-shot classifier that's better performing than what would have taken a team of data scientists just a few years ago.
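
To make the 0-shot point concrete, here is a minimal sketch using the Hugging Face transformers zero-shot pipeline; the model choice, example text, and labels are purely illustrative.

    from transformers import pipeline

    # an off-the-shelf NLI model pressed into service as a zero-shot classifier
    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    result = classifier(
        "My package never arrived and support won't answer my emails.",
        candidate_labels=["shipping issue", "billing issue", "feature request"],
    )
    print(result["labels"][0])  # highest-scoring label, e.g. "shipping issue"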

reply
allendoerfer
5 days ago
[-]
> My prediction is that junior to mid level software engineering will disappear mostly, while senior engineers will transition to be more of a guiding hand to LLMs output, until eventually LLMs will become so good, that senior people won't be needed any more.

A steeper learning curve in a professional field generally translates into higher earnings. The longer you have to be trained to be helpful, the more a job generally earns.

I am already trained.

reply
veidelis
4 days ago
[-]
I will not believe in the AI takeover until there's evidence. I haven't seen any examples, apart from maybe TODO list apps. Needless to say, that's nowhere near the complexity that is required at most jobs. Even if my career were endangered, I would continue the path I've taken so far: have a basic understanding of as much as possible (push out the edges of the knowledge circle, or whatever it's called), and strive to have expert knowledge of maybe 1, 2, or 3 subjects which pay for your daily bread. Basically just be good at what you do, and that should be fine. As for beginners, I advise diving deep into a subject, starting with a solid foundation, and taking a hands-on approach, while maintaining a consistent effort.
reply
codery
4 days ago
[-]
"The use of FORTRAN, like the earlier symbolic programming, was very slow to be taken up by the professionals. And this is typical of almost all professional groups. Doctors clearly do not follow the advice they give to others, and they also have a high proportion of drug addicts. Lawyers often do not leave decent wills when they die. Almost all professionals are slow to use their own expertise for their own work. The situation is nicely summarized by the old saying, “The shoe maker’s children go without shoes”. Consider how in the future, when you are a great expert, you will avoid this typical error!"

Richard W. Hamming, “The Art of Doing Science and Engineering”

Today, lawyers delegate many paralegal tasks like document discovery to computers and doctors routinely use machine learning models to help diagnose patients.

So why aren’t we — ostensibly the people writing software — doing more with LLMs in our day-to-day?

If you take seriously the idea that LLM will fundamentally change the nature of many occupations in the coming decade, what reason do you have to believe that you’ll be immune from that because you work in software? Looking at the code you’ve been paid to write over the past few years, how much of that can you honestly say is truly novel?

We’re really not as clever as we think we are.

reply
munificent
4 days ago
[-]
> Looking at the code you’ve been paid to write over the past few years, how much of that can you honestly say is truly novel?

While the code I write is rarely novel, one of the primary intrinsic motivators that keeps me being a software engineer is the satisfaction of understanding my code.

If I just wanted software problems to be solved and was content to wave my hands and have minions do the work, I'd be in management. I program because I like actually understanding the problem in detail and then understanding how the code solves it. And I've never found a more effective way to understand code than writing it myself. Everyone thinks they understand code they only read, but when you dig in, it's almost always flawed and surface level.

reply
atemerev
4 days ago
[-]
And that is exactly what we won’t have anymore, at least on the job. They are not paying us for our satisfaction and our understanding.
reply
munificent
4 days ago
[-]
I think that's an oversimplification.

Intrinsic reward is always part of the compensation package for a job. That's why jobs that are more intrinsically rewarding tend to pay less. It's not because they are exploiting workers, it's because workers add up all of the rewards of the job, including the intrinsic ones, when deciding whether to accept it.

If I have to choose between a job where I'm obligated to pump out code as fast as possible having an LLM churn out as much as it can versus a job that is slower and more deliberate, I'll take the latter even if it pays less.

reply
mellosouls
4 days ago
[-]
It depends on whether you think they are a paradigm change (at the very least) or not. If you don't then either you will be right or you will be toast.

For those of us who do think this is a revolution, you have two options:

1. Embrace it.

2. Find another career, presumably in the trades or other hands-on vocations where AI ingress will lag behind for a while.

To embrace it you need to research the LLM landscape as it pertains to our craft and work out what interests you and where you might best be able to surf the new wave; it is rapidly moving and growing.

The key thing (as it ever was) is to build real world projects mastering LLM tools as you would an IDE or language; keep on top of the key players, concepts and changes; and use your soft skills to help open-eyed others follow the same path.

reply
irunmyownemail
4 days ago
[-]
"For those of us who do think this is a revolution"

Revolutions come, revolutions go, it's why they're called revolutions. Nothing interesting comes out of most revolutions.

reply
yodsanklai
4 days ago
[-]
> The more I speak with fellow engineers, the more I hear that some of them are either using AI to help them code, or feed entire projects to AI and let the AI code

LLMs do help, but to a limited extent. I've never heard of anyone in the second category.

> how do you future-proof your career in light of, the inevitable, LLM take over?

Generally speaking, coding has never been a future-proof career. Ageism, changes in technology, economic cycles, offshoring... When I went into the field in the early 2000s, it was kind of expected that most people, if they wanted to be somewhat successful, would eventually have to move to a leadership/management position.

Things changed a bit with successful tech companies competing for talent and offering great salaries and career paths for engineers, especially in the US, but it could very well be temporary and shouldn't be taken for granted.

LLMs are one factor among many that can impact our careers, and probably not the most important. I think there's a lot of hype and we're not being replaced by machines anytime soon. I don't see a world where an entrepreneur is going to command an LLM to write a service or a novel app for them, or simply maintain an existing complex piece of software.

reply
bertylicious
4 days ago
[-]
Disclaimer: I wholeheartedly hate all the systems they call AI these days and I hate the culture around it for technological, ecological, political, and philosophical reasons.

I won't future-proof my career against LLMs at all. If I ever see myself in the position that I must use them to produce or adjust code, or that I mostly read and fix LLM-generated code, then I'll leave the industry and do something else.

I see potential in them to simplify code search/navigation or to even replace stackoverflow, but I refuse to use them to build entire apps. If management in turn believes that I'm not productive enough anymore then so be it.

I expect that lots of product owners and business people will be using them in order to quickly cobble something together and then hand it over to a dev for "polishing". And this sounds like a total nightmare to me. The way I see it, devs make this dystopian nightmare a little bit more true every time they use an LLM to generate code.

reply
atemerev
4 days ago
[-]
It might be a dystopian nightmare, but the transition to it is inevitable, and closing one’s eyes and doing nothing might not be the most optimal strategy. Delaying or fighting it will also not work.

What “something else” do you have in mind?

reply
mukunda_johnson
4 days ago
[-]
Who knows what the future holds? As a SWE you are expected to adapt and use modern technology. Always learning is a part of the job. Look at all the new things to build with, frameworks being updated/changes, etc. Making things easier.

LLMs will make things easier, but it's easy to disagree that they will threaten a developer's future with these reasons in mind:

* Developers should not be reinventing the wheel constantly. LLMs can't work very well on subjects they have no info on (proprietary work).

* The quality is going to get worse over time with the internet being slopped up with the mass disregard for quality content. We are at a peak right now. Adding more parameters isn't going to make the models better. It's just going to make them better at plagiarism.

* Consistency - a good codebase has a lot of consistency to avoid errors. LLMs can produce good coding examples, but they will not have much regard for how -your- project is currently written. Introducing inconsistency makes maintenance more difficult, let alone the bugs that might slip in and wreak havoc later.

reply
markus_zhang
4 days ago
[-]
I try to go to the lowest level I can. During my recent research into PowerPC 32-bit assembly language I found 1) not much material online, and what is available is usually PDFs with pictures, which could be difficult for LLMs to pick up, and 2) indeed, ChatGPT didn't give a good answer even for a Hello, World example.

I think hardware manufacturers, including ones that produce chips, are way less encouraged to put things online, and thus there's a wide moat. "Classic" ones such as the 6502 or 8086 definitely have way more material. "Modern" popular ones such as x86/64 also have a lot of material online. But "obscure" ones don't.

On the software side, I believe LLMs or other AI can, within 10 years, easily replace juniors who only know how to "fill in" code designed by someone else, in a popular language (Python, Java, JavaScript, etc.). In fact they have greatly supported my data engineering work in Python and Scala -- do they always produce the most efficient solution? No. Do they greatly reduce the time I need to get to a solution? Yes, definitely!

reply
tetha
4 days ago
[-]
I've been noticing similar patterns as well.

One instructive example was when I was implementing a terraform provider for an in-house application. This thing can template the boilerplate for a terraform resource implementation in about 3-4 autocompletes and only gets confused a bit by the plugin-sdk vs the older way of implementing things. But once it has to deal with our in-house application, it can guess some things, but it's not good. Here it's OK.

In my private gaming projects in Godot... I tried using Copilot and it's just terrible, to the point of turning it off. There is Godot code out there showing how an entity handles a collision with another entity, with hundreds of variations, and it wildly hallucinates between all of them. It's just so distracting and bad. ChatGPT is OK at navigating the documentation, but that's about it.

If I'm thinking about my last job, which -- don't ask why -- was writing Java code with low-level concurrency primitives like thread pools, raw synchronized statements and atomic primitives... if I think about my experience with Copilot on code like this, I honestly feel strength leaving my body, because that would be so horrible. I once spent literal months chasing a once-in-a-billion concurrency bug in that code.

IMO, the simplest framework fill-in code segments will suffer from LLMs. But a well-coached junior can move past that stage quite quickly.

reply
markus_zhang
4 days ago
[-]
Yeah I basically treat LLM as a better Google search. It is indeed a lot better than Google if I want to find some public information, but I need to be careful and double-check.

Other than that it completely depends on luck I guess. I'm pretty sure if companies feed in-house information to it that will make it much more useful, but those agents would be privately owned and maintained.

reply
starbugs
4 days ago
[-]
In a market, scarce services will always be more valuable than abundant services. Assuming that AI will at some point be capable of replacing an SWE, to future-proof your career, you will need to learn how to provide services that AI cannot provide. Those might not be what SWEs currently usually offer.

I believe it's actually not that hard to predict what this might be:

1. Real human interaction, guidance and understanding: This, by definition, is impossible to replace with a system, unless the "system" itself is a human.

2. Programming languages will be required in the future as long as humans are expected to interface with machines and work in collaboration with other humans to produce products. In order to not lose control, people will need to understand the full chain of experience required to go from junior SWE to senior SWE - and beyond. Maybe fewer people will be required to produce more products but still, they will be required as long as humanity doesn't decide to give up control over basically any product that involves software (which will very likely be almost all products).

3. The market will get bigger and bigger to the point where nothing really works without software anymore. Software will most likely be even more important to have a unique selling point than it is now.

4. Moving to a higher level of understanding of how to adapt and learn is beneficial for any individual and actually might be one of the biggest jumps in personal development. This is worth a lot for your career.

5. The current state of software development in most companies that I know has reached a point where I find it actually desirable for change to occur. SWE should improve as a whole. It can do better than Agile for sure. Maybe it's time to "grow up" as a profession.

reply
sdybskiy
4 days ago
[-]
I just copied the html from this thread into Claude to get a summary. I think being very realistic, a lot of SWE job requirements will be replaced by LLMs.

The expertise to pick the right tool for the right job based on previous experience, which senior engineers possess, is something that can probably be taught to an LLM.

Having the ability to provide a business case for the technology to stakeholders that aren't technologically savvy is going to be a people job for a while still.

I think positioning yourself as an expert / bridge between technology and business is what will future-proof a lot of SWE, but in reality, especially at larger organizations, there will be a trimming process where the workload of what was thought to need 10 engineers can be done with 2 engineers + LLMs.

I'm excited about the future where we're able to create software quicker and more contextual to each specific business need. Knowing how to do that can be an advantage for software engineers of different skill levels.

reply
dayvid
4 days ago
[-]
I'd argue design and UX will be more important for engineers. You need taste to direct LLMs. You can automate some things and maybe have it do data-driven feedback loops, but there are so many random industries/locations with odd requirements, changes in trends, etc. that it will require someone to oversee and make adjustments.
reply
og2023
3 days ago
[-]
> I think positioning yourself as an expert / bridge between technology and business is what will future-proof a lot of SWE.

Could you provide a few examples of roles and companies where this could be applicable please?

reply
irunmyownemail
4 days ago
[-]
"The expertise to pick the right tool for the right job based on previous experience that senior engineers poses is something that can probably be taught to an LLM."

AI is just ML (machine learning), which isn't about learning at all. It's about absorbing data, then reiterating it to sound intelligent. Actual learning takes human beings with feelings, hopes, dreams, motivations, passions, etc.

reply
chasd00
4 days ago
[-]
Learn the tools, use them where they shine, and avoid them where they do not. Your best bet is to just start using LLMs in your day-to-day coding and find out what works and what doesn't.
reply
cruffle_duffle
4 days ago
[-]
This is basically the answer. It's a tool, just like all the other tools. It is both very powerful and very stupid, and the more you use it the better your intuition will be about whether a given task is good for it or bad.

It’s not gonna replace developers anytime soon. It’s going to make you more productive in some things and those that avoid it are shooting themselves in the foot but whatever.

What will happen is software will march forward eating the world like it has for so long. And this industry will continue to change like it always has. And you’ll have to learn new tools like you always did. Same as before same as it will always be.

reply
xinu2020
5 days ago
[-]
>junior to mid level software engineering will disappear mostly, while senior engineers will transition

It's more likely that the number of jobs at all levels of seniority will decrease, but none will disappear.

What I'm interested to see is how the general availability of LLM will impact the "willingness" of people to learn coding. Will people still "value" coding as an activity worth their time?

For me as an already "senior" engineer, using LLMs feels like a superpower: when I think of a solution to a problem, I can test and explore some of my ideas faster by interacting with it.

For a beginner, I feel that having all of this available can be super powerful too, but also truly demotivating. Why bother to learn coding when the LLM can already do better than you? It takes years to become "good" at coding, and motivation is key.

As a low-dan Go player, I remember feeling a bit that way when AlphaGo was released. I'm still playing Go but I've lost the willingness to play competitively, now it's just for fun.

reply
throwaway_43793
5 days ago
[-]
I think coding will stay as a hobby. You know, like there are people who still build physical stuff with wires and diodes. None of them are doing it for commercial reasons, but the ability to produce billions of transistors on a silicon die did not stop people from taking up electrical engineering as a hobby.
reply
data_block
4 days ago
[-]
I work on a pretty straightforward CRUD app in a niche domain and so far they haven’t talked about replacing me with some LLM solution. But LLMs have certainly made it a lot faster to add new features. I’d say working in a niche domain is my job security. Not many scientists want to spend their time trying to figure out how to get an LLM to make a tool that makes their life easier - external competitors exist but can’t give the same intense dedication to the details required for smaller startups and their specific requirements.

A side note - maybe my project is just really trivial, maybe I’m dumber or worse at coding than I thought, or maybe a combination of the above, but LLMs have seemed to produce code that is fine for what we’re doing especially after a few iteration loops. I’m really curious what exactly all these SWEs are working on that is complex enough that LLMs produce unusable code

reply
lcvw
4 days ago
[-]
I’ve carved out a niche of very low-level systems programming and optimization. I think it’ll be a while before LLMs can do what I do. I also moved up to staff, so I think a lot of what I do now will still exist, with junior/mid-level devs being reduced by AI.

But I am focusing on maximizing my total comp so I can retire in 10-15 years if I need to. I think most devs are underestimating where this is eventually going to go.

reply
brotchie
4 days ago
[-]
Short term defense is learning about, and becoming an expert in, using LLMs in products.

Longer term defense doesn't exist. If Software Engineering is otherwise completely automated by LLMs, we're in AGI territory, and likely recursive self-improvement plays out (perhaps not AI-foom, but a huge uptick in capability / intelligence per month / quarter).

In AGI territory, the economy, resource allocation, labor vs. capital all transition into a new regime. If problems that previously took hundreds of engineers working over multiple years can now be built autonomously within minutes, then there's no real way to predict the economic and social dynamics that result from that.

reply
Ocerge
2 days ago
[-]
This is where I've landed. If my job as a senior engineer is truly automatable, then there isn't much I can do about it other than pull up a lawnchair.
reply
ldjkfkdsjnv
5 days ago
[-]
I'm working as if in 2-3 years the max comp I will be able to get as a senior engineer will be 150k. And it will be hard to get that. It's not that the job will disappear, it's that the bar to produce working software will go way down. Most knowledge and skill sets will be somewhat commoditized.

Also pretty sure this will make outsourcing easier since foreign engineers will be able to pick up technical skills easier

reply
allan_s
5 days ago
[-]
> Also pretty sure this will make outsourcing easier since foreign engineers will be able to pick up technical skills easier

Most importantly, it will be easier to have your code comments, classes, etc. translated into English.

i.e. I used to work in a country where the native language is not related to English (i.e. not Spanish, German, French, etc.) and it was incredibly hard for students and developers to name things in English; instead it was more natural to name things in their own language.

So even an LLM that takes the code and "translates it" (something no translation tool was able to do before) opens up a huge chunk of developers to the world.

reply
code_for_monkey
5 days ago
[-]
Yeah, I think you're correct, I see a quick ceiling to senior software engineer. On the other hand I think a lot of junior positions are going to get removed, and for a while having the experience to be at a senior level will be rarer. So, there's that.
reply
alkonaut
4 days ago
[-]
I might be too optimistic but I think LLMs will basically replace the worst and most junior devs, while the job of anyone with 5 or 10+ years of experience will be babysitting AI codevelopers, instead of junior developers.

I find a lot of good use for LLMs but it's only as a multiplier with my own effort. It doesn't replace much anything of what I do that actually requires thought. Only the mechanical bits. So that's the first thing I ensure: I'm not involved in "plumbing software development". I don't plug together CRUD apps with databases, backend apis and some frontend muck. I try to ensure that at least 90% of the actual code work is about hard business logic and domain specific problems, and never "stuff that would be the same regardless of whether this thing is about underwear or banking".

If I can delegate something to it, it's every bit as difficult as delegating to another developer. Something we all know is normally harder than doing the job yourself. The difference between AI Alice and junior dev Bob, is that Alice doesn't need sleep. Writing specifications, reviewing changes and ensuring Alice doesn't screw up is every bit as hard as doing the same with Bob.

And here is the kicker: whenever this equation changes, that we have some kind of self-going AI Alice, then we're already at the singularity. Then I'm not worried about my job, I'll be in the forest gathering sticks for my fire.

reply
nickd2001
4 days ago
[-]
To me it seems possible that seniors will become even more in demand, because learning to become a decent developer is actually harder if you're distracted by leaning on LLMs. Thus, the supply of up-and-coming good new seniors may be throttled. This is because LLMs don't abstract code well.

Once upon a time electronics engineers had to know a lot about components. Then along came integrated circuits, and they had to know about them, and less about components. Once upon a time programmers had to know machine code or assembler. I've never had to know those for my job. I programmed in C++ for years and had to know plenty about memory. These days I rarely need to as much, but some basic knowledge is needed. It's fine if a student is learning to code mostly in Python, but a short course in C is probably a good idea even today.

But as for LLMs, you can't say "now I need to know less about this specific thing that's been black-boxed for me", because it isn't wrapped conveniently like that. I'm extremely glad that when I was a junior, LLMs weren't around. They really seem like a barrier to learning. It's impossible to understand all the generated code, but also difficult, without significant career experience, to judge what you need to know about and what you don't. I feel sorry for juniors today, to be honest!
reply
n_ary
4 days ago
[-]
I recall code generation from class diagrams and then low-code being declared the death of all devs.

The current generation of LLMs is immensely expensive and will become even more so if all the VC money disappears.

A FT dev is happy to sit there and deal with all the whining, meetings, alignment, 20 iterations of refactoring, architectural changes, and late Friday evenings putting out fires. To make an LLM work 40h/week with that much context would cost an insane amount, plus several people to steer it. Also, the level of ambiguous garbage spewed by management and requirements engineers, which I turn into value, is… difficult with LLMs.

Let's put it this way: before LLMs, we had wonderful outsourcing firms that cost slightly less than maintaining an in-house team; if devs were going to disappear, that would have been the nail. LLMs need steering and do not deal well with ambiguity, so I don't see a threat.

Also, for all the people singing the LLM holy song: try asking Windsurf or Cursor to generate something niche that does not exist publicly and see how well it does. As an aside, I closed several PRs last week because people started submitting generated code with 100+ LOC that could be done in just one or two lines if the authors had taken some time to review the latest release of the library.

reply
blablabla123
5 days ago
[-]
I've been quite worried about it at this point. However, I see that "this is not going to happen" is likely not going to help me. So I'd rather go with the flow and use it where reasonable, even if it's not clear to me whether AI will ever truly leave the hype stage.

FWIW I was allowed to use AI at work since ChatGPT appeared and usually it wasn't a big help for coding. However for education and trying to "debug" funny team interactions, I've surely seen some value.

My guess is though that some sort of T-shaped skillset is going to be more important while maintaining a generalist perspective.

reply
pockmarked19
4 days ago
[-]
I see this sort of take from a lot of people and I always tell them to do the same exercise. A cure for baseless fears.

Pick an LLM. Any LLM.

Ask it what the goat river crossing puzzle is. With luck, it will tell you about the puzzle involving a boatman, a goat, some vegetable, and some predator. If it doesn’t, it’s disqualified.

Now ask it to do the same puzzle but with two goats and a cabbage (or whatever vegetable it has chosen).

It will start with the goat. Whereupon the other goat eats the cabbage left with it on the shore.

Hopefully this exercise teaches you something important about LLMs.

reply
FlyingLawnmower
4 days ago
[-]
https://chatgpt.com/share/6760a122-0ec4-8008-8b72-3e950f0288...

My first try with o1. Seems right to me…what does this teach us about LLMs :)?

reply
ktxyznvda
4 days ago
[-]
Let's ask for 3 goats then. And how much did developing o1 cost? How much will another version cost? X billions of dollars per goat is not really good scaling when any number of goats or cabbages can exist.
reply
hviniciusg
4 days ago
[-]
Emmmmm... I think your argument is not valid anymore:

https://chatgpt.com/c/6760a0a0-fa34-800c-9ef4-78c76c71e03b

reply
pockmarked19
4 days ago
[-]
Seems like they caught up because I have posted this before including in chatGPT. All that means is you have to change it up slightly.

Unfortunately “change it up slightly” is not good enough for people to do anything with, and anything more specific just trains the LLM eventually so it stops proving the point.

I cannot load this link though.

reply
mofeien
4 days ago
[-]
It also means that you should update your belief about the reasoning capabilities of LLMs at least slightly. If disconfirming evidence doesn't shake your beliefs at all, you don't really have beliefs, you have an ideology.
reply
zahlman
4 days ago
[-]
The observation here is far too easily explained in other ways to be considered particularly strong evidence.

Memorizing a solution to a classic brainteaser is not the same as having the reasoning skills needed to solve it. Finding out separate solutions for related problems might allow someone to pattern-match, but not to understand. This is about as true for humans as for LLMs. Lots of people ace their courses, even at university level, while being left with questions that demonstrate a stunning lack of comprehension.

reply
isx726552
3 days ago
[-]
Or it just means anything shared on the internet gets RLHF’d / special cased.

It’s been clear for a long time that the major vendors have been watching online chatter and tidying up well-known edge cases by hand. If you have a test that works, it will keep working as long as you don’t share it widely enough to get their attention.

reply
irunmyownemail
4 days ago
[-]
"It also means that you should update your belief about the reasoning capabilities of LLMs at least slightly."

AI, LLM, ML - have no reasoning ability, they're not human, they are software machines, not people. People reason, machines calculate and imitate, they do not reason.

reply
atemerev
4 days ago
[-]
People are just analog neural circuits. They are not magic. The human brain is one physical implementation of intelligence. It is not the only one possible.
reply
tokioyoyo
4 days ago
[-]
I don't want to sound like a prick, but this is generally what people mean when they say "things are moving very fast due to the amount of investment". Six-month-old beliefs can be scrapped due to new models, research, etc.
reply
fabianholzer
4 days ago
[-]
Where "etc." includes lots of engineers who are polishing up the frontend to the model which translates text to tokens and vice versa, so that well-known demonstrations of the lack of reasoning in the model cannot be surfaced as easily as before? What reason is there to believe that the generated output, while often impressive, is based on something that resembles understanding, something that can be relied on?
reply
cdfuller
4 days ago
[-]
reply
randmeerkat
4 days ago
[-]
Here’s a question for you, have they automated trains yet? They’re literally on tracks. Until trains are fully automated, then after that cars, then later airplanes, then maybe, just maybe “ai” will come for thought work. Meanwhile, Tesla’s “ai” still can’t stop running into stopped firetrucks[1]…

[1] https://www.wired.com/story/tesla-autopilot-why-crash-radar/

reply
swishman
4 days ago
[-]
They have automated trains..
reply
irunmyownemail
4 days ago
[-]
They also have planes with autopilot. Who still flies planes? People. For that matter, all automated trains still have many people involved.
reply
hnthrow90348765
5 days ago
[-]
The simple answer is to use LLMs so you can put it on your resume. Another simple answer is to transition to a job where it's mostly about people.

The complex answer is we don't really know how good things will get and we could be at the peak for the next 10-20 years, or there could be some serious advancements that make the current generation look like finger-painting toddlers by comparison.

I would say the fear of no junior/mid positions is unfounded though since in a generation or two, you'd have no senior engineers.

reply
ThrowawayR2
5 days ago
[-]
LLMs are most capable where they have a lot of good data in their training corpus and not much reasoning is required. Migrate to a part of the software industry where that isn't true, e.g. systems programming.

The day LLMs get smart enough to read a chip datasheet and then realize the hardware doesn't behave the way the datasheet claims it does is the day they're smart enough to send a Terminator to remonstrate with whoever is selling the chip anyway so it's a win-win either way, dohohoho.

reply
schappim
2 days ago
[-]
I think what we're seeing echoes a pattern we have lived through many times before, just with new tooling. Every major leap in developer productivity - from assembly to higher-level languages, from hand-rolled infrastructure to cloud platforms, from code libraries to massive open-source ecosystems - has sparked fears that fewer developers would be needed. In practice, these advancements have not reduced the total number of developers; they have just raised the bar on what we can accomplish.

LLMs and code generation tools are no exception. They will handle some boilerplate and trivial tasks, just like autocompletion, frameworks, and package managers already do. This will make junior-level coding skills less of a differentiator over time. But it is also going to free experienced engineers to spend more time on the complex, high-level challenges that no model can solve right now - negotiating unclear requirements, architecting systems under conflicting constraints, reasoning about trade-offs, ensuring reliability and security, and mentoring teams.

It is less about "Will these tools replace me?" and more about "How do I incorporate these tools into my workflow to build better software faster?" That is the question worth focusing on. History suggests that the demand for making complex software is bottomless, and the limiting factor is almost never just "typing code." LLMs are another abstraction layer. The people who figure out how to use these abstractions effectively, augmenting their human judgment and creativity rather than fighting it, will end up leading the pack.

reply
th3byrdm4n
3 days ago
[-]
It's lowering the bar for developers to enter the marketplace, in a space that is wildly under saturated. We'll all be fine. There's tons of software to be built.

More small businesses will be able to punch-up with LLMs tearing down walled gardens that were reserved for those with capital to spend on lawyers, consultants and software engineering excellence.

It's doing the same thing as StackOverflow -- hard problems aren't going away, they're becoming more esoteric.

If you're at the edge, you're not going anywhere.

If you're in the middle, you're going to have a lot more opportunities because your throughput should jump significantly so your ROI for mom and pop shops finally pencils.

Just be sure you actually ship and you'll be fine.

reply
mlboss
4 days ago
[-]
LLMs for now only have 2-3 senses. The real shift will come when they can collect data using robotics. Right now a human programmer is needed to explain the domain to the AI and review the code based on the domain.

On the bright side, every programmer can start a business without needing to hire an army of programmers. I think we are getting back to an artisan-based economy where everyone can be a producer without a corporate job.

reply
askonomm
4 days ago
[-]
I feel the influencer crowd is massively overblowing the actual utility of LLMs. It feels akin to the "cryptocurrency will take over the world" trope 10 years ago, and yet... I don't see any crypto in my day-to-day life to this day. Will it improve general productivity and boring tasks nobody wants to do? Sure, but to think any more than that, frankly I'd like some hard evidence of it actually being able to "reason". And reason better than most devs I've ever worked with, because quite honestly humans are also pretty bad at writing software, and LLMs learn from humans, so...
reply
asdev
4 days ago
[-]
the hype bubble is nearing its top
reply
okasaki
5 days ago
[-]
Make lots of incompatible changes to libraries. No way LLMs keep up with that since their grasp on time is weak at best.
reply
siliconc0w
4 days ago
[-]
I think the real-world improvements will plateau, and it'll take a while for current enterprises just to adopt what is possible today, but that is still going to cause quite a bit of change. You can imagine us going from AI Chat Bots with RAG on traditional datastores, to AI-enhanced but still human-engineered SaaS Products, to bespoke AI-generated and maintained products, to fully E2E AI Agentic products.

An example is do you tell the app to generate a python application to manage customer records or do you tell it "remember this customer record so other humans or agents can ask for it" and it knows how to efficiently and securely do that.
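To make that contrast concrete, here is a minimal, purely hypothetical sketch of the second style: none of these names (RecordAgent, remember, recall) come from a real product, they just illustrate the shape of "tell it the outcome and let it own the details" versus generating a conventional app you then maintain yourself.

    # Hypothetical sketch only: an in-memory stand-in for an agentic runtime that
    # "remembers" records so other humans or agents can ask for them later.
    from dataclasses import dataclass, field

    @dataclass
    class RecordAgent:
        """Stand-in for an agent that decides storage, indexing and access rules itself."""
        _store: dict = field(default_factory=dict)

        def remember(self, kind: str, data: dict) -> None:
            # A real agentic product would pick the datastore and security policy;
            # a dict keeps the sketch runnable.
            self._store.setdefault(kind, []).append(data)

        def recall(self, kind: str, **filters) -> list:
            return [r for r in self._store.get(kind, [])
                    if all(r.get(k) == v for k, v in filters.items())]

    agent = RecordAgent()
    agent.remember("customer_record", {"id": 1042, "name": "Ada", "plan": "pro"})
    print(agent.recall("customer_record", id=1042))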

We'll probably see more 'AI Reliability Engineer' type roles, which will likely be around building and maintaining evaluation datasets, tracking and stomping out edge cases, figuring out human intervention/escalation, model routing, model distillation, context-window vs fine-tuning decisions, and overall intelligence-cost management.

reply
ramon156
4 days ago
[-]
Another thing I want to note: even if I get replaced by AI, I think I'd be sad for a bit, but then it'd be a fun period trying to find a "hand-focused" job, something like a bakery or chocolatier. I honestly wouldn't mind if I could do the same satisfying work but more hands-on, rather than behind a desk all day.
reply
throw4847285
4 days ago
[-]
I am not afraid of companies replacing Software Engineers with LLMs while being able to maintain the same level of quality. The real thing to worry about is that companies do what they did to QA Engineers, technical writers, infrastructure engineers, and every other specialized role in the software development process. In other words, they will try to cut costs, and the result will be worse software that breaks more often and doesn't scale very well.

Luckily, the bar has been repeatedly lowered so that customers will accept worse software. The only way for these companies to keep growing at the rate their investors expect them to is to try and cut corners until there's nothing left to cut. Software engineers should just be grateful that the market briefly overvalued them to the degree that did and prepare for a regression to the mean.

reply
cookiemonsieur
1 day ago
[-]
I don't know about all the other commenters on this thread, but my personal experience with LLMs is that it really is just a glorified stack overflow. That's how I use it anyway. It saves me a couple clicks and keystrokes on Google.

It's also incredibly useful for prototyping and doing grunt work. (ie, if you work with lambda functions on AWS, you can get it to spit out a boilerplate for you to amend).

reply
liampulles
4 days ago
[-]
The biggest "fault" of LLMs (which continues) is their compliance. Being a good software dev often means pushing back and talking through tradeoffs, and finding out what the actual business rules are. I.e. interrogating the work.

Even if these LLM tools do see massive improvements, it seems to me that they are still going to be very happy to take the set of business rules that a non-developer gives them, and spit out a program that runs but does not do what the user ACTUALLY NEEDS them to do. And the worst thing is that the business user may not find out about the problems initially, will proceed to build on the system, and these problems become deeper and less obvious.

If you agree with me on that, then perhaps what you should focus on is building out your consulting skills and presence, so that you can service the mountains of incoming consulting work.

reply
Terretta
5 days ago
[-]
For thousands of years, the existence of low cost or even free apprentices for skilled trades meant there was no work left for experts with mastery of the trade.

Except, of course, that isn't true.

reply
tuyiown
5 days ago
[-]
> the more I hear that some of them are either using AI to help them code, or feed entire projects to AI and let the AI code, while they do code review and adjustments.

It's not enough to make generalizations yet. What kind of projects ? What tuning does it need ? What kind of end users ? What kind of engineers ?

In the field I work in, I can't see how LLMs can offer a clear path to convergence on a reliable product. If anything, I suspect we will need more manual analysis to fix the insanity we receive from our providers if they start working with LLMs.

Some jobs will disappear, but I've yet to see signs of anything serious emerging yet. You're right about juniors though, but I suspect those who stop training will lose their life insurance and will starve under LLMs, either through competition or the amount of operational instability they will bring.

reply
wanobi
4 days ago
[-]
LLMs can help us engineers gain context quickly on how to write solutions. But, I don't see it replacing us anytime soon.

I'm currently working on a small team with a senior engineer. He's the type of guy who preaches letting Cursor or whatever new AI IDE is relevant nowadays do most of the work. Most of his PRs are utter trash. Time to ship is slow and code quality is trash. It's so obvious that the code is AI generated. Bro doesn't even know how to rebase properly, resulting in overwriting (important) changes instead of fixing conflicts. And guess who has to fix their mistakes (me, and I'm not even a senior yet).

reply
lukan
4 days ago
[-]
"until eventually LLMs will become so good, that senior people won't be needed any more"

You are assuming AGI will come eventually.

I assume eventually the earth will be consumed by the sun, but I am equally unworried, as I don't see it in the near future.

I am still regularly disappointed when I try out the newest hyped model. They usually fail my tasks and require lots of manual labour.

So if that gets significantly better, I can see them replacing junior devs. But without understanding, they cannot replace a programmer for any serious task. They may, however, enable more people to become good-enough programmers for their simple tasks. So less demand for less-skilled devs indeed.

My solution - the same as before - improve my skills and understanding.

reply
archagon
4 days ago
[-]
I have as much interest in the art of programming as in building products, and becoming some sort of AI whisperer sounds tremendously tedious to me. I opted out of the managerial track for the same reason. Fortunately, I have enough money saved that I can probably just work on independent projects for the rest of my career, and I’m sure they’ll attract customers whether or not they were built using AI.

With that said, looking back on my FAANG career in OS framework development, I’m not sure how much of my work could have actually been augmented by AI. For the most part, I was designing and building brand new systems, not gluing existing parts together. There would not be a lot of precedent in the training data.

reply
vbezhenar
4 days ago
[-]
So far I haven't found much use for LLM code generation. I'm using Copilot as a glorified autocomplete and that's about it. I tried to use LLM to generate more code, but it takes more time to yield what I want than to write it myself, so it's just not useful.

Now, ChatGPT has really become an indispensable tool for me, on the same level as Google and StackOverflow.

So I don't feel threatened so far. I can see the potential, and I think it's very possible for LLM-based agents to replace me eventually, probably not this generation, but a few years later - who knows. But that's just hand waving, so getting worried about a possible future is not useful for mental well-being.

reply
angoragoats
5 days ago
[-]
I think there's been a lot of fear-mongering on this topic and "the inevitable LLM take over" is not as inevitable as it might seem, perhaps depending on your definition of "take over."

I have personally used LLMs in my job to write boilerplate code, write tests, make mass renaming changes that were previously tedious to do without a lot of grep/sed-fu, etc. For these types of tasks, LLMs are already miles ahead of what I was doing before (do it myself by hand, or have a junior engineer do it and get annoyed/burnt out).

However, I have yet to see an LLM that can understand an already established large codebase and reliably make well-designed additions to it, in the way that an experienced team of engineers would. I suppose this ability could develop over time with large increases in memory/compute, but even state-of-the-art models today are so far away from being able to act like an actual senior engineer that I'm not worried.

Don't get me wrong, LLMs are incredibly useful in my day-to-day work, but I think of them more as a leap forward in developer tooling, not as an eventual replacement for me.

reply
m_ke
5 days ago
[-]
Those models will be here within a year.

Long context is practically a solved problem, and there's a ton of work now on test-time reasoning motivated by o1, showing that it's not that hard to RL a model into superhuman performance as long as the task is easy / cheap to validate (and there's work showing that if you can define the problem, you can use an LLM to validate against your criteria).

reply
angoragoats
5 days ago
[-]
I intentionally glossed over a lot in my first comment, but I should clarify that I don't believe that increased context size or RL is sufficient to solve the problem I'm talking about.

Also "as long as the task is easy / cheap to validate" is a problematic statement if we're talking about the replacement of senior software engineers, because problem definition and development of validation criteria are core to the duties of a senior software engineer.

All of this is to say: I could be completely wrong, but I'll believe it when I see it. As I said elsewhere in the comments to another poster, if your points could be expressed in easily testable yes/no propositions with a timeframe attached, I'd likely be willing to bet real money against them.

reply
m_ke
5 days ago
[-]
Sorry I wasn't clear enough, the cheap to validate part is only needed to train a large base model that can handle writing individual functions / fix bugs. Planning a whole project, breaking it down into steps and executing each one is not something that current LLMs struggle at.

Here's a recipe for a human level LLM software engineer:

1. Pretrain an LLM on as much code and text as you can (done already)

2. Fine-tune it on synthetic code-specific tasks like: (a) given a function, hide the body, make the model implement it, and validate that it's functionally equivalent to the target function (output matching); this can also have an objective to optimize the runtime of the implementation, (b) introduce bugs in existing code and make the LLM fix them, (c) make the LLM make up problems, write tests / a spec for them, then have it attempt the implementation many times until it comes up with a version that passes the tests, (d-z) a lot of other similar tasks that use linters, parsers, AST modifications, compilers, unit tests, specs validated by LLMs, and profilers to check that the produced code is valid

3. Distill this success / failure criteria validator to a value function that can predict probability of success at each token to give immediate reward instead of requiring full roll out, then optimize the LLM on that.

4. At test time use this final LLM to produce multiple versions until one passes the criteria, for the cost of an hour of a software engineer you can have an LLM produce millions of different implementations.

See papers like: https://arxiv.org/abs/2409.15254 or slides from NeurIPS that I mentioned here https://news.ycombinator.com/item?id=42431382
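To make step 4 above concrete, here's a rough, runnable sketch of the sample-until-it-passes idea. It is only an illustration, not anyone's actual pipeline: generate_candidate() is a placeholder for whatever LLM call you'd use (hardcoded here so the example runs), and the validator is simply "do the unit tests pass".

    # Sketch of best-of-N sampling with a cheap validator (unit tests), per step 4 above.
    import subprocess
    import sys
    import tempfile
    import textwrap

    def generate_candidate(spec: str, attempt: int) -> str:
        # Placeholder for an LLM call; a real loop would also feed back failure output.
        return textwrap.dedent("""
            def add(a, b):
                return a + b
        """)

    def passes_tests(candidate_src: str, test_src: str) -> bool:
        # Validate by executing the candidate plus its tests in a fresh interpreter.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(candidate_src + "\n" + test_src)
            path = f.name
        return subprocess.run([sys.executable, path], capture_output=True).returncode == 0

    def best_of_n(spec: str, test_src: str, n: int = 8):
        for attempt in range(n):
            candidate = generate_candidate(spec, attempt)
            if passes_tests(candidate, test_src):
                return candidate  # first candidate that satisfies the checks wins
        return None  # nothing passed; escalate to a human

    tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"
    print(best_of_n("write add(a, b) that returns the sum", tests))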

reply
angoragoats
4 days ago
[-]
> At test time use this final LLM to produce multiple versions until one passes the criteria, for the cost of an hour of a software engineer you can have an LLM produce millions of different implementations.

If you're saying that it takes one software engineer one hour to produce comprehensive criteria that would allow this whole pipeline to work for a non-trivial software engineering task, this is where we violently disagree.

For this reason, I don't believe I'll be convinced by any additional citations or research, only by an actual demonstration of this working end-to-end with minimal human involvement (or at least, meaningfully less human involvement than it would take to just have engineers do the work).

edit: Put another way, what you describe here looks to me to be throwing a huge number of "virtual" low-skilled junior developers at the task and optimizing until you can be confident that one of them will produce a good-enough result. My contention is that this is not a valid methodology for reproducing/replacing the work of senior software engineers.

reply
m_ke
4 days ago
[-]
That's not what I'm saying at all. I'm saying that there's a trend showing that you can improve LLM performance significantly by having it generate multiple responses until it produces one that meets some criteria.

As an example, Hugging Face just posted an article showing this for math, where with some sampling you can get a 3B model to outperform a 70B one: https://huggingface.co/spaces/HuggingFaceH4/blogpost-scaling...

Formalizing the criteria is not as hard as you're making it out to be. You can have an LLM listen to a conversation with the "customer", ask follow up questions and define a clear spec just like a normal engineer. If you doubt it open up chatGPT, tell it you're working on X and ask it to ask you clarifying questions, then come up with a few proposal plans and then tell it which plan to follow.

reply
angoragoats
4 days ago
[-]
> That's not what I'm saying at all. I'm saying that there's a trend showing that you can improve LLM performance significantly by having it generate multiple responses until it produces one that meets some criteria.

I apologize for misinterpreting what you were saying -- I was clearly taking "for the cost of an hour of a software engineer" to mean something that you didn't intend.

> As an example, Hugging Face just posted an article showing this for math, where with some sampling you can get a 3B model to outperform a 70B one

This is not relevant to our discussion. Again, I'm reasonably sure that I'm not going to be convinced by any research demonstrating that X new tech can increase Y metric by Z%.

> Formalizing the criteria is not as hard as you're making it out to be. You can have an LLM listen to a conversation with the "customer", ask follow up questions and define a clear spec just like a normal engineer. If you doubt it open up chatGPT, tell it you're working on X and ask it to ask you clarifying questions, then come up with a few proposal plans and then tell it which plan to follow.

This is much more relevant to our discussion. Do you honestly feel this is an accurate representation of how you'd define the requirements for the pipeline you outlined in your post above? Keep in mind that we're talking about having LLMs work on already-existing large codebases, and I conceded earlier that writing boilerplate/base code for a brand new project is something that LLMs are already quite good at.

Have you worked as a software engineer for a long time? I don't want to assume anything, but all of your points thus far read to me like they're coming from a place of not having worked in software much.

reply
m_ke
4 days ago
[-]
> Have you worked as a software engineer for a long time? I don't want to assume anything, but all of your points thus far read to me like they're coming from a place of not having worked in software much.

Yes I've been a software engineer working in deep learning for over 10 years, including as an early employee at a leading computer vision company and a founder / CTO of another startup that built multiple large products that ended up getting acquired.

> I apologize for misinterpreting what you were saying -- I was clearly taking "for the cost of an hour of a software engineer" to mean something that you didn't intend.

I meant that unlike a software engineer, the LLM can do a lot more iterations on the problem given the same budget. So if your boss comes and says "build me a new dashboard page", it can generate thousands of iterations and use a human-aligned reward model to rank them based on which one your boss might like best (that's what the test-time compute / sampling at inference does).

> This is not relevant to our discussion. Again, I'm reasonably sure that I'm not going to be convinced by any research demonstrating that X new tech can increase Y metric by Z%.

These are not just research papers, people are reproducing these results all over the place. Another example from a few minutes ago: https://x.com/DimitrisPapail/status/1868710703793873144

> This is much more relevant to our discussion. Do you honestly feel this is an accurate representation of how you'd define the requirements for the pipeline you outlined in your post above? Keep in mind that we're talking about having LLMs work on already-existing large codebases,

I'm saying this will be solved pretty soon. Working with large codebases doesn't work well right now because last year's models had shorter context and were not trained to deal with anything longer than a few thousand tokens. Training these models is expensive, so all of the coding assistant tools like Cursor / Devin are sitting around waiting for the next iteration of models from Anthropic / OpenAI / Google to fix this issue. We will most likely have announcements of new long-context LLMs in the next 1-2 weeks from Google / OpenAI / Deepseek / Qwen that will make major improvements on large code bases.

I'd also add that we probably don't want huge sprawling code bases. When the cost of a small custom app that solves just your problem goes to zero, we'll have way more tiny apps / microservices that are much easier to maintain and replace when needed.

reply
angoragoats
4 days ago
[-]
> These are not just research papers, people are reproducing these results all over the place. Another example from a few minutes ago: https://x.com/DimitrisPapail/status/1868710703793873144

Maybe I'm not making myself clear, but when I said "demonstrating that X new tech can increase Y metric by Z%" that of course included reproduction of results. Again, this is not relevant to what I'm saying.

I'll repeat some of what I've said in several posts above, but hopefully I can be clearer about my position: while LLMs can generate code, I don't believe they can satisfactorily replace the work of a senior software engineer. I believe this because I don't think there's any viable path from (A) an LLM generates some code to (B) a well-designed, complete, maintainable system is produced that can be arbitrarily improved and extended, with meaningfully lower human time required. I believe this holds true no matter how powerful the LLM in (A) gets, how much it's trained, how long its context is, etc, which is why showing me research or coding benchmarks or huggingface links or some random twitter post is likely not going to change my mind.

> I'd also add that we probably don't want huge sprawling code bases

That's nice, but the reality is that there are lots of monoliths out there, including new ones being built every day. Microservices, while solving some of the problems that monoliths introduce, also have their own problems. Again, your claims reek of inexperience.

Edit: forgot the most important point, which is that you sort of dodged my question about whether you really think that "ask ChatGPT" is sufficient to generate requirements or validation criteria.

reply
fsloth
4 days ago
[-]
My advice? Focus on the business value, not the next ticket. Understand what the actual added value of your work is to your employer. It won’t help in the day-to-day tasks but it will help you navigate your career with confidence.

Personally - and I realize this is not generalizable advice - I don’t consider myself a SWE but a domain expert who happens to apply code to all of his tasks.

I’ve been intentionally focusing on a specific niche - computer graphics, CAD and computational geometry. For me writing software is part of the necessary investment to render something, model something or convert something from domain to domain.

The fun parts are really fun, but the boring parts are mega-boring. I'm actually eagerly awaiting LLMs reaching some level of human parity, because there simply isn't enough talent in my domains to do all the things that would be worthwhile to do (cost and return on investment, right).

The reason is my domain is so niche you can't webscrape & label your way to the intuition and experience of two decades working in various industries, from graphics benchmarking and automotive HUDs to industrial mission-critical AEC workflows and realtime maps.

There is enough knowledge to train LLMs to get a hint as soon as I tie a few concepts together, and then they fly. The code they write at the moment, apart from simple subroutines, is not good enough to act as an unsupervised assistant… most of the code is useless, honestly. But I'm optimistic and hope they will improve.

reply
iepathos
4 days ago
[-]
Better tools that accelerate how fast engineers can produce software? That's not a threat, just a boon. I suspect the actual transition will just be people learning/focusing on somewhat different higher level skills rather than lower level coding. Like going from assembly to c, we're hoping we can transition more towards natural language.

> junior to mid level software engineering will disappear mostly

People don't magically go to senior. Can't get seniors without junior and mid devs to level up. We'll always need to take in and train new blood.

reply
bawolff
4 days ago
[-]
AI's are going to put SWE's out of a job at roughly the same time as bitcoin makes visa go bankrupt.

Aka never, or at least far enough in the future that you can't really predict or plan for it.

reply
sweetheart
5 days ago
[-]
Learning woodworking in order to make fine furniture. This is mostly a joke, but the kind that I nervously laugh at.
reply
torlok
4 days ago
[-]
You'll go from competing with Google to competing with IKEA.
reply
anticensor
4 days ago
[-]
IKEA is a hybrid venture.
reply
splwjs
4 days ago
[-]
Right now LLMs have a slight advantage over Stack Overflow etc. in that they'll react to your specific question/circumstances, but they also require you to double-check everything they spit out. I don't think that will ever change, and I think most of the hype comes from people whose salaries depend on it being right around the corner, or people who are playing a speculation game (if I learn this tool I'll never have to work again / if I avoid this tool I'll be doomed to poverty forever).
reply
karmasimida
4 days ago
[-]
The future at this point is... unpredictable. It's an unsatisfactory answer, but it's true.

So what does it take for LLM to replace SWE?

1. It needs to get better, much better

2. It needs to be cheaper still

Those two things are at odds with each other. If scaling laws are the god we're praying to, then they have apparently already hit diminishing returns; maybe if we scale up 1000x we can get AGI, but that won't be economically reasonable for a long time.

Back to reality: what does it mean to survive in a market assuming coding assistants are going to get marginally better over, say, the next 5 years? Just use them; they are genuinely useful tools for accomplishing really boring and mundane stuff. Things like writing Dockerfiles will go to LLMs, and humans won't be able to, and won't have to, compete. They are also great at giving a second opinion; it's fun to hear what an LLM thinks of your design proposal and build upon its advice.

Overall, I don't think much will change overnight. The industry might experience a contraction in terms of how many developers it hires, because I think for a long time the demand will not be there. For people already in the industry, as long as you keep learning, it is probably going to be fine, well, for now.

reply
arminiusreturns
4 days ago
[-]
Sysadmin here. That's what we used to be called. Then some fads came, some went. DevOps, SRE. Etc.

I have notes on particular areas I am focusing on, but I have a small set of general notes on this, and they seem to apply to you SWEs also.

Headline: Remember data is the new oil

Qualifier: It's really all about IP portfolios these days

1) Business Acumen: How does the tech serve the business/client needs, from a holistic perspective of the business? (eg: sysadmins have long had to have big-picture finance, ops, strategy, industry, etc. knowledge) - aka - turn tech knowledge into business knowledge

2) Leadership Presence: Ability to meet w/c-suite, engineers, clients, etc, and speak their languages, understand their issues, and solve their issues. (ex: explain ROI impacts for proposals to c-suite)

3) Emotional Intelligence: Relationship building in particular. (note: this is the thing I neglected the most in my career and regretted it!)

4) Don't be afraid to use force multiplier tools. In this discussion, that means LLMs, but it can mean other things too. Adopt early, keep up with tooling, but focus on the fundamental tech and don't get bogged down into proprietary stockholm syndrome. Augment yourself to be better, don't try to replace yourself.

----

Now, I know that's a simplistic list, but you asked so I gave you what I had. What I am doing (besides trying to get my mega-uber-huge-sideproject off the ground) is recentering on certain areas I don't think are going anywhere: on-prem, datacenter buildouts, high-compute, ultra-low-latency, scalable systems, energy, construction of all the previous things, and the banking knowledge to round it all out.

If my side-project launch fails, I'm even considering data-center sales instead of the tech side. Why? I'm tired of rescuing the entire business to no fanfare while sales people get half my salary in a bonus. Money aside, I can still learn and participate in the builds as sales (see it happen all the time).

In other words, I took my old-school niche set of knowledge and adapted it over the years as the industry changed, focusing on what I do best (in this case, operations - aka - the ones who actually get shit into prod, and fix it when it's broke, regardless of the title associated).

reply
jameslk
4 days ago
[-]
I worked at a big tech co years ago. Strolling up to my bus stop in my casual attire while others around me wore uniforms, rushing to get to work. A nice private shuttle would pick me up. It would deliver me pretty much at the doors of the office. If it were raining, somebody would be standing outside to hand me an umbrella even though the door was a short distance away. Other times there would be someone there waiting on the shuttle to hand me a smoothie. When I got to the door, there would be someone dedicated to opening it. When I got inside, a breakfast buffet fit for a king would be served. Any type of cuisine I wanted was served around campus for lunch and dinner, and it was high quality. If I wanted dessert, there were entire shops (not one but many) serving free handcrafted desserts. If I wanted my laundry done, someone would handle that. If I wanted snacks, each floor of my office had its own little 7/11. If I didn't feel like having all this luxury treatment, I'd just work from home and nobody cared.

All of that, and I was being paid a very handsome amount compared to others outside of tech? Several times over the national average? For gluing some APIs together?

What other professions are like this where there's a good chunk of people who can have such a leisurely life, without taking much risk, and get so highly compensated compared to the rest? I doubt there's many. At some point, the constrained supply must answer to the high demand and reality shows up at the door.

I quit a year into the gig to build my own company. Reality is much different now. But I feel like I've gained many more skills outside of just tech that make me more equipped for whatever the future brings.

reply
gerash
4 days ago
[-]
Two thoughts:

1. Similar to autonomous driving, going from 90% to 99% reliability can take longer than going from 0 to 90%.

2. You can now use LLMs and public clouds to abstract away a lot skills that you don't have (managing compute clusters, building iOS and Android apps, etc.). So you can start your 3 person company and do things that previously required 100s of people.

IMHO LLMs and cloud computing are very similar where you need a lot of money to build an offering so perhaps only a few big players are going to survive.

reply
bashfulpup
4 days ago
[-]
Long horizon problems are a completely unsolved problem in AI.

See the GAIA benchmark. While this surely will be beat soon enough, the point is that we do exponentially longer horizon tasks than that benchmark every single day.

It's very possible we will move away from raw code implementation, but the core concepts of solving long horizon problems via multiple interconnected steps are exponentially far away. If AI can achieve that, then we are all out of a job, not just some of us.

Take 2 competing companies that have a duopoly on a market.

Company 1 uses AI and fires 80% of their workforce.

Company 2 uses AI and keeps their workforce.

AI in its current form is a multiplier; we will see Company 2 massively outcompete the first as each employee now performs 3-10 people's tasks, so Company 2's per-person output increases dramatically. As a result, it significantly weakens the first company. Standard market forces haven't changed.

The reality, as I see it, is that interns will now be performing at senior SWE level, senior SWEs will now be performing at VP-of-engineering level, and VPs of engineering will now be performing at nation-state levels of output.

We will enter an age where goliath companies will be commonplace. Hundreds or even thousands of mega trillion-dollar companies. Billion-dollar startups will be expected almost at launch.

Again, unless we magically find a solution to long horizon problems (which we haven't even slightly found). That technology could be 1 year or 100 years away. We're waiting on our generation's Einstein to discover it.

reply
charlescurt123
4 days ago
[-]
Companies that have no interest in growth and are already heavily entrenched have no purpose for increasing output though. They will fire everyone and behave the same?

On the other hand, that means they are weaker if competition comes along, as consumers and businesses would be expected to demand significantly more once comparisons are possible.

reply
ImaCake
4 days ago
[-]
AI doesn't have institutional knowledge and culture. Company 2 would be leveraging LLMs while retaining its culture and knowledge. I imagine the lack of culture is appealing to some managers, but that is also one of its biggest weaknesses.
reply
gorgoiler
4 days ago
[-]
What does it mean to be a software engineer? You know the syntax, standard library, and common third party libraries of your domain. You know the runtime, and the orchestration around multiple instances of your runtime. You know how to communicate between these instances and with third-party runtimes like databases and REST APIs.

A large model knows all of this as well. We already rely on generative language model conversations to fill in the knowledge gaps that Googling for documentation (or “how do I do X?” stackoverflow answers) filled.

What’s harder is debugging. A lot of debugging is guesswork: taking actions, taking notes, and brainstorming possible reasons why X crashes on Y input.

Bugs that boil down to isolating a component and narrowing down what’s not working are hard. Being able to debug them could be the moat that will protect us SWEs from redundancy. Alternatively, pioneering all the new ways of getting reproducible builds and reproducible environments will be the route to eliminating this class of bug entirely, or at least being able to confidently say that some bug was probably due to bad memory, bad power supplies, or bad luck.

reply
kqr
4 days ago
[-]
Assuming LLMs grow past their current state where they are kind of hit-and-miss for programming, I feel like this question is very similar to e.g. asking an assembly programmer what they are doing to future-proof their career in light of high-level languages. I can't recall a situation in which the correct answer would not be one of

(1) "Finding a niche in which my skills are still relevant due to technical constraints",

(2) "Looking for a different job going forward", or

(3) "Learning to operate the new tool".

Personally I'm in category (3), but I'd be most interested in hearing from others who think they are on track to (1). What are the areas where LLMs will not penetrate due to technical reasons, and why? I'm sure these areas exist, I just have trouble imagining which they are!

----

One might argue that there's another alternative of finding a niche where LLMs won't penetrate due to regulatory constraints, but I'm not counting this because that tends to be a short-term optimisation.

reply
ElevenLathe
4 days ago
[-]
I spent approximately a decade trying to get the experience and schooling necessary to move out of being a "linux monkey" (read: responding to shared webhosting tickets, mostly opened by people who had broken their Wordpress sites) to being an SRE.

Along the way I was an "incident manager" at a couple different places, meaning I was basically a full-time Incident Commander under the Google SRE model. This work was always fun, but the hours weren't great (these kind of jobs are always "coverage" jobs where you need to line up a replacement when you want to take leave, somebody has to work holidays, etc.). Essentially I'd show up at work and paint the factory by making sure our documentation was up to date, work on some light automation to help us in the heat of the moment, and wait for other teams to break something. Then I'd fire up a bridge and start troubleshooting, bringing in other people as necessary.

This didn't seem like something to retire from, but I can imagine it being something that comes back, and I may have to return to it to keep food on the table. It is exactly the kind of thing that needs a "human touch".

reply
Panzer04
4 days ago
[-]
My view is that the point at which LLMs could cause the death of SWE as a profession is also the point at which practically all knowledge professions could be killed. To fully replace SWEs still requires the LLM to have enough external context that it could replace a huge range of jobs, and it'd probably be best to deal with the consequences of that as and when they happen.
reply
jarsin
4 days ago
[-]
So to take that a step further couldn't all businesses be killed as well?
reply
Panzer04
3 days ago
[-]
Sure. What's the point in worrying about that now? That circumstance is so far removed from current experience I'm not sure there's anything useful we could do to prepare before it happens.
reply
ForHackernews
4 days ago
[-]
No, because rich people own businesses.
reply
j45
4 days ago
[-]
I think it might be the opposite. It's not advisable to give advice to young SWEs when you might soon be in the same position yourself.

Junior devs aren't going away. What might shrink is the gap between where a junior dev is hired and the point where they actually start adding value, along with the effort and investment needed to get them there before they hop ship.

AI agents will become their coding partners that can teach and code with the junior dev, leading to more reliable contributions to a code base, sooner.

By teach and code with, I mean explaining so much of the basic stuff, step by step, tirelessly, in the exact way each junior dev needs, to help them grow and advance.

This will allow SWEs to move up the ladder and take on more valuable work (understanding problems and opportunities, for example), solving higher-level problems from a higher perspective.

Specifically, the focus of junior devs on problems or problem sets could give way to placing them in front of opportunities to be figured out and solved.

LLMs can write code today; I'm not sure they can manage clean changes to an entire codebase on their own today at scale, or for many. Some folks have probably quietly figured this out for their particular use cases.

reply
Fizzadar
4 days ago
[-]
Not (currently) worried. Lots of comments here about intellisense being a similar step up and I’ve had it disabled for years (I find it more a distraction than use). So far LLMs feel like a more capable - but more error prone - intellisense. Unless it’s stamping out boiler plate or very menial tasks I have yet to find a use for LLMs that doesn’t take up more time fixing it than just writing it in the first place.

Who knows though in 10 years time I imagine things will be radically different and I intend to periodically use the latest AI assistance so I keep up, even if it’s a world I don’t necessarily want. Part of why I love to code is the craft and AI generated code loses that for me.

I do, however, feel really lucky to be at the senior end of things now. Because I think junior roles are seriously at risk. Lots of the corrections needed for LLMs seem to be the same kind of errors new grads make.

The problem is - what happens when all of us seniors are dead or retired and there are no juniors, because they got wiped out by AI?

reply
tinthedev
5 days ago
[-]
Real software engineering is as far from "only writing code" as construction work is from civil engineering.

> So, fellow software engineers, how do you future-proof your career in light of, the inevitable, LLM take over?

I feel that software engineering being taken over by LLM is a pipe dream. Some other, higher, form of AI? Inevitably. LLMs, as current models exists and expand? They're facing a fair few hurdles that they cannot easily bypass.

To name a few: requirement gathering, scoping, distinguishing between different toolsets, comparing solutions objectively, keeping up with changes in software/libraries... etc. etc.

Personally? I see LLMs tapering off in new developments over the following few years, and I see salesmen trying to get a lot of early adopters to hold the bag. They're overpromising, and the eventual under-delivery will hurt. Much like the AI winter did.

But I also see a new paradigm coming down the road, once we've got a stateful "intelligent" model that can learn and adapt faster, and can perceive time more innately... but that might take decades (or a few years, you never know with these things). I genuinely don't think it'll be a direct evolution of LLMs we're working on now. It'll be a pivot.

So, I future-proof my career simply: I keep up with the tools and learn how to work around them. When planning my teams, I don't intend to hire 5 juniors to grind code, but 2 who'll utilize LLMs to teach them more.

I ask more of my junior peers for their LLM queries before I go and explain things directly. I also teach them to prompt better. A lot of stuff we've had to explain manually in the past can now be prompted well, and stuff that can't - I explain.

I also spend A LOT of time teaching people to take EVERYTHING AI-generated with generous skepticism. Unless you're writing toys and tiny scripts, hallucinations _will_ waste your time. Often the juice won't be worth the squeeze.

More than a few times I've spent a tedious hour navigating 4o's or Claude's hallucinated confident failures, instead of a pleasant and productive 45 minutes writing the code myself... and from peer discussions, I'm not alone.

reply
jonahbenton
4 days ago
[-]
I think in principle LLMs are no different from other lowercase-a abstractions that have substantially boosted productivity while lowering cost, from compilers to languages to libraries to protocols to widespread capabilities like payments and cloud services and edge compute and more and more. There is so much more software that can be written, and may be rewritten, abstract machines that can be built and rebuilt, across domains of hardware and software, that become enabled by this new intelligence-as-a-service capability.

I think juniors are a significant audience for LLM code production because they provide tremendous leverage for making new things. For more experienced folk, there are lots of choices that resemble prior waves of adoption of new state of the art tools/techniques. And as it always goes, adoption of those in legacy environments is going to go more slowly, while disruption of legacy products and services that have a cost profile may occur more frequently as new economics for building and then operating something intelligent start to appear.

reply
koe123
4 days ago
[-]
Are LLMs / AI attaining better results than the data they were trained on? For me, the answer is no: LLMs are always imperfectly modeling the underlying distribution of the training dataset.

Do we have sufficient data that spans the entire problem space that SWE deals with? Probably not, and even if we did it would still be imperfectly modeled.

Do we have sufficient data to span the space of many routine tasks in SWE? It seems so, and this is where the LLMs are really nice: e.g., scripting, regurgitating examples, etc.

So to me, much like previous innovation, it will just shift job focus away from the things the innovation can do well, rather than replacing the field as a whole.

One pet theory I have is that we currently suck at assessing model performance. Sure, vibes-based analysis of the model's outputs makes them look amazing, but isn't that the literal point of RLHF? How good are these outputs really?

reply
raxxorraxor
4 days ago
[-]
I do use LLM for various support tasks. Is it super necessary? Probably not, but it really helps.

What they excel at, in my experience, is translating code to different languages, and they do find alternative dependencies in different runtime environments.

Code can be weird and prompting can take longer than writing it yourself, but it is still nice support, even if you need to check the results. Where I need to embed some of my own code, I only use a local LLM.

I am still not sure if LLM are a boon for learning to code or if it is a hindrance. I tend to think it is a huge help.

As for future-proofing your career, I don't think developers need to be afraid that AI will write all code for us yet, simply because non-software-engineers suck at defining good requirements for software. I also believe that LLMs seem to hit walls on precision.

Some other industries might change significantly though.

reply
aussieguy1234
4 days ago
[-]
I've tended to think of the current stage of AI as a productivity boost along the lines of let's say, using an IDE vs coding with a text editor.

It's also good as a replacement for reading the docs/Google search, especially with search getting worse and full of SEO spam lately.

It's a big help, but it doesn't really replace the human. When the human can be replaced, any job done in front of a computer will be at risk, not just coders. I hope when that happens there will be full robot bodies with AI to do all of the other work.

Also, I know of several designers who can't code but are planning to use LLMs to make their startup ideas a reality by building an MVP. If said startups take off, they will probably hire real human coders, thus creating more coding jobs. Jobs that would not exist without LLMs getting the original ideas off the ground.

reply
snikeris
4 days ago
[-]
A quote from SICP:

> First, we want to establish the idea that a computer language is not just a way of getting a computer to perform operations, but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute.

From this perspective, the code base isn’t just an artifact left over from the struggle of getting the computer to understand the business’s problems. Instead, it is an evolving methodological documentation (for humans) of how the business operates.

Thought experiment: suppose that you could endlessly iterate with an LLM using natural language to build a complex system to run your business. However, there is no source code emitted. You just get a black box executable. However, the LLM will endlessly iterate on this black box for you as you desire to improve the system.

Would you run a business with a system like this?

For me, it depends on the business. For example, I wouldn’t start Google this way.

reply
Volrath89
4 days ago
[-]
(10+ years of experience here.) I will be starting training for a commercial pilot license next year. The pay is much less than that of a software engineer, but I think this job is already done for most of us; only the top 5% will survive. I don’t think I’m part of that top, and I don’t want to move into management or PO roles, so I am done with tech.
reply
nimbleplum40
4 days ago
[-]
What makes you believe that commercial pilot is safer from AI than software engineering?
reply
Volrath89
4 days ago
[-]
Big commercial airplanes basically already fly themselves. But people feel safer with pilots on board just in case, I don’t think that’s going to change.
reply
nosbo
4 days ago
[-]
You do realise planes already fly themselves, right? And at some airports they even land and take off.
reply
tech_ken
4 days ago
[-]
I think it's about evaluating the practical strengths and weaknesses of genAI for coding tasks, and trying to pair your skillset (or areas of potentially quick skill learning) with the weaknesses. Try using the tools and see what you like and dislike. For example I use a code copilot for autocomplete and it's saving my carpals; I'm not a true SWE more a code-y DS, but autocomplete on repetitive SQL or plotting cells is a godsend. It's like when I first learned vi macros, except so much simpler. Not sure what your domain is, but I'd wager there are areas that are similar for you; short recipes or utils that get reapplied in slightly different ways across lots of different areas. I would try and visualize what your job could look like if you just didn't have to manually type them; what types of things do you like doing in your work and how can you expand them to fill the open cycles?
reply
usixk
4 days ago
[-]
LLMs are models and therefore require data, including new data. When it comes to obscure tasks, niche systems, and peculiar integrations, LLMs seem to struggle with that nuance.

So should you be worried they will replace you? No. You should worry about not adopting the technology in some form, otherwise your peers will outpace you.

reply
code_for_monkey
5 days ago
[-]
I'm hoping I can transition to some kind of product or management role since frankly I'm not that good at coding anyway (I don't feel like I can pass a technical interview anymore, tbh).

I think a lot of engineers are in for some level of rude awakening. I think a lot of engineers haven't applied much business/humanities thinking to this, and I think a lot of corporations care less about code quality than even our most pessimistic estimates. It wouldn't surprise me if cuts over the next few years get even deeper, and I think a lot of high-performing (read: high-paying) jobs are going to get cut. I've seen so many comments like "this will improve engineering overall, as bad engineers get laid off" and I don't think it's going to work like that.

Anecdotal, but no one from my network has actually recovered from the post-COVID layoffs, and the layoffs haven't even stopped. I know loads of people who don't feel like they'll ever get a job as good as the one they had in 2021.

reply
throwaway_43793
5 days ago
[-]
What's your plan to transition into product/management?
reply
code_for_monkey
4 days ago
[-]
Right now? Keeping my eyes peeled for what people in my network post about needing. Unfortunately I don't have much of a plan right now, sorry.
reply
polotics
4 days ago
[-]
This question is so super weird, because:

Ask an LLM to generate you 100 more lines of code, no problem you will get something. Ask the same LLM to look at 10000 lines of code and intelligently remove 100... good luck with that!

Seriously, I tried uploading some (but not all) of my company's source code to our private Azure OpenAI GPT-4o for analysis, as a 48 MB cora-generated context file, and really the usefulness is not that great. And don't get me started on Copilot's suggestions.

Someone really has to know their way around the beast, and LLM's cover a very very small part of the story.

I fear that the main effect of LLMs will be that developers that have already for so long responded to their job-security fears with obfuscation and monstrosity... will be empowered to produce even more of that.

reply
jerjerjer
4 days ago
[-]
> Ask an LLM to generate you 100 more lines of code, no problem you will get something. Ask the same LLM to look at 10000 lines of code and intelligently remove 100... good luck with that!

These two tasks have very different difficulty levels, though. It would be the same with a human coder. If you give me a new 10k-SLOC codebase and ask me to add a method to cover some new case, I can probably do it in an hour to a day, depending on my familiarity with the language, subject matter, the codebase's overall state, documentation, etc.

A new 10k codebase and a task of removing 100 lines? That's probably at least half a week to understand how it all works (disregarding simple cases like a hundred-line block of commented-out old code) before I can make such a change safely.

reply
habosa
4 days ago
[-]
Maybe the best thing to do is just continue practicing the art of writing code without LLMs. When you're the last person who can do it, it might be even more valuable than it is today.

(this is my naive tactic, I am sure sama and co will find a way to suck me down the drain with everyone else in the end)

reply
uptownfunk
4 days ago
[-]
For founders it’s great. You don’t need a ton of startup capital to actually start something; it’s easier than ever. I think this means there will be less of a supply of good early-stage deals, and if VCs can’t get in on the ground floor it becomes harder to get the multiples they sell to their investors. It also means that the good companies will be able to finance their own growth, which means that whatever is left in the later stages just won’t be as compelling. As an LP you now have to look at potentially other asset classes, or find investors who are going to adapt to still find success. Otherwise, I think AI is most disruptive here and now to VCs and, by implication, their capital markets. I think late-stage financing and the IPO market should not change substantially.
reply
ravedave5
4 days ago
[-]
So what I've seen so far is that LLMs are amazing for small self contained problems. Anything spanning a whole project they aren't quite up to the task yet. I think we're going to need a lot more processing power to get to that point. So our job will change, but I have a feeling it will be slow and steady.
reply
DogLover_
4 days ago
[-]
I think people are coping. Software engineering has only gotten easier over time. Fifteen years ago, knowing how to code made you seem like a wizard, and learning was tough - you had to read books, truly understand them, and apply what you learned. Then came the web and YouTube, which simplified things a lot. Now with LLMs, people with zero experience can build applications. Even I find myself mostly prompting when working on projects.

Carmack’s tweet feels out of touch. He says we should focus only on business value, not engineering. But what made software engineering great was that nerds could dive deep into technical challenges while delivering business value. That balance is eroding - now, only the business side seems to matter.

reply
65
4 days ago
[-]
LLMs are not 100% correct 100% of the time. LLMs are subjective. Code should work 100% of the time, be readable, and objective.

We also already have "easier ways of writing software" - website builders, open source libraries, StackOverflow answers, etc.

Software builds on itself. It's already faster to copy and paste someone's GitHub repo of their snake game than to ask an LLM to build a snake game for you. Software problems will continue to get more challenging as we create more complex software with unsolved answers.

If anything, software engineers will be more and more valuable in the future (just as the past few decades have shown how software engineers have become increasingly more valuable). Those that code with LLMs won't be able to retain their jobs solving the harder problems of tomorrow.

reply
marssaxman
3 days ago
[-]
I will do the same thing I did on all of the previous occasions when new automation ate my job: move up the ladder of abstraction and keep on working.

I haven't written any actual code for a couple of decades, after all: I just waffle around stitching together vague, high-level descriptions of what I want the machine to do, and robots write the code for me! I don't even have to manage memory anymore; the robots do it all. The robots are even getting pretty good at finding certain kinds of bugs now. A wondrous world of miracles, it is... but somehow there are still plenty of jobs for us computer-manipulating humans.

reply
perrygeo
4 days ago
[-]
> My prediction is that junior to mid level software engineering will disappear mostly, while senior engineers will transition to be more of a guiding hand to LLMs output

This. Programming will become easier for everyone. But the emergent effect will be that senior engineers become more valuable, juniors much less.

Why? It's an idea multiplier. 10x of near-zero is still almost zero. And 10x of someone who's a 10 already - they never need to interact with a junior engineer again.

> until eventually LLMs will become so good, that senior people won't be needed any more.

Who will write the prompts? How do you measure the success? Who will plan the market strategy? Humans are needed in the loop by definition as we build software to achieve human goals. We'll just need significantly fewer people to achieve them.

reply
koliber
4 days ago
[-]
All senior engineers were junior engineers at one point.

I worry where we will get the next generation of senior engineers.

reply
perrygeo
4 days ago
[-]
Exactly. It's already bad, with seniors being pushed to 100% with 0% slack to train up the next generation. No time for that. LLMs will make this worse.
reply
__MatrixMan__
4 days ago
[-]
The best software is made by people who are using it. So I figure we should all go learn something that interests us which could use some more software expertise. Stop being SWE's and start being _____'s with a coding superpower.

AI aside, we probably should have done this a long time ago. Software for software's sake tends to build things that treat the users poorly. Focusing on sectors that could benefit from software, and not treating software itself like a sector, seems to me like a better way.

I know that sounds like giving up, but look around and ask how much of the software we work on is actually helping anybody. Let's all go get real jobs. And if you take an honest look at your job and think it's plenty real, well congrats, but I'd wager you're in the minority.

reply
ericb
3 days ago
[-]
For now, taste and debugging still rule the day.

o1 designed some code for me a few hours ago where the method it named "increment" also did the "limit-check", and "disable" functionality as side-effects.

In the longer run, SWE's evolve to become these other roles, but on-steroids:

- Entrepreneur
- Product Manager
- Architect
- QA
- DevOps
- Inventor

Someone still has to make sure the solution is needed, the right fit for the problem given the existing ecosystems, check the code, deploy the code and debug problems. And even if those tasks take fewer people, how many more entrepreneurs become enabled by fast code generation?

reply
closeparen
4 days ago
[-]
My job is not to translate requirements into code, or even particularly to create software, but to run a business process for which my code forms the primary rails. It is possible that advanced software-development and reasoning LLMs will erode some of the advantage that my technical and analytical skills give me for this role. On the other hand even basic unstructured-text-understanding LLMs will dramatically reduce the size of the workforce involved in this business process, so it's not clear that my role would logically revert to a "people manager" either. Maybe there is a new "LLM supervisor" type of role emerging in the future, but I suspect that's just what software engineer means in the future.
reply
loumf
3 days ago
[-]
I own all of my own code now, and so I benefit from creating it in the most efficient way possible. That is sometimes via AI and I am fine with it being more.

But generally I don’t see AI as currently such a boon to productivity that it would eliminate programming. Right now, it’s nowhere near as disruptive as easy-to-install shared libraries (e.g. npm). Sure, I get lines of code here and there from AI, but in the 90’s I was programming lots of stuff that I now just get instantaneously for free, with constant updates.

reply
bigstrat2003
4 days ago
[-]
This has never, ever been an industry where you will retire doing the same kind of work as you did when you started. There is nothing you can do to future proof your career - all you can do is be adaptable, and learn new things as you go.
reply
irunmyownemail
4 days ago
[-]
"This has never, ever been an industry where you will retire doing the same kind of work as you did when you started."

Seriously? It actually is an industry where people started coding in C and have retired recently, still working in C.

reply
PUSH_AX
4 days ago
[-]
I started diversifying this year for this very reason. Firstly, I’ve been engineering for over a decade while purposefully avoiding leadership roles, so I’ve now stepped into a head of engineering role. Secondly, I’ve taken up a specific woodworking niche; it will provide me with a side hustle to begin with and help diversify our income.

I don’t think LLMs are taking jobs today, but I now clearly see a principle that has fully emerged: non-tech people now have a lust for achieving and deploying various technology solutions, and the tech sector will fulfill it in the next decade.

Get ahead early.

reply
abdljasser2
5 days ago
[-]
My plan is to become a people person / ideas guy.
reply
cjk
4 days ago
[-]
I truly do not believe that many software engineers are going to lose jobs to anything resembling the current crop of LLMs in the long run, and that’s because the output cannot be trusted. LLMs just make shit up. Constantly. See: Google search results, Apple Intelligence summaries, etc.

If the output cannot be trusted, humans will always be required to review and make corrections, period. CEOs that make the short-sighted mistake of attempting to replace engineers with LLMs will learn this the hard way.

I’d be far more worried about something resembling AGI that can actually learn and reason.

reply
cloudhead
3 days ago
[-]
Just because you prompt an LLM doesn’t mean it ain’t programming. The job will just change from using programming languages to using natural languages. There is no silver bullet.
reply
zerop
5 days ago
[-]
I fear that in the push from "manual coding" to "fully automated coding", we might end up stuck in the middle, with "semi-manual coding" assisted by AI, which would need a different set of software engineering skills.
reply
EagnaIonat
4 days ago
[-]
> How can we future-proof our career?

Redefine what your career is.

2017-2018: LLMs could produce convincing garbage with heavy duty machines.

Now.

- You can have good enough LLMs running on a laptop with reasonable speeds.

- LLMs can build code from just descriptions (Pycharm Pro just released this feature)

- You can take screenshots of issues and the LLM will walk through how to fix these issues.

- Live video analysis is possible, just not available to the public yet.

The technology is rapidly accelerating.

It is better to think of your career as part of the cottage industry at the start of the industrial revolution.

There will likely be such jobs existing, but not at the required volume of employees we have now.

reply
solarized
4 days ago
[-]
Be exceptionally skilled in areas that can’t yet be automated.

Focus on soft skills: communication, problem-solving, social intelligence, collaboration, connecting ideas, and other "less technical" abilities. Kind of management and interpersonal stuff that machines can’t replicate (yet).

It's real. Subscribing to an LLM provider for $20/month feels like a better deal than hiring an average-skilled software engineer.

It's even funnier when we realize that the people we hire are just going to end up prompting the LLM anyway. So why bother hiring?

We really need next-level skills. Beyond what LLM can handle.

reply
exabrial
4 days ago
[-]
Nothing. A tool that is only right ~60% of the time is still useless.

I've yet to have an LLM ever produce correct code the first time, or understand a problem at the level of anything above a developer that went through a "crash course".

If we get LLMs trained on senior-level codebases and senior-level communications, they may stand a chance someday. Given that they are trained on the massively available "knowledge" sites like Reddit, it's going to be a while.

reply
noisy_boy
4 days ago
[-]
You still need a person to be responsible for the software being built - you can't fire ChatGPT if the code blows up.

The developer in fact has even more responsibility, because expectations have gone up: even more code is being produced, and someone has to maintain it.

So there will be a point when that developer will throw their hands up and ask for more people which will have to be hired.

Basically the boiling point has gone up (or down, depending on how you interpret the analogy), but the water still has to be boiled more or less the same way.

Until we have AGI - then it's all over.

reply
righthand
4 days ago
[-]
You’re watching our coworkers hit the gas pedal on turning the entire industry into a blue-collar job. Not just the junior positions will dry up; the senior positions will too. The average engineer won’t understand the code or even do the review. Salaries won’t fall; they’ll stagnate over the next 10-20 years. People don’t actually care about coding; they may enjoy it, but they are only here for the money. In their short-sighted minds, using LLMs is the quickest way to demonstrate superiority by way of efficiency and argue for a promotion.
reply
hoppp
4 days ago
[-]
If people enjoy SWE they will never be replaced, nobody will forcefully enter my house to tear the keyboard out of my hands.

Even if AI coding becomes the norm, people who actually understand software will write better prompts.

The current quality of generated code is awful anyways. We are not there yet.

But there is a very simple solution for people who really dislike AI: licensing. Disallow AI from contributing to and using your FOSS code, and if it does, sue the entity that trained the model. Easy money.

reply
rhinoceraptor
4 days ago
[-]
So far LLMs scale the opposite of silicon: instead of exponential gains in efficiency/density/cost, each generation of model takes exponentially more resources.

Also, being a software developer is not primarily writing code, these days it is much more about operating and maintaining production services and long running projects. An LLM can spit out a web app that you think works, but when it goes wrong or you need to do a database migration, you're going to want someone who can actually understand the code and debug the issue.

reply
w10-1
4 days ago
[-]
Careers are future-proofed through relationships and opportunities, not skills. Skills are just the buy-in.

Getting and nailing a good opportunity opens doors for a fair bit, but that's rare.

So the vast, vast majority of future-proofing is relationship-building - in tech, with people who don't build relationships per se.

And realize that decision-makers are business people (yes, even the product leads and tech leads are making business decisions). Deciders can control their destiny, but they're often more exposed to business vicissitudes. There be dragons - also gold.

To me, the main risk of LLM's is not that they'll take over my coding, but that they'll take over the socialization/community of developers - the sharing of neat tricks and solutions - that builds tech relationships. People will stop writing hard OS software blogs and giving presentations once LLM's intermediate to copy code and offer solutions. Teams will become hub-and-spoke, with leads enforcing architecture and devs talking mostly to their copilots. That means we won't be able to find like-minded people; no one will have any incentive or even occasion to share knowledge. My guess is that relationship skills will be even more valued, but perhaps also a bit fruitless.

Doctors and lawyers and even business people have a professional identity from their schooling, certification, and shared history. Developers and hackers don't; you're only as relevant as your skills on point. So there's a bit of complaining but no structured resistance on the part of developers to their work being factored, outsourced, and now mediated by LLM's (that they're busily building).

Developers have always lived in the shadow of the systems they're building. You used to have to pay good money for compilers and tools, licenses to APIs, etc. All the big guys realized that by giving away that knowledge and those tools, they make the overall cost cheaper, and they can capture more. We've been free-riding for 30 years, and it's led us to believe that skills matter most. LLM's are a promising way to sell cloud services and expensive hardware, so there will be even more willingness than crypto or car-sharing or real estate or whatever to invest in anything disruptive. We rode the tide in, and it will take us out again.

reply
coef2
4 days ago
[-]
I recently used an LLM to translate an algorithm from Go to Python at my work. The translation was quite accurate, and it made me think that tasks with an obvious one-to-one correspondence, like code translation, might be easier for LLMs than other tasks. I can see the potential for offloading such tasks to LLMs. But the main challenge I faced was trusting the output; I ended up writing my own version and comparing the two to verify the correctness of the translation.
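
For what it's worth, the comparison step was cheap to automate once both versions existed. A minimal sketch of the idea (the function names and bodies here are placeholders, not the actual algorithm):

    import random

    def my_translation(xs):       # my hand-written port (placeholder body)
        return sorted(set(xs))

    def llm_translation(xs):      # the LLM's port (placeholder body)
        return sorted(set(xs))

    # hammer both with random inputs and require identical outputs
    for _ in range(10_000):
        case = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        assert my_translation(case) == llm_translation(case), case
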
reply
hooverd
4 days ago
[-]
Software quality, already bad, will drop even more as juniors outsource all their cognition to the SUV for the mind. Most "developers" will be completely unable to function without their LLMs.
reply
mym1990
4 days ago
[-]
As someone who went to a bootcamp a good while ago...I am now formally pursuing a technical masters program which has an angle on AI(just enough to understand where to apply it, not doing any research).
reply
zitterbewegung
4 days ago
[-]
Learning how to use LLMs and seeing what works and what doesn't. After using them to code for a while, I can start to figure out where they hallucinate. I have made an LLM system that performs natural language network scanning, called http://www.securday.com, which I presented at DEF CON (a hacker conference). Even if it has no effect on your employment, it is fun to experiment with these things regardless.
reply
compiler-devel
4 days ago
[-]
I’m fairly OK financially at this point so my strategy is to make the money I can, while I can, and then when I become unemployable or replaced by LLMs, just retire.
reply
Jabbs
2 days ago
[-]
I think you should always be searching for your next role. This will keep you informed about the market and let you know whether SWE skills are indeed shifting towards AI (it would show up in job interviews, which I have not seen yet).
reply
tossandthrow
4 days ago
[-]
Personally I hope it will take over a lot of eng work, especially the menial work.

If it could also help management understand their own issues and how to integrate solutions into their software, then it would be great!

The core here is, that if engineering is going, then law is going, marketing is going and a lot of other professions are also going.

This means that we have structural issues we need to solve then.

And in that case it is about something else than my own employability.

reply
w10-1
4 days ago
[-]
> The core here is, that if engineering is going, then law is going, marketing is going and a lot of other professions are also going

The difference is reliance costs. We go to a doctor or a lawyer instead of a book because the risk of mistake is too high.

But development work can be factored; the guarantees are in the testing and deployment. At the margins maybe you trust a tech lead's time estimates or judgment calls, but you don't really rely on them so much as prove them out. The reliance costs became digestible when agile reduced scope to tiny steps, and they were digested by testing, cloud scaling, outsourcing, etc. What's left is a fig leaf of design, but ideas are basically free.

reply
tossandthrow
4 days ago
[-]
This seems like an arbitrary dichotomy to set up in support of an arbitrary argument.

Let me put it like this: when LLMs write system-scale software that is formally verifiable or completely testable, I promise you that you won't trust your doctor more than you trust the AI.

Already now we see indications that diagnosing is better without human (doctors) intervention.

Fast forward ten years with that development and nobody dares to go to the doctor.

reply
theshackleford
4 days ago
[-]
I future-proofed myself (I believe, and it's worked well so far despite many technology transitions) by always adapting to newer tools and technologies, not involving myself in operating system/text editor/toolchain wars but instead becoming at least proficient in the ones that matter, and by ensuring my non-technical skills are as strong as or stronger than my technical skills.

This is a path I would recommend with or without LLM's in the picture.

reply
bevan
4 days ago
[-]
Commenting on the useful-not-useful split here.

Rather than discuss the current “quality of code produced”, it seems more useful to talk about what level of abstraction it’s competent at (line, function, feature), and whether there are any ceilings there. Of course the effective level of abstraction depends on the domain, but you get the idea: it can generate good code at some level if prompted sufficiently well.

reply
bdangubic
4 days ago
[-]
Any post like this here will inevitably be met with some variation of “I am better than LLMs, and that won’t change in any near future.”

There are 4.5 million SWEs in the USA alone. Of those, how many are great at what they do? 0.5% at best. How many are good? 5% tops. Average/mediocre? 50%. Below average to downright awful - the rest.

While LLMs won’t be touching the great/good group in any near future, they 100% will touch the awful group, as well as the average/mediocre one.

reply
theptip
4 days ago
[-]
Simply, as LLM capabilities increase, writing code might disappear, but building products will remain for much longer.

If you enjoy writing code, you might have to make it a hobby like gardening, instead of earning money from it.

But the breed of startup founder (junior and senior) that’s hustling for the love of building a product that adds value to users, will be fine.

reply
ashoeafoot
4 days ago
[-]
Be the referee in times of uncertainty. Be the reverse engineer of black boxes. Be the guy who knows what kind of wrong grows on Stack Overflow. Be witty. Be an outlier but not an outliar. Be someone who explains the words of prophets from sand mountains to the people. Also, as AI depopulates everything it touches, carry a can of oil with you to oil the door hinges of ghost towns.
reply
VeejayRampay
4 days ago
[-]
LLMs are not really there except for juniors though

the quality of the code is as bad as it was two years ago; the mistakes are always there somewhere and take a long time to spot, to the point where actually using an LLM for software development is somewhat of a useless party trick

and for more senior stuff the code is not what matters anyway, it's reassuring other stakeholders, budgeting, estimation, documentation, evangelization, etc.

reply
rglover
4 days ago
[-]
LLM is to software engineering as a tractor was to farming.

They're tools that can make you more efficient, but they still need a human to function and guide them.

reply
at_
4 days ago
[-]
One could even call that guiding... Coding
reply
ramesh31
4 days ago
[-]
Grappling with this hard right now. Anyone who is still of the "these things are stupid and will never replace me" mindset needs to sober up real quick. AGI level agentic systems are coming, and fast. A solid 90% of what we thought of as software engineering for the last 30 years will be completely automated by them in the next couple years. The only solution I see so far is to be the one building them.
reply
TechDebtDevin
4 days ago
[-]
As someone who's personally tried ( with lots of effort) to build agentic assistants/systems 3+ times over the course of the last few years I haven't seen any huge improvements in the quality of output. I think you greatly underestimate the plateau these models are running into.

Grok and o1 are great examples of how these plateaus also won't be overcome with more capital and compute.

Agentic systems might become great search/research tools to speed up the time it takes to gather (human created) info from the web, but I don't see them creating anything impressive or novel on their own without a completely different architecture.

reply
ramesh31
4 days ago
[-]
>As someone who's personally tried ( with lots of effort) to build agentic assistants/systems 3+ times over the course of the last few years I haven't seen any huge improvements in the quality of output. I think you greatly underestimate the plateau these models are running into.

As someone who's personally tried with great success to build agentic systems over the last 6 months, you need to be aware of how fast these things are improving. The latest Claude Sonnet makes GPT-3.5 look like a research toy. Things are trivial now in the code gen space that were impossible just earlier this year. Anyone not paying attention is missing the boat.

reply
TechDebtDevin
4 days ago
[-]
>As someone who's personally tried with great success to build agentic systems over the last 6 months.

Like what? You're the only person I've seen claim they've built agentic systems with great success. I don't regard improved chatbot outputs as success; I'm talking about agentic systems that can roll their own auth from scratch, or gather data from the web independently and build even a mediocre prediction model with that data. Or code anything halfway decently in something other than Python.

reply
mixmastamyk
4 days ago
[-]
How many of these posts do we have to suffer through?

It's just like the self-driving car—great for very simple applications but when will human drivers become a thing of the past? Not any time soon. Not to mention the hype curve has already crested: https://news.ycombinator.com/item?id=42381637

reply
waffletower
3 days ago
[-]
I think there are easy answers to this question that will most likely be effective: adapt as you have with any emerging software technology that impacts your work, embrace LLMs in your workflow where feasible and efficient, learn about the emerging modelops technologies that are employed to utilize LLMs.
reply
heldrida
4 days ago
[-]
Even if LLMs can do everything any software developer is capable of doing, it doesn’t mean they’ll solve the interesting and profitable problems that humans or systems need solved. Just because most or some Hacker News readers can code, and some solve extremely difficult problems, doesn’t mean they’re going to be successful and make a profit.
reply
svilen_dobrev
4 days ago
[-]
Mass software-making has been commoditizing for a decade+ maybe (roughly LEGO-like sticking together of CRUD APIs), and now ML/LLMs jump-accelerate this process. Any-API-as-a-service? Combinations thereof? In terms of Wardley maps, it's entering a state of war, and while there will be new things on top of it, the gray mass will disappear / be unneeded / be replaced.

IMO the number and the quality/devotion of programmers will go back to pre-web/JS levels, or even pre-Visual-Basic levels. They would be programming somewhat differently than today. But that's a (rosy) prediction, and it probably is wrong. The not-rosy version is that all common software (and that's your toaster too) will become a shitmare, with the consequence that everyone will live in order to fix/work around/"serve" it, instead of using it to live.

Or maybe something in the middle?

reply
cloudking
4 days ago
[-]
LLMs are tools, learn how to use them. You'll be more productive once you learn how to properly incorporate LLMs into your workflow, and maybe come up with some ideas to solve software problems with them along the way.

I wouldn't worry about being replaced by an LLM, I'd worry about falling behind and being replaced by a human augmented with an LLM.

reply
MrQuimico
4 days ago
[-]
Programming is about coding an idea into a set of instructions. LLMs are the same, they just require using a higher level language.
reply
mattlondon
4 days ago
[-]
Start selling the shovels.

I.e., get into the LLM/AI business

reply
tomcar288
3 days ago
[-]
We may be entering a future where there's simply less demand for SE talent. I mean, there's already a vast oversupply of SE talent. What about looking into other job functions such as Product/project management or other career options? teaching?
reply
tugu77
4 days ago
[-]
> My prediction is that junior to mid level software engineering will disappear mostly,

That statement makes no sense. It's a skill progression. There are no senior levels of anything if there isn't the junior level as a staging ground for learning the trade and then feeding the senior level.

reply
posed
4 days ago
[-]
My advice: keep honing your problem-solving skills by doing math challenges and chess puzzles, learning new languages (not programming ones, though those might help too), and reading books; anything that gives you new perspectives and challenges your mind will help you withstand the race against AI.
reply
015a
4 days ago
[-]
There's no reality in the next twenty years where a non-technical individual is going to converse with a persistent agentic AI to produce a typical SaaS product and maintain it over a period of many years. I think we'll see stabs at this space, and I think we'll see some companies try to replace engineering teams wholesale, and these attempts will almost universally feel kinda sorta ok for the first N weeks, then nosedive and inevitably result in the re-hiring of humans (and probably the failure of many projects and businesses as well).

Klarna said they stopped hiring a year ago because AI solved all their problems [1]. That's why they have 55 job openings right now, obviously [2] (including quite a few listed as "Contractor"; the utterly classic "we fucked up our staffing"). This kind of disconnect isn't even surprising; its exactly what I predict. Business leadership nowadays is so far disconnected from the reality of what's happening day-to-day in their businesses that they say things like this with total authenticity, they get a bunch of nods, and things just keep going the way they've been going. Benioff at Salesforce said basically the same thing. These are, put simply, people who have such low legibility on the mechanism of how their business makes money that they believe they understand how it can be replaced; and they're surrounded by men who nod and say "oh yes mark, yes of course we'll stop hiring engineers" then somehow conveniently that message never makes it to HR because those yes-men who surround him are the real people who run the business.

AI cannot replace people; AI augments people. If you say you've stopped hiring thanks to AI, what you're really saying is that your growth has stalled. The AI might grant your existing workforce an N% boon to productivity, but that's a one-time boon barring any major model breakthroughs (don't count on it). If you want to unlock more growth, you'll need to hire more people, but what you're stating is that you don't think more growth is in the cards for your business.

That's what these leaders are saying, at the end of the day; and its a reflection of the macroeconomic climate, not of the impacts of AI. These are dead businesses. They'll lumber along for decades, but their growth is gone.

[1] https://finance.yahoo.com/news/klarna-stopped-hiring-ago-rep...

[2] https://klarnagroup.teamtailor.com/#jobs

reply
bdangubic
4 days ago
[-]
Nicely said.

> AI cannot replace people; AI augments people.

Here’s where we slightly disagree. If AI augments people (100% does) it makes those people more productive (from my personal experience I am ballparking currently I am 40-45% more productive) and hence some people will get replaced. Plausibly in high-growth companies we’ll just all be more productive and will crank out 40-45% more products/features/… but in other places me being 40-45% more productive may mean other people might not be needed (think every fixed-price government contract - this is 100’s of billions of dollar market…)

reply
015a
4 days ago
[-]
Directionally, I think that's certainly true for some professions. It is probably the case that it isn't true for software engineering. The biggest reason involves how software, as a business, involves a ton of problem space exploration and experimentation. Most businesses have an infinite appetite for software; there's always another idea, another feature they can sell, another expectation, another bug we can fix, or a performance problem to improve.

Tooling improvements leading to productivity boons ain't a new concept in software engineering. We don't code in assembly anymore, because writing JavaScript is more efficient. In fact, think on this: Which would you grade as a higher productivity boon, as a percentage: Using JavaScript over Assembly, or using AI to write JavaScript over hand-writing it?

Followup: If JavaScript was such a 10,000% productivity boon over assembly, to AI's paltry 40-45%: Why aren't we worried when new programming languages drop? I don't shiver in my boots when AWS announces a new service, or when Vercel drops a new open source framework.

At the end of the day: there's an infinite appetite for software, and someone has to wire it all up. AI is, itself, software. One thing all engineers should know by now is: Introducing new software only increases the need for the magicians who wrangle it, and AI is as subject to that law as JavaScript, the next expensive AWS service, or next useless Vercel open source project.

reply
bdangubic
4 days ago
[-]
> At the end of the day: there's an infinite appetite for software, and someone has to wire it all up.

I agree with this 100%... the core issue to ponder is this - Javascript was probably 1,000,000% productivity boon over assembly - no question about that but it did not offer much in the form of "automation" so-to-speak. It was just a new language that luckily for it became de-facto language that browsers understand. You and I have spent countless hours writing JS, TS code etc... The question I think here is whether LLMs can automate things or not. I consider a single greatest trait in the absolute best SWEs I ever worked with (that is a lot of them, two and a half decades plus doing this) and that is LAZINESS. Greatest SWEs are lazy by nature and we tend to look to automate everything that can be automated. Javascript is not helping me a whole lot with automation but LLMs just might. Writing docs, writing property-based tests for every function I write, writing integration tests for every end-point I write etc etc... In these discussions you can always detect "junior" developers from "senior" developers in that "juniors" will fight the fight "oh no way imma get replaced here, I do all this shit LLMs can't even dream of" while "seniors" are going "I have already automated 30-40-50% of the work that I used to do..."

the most fascinating part to me is that same "juniors" in the same threads are arguing things like "SWEs are not just writing code, there are ALL these other things that we have to do" without realizing that it is exactly all those "other things" that with LLMs you just might be able to automate your way out of it, fully or partially...
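
To make "property-based tests" concrete, here is a minimal sketch of the kind of test I mean, the sort of thing an LLM can draft in seconds (the function under test is a made-up stand-in, not real code, and it assumes the hypothesis library):

    from hypothesis import given, strategies as st

    def slugify(title: str) -> str:
        # toy function under test: collapse whitespace into single dashes
        return "-".join(title.split())

    @given(st.text())
    def test_slugify_properties(title):
        slug = slugify(title)
        assert " " not in slug          # no spaces survive
        assert slugify(slug) == slug    # applying it twice changes nothing

Run it with pytest (or call the test function directly); hypothesis will shrink any failing input to a minimal counterexample.
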

reply
015a
3 days ago
[-]
I agree;

- if your response to LLMs taking over is "they're bad and its not going to happen" i think you've basically chosen to flip a coin, and that's fine, you might be right (personally I do think this take is right, but again, its a coin flip)

- if your response is "engineering is about a lot more than writing code" you're right, but the "writing code" part is like 90% of the reason why you get paid what you do, so you've chosen the path of getting paid less, and you're still flipping a coin that LLMs won't come for that other stuff.

- the only black-pilled response is "someone has to talk to the llm", and if you spend enough time with that line of thinking you inevitably uncover some interesting possibilities about how the world will be arranged as these things become more prevalent. for me, its that: larger companies probably won't get much larger, we've hit peak megacorp, but smaller companies will become more numerous as individual leverage is amplified.

reply
dragonwriter
3 days ago
[-]
> if your response is “engineering is about a lot more than writing code” you’re right, but the “writing code” part is like 90% of the reason why you get paid what you do

[…]

> the only black-pilled response is “someone has to talk to the llm”,

This is literally the exact same response as “engineering is about a lot more than writing code”, since “talking to the LLM” when the LLM is the main code writer is exactly doing the non-code-writing tasks while delegating code writing to the LLM which you supervise. Kind of like a lead engineer does anyway, where they do much of the design work, and delegate and guide most of the code writing (while still doing some of it themselves.)

reply
gorbachev
4 days ago
[-]
I future proof my career by making sure I deeply understand what my users need, how the tech landscape available to satisfy those needs change over time and which solutions work within the constraints of my organization and users.

Some part of that involves knowing how AI would help, most doesn't.

reply
bravetraveler
4 days ago
[-]
As a systems administrator now SRE, it's never really been about my code... if code at all.

Where I used to be able to get by with babysitting shell scripts that only lived on the server, we're now in a world with endless abstraction. I don't hazard to guess; just learn what I can to remain adaptable.

The fundamentals tend to generally apply

reply
nosbo
4 days ago
[-]
I suppose I could already be replaced with any number of services (Vercel, Heroku, Google Cloud Run, etc.), but I haven't been yet. I think I still provide value in other ways. (Also a sysadmin.)
reply
SillyUsername
4 days ago
[-]
I learnt how to write them. Modern equivalent of re-skilling. When my role as it is, is replaced, then I'll be on the ground floor already for all things AI. If you're in software dev and can't already reskill rapidly then you're probably in the wrong job.
reply
haolez
4 days ago
[-]
Maybe creating your own AI agents with your own "touch". Devin, for example, is very dogmatic regarding pull requests and some process bureaucracy. Different tasks and companies might benefit from different agent styles and workflows.

However, true AGI would change everything, since the AGI could create specialized agents by itself :)

reply
datavirtue
4 days ago
[-]
I would love to swim in code all day. My problems are always people and process. Down in the trenches it is really messy, and we often need five or ten people working in concert on the same problem, with implicit context already established for everyone, each contributing their unique viewpoint (hopefully).
reply
Juliate
4 days ago
[-]
A bit off topic but… from my point of understanding,

Unless there's a huge, and fast, beneficial shift in cheap and clean energy production and distribution, climate change and its consequences for society and industries (already under way) outweigh and even threaten LLMs (a 2-5 year horizon worry).

reply
adamredwoods
4 days ago
[-]
LLM context windows will have to increase a lot to digest a large system of code.

Then there are the companies that own LLMs large enough to do excellent coding. Consider the "haves" and "have-nots": those that have the capital to incorporate these amazing LLMs and those that do not.

reply
sagarpatil
4 days ago
[-]
2 million token size is not enough?
reply
adamredwoods
4 days ago
[-]
Probably not for a large monorepo, definitely not ours.
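
For a rough sense of scale, here is a back-of-the-envelope sketch that counts tokens across a repo using OpenAI's tiktoken tokenizer. The tokenizer choice and file extensions are assumptions, and other models tokenize differently, so treat the number as an order-of-magnitude check rather than a hard limit.

```python
# Rough estimate of how many tokens a repo would occupy in a context window.
# cl100k_base is an OpenAI tokenizer used as a proxy here; adjust extensions
# to match your stack. Treat the result as an order-of-magnitude figure.
from pathlib import Path
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
exts = {".py", ".ts", ".js", ".go", ".java", ".md"}

total = 0
for path in Path(".").rglob("*"):
    if path.is_file() and path.suffix in exts:
        try:
            total += len(enc.encode(path.read_text(errors="ignore")))
        except OSError:
            continue

print(f"~{total:,} tokens")  # a large monorepo easily blows past 2M
```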
reply
altairprime
4 days ago
[-]
You can retire from tech and select another profession that’s more proof against bubbles than tech eng. In five years I’ll be able to seek work as a forensic auditor with decades of tech experience to help me uncover the truth, which is worth my weight in gold to the compliance and enforcement professions.
reply
loeg
4 days ago
[-]
I am so not worried about it. This is like, ten years ago: "how do you protect your career as a vim-user when obviously Visual Studio will take over the world." It is a helpful tool for some people but You Don't Really Need It and it isn't why some engineers are super impactful.
reply
dayvid
4 days ago
[-]
My job is determining what needs to be done, proving it should be done, getting people to approve it and getting it done.

LLMs help more with the last part which is often considered the lowest level. So if you're someone who just wants to code and not have to deal with people or business, you're more at risk.

reply
ustad
4 days ago
[-]
> My job is determining what needs to be done, proving it should be done, getting people to approve it…

LOL - this is where LLMs are being used the most right now!

reply
nurumaik
4 days ago
[-]
When "LLMs will become so good, that senior people won't be needed anymore" happens, it would mean such level of efficiency that I don't need to work at all (or can live pretty comfortably off some part-time manual labor)

So I don't care at all

reply
chasemc67
3 days ago
[-]
the best engineers focus on outcomes. They use the best tools available to achieve the best outcome as fast as possible

AI coding tools are increasingly proving to be some of the highest leverage tools we’ve seen in decades. They still require some skill to use effectively and have unique trade-offs, though. Mastering them is the same as anything else in engineering, things are just moving faster than we’re used to.

the next generation of successful engineers will be able to do more with less, producing the output of an entire team by themselves.

be one of those 100x engineers, focused on outcomes, and you'll always be valuable

reply
GTP
4 days ago
[-]
> seem to have missed the party where they told us that "coding is not a means to an end"

What is it for you then? My role isn't software engineer, but with a background in computer engineering, I see programming as a tool to solve problems.

reply
m3kw9
4 days ago
[-]
The level of dismissal of LLMs, especially from senior developers, is alarming. It seems like they don't use them much or have just tried some hobby stuff on the side. What I think will happen is that the engineers who learn to use them best will have a huge advantage.
reply
austin-cheney
4 days ago
[-]
If as a developer you are already reliant upon LLMs to write code or require use of abstractions that LLMs can write you are already replaceable by LLMs. As a former JavaScript developer this makes me think of all the JavaScript developers that cannot do their jobs without large JavaScript frameworks and never really learned to write original JavaScript logic.

That being said, you can future-proof your career by learning actual concepts of engineering: process improvement, performance, accessibility, security analysis, and so on. LLMs, as many other comments have said, remain extremely unreliable, but they can already do highly repeatable, low-cost tasks like framework stuff really well.

In addition to actually learning to program here are other things that will also future proof your career:

* Diversify. Also learn and certify in project management, cyber security, API management, cloud operations, networking, and more. Nobody is remotely close to trusting AI to perform any kind of management or analysis.

* Operations. Get good at doing human operations things like DevOps, task delegation, release management, and 24-hour uptime.

* Security. Get a national level security clearance and security certifications. Cleared jobs remain in demand and tend to pay more. If you can get somebody to sponsor you for a TS you are extremely in demand. I work from home and don't use my TS at all, but it still got me my current job and greatly increases my value relative to my peers.

* Writing. Get better at writing. If you are better at written communications than your peers you are less replaceable. I am not talking about writing books or even just writing documentation. I am talking about day-to-day writing emails. In large organizations so many problems are solved by just writing clarifying requirements and eliminating confusion that requires a supreme technical understanding of the problems without writing any code. This one thing is responsible for my last promotion.

reply
cesarsotovalero
4 days ago
[-]
Here’s a YouTube video about this topic: “AI will NOT replace Software Engineers (for now)” https://youtu.be/kw7fvHf4rDw
reply
forgetfreeman
4 days ago
[-]
I bailed out of the industry entirely. Having fallen back on basic tool use skills I picked up during my 20s spent bouncing around in the trades I'm feeling pretty comfortable that AI doesn't pose a meaningful threat to my livelihood.
reply
rqtwteye
4 days ago
[-]
My plan is to retire in 1-2 years, take a break and then, if I feel like it, go all in on AI. Right now it's at that awkward spot where AI clearly shows potential but from my experience it's not really improving my productivity on complex tasks.
reply
Havoc
4 days ago
[-]
Nobody has a crystal ball, but I do find it alarming how confident roughly half the devs are that it'll never happen short of near-AGI.

Alarming in the same way as a company announcing that their tech is unhackable. We all know what happens next in the plot

reply
johnea
4 days ago
[-]
Become a plumber or electrician...
reply
Clubber
4 days ago
[-]
I'm going the Onlyfans route, or perhaps record myself playing video games saying witty quips.
reply
codebolt
4 days ago
[-]
Currently starting my first project integrating with Azure OpenAI using the new MS C# AI framework. I'm guessing that having experience actually building systems that integrate with LLMs could be a good career move over the next decade.
reply
nvarsj
4 days ago
[-]
Realise that code is a commodity. Don't build your entire professional experience around a programming language or specific tooling. Be a problem solver instead - and you will always have a job.
reply
ok_dad
4 days ago
[-]
My extended family has money and I expect they’ll have to pay for my family for a few years until I can reskill as a carpenter. I don’t expect to be doing software again in the future, I’m fairly mediocre at coding.
reply
danbrooks
4 days ago
[-]
Continual learning & keeping up with current trends as part of professional growth.

Working in ML my primary role is using new advances like LLMs to solve business problems.

It is incredible though, how quickly new tools and approaches turn over.

reply
slavapestov
5 days ago
[-]
Find an area to specialize in that has more depth to it than just gluing APIs together.
reply
sn9
3 days ago
[-]
I'm not too worried.

As long as someone needs to be held accountable, you will need humans.

As long as you're doing novel work not in the training set, you will probably need humans.

reply
patrulek
4 days ago
[-]
I write code for things for which there is no code to learn from.
reply
nidnogg
4 days ago
[-]
I can chip in from my tech consulting job where we ship a few GenAI projects to several AWS clients via Amazon Bedrock. I'm senior level but most people here are pretty much insulated.

I think whoever commented here about more complex problems being tackled (and the nature of these problems becoming broader) is right on the money. Newer patterns around LLM-based applications are emerging, and having seen them first hand, they do seem like something of a paradigm shift in programming. But they are still, at heart, programming questions.

A practical example: company sees GenAI chatbot, wants one of their own, based on their in-house knowledge base.

Right then and there, a whole slew of new business needs ensues, each requiring human input to make it work.

- Is training your own LLM needed? See a Data Engineer/Data engineering team.

- If going with a ready-made solution, which LLM to use instead? Engineer. Any level.

- Infrastructure around the LLM of choice. Get DevOps folk in here. Cost assessment is real and LLMs are pricey. You have to be on top of your game to estimate stuff here.

- Guard rails, output validation. Engineers.

- Hooking up to whatever app front-end the company has. Engineers come to the rescue again.

All these have valid needs for engineers, architects/staff/senior what have you — programmers. At the end of the day, these problems devolve into the same ol' https://programming-motherfucker.com
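
As an illustration of the kind of glue those bullets turn into, here is a minimal, hypothetical sketch of the retrieve-then-answer path with a crude guard rail at the end. The embed, search_index, and call_llm functions are stubs standing in for whatever the team actually wires up (Bedrock, a vector store, and so on).

```python
# Minimal shape of a "chatbot over the in-house knowledge base" request path.
# embed(), search_index(), and call_llm() are hypothetical stubs; real code
# would call an embedding model, a vector store, and a hosted LLM.
def embed(text: str) -> list[float]:
    return [float(len(text))]  # stub

def search_index(vector: list[float], top_k: int) -> list[str]:
    return ["(retrieved passage about the vacation policy)"]  # stub

def call_llm(prompt: str) -> str:
    return "(model answer grounded in the passages above)"  # stub

def answer(question: str) -> str:
    passages = search_index(embed(question), top_k=5)
    prompt = (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        + "\n---\n".join(passages)
        + f"\n\nQuestion: {question}"
    )
    draft = call_llm(prompt)
    # Guard rail / output validation: a crude placeholder here, but this is
    # where refusal handling, PII filtering, and citation checks end up living.
    if not passages or not draft.strip():
        return "I couldn't find this in the knowledge base."
    return draft

print(answer("How many vacation days do we get?"))
```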

And I'm OK with that so far.

reply
deadbabe
4 days ago
[-]
All I’m hearing is that in the future engineers will become so dependent on LLMs they will be personally shelling out $20k or more a year on subscriptions just to be able to do their jobs.
reply
michaelmrose
4 days ago
[-]
> ...or feed entire projects to AI and let the AI code, while they do code review and adjustments.

Is there some secret AI available that isn't by OpenAI or Microsoft? Because this sounds like complete hogwash.

reply
cooljacob204
5 days ago
[-]
Imo LLMs are dumb and our field is far away from having LLMs smart enough to automate it, even at a junior level. I feel like the gap is so big that, personally, I'm not worried at all for the next 10 years.
reply
k__
4 days ago
[-]
No idea.

Most of my code is written by AI, but it seems most of my job is arranging that code.

Saves me 50-80% of my keystrokes, but it sprinkles subtle errors here and there and doesn't seem to understand the whole architecture.

reply
sailorganymede
4 days ago
[-]
I invest in my soft skills. I’ve become pretty good at handling my business stakeholders now and while I do still code, I’m also keeping business in the loop and helping them to be involved.
reply
ojr
4 days ago
[-]
Create a SaaS and charge people $20/month, time-consuming but more possible with LLMs. Subscriptions are such a good business model for the reasons people hate subscriptions.
reply
throwaway_43793
4 days ago
[-]
Are you doing it? What business are you running? How do you find customers?
reply
kixpanganiban
4 days ago
[-]
We had this same question when IDEs and autocomplete became a thing. We're still around today, just doing work that's a level harder :)
reply
chirau
5 days ago
[-]
With every new technology comes new challenges. The role will evolve to tackle those new challenges as long as they are software/programming/engineering specific
reply
whateveracct
4 days ago
[-]
LLMs have not affected my day-to-day at all. I'm a senior eng getting paid top percentile using a few niche technologies at a high profile company.
reply
ilaksh
4 days ago
[-]
It's not going to be about careers anymore. It's going to be about leveraging AI and robotics as very cheap labor to provide goods and services.
reply
cpill
4 days ago
[-]
If things go as you predict, then the models are going to start to eat their own tail in terms of training data. Because of the nature of LLM training, they can't come up with anything truly original; if you have tried to do something even slightly novel, you'll know what I mean. Web development might get taken out, if only front-end devs didn't perpetually reinvent the FE :P
reply
firemelt
3 days ago
[-]
>feed entire projects to AI and let the AI code, while they do code review and adjustments.

any help? what AI can do this?

reply
mariconrobot
2 days ago
[-]
This is an interesting challenge, and I think it speaks to a broader trend we’re seeing in tech: the tension between innovation and operational practicality. While there’s a lot of enthusiasm for AI-driven solutions or blockchain-enabled platforms in this space, the real bottleneck often comes down to legacy infrastructure and scalability constraints.

Take, for example, the point about integrating AI models with legacy data systems. It’s one thing to build an LLM or a recommendation engine, but when you try to deploy that in an environment where the primary data source is a 20-year-old relational database with inconsistent schema updates, things get messy quickly. Teams end up spending a disproportionate amount of time wrangling data rather than delivering value.

Another issue that’s not often discussed is user onboarding and adoption friction. Developers can get carried away by the technical possibilities but fail to consider how the end-users will interact with the product. For instance, in highly regulated industries like healthcare or finance, even small changes in UI/UX or workflow can lead to significant pushback because users are trained in very specific processes that aren’t easy to change overnight.

One potential solution that I’ve seen work well is adopting iterative deployment strategies—not just for the software itself but for user workflows. Instead of deploying a revolutionary product all at once, start with micro-improvements in areas where pain points are clear and measurable. Over time, these improvements accumulate into significant value while minimizing disruption.

Finally, I think there’s a cultural aspect that shouldn’t be overlooked. Many organizations claim to value innovation, but the reality is that risk aversion dominates decision-making. This disconnect often leads to great ideas being sidelined. A possible approach to mitigate this is establishing “innovation sandboxes” within companies—essentially isolated environments where new ideas can be tested without impacting core operations.

Ultimately, congratulations if you took the time to read all of this nonsense.

reply
wolvesechoes
5 days ago
[-]
Writing code and making commits is only a part of my work. I also have to know ODEs/DAEs, numerical solvers, symbolic transformations, thermodynamics, fluid dynamics, dynamic systems, controls theory etc. So basically math and physics.

LLMs are rather bad at those right now if you go further than trivialities, and I believe they are not particularly good at code either, so I am not concerned. But overall I think this is somewhat good advice, regardless of the current hype train: do not be just a "programmer", and know something else besides main Docker CLI commands and APIs of your favorite framework. They come and go, but knowledge and understanding stays for much longer.

reply
bfrog
4 days ago
[-]
LLMs are overrated trash feeding themselves garbage and producing garbage in return. AI is in a bubble, when reality comes back the scales will of course rebalance and LLMs will be a tool to improve human productivity but not replace them as some people might think. Then again I could be wrong, most people don't actually know how to create products for other humans and that's the real goal... not simply coding to code. Let me know when LLMs can produce products.
reply
mystified5016
4 days ago
[-]
I switched to electronics engineering
reply
jgerrish
3 days ago
[-]
Have you seen The Expanse?

There's a scene where Gunnery Sergeant Bobbie Draper from Mars visits Earth. Mars is still being terraformed; it has no real liquid water.

She wants to see the ocean.

She makes it eventually, after traveling through the squats of Earth.

In that world, much of Earth's population lives on "Basic Support". This is seen as a bad thing by Mars "Dusters".

The Ocean isn't just a dream of a better Mars. It's an awesome globally shared still life, something Earthers can use in their idle free time for, to use a Civilization term, a Cultural Victory.

So yeah, I suppose that's the plan for some who can't get a job coding: Universal Basic Income and being told that we're all in this together as we paint still lifes.

I have a feeling there are others who also were happy playing the Economic Victory game. Maybe more so.

I wonder where the other options are. It's going to be difficult enough for the next generation dealing with one group hating "Earther Squats" and another group hating "Dusters / regimented (CEO / military)".

That is work itself.

But I'll keep coding and hope those who become leaders actually want to.

reply
kittikitti
4 days ago
[-]
Let's see if the no-code solutions live up to the hype this business cycle. MIT already released Scratch.
reply
asdefghyk
4 days ago
[-]
It will probably end up like self-driving cars: able to handle a lot of the problem, but predicted to be never quite there...
reply
isatty
4 days ago
[-]
Autocomplete and snippets have been a thing for a long time and it hasn’t come for my job yet, and I suspect, will never.
reply
mediumsmart
4 days ago
[-]
I'll think about it, but right now me having a career as a SWE is proof the future is here.
reply
talles
4 days ago
[-]
My fear is not LLMs taking over jobs but filling up codebases with machine generated code...
reply
entropyneur
4 days ago
[-]
Thinking about a military career. Pretty sure soldier will be the last job to disappear. Mostly not joking.
reply
jacktheturtle
4 days ago
[-]
i'm not worried, because as a solid senior engineer my "training data" largely is not digitized or consumable by a model yet. I don't think we will have enough data in the near future to threaten my entire job, only support me in the easier parts of it.
reply
anshumankmr
4 days ago
[-]
I am working on a side gig, to sell granola bars. Not to be a major player, just a niche one.
reply
LunicLynx
4 days ago
[-]
It’s simple, as soon as it can write code faster than you can read it you can’t trust it anymore. Because you can’t verify it: Trusting trust.

If it keeps you busy reading code, it keeps all of us busy consuming its output. That is how it will conquer us: drowning us in personal, highly engaging content.

Stop using LLMs: if they can solve your problem, then your problem is not worth solving.

reply
furyofantares
4 days ago
[-]
Learn how to use them to write software, and learn how to write software that uses them.

I don't think you need to stop on a dime. But keep an eye out. I am very optimistic in two ways, under the assumption that this stuff continues to improve.

Firstly, with computers now able to speak and write natural language, see and hear, and do some amount of reasoning, I think the demand for software is only going up. We are in an awkward phase of figuring out how to make software that leverages this, and a lot of shit software is being built, but these capabilities are transformative and only mean more software needs to be written. I suppose I don't share the fear that only one piece of software needs to be written (AGI), and instead see it as a great opening up, as well as a competitive advantage for new software against old software, meaning roughly everything is a candidate for being rewritten.

And then secondly, with computers able to write code, I think this mostly gives superpowers to people who know how to make software. This isn't a snap your fingers no more coding situation, it's programmers getting better and better at making software. Maybe someday that doesn't mean writing code anymore, but I think at each step the people poised to get the most value out of this are the people who know how to make software (or 2nd most value, behind the people who employ them.)

reply
csomar
4 days ago
[-]
At their current capability, LLMs will not replace software developers. They might automate away some of the mundane tasks, but that's about it. LLMs also make good developers more productive and bad developers less likely to ship. They generate lots of code; some of it is good, some of it is bad, and a lot of it is really bad.

Now, this assumes that LLMs plateau around their current capability. While open models are catching up to closed ones (like OpenAI's), we have yet to see a real jump compared to GPT-4. That, and operating LLMs is too damn expensive. If you have explored bolt.new for a little while, you'll find out quickly enough that a developer becomes cheaper as your code base gets larger.

The way I see it

1. LLM do not plateau and are fully capable of replacing software developers: There is nothing I can or most of us can do about this. Most people hate software developers and the process of software development itself. They'd be very happy to trade us in an instant. Pretty much all software developers are screwed in the next 3-4 years but it's only a matter of time before it hits any other desk field (management, design, finance, marketing, etc...). According to history, we get a world war (especially if these LLMs are open in the wild) and one can only hope he is safe.

2. LLMs plateau around current levels. They are very useful as a power booster but they can also produce lots of garbage (both in text and in code). There will be an adjustment time but software developers will still be needed. Probably in the next 2-3 years when everyone realizes the dead end, they'll stop pouring money into compute and business will be back as usual.

tl;dr: current tech is not enough to replace us. If tech becomes good enough to replace us, there is nothing that can be done about it.

reply
11101010001100
4 days ago
[-]
Spend time grokking new technologies. Draw your own conclusions.
reply
tippytippytango
4 days ago
[-]
An LLM is only a threat if writing code is the hardest part of your job.
reply
aristofun
4 days ago
[-]
If you fear this => you're still at the beginning of your career, or your work has very little to do with software engineering (the engineering part in particular).

The only way to future-proof any creative and complex work is to get awesome at it.

It worked before LLMs, and it will work after LLMs or any new shiny three-letter gimmick.

reply
throwaway_43793
4 days ago
[-]
Maybe I'm a lousy developer, true. But I do know now that your code does not matter. Unlike any other creative profession, what matters is the final output, and code is not the final output.

If companies can get the final output with less people, and less money, why would they pass on this opportunity? And please don't tell me that it's because people produce maintainable code and LLMs don't.

reply
aristofun
4 days ago
[-]
Exactly because lines of code are not the ultimate result but rather a means to an end, the software _engineering_ profession is not at risk in any way.

If you don't get it, you are not lousy, just not experienced enough or, as I say, not doing engineering. Which is fine. Then your fear is grounded.

Because only the non-creative professions, like DevOps, SRE, QA to some extent, and data engineering to some extent, are at _some_ risk of being transformed noticeably.

reply
ithkuil
4 days ago
[-]
TL;DR: The biggest threat to your career is not LLMs but it's younger engineers that will adapt to the new tools.

---

My personal take is that LLMs and near future evolutions thereof won't quite replace the need for a qualified human engineer understanding the problem and overseeing the design and implementation of software.

However it may dramatically change the way we produce code.

Tools always beget more tools. We could not build most of the stuff that's around us if we didn't first build other stuff that was itself built with other stuff. Consider the simple example of a screw, a bolt, or gears.

Tools for software development are quite refined and advanced. IDEs, code analysis and refactoring tools etc.

Even the simple text editor has been refined through generations of developers mutually fine tuned their fingers together with the editor technology itself.

Beyond the mechanics of code input we also have tons of little things we collectively refined and learned to use effectively: how to document and consult documentation for libraries, how to properly craft and organize complex code repositories so that code can be evolved and worked on by many people over time.

"AI" tools offer an opportunity to change the way we do those things.

On one hand there is a practical pressure to teach AI tools to just keep using our own existing UX. They will write code in existing programming languages and will be integrated in IDEs that are still designed for humans. The same for other parts of the workflow.

It's possible that over time these systems will evolve other UXs and that the new generation of developers will be more productive using that new UX and greybeards will still cling to their keyboards (I myself probably will).

The biggest threat to your career is not LLMs but it's younger engineers that will adapt to the new tools.

reply
meheleventyone
4 days ago
[-]
I'm actually interested in what you think AI will do as this is quite vague.

> "AI" tools offer an opportunity to change the way we do those things.

The way I see tooling in programming is that a lot of it ends up focused on the bits that aren't that challenging and can often be more a matter of taste. There are people out there eschewing syntax highlighting and code completion as crutches, and they generally don't seem less productive than the people using them. Similarly, people are ricing their Neovim setups, but that doesn't seem to add enough value to outperform people using IDE defaults.

Then software engineering tools like task management software, version control and documentation software are universally pretty terrible. They all "work", with varying foibles and footguns. So I think there is a massive advantage possible there, but I haven't seen real movement on them in years.

But LLM based AI just doesn't seem it, it's too often wrong, whether it's Slack's Recap or the summarizing AI in Atlassian products or code completion with Copilot it's not trustworthy and always needs babying. It all feels like a net drain at the moment, other than producing hilarious output to share with your coworkers.

reply
ithkuil
4 days ago
[-]
I agree that currently it's a net drain.

I also agree with your assessment that issue trackers and other aspects of the workflow are as important as the coding phase.

I don't know yet exactly what can be done to leverage LLMs to offer real help.

But I think it has the potential of transforming the space. But not necessarily the way it's currently used.

For example, I think that we currently rely too much on the first order prose emitted directly by the LLMs and we mistakenly think that we can consume that directly.

I think LLMs are already quite good at understanding what you ask and are very resistant to typos. They thus work very well in places where traditional search engines suck.

I can navigate through complex codebases in ways that my junior colleagues cannot because I learned the tricks of the trade. I can see a near future where junior engineers can navigate code with the aid of ML based code search engines without having to be affected by the imprecise summarization artefacts of the current LLMs.

Similarly, there are many opportunities to use LLMs to capture human intentions and turn them into commands for underlying deterministic software, which then performs the operations without hallucinations.
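
A minimal sketch of that pattern, with a hypothetical ask_llm standing in for the model call: the LLM only ever proposes a structured command, and plain deterministic code validates it against an allow-list before anything actually runs.

```python
# Sketch: the LLM captures intent as a JSON command; deterministic code
# validates and executes it. ask_llm() is a hypothetical stub for the model.
import json

ALLOWED = {
    "create_ticket": {"title", "body"},
    "close_ticket": {"ticket_id"},
}

def ask_llm(prompt: str) -> str:
    # Stub: a real call would go to whatever model API you use.
    return '{"action": "create_ticket", "args": {"title": "Fix login", "body": "500 on POST /login"}}'

def run(user_request: str) -> str:
    raw = ask_llm(
        'Translate the request into JSON of the form '
        '{"action": ..., "args": {...}}. Request: ' + user_request
    )
    try:
        cmd = json.loads(raw)
    except json.JSONDecodeError:
        return "Model output was not valid JSON; nothing executed."

    action, args = cmd.get("action"), cmd.get("args", {})
    if action not in ALLOWED or set(args) - ALLOWED[action]:
        return f"Refused: '{action}' is not an allowed command."
    # Only here does deterministic code act, with fully validated input.
    return f"Would execute {action} with {args}"

print(run("Please open a ticket: login is broken"))
```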

Building those tools requires time and iteration. It also requires funding and it won't happen as long as most funding is funneled towards just trying to make better models which would potentially leapfrog any ad-hoc hybrid.

reply
MantisShrimp90
4 days ago
[-]
I fundamentally disagree. Or at least, LLMs are a development tool that will fit right in with LSPs, databases, and any other piece of infrastructure: a tool that you learn to apply where possible, while those with wisdom will use better tools to solve more precise problems.

LLMs can't reason, and they never will be able to. Don't buy into the AI hype wagon it's just a bunch of grifters selling a future that will never happen.

What LLMs do is amplify the wisdom you already have. If you know how to make a full-stack application already, they can turn a 4-hour job into a 1-hour job, yes. But if you didn't know how to make one in the first place, the LLM will get you 80% of the way there, and that last 20% will be impossible for you because you lack the skill to do the actual work.

That's not going away, and anybody who thinks it is is living in a fantasy land, one that stops us from focusing on the real problems that could actually put LLMs to use in their proper context and setting.

reply
TZubiri
4 days ago
[-]
Who's going to build, maintain and admin the llm software?
reply
sirwhinesalot
4 days ago
[-]
My job is not to write monospace, 80 column-wide text but to find solutions to problems.

The solution often involves software but what that software does and how it does it can vary wildly and it is my job to know how to prioritize the right things over the wrong things and get to decent solution as quickly as possible.

Should we implement this using a dependency? It seems it is too big / too slow, is there an alternative or do we do it ourselves? If we do it ourselves how do we tackle this 1000 page PDF full of diagrams?

LLMs cannot do what I do and I assume it will take a very long time before they can. Even with top of the line ones I'm routinely disappointed in their output on more niche subjects where they just hallucinate whatever crap to fill in the gaps.

I feel bad for junior devs that just grab tickets in a treadmill, however. They will likely be replaced by senior people just throwing those tickets at LLMs. The issue is that seniors age and without juniors you cannot have new seniors.

Let's hope this nonsense doesn't lead to our field falling apart.

reply
rvz
5 days ago
[-]
> My prediction is that junior to mid level software engineering will disappear mostly, while senior engineers will transition to be more of a guiding hand to LLMs output, until eventually LLMs will become so good, that senior people won't be needed any more.

It is more like across the board beyond engineers, including both junior and senior roles. We have heard first hand from Sam Altman that in the future that Agents will be more advanced and will work like a "senior colleague" (for cheap).

Devin is already going after everyone. Juniors were already replaced with GPT-4o and mid-seniors are already worried that they are next. To executives and management, they see you as a "cost".

So frankly, I'm afraid that the belief that software engineers of any level are safe in the intelligence age is 100% cope. In 2025, I predict that there will be more layoffs because of this.

Then (mid-senior or higher) engineers here will go back to these comments a year later and ask themselves:

"How did we not see this coming?"

reply
paulyy_y
5 days ago
[-]
Have you checked out the reception to Devin last week? The only thing it's going after is being another notch on the leaderboard of scams, right next to the Rabbit R1.
reply
angoragoats
5 days ago
[-]
> So frankly, I'm afraid that the belief that software engineers of any level are safe in the intelligence age is 100% cope. In 2025, I predict that there will be more layoffs because of this.

If this point could be clarified into a proposal that was easily testable with a yes/no answer, I would probably be willing to bet real money against it. Especially if the time frame is only until the end of 2025.

reply
tinthedev
5 days ago
[-]
I'd gladly double up on your bet.

Frankly, I think it's ridiculous that anyone who has done any kind of real software work would predict this.

Layoffs? Probably. Layoffs of capable senior developers, due to AI replacing them? Inconceivable, with the currently visible/predictable technology.

reply
sleepybrett
4 days ago
[-]
Not inconceivable. There are plenty of executives and MBA types that are eating up the 'AI thing'... those guys will pay some consultants, lay off their workforce, and fucking die in the market when the consultants can't deliver.
reply
angoragoats
4 days ago
[-]
Yeah, I agree. Let me take a stab at a statement that I'd bet against:

There will be publicly announced layoffs of 10 or more senior software engineers at a tech company sometime between now and December 31st, 2025. As part of the announcement of these layoffs, the company will state that the reason for the layoffs is the increasing use of LLMs replacing the work of these engineers.

I would bet 5k USD of my own money, maybe more, against the above occurring.

I hesitate to jump to the "I'm old and I've seen this all before" trope, but some of the points here feel a lot to me like "the blockchain will revolutionize everything" takes of the mid-2010s.

reply
cookingrobot
4 days ago
[-]
reply
angoragoats
4 days ago
[-]
This article:

1) Does not describe a layoff, which is an active action the company has to take to release some number of current employees, and instead describes a recent policy of "not hiring." This is a passive action that could be undertaken for any number of reasons, including those that might not sound so great for the CEO to say (e.g. poor performance of the company);

2) Cites no sources other than the CEO himself, who has a history of questionable actions when talking to the press [0];

3) Specifically mentions at the end of the article that they are still hiring for engineering positions, which, you know, kind of refutes any sort of claim that AI is replacing engineers.

Though, this does make me realize a flaw in the language of my proposed bet, which is that any CEO who claims to be laying off engineers due to advancement of LLMs could be lying, and CEOs are in fact incentivized to scapegoat LLMs if the real reason would make the company look worse in the eyes of investors.

[0] https://fortune.com/2022/06/01/klarna-ceo-sebastian-siemiatk...

reply
nurettin
4 days ago
[-]
I have huge balls and I am not threatened by RNG.
reply
swgo
4 days ago
[-]
You are asking the wrong people. Of course people are going to say it is not even close, and they are probably right given the current chaos of LLMs. It's like asking a mailman delivering letters whether he would be replaced by email. The answer was not 100%, but volume went down by 95%.

Make no mistake. All globalists — Musks, Altmans, Grahams, A16Zs, Trump supporting CEOs, Democrats — have one goal. MAKE MORE PROFIT.

The real question is — can they make more money with you than with an LLM?

Therefore, the question is not whether there will be impact. There absolutely will be impact. Will it be a doomsday scenario? No, unless you are completely out of touch — which can happen to a large population.

reply
obiefernandez
4 days ago
[-]
My strategy is to be the guy who wrote the "bible" of integrating LLM code with your normal day-to-day software engineering: Patterns of Application Development Using AI

Amazon: https://www.amazon.com/Patterns-Application-Development-Usin...

Leanpub (ebook only): https://leanpub.com/patterns-of-application-development-usin...

This is actual advice that can be generalized to become an authority in technology related to the phenomenon described by the OP.

reply
rk06
3 days ago
[-]
AI can't debug yet.
reply
jmakov
4 days ago
[-]
Brushing up my debugging skills
reply
tamrix
4 days ago
[-]
LLM is just a hypervised search engine. You still need to know what to ask, what you can get away with and what you can't.
reply
larve
4 days ago
[-]
I have been a software engineer for 25 years and it's my life's dream job. I started coding with LLMs extensively when Copilot's beta came out 3 years ago. There is no going back.

It's (still) easy to dismiss them as "they're great for one-offs", but I have been building "professional" code leveraging LLMs as more than magic autocomplete for 2 years now, and it absolutely works for large professional codebases. We are in the infancy of discovering patterns for how to apply statistical models not just to larger and more complex pieces of code, but to the entire process of engineering software.

Have you ever taken a software architecture brainstorming session's transcript and asked Claude to convert it into GitHub tickets? Then to output those tickets as YAML? Then to write a script to submit those tickets to the GitHub API (see the sketch below)? Then to make it a webserver? Then to add a Google Drive integration? Then to add a Slack notification system? Within an hour or two (once you are experienced), you can have a fully automated transcript-to-GitHub system. Will there be AI slop here and there? Sure; I don't mind spending 10 minutes cleaning up a few tickets, and spending the rest of the time talking some more about architecture.
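
For the "script to submit those tickets to the GitHub API" step, a minimal sketch could look like the following. The repository name, the GITHUB_TOKEN environment variable, and the shape of tickets.yaml are assumptions; the endpoint is GitHub's standard create-issue REST call.

```python
# Sketch: push LLM-generated tickets from a YAML file to GitHub issues.
# Assumes tickets.yaml holds a list of {title, body, labels} entries and a
# GITHUB_TOKEN env var with repo access; REPO is a placeholder.
import os
import requests
import yaml

REPO = "your-org/your-repo"  # placeholder

with open("tickets.yaml") as f:
    tickets = yaml.safe_load(f)

for t in tickets:
    resp = requests.post(
        f"https://api.github.com/repos/{REPO}/issues",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "title": t["title"],
            "body": t.get("body", ""),
            "labels": t.get("labels", []),
        },
        timeout=30,
    )
    resp.raise_for_status()
    print("created", resp.json()["html_url"])
```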

The thing that I think many people are not seeing is that coding with LLMs at scale turns well-established software engineering techniques up to 11. To have an LLM do things at scale, you need concise and clear documentation / reference / tutorials that are always up to date, so that you can fit all the knowledge needed to execute a task into the LLM's context window. You need consistent APIs that make it hard to do the wrong thing; in fact they need to be so self-evident that the code almost writes itself, because... that's what you want.

You want linting with clear error messages, because feeding those back into the LLM often helps it fix small mistakes (a sketch of that loop follows below). You want unit tests and tooling up the wazoo, structured logging, all with the goal of feeding it back into the LLM. That these practices are exactly what humans need too is no accident: LLMs are trained on the human language we use to communicate with machines.
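
The lint-feedback loop can be as small as this sketch: run the linter, and if it complains, hand the errors and the file back to the model. ask_llm is a hypothetical stub, and ruff is just one example linter.

```python
# Sketch: feed linter output back to the model so it can fix small mistakes.
# ask_llm() is a hypothetical stub; ruff is just one example of a linter
# with clear, machine-readable error messages.
import subprocess

def ask_llm(prompt: str) -> str:
    return "(model's proposed fix)"  # stub

def lint(path: str) -> str:
    result = subprocess.run(["ruff", "check", path], capture_output=True, text=True)
    return result.stdout + result.stderr

def repair_pass(path: str) -> str | None:
    errors = lint(path)
    if not errors.strip():
        return None  # clean: nothing to feed back
    with open(path) as f:
        source = f.read()
    return ask_llm(
        "Fix these lint errors without changing behavior.\n\n"
        f"Errors:\n{errors}\n\nFile contents:\n{source}"
    )
```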

When I approach coding with LLMs, I always think of the humans involved first, and targeting documents that would be useful for them is most certainly going to be the best way to create the documents relevant to the AI. And if a certain task is indeed too much for the LLM, then I still have great documents for humans.

Let's assume we have some dirty legacy microservice with some nasty JS, some half baked cloudformation, a dirty database schema and haphazard logging. I would:

- claude, please make an architecture overview + mermaid diagram out of this cloudformation, the output of these aws CLI commands and a screenshot of my AWS console

- spend some time cleaning up the slop

- claude, please take ARCHITECTURE.md and make a terraform module, refactor as needed (claude slaps at these kind of ~1kLOC tasks)

- claude, please make a tutorial on how to maintain and deploy terraform/

- claude, please create a CLI tool to display the current status of the app described in ARCHITECTURE.md

- claude, please create a sentry/datadog/honeycomb/whatever monitoring setup for the app in ARCHITECTURE.md

Or:

- claude, please make an architecture overview + mermaid diagram for this "DESCRIBE TABLES" dump out of my DB

- edit the slop

- claude, please suggest views that would make this cleaner. Regenerate + mild edits until I like what I see

- claude, please make a DBT project to maintain these views

- claude, please make a tutorial on how to install dbt and query those views

- claude, please make a dashboard to show interesting metrics for DATABASE.md

- etc...

You get the idea. This is not rocket science, this literally works today, and it's what I do for both my opensource and professional work. I wouldn't hesitate to put my output at 20-30x, whatever that means. I am able to bang out in a day software that probably would have taken me a couple of weeks before, at a level of quality (docs, tooling, etc...) I never was able to reach due to time pressure or just wanting to have a weekend, WHILE maintaining a great work life balance.

This is not an easy skill and requires practice, but it's not rocket science either. There is no question in my mind that software engineering as a profession is about to fundamentally change (I don't think the need for software will diminish, in fact this just enables so many more people who deserve great software to be able to get it). But my labor? The thing I used to do, write code? I only do it when I want to relax, generate a tutorial about something that interests me, turn off copilot, and do it for "as a hobby".

Here's the writeup of a workshop I gave earlier this year: https://github.com/go-go-golems/go-go-workshop

And a recent masto-rant turned blogpost: https://llms.scapegoat.dev/start-embracing-the-effectiveness...

reply
devops99
3 days ago
[-]
I haven't seen anything other than total crap come out of LLMs.
reply
janalsncm
4 days ago
[-]
For high-paid senior software engineers I believe it is delusional to think that the wolves are not coming for your job.

Maybe not today, and depending on your retirement date maybe you won’t be affected. But if your answer is “nothing” it is delusional. At a minimum you need to understand the failure modes of statistical models well enough to explain them to short-sighted upper management that sees you as a line in a spreadsheet. (And if your contention is you are seen as more than that, congrats on working for a unicorn.)

And if you’re making $250k today, don’t think they won’t jump at the chance to pay you half that and turn your role into a glorified (or not) prompt engineer. Your job is to find the failure modes and either mitigate them or flag them so the project doesn’t make insurmountable assumptions about “AI”.

And for the AI boosters:

I see the idea that AI will change nothing as just as delusional as the idea that “AI” will solve all of our problems. No it won’t. Many of our problems are people problems that even a perfect oracle couldn’t fix. If in 2015 you bought that self-driving cars would be here in 3 years, please see the above.

reply
SignalM
4 days ago
[-]
Want to know how I know you haven’t used an LLM?
reply
Sn0wCoder
4 days ago
[-]
Without doxxing myself or getting too deep into what I actually do there are some key points to think about.

For very simple to mid complex tasks, I do think LLMs will be very useful to programmers to build more efficiently. Maybe even the average joe is building scheduling apps and games that are ok to play.

For business apps I just do not see how an LLM could do the job. In theory you could have it build out all the little, tiny parts and put them together into a Rube Goldberg machine (yes, that is what I do now lol), but when it breaks, I am not sure the LLM would have a context window big enough to feed the entire system into so it could fix itself.

Think of this theoretical app.

This app takes data from some upstream processes. This data is not just from one process but from several that are all very similar but never the same. Even within the same process, new data can be added to the input, sometimes without even consulting the app. Now this app needs to take this data and turn it into something useful, and when it doesn't, it needs to somehow log that and get it back to the upstream app to fix (or maybe it's a feature request for this app). The users want to see the data in many ways, and just when you get it right, they want to see it another way. They need to see that new data that no one even knew was coming.

To fill in the incoming data, this app needs to talk to 20 different APIs. Those APIs are constantly changing. The data this app takes in also needs to go out to 20 different APIs through a shared model, but that model has to account for unknown data too. The app also sends the data from the upstream process to native apps running on the users' local machines. When any of this fails, the logs are spread out over on-prem and hosting locations, sometimes in a DB and sometimes in log files.

Now this app needs to run on-prem but also on Azure and also on GCP. It also uses a combination of Azure AD and some other auth mechanisms. During the build, the deploy process needs to get the DB credentials, client secrets, and roles out of a vault somewhere. Someone needs to set up all these roles, secrets, and IDs. Someone needs to set up all the deploy scripts. With every deploy/build there are automated tools scanning for code quality and security vulnerabilities; if something is found, someone needs to stop what they are doing and fix the issue. Someone needs to test all this every time a change is made and then go to some meetings to get approval. Someone needs to send out any downtime notices and make sure the times all work. Also, the 20 apps this one talks to are just like this app.

I could keep going but I think you get the point. Coding is only ¼ of everything involved in software. Sure, coding will get faster, but the jobs associated with it are not going away. Requirements are crap if you even get requirements.

The way to future proof your job is get in somewhere where the process is f-ed up but in a job security type f-ed up. If your job is writing HTML and CSS by hand and that is all you do, then you may need to start looking. If your job has any kind of process around it, I would not worry for another 10 – 20 years, then we will need to reassess, but again it is just not something to worry about in my opinion.

I also know some people are building simple apps and making a living, and the best thing to do is embrace the suck and milk it while you can. The good thing is, if LLMs really get good (like, really good), you will be building super complex apps in no time. The world is going to have a lot more apps, that is for sure; whether anyone will use them is the question.

reply
registeredcorn
4 days ago
[-]
When it comes to my outlook on the job market, I don't concern myself with change, but time-tested demand.

For example: Have debit cards fundamentally changed the way that buying an apple works? No. There are people who want to eat an apple, and there are people who want to sell apples. The means to purchase that may be slightly more convoluted, or standardized, or whatever you might call it, but the core aspects remain exactly as they have for as long as people have been selling food.

So then, what demand changes with LLM writing code? If we assume that it can begin to write even a quarter way decent code for complex, boutique issues, then central problems will still remain: New products need to be implemented in ways that clients cannot implement themselves. Those products will still have various novel features need to be built. They will still also have products that will have serious technical issues that need to be debugged and reworked to the clients specifications. I don't see LLM being able to do that for most scenarios, and doubly so for niche ones. Virtually anyone who builds software for clients will at some point or another end up creating a product that falls into a very specific use-case, for one specific client, either because of budget concerns, restraints, bizarre demands, specific concerns, internal policy changes, or any other plethora of thing.

Imagine for example that there is a client that works in financing and taxes, but knows virtually nothing about how to describe what they need some custom piece of software to do. Them attempting to write a few sentences into a tarted up search engine isn't going to help if they don't have the vocabulary and background knowledge to specify the circumstance and design objectives. This is literally what SWE's are. SWE's are translators! They implement general ideals described by clients into firm implementations using technical knowhow. If you cannot describe what you need to an LLM, you have the same problem as if there were no LLM to begin with. "I need tax software that loads really fast, and is easy to use." isn't going to help, no matter how good the LLM is.

Granted, perhaps those companies can get one employee to dual hat and kind of implement some sloppy half-baked solution, but...that's also a thing that you will run into right now. There are plenty of clients who know something about shell scripts, or took a coding class a few years back and want to move into SWE but are stuck in a different industry for the time being. They aren't eating my lunch now, so what would have us believe that this would change just because the method of how a computer program might become slightly more "touchy feely"? Some people are invested in software design. Some are not. The ones who are not just want the thing to do what its supposed to do without a lot of time investment or effort. The last thing they want to do is trying to work out what aspect of US Tax law its hallucinating.

As for the companies making the LLM's, I don't see them having the slightest interest in offering support for some niche piece of software the company itself didn't make - they don't have a contract, and they don't want to deal with the fallout. I see the LLM company wanting to focus on making their LLM better for broader topics, and figuring out how to maximize profit, while minimizing costs. Hiring a bunch of staff to support unknown, strange and niche programs made by their customers is the exact opposite of that strategy.

Honestly, if anything, I see more people being needed in the SWE industry, simply because there is going to be a glut of wonky, LLM-generated software out there. I imagine web developers are pretty accustomed to dealing with this type of thing when working with companies trying to transition out of WYSIWYG website implementations. I haven't had to deal with it too much myself, but my guess is that the standard advice is that it's easier and quicker to burn it to the ground and build anew. Assuming that is the case, LLM-generated software is basically... what? GeoCities? Angelfire? Sucks for the client, but it's great for SWEs as far as job security is concerned.

reply
nathan_anecone
4 days ago
[-]
I think fully automated LLM code generation is an inherently flawed concept, unless the entire software ecosystem is automated and self-generating. I think if you carry out that line of thought to its extreme, you'd essentially need a single Skynet like AI that controls and manages all programming languages, packages, computer networks internally. And that's probably going to remain a sci-fi scenario.

Due to a training-lag, LLMs usually don't get the memo when a package gets updated. When these updates happen to patch security flaws and the like, people who uncritically push LLM-generated code are going to get burned. Software moves too fast for history-dependent AI.

The conceit of fully integrating all needed information in a single AI system is unrealistic. Serious SWE projects, that attempt to solve a novel problem or outperform existing solutions, require a sort of conjectural, visionary and experimental mindset that won't find existing answers in training data. So LLMs will get good at generating the billionth to-do app but nothing boundary pushing. We're going to need skilled people on the bleeding edge. Small comfort, because most people working in the industry are not geniuses, but there is also a reflexive property to the whole dynamic. LLMs open up a new space of application possibilities which are not represented in existing training data so I feel like you could position yourself comfortably by getting on board with startups that are actually applying these new technologies creatively. Ironically, LLMs are trained on last-gen code, so they obsolete yesterday's jobs. But you won't find any training data for solutions which have not been invented yet. So ironically AI will create a niche for new application development which is not served by AI.

Already if you try to use LLMs for help on some of the new LLM frameworks that came out recently like LangChain or Autogen etc, it is far less helpful than on something that has a long tailed distribution in the training data. (And these frameworks get updated constantly, which feeds into my last point about training-lag).

This entire deep learning paradigm of AI is not able to solve problems creatively. When it tries to, it "hallucinates".

Finally, I still think a knowledgable, articulate developer PLUS AI will consistently outperform an AI MINUS a knowledgable, articulate developer. More emphasis may shift onto "problem formulation", getting good at writing half natural language, half code pseudo-code prompts and working with the models conversationally.

There's a real problem too with model collapse: as AI-generated code becomes more common, you remove the tails of the distribution, resulting in more generic code without a human touch. There are only so many cycles of retraining on this regurgitated data before you encounter not just diminishing returns, but damage to the model. So I think LLMs will be self-limiting.

So all in all I think LLMs will make it harder to be a mediocre programmer who can just coast by doing highly standardized janitorial work, but it will create more value if you are trying to do something interesting. What that means for jobs is a mixed picture. Fewer boring, but still paying jobs, but maybe more work to tackle new problems.

I think only programmers understand the nuances of their field however and people on the business side are going to just look at their expense spreadsheets and charts, and will probably oversimplify and overestimate. But that could self-correct and they might eventually concede they're going to have to hire developers.

In summary, the idea that LLMs will completely take over coding logically entails an AI system that completely contains the entire software ecosystem within itself, and writes and maintains every endpoint. This is science fiction. Training lag is a real limitation since software moves too fast to constantly retrain on the latest updates. AI itself creates a new class of interesting applications that are not represented in the training data, which means there's room for human devs at the bleeding edge.

If you got into programming just because it promised to be a steady, well-paying job, but have no real interest in it, AI might come for you. But if you are actually interested in the subject and understand that not all problems have been solved, there's still work to be done. And unless we get a whole new paradigm of AI that is not data-dependent, and can generate new knowledge whole cloth, I wouldn't be too worried. And if that does happen, too, the whole economy might change and we won't care about dinky little jobs.

reply
yehosef
5 days ago
[-]
use them
reply
byyoung3
4 days ago
[-]
you don't become an SWE
reply
c7THEC2DDFVV2V
3 days ago
[-]
by using them
reply
JamesLeonis
4 days ago
[-]
I'm 15 years in, so a little behind you, but this is also some observations from the perspective of a student during the Post-Dot-Com bust.

A great parallel to today's LLMs was the outsourcing mania of 20 years ago. It was worse than AGI because actual living, breathing, thinking people would write your code. After the Dot-Bomb implosion, a bunch of companies turned to outsourcing as a way to skirt the cost of expensive US programmers. In their minds, a manager could produce a spec that was sent to an overseas team to implement. A "prompt", if you will. But as time wore on, the hype wore off with every broken and spaghettified app. Businesses were stung back into hiring programmers, but not before destroying a whole pipeline of CS graduates for many years. It fueled a surge in demand for programmers against a small supply that didn't abate until the latter half of the 2010s.

Like most things in life, a little outsourcing never hurt anybody but a lot can kill your company.

> My prediction is that junior to mid level software engineering will disappear

Agree, with some qualifications. I think LLMs will follow a similar disillusionment as outsourcing, but not before decimating the profession in both headcount and senior experience. The pipeline of Undergrad->Intern/Jr->Mid->Sr development experience will stop, creating even more demand for the existing (and now dwindling) senior talent. If you can rough it for the next few years, the employee pool will be smaller and businesses will ask wHeRe dId aLl tHe pRoGrAmMeRs gO?! just like last time. We're going to lose entire classes of CS graduates for years before companies reverse course, and then it will take several more years to steward another generation of CS grads through the curriculum.

AI companies sucking up all the funding out of the room isn't helping with the pipeline either.

In the end it'll be nearly a decade before the industry recovers its ability to create new programmers.

> So, fellow software engineers, how do you future-proof your career in light of, the inevitable, LLM take over?

Funnily enough, probably start a business or build that cool project you've had in the back of your mind. Now is the time to keep your skills sharp. LLMs are good enough to help with some of those rote tasks, as long as you are diligent.

I think LLMs will fit into future tooling as souped-up Language Servers and be another tool in our belt. I also foresee a whole field of predictive BI tools that lean on LLMs hallucinating plausible futures, prompted with (for example) future newspaper headlines. There are also tons of technical/algorithmic domains ruled by heuristics that could be improved by the tech behind LLMs. Imagine a compiler that understands your code and puts more weight on certain heuristics and/or optimizations. In short, keeping up with the tools will be useful long after the hype train derails.

People skills are perennially useful. It's often forgotten that programming spans two domains: the problem domain and the computation domain. Two people, one in each domain, can build Mechanical Sympathy that blurs the boundary between the two. However, the current state of LLMs does not have this expertise, so the LLM user must grasp both the technical and problem domains to properly vet what the LLMs return from a prompt.

Also, keep yourself alive, even if that means leaving the profession for something else for the time being. The Software Crisis is over 50 years old at this point, and LLMs don't appear to be the Silver Bullet.

tl;dr: Businesses saw the early 2000s and said "More please, but with AI!" Stick it out in "The Suck" for the next couple of years until businesses start demanding people again. AI can be cool and useful if you keep your head firmly on your shoulders.

reply
bdangubic
4 days ago
[-]
> Like most things in life, a little outsourcing never hurt anybody but a lot can kill your company.

there are amazing companies which have fully outsourced all of their development. this trend is on the rise and might hit $1T market cap in this decade…

reply
JamesLeonis
4 days ago
[-]
> there are amazing companies which have fully outsourced all of their development.

I completely agree...

> this trend is on the rise and might hit $1T market cap in this decade…

It's this thinking that got everybody in trouble last time. A trend doesn't write your program. There was a certain "you get what you pay for" reflected in the quality of code many businesses received from outsourcing. Putting in the work, developing relationships with your remote contractors, and paying them well makes for great partners that deliver high-quality software. It was the penny-wise-pound-foolish managers who drank too much of the hype koolaid that found themselves with piles of terrible, barely working code.

Outsourcing, like LLMs, is a relationship and not a shortcut. Keep your expectations realistic and grounded, and it can work just fine.

reply
agentultra
4 days ago
[-]
Learn to think above the code: learn how to model problems and reason about them using maths. There are plenty of tools in this space to help out: model checkers like TLA+ or Alloy, automated theorem provers such as Lean or Agda, and plain old notebooks and pencils.
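
For a flavour of what "above the code" looks like, here is a minimal sketch in Lean 4 (a made-up example, purely illustrative): you state the property you care about separately from whatever code eventually implements it, and the checker holds you to it.

    -- Toy model of moving an amount between two balances (illustrative only).
    def transfer (amt src dst : Nat) : Nat × Nat :=
      (src - amt, dst + amt)

    -- The property we actually care about, stated and machine-checked
    -- "above the code": if the amount is covered, total balance is conserved.
    theorem transfer_conserves (amt src dst : Nat) (h : amt ≤ src) :
        (transfer amt src dst).1 + (transfer amt src dst).2 = src + dst := by
      simp [transfer]; omega

The point is not the particular tactic; it is that the invariant exists as an explicit, checkable artifact that outlives any one implementation, LLM-generated or not.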

Our jobs are not and have never been: code generators.

Take a read of Naur's essay, Programming as Theory Building [0]. The gist is that it's the theory you build in your head about the problem, the potential solution, and what you know about the real world that is valuable. Source code depreciates over time when left to its own devices. It loses value when the system it was written for changes, dependencies get updated, and it bit-rots. It loses value as the people who wrote the original program, or worked with those who did, leave and the organization starts to forget what it was for, how it works, and what it's supposed to do.

You still have to figure out what to build, how to build it, how it serves your users and use cases, etc.

LLMs, at best, generate some code. Plain language is not specific enough to produce reliable, accurate results, so you'll forever be hunting for increasingly subtle errors. The training data will run out, and models degrade on synthetic inputs. So... they're only going to get "so good," no matter how much context they can maintain.

And your ability, as a human, to find those errors will be quickly exhausted. There are far too few studies on the effects of informal code review on error rates in production software. Of those that have been conducted, any statistically significant effect on error rates seems to disappear once a reviewer has read more than ~200 SLOC in an hour.

I suspect a good source of income will come from untangling the messes generated by teams that rely too much on these tools: code with errors that only appear at scale, or with subtle security flaws.

Finally, it's not "AI," that's replacing jobs. It's humans who belong to the owning class. They profit from the labour of the working class. They make more profit when they can get the same, or greater, amount of value while paying less for it. I think these tools, "inevitably," taking over and becoming a part of our jobs is a loaded argument with vested interests in that becoming true so that people who own and deploy these tools can profit from it.

As a senior developer I find that these tools are not as useful as people claim they are. They're capable of fabricating test data... usually of a quality that requires inspection... and really, who has time for that? And they can generate boilerplate code for common tasks... but how often do I need boilerplate code? Rarely. I find the answers they give in summaries often contain completely made-up BS. I'd rather just find the answer myself.

I fear for junior developers who are looking to find a footing. There's no royal road. Getting your answers from an LLM for everything deprives you of the experience needed to form your own theories and ideas...

so focus on that, I'd say. Think above the code. Understand the human factors, the organizational and economic factors, and the technical ones. You fit in the middle of all of these moving parts.

[0] https://pages.cs.wisc.edu/~remzi/Naur.pdf

Update: forgot to add the link to the Naur essay

reply
firemelt
3 days ago
[-]
this question only comes from pundits, smh.
reply
rw_panic0_0
4 days ago
[-]
nah it's nonsense
reply
nerder92
5 days ago
[-]
As with every job done well, the most important thing is to truly understand the essence of your job: why it exists in the first place and which problem it truly solves when done well.

A good designer is not going to be replaced by Dall-e/Midjourney, because the essence of design is to understand the true meaning/purpose of something and be able to express it graphically, not to align pixels with the correct HEX colour combination one next to the other.

A good software engineer is not going to be replaced by Cursor/Co-pilot, because the essence of programming is to translate the business requirements of a real-world problem that other humans are facing into an ergonomic tool that can solve that problem at scale, not to write characters in an IDE.

Neither junior nor senior devs will go anywhere. What will for sure go away are all the "code-producing" human machines, such as Fiverr freelancers/consultants, who completely misunderstand or neglect the true essence of their work. Because code (as in a set of meaningful 8-bit symbols) was never the goal, but always a means to an end.

Code is an abstraction, allegedly our best abstraction to date, but it's hard to believe it is the last iteration.

I'll argue that software itself will be a completely different concept in 100 years from now, so it's obvious that the way of producing it will change too.

There is a famous quote attributed to Hemingway that goes like this:

"Slowly at first, then all at once"

This is exactly what is happening, and what always happens.

reply
throwaway_43793
5 days ago
[-]
It's a good point, and I keep hearing it often, but it has one flaw.

It assumes that most engineers are in contact with the end customer, while in reality they are not. Most engineers go through a PM whose role is to do what you described: speak with customers, understand what they want, and somehow translate it into a language the engineers will understand so they can in turn translate it into code. (Edit: the other part is "IC" roles like tech lead/staff/etc., but the ratio of those ICs to engineers is, by my estimate, around 1:10-20.) So the majority of engineers are purely writing code, plus the supporting actions around code (tech documentation, code reviews, pair programming, etc.).

Now, my question is as follows: who has a better chance of staying employed in a post-LLM-superiority world, (1) a good technical software engineer with poor people/communication skills, or (2) a good communicator (such as a PM) with poor software engineering skills?

I bet on 2, and as one of the comments says, if I had to future-proof my career, I would move as fast as possible to a position that requires me to speak with people, be it other people in the org or customers.

reply
nerder92
5 days ago
[-]
(1) is exactly the misunderstanding I'm talking about: most creative jobs are not defined by their output (which is cheap) but by the way they reach that output. Software engineers who thought they could write their special characters in a dark room without the need to actually understand anything will go away in a breeze (for good).

This entire field used to be full of hackers: deeply passionate and curious individuals who wanted to understand every little detail of the problem they were solving and why. Then software became professionalized, and a lot of amateurs looking for a quick buck came in, commoditizing the industry. With LLMs we will go full circle and push out a lot of the amateurs, giving space back to the hackers.

Code was never the goal; solving problems was.

reply
hnthrowaway6543
5 days ago
[-]
this is the correct answer

i can only assume software developers afraid of LLMs taking their jobs have not been doing this for long. being a software developer is about writing code in the same way that being a CEO is about sending emails. and i haven't seen any CEOs get replaced even though chatgpt can write better emails than most of them

reply
throwaway_43793
5 days ago
[-]
But the problem is that the majority of SWEs are like that. You can blame them, or the industry, but most engineers are writing code most of the time. For every Tech Lead who does "people stuff", there are 5-20 engineers who mostly write code and barely know the entire scope/context of the product they are working on.
reply
hnthrowaway6543
5 days ago
[-]
> but most engineers are writing code most of the time.

the physical act of writing code is different from the process of developing software. 80%+ of the time spent on a feature is designing, reading existing code, thinking about the best way to implement your feature in the existing codebase, etc., not to mention debugging, resolving oncall issues, and other software-related tasks which are not writing code

GPT is awesome at spitting out unit tests, writing one-off standalone helper functions, and scaffolding brand new projects, but this is realistically 1-2% of a software developer's time
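
To make "one-off standalone helper function" concrete, this is the sort of thing an LLM reliably one-shots (a hypothetical example, not from any real codebase): small, fully specified, and needing no surrounding context.

    # A self-contained helper plus a unit test: exactly the well-bounded,
    # context-free kind of task described above.
    import re
    import unittest

    def slugify(title: str) -> str:
        """Lowercase, replace runs of non-alphanumerics with '-', trim dashes."""
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    class TestSlugify(unittest.TestCase):
        def test_basic(self):
            self.assertEqual(slugify("Hello, World!"), "hello-world")

    if __name__ == "__main__":
        unittest.main()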

reply
throwaway_43793
5 days ago
[-]
Everything you have described, apart from on-call, I think LLMs can/will be able to do. Explaining code, reviewing code, writing code, writing tests, writing tech docs. I think we are approaching a point where all of these will be done by LLMs.

You could argue about architecture and thinking through the correct/proper implementation, but I'd argue that over the past 7 decades of software engineering we have not gotten close to a perfect-architecture singularity where code is maintainable and there is no tech debt left. Therefore, arguments such as "but LLMs produce spaghetti code" can easily be dismissed by saying that humans do as well, except humans also waste time thinking about ways to avoid spaghetti code and end up writing it anyway.

reply
hnthrowaway6543
5 days ago
[-]
> Explaining code, reviewing code, writing code, writing tests, writing tech docs.

people using GPT to write tech docs at real software companies get fired, full stop lol. good companies understand the value of concise & precise communication and slinging GPT-generated design docs at people is massively disrespectful to people's time, the same way that GPT-generated HN comments get downvoted to oblivion. if you're at a company where GPT-generated communication is the norm you're working for/with morons

as for everything else, no. GPT can explain a few thousand lines of code, sure, but it can't explain how every component in a 25-year-old legacy system with millions of lines and dozens/scores of services works together. "more context" doesn't help here

reply
rvz
5 days ago
[-]
> A good software engineer is not going to be replaced by Cursor/Co-pilot, because the essence of programming is to translate the business requirements of a real-world problem that other humans are facing into an ergonomic tool that can solve that problem at scale, not to write characters in an IDE.

Managers and executives see engineers and customer service only as an additional cost; they will take any opportunity to trim roles, and they do not care.

This year's excuse is anything that uses AI, GPTs, or agents, and they will try it anyway. Companies such as Devin and Klarna are not hiding this fact.

There will simply be fewer engineering and customer service roles in 2025.

reply
AnimalMuppet
5 days ago
[-]
Some will. Some won't. The ones that cut engineering will be hurting by 2027, though, maybe 2026.

It's almost darwinian. The companies whose managers are less fit for running an organization that produces what matters will be less likely to survive.

reply
cruffle_duffle
4 days ago
[-]
Only dodgy dinosaur companies with shitty ancient crusty management see engineers as cost centers. Any actual modern tech company sees engineers as the engine that drives the entire business. This has been true for decades.
reply
nerder92
5 days ago
[-]
From a financial point of view, engineers are considered assets not costs, because they contribute to growing the valuation of the company's assets.

The right thing to do economically (in capitalism) is to do more of the same, but faster. So if you, as a software engineer or customer service rep, can't do more of the same faster, you will be replaced by someone (or something) that allegedly can.

reply
timr
4 days ago
[-]
> From a financial point of view, engineers are considered assets not costs

At Google? Perhaps. At most companies? No. At most places, software engineering is a pure cost center. The software itself may be an asset, but the engineers who are churning it out are not. That's part of the reason that it's almost always better to buy than build -- externalizing shared costs.

Just as an extreme example, I worked at a place that made us break down our time into new code vs. maintenance of existing code, because a big chunk of our time was accounted for literally as a cost and could not be capitalized and depreciated.

reply
samatman
4 days ago
[-]
So what you're saying is that some of us should be gearing up to pull in ludicrous amounts of consultant money in 2026, when the chickens come home to roost, and the managers foolish enough to farm out software development to LLMs need to hire actual humans at rate to exorcize their demon-haunted computers?

Yeah that will be a lucrative niche if you have the stomach for it...

reply
dmortin
4 days ago
[-]
> A good designer is not going to be replaced by Dall-e/Midjourney, because the essence of design is to understand the true meaning/purpose of something and be able to express it graphically, not to align pixels with the correct HEX colour combination one next to the other.

Yes, but the output of Dall-e etc. will be good enough for most people and small companies, especially if it's cheap or even free.

Big companies with deep pockets will still employ talented designers, because they can afford it and for prestige, but in general many average designer jobs are going to disappear and get replaced with AI output instead, because it's good enough for the less demanding customers.

reply
thor_molecules
4 days ago
[-]
After reading the comments, the themes I'm seeing are:

- AI will provide a big mess for wizards to clean up

- AI will replace juniors and then seniors within a short timeframe

- AI will soon plateau and the bubble will burst

- "Pshaw I'm not paid to code; I'm a problem solver"

- AI is useless in the face of true coding mastery

It is interesting to me that this forum of expert technical people is so divided on this (broad) subject.

reply
johnfn
4 days ago
[-]
To be honest, HN is about this with any topic. In the domain of stuff I know well, I've seen some of the dumbest takes imaginable on HN, as well as some really well-reasoned and articulated stuff. The limiting factor tends to be the number of people that know enough about the topic to opine.

AI happens to be a topic that everyone has an opinion on.

reply
themanmaran
4 days ago
[-]
The biggest surprise to me (generally across HN) is that people expect LLMs to develop on a really slow timeframe.

In the last two years LLM capabilities have gone from "produces a plausible sentence" to "can generate a functioning web app". Sure it's not as masterful as one produced by a team of senior engineers, but a year ago it was impossible.

But everyone seems to evaluate LLMs as if they're fixed at today's capabilities. I keep seeing "10-20 year" estimates for when "LLMs are smart enough to write code". It's a very head-in-the-sand attitude toward the trajectory of the last two years.

reply
unclad5968
4 days ago
[-]
Probably because we see stuff like this every decade. Ten years ago no one was ever going to drive again because self-driving cars were imminent. Turns out a lot of problems can be partially solved very quickly, but as anyone with experience knows, solving the last 10% takes at least as much time as solving the first 90.
reply
themanmaran
4 days ago
[-]
> Ten years ago no one was ever going to drive again because self-driving cars were imminent

Right.. but self driving cars are here. And if you've taken Waymo anywhere it's pretty amazing.

Of course, just because the technology is available doesn't mean distribution is solved. The production of corn has been technically solved for a long time, but that doesn't mean starvation was eliminated.

reply
zahlman
4 days ago
[-]
>And if you've taken Waymo anywhere it's pretty amazing.

Yeah, about that: https://ca.news.yahoo.com/hilarious-video-shows-waymo-self-1...

reply
layer8
4 days ago
[-]
You can’t extrapolate the future trajectory of progress from the past. It comes in pushes and phases. We had long phases of AI stagnation in the past, we might see them again. The past five years or so might turn out to be a phase transition from pre-LLM to post-LLM, rather than the beginning of endless dramatic improvements.
reply
shadowerm
4 days ago
[-]
It would be different, too, if we didn't know the secret sauce here is massive amounts of data, and that the jump in capability was directly related to a jump in the amount of data.

Some of the logic here is akin to: I lost 30 lbs in 2024, so at this pace I will weigh -120 lbs by 2034!

reply
_heimdall
4 days ago
[-]
> It comes in pushes and phases. We had long phases of AI stagnation in the past

Isn't that still extrapolating the future from the past? You see a pattern of pushes and phases and are assuming that's what we will see again.

reply
shadowerm
4 days ago
[-]
I am not a software engineer, and I made working stock charting software with react/python/typescript in April 2023 when chatGPT4 came out, without really knowing typescript at all. Of course, after a while it became impossible to update or add anything, and it basically fell apart because I didn't know what I was doing.

That is going to be 2 years ago before you know it. Sonnet is better at using more obscure python libraries, but beyond that the improvement over chatgpt4 is not that big.

I never tried chatGPT4 with Julia or R but the current models are pretty bad with both.

Personally, I think OpenAI made a brilliant move to release 3.5 and then 4 a few months later. It made it feel like AGI was just around the corner at that pace.

Imagine what people would have thought in April 2023 if you told them that in December 2024 there would be a $200 a month model.

I waited forever for Sora and it is complete garbage. OpenAI was crafting this narrative about putting Hollywood out of business when in reality these video models are nearly useless for anything much more than social media posts about how great the models are.

It is all beside the point anyway. The way to future-proof yourself is to be intellectually curious and constantly learning, no matter what field you are in or what you are doing. You'll probably have to reinvent your career a few times, whether you want to or not.

reply
irunmyownemail
4 days ago
[-]
"In the last two years LLM capabilities have gone from "produces a plausible sentence" to "can generate a functioning web app". Sure it's not as masterful as one produced by a team of senior engineers, but a year ago it was impossible."

Illegally ingesting the Internet, copyrighted and IP-protected information included, then cleverly spitting it back out in generic-sounding tidbits will do that.

reply
tobyhinloopen
4 days ago
[-]
Even o1 just floored me. I put in heaps of C++ code and some segfault stack traces, and it gave me an actual cause and a fix.

I gave it thousands of lines of C++ and it pointed out the problem.

reply
tumetab1
4 days ago
[-]
Many commenters suffer from first-experience bias: they tried ChatGPT, it was "meh", so they see no impact.

I have tried cursor.ai's agent mode, and I see a clear, big impact.

reply
pockmarked19
4 days ago
[-]
As soon as you replace the subject of LLMs with nebulous “AI” you have ventured into a la la land where any claim can be reasonably made. That’s why we should try and stick to the topic at hand.
reply