AI is going to increase the rate at which complexity grows tenfold by spitting out enormous amounts of code. That's where the job market for developers is heading. Unless you 100% solve the problem of feeding it every third-party monitoring tool, every log, the compiler output, and system stats down to the temperature of the RAM, and then make it actually understand how to fix said enormous system (which it can't do even if you did give it all that context, by the way), AI will only increase the number of engineers you need.
This is true, and I am (sadly, I'd say) guilty of it. In the past, for example, I'd be much more wary about having too much duplication. I was working on a Go project where I needed multiple levels of object mapping (e.g. entity objects to DTOs, etc.), and the LLM just spat out the answer in seconds (correctly, I'd add), even though it was lots and lots of code where, in the past, I would have written a more generic solution to avoid having to write so much boilerplate.
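To make the trade-off concrete, here's a minimal sketch in Go; the types and names are hypothetical, not from the project above. It shows the hand-rolled, per-type mapping an LLM will happily multiply across every entity/DTO pair, alongside a small generic helper that at least removes the repeated slice-mapping loops.

```go
package mapping

// Hypothetical entity and DTO types, for illustration only.
type UserEntity struct {
	ID    int64
	Name  string
	Email string
}

type UserDTO struct {
	ID    int64  `json:"id"`
	Name  string `json:"name"`
	Email string `json:"email"`
}

// The boilerplate style: one mapping function per entity/DTO pair,
// repeated for every type in the project.
func UserEntityToDTO(e UserEntity) UserDTO {
	return UserDTO{ID: e.ID, Name: e.Name, Email: e.Email}
}

// A more generic piece: map any slice given a per-item mapper, so the
// looping boilerplate, at least, isn't duplicated everywhere.
// Usage: dtos := MapSlice(entities, UserEntityToDTO)
func MapSlice[E, D any](items []E, f func(E) D) []D {
	out := make([]D, 0, len(items))
	for _, it := range items {
		out = append(out, f(it))
	}
	return out
}
```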
I see where the evolution of coding is going, and as a late-middle-aged developer it has made me look for the exits. I don't disagree with the business rationale of the direction, and I certainly have found a ton of value in AI (e.g. I think it makes learning a new language a lot easier). But it makes programming so much less enjoyable for me personally. I feel like it's transformed the job from "author" to "editor", and for me, the nitty-gritty details of programming were the fun part.
Note I'm not making any broad statement about the profession generally, I'm just stating with some sadness that I don't enjoy where the day-to-day of programming is heading, and I just feel lucky that I've saved up enough in the earlier part of my career to get out now.
That's one way to solve the problem.
Not the way I'm looking for when hiring.
So instead of being creative and finding ways to avoid duplication, you look for a way to make copies faster.
That's not at all how I read the parent post. It feels more like you're replying to a hybrid of the grandparent post (person who churned out a lot of duplicated code with AI) and the parent post (person who likes being "editor" and finds AI helpful).

When we went from assembler to compilers, this logic would have said, "So instead of being creative and finding ways to avoid hand-coding loops, you look for a way to spit out copies of loops faster." And the answer is, "Yeah, I do! I want the compiler to write the loop for me and get all the jumps right so I don't have to worry about it! Getting the jumps for the loop right is incidental to what I'm actually trying to write; I've got better things to spend my time on."
Note well: I am not arguing that AI will produce good code rather than multiple layers of garbage. I am merely saying that this particular argument is weak.
If my compiler occasionally output the recipe for vegan pancakes instead of working code I would definitely think differently of it.
I will admit that compilers don't hallucinate much.
I think the only thing I perhaps don't totally agree with is the idea that AI just lets you focus on a higher level of thinking while it "takes care of the details". AI is still the leakiest of abstractions, and while coding LLMs have gotten much better over the past two years, I still can't just trust them, so I have to review every line that goes to prod. I just find that task much less enjoyable ("editing") than being the author of the code. And heck, I'm someone who really enjoys doing code reviews. With code reviews my mind is in a state where I'm helping to mentor another human, and I love that aspect of it. I'm not so energetic about helping to train our robot overlords.
People with your attitude will be the first to be replaced.
Not because your code isn't as good as an AI's; maybe it's even better. But because your personality makes you a bad teammate.
I expect this theme to repeat all the time from now on.
And I also expect it to crimp the growth of several people, because the AI solves the simplest problems for them, and then, when the code needs just a small increment in realism, they face an insurmountable wall, having to learn every concept at the same time.
Software development will probably become extremely profitable for the people who can do it properly over the next couple of decades.
That's awesome!
Never happened so far in my 26-year career. It's almost as if they're hiring for something other than solving problems and writing code. Following orders, most likely.
And then work out how to do code review and fixing using AI, lightly supervised by you so that you can do it all whilst walking the dog or playing croquet or something.
Software engineering jobs involve working in a much wider solution space - writing new code is but one intervention among many. I hope the people blindly following LLM advice realize their lack of attention to detail and "throw new code at it" attitude comes across as ignorant and foolish, not hyper-productive.
Ask for multiple refactors and their trade-offs
But I think I would rather just end my career instead of transitioning into fixing enormous codebases written by LLMs.
I’m really ashamed of what SWE has become, and AI will increase that tenfold, as you say. We shouldn’t cheer that on, especially since I’m the one who will have to debug all that crap.
And if it increases the number of engineers, they won’t be good ones, due to a lack of education (I already experience this at work). But anyway, I don’t believe it; managers will not waste more money on us, as that would go against modern capitalism.
The most obvious example of this already happening is in how function calling interfaces are defined for existing models. It's not hard to imagine that principle applied more generally, until human intervention to get a desired result is the exception rather than the rule as it is today.
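As a rough sketch of what such a function-calling interface looks like: most current LLM APIs take a JSON description of each tool, roughly like the one marshalled below. The exact field names differ by vendor, and get_weather is just a made-up example, so treat the shape as an assumption rather than any provider's actual contract.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Tool approximates the JSON shape many LLM APIs expect when exposing a
// function to the model; vendors differ in the details, so this is illustrative.
type Tool struct {
	Name        string         `json:"name"`
	Description string         `json:"description"`
	Parameters  map[string]any `json:"parameters"` // a JSON Schema object
}

func main() {
	t := Tool{
		Name:        "get_weather", // hypothetical tool
		Description: "Return the current weather for a city.",
		Parameters: map[string]any{
			"type": "object",
			"properties": map[string]any{
				"city": map[string]any{"type": "string"},
			},
			"required": []string{"city"},
		},
	}
	b, _ := json.MarshalIndent(t, "", "  ")
	fmt.Println(string(b)) // this structured spec is what the model is prompted with
}
```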
I spent most of the past 2 years in "AI cope" mode and wouldn't consider myself a maximalist, but it's impossible not to see already from the nascent tooling we have that workflow automation is going to improve at a rapid and steady rate for the foreseeable future.
The theoretical advance we're waiting for in LLMs is auditable determinism. Basically, the ability to take a set of prompts and have a model recreate what it did before.
At that point, the utility of human-readable computer languages sort of goes out the door. The AI prompts become the human-readable code, the model becomes the interpreter and it eventually, ideally, speaks directly to the CPUs' control units.
This is still years--possibly decades--away. But I agree that we'll see computer languages evolving towards auditability by non-programmers and reliability in parsing by AI.
Non-determinism in LLMs is currently a feature and is introduced consciously. Even if it weren't, you would have to lock yourself to a specific model, since any future update would necessarily be a possibly breaking change.
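To make "introduced consciously" concrete, here is a toy sketch of where the randomness usually enters: sampling from a temperature-scaled softmax over the model's output logits. Greedy decoding (temperature effectively zero) or a pinned seed makes the choice repeatable for the same logits; the numbers here are made up and stand in for a real model.

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// sampleToken picks a token index from logits. With temperature <= 0 it is
// greedy (always the argmax); otherwise it samples from a temperature-scaled
// softmax, which is where the non-determinism comes from.
func sampleToken(logits []float64, temperature float64, rng *rand.Rand) int {
	if temperature <= 0 {
		best := 0
		for i, l := range logits {
			if l > logits[best] {
				best = i
			}
		}
		return best
	}
	weights := make([]float64, len(logits))
	var sum float64
	for i, l := range logits {
		weights[i] = math.Exp(l / temperature)
		sum += weights[i]
	}
	r := rng.Float64() * sum
	for i, w := range weights {
		r -= w
		if r <= 0 {
			return i
		}
	}
	return len(logits) - 1
}

func main() {
	logits := []float64{2.0, 1.0, 0.5}         // toy values
	rng := rand.New(rand.NewSource(42))        // fixed seed: replayable sampling
	fmt.Println(sampleToken(logits, 0, rng))   // greedy: always token 0
	fmt.Println(sampleToken(logits, 0.8, rng)) // sampled, but reproducible with this seed
}
```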
> At that point, the utility of human-readable computer languages sort of goes out the door.
Its utility is having a non-ambiguous language to describe your solution in and that you can audit for correctness. You'll never get this with an LLM because its very premise is using natural language, which is ambiguous.
What I'm suggesting is a way to lock the model and then be able to have it revert to that state to re-interpret a set of prompts deterministically. When exploring, it can still branch non-deterministically. But once you've found a solution that works, you want the degrees of freedom to be limited.
> You'll never get this with an LLM because its very premise is using natural language, which is ambiguous
That's the point of locking the model. You need the prompts and the interpreter.
This still doesn't seem to work for me:
- even after locking the LLM state you still need to understand how it processes your input, which is a task nobody has been able to do yet. Even worse, this can only happen after locking it, so it needs to be done for every project.
- the prompt is still ambiguous, so either you need to refine it to the point it becomes more similar to a programming language or you need an unlimited set of rules for how it should be disambiguated, which an auditor needs to learn. This makes the job of the auditor much harder and error prone.
I think this is a manifestation of machine thinking - the majority of buyers and users of software rarely ask for or need this level of perfection. Noise is everywhere in the natural environment, and I expect it to be everywhere in the future of computing too.
You're right. Maybe just reliable replicability, then.
The core point is, the next step is the LLM talking directly to the control unit. No human-readable code in between. The prompts are the code.
So now we're looking at a good several decades of just getting our human-interfacing systems to adapt themselves to AI, all while they still require the complexity they already have. The end result is more complexity, not less.
Backend is significantly murkier, there are many tasks it seems unlikely an AI will accomplish any time soon (my toy example so far is inventing and finalizing the next video compression standard). But a lot of the complexity in backend derives from supporting human teams with human styles of work, and only exists due to the steady cashflow generated by organizations extracting tremendous premiums to solve problems in their particular style. I have no good way to explain this - what value is a $500 accounting system backend if models get good enough at reliably spitting out bespoke $15 systems with infinite customizations in a few seconds for a non-developer user, and what of all the technologies whose maintenance was supported by the cashflows generated by that $500 system?
Accounting software has to be validated, and part of the appeal is that it simplifies and consolidates workflows across huge bureaucracies. I don't see how on earth you can just spit one out from a prompt and expect that to replace anything.
I work on a compression algorithm myself, and I've found AI of limited utility. It does help me translate things for interfacing between languages and it can sometimes help me try out ideas, but I have to write almost everything myself.
EDIT: It is true that lower-skilled jobs are going to change or shrink in number in the short term. To a certain degree there might be a Jevons paradox in the quantity of code that needs managing.
Imagine companies churning out tons and tons of code that no one understands and that behaves bizarrely. Maybe it will become a boutique thing for companies to have code that works properly, and people will just accept broken user interfaces or whatever so long as there are workarounds.
When I see boring, repetitive code that I don't want to look at my instinct isn't to ignore it and keep adding more boring, repetitive code. It's like seeing that the dog left a mess on your carpet and pretending you didn't see it. It's easier than training the dog and someone else will clean it... right?
My instinct is to fix the problem causing there to be boring, repetitive code. Too much of that stuff and you end up with a great surface area for security errors, performance problems, etc. And the fewer programmers who read that code and try to understand it, the more likely it becomes that nobody will understand it or why it's there.
The idea that we should just generate more code on top of the code until the problem goes away is alien to me.
Although it makes a lot more sense when I probe into why developers feel like they need to adopt AI -- they're afraid they won't be competitive in the job market in X years.
So really, is AI a tool to make us more productive or a tool to remove our bargaining power?
Don't you notice how it makes you more productive, that you can solve problems faster? It would be really odd if not.
And regarding the bargaining power: that's not the other side of the scale; it's a different problem. If your code monkey now gets as good as your average developer, the average developer will have lost some relative value, unless he also ups his game by using AI.
If everyone gets better, why would you see this as something bad, which makes us lose "bargaining power"? Because you no longer can put the least effort which your employer expects from you? Even then: it's not like AI makes things harder, it makes them better. At least for me software development has become more enjoyable.
While 5 years ago I was asking myself if I really want to do this for the rest of my career, I now know that I want to do this, with this added help, which takes away much of the tedious stuff like looking up solution-snippets on Stack Overflow. Plus, I know that I will have to deal less and less with writing code, and more and more with managing code solutions offered to me.
Amazing. You think that the only reason people are using AI is because it's being forced on them?
I honestly feel kinda bad for some people in this thread who don't see the freight train coming.
There are a few loud people who think AI programming is the best thing since sliced bread.
What’s the freight train?
It doesn't matter that today's tools fail to live up to their promises; by 2027 we're going to have AI just as smart as humans! They'll totally be able to make a moderately complex change to an existing code base correctly enough that you DON'T need to spend just as long cleaning up after it as it would've taken you to code it yourself.
Source: trust me bro (also buy my AI product)
I think we're in for a world of trouble if that's the case. /s
I think AI can be yet another tool that takes some repetitive tasks off my hands. I still obviously check all the code it generated.
The problem, so far, is that they're still... quite unreliable, to say the least. Sometimes I can feed the model files, and it will read and parse the data 100 out of 100 times. Other times, the model seems clueless about what to do, and just spits out code on how to do it manually, with some vague "sorry, I can't seem to read the file", multiple times, only to start working again.
And then you have the cases where the models seem to dig themselves into some sort of terminal state, or oscillate between 2-3 states that they can't get out of - until you fire up a new model and transfer the code to it.
Overall they do save me a ton of time, especially with boilerplate stuff, but very routinely even the most SOTA models will have their stupid moments, or keep trying to do the same thing.
It’s insane how similar non-deterministic software systems already are to biological ones. Maybe I’ve been wrong and consciousness is a computation.
Either Meta has tools an order of magnitude more powerful than everyone else's, or he's drinking his own Kool-Aid.
This probably increased the overall demand for professional website makers and messed-up-WordPress-fixers.
Now the argument goes that the average business will roll out their own apps using ChatGPT (amusing / scary), or that big software co's will replace engineers with LLMs.
For this last point, I just don't see how any of the current or near-future models could possibly load enough context to do actual engineering as opposed to generating code.
It has also saved time producing well-defined functions, for very specific tasks. But you have to know how to work with it, going through several increasingly complex iterations, until you get what you want.
Producing full applications still seems a pipedream at this stage.
Do you mean like: "write me an app that does XYZ?"
Well, it's a pipedream because you probably couldn't even get a room of developers to agree on how to do it. There are a million ways.
But this isn't really how programmers are expecting to use AI, are they?
You'll probably get a few responses from folks that happily tab complete their software and don't sweat the details. Some get away with that, I'm generally not in a position where it's OK to not fully understand the system I'm building. There's a lot of stuff that's better to find out during development than in a late night production system debugging session.
LLMs are useful tools for programming, as a kind of search engine and squeaking rubber duck. AI as a programmer is worse than a junior, it's the junior that won't actively learn and improve. I think current AI architecture limits it from being much more than that.
What I'm curious about is, can it find innovative ways to solve problems? Like the infamous Quake 3 inverse-sqrt hack? Can it silently convert (read: optimize) a std::string to a raw char* pointer if it doesn't have any harmful side effects? (I don't mean "can you ask it to do that for you?" , I mean can it think to do that on its own?) Can it come up with trippy shit we've never even seen before to solve existing problems? That would truly impress me.
Take a bloated electron app, analyze the UI, and output the exact same thing but in C++ or Rust. Work with LLVM and find optimizations a human could never see. I remember seeing a similar concept applied to physical structures (like a small plane fuselage or a car) where the AI "learns" to make a lighter stronger design and it comes out looking so bizarre, no right angles, lots of strange rounded connections that almost like a growth of mold. Why can't AI "learn" to improve the state of the art in CS?
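For reference, the Quake III trick mentioned above is small enough to sketch. Here it is transliterated into Go, roughly following the well-known published version (magic constant plus one Newton-Raphson step); this is the human-invented original, offered only to show the kind of leap being asked about.

```go
package main

import (
	"fmt"
	"math"
)

// fastInvSqrt approximates 1/sqrt(x) using the bit-level trick made famous
// by Quake III Arena: reinterpret the float's bits, apply a magic constant,
// then refine with a single Newton-Raphson step.
func fastInvSqrt(x float32) float32 {
	i := math.Float32bits(x)
	i = 0x5f3759df - (i >> 1) // the famous magic-constant bit hack
	y := math.Float32frombits(i)
	return y * (1.5 - 0.5*x*y*y) // one Newton-Raphson refinement
}

func main() {
	x := float32(2.0)
	fmt.Printf("approx: %f  exact: %f\n", fastInvSqrt(x), 1/float32(math.Sqrt(float64(x))))
}
```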
So such things already exist, and for me, the most frustrating thing about LLMs is that they just suck the oxygen out of the room for talking about anything AI-ish that's not an LLM.
The term for what you're looking for is "superoptimization," which tries to adapt the principles of mathematical nonconvex optimization that AI pioneered to the problem of finding optimal code sequences. And superoptimization isn't new--it's at least 30 years old at this point. At this point, it's mature enough that if I were building a new compiler framework from scratch, I'd design at least the peephole optimizer based around superoptimization and formal verification.
(I kind of am putting my money where my mouth is there--I'm working on an emulator right now, and rather than typing in the semantics of every instruction, I'm generating them using related program synthesis techniques based on the observable effects on actual hardware.)
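For a flavor of what superoptimization means in practice, here is a deliberately tiny sketch (not related to the emulator project above): brute-force search over short sequences of toy operations, accepting the first, shortest sequence that matches a reference function on a handful of test inputs. Real superoptimizers search real instruction sets and use SMT solvers or formal verification instead of example-based testing, but the shape of the search is the same.

```go
package main

import "fmt"

// A toy "instruction": each op transforms an integer accumulator.
type op struct {
	name string
	fn   func(int) int
}

var ops = []op{
	{"inc", func(x int) int { return x + 1 }},
	{"dbl", func(x int) int { return x * 2 }},
	{"shl2", func(x int) int { return x << 2 }},
}

// matches reports whether running seq on each test input reproduces target.
func matches(seq []op, target func(int) int, tests []int) bool {
	for _, t := range tests {
		got := t
		for _, o := range seq {
			got = o.fn(got)
		}
		if got != target(t) {
			return false
		}
	}
	return true
}

func main() {
	target := func(x int) int { return 4*x + 4 } // reference semantics to match
	tests := []int{0, 1, 2, 7, 100}

	// Search length-1, then length-2 sequences; report the first (shortest) hit.
	for _, a := range ops {
		if matches([]op{a}, target, tests) {
			fmt.Println("found:", a.name)
			return
		}
	}
	for _, a := range ops {
		for _, b := range ops {
			if matches([]op{a, b}, target, tests) {
				fmt.Println("found:", a.name, b.name)
				return
			}
		}
	}
	fmt.Println("no sequence of length <= 2 matches")
}
```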
> Still, nearly two-thirds of software developers are already using A.I. coding tools, according to a survey by Evans Data, a research firm.
> So far, the A.I. agents appear to improve the daily productivity of developers in actual business settings between 10 percent and 30 percent, according to studies. At KPMG, an accounting and consulting firm, developers using GitHub Copilot are saving 4.5 hours a week on average and report that the quality of their code has improved, based on a survey by the firm.
We're in for a really dire future where the worst engineers you can imagine are not only shoveling out more garbage code, but assessing that code for problems is also much more difficult.
It will probably still be more productive. IDEs, Stack Exchange...each of these prompted the same fears and realised some of them. But the benefits of having more code quicker and cheaper, even if more flawed, outweighed those of quality. The same way the benefits of having more clothes and kitchenware and even medicine quicker and cheaper outweighed the high-quality bespoke wares that preceded them. (Where it doesn't, and where someone can pay, we have artisans.)
In the mean time, there should be an obsolescence premium [1] that materialises for coders who can clean up the gloop. (Provided, of course, that young and cheap coders of the DOGE variety stop being produced.)
[1] https://www.sciencedirect.com/science/article/abs/pii/S01651...
Sure. Then we learned how to do it and in which contexts. Since WFH trained executives state-side how to communicate asynchronously and over Zoom, I've seen engineering offshoring done quite productively, possibly for the first time in my career.
If you offshore your coding, how do you know they're not using AI as well? How do you verify code quality? And what do you do when you're left cleaning up a mess? This isn't really a problem of the quality of engineers, but the inherent relationship between a company and contracted firms.
All of these apply to remote-only teams. We know they work. (Note: I'm commenting more on offshoring than outsourcing. Outsourcing is fraught with issues. Offshoring once was, but is increasingly becoming viable, particularly with WFH and AI.)
There was never a strong incentive for your "partner" to actually finish the job, they got more money from overruns, fixing defects, sunk costs, requirements churn, overstaffing, giving you juniors and billing for seniors, etc etc etc.
There were also issues with cultural differences, communication and lack of understanding of the "customer" - but they are minor and resolvable compared to the core problem.
I don't have any specific experience with KPMG, but considering the other "big name" firms' work I've encountered, there's, uh, lots of room for improvement.
It was lucrative cleaning up shit code from Romania and India.
I'm hoping enough people churn out enough hot garbage that needs fixing now that I can jack up my day rate.
I remember when the West would have no coders because Indian coders are cheaper.
I remember when nocode solutions would replace programmers.
I remember.
> Now, let’s talk about the real winners in all this: the programmers who saw the chaos coming and refused to play along. The ones who didn’t take FAANG jobs but instead went deep into systems programming, AI interpretability, or high-performance computing. These are the people who actually understand technology at a level no AI can replicate.
> And guess what? They’re about to become very expensive. Companies will soon realize that AI can’t replace experienced engineers. But by then, there will be fewer of them. Many will have started their own businesses, some will be deeply entrenched in niche fields, and others will simply be too busy (or too rich) to care about your failing software department.
> Want to hire them back? Hope you have deep pockets and a good amount of luck. The few serious programmers left will charge rates that make executives cry. And even if you do manage to hire them, they won’t stick around to play corporate politics or deal with useless middle managers. They’ll fix your broken systems, invoice you an eye-watering amount, and walk away.
> "Imagine a company that fires its software engineers, replaces them with AI-generated code, and then sits back"
It should go without saying this is not even possible at the moment. Will it be possible one day? Yes, probably. And when that day comes, the fantasies this author has dreamed up will be irrelevant.
I've said it before and I'll say it again: it shocks me that a forum filled with tech professionals is so blindly biased against AI that they refuse to acknowledge what changes are coming.
All of these conversations boil down to: "The LLMs of today couldn't replace me." That's probably true for most folks.
What's also true is that ChatGPT was released less than 3 years ago. And we've seen it go from a novelty with no real use, to something that can write actually decent research papers and gets better at coding by the month.
"B-b-but there's no guarantee it will continue to improve!" is one of the silliest trains of thought a computer scientist could hold.
I have the exact same reaction reading this stuff on HN. It's hilarious, scary and sad, all at the same time.
The speed at which these tools have improved has completely convinced me that these things are going to take over. But I don't fear it, I'm excited about it. But I don't write code for code's sake; I'm using code to solve problems.
1. The "LLMs are still in their infancy" argument is frequently trotted out, but let's be clear - GPTs were introduced back in 2018, so SEVEN years ago.
2. "It shocks me that a forum filled with tech professionals is so blindly biased against AI that they refuse to acknowledge what changes are coming." This feels like a corollary to the Baader-Meinhof phenomenon. I don't think you can extrapolate that a few dozen loudly dissenting voices are necessarily representative of majority opinion.
3. I would like to see a citation of ChatGPT releasing actual "decent research papers".
4. If AIs get to the point of actually acting in a completely autonomous fashion and replace software engineers - then there's no reason to believe that they won't also obliterate 90% of other white-collar jobs (including other STEM) so at that point we're looking at needing to completely re-evaluate our economic system possibly with UBI, etc.
OpenAI just released their "SWE-Lancer" benchmark, which basically shows their intent: capture the economic value of software development, from coding up through engineering-manager tasks. Pretty much nothing is safe. I don't recommend people enter the industry anymore; it's just anxiety you don't need (i.e. companies collectively worth trillions are trying to destroy your economic value).
It's not just that it will take away "good, high economic mobility jobs"; my disappointment is more that this effort would be better spent for society in many other domains (medicine, building, robotics), which would at least see some benefit from the disruption. But no - it's all about SWEs. Must be what their VCs want from them, and/or what keeps the fear/hype train going most effectively.
And if it's real AGI, not "airplanes that fly themselves but still need a pilot" stuff, then we will probably have to rethink employment anyway.
There might be a period of time where product management and qa are still done by people, but I think that period will be transitory and short.
I think software engineers in general are grossly underestimating the probability they will be replaced. On a 20-30 year timeline, that probability might be close to 100%. Probably, it will also be gradual, and those who are displaced (starting with least experienced, to most), will not be able to find similar employment.
We are all more or less opting into this without a fight.
I don’t fear my dev job becoming less valuable as large chunks become automated. Like I said if it happens I will figure out a way to be useful to society in other ways, even if it’s inventing things for AI to do or controlling AI to do the job of a bunch of devs. I will adapt like everyone has in history and make myself valuable in new ways.
“Nothing is static. Everything is evolving. Everything is falling apart.”
Doesn't / Shouldn't that competitive advantage inherently exist already? But don't we still see a small group of big players that put out broken / mediocre / harmful tech dominating the market?
The incentives are to make the line keep going up – nothing else. Which is how we get search engines that don't find things, social media that's anti-social, users that are products, etc.
I'm not at all hopeful that an already entrenched company that lays off 50% of its workers and replaces them with an AI-slop-o-matic will lose to a smaller company putting out well-made principled tech. At least not without leveraging other differentiating factors.
(I say all of this as someone that is excited about the possibilities AI can now afford us. It's just that the possibilities I'm excited about are more about augmentation, accessibility, and simplicity rather than replacement or obsolescence.)
However, it has not alleviated any of my responsibility to be a good coder, because I have to question literally every little dang thing it suggests. If I am learning a new API, it can write code that works, but I need to go read the reference documentation to make sure that it is using the API with current best practices, for example. For a lot of the code, I have to flat-out ask it why it did things a certain way, because they look buggy or inefficient, and half the time it apologizes and fixes the code.
So, I use the code in my (personal) projects copiously, but I don't use a single line of code that it generates that I don't understand; otherwise, it always leads to problems because it did something completely wrong.
Note that, at work, for good reasons, we don't use AI generated code in our products, but I don't write production code in my day job anyway.
What did programmers do to you to trigger such deep-felt insecurities thusly?