A.I. is prompting an evolution, not extinction, for coders
73 points | 23 hours ago | 18 comments | nytimes.com
toprerules
22 hours ago
[-]
People are absolutely insane with their takes on AI replacement theory. The complexity of our stacks has grown exponentially since the 70s. Very few people actually comprehend how many layers of indirection, performance, caching, etc. are between their CRUD web app and bare metal these days.

AI is going to increase the rate of complexity 10 fold by spitting out enormous amounts of code. This is where the job market is for developers. Unless you 100% solve the problem of feeding it every single third-party monitoring tool, every log, every compiler output, and system stats down to the temperature of the RAM, and then make it actually understand how to fix said enormous system (it can't do this even if you did give it the context, by the way), AI will only increase the number of engineers you need.

reply
hn_throwaway_99
22 hours ago
[-]
> AI is going to increase the rate of complexity 10 fold by spitting out enormous amounts of code.

This is true, and I am (sadly, I'd say) guilty of it. In the past, for example, I'd be much more wary about having too much duplication. I was working on a Go project where I needed multiple levels of object mapping (e.g. entity objects to DTOs, etc.), and the LLM just spat out the answer in seconds (correctly, I'd add), even though it was lots and lots of code, where in the past I would have written a more generic solution to avoid writing so much boilerplate.
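
(For a concrete sense of what I mean, a minimal sketch of that kind of mapping layer; all type and field names here are invented for illustration:)

    package mapping

    import "time"

    // Hand-rolled entity-to-DTO mapping, repeated near-identically per
    // type. This is exactly the boilerplate an LLM will happily generate.
    type UserEntity struct {
        ID        int64
        Email     string
        CreatedAt time.Time
    }

    type UserDTO struct {
        ID    int64  `json:"id"`
        Email string `json:"email"`
    }

    func toUserDTO(e UserEntity) UserDTO {
        return UserDTO{ID: e.ID, Email: e.Email}
    }

    // ...and again, near-verbatim, for Order, Invoice, Account, etc.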

I see where the evolution of coding is going, and as a late-middle-aged developer it has made me look for the exits. I don't disagree with the business rationale of the direction, and I certainly have found a ton of value in AI (e.g. I think it makes learning a new language a lot easier). But it makes programming so much less enjoyable for me personally. It feels like the job has been transformed from "author" to "editor", and for me, the nitty-gritty details of programming were the fun part.

Note I'm not making any broad statement about the profession generally, I'm just stating with some sadness that I don't enjoy where the day-to-day of programming is heading, and I just feel lucky that I've saved up enough in the earlier part of my career to get out now.

reply
aerhardt
20 hours ago
[-]
I don't only do programming in the small, and I still feel that AIs leave plenty of room for architecture, design, and refactoring. For me it's been an absolute boon; I'm enjoying building more than ever. At any rate it's undeniably transformative, and I can see many people not enjoying the end state.
reply
outside1234
22 hours ago
[-]
Really? I sort of feel the opposite. I am mid-career as well, and HIGHLY TIRED of writing yet another set of boilerplate to do a thing, or chasing down some syntax error in the code, and the fact that AI will now do this for me has given me a lot more energy to focus on the higher-level thinking about how it all fits together.
reply
codr7
21 hours ago
[-]
So instead of being creative and finding ways to avoid duplication, you look for a way to make copies faster.

That's one way to solve the problem.

Not the way I'm looking for when hiring.

reply
JohnBooty
18 hours ago
[-]

    So instead of being creative and finding ways to avoid 
    duplication, you look for a way to make copies faster.
That's not at all how I read the parent post. It feels more like you're replying to a hybrid of the grandparent post (the person who churned out a lot of duplicated code with AI) and the parent post (the person who likes being "editor" and finds AI helpful).
reply
AnimalMuppet
20 hours ago
[-]
This has happened before.

When we went from assembler to compilers, this logic would have said, "So instead of being creative and finding ways to avoid hand-coding loops, you look for a way to spit out copies of loops faster." And the answer is, "Yeah, I do! I want the compiler to write the loop for me and get all the jumps right so I don't have to worry about it! Getting the jumps for the loop right is incidental to what I'm actually trying to write; I've got better things to spend my time on."

Note well: I am not arguing that AI will produce good code rather than multiple layers of garbage. I am merely saying that this particular argument is weak.

reply
player1234
6 hours ago
[-]
Comparing current AI to a compiler is a dogwhistle for white supremacy.
reply
xigoi
10 hours ago
[-]
The difference is that a high-level programming language abstracts away the duplication, whereas an LLM does not.
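
To illustrate (a toy Go sketch; the names are made up): the language can remove the duplication once, whereas an LLM just regenerates it per type.

    package main

    import "fmt"

    // An LLM tends to emit one hand-written loop per element type.
    // A generic helper abstracts the pattern away exactly once:
    func mapSlice[T, U any](xs []T, f func(T) U) []U {
        out := make([]U, 0, len(xs))
        for _, x := range xs {
            out = append(out, f(x))
        }
        return out
    }

    func main() {
        // Prints [1 4 9]; the same helper works for any T -> U mapping.
        fmt.Println(mapSlice([]int{1, 2, 3}, func(n int) int { return n * n }))
    }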
reply
codr7
20 hours ago
[-]
You're comparing an LLM to a compiler, which doesn't make much sense to me.

If my compiler occasionally output the recipe for vegan pancakes instead of working code I would definitely think differently of it.

reply
AnimalMuppet
19 hours ago
[-]
I'm comparing an LLM to a compiler to the degree that they automate much of the writing of the tedious parts of code, rather than finding ways to reduce the amount of such code written (and therefore to the degree warranted by the argument in your previous post).

I will admit that compilers don't hallucinate much.

reply
hn_throwaway_99
19 hours ago
[-]
Right now you're getting downvoted, but I don't disagree with you. It's not hard for me to see why lots of people like how AI helps them code (again, I find it helpful in tons of areas), so I think it's more of a personal-preference kind of thing. It's probably also relevant that I'm older (nearing 50); there is a lot of good research suggesting that a fundamental shift happens in most people's brains in their 40s that makes it more difficult to adopt major new ways of doing things (and I've found that in myself).

I think the only thing I don't totally agree with is the idea that AI just lets you focus on a higher level of thinking while it "takes care of the details". AI is still the leakiest of abstractions, and while coding LLMs have gotten much better over the past 2 years, I can't just trust one, so I still have to review every line that goes to prod. I just find that task ("editing") much less enjoyable than being the author of the code. And heck, I'm someone who really enjoys doing code reviews. With code reviews my mindset is that I'm helping to mentor another human, and I love that aspect of it. I'm not so energetic about helping to train our robot overlords.

reply
voidhorse
22 hours ago
[-]
I do not look forward to the amount of incompetence and noise that increasing adoption of these tools will usher in. I've already had to deal with a codebase in which it was clear that the author fundamentally misunderstood what a trie data structure was. I was also having a difficult time trying to talk to them about the implementation and their misconceptions. Lo and behold, I eventually found out that they chose this data structure because they asked ChatGPT what to do; they never actually understood, conceptually, what they were doing or using. This made the whole engagement with the code and the process of fixing things way harder. Not only did I now have to fix the bunk code, I also had to spend significant time disabusing the author of their own misunderstandings...
reply
itsoktocry
5 hours ago
[-]
>Not only did I now have to fix the bunk code, I also had to spend significant time disabusing the author of their own misunderstandings...

People with your attitude will be the first to be replaced.

Not because your code isn't as good as an AI's; maybe it's even better. But because your personality makes you a bad teammate.

reply
CGamesPlay
21 hours ago
[-]
So, AI created a job opportunity for you?
reply
voidhorse
21 hours ago
[-]
I suppose that's one way to look at it. But it's a sort of "bs" unproductive job, fixing up poor outcomes, and overall a less efficient scenario than experts doing it right in the first place. Worse, there was already a readily available implementation that could have been used here, rather than a hand-rolled, half-baked AI output. In that respect, the code itself was pure noise, and the whole activity was predominantly a waste of my time.
reply
gregw2
21 hours ago
[-]
Sounds like outsourcing/offshoring!
reply
marcosdumay
21 hours ago
[-]
> But it's a sort of "bs" unproductive job, fixing up poor outcomes, and overall a less efficient scenario than experts doing it right in the first place.

I expect this theme to repeat all the time from now on.

And I also expect it to crimp the growth of several people, because the AI solves the simplest problems for them, and then they face an insurmountable wall: having to learn every concept at the same time, when what they needed was a small increment in code realism.

Software development will probably become extremely profitable, for the people who can do it properly, over the next couple of decades.

reply
zamalek
21 hours ago
[-]
The problem is the asinine interviews we are going to have to tolerate in order to screen against AI-kiddies. You think HackerRank is bad? Just you wait...
reply
askonomm
21 hours ago
[-]
Here's me hoping that hiring managers / HR will finally start actually calling references and/or checking open source contributions. I have plenty of both, and in my 14-year career only 2 companies have called my references and only 1 has checked my open source work. Instead they all just give me some bs test job to prove I can do basic CRUD programming, despite being a senior engineer, over and over and over again. All of which could be avoided with just a phone call and a basic conversation about tech with their technical team... but they are either lazy, incompetent, or a combination of both, so I get a bs test job, sometimes even before I manage to actually get a first interview. I decline all of these, of course, but there are so many of them that it has made job searching quite difficult.
reply
codr7
21 hours ago
[-]
You had someone check your work?

That's awesome!

Never happened so far in my 26-year career. It's almost as if they're hiring for something other than solving problems and writing code. Following orders, most likely.

reply
askonomm
20 hours ago
[-]
To be fair I figure it was only because I applied for a Clojure job, which is a pretty niche thing, and so perhaps that attracts people who are actually interested in you as a human being and not just a number in a spreadsheet. Since then I've gone back to mainstream and it has not happened again.
reply
saaaaaam
21 hours ago
[-]
That’s called consultancy and you can bill chunky rates by the hour. You should be rubbing your hands with glee!

And then work out how to do code review and fixing using AI, lightly supervised by you so that you can do it all whilst walking the dog or playing croquet or something.

reply
perrygeo
19 hours ago
[-]
I've yet to see an LLM response or an LLM generated diff that suggests removing or refactoring code. Every AI solution is additive; new functions, new abstractions added in every step. Increased complexity is all but baked into the system.

Software engineering jobs involve working in a much wider solution space - writing new code is but one intervention among many. I hope the people blindly following LLM advice realize their lack of attention to detail and "throw new code at it" attitude comes across as ignorant and foolish, not hyper-productive.

reply
4b11b4
10 hours ago
[-]
Ask for a refactor...

Ask for multiple refactors and their trade-offs

reply
plagiarist
22 hours ago
[-]
I agree they cannot handle a complex codebase at all at this moment in time.

But I think I would rather just end my career instead of transitioning into fixing enormous codebases written by LLMs.

reply
JTyQZSnP3cQGa8B
21 hours ago
[-]
The complexity has grown but not the quality. We went from writing Ada code with contracts and all sorts of protections, with well-thought-out architectures, to random crap written in ReactJS on websites that now weigh more than a full install of Windows 95.

I’m really ashamed of what SWE has become, and AI will increase that tenfold, as you say. We shouldn’t cheer that on, especially since I will have to debug all that crap.

And if it does increase the number of engineers, they won’t be good ones, due to a lack of education (I already experience this at work). But anyway, I don’t believe it; managers will not waste more money on us, as that would go against modern capitalism.

reply
toprerules
21 hours ago
[-]
Oh yes, I'm with you. I didn't say I liked it. I am a low-level munger and I like it that way; the lowest, oldest layers of the stack tend to be the pieces that are well written and stand the test of time. Where I see AI hitting is the upper, devil-may-care layers of the application stack, which will be an absolute hellscape to deal with as a competent engineer.
reply
bigbones
22 hours ago
[-]
I expect pretty much the opposite to happen: it makes sense for languages, stacks and interfaces to become more amenable to interfacing with AI. If a machine can act more reliably when its inputs are simplified, at a fraction of the cost of the equivalent human labour, the system has always adjusted to accommodate the machine.

The most obvious example of this already happening is in how function calling interfaces are defined for existing models. It's not hard to imagine that principle applied more generally, until human intervention to get a desired result is the exception rather than the rule as it is today.

I spent most of the past 2 years in "AI cope" mode and wouldn't consider myself a maximalist, but it's impossible not to see already from the nascent tooling we have that workflow automation is going to improve at a rapid and steady rate for the foreseeable future.

reply
JumpCrisscross
22 hours ago
[-]
> it makes sense for languages, stacks and interfaces to become more amenable to interfacing with AI

The theoretical advance we're waiting for in LLMs is auditable determinism. Basically, the ability to take a set of prompts and have a model recreate what it did before.

At that point, the utility of human-readable computer languages sort of goes out the door. The AI prompts become the human-readable code, the model becomes the interpreter and it eventually, ideally, speaks directly to the CPUs' control units.

This is still years--possibly decades--away. But I agree that we'll see computer languages evolving towards auditability by non-programmers and reliability in parsing by AI.

reply
SkiFire13
21 hours ago
[-]
> The theoretical advance we're waiting for in LLMs is auditable determinism.

Non-determinism in LLMs is currently a feature and introduced consciously. Even if it wasn't, you would have to lock yourself on a specific model, since any future update would necessarily be a possibly breaking change.

> At that point, the utility of human-readable computer languages sort of goes out the door.

Its utility is having a non-ambiguous language to describe your solution in and that you can audit for correctness. You'll never get this with an LLM because its very premise is using natural language, which is ambiguous.

reply
JumpCrisscross
21 hours ago
[-]
> Non-determinism in LLMs is currently a feature and introduced consciously. Even if it wasn't, you would have to lock yourself on a specific model, since any future update would necessarily be a possibly breaking change

What I'm suggesting is a way to lock the model and then be able to have it revert to that state to re-interpret a set of prompts deterministically. When exploring, it can still branch non-deterministically. But once you've found a solution that works, you want the degrees of freedom to be limited.

> You'll never get this with an LLM because its very premise is using natural language, which is ambiguous

That's the point of locking the model. You need the prompts and the interpreter.

reply
SkiFire13
11 hours ago
[-]
> That's the point of locking the model. You need the prompts and the interpreter.

This still doesn't seem to work for me:

- even after locking the LLM state you still need to understand how it processes your input, which is a task nobody has been able to do yet. Even worse, this can only happen after locking it, so it needs to be done for every project.

- the prompt is still ambiguous, so either you need to refine it to the point it becomes more similar to a programming language or you need an unlimited set of rules for how it should be disambiguated, which an auditor needs to learn. This makes the job of the auditor much harder and error prone.

reply
bigbones
22 hours ago
[-]
> The theoretical advance we're waiting for in LLMs is auditable determinism

I think this is a manifestation of machine thinking - the majority of buyers and users of software rarely ask for or need this level of perfection. Noise is everywhere in the natural environment, and I expect it to be everywhere in the future of computing too.

reply
JumpCrisscross
21 hours ago
[-]
> the majority of buyers and users of software rarely ask for or need this level of perfection

You're right. Maybe just reliable replicability, then.

The core point is, the next step is the LLM talking directly to the control unit. No human-readable code in between. The prompts are the code.

reply
toprerules
22 hours ago
[-]
You're missing the point: there are specific reasons why these stacks have grown in complexity. Even if you introduce "API for AI interface" as a requirement, you still have to balance that against performance, reliability, interfacing with other systems, and providing all of the information necessary to debug when the AI gets it wrong. All of the same things that humans need apply to AI; the claim for AI isn't that it deterministically solves every problem it can comprehend.

So now we're looking at a good several decades just getting our human-interfacing systems to amend themselves to AI, while they still require all the complexity they already have. The end result is more complexity, not less.

reply
bigbones
22 hours ago
[-]
Based on what I've seen so far, I'm thinking a timeline more like 5-10 years in which anything involving at least frontend has all but evaporated. What value is there in having a giant app team grind for 2 years on the perfect Android app when a user can simply ask for the display they want, and 5 variants of it until they are happy, all in a couple of seconds while sitting in the back of a car? What happens to all the hundreds of UI frameworks when a system as widespread as Android adopts a technology approach like this?

Backend is significantly murkier; there are many tasks it seems unlikely an AI will accomplish any time soon (my toy example so far is inventing and finalizing the next video compression standard). But a lot of the complexity in backend derives from supporting human teams with human styles of work, and only exists due to the steady cashflow generated by organizations extracting tremendous premiums to solve problems in their particular style. I have no good way to explain this: what value is a $500 accounting system backend if models get good enough at reliably spitting out bespoke $15 systems with infinite customizations in a few seconds for a non-developer user? And what of all the technologies whose maintenance was supported by the cashflows generated by that $500 system?

reply
tehjoker
21 hours ago
[-]
These don't sound like the kinds of problems programmers solve. Users don't want to customize their UI (well, some do); they want a UI that is adapted to their needs. They only want to customize it when it doesn't meet their needs (for example, if a corporation uses addictive features or hides things users need in order to increase engagement).

Accounting software has to be validated, and part of the appeal is that it simplified and consolidates workflows across huge bureaucracies. I don't see how on earth you can just spit one out from a prompt and expect that to replace anything.

I work on a compression algorithm myself, and I've found AI of limited utility. It does help me translate things for interfacing between languages and it can sometimes help me try out ideas, but I have to write almost everything myself.

EDIT: It is true that lower-skilled jobs are going to change or shrink in quantity in the short term. To a certain degree there might be a Jevons paradox in the quantity of code that needs management.

Imagine companies churning out tons and tons of code that no one understands and that behaves bizarrely. Maybe it will become a boutique thing for companies to have code that works properly, and people will just accept broken user interfaces or whatever, so long as there are workarounds.

reply
dmix
22 hours ago
[-]
I wonder if AI is going to reduce the amount of JS UIs. AI bots can navigate simple HTML forms much more easily than crazy React code with 10 layers of divs for a single input. It's either that, or people will create APIs for everything and document how they relate and interact.
reply
baq
21 hours ago
[-]
Claude is so good at react the amount of UIs will increase.
reply
theGnuMe
22 hours ago
[-]
I wonder if anyone is applying AI to COBOL…
reply
Blackthorn
22 hours ago
[-]
There are no readily available Stack Overflow answers for COBOL, so it'll do about as well there as it does with digital signal processing.
reply
stray
21 hours ago
[-]
I think IBM is using LLMs to rewrite COBOL code into Java.
reply
agentultra
22 hours ago
[-]
I'd really like to know what the parameters are. I hear claims like "it saves me an hour a day" or "I'm 30% more productive with AI." What do these figures mean? They seem like proxies for fuzzy feelings.

When I see boring, repetitive code that I don't want to look at, my instinct isn't to ignore it and keep adding more boring, repetitive code. That's like seeing that the dog left a mess on your carpet and pretending you didn't see it. It's easier than training the dog, and someone else will clean it... right?

My instinct is to fix whatever is causing there to be boring, repetitive code. Too much of that stuff and you end up with a large surface area for security errors, performance problems, etc. And the fewer programmers who read that code and try to understand it, the more likely it becomes that nobody will understand it or why it's there.

The idea that we should just generate more code on top of the code until the problem goes away is alien to me.

Although it makes a lot more sense when I probe into why developers feel like they need to adopt AI -- they're afraid they won't be competitive in the job market in X years.

So really, is AI a tool to make us more productive or a tool to remove our bargaining power?

reply
qwertox
10 hours ago
[-]
> So really, is AI a tool to make us more productive or a tool to remove our bargaining power?

Don't you notice how it makes you more productive, that you can solve problems faster? It would be really odd if not.

And regarding the bargaining power: that's not the other side of the scale; it's a different problem. If your code monkey now gets as good as your average developer, the average developer will have lost some relative value, unless he has also upped his game by using AI.

If everyone gets better, why would you see this as something bad that makes us lose "bargaining power"? Because you can no longer put in the least effort your employer expects from you? Even then, it's not like AI makes things harder; it makes them better. At least for me, software development has become more enjoyable.

While 5 years ago I was asking myself if I really wanted to do this for the rest of my career, I now know that I want to, with this added help, which takes away much of the tedious stuff like looking up solution snippets on Stack Overflow. Plus, I know that I will have to deal less and less with writing code, and more and more with managing code solutions offered to me.

reply
itsoktocry
5 hours ago
[-]
>it makes a lot more sense when I probe into why developers feel like they need to adopt AI -- they're afraid they won't be competitive in the job market in X years.

Amazing. You think that the only reason people are using AI is because it's being forced on them?

I honestly feel kinda bad for some people in this thread who don't see the freight train coming.

reply
agentultra
4 hours ago
[-]
That’s the majority of answers I get when I ask folks. It’s not a huge sample size.

There are a few loud people who think AI programming is the best thing since sliced bread.

What’s the freight train?

reply
johnecheck
3 hours ago
[-]
General artificial intelligence, duh!

It doesn't matter that today's tools fail to live up to their promises; by 2027 we're going to have AI just as smart as humans! They'll totally be able to make a moderately complex change to an existing code base correctly enough that you DON'T need to spend just as long cleaning up after them as it would've taken you to code it yourself.

Source: trust me bro (also buy my AI product)

reply
agentultra
3 hours ago
[-]
As smart as existing humans, you say?

I think we're in for a world of trouble if that's the case. /s

reply
mschild
22 hours ago
[-]
Well, to give a concrete example: I use it to write test cases for the CRUD applications that I sometimes have to work on. Some test cases already exist, and I feed the existing tests and the actual code, along with additional instructions, into a model and get relatively decent output. We also use a code review bot that we feed repository-relevant instructions to, and we get decent basic PR comments. It even caught an edge case that 3 other developers didn't consider.
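
(The output is roughly the standard Go table-driven pattern; CreateUser and its request type here are invented stand-ins for our actual handlers:)

    import "testing"

    func TestCreateUser(t *testing.T) {
        cases := []struct {
            name    string
            req     CreateUserRequest
            wantErr bool
        }{
            {"valid request", CreateUserRequest{Email: "a@example.com"}, false},
            {"missing email", CreateUserRequest{}, true},
        }
        for _, tc := range cases {
            t.Run(tc.name, func(t *testing.T) {
                _, err := CreateUser(tc.req)
                if (err != nil) != tc.wantErr {
                    t.Fatalf("CreateUser() error = %v, wantErr %v", err, tc.wantErr)
                }
            })
        }
    }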

I think AI can be yet another tool that takes some repetitive tasks off my hands. I still obviously check all the code it generated.

reply
TrackerFF
22 hours ago
[-]
I'm a big user of LLM tools.

The problem, so far, is that they're still... quite unreliable, to say the least. Sometimes I can feed the model files and it will read and parse the data 100 out of 100 times. Other times, the model seems clueless about what to do and just spits out code for doing it manually, with some vague "sorry, I can't seem to read the file", multiple times, only to start working again.

And then you have the cases where the models dig themselves into some sort of terminal state, or oscillate between 2-3 states they can't get out of, until you fire up a new model and transfer the code to it.

Overall they do save me a ton of time, especially with boilerplate stuff, but very routinely even the most SOTA models will have their stupid moments, or keep trying to do the same thing.

reply
codr7
21 hours ago
[-]
Are you including the time you spend fighting the model?
reply
crmd
21 hours ago
[-]
You could be describing the performance of me and most of my friends and colleagues over the past five years.

It’s insane how similar non-deterministic software systems already are to biological ones. Maybe I’ve been wrong and consciousness is a computation.

reply
jdashg
22 hours ago
[-]
I always thought hacking scenes in sci-fi were unrealistic, but if you're cooking up AI-fortified code lasagna at your endpoints, there is going to be a mishmash of vulnerabilities: expert, robust thought will be spread very thin by the velocity that systemic forces push developers toward.
reply
bigtimesink
20 hours ago
[-]
> Mark Zuckerberg, Meta’s chief executive, stirred alarm among developers last month when he predicted that A.I. technology sometime this year would effectively match the performance of a midlevel software engineer

Either Meta has tools an order of magnitude more powerful than everyone else's, or he's drinking his own koolaid.

reply
FredPret
20 hours ago
[-]
At some point in the past, tools like Wordpress et al made it easy for the average person to roll out their own website.

This probably increased the overall demand for professional website makers and messed-up-Wordpress-fixers.

Now the argument goes that the average business will roll out their own apps using ChatGPT (amusing / scary), or that big software co's will replace engineers with LLMs.

For this last point, I just don't see how any of the current or near-future models could possibly load enough context to do actual engineering as opposed to generating code.

reply
atlantic
22 hours ago
[-]
I've found that AI has saved me time consulting Stack Overflow. It combines thorough knowledge of the documentation with a lot of practical experience gleaned from online forums.

It has also saved time producing well-defined functions, for very specific tasks. But you have to know how to work with it, going through several increasingly complex iterations, until you get what you want.

Producing full applications still seems a pipedream at this stage.

reply
itsoktocry
5 hours ago
[-]
>Producing full applications still seems a pipedream at this stage.

Do you mean like: "write me an app that does XYZ?"

Well, it's a pipedream because you probably couldn't even get a room of developers to agree on how to do it. There are a million ways.

But this isn't really how programmers are expecting to use AI, is it?

reply
fhd2
22 hours ago
[-]
That's kinda still how I get the most out of it - search, more or less. Claude gives me great starting points from which I can do some refining/confirming searches and documentation lookups. _Starting_ with search feels like a drag now. But the information and code I get is unreliable at least 20% of the time (just a guess, frankly; I did no statistics), so I treat the output as things to try or investigate, rather than things to ship.

You'll probably get a few responses from folks who happily tab-complete their software and don't sweat the details. Some get away with that; I'm generally not in a position where it's OK to not fully understand the system I'm building. There's a lot of stuff that's better to find out during development than in a late-night production debugging session.

reply
monicaliu
21 hours ago
[-]
New to AI-assisted coding, but I'm finding myself spending a lot of time debugging its convincingly wrong output.
reply
notnullorvoid
20 hours ago
[-]
I've been choosing not to use most of the AI code-assistant stuff for a while, though I try it every now and then. Each time it's the same outcome: it actively reduces my productivity by a fair amount. I suspect this is due to a mix of the majority of my programming being non-trivial (library building, complex-ish algos), and the fact that I'm a bit of a perfectionist coder who enjoys programming.

LLMs are useful tools for programming, as a kind of search engine and squeaking rubber duck. AI as a programmer is worse than a junior; it's the junior that won't actively learn and improve. I think current AI architecture limits it from being much more than that.

reply
jenkstom
21 hours ago
[-]
It seems like AI will generate opportunities for fixing code, both in reducing internal technical debt ("code maintenance", which is already a specialized skill) and external technical debt (architecture, which is now also being built by AI). Eventually AI will be good enough at both of these things too, and then we may just become the priests of the Temples of Syrinx.
reply
polishdude20
17 hours ago
[-]
Our great computers fill our hollow halls.
reply
m2spring
21 hours ago
[-]
Business wants short-term solutions. It doesn't care about the long-term effects, even if they clearly bite it in the ass.
reply
gosub100
22 hours ago
[-]
Sort of off-topic, but is there any truly generative AI for code? From my limited understanding, the models are trained on human-written code, and they adapt it to whatever most closely matches.

What I'm curious about is, can it find innovative ways to solve problems? Like the infamous Quake 3 inverse-sqrt hack? Can it silently convert (read: optimize) a std::string to a raw char* pointer if it doesn't have any harmful side effects? (I don't mean "can you ask it to do that for you?" , I mean can it think to do that on its own?) Can it come up with trippy shit we've never even seen before to solve existing problems? That would truly impress me.

Take a bloated Electron app, analyze the UI, and output the exact same thing but in C++ or Rust. Work with LLVM and find optimizations a human could never see. I remember seeing a similar concept applied to physical structures (like a small plane fuselage or a car) where the AI "learns" to make a lighter, stronger design, and it comes out looking so bizarre: no right angles, lots of strange rounded connections that look almost like a growth of mold. Why can't AI "learn" to improve the state of the art in CS?
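
(For reference, the Quake III trick I mean, transcribed to Go; it's exactly the kind of bit-level leap I'd want an AI to invent on its own:)

    import "math"

    // Fast inverse square root: reinterpret the float's bits, apply a
    // magic constant for the initial guess, then refine with one
    // Newton-Raphson step.
    func fastInvSqrt(x float32) float32 {
        i := math.Float32bits(x)
        i = 0x5f3759df - (i >> 1) // initial guess via bit manipulation
        y := math.Float32frombits(i)
        return y * (1.5 - 0.5*x*y*y) // one refinement iteration
    }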

reply
jcranmer
21 hours ago
[-]
> Take a bloated Electron app, analyze the UI, and output the exact same thing but in C++ or Rust. Work with LLVM and find optimizations a human could never see. I remember seeing a similar concept applied to physical structures (like a small plane fuselage or a car) where the AI "learns" to make a lighter, stronger design, and it comes out looking so bizarre: no right angles, lots of strange rounded connections that look almost like a growth of mold. Why can't AI "learn" to improve the state of the art in CS?

So such things already exist, and for me, the most frustrating thing about LLMs is that they just suck the oxygen out of the room for talking about anything AI-ish that's not an LLM.

The term for what you're looking for is "superoptimization," which tries to adapt the principles of mathematical nonconvex optimization that AI pioneered to the problem of finding optimal code sequences. And superoptimization isn't new--it's at least 30 years old at this point. At this point, it's mature enough that if I were building a new compiler framework from scratch, I'd design at least the peephole optimizer based around superoptimization and formal verification.
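
(To make that concrete, a deliberately tiny sketch of the search idea behind superoptimization, in Go; real systems verify candidates with an SMT solver rather than sampled inputs:)

    package main

    import "fmt"

    type op struct {
        name string
        fn   func(acc, x int64) int64
    }

    var ops = []op{
        {"add x", func(acc, x int64) int64 { return acc + x }},
        {"sub x", func(acc, x int64) int64 { return acc - x }},
        {"shl 1", func(acc, _ int64) int64 { return acc << 1 }},
        {"xor x", func(acc, x int64) int64 { return acc ^ x }},
    }

    // target is the computation we want a shorter sequence for: 4*x.
    func target(x int64) int64 { return x + x + x + x }

    func main() {
        tests := []int64{0, 1, 2, -1, 7, 1 << 30}
        // Exhaustively enumerate all two-op programs over an accumulator
        // that starts as x; report the first that matches on every test.
        for _, o1 := range ops {
            for _, o2 := range ops {
                ok := true
                for _, x := range tests {
                    if o2.fn(o1.fn(x, x), x) != target(x) {
                        ok = false
                        break
                    }
                }
                if ok {
                    fmt.Printf("found: %s ; %s\n", o1.name, o2.name)
                    return
                }
            }
        }
        fmt.Println("no two-op equivalent found")
    }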

(I kind of am putting my money where my mouth is there--I'm working on an emulator right now, and rather than typing in the semantics of every instruction, I'm generating them using related program synthesis techniques based on the observable effects on actual hardware.)

reply
fzeroracer
22 hours ago
[-]
> To do so, Mr. Giorgi has his own timesaving helper: an A.I. coding assistant. He taps a few keys and the software tool suggests the rest of the line of code. It can also recommend changes, fetch data, identify bugs and run basic tests. Even though the A.I. makes some mistakes, it saves him up to an hour many days.

> Still, nearly two-thirds of software developers are already using A.I. coding tools, according to a survey by Evans Data, a research firm.

> So far, the A.I. agents appear to improve the daily productivity of developers in actual business settings between 10 percent and 30 percent, according to studies. At KPMG, an accounting and consulting firm, developers using GitHub Copilot are saving 4.5 hours a week on average and report that the quality of their code has improved, based on a survey by the firm.

We're in for a really dire future where the worst engineers you can imagine are not only shoveling out more garbage code, but where assessing it for problems is much more difficult.

reply
JumpCrisscross
22 hours ago
[-]
> We're in for a really dire future where the worst engineers you can imagine are not only shoveling out more garbage code but the ability to assess it for problems or issues is much more difficult

It will probably still be more productive. IDEs, Stack Exchange...each of these prompted the same fears and realised some of them. But the benefits of having more code quicker and cheaper, even if more flawed, outweighed those of quality. The same way the benefits of having more clothes and kitchenware and even medicine quicker and cheaper outweighed the high-quality bespoke wares that preceded them. (Where it doesn't, and where someone can pay, we have artisans.)

In the mean time, there should be an obsolescence premium [1] that materialises for coders who can clean up the gloop. (Provided, of course, that young and cheap coders of the DOGE variety stop being produced.)

[1] https://www.sciencedirect.com/science/article/abs/pii/S01651...

reply
xigoi
10 hours ago
[-]
You say “more code” as if it was a good thing. On the contrary, I want my codebases to have as little code as possible to achieve the given task.
reply
fzeroracer
22 hours ago
[-]
The problem with 'more code, quicker and cheaper' is that when you fall below a baseline of quality, it actually ends up costing you and your business, significantly. Companies learned this the hard way during the outsourcing booms, and the usage of AI amplifies this problem 10 fold, much like it's doing with spam.
reply
JumpCrisscross
22 hours ago
[-]
> Companies learned this the hard way during the outsourcing booms

Sure. Then we learned how to do it and in which contexts. Since WFH trained executives state-side how to communicate asynchronously and over Zoom, I've seen engineering offshoring done quite productively, possibly for the first time in my career.

reply
fzeroracer
22 hours ago
[-]
I've seen attempts at engineering offshoring again, and it causes problems, because the core part of software engineering isn't the actual coding; it's communication, design and understanding.

If you offshore your coding, how do you know they're not using AI as well? How do you verify code quality? And what do you do when you're left cleaning up a mess? This isn't really a problem of the quality of the engineers, but of the inherent relationship between a company and contracted firms.

reply
JumpCrisscross
22 hours ago
[-]
> If you offshore your coding, how do you know they're not using AI as well? How do you verify code quality?

All of these apply to remote-only teams. We know they work. (Note: I'm commenting more on offshoring than outsourcing. Outsourcing is fraught with issues. Offshoring once was, but is increasingly becoming viable, particularly with WFH and AI.)

reply
xyzzy123
17 hours ago
[-]
IMHO the primary problem with offshoring vs using employees was always the principal/agent problem and incentives.

There was never a strong incentive for your "partner" to actually finish the job; they got more money from overruns, fixing defects, sunk costs, requirements churn, overstaffing, giving you juniors and billing for seniors, etc etc etc.

There were also issues with cultural differences, communication and lack of understanding of the "customer" - but they are minor and resolvable compared to the core problem.

reply
slothtrop
22 hours ago
[-]
Growing pains. Reviewing carefully is still less work.
reply
nyarlathotep_
19 hours ago
[-]
> At KPMG, an accounting and consulting firm, developers using GitHub Copilot are saving 4.5 hours a week on average and report that the quality of their code has improved, based on a survey by the firm.

I don't have any specific experience with KPMG, but considering the other "big name" firms' work I've encountered, there's, uh, lots of room for improvement.

reply
sirsinsalot
22 hours ago
[-]
I made good money cleaning up after the 2000s outsourcing boom.

It was lucrative cleaning up shit code from Romania and India.

I'm hoping enough people churn out enough hot garbage that needs fixing now that I can jack up my day rate.

I remember when the West would have no coders because Indian coders are cheaper.

I remember when nocode solutions would replace programmers.

I remember.

reply
cootsnuck
22 hours ago
[-]
What did/do you call the services you offer? I sincerely love debugging (fixing tech of any kind, digital or analog). Never thought I could offer services just fixing things instead of building from scratch...
reply
ethagnawl
22 hours ago
[-]
If you missed this recent post, I think you'll appreciate it: https://defragzone.substack.com/p/techs-dumbest-mistake-why-...

> Now, let’s talk about the real winners in all this: the programmers who saw the chaos coming and refused to play along. The ones who didn’t take FAANG jobs but instead went deep into systems programming, AI interpretability, or high-performance computing. These are the people who actually understand technology at a level no AI can replicate.

> And guess what? They’re about to become very expensive. Companies will soon realize that AI can’t replace experienced engineers. But by then, there will be fewer of them. Many will have started their own businesses, some will be deeply entrenched in niche fields, and others will simply be too busy (or too rich) to care about your failing software department.

> Want to hire them back? Hope you have deep pockets and a good amount of luck. The few serious programmers left will charge rates that make executives cry. And even if you do manage to hire them, they won’t stick around to play corporate politics or deal with useless middle managers. They’ll fix your broken systems, invoice you an eye-watering amount, and walk away.

reply
rybosworld
22 hours ago
[-]
This entire article reads like hopium. And it seems predicated on the false belief that companies are going to try to replace their entire workforce with AI overnight:

> "Imagine a company that fires its software engineers, replaces them with AI-generated code, and then sits back"

It should go without saying this is not even possible at the moment. Will it be possible one day? Yes, probably. And when that day comes, the fantasies this author has dreamed up will be irrelevant.

I've said it before and I'll say it again: it shocks me that a forum filled with tech professionals is so blindly biased against AI that they refuse to acknowledge what changes are coming.

All of these conversations boil down to: "The LLMs of today couldn't replace me." That's probably true for most folks.

What's also true is that ChatGPT was released less than 3 years ago. And we've seen it go from a novelty with no real use, to something that can write actually decent research papers and gets better at coding by the month.

"B-b-but there's no guarantee it will continue to improve!" is one of the silliest trains of thought a computer scientist could hold.

reply
itsoktocry
5 hours ago
[-]
>I've said it before and I'll say it again: It shocks me that a forum filled with tech professionals, is so blindly biased against AI that they refuse to acknowledge what changes are coming.

I have the exact same reaction reading this stuff on HN. It's hilarious, scary and sad, all at the same time.

The speed at which these tools have improved has completely convinced me that these things are going to take over. But I don't fear it, I'm excited about it. But I don't write code for code's sake; I'm using code to solve problems.

reply
vunderba
20 hours ago
[-]
Couple things:

1. The "LLMs are still in their infancy" argument is frequently trotted out but let's be clear - GPTs were introduced back in 2018 - so SEVEN years ago.

2. "It shocks me that a forum filled with tech professionals is so blindly biased against AI that they refuse to acknowledge what changes are coming." This feels like a corollary to the Baader-Meinhof phenomenon. I don't think you can extrapolate that a few dozen loudly dissenting voices are necessarily representative of majority opinion.

3. I would like to see a citation of ChatGPT releasing actual "decent research papers".

4. If AIs get to the point of actually acting in a completely autonomous fashion and replace software engineers - then there's no reason to believe that they won't also obliterate 90% of other white-collar jobs (including other STEM) so at that point we're looking at needing to completely re-evaluate our economic system possibly with UBI, etc.

reply
throw234234234
17 hours ago
[-]
I actually think, sadly, that it can replace "just software engineers", at least in the short and medium term. Not because it can't do other careers (if employed effectively), but because that's what the labs are actively targeting, they have the domain knowledge for it, and it's a public, open profession amenable to RL. There are millions of pieces of code online, lots of public job briefs with success criteria defined, etc. They will throw every research and ML trick at it just to displace SWEs, because that's what they really, really want to do. IMO this is particularly true for OpenAI. Other jobs, once they see the bargaining power of SWEs fall and be destroyed, will resist integration of AI from the big corps and see a much slower disruption; this is the most rational thing to do to preserve your enterprise, especially given that most intellectual economic jobs are at best oligopolies at the large end.

OpenAI just released their "SWE-Lancer" benchmark, which basically shows their intent: replace the economic value of software development, from coding up to engineering-manager tasks. Nothing is safe, pretty much. I don't recommend people enter the industry anymore; it's just anxiety you don't need (i.e. companies collectively worth trillions are trying to destroy your economic value).

It's not just that it will take "good high economic mobility jobs"; my disappointment is more that this effort would be better spent, for society, in many other domains (medicine, building, robotics) which would at least see some benefit from the disruption. But no, it's all about SWEs. Must be what their VCs want from them, and/or what keeps the fear/hype train going most effectively.

reply
dmix
22 hours ago
[-]
Even if we don't write code, software engineers (or technically minded people) will be able to coordinate hundreds of AI bots better than the average person, and manage the systems. If there's a day when programming is not as valuable, I'm pretty confident I can find some way to be useful in the future economy.

And if it's real AGI, not airplanes flying themselves with a pilot stuff, then we probably will have to re-think employment anyway

reply
rybosworld
21 hours ago
[-]
> Even if we don't write code software engineers (or technically minded people) will be able to coordinate hundreds of AI bots better than the average person and manage the systems.

There might be a period of time where product management and QA are still done by people, but I think that period will be transitory and short.

I think software engineers in general are grossly underestimating the probability that they will be replaced. On a 20-30 year timeline, that probability might be close to 100%. It will probably also be gradual, and those who are displaced (starting with the least experienced, moving to the most) will not be able to find similar employment.

We are all more or less opting into this without a fight.

reply
dmix
17 hours ago
[-]
> We are all more or less opting into this without a fight.

I don’t fear my dev job becoming less valuable as large chunks become automated. Like I said if it happens I will figure out a way to be useful to society in other ways, even if it’s inventing things for AI to do or controlling AI to do the job of a bunch of devs. I will adapt like everyone has in history and make myself valuable in new ways.

“Nothing is static. Everything is evolving. Everything is falling apart.”

reply
slothtrop
22 hours ago
[-]
The increase in productivity means you need fewer inexperienced and/or bad engineers on a project. On the other hand, they may be retained to go after bolder, more numerous targets.
reply
deadbabe
22 hours ago
[-]
I don’t think that future will happen. Eventually someone will realize there is a competitive advantage in building a truly good product with people who actually know what they're doing; when other companies catch on, they will start doing the same, and the bad prompt-kiddy engineers will be gone.
reply
cootsnuck
21 hours ago
[-]
> because eventually someone will realize there is a competitive advantage in building a truly good product with people who actually know what they’re doing

Doesn't / Shouldn't that competitive advantage inherently exist already? But don't we still see a small group of big players that put out broken / mediocre / harmful tech dominating the market?

The incentives are to make the line keep going up – nothing else. Which is how we get search engines that don't find things, social media that's anti-social, users that are products, etc.

I'm not at all hopeful that an already entrenched company that lays off 50% of its workers and replaces them with an AI-slop-o-matic will lose to a smaller company putting out well-made principled tech. At least not without leveraging other differentiating factors.

(I say all of this as someone that is excited about the possibilities AI can now afford us. It's just that the possibilities I'm excited about are more about augmentation, accessibility, and simplicity rather than replacement or obsolescence.)

reply
rickspencer3
21 hours ago
[-]
In my Python programming, I have found that ChatGPT makes me something like 10x more productive: from learning to use a new API, to tracking down bugs, and especially finding errors in my code from stack traces. Getting results goes SO MUCH FASTER.

However, it has not alleviated any responsibility from me to be a good coder, because I have to question literally every little dang thing it suggests. If I am learning a new API, it can write code that works, but I need to go read the reference documentation to make sure that it is using the API with current best practices, for example. A lot of the code I have to flat out ask it why it did things a certain way, because they look buggy or inefficient, and half the time it apologizes and fixes the code.

So, I use the code in my (personal) projects copiously, but I don't use a single line it generates that I don't understand; that always leads to problems, because it did something completely wrong.

Note that, at work, for good reasons, we don't use AI generated code in our products, but I don't write production code in my day job anyway.

reply
kittikitti
20 hours ago
[-]
Microsoft uses the promise of replacing software engineers to sell its own AI.
reply
arisAlexis
22 hours ago
[-]
Nice cope from programmers. But reality hits hard.
reply
aerhardt
20 hours ago
[-]
There's always one in the comments. Even way before AI, with any discussion on low code, tech hiring trends, etc.

What did programmers do to you to trigger such deep-felt insecurities?

reply
arisAlexis
7 hours ago
[-]
I have been a programmer for 20 years, and I'm just not blinded by bias, my friend. It's just the way it is. Some see it, some are late.
reply
reverendsteveii
21 hours ago
[-]
People got wildly out of hand thinking that AI would do what we currently do, without us. The real truth is that AI is gonna do 10x what we currently do, with us.
reply