Ask HN: Are developers sad about AI writing more of their code?
12 points
22 hours ago
| 10 comments
I’ve been chatting with a few dev friends and colleagues about Cursor, Copilot, and the like,

and what surprised me was that their biggest feeling about this was neither excitement nor concern, but sadness.

Sadness in the sense that they are afraid the “fun” parts of their job (thinking, building, solving) might slowly be taken away. That they’ll become bored reviewers.

It got me wondering: do other devs feel this way?

Are we really on a path where engineering turns into a supervisory job? Or is this just temporary until it shifts into something radically different?

Curious to hear from dev folks here!

silentpuck
20 hours ago
I think the real sadness is that many developers may stop learning the deeper fundamentals — the things that AI can't replace.

When people start relying on the "I just want it to work this way" mentality and let AI take over, they can lose track of how things actually work under the hood. And that opens the door to a lot of problems — not just bugs, but blind trust in things they don't understand.

For me, the joy is in understanding. Understanding what each part does, how it all fits together. If that disappears, I think we lose something important along the way.

vedmakk
8 hours ago
But in a way, a '90s assembly dev would argue that today's developers don't understand how things work "under the hood" at all. I guess with each generation we just abstract to higher layers and solve bigger problems while just "relying on things under the hood to work just fine".

silentpuck
7 hours ago
Yeah, that’s a fair point. Abstraction is part of progress — and we do rely more and more on things “just working.”

But that trust can be dangerous when we don’t even know what we’re trusting. And when something breaks, it can leave us completely blind.

I’m not saying everyone needs to go all the way down to the metal — but I do think it’s important to at least understand the boundaries. Know what’s underneath, even if you don’t touch it every day.

Otherwise, it’s not engineering anymore — it’s guessing.

And I’ve never been good at just “believing” things work. I need to poke around, break them, fix them. That’s how I learn. Maybe I’m just stubborn like that.

notorious_pgb
13 hours ago
This is definitely the core issue from my perspective.

iExploder
16 hours ago
If you think about it, the resource-wasteful approach pre-LLM didn't really make sense (thousands of people oftentimes re-implementing similar use cases). LLMs are like global open-source repositories of everything known to mankind, with search on steroids. We can never go back, if only for this one reason (imagine how many hours of people's lives were lost implementing the same function, or yet another CRUD app)... so if we can't go back, what's next?

The paradigm is shifting from us deciding how to do things to deciding what to do, maybe by writing requirements and constraints and letting AI figure out the details.

The skill will be in using specific language with AI to get the desired behavior. Old code as we know it is garbage; the new code is writing requirements and constraints.

So in a sense we will not be reviewers, nor architects or designers, but writers of requirements and use cases in a specific LLM language, which will have its own, different challenges too.

There might still be a place for cream-of-the-crop, mega-talented people to solve new puzzles (still with AI assistance) in order to generate new input knowledge to train LLMs.

physicsguy
22 hours ago
I find it a mix of really frustrating (mostly it not doing what I want it to do, making random changes alongside perfectly good changes of the type I want) and amazing when it works. That said, I think it works best in an unconstrained environment; once you start adding constraints (don't do it via this method, don't use this library or import any new dependencies), the results get much worse. This inevitably means it works better at the start of a project than when coming onto something that's been around for a while and has its own patterns.

JohnFen
21 hours ago
It looks like LLMs will automate a lot of what I enjoy doing myself and increase the amount of work of the sort I dislike (such as reviewing code I didn't recently write).

So my worst-case scenario is that LLMs will make my job hard to tolerate. If that actually happens, I'll leave the field entirely, as there would be no room for the likes of me in it anymore.

jjice
18 hours ago
I was scared of this initially, but I've found that I mostly just use it for tedious code that I would normally procrastinate on because of how mind-numbing it is (like adding another CRUD endpoint), or for making a sweeping change across code that wasn't as simple as a find and replace.

For things that keep me interested, I just won't use LLM features. Sometimes at the end I'll have it audit my code and sometimes it'll catch something that can be improved.

Also test cases. It's not perfect, but having a large chunk of that automated is very nice.

billylo
13 hours ago
I'm retired and write apps as community givebacks and as a creative outlet.

No, I am not sad because I am in control. If there is something I want to take on as an intellectual challenge, I do it.

If it's just mechanical tasks or UI layout tweaking, AI is perfect. I can become the user who keeps asking for fine-tuning of corner radius. :-)

notorious_pgb
22 hours ago
tl;dr: Yes; at least some of us are. I deeply am.

I wouldn't normally post a link to my blog in the comments of another thread -- I'm really not trying to shamelessly plug here -- it's just _incredibly_ relevant, and I've already poured my heart and soul into writing out exactly what I think here, so I think it's germane:

https://prettygoodblog.com/p/the-big-secret-that-big-ai-does...

> I cannot write the necessary philosophical screed this deserves, but: there are things which make us human and in our individual humanity unique; these things are art and skill of any kind; and it so greatly behooves one to strive to master art in any and all its forms.

uncircle
2 hours ago
The opening of your post made me think of a sci-fi writing prompt where, in a dystopian future, people pay mega tech corporations thousands of dollars per month to replace their meat brains with subpar artificial ones just to become more productive corporate ~~slaves~~workers.

amichail
22 hours ago
I think indie devs love it since they can focus on manually coding the fun parts and leave the AI to code the boring parts.

JohnFen
20 hours ago
I think the opinion amongst indie devs (at least the ones I know) is as varied as the opinion amongst other sorts of devs. Some love it, some hate it, some are neutral. It seems to depend on what it is about development that the particular dev enjoys and values.

JFerreol_J
22 hours ago
Agreed on this, but do you think this split that we humans are happy with can last forever?

tomsayervin
22 hours ago
Reading someone else's code is the worst thing, so I tend to agree.

That being said, it does get some uninteresting things done very fast, so I'm not entirely sad.

wryoak
22 hours ago
Recently I considered trying to find a part-time position for myself just reviewing code, because that's the part I find the most fun. I find writing (significant amounts of) code quite tedious.

Hearing people say that code review/reading is the boring part makes me think maybe I should actually pursue this

JFerreol_J
22 hours ago
Agreed, it's not black or white. Do you feel 100% more productive than before, though?

moomoo11
5 hours ago
I mean, if you're building React buttons and using Pydantic and think that makes you an engineer, then yeah, you're getting replaced.

I don't think the people who know how computers and networking work are going anywhere anytime soon. Or the people who actually made React or Pydantic.
