Things I learned from burning myself out with AI coding agents
22 points | 5 hours ago | 4 comments | arstechnica.com
iainctduncan
2 hours ago
[-]
From the article: "And yet these tools have opened a world of creative potential in software that was previously closed to me, and they feel personally empowering."

I keep seeing things like this, about "democratization" etc. But coding has been essentially free to learn for about 25 years now, with all the online resources and open source tools anyone can use.

Is this just a huge boon to those too lazy to learn? And what does that mean for later security and tech debt issues?

reply
kyancey
43 minutes ago
[-]
> Is this just a huge boon to those too lazy to learn? And what does that mean for later security and tech debt issues?

In the same way that GPS is a boon to people too lazy to learn celestial navigation or read a paper map.

In the same way that word processors are a boon to people too lazy to use a typewriter and white-out.

In the same way that supermarkets are a boon to people too lazy to hunt and gather their own food.

In the same way that synthesizers are a boon to people too lazy to hire a 40-piece orchestra.

In the same way that power drills are a boon to people too lazy to use a hand crank.

reply
wjfuu32984
3 minutes ago
[-]
Those are all false equivalences. The GP speaks of the "democratization of learning", which had already happened. It's more as if I said "now people can finally vote" when remote voting expanded to civilians. It's not that people couldn't have voted before, and in fact it had only a modest impact on turnout.

Then people would ask "is this just a huge boon to those too lazy to vote?", and the answer would be "no actually, voting is still a thing where one must do their own thinking."

If anything, it's a boon to people too lazy to drive, similar to LLMs being a boon for those too lazy to type.

reply
borroka
1 hour ago
[-]
Think about having to assemble a car yourself (you can find specs and tutorials online, say) and then drive it, versus asking engineers and mechanics to assemble it, and then using the car assembled by others to go for a drive.
reply
brazukadev
5 hours ago
[-]
> 8. People may become busier than ever

This is so true, and the opposite of what was expected.

reply
dapperdrake
2 hours ago
[-]
LLMs provide the benefit of lossy compression of all the text on the Internet.

It’s a crappier reddit in your pocket.

Use it well.

reply
funkyfiddler69
4 hours ago
[-]
Nice write-up of things that are only obvious if you spend time with AI. Pretty much everything applies to non-agentic AI work as well, code or not — if you are aiming beyond average quality and conventional design, that is. People who give up somewhat early won't give up much later just because they use AI or teach an AI agent.

But the article is also mostly what people outside the field, or only tangentially related to it, expect: it's here, but that big thing isn't.

I could say I dabbled with woodworking but I really just used a chainsaw to cut down some trees, make slabs and then used drill and screws to construct the cheapest, fastest MVP of a piece of furniture that I used until the shed burned down. But that's not woodworking, not really.

"AI coding agents" is just an autoiterating chat of/with a large coding model, that you still have to iterate over, which is as obvious as an apprentice in a woodworking shop doing a lot--if not all of the work--alone until the meister points out all the mistakes and lets him do it all over again.

> I was soon spending eight hours a day during my winter vacation shepherding about 15 Claude Code projects at once

If you are a "computer person", spending 8h a day on multiple projects is normal, although 15 is, IMO, way too freaking much. But I'm ADHD and not really a computer person: while I run dozens of narratives in parallel all the time, I only "shepherd" and iterate over a handful of them in 'flexible' time intervals.

The reason for the burnout might be, and I can relate due to my ADHD, the following:

> Due to what might poetically be called “preconceived notions” baked into a coding model’s neural network (more technically, statistical semantic associations), it can be difficult to get AI agents to create truly novel things, even if you carefully spell out what you want.

The expectation to create something "truly novel" based on ideas that aren't truly novel (yet, ...what?) is weird enough, but then expecting that an AI coding agent, an apprentice, will make it novel even though the entire thing basically already exists and the novelty makes no sense conceptually until the core elements are separated

> a UFO (instead of a circular checker piece) flying over a field of adjacent squares,

is quite analogous to semi-functional ADHD people who believe they will get at least some of their ideas out if they "work" or dream on all of them. It can work, but you have to separate concerns. In the case of ADHD people, that means becoming functional: don't consume stuff that impedes body and brain, do things to eliminate bio-physical distractions and keep hormonal and neural morale high at most times, and only then work. In the case of AI coding agents, it means separating concepts that are programmatically/mathematically/linguistically intertwined, and only then defining mechanics and features within or beyond the individual or combined constraints.

reply
karmakaze
1 hour ago
[-]
> The first 90 percent of an AI coding project comes in fast and amazes you. The last 10 percent involves tediously filling in the details through back-and-forth trial-and-error conversation with the agent. Tasks that require deeper insight or understanding than what the agent can provide still require humans to make the connections and guide it in the right direction.

So why not, at that point, switch to the human being the primary author and have the AI only do reviews and touch-ups? Or are we restricting ourselves to vibe coding only?

reply
jaggs
4 hours ago
[-]
This is an excellent article. I resonated with all ten of his points. This section at the end particularly made sense.

"Regardless of my own habits, the flow of new software will not slow down. There will soon be a seemingly endless supply of AI-augmented media (games, movies, images, books), and that’s a problem we’ll have to figure out how to deal with. These products won’t all be “AI slop,” either; some will be done very well, and the acceleration in production times due to these new power tools will balloon the quantity beyond anything we’ve seen."

reply
tonyedgecombe
3 hours ago
[-]
The problem is finding the pearls amongst the slop.
reply
voakbasda
1 hour ago
[-]
How is that any different than, say, all of human history?
reply
pdimitar
1 hour ago
[-]
It's not different per se, it's just being made much harder. If you once had to look for one pearl in a pile of 200 barnacles, now you have to scan through 3,000.
reply