This kind of tooling and related work will still be needed, unless AI evolves to the point where it thinks of all this on its own, announces it to every other AI entity, and they all implement it properly.
You still need someone who understands the basics to get good results out of the tools, but they're not chiseling fine furniture by hand anymore; they're throwing heaps of wood through the table saw instead. More productive, but more likely to lose a finger if you're not careful.
People go to museums to admire old hand-carved furniture and travel to cities to admire the architecture of centuries past, made with hand-chiseled blocks. While power tools do let people make things of equal quality faster, they're instead generally used to make things of worse quality much, much faster, and the field has gone from being a craft to simply being an assembly-line job. As bad as software is today, we're likely to hit even deeper lows, and people will one day miss the era when Electron apps counted as the good ones.
There's already been one step in this direction with the Cambrian extinction of 90s/early-2000s software. People still talk about how soulful Winamp, the old Windows Media Player, ZSNES, etc. were.
It has never worked in the past, and I'm not entirely convinced it will work now.
It won't bother them at all what the code looks like under the hood. Not that the code will look worse than what an "average" developer produces; Claude and ChatGPT both write better code than most of the existing code I usually look at.
This is already true of most software these days (professional software like Photoshop and the like excepted), even without LLMs.
Moving on to a concrete software example: thanks to AI productivity, we replaced a lot of expensive, crappy subscription SaaS with our homegrown stuff. Our stuff is probably 100x simpler (everyone knows the pain of building boxed software for a diverse set of customer needs: everything has to be configurable, which leads to crazy convoluted code, and a lot of it). It's also much better and cheaper to run, to say nothing of the money we save by not paying the exorbitant subscription fees.
I suspect the biggest losers of the AI revolution will be the SaaS companies whose value proposition was: "Yes, you can use open source for this, but the extra cost of an engineer to maintain it is more than we charge."
As for bespoke software: 'slop' software using Electron, or Unity in video games, exists because people believe in the convenience of these huge, lumbering monoliths that come with a ton of baggage, having been taught the creed that coding close to the metal is too hard.
LLMs can help with that, and show people that they can do bespoke work from scratch (and, more importantly, teach them how). Claude/o3/whatever can probably help you build a game in WebGL that you thought you needed a game engine for.
We went through decades of absolutely hideous slop, and now people are yearning for the past and learning how to make things that are aesthetically appealing, like the things that used to be common.
I think we're looking at at least a decade of absolute garbage coming up, because it's cheap to make and people like things that are cheap in the short term. Then people will look back at when software was "good" and use the new tools to make things as good as they used to be.
And it's not limited to AI and power tools; it happened with art as well. Great art was made with oil paints, watercolors, and brushes. Then digital painting and Photoshop came around, and we had a long period of absolute messes on DeviantArt in which a lot of knowledge of good color usage and blending was basically lost, but art was produced much faster. Now digital artists are learning traditional methods and combining them with modern technology to make digital art that can be produced faster than traditional art, but with quality that's just as good.
2005 digital paintings have a distinct feel that, even in the hands of great artists, is sloppy and amateurish. Meanwhile, 2020s digital artists easily rival the greats of decades and centuries past.
If you want professional work done, you'll still hire someone, but that person will also use a lot of professional-grade computer tooling.
But there definitely won't be as many jobs as before - especially on the low-skill end.
God, will we never move this discussion past this worthless argument? What value would there be in any of these automation tools, be it in agriculture or AI, if they just made every single worker switch to being an [automation tool] operator?
There really isn't a diminishing return on executing great ideas; almost all software projects have an essentially endless backlog of items that could be done. So I think it will land between 2. and 3., with people who really understand how software is built being more in demand than ever, since they act as multipliers in making sure the increased output stays evolvable and maintainable.
Many already let Claude Code update its own CLAUDE.md, so I don't see any reason why you couldn't (with --dangerously-skip-permissions) let it edit its own hooks. And, as in Jurassic Park, the question of whether we should seems to be left by the wayside.
I already do lots of "coding" in SaaS products that has very little to do with what most HNers think of as proper coding.
I'm not convinced there is a limiter on software demand as it seems to grow alongside development speed. We all have huge backlogs in any non-trivial system.
Yes.
> then who will configure these hooks?
It will also create jobs.
> unless AI evolves to the point that it even thinks of this and announces this to all other AI entities and they also implement it properly
Also yes.
---
People think of technology as some sort of binary fewer-jobs/more-jobs thing.
Technology eliminates some jobs and creates others.
The #1 goal of every AI company is to create an AI that is capable of programming and improving itself to create the next, more powerful AI. Of course, these kinds of configuration jobs will be outsourced to AI as soon as possible, too.
"hooks": {
"PostToolUse": [
{
"matcher": "Edit|MultiEdit|Write",
"hooks": [
{
"type": "command",
"command": "jq -r '.tool_input.file_path' | xargs bin/save-hook.sh"
}
]
}
]
}
I jumped at this HN post because Claude Code Opus 4 has this stupid habit of never terminating files with a trailing newline. To test a new hook you have to restart Claude, so it's better to route the actual processing through a script you can keep editing within one session. This script runs formatters on C files and shell scripts, and just fixes missing trailing newlines on other files.
As usual, Claude and other AIs are poor at breaking problems into small steps and make up ways to do things. The above hook receives JSON on stdin; I first saved that to disk, then extracted the file path and saved that to disk, then instead called save-hook.sh on that path. Now we're home, after a couple of edits to save-hook.sh.
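For reference, here's a minimal sketch of what a save-hook.sh along those lines could look like (the formatter choices, clang-format and shfmt, are my assumptions; substitute whatever you use):

```sh
#!/bin/sh
# save-hook.sh -- called by the PostToolUse hook with the edited file's path.
# Formats C and shell files; for anything else, just fixes a missing
# trailing newline.
file="$1"
[ -f "$file" ] || exit 0

case "$file" in
  *.c|*.h) clang-format -i "$file" ;;  # format C sources in place
  *.sh)    shfmt -w "$file" ;;         # format shell scripts in place
  *)
    # tail -c 1 prints the file's last byte; command substitution strips a
    # trailing newline, so a non-empty result means the newline is missing.
    [ -n "$(tail -c 1 "$file")" ] && printf '\n' >>"$file"
    ;;
esac
exit 0
```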
All of this took ten minutes. I've wasted far more time letting it flail at bigger steps all at once.
Hooks will be important for "context engineering" and runtime verification of an agent's performance. This extends to things such as enterprise compliance and oversight of agentic behavior.
Nice of Anthropic to have supported the idea of this feature from a github issue submission: https://github.com/anthropics/claude-code/issues/712
This is a pretty killer feature that I would expect to find in all the coding agents soon.
> Exit Code 2 Behavior
> PreToolUse - Blocks the tool call, shows error to Claude
This is great: it means you can set up complex, concrete rules about which commands CC is allowed to run (and with what arguments), rather than trying to coax these via CLAUDE.md. E.g. you can allow
docker compose exec django python manage.py test
but prevent docker compose exec django python manage.py makemigrations
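A minimal sketch of such a guard as a PreToolUse hook script (check-bash.sh is a hypothetical name; the script reads the hook's JSON payload from stdin, and exit code 2 is what blocks the call and surfaces stderr to Claude):

```sh
#!/bin/sh
# check-bash.sh -- PreToolUse hook for the Bash tool.
# The hook payload arrives as JSON on stdin; pull out the command to be run.
cmd=$(jq -r '.tool_input.command // empty')

case "$cmd" in
  *makemigrations*)
    echo "makemigrations is blocked; run it manually instead." >&2
    exit 2  # exit code 2 blocks the tool call and shows the message to Claude
    ;;
esac
exit 0
```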
Wouldn't it be nice to have the agent autodiscover the hooks and abstract their implementation details away behind the MCP server, which other agents could then reuse?
Maybe this will enable a fix for:

> Reading CLAUDE.md (22 seconds, 2.6k tokens...)

> You're absolutely right!
I'm happy to see Claude Code reaching parity with Cursor for linting/type checking after edits.
The rest I can take or leave (there are plenty of as-good or better alternatives).
The time it saved me in the first few hours of use easily made the monthly fee worthwhile. I did hit a limit near the four-hour mark (it resets every five hours for us Pro subscribers), but I just went and reviewed the ~1700 lines it had added in that time and cleaned up the config files (updated todos, etc.).
I still feel like I can review diffs more efficiently in an IDE, but I'm pretty much just mosh-ing into my server with a few tmux windows going, and I feel I'm starting to get a bit more efficient.
Still considering the Claude Max 20x plan to just use Opus 100% of the time, though.
The $20 Cursor Pro plan and the $200 Claude Code Max plan really are a great pairing.
We've been using CLAUDE.md instructions to tell Claude to auto-format code with the Qlty CLI (https://github.com/qltysh/qlty), but Claude is a bit hit-and-miss at following them. The determinism here is a win.
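For instance, a PostToolUse hook command along these lines should do it (a sketch, assuming `qlty fmt` accepts a file path; adjust for your qlty setup):

```sh
# extract the edited file's path from the hook payload and format just it
jq -r '.tool_input.file_path' | xargs qlty fmt
```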
It looks like the events that can be hooked are somewhat limited to start with, and I wonder if they will make it easy to hook git commit and git push.
Claude loves Java.
I never have to reformat. It picks up my indentation preferences immediately and obeys my style guide flawlessly. When I ask it to perfect my JavaDoc it is awesome.
Must be a ton of fabulous enterprise Java in the training set.
(Also almost swallowed my tongue saying that out loud)
```lint-monorepo.sh
#!/usr/bin/env bash
# read the hook's JSON payload from stdin
json_input=$(cat)

# parse it with jq to get the edited file's path
file_path=$(jq -r '.tool_input.file_path' <<<"$json_input")

# run the right linter for the subproject the file lives in
# ($dir1 and lint_for_dir1 are the placeholders from the original sketch)
if [[ "$file_path" == "$dir1"* ]]; then
  lint_for_dir1 "$file_path"
fi
```
For that reason, I mainly use Aider and Cursor (the latter mostly in the "give me five lines" comment mode).
1) Assign coding task via prompt
2) Hook: write a test that proves the prompt's requirement
3) Write code
4) Hook: test the code (see the sketch below)
5) Code passes -> commit
6) Else go to 3.
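A sketch of the step-4 gate as a PostToolUse hook, assuming a Python project with pytest; redirecting the output to stderr and exiting 2 feeds the failures back to Claude, which loops it to step 3:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|MultiEdit|Write",
        "hooks": [
          { "type": "command", "command": "pytest -q 1>&2 || exit 2" }
        ]
      }
    ]
  }
}
```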
They are going to slowly add "features" that bring handcoding back until it's like 100% handcoding again.
Yes - it’s fine to think of it as handholding (or handcoding). These model providers cannot be responsible for ultimate alignment with their users. Today, they can at best enable integration so a user, or business, can express and ensure their own alignment at runtime.
The nature of these systems already requires human symbiosis; this is nothing more than a new integration point. It will empower agents beyond today's capabilities and increase adoption.
Add a PostToolUse [0] hook that automatically creates a git commit whenever changes are made. Then you can either git checkout the commit you want to roll back to... or you could assign those commits an index and make a little MCP server that lets you /rollback:goto 3 or /rollback:back 2, or whichever syntax you want to support.
In fact, if you put that paragraph into Claude, I wouldn't be surprised if it built it for you.
[0] https://docs.anthropic.com/en/docs/claude-code/hooks#posttoo...
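The auto-commit half is nearly a one-liner as the hook's command; a minimal sketch (the checkpoint message format is arbitrary):

```sh
# commit whatever the edit produced; `|| true` keeps the hook quiet
# when there is nothing to commit
git add -A && git commit -qm "claude checkpoint $(date +%s)" || true
```

The indexing and the /rollback MCP server are then just bookkeeping on top of `git log`.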
Hooks would probably help; I think you could add a hook to auto-reject the bot when it calls the wrong thing.
I was using the API and passed $50 easily, so I upgraded to the $100 a month plan and have already reached $100 in usage.
I've been working on a large project with 3 different repos (frontend, backend, legacy backend), and I now just have all 3 of them in one directory with Claude Code.
I wrote some quick instructions about how it's set up, and it's worked very well. If I'm feeling brave I can have multiple Claude Codes running in different terminals, each working on one piece, but Opus tends to do better working across all 3 repos with all of the required context.
Still have to audit every change and commit often, but it works great 90% of the time.
Opus 4 feels like what OAI was trying to hype up for the better part of 6 months before releasing 4.5.
https://www.pulsemcp.com/posts/how-to-use-claude-code-to-wie...
I have been using it for other stuff (real estate, grilling recipes, troubleshooting electrical issues with my truck), and it seems to have a very large knowledge base. At this point, my goal is to get good at asking the right kinds of questions to get the best/most accurate answers.
I've been trying to max out the $100 plan and I have a problem where I run out of stuff to ask it to do. Even when I try to have multiple side projects at once, Claude just gets stuff done, and then sits idle.
What would have taken me a week of getting up to speed with PowerShell and all the Azure PowerShell commands/syntax has now taken me about 4 hours.
> Hooks let you build workflows where multiple agents can hand off work safely: one agent writes code, another reviews it, another deploys it, each step gated by verification hooks.
I feel that currently improvement mostly comes from slapping what to me feels like workarounds on top of something that very well may be a local maximum.
Humans are fancy enough to be able to write code, yet at the same time they can't be trusted to do stuff that can be solved with a simple hook, like running a formatter or linter. That's why we still run those on CI. This is a meaningless statement.
1y ago - no provider was training LLMs in an environment modeled for agentic behavior, i.e. in conjunction with the software design of an integrated utility.
'Slapped-on workaround' is a very lazy way to describe this innovation.
Feels like ages
I think this insistence on near-autonomous agents is setting the bar too high, which wouldn't be an issue if these companies weren't then insisting that the bar is set just right.
These things understand language perfectly; they've solved NLP because that's what they model extremely well. But agentic behavior is modeled by reinforcement learning, and until that's in the foundation model itself (at the token-prediction level), these things have no real understanding of state spaces being a recursive function of action spaces and the like. And they can't autonomously code, or drive, or manage a fund until they do.
No machine learning work? That would compete.
No writing stuff I would train AI on. Except I own the stuff it writes, but I can’t use it.
Can we build websites with it? What websites don’t compete with Anthropic?
Terminal games? No: Claude Code is a terminal game, so if you make a terminal game, it competes with Claude?
Can their “trust and safety team” humans read everyone’s stuff just to check if we’re competing with LLMs (funny joke) and steal business ideas and use them at Anthropic?
Feels like the dirty secret of AI services is that every possible use case violates the terms, and we just have to accept that we're using something their legal team told us not to use. How is that logically consistent? Any safety concerns? This doesn't seem like a law Asimov would appreciate.
It would be cool if the set of allowed use cases weren't empty. That might make Anthropic seem more intelligent.
> You may not access or use, or help another person to access or use, our Services in the following ways:
> 2. To develop any products or services that compete with our Services, including to develop or train any artificial intelligence or machine learning algorithms or models or resell the Services.
Let's say you've used Anthropic's model to generate open source software and then some 3rd party trains their model on that. Have you now helped another person to use their service in a way that violates the terms?
I suppose that's pretty far-fetched, though, unless you have some interaction with the party doing the training. Sometimes you might, though: perhaps some company provides a service to train a model on exactly your code base, and then provides a similar service to what Anthropic does for your code base, thus being in direct competition with Anthropic as well.
Also, who the fuck wants a codebase they can't train models on?
I am tired of pretending that this can actually pull off any meaningful work besides being a debugging companion or a slightly enhanced Google/Stack Overflow.
Claude Code did it in 30 seconds and it works flawlessly.
I am so confused how people are not seeing this as a valuable tool. Like, are you asking it to solve P versus NP or something?
If you need to do something that's been done a million times, but you don't have experience with it, LLMs are an excellent way to get running quickly.
The utility I find is that it helps _me_ do the real engineering work, the planning and solution architecting, and then it can bang out code once it has rock-solid instructions (in natural language, but honestly one level above pseudocode), and then I have to review it with absolutely zero faith in its ability to do things. Then it can work well.
But it's not where these guys are claiming it is.
An engineer will truly 10x with these, maybe more. So will the unskilled, but they will see diminishing returns with continued usage.
* afraid that the demands on them in their job will increase
* actually like and are attached to the act of writing out code
Frankly, I can sometimes empathize with both, but their conclusions are still wrong.
I was skeptical about Claude Code, and then I spent a week really learning how to use it. Within the week I had built a backend with FastAPI that supported user creation, password reset, and email confirmation, plus a front end and OAuth into a few systems.
It definitely took me some time to learn how to make it work, but I’m astounded at how much work I got done and for so little typing.
If you do it right this actually forces good design.
But you do know that this is what LLMs ain't good at.
So your conclusion is somewhat off, because there is plenty of programming work consisting of things that have been done before and just require tweaking.
I mean, I am also not hooked yet and just occasionally use ChatGPT/Claude for concrete stuff, but I do find it useful, and I do see where it can get really useful for me (once it really knows my codebase and the libraries used, and stops jumping between incompatible API versions).
My request involved a local web application, written in Rust, that acted as a server for other clients on the same network.
I wanted to use WebSockets; it never worked, and I was never able to nudge it in any meaningful direction. It started making circular edits on the codebase, and it just never worked.
A better analogy is that LLMs are closer to the "universal translator", with an occasional interaction similar to [0]:
Black Knight: None shall pass.
King Arthur: What?
Black Knight: None shall pass!
King Arthur: I have no quarrel with you good Sir Knight, But I must cross this bridge.
Black Knight: Then you shall die.
King Arthur: I command you, as King of the Britons, to stand aside!
Black Knight: I move for no man.
King Arthur: So be it!
[they fight until Arthur cuts off the Black Knight's left arm]
King Arthur: Now, stand aside, worthy adversary.
Black Knight: 'Tis but a scratch.
King Arthur: A scratch? Your arm's off!
Black Knight: No, it isn't.
King Arthur: Well, what's that then?
Black Knight: I've had worse.
0 - https://en.wikiquote.org/wiki/Monty_Python_and_the_Holy_Grai...