Field Notes from Shipping Real Code with Claude
83 points | 8 hours ago | 8 comments | diwank.space | HN
diwank | 2 hours ago
Author here: To be honest, I know there are like a bajillion Claude Code posts out there these days.

But, there are a few nuggets we figured are worth sharing, like Anchor Comments [1], which have really made a difference:

——

  # CLAUDE.md

  ### Anchor comments

  Add specially formatted comments throughout the codebase, where appropriate, for yourself as inline knowledge that can be easily `grep`ped for.

  - Use `AIDEV-NOTE:`, `AIDEV-TODO:`, or `AIDEV-QUESTION:` as a prefix, as appropriate.

  - *Important:* Before scanning files, always first try to grep for existing `AIDEV-…` anchors.

  - Update relevant anchors after finishing any task.

  - Make sure to add relevant anchor comments whenever a file or piece of code is:

  * too complex, or
  * very important, or
  * potentially buggy
——

[1]: https://diwank.space/field-notes-from-shipping-real-code-wit...
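For concreteness, a tiny sketch of the workflow (the file path and anchor text below are made up for illustration):

```shell
# Create a throwaway file carrying an example anchor comment.
mkdir -p /tmp/aidev-demo
cat > /tmp/aidev-demo/auth.py <<'EOF'
# AIDEV-NOTE: token refresh is retried on 401s; keep this idempotent
def refresh_token():
    pass
EOF

# The grep to run before scanning files by hand (or before Claude does):
# prints file, line number, and anchor text for each match.
grep -rn "AIDEV-" /tmp/aidev-demo
```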

reply
peter422 | 40 minutes ago
Just to provide a contrast to some of the negative comments…

As a very experienced engineer who uses LLMs sporadically* and not in any systematic way, I really appreciated seeing how you use them in production in a real project. I don’t know why people are being negative; you just mentioned your project in detail where it was appropriate to talk about its structure. Doesn’t strike me as gratuitous self-promotion at all.

Your post is giving me motivation to empower the LLMs a little bit more in my workflows.

*: They absolutely don’t get the keys to my projects but I have had great success with having them complete specific tasks.

reply
diwank | 33 minutes ago
Really appreciate the kind words! I did not intend the post to be too much about our company, just that it is the codebase I mostly hack on. :)
reply
meeech | 1 hour ago
Q: How do you ensure tests are only written by humans? Basically just the honor system?
reply
diwank | 1 hour ago
You can:

1. Add instructions in CLAUDE.md to not touch tests.

2. Disallow the Edit tool for test directories in the project’s .claude/settings.json file.
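For (2), here is a minimal sketch of what that could look like. The `Edit(tests/**)` matcher syntax is an assumption on my part; check the permissions docs for your Claude Code version before relying on it:

```json
{
  "permissions": {
    "deny": [
      "Edit(tests/**)",
      "Write(tests/**)"
    ]
  }
}
```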

reply
meeech | 32 minutes ago
Disallowing edits in test dirs is a good tip, thanks.

I meant it in the wider context of the team, though: everyone uses it, but not everyone will work the same way or use the same underlying prompts as they work. So how do you ensure everyone keeps to that agreement?

reply
meeech | 2 hours ago
Honest question: approx what percent of the post was human vs machine written?
reply
diwank | 1 hour ago
I’d say around 40% me: the ideating, editing, citations, and images are all mine; the rest is Opus 4 :)

I typically try to also include a link to the original Claude chat in the post, but it seems Claude doesn’t allow sharing chats that used deep research.

Update: here’s an older chatgpt conversation while preparing this: https://chatgpt.com/share/6844eaae-07d0-8001-a7f7-e532d63bf8...

reply
meeech | 26 minutes ago
Thanks. To be clear, I'm not asking the question to be particularly negative about it. It's more just curiosity, mixed with a trade in effort: if you wrote it 100%, I'm more inclined to read the whole thing, vs. say now just feeding it back to the GPM to extract the condensed nuggets.
reply
GiorgioG | 1 hour ago
Again, this is not something to be proud of.
reply
diwank | 52 minutes ago
I don’t understand why not. I’m not a natural prose writer, but (I felt that) these ideas were worth putting out there.

I posted on HN largely to get feedback.

reply
kasey_junk | 4 hours ago
One of the exciting things to me about AI agents is how they push and allow you to build processes that we’ve always known were important but were frequently not prioritized in the face of shipping the system.

You can use how uncomfortable you are with the AI doing something as a signal that you need to invest in systematic verification of that something. For instance, in the linked post, the team could build a system for verifying and validating their data migrations. That would move a whole class of changes into the AI realm.

This is usually much easier to quantify and explain externally than nebulous talk about tech debt in that system.

reply
diwank | 37 minutes ago
For sure. Another interesting trick I found to be surprisingly effective is to ask Claude Code to “Look around the codebase, and if something is confusing, or weird/counterintuitive — drop an AIDEV-QUESTION: … comment so I can document that bit of code and/or improve it”. We found some really gnarly things that had been forgotten in the codebase.
reply
wonger_ | 57 minutes ago
Some thoughts:

- Is there a more elegant way to organize the prompts/specifications for LLMs in a codebase? I feel like CLAUDE.md, SPEC.mds, and AIDEV comments would get messy quickly.

- What is the definition of "vibe-coding" these days? I thought it refers to the original Karpathy quote, like cowboy mode, where you accept all diffs and hardly look at code. But now it seems that "vibe-coding" is catch-all clickbait for any LLM workflow. (Tbf, this title "shipping real code with Claude" is fine)

- Do you obfuscate any code before sending it to someone's LLM?

reply
diwank | 48 minutes ago
> - Is there a more elegant way to organize the prompts/specifications for LLMs in a codebase? I feel like CLAUDE.md, SPEC.mds, and AIDEV comments would get messy quickly.

Yeah, the comments do start to pile up. I’m working on a VS Code extension that automatically turns them into tiny visual indicators in the gutter instead.

> - What is the definition of "vibe-coding" these days? I thought it refers to the original Karpathy quote, like cowboy mode, where you accept all diffs and hardly look at code. But now it seems that "vibe-coding" is catch-all clickbait for any LLM workflow. (Tbf, this title "shipping real code with Claude" is fine)

Depends on who you ask, I guess. For me it hasn’t been a panacea, and I’ve often run into issues (Sonnet 3.7 and Codex have had ~60% success rates for me, but Opus 4 is actually very good).

> - Do you obfuscate any code before sending it to someone's LLM?

In this case, all of it was open source to begin with, but it’s a good point to think about.

reply
Artoooooor | 1 hour ago
I finally decided a few days ago to try this Claude Code thing on my personal project. It's depressingly efficient. And damn expensive: I spent over 10 dollars in one day. But I'm afraid it's inevitable; I will have to pay a tax to the AI overlords just to be able to keep my job.
reply
Syzygies | 50 minutes ago
I was looking at $2,000 a year and climbing before Anthropic announced the $100 and $200 Max subscriptions that bundle Claude Console and Claude Code. There are limits per five-hour window, but one can toggle back to the metered API with the /login command, or just walk the dog. $100 a month has done me fine.
reply
diwank | 42 minutes ago
Same. I ran out on the $200 one too yesterday. My usage has skyrocketed since Opus 4; nothing else comes close.
reply
lispisok | 49 minutes ago
I think most of this is good stuff, but I disagree with not letting Claude touch tests or migrations at all. Hand-writing tests from scratch is the part I hate the most; having an LLM do a first pass on tests, which I add to and adjust as I see fit, has been a big boon on the testing front. It seems the difference between me and the author is that I believe the human still takes ownership and responsibility whether or not code was generated by an LLM. Not letting Claude touch tests and migrations says you (rightfully) don't trust Claude, yet it hands ownership of Claude-generated code to Claude. That, or he doesn't trust his employees not to blindly accept AI slop, and the strict rules around tests and migrations are there to keep the slop from breaking everything or causing data loss.
reply
diwank | 45 minutes ago
True, but in my experience a few major pitfalls came up:

1. We ran into really bad minefields when we tried to come back to manually edit the generated tests later on. Claude tended to mock everything because it didn’t have context about how we run services, build environments, etc.

2. And this was the worst: all of the devs on the team, including me, got really lazy with testing. Bugs in production increased significantly.

reply
djrockstar1 | 1 hour ago
Pretty disingenuous to emphasize "building a culture of transparency" while simultaneously not disclosing how heavily AI was [very evidently] used in writing this post.
reply
diwank | 1 hour ago
I’d say around 40% me: the ideating, editing, citations, and images are all mine; the rest is Opus 4 :)

I typically try to also include a link to the original Claude chat in the post, but it seems Claude doesn’t allow sharing chats that used deep research.

See this series of posts, for example; I included the link right at the beginning: https://diwank.space/juleps-vision-levels-of-intelligence-pt...

I completely get the critique and I already talked about it earlier: https://news.ycombinator.com/item?id=44213823

Update: here’s an older chatgpt conversation while preparing this: https://chatgpt.com/share/6844eaae-07d0-8001-a7f7-e532d63bf8...

reply
GiorgioG | 1 hour ago
> I’d say around 40% me: the ideating, editing, citations, and images are all mine; the rest is Opus 4 :)

That's not something to be proud of.

reply
diwank | 53 minutes ago
Why not? This way at least the idea gets out there. Otherwise I’d never have gotten around to writing this.
reply
GiorgioG | 14 minutes ago
And we’d all have been better off not having read it.
reply
bitwize | 3 hours ago
Coming up next on Hacker News: Field Notes from Eating Real Pizza Made with Claude:

"The glue adds a flavor and texture profile that traditionalists may not be used to on their cheese pizza. But I've had Michelin-star quality pizzas without glue in them that weren't half as delicious as this one was with. AI-mediated glue-za is the future of pizza, no doubt about it."

reply
sdorf | 2 hours ago
The whole point seems to be how to get the most out of today's tooling without "glue getting in your pizza". It's a little flag-wavy (probably because of the author's company) but overall seemed like a pretty candid peek into how it's being used. Did you have a specific critique?
reply
diwank | 2 hours ago
Feedback appreciated! Will tone it down; did not intend it to be too much about our company, just that it is the codebase I mostly hack on. :)
reply
GiorgioG | 1 hour ago
This is the second time today something like this has shown up on the front page. Come on, people, cut the shit. From the HN guidelines:

"Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity."

reply