Ask HN: Senior software engineers, how do you use Claude Code?
11 points | 11 hours ago | 7 comments
We’ve all seen the crazy “10 parallel agents” type setups, but I’ve never seen them fitting my workflow.

What I usually do is have Claude Code build a plan and have Codex find flaws in it, iterating until I get something that looks good. I give direction and make sure it follows my overall idea.

Implementation is working well on its own.

But this takes a lot of focus for me to get right; I can’t see myself doing it for multiple features on the same project at once.

Am I missing something?

falloutx
10 hours ago
"10 parallel agents" was just one guy with access to unlimited credits. Most of us are on a Max plan or an API plan and can't afford to burn credits like that. He could run the agents just to show that he was running 10 parallel agents.

As an average dev, what can I even do with 10 agents? It's like managing 10 toddlers who can code: it looks good, but it becomes hard to manage because you have limited context in your own brain.

Two is the best setup if you can afford it. One can write the tests and the other can write the code. This is better because if you just use the same agent instance, it's not going to write good tests; it will just write tests that its own code is going to pass. It's different for everyone, but for me, two is the best setup for TDD.
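The two-instance split described above can be sketched as a tiny driver script. This is a hypothetical sketch, not the commenter's actual setup: it assumes the Claude Code CLI's headless mode (`claude -p "prompt"`), and the `SPEC.md`/`tests/`/`src/` layout is made up for illustration. `CLAUDE` defaults to `echo` so the flow can be dry-run; set `CLAUDE=claude` to run it for real.

```shell
#!/bin/sh
# Hypothetical two-instance TDD loop: one fresh session writes the tests,
# a second fresh session writes the code, so neither grades its own homework.
# CLAUDE defaults to `echo` for a dry run; set CLAUDE=claude to invoke Claude Code.
CLAUDE="${CLAUDE:-echo}"

# Instance 1: writes failing tests from the spec, never touches the implementation.
$CLAUDE -p "Read SPEC.md and write failing unit tests under tests/. Do not modify src/."

# Instance 2: a separate session that makes the tests pass but may not edit them.
$CLAUDE -p "Make the tests under tests/ pass. Do not modify anything under tests/."

# Finally, verify with your own test runner rather than trusting either agent,
# e.g.: npm test
```

The point of the split is that the test-writing session and the code-writing session share no context, so the tests can't be quietly shaped around the implementation.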

Apart from that, you can just go ahead and do it your own way. I have found that many senior engineers think they are special when they can make Claude Code do something; they think it's their setup, but I can usually replicate the result without any setup or Agents.md/Claude.md files. The models are good enough without any complex setup.

allie1
9 hours ago
In my experience, Opus 4.5 writes pretty okay tests. Every now and then I point out issues, so I wouldn’t let it one-shot things, but with supervision even the same agent has been okay.

I agree that extensive modding isn’t required. Just maintaining my claude.md seems to do the trick.

aristofun
6 hours ago
Claude Code is very good at the monkey work of replicating boilerplate for the 100th variant of some overengineered, bloated piece of Java code in a codebase with 99 examples to follow.

It’s like a fire extinguisher that helps engineers manage the problems they created to begin with.

rozenmd
10 hours ago
LLMs are quite capable of rewrites these days - there are few tasks where I'd actually want 10 parallel agents, but rewriting off Next.js would've been faster with that setup.

(I ended up just using the Claude web interface and having it work through a checklist; it took 8 hours.)

al_borland
7 hours ago
I can only use approved AI tools with company code, which is only Copilot. I try to use it from time to time, but am continually disappointed. I can’t relate to any of the hype. When I end up using it, I think it slows me down, and I don’t end up using any of its code at the end of the day.

I’ll occasionally have it write a little regex for me, which it does a decent job with; that’s its main use for me.

kasey_junk
7 hours ago
Copilot is garbage. I routinely run the same tasks with the same models in Codex or Claude, and Copilot just cannot keep up. It’s really hard to convey to people how important the agent itself is right now.

That is, don’t judge the LLM hype by Copilot; it’s staggeringly bad.

al_borland
3 hours ago
Copilot has an agent mode now, but it just destroys everything it touches. I don’t know if this is the same agent as Codex or Claude (if I pick the associated model within Copilot).

Copilot is all I have to go by, due to the work restrictions. It took a good year to get that. I don’t know if anything else is in the works.

I’ve made a couple things outside of work with ChatGPT, but they were so basic that it was hardly something to get excited about. If it can’t help me at work, it’s hard for me to care much.

kasey_junk
3 hours ago
It is not the same. The agents (Claude Code, Codex) are the actual processes you’d interact with instead of Copilot.

The choices those processes make about tool calls, using subagents, etc. make a huge difference in the quality of the result you’ll get. Copilot is just an extremely bad agent compared to the state-of-the-art agents.

Several of the open-source agents will regularly outperform Copilot (try Aider, OpenCode, or Cline if you can). It’s really just a baffling own goal by Microsoft, how they’ve managed this.

tkiolp4
9 hours ago
You guys don’t have problems with letting these agents have access to private company code?

I use them only for early prototypes that we discard early on, but I can’t use them with legacy codebases, for reasons.

allie1
9 hours ago
They are so widely used, and the companies usually tell you if they use your prompts for training. And at current usage levels, your code gets lost in the noise even if they do use it. Anthropic says they don’t. Microsoft lets you disable it, same with ChatGPT. Unclear on Google.

But if you are worried, you can use an inference-only provider like Groq.

chiengineer
10 hours ago
Multi-agent setups are per use case; big enterprise organizations with tiered access are probably a good one.

For personal use, VS Code + GitHub Copilot Pro+ works great (the highest code-generation limits available, for $40) and includes over 10 models.

It’s been updated even in just the past couple of weeks; everything is there: agents, Codex, Claude.

I only have 16 GB of RAM, and I’m coding like 4 projects at once if I’m crazy enough.

rvz
11 hours ago
You are not missing anything. The “10 parallel agents” is just BS marketing on Twitter/X by vibe coders.

These agents work best when you know exactly what you want done, specifically on implementation. If you know what you are doing, some code (such as frontend) can be one-shotted >90% of the time with minimal checks.

Anything below that must be checked over by a human plus an agent; otherwise you risk introducing a critical leak, bug, or new security issue.
