Ask HN: How are you keeping AI coding agents from burning money?
3 points
14 hours ago
| 6 comments
My agents retry a bit more than they should, and my bill goes through the roof. I tried to figure out what was causing it, but none of the tools helped much.

And the worst thing for me is that everything shows up as aggregate usage: total tokens, total cost, maybe a per-model breakdown.

So I ended up hacking together a thin layer in front of OpenAI where every request is forced to carry some context (agent, task, user, team), then logging cost per call and putting some basic limits on top so you can actually block an agent if it starts going off the rails. It’s very barebones, but even just seeing “this agent + this task = this cost” was a big relief.

It uses your own OpenAI key, so it’s not doing anything magical on the execution side, just observing and enforcing.
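To make it concrete, the core of the layer is roughly this shape. A minimal sketch, assuming the OpenAI Python SDK; the prices and tag fields are illustrative placeholders, not real numbers:

    import time
    from openai import OpenAI

    # Placeholder $-per-1M-token prices; real prices change, look them up.
    PRICES = {"gpt-4o-mini": {"in": 0.15, "out": 0.60}}

    client = OpenAI()  # uses your own OPENAI_API_KEY

    def tagged_completion(agent, task, user, team, **kwargs):
        """Force every call to carry context, then log its cost."""
        t0 = time.time()
        resp = client.chat.completions.create(**kwargs)
        price = PRICES[kwargs["model"]]
        cost = (resp.usage.prompt_tokens * price["in"]
                + resp.usage.completion_tokens * price["out"]) / 1e6
        print(f"{agent}/{task} user={user} team={team} "
              f"cost=${cost:.6f} latency={time.time() - t0:.1f}s")
        return resp

The limit side is just a check against a running total keyed by those same tags before the request goes out.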

I want to know how you guys are dealing with this right now. Are you just watching aggregate usage and trusting it, or have you built something to break it down per agent / task?

If useful, here is the rough version I’m using: https://authority.bhaviavelayudhan.com/

grahammccain
4 hours ago
Kind of an adjacent question, but do you think the token/usage way of paying for things will stick? I still think people would rather pay a monthly subscription for a seat.
reply
jerome_mc
9 hours ago
AI outputs often feel like a gacha game. Paradoxically, the 'expensive' tokens are sometimes the cheapest in the long run. In my experience, higher-end models have a much higher 'one-shot' success rate. You aren't just saving on total token count by avoiding loops; you’re saving engineering time, which is always the most expensive resource anyway.
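Back-of-the-envelope version of that argument, with made-up costs and success rates:

    # Made-up prices and one-shot success rates, just the shape of the math.
    # Attempts until first success is geometric, so expected attempts = 1/p
    # and expected cost = cost_per_attempt / p.
    cheap = 0.02 / 0.2  # $0.02/attempt, one-shots 20% of the time -> $0.100
    big = 0.08 / 0.9    # $0.08/attempt, one-shots 90% of the time -> $0.089
    print(f"cheap: ${cheap:.3f}  big: ${big:.3f}")

With those (hypothetical) numbers the pricier model already wins on tokens alone, before counting any engineering time.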
reply
DarthCeltic85
13 hours ago
I had gotten a student/ultra code from an antigravity promo for three months, so I was using that, but it finally ran out this month. Currently I'm using windstream, flipping between Claude as my left brain / code extraction and the higher-context but cheaper-ish models there.

Honestly though, I'm getting to a point where I'm running custom project MDs that flip between different models for different things, using list outputs depending on what it finds and runs. (I have two monorepo projects, and one that's a polyglot microengine that communicates over gRPC.)

The MDs are highly specialized for each project, since each project deals with vastly different issues. Cycling through the different pro accounts and keeping the MDs in place over it all is helping me not kill my wallet.
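Not their actual setup, but the routing idea behind those MDs might boil down to something like this (task buckets and model names are hypothetical):

    # Hypothetical routing table: per-project MDs reduce to
    # "which model handles which kind of task".
    ROUTES = {
        "refactor": "big-context-model",  # needs the whole monorepo
        "unit_tests": "mid-tier-model",
        "boilerplate": "cheap-fast-model",
    }

    def pick_model(task_kind: str) -> str:
        return ROUTES.get(task_kind, "mid-tier-model")  # sane default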

reply
bhaviav100
13 hours ago
Hmm, interesting. Model routing + specialized MDs makes sense for cost efficiency.

I’m seeing a different failure mode though: even with good routing, agents are looping or retrying and burning my money.

reply
rox_kd
14 hours ago
In what setting do you mean? There are multiple strategies; building your own compaction layer in front seems a bit like overkill. Have you considered implementing some cache strategy, or summary pipelines? I once made an agent that, based on the messages, routed things to a smaller model for compaction / summaries, to bring down the context for the main agent.

But also: make sure you start fresh context threads instead of banging through a single one until your whole feature is done. Working in small atomic increments works pretty well.
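A minimal sketch of that compaction idea, assuming the OpenAI SDK (the model choice and prompt are placeholders):

    from openai import OpenAI

    client = OpenAI()

    def compact(messages, keep_last=4):
        """Summarize old turns with a small model; keep recent ones verbatim."""
        old, recent = messages[:-keep_last], messages[-keep_last:]
        if not old:
            return messages
        summary = client.chat.completions.create(
            model="gpt-4o-mini",  # the cheap compactor model
            messages=[{"role": "user",
                       "content": "Summarize this conversation concisely:\n"
                                  + "\n".join(m["content"] for m in old)}],
        ).choices[0].message.content
        return [{"role": "system",
                 "content": f"Earlier context: {summary}"}] + recent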

reply
bhaviav100
13 hours ago
Yes, compaction and smaller models help with cost per step.

But my issue wasn’t just inefficiency, it was agents retrying when they shouldn’t.

I needed visibility + limits per agent/task, and the ability to cut it off, not just optimize it.
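The "cut it off" part can be a tiny budget check in that same layer. A sketch with in-memory state and made-up budgets; a real version would want persistence:

    BUDGETS = {"researcher": 2.00, "coder": 5.00}  # $ per agent, made up
    spent = {}

    class BudgetExceeded(Exception):
        pass

    def charge(agent: str, cost: float):
        """Record a call's cost; raise to hard-stop an agent over budget."""
        spent[agent] = spent.get(agent, 0.0) + cost
        if spent[agent] > BUDGETS.get(agent, 1.00):
            raise BudgetExceeded(f"{agent} spent ${spent[agent]:.2f}, blocking")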

reply
rox_kd
2 hours ago
I'm working on a fun project I call OpenFAST, which essentially tries to solve the context transitioning, but it's still early days and I haven't released anything yet.

I think one of the bigger issues is the O(n) orchestrator-to-agent calls, which often feel uncontrolled... the orchestrator of sub-agents ends up being the main bottleneck due to the large context it sometimes accumulates.

I'm working on an idea where agents deliver briefs & deliveries as real artifacts; each spawned sub-agent reads the briefs, and if it needs further information, picks up the delivery for that specific brief.

It helps with drift detection across agents, and the best part is the orchestrator only delegates jobs and doesn't do much beyond that.

Whenever the sub-agents have delivered their tasks, the orchestrator can then read a merged brief/delivery for that specific round.

So far it helps cut the extra tool call where each sub-agent answers the orchestrator, and it also lets the orchestrator delve only into deliveries it believes are relevant, rather than trying to comprehend every small detail.

I can share more when I'm a bit further along; maybe you could get some inspiration from it.
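OpenFAST isn't released, so this is purely a guess at the shape of those brief/delivery artifacts, not its actual design:

    from dataclasses import dataclass
    from pathlib import Path
    import json

    @dataclass
    class Brief:
        task_id: str
        goal: str          # what the sub-agent should do
        inputs: list[str]  # ids of prior deliveries it may read

    def write_delivery(task_id: str, content: str, outdir: str = "artifacts"):
        """Sub-agent writes its result as a file instead of replying inline;
        the orchestrator later reads only the deliveries it finds relevant."""
        Path(outdir).mkdir(exist_ok=True)
        Path(outdir, f"{task_id}.json").write_text(
            json.dumps({"task_id": task_id, "content": content}))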

reply
spl757
8 hours ago
Don't use tech with deep, unresolved flaws and you won't get fucked.

Would you find it acceptable if PostgreSQL occasionally hallucinated and returned gibberish? Fuck no.

Why is this okay with ANY software? Answer: it's not. AI IS NOT READY.

reply
spl757
8 hours ago
By not using it. The tech is flawed. It hallucinates. It's not production ready. I've said it before, and I will say it again. Anyone using AI in a production environment is a fucking idiot.
reply