Ask HN: How do you authorize AI agent actions in production?
3 points | 8 hours ago | 2 comments
I'm deploying AI agents that can call external APIs – process refunds, send emails, modify databases. The agent decides what to do based on user input and LLM reasoning.

My concern: the agent sometimes attempts actions it shouldn't, and there's no clear audit trail of what it did or why.

Current options I see:

1. Trust the agent fully (scary)
2. Manual review of every action (defeats automation)
3. Some kind of permission/approval layer (does this exist?)

For those running AI agents in production:

- How do you limit what the agent CAN do?
- Do you require approval for high-risk operations?
- How do you audit what happened after the fact?

Curious what patterns have worked.
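For concreteness, option 3 could look something like the sketch below: a default-deny policy table that maps each action to allow/deny/require-approval, plus an append-only audit log that records the agent's stated reasoning alongside the decision. All names and the policy entries here are hypothetical, not from any particular framework.

```python
import time
import uuid

# Hypothetical policy table: action name -> "allow", "deny", or
# "approve" (requires human sign-off). Entries are illustrative only.
POLICY = {
    "send_email": "allow",
    "process_refund": "approve",
    "modify_database": "deny",
}

AUDIT_LOG = []  # in production this would be an append-only store


def authorize(action, params, reason):
    """Check a proposed agent action against the policy and log it."""
    decision = POLICY.get(action, "deny")  # default-deny unknown actions
    AUDIT_LOG.append({
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "action": action,
        "params": params,
        "agent_reason": reason,  # the LLM's stated justification
        "decision": decision,
    })
    return decision


decision = authorize("process_refund", {"amount": 120.0},
                     "customer reported duplicate charge")
print(decision)  # "approve": park it in a review queue, don't execute
```

The audit question then mostly answers itself: because every attempt passes through `authorize()`, denied and approved attempts alike end up in the log with the agent's reasoning attached.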

chrisjj
8 hours ago
If one asked the same about any other kind of program known to produce incorrect and damaging output, the answer would be obvious: fix the program.

It is instructive to consider why the same does not apply in this case.

And see https://www.schneier.com/blog/archives/2026/01/why-ai-keeps-... .

throw03172019
8 hours ago
Human in the loop for certain actions.
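A minimal sketch of that pattern (the risk set and function names are my own illustration, not from a specific tool): low-risk actions execute immediately, while actions in a high-risk set are parked in a queue until a human approves or rejects them.

```python
import queue

HIGH_RISK = {"process_refund", "modify_database"}  # illustrative set

pending = queue.Queue()  # actions awaiting human review


def dispatch(action, params, execute):
    """Run low-risk actions immediately; queue high-risk ones."""
    if action in HIGH_RISK:
        pending.put((action, params))
        return "queued"
    return execute(action, params)


def review(approve, execute):
    """A human pulls one pending action and approves or rejects it."""
    action, params = pending.get_nowait()
    if approve:
        return execute(action, params)
    return "rejected"


# Usage: `executor` stands in for the real API client.
executor = lambda action, params: f"executed {action}"
print(dispatch("send_email", {"to": "a@example.com"}, executor))  # executed send_email
print(dispatch("process_refund", {"amount": 50}, executor))       # queued
print(review(approve=True, execute=executor))                     # executed process_refund
```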
chrisjj
6 hours ago
But how do you get the bot to comply?