Show HN: Fail-closed execution guard for AI agents (Python, pip installable)
1 point
1 hour ago
| 1 comment
| github.com
echoos
1 hour ago
I kept running into the same issue building agents with tool access: nothing enforces what an agent can or can't execute. The LLM picks a tool, and it runs. Prompt-level restrictions are suggestions at best.

So I built a small guard that sits between LLM output and tool execution. You define a YAML policy per agent identity: allowed actions only. Anything not listed gets denied before it touches the runtime.

pip install agent-execution-guard

Key decisions:

- Default DENY for unknown agents and actions
- No model inference in the enforcement path, purely deterministic
- Every evaluation returns an auditable decision record
- Offline, no external calls
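A minimal sketch of how I picture that enforcement path (names like `evaluate`, `Decision`, and the policy fields are illustrative, not the actual package API; the policy is shown as an already-parsed dict rather than YAML to keep it dependency-free):

```python
from dataclasses import dataclass
import time

# Hypothetical policy, as it might look after parsing the YAML file.
# Anything not explicitly listed under "allow" is denied.
POLICY = {
    "agents": {
        "billing-agent": {"allow": ["read_invoice", "send_email"]},
    }
}

@dataclass(frozen=True)
class Decision:
    agent: str
    action: str
    allowed: bool
    reason: str
    timestamp: float

def evaluate(policy: dict, agent: str, action: str) -> Decision:
    """Deterministic, fail-closed check: default DENY for unknown
    agents and unlisted actions. No model inference in the loop."""
    entry = policy.get("agents", {}).get(agent)
    if entry is None:
        return Decision(agent, action, False, "unknown agent", time.time())
    if action not in entry.get("allow", []):
        return Decision(agent, action, False, "action not allowed", time.time())
    return Decision(agent, action, True, "explicitly allowed", time.time())

# A full record (not just a bool) comes back from every evaluation,
# so each decision can be logged for audit.
print(evaluate(POLICY, "billing-agent", "read_invoice").allowed)  # True
print(evaluate(POLICY, "billing-agent", "delete_user").allowed)   # False
print(evaluate(POLICY, "unknown-agent", "read_invoice").allowed)  # False
```

The guard wraps tool dispatch, so a denied decision means the tool call never executes rather than being filtered after the fact.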

No dependencies beyond cryptography. Interested in feedback on whether the policy schema makes sense and what's missing for real use cases.
