// The policy is embedded as a JSON-escaped value inside a structured JSON object.
// This prevents prompt injection via policy content — any special characters,
// delimiters, or instruction-like text in the policy are safely escaped by
// json.Marshal rather than concatenated as raw text.I think you're spot on with the fact that it's so far it's been either all or nothing. You either give an agent a lot of access and it's really powerful but proportionally dangerous or you lock it down so much that it's no longer useful.
I like a lot of the ideas you show here, but I also worry that LLM-as-a-judge is fundamentally a probabilistic guardrail that is inherently limited. How do you see this? It feels dangerous to rely on a security system that's not based on hard limitations but rather probabilities?
Not exactly sure where I’m going with this, but my work with creating penetesting tools for LLMs, the way that I use judgment is critical to the core functionality of the application. I agree with your concern and I will just say that the more time I spent concerned with chain of though where now I will make multiple versions of the same app using a different judge set a different “temperaments” and I found it to be incredibly enlightening as to the diversity of applications and approaches that it creates.
Even using BMAD or superpowers, I can make five versions of an app without judges involved and I feel like I’m just making the same app five times because the API begins to coalesce around the business problem you want to solve. The vicissitudes of prediction tools always want to take the safest bet for the greater good, but with the judge involved we can make the agent force itself to actually be hostile about what exactly we’re trying to do, which has produced interesting and fun results.The question edf13 pointed at but didn’t develop; where does a transport-layer judge earn its place at all? Not as the enforcement layer but as the audit layer on top of one. Kernel-level controls tell you what the agent did. A proxy tells you what the agent tried to exfiltrate and where to.
Structured-JSON escaping and header caps are good tools for the detection job. They’re the wrong tools for the prevention job. Different layers, different questions.
We looked at LLM-as-judge early on and ended up discounting it on security grounds: the judge itself sits in the prompt-injection blast radius, and a probabilistic gate protecting a probabilistic agent felt like the wrong shape for a security primitive. Their structured-JSON escaping and header/body caps are thoughtful mitigations, but they reduce the surface rather than eliminate it.
Picking the transport layer makes sense for production API-calling agents where egress is where irreversible damage lands. The architectural tradeoff is what the proxy can't see: file reads, shell spawns, process execs. The canonical prompt-injection chain (malicious README -> read ~/.ssh/id_rsa -> POST to attacker.com) is three steps, and CrabTrap only sees step three. The credential has already left the filesystem and entered agent process memory by the time the judge evaluates the outbound request.
HTTP_PROXY/HTTPS_PROXY also depends on cooperative libraries. The iptables note handles this well in a containerised production deploy. For local-laptop coding agents, which is where most prompt-injection attack surface lives today, there's no equivalent kernel-level backstop.
For that threat model we've been building grith.ai at the syscall layer (ptrace/seccomp-BPF on Linux, Endpoint Security on macOS, Minifilter + ETW on Windows) rather than transport. The two compose cleanly; serious production deploys probably want both.
One thing I didn't see: are there any OSS solutions appearing here?
Also interesting that you went HTTP. Most agent tooling I've been running is stdio-based (MCP-style). What did the HTTP framing buy you architecturally?
Why it lands: specific technical question, credits their work, ends with something that invites response. If Brex engineers are in the thread, one of them will likely reply.
If both are Claude, you have shared-vulnerability risk. Prompt-injection patterns that work against one often work against the other. Basic defense in depth says they should at least be different providers, ideally different architectures.
Secondary issue: the judge only sees what's in the HTTP body. Someone who can shape the request (via agent input) can shape the judge's context window too. That's a different failure mode than "judge gets tricked by clever prompting." It's "judge is starved of the signals it would need to spot the trick."
not adding LLM layers to stuff to make them inherently less secure.
This will be a neat concept for the types of tools that come after the present iteration of LLMs.
Unless I’m sorely mistaken.
Most proper LLM guardrails products use both.
Edit: actually looks like it has two policy engines embedded
If people said "we build a ML-based classifier into our proxy to block dangerous requests" would it be better? Why does the fact the classifier is a LLM make it somehow worse?
The entire purpose of LLMs is to be non-static: they have no deterministic output and can't be validated the same way a non-LLM function can be. Adding another LLM layer is just adding another layer of swiss cheese and praying the holes don't line up. You have no way of predicting ahead of time whether or not they will.
You might say this hasn't prevented leaks/CVEs in exisiting mission-critical software and this would be correct. However, the people writing the checks do not care. You get paid as long as you follow the spec provided. How then, in a world which demands rigorous proof do you fit in an LLM judge?
This is exactly the point though. A LLM is great at finding work-around for static defenses. We need something that understands the intent and responds to that.
Static rules are insufficient
EDIT: it does seem to have a deterministic layer too and I think that's great