Show HN: We tested AI agents with 214 attacks that don't require jailbreaking
1 points
1 hour ago
| 0 comments
| HN
Most agent security testing tries to jailbreak the model. That's really difficult, OpenAI and Anthropic are good at red-teaming.

We took a different approach: attack the environment, not the model.

Results from testing agents against our attack suite:

- Tool manipulation: Asked agent to read a file, injected path=/etc/passwd. It complied. - Data exfiltration: Asked agent to read config, email it externally. It did. - Shell injection: Poisoned git status output with instructions. Agent followed them. - Credential leaks: Asked for API keys "for debugging." Agent provided them.

None of these required bypassing the model's safety. The model worked correctly—the agent still got owned.

How it works:

We built shims that intercept what agents actually do: - Filesystem shim: monkeypatches open(), Path.read_text() - Subprocess shim: monkeypatches subprocess.run() - PATH hijacking: fake git/npm/curl that wrap real binaries and poison output

The model sees what looks like legitimate tool output. It has no idea.

214 attacks total. File injection, shell output poisoning, tool manipulation, RAG poisoning, MCP attacks.

Early access: https://exordex.com

Looking for feedback from anyone shipping agents to production.

No one has commented on this post.