I've been building AIP (Agent Intent Protocol) — an open, cryptographic protocol for identity and authorization of autonomous AI agents.
The problem: Every AI agent framework (LangChain, CrewAI, AutoGen) gives agents the ability to act — call APIs, send emails, move money, access databases. But there's no standard way to verify what an agent is allowed to do before it does it. No identity system, no boundary enforcement, no kill switch. We give agents god-mode access and hope the prompt engineering holds.
What AIP does: It's a verification layer that sits between "agent wants to act" and "action executes." Every agent gets an Ed25519 keypair identity (DID-based), every action becomes a signed Intent Envelope, and every envelope passes through an 8-step verification pipeline before the action runs.
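Concretely, a signed Intent Envelope can be pictured like this. This is a hypothetical sketch, not the RFC-001 wire format; the field names and the canonicalization scheme are illustrative.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class IntentEnvelope:
    # Illustrative fields, not the RFC-001 schema
    agent_did: str      # DID identifying the acting agent
    action: str         # declared action, e.g. "transfer_funds"
    params: dict        # parameters the verifier checks against the boundary
    nonce: str          # replay protection
    signature: str = "" # Ed25519 signature over the canonical payload

    def canonical_payload(self) -> bytes:
        # Deterministic serialization so signer and verifier
        # hash byte-identical input (signature field excluded).
        body = {k: v for k, v in asdict(self).items() if k != "signature"}
        return json.dumps(body, sort_keys=True, separators=(",", ":")).encode()

env = IntentEnvelope(
    agent_did="did:key:z6MkExample",
    action="transfer_funds",
    params={"to": "acct-42", "amount": 50.0},
    nonce="b3f1",
)
```

The key property is that the envelope, not the transport, carries the authorization claim: the verifier can check it without trusting the channel it arrived on.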
One line, and every call is verified before execution:

```python
@shield(actions=["transfer_funds"], limit=100.0)
def send_payment(to: str, amount: float):
    bank.transfer(to, amount)
```
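For intuition, here is a minimal local sketch of what a decorator like this could enforce. This is not the aip-protocol implementation (which signs an Intent Envelope and runs the verification pipeline); the argument handling assumes a `(to, amount)` signature like the example above.

```python
import functools

def shield(actions, limit=None):
    """Toy sketch: declare allowed actions and enforce a monetary cap locally."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # Assumes amount is the second positional arg or an "amount" kwarg
            amount = kwargs.get("amount", args[1] if len(args) > 1 else None)
            if limit is not None and isinstance(amount, (int, float)) and amount > limit:
                raise PermissionError(f"AIP: amount {amount} exceeds limit {limit}")
            return fn(*args, **kwargs)
        wrapper.declared_actions = list(actions)  # visible to an external verifier
        return wrapper
    return decorate
```

The real pipeline does this check out-of-process against the agent's boundary, so a compromised agent can't simply monkey-patch the decorator away.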
Key design decisions (happy to be challenged on these):
Ed25519 over JWT/API keys — Agents need cryptographic identity, not bearer tokens. A token can be leaked; a private key signs intent with non-repudiation.
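The difference is easy to demonstrate. Below is a sketch using the third-party `cryptography` package (not the aip-protocol SDK API, whose function names may differ): the agent signs its intent, anyone with the public key can verify it, and a tampered payload fails verification. A leaked bearer token lets anyone act; a signature proves which key produced which intent.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_intent(private_key: Ed25519PrivateKey, payload: bytes) -> bytes:
    # Only the holder of the private key can produce this signature
    return private_key.sign(payload)

def verify_intent(public_key, signature: bytes, payload: bytes) -> bool:
    # Anyone with the public key can verify; no shared secret to leak
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

agent_key = Ed25519PrivateKey.generate()  # the agent's identity
payload = b'{"action":"transfer_funds","amount":50.0}'
signature = sign_intent(agent_key, payload)
```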
Tiered verification — Not every action needs full crypto. Low-risk cached calls verify in <1ms (HMAC), normal ops ~5ms (Ed25519), high-value cross-org ~50ms (full pipeline). The protocol auto-selects.
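A sketch of how tier selection and the cheap HMAC tier could look. The thresholds and tier names here are illustrative assumptions, not taken from RFC-001:

```python
import hashlib
import hmac

def select_tier(action: str, amount: float, cross_org: bool) -> str:
    # Illustrative policy; the real protocol's selection rules may differ
    if cross_org or amount >= 1000:
        return "full-pipeline"  # ~50 ms: signature + revocation + boundary checks
    if amount > 0:
        return "ed25519"        # ~5 ms: per-call asymmetric signature
    return "hmac"               # <1 ms: symmetric MAC for low-risk cached calls

def hmac_verify(key: bytes, payload: bytes, tag: bytes) -> bool:
    # Tier-1 check: recompute the MAC and compare in constant time
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

The trade-off: HMAC needs a pre-shared key and gives no non-repudiation, which is why it's reserved for low-risk cached calls.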
22 structured error codes — Every failure returns AIP-E{category}{code} (e.g., AIP-E202: MONETARY_LIMIT). Audit trails should say exactly what went wrong, not 403 Forbidden.
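A sketch of the shape such an error could take. Only `AIP-E202: MONETARY_LIMIT` is from the post; the constructor and the category/code split are my assumptions about the `AIP-E{category}{code}` format:

```python
class AIPError(Exception):
    """Structured AIP failure in the form AIP-E{category}{code} (illustrative)."""

    def __init__(self, category: int, code: int, name: str, detail: str = ""):
        # e.g. category=2, code=2 -> "AIP-E202"
        self.code = f"AIP-E{category}{code:02d}"
        self.name = name
        suffix = f" ({detail})" if detail else ""
        super().__init__(f"{self.code}: {name}{suffix}")
```

An audit log line then carries the machine-readable code and the human-readable cause, instead of an opaque 403.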
Boundary enforcement, not permission prompts — Agents don't ask "can I do this?" — they declare intent, and the verifier mathematically checks it against their boundary cage (allowed actions, monetary limits, geo restrictions, deny lists).
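A minimal sketch of a declarative boundary cage and the check against a declared intent. Field names and the check order are illustrative, not the protocol's actual schema:

```python
# Hypothetical boundary cage for one agent
CAGE = {
    "allowed_actions": {"transfer_funds", "read_balance"},
    "monetary_limit": 100.0,
    "deny_list": {"acct-blocked"},
}

def check_boundary(cage, action, params):
    """Return (allowed, reason) for a declared intent against the cage."""
    if action not in cage["allowed_actions"]:
        return False, "ACTION_NOT_ALLOWED"
    if params.get("amount", 0.0) > cage["monetary_limit"]:
        return False, "MONETARY_LIMIT"
    if params.get("to") in cage["deny_list"]:
        return False, "DENY_LIST"
    return True, "OK"
```

Because the cage is data rather than code, it can be audited, diffed, and versioned independently of the agent.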
Kill switch with near-zero propagation delay — Revoke any agent globally. The revocation mesh pushes the revocation over SSE/WebSocket to every connected deployment at once.
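The push model can be sketched in-memory with an observer pattern. The real mesh delivers these events over SSE/WebSocket to remote deployments; the class name and methods here are illustrative:

```python
class RevocationMesh:
    """In-memory sketch of push-based revocation (real transport: SSE/WebSocket)."""

    def __init__(self):
        self._revoked = set()
        self._subscribers = []  # stand-ins for connected deployments

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def revoke(self, agent_did: str):
        self._revoked.add(agent_did)
        for notify in self._subscribers:
            notify(agent_did)  # pushed, not polled: no stale-cache window

    def is_revoked(self, agent_did: str) -> bool:
        return agent_did in self._revoked
```

The design point is that verifiers never poll for revocation state, so there is no cache-TTL window during which a revoked agent can still act.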
What's shipped:
- Python SDK: `pip install aip-protocol` (MIT, 63 tests passing)
- CLI: `aip init`, `aip create-passport`, `aip verify`, `aip watch`
- Shield decorator: `@shield` (helmet.js, but for AI agents)
- Cloud dashboard: aip.synthexai.tech (free tier)
- Protocol spec: RFC-001

What's NOT shipped yet:

- TypeScript SDK (built, 31/31 conformance tests passing, not published)
- Framework adapters for CrewAI, LangChain, AutoGen (built, not open-sourced yet)
- Formal security audit

Links:

- GitHub: https://github.com/theaniketgiri/aip
- PyPI: https://pypi.org/project/aip-protocol/
- Live dashboard: https://aip.synthexai.tech
- Protocol spec: RFC-001 in the repo
I'm genuinely interested in pushback on the protocol design. Is Ed25519 overkill for agent auth? Should boundary enforcement be declarative or imperative? Is DID-based identity the right addressing model, or is there something simpler?
Happy to answer any questions about the implementation.
Right now, most platforms treat agents as ordinary user accounts with no verification layer. A standardized protocol for agent capabilities and permissions would make the whole agent economy more trustworthy.
Two identical API calls can come from either intended behavior or a manipulated model, and today they look the same to the system. Permissions tied to a static identity don’t describe the real risk.
So the missing piece is verifying the agent’s declared intent and boundaries before execution, not just who sent the request.
That’s why this starts looking more like protocol infrastructure than a product feature.