I have been using Claude Code and Cursor lately, and as we all know, they write code incredibly fast, but I have noticed they tend to introduce the same security flaws over and over. For example, if you ask the LLM to build a file upload feature, you will get working code in minutes, but it will almost always skip magic-byte validation or leave you vulnerable to SVG XSS. The LLM optimizes for code that compiles, not code that is secure.
To fix this in my own workflow, I built a set of 8 security-focused AI agents (AppSec, GRC, Cloud/Platform, etc.) that you can drop into any MCP-compatible tool (Cursor, Windsurf) or use with Claude Code.
To clarify, the goal here is not to claim that an LLM replaces AppSec or the Secure Software Development Lifecycle. Instead, the goal is to provide a series of structured prompts and concrete security artifacts (like STRIDE-based threat models and ASVS-mapped functional requirements) for developers who are already using AI to write code. The aim is to force the LLM to pause and sort of put on a security hat during specific phases of the SDLC.
What It Actually Is
It is an MIT-licensed repo containing the agent prompts, document templates, and an MCP server. You can install it via the Claude marketplace or globally via npm, which gives you a CLI to scaffold git hooks, CodeQL CI gates, and the MCP config. The repo also includes 3 full walkthroughs showing how the agents catch things.
I am an Application Security Engineer, and I am really curious to hear feedback and critique. You can try it out via the URL, no signup required. I will be around to answer any questions.