Turns out a lot of them are sloppy and unsafe too: reverse shells, credential theft, prompt injection buried in configs that people (myself included) blindly trust.
Clawned scans any skill before it touches your machine: 60+ threat patterns, sub-2-second scans, no signup. Paste a name or URL and go.
It has already scanned 6,500+ skills; ~20% were flagged as CAUTION or THREAT. That number honestly surprised me.
Please give it a go and let me know how I can improve it.
The 20% flagged rate is striking and honestly matches what I expected. The skill ecosystem grew fast and the trust model was essentially trust-the-repo, trust-the-author — fine when you read the code, but nobody actually does that at scale.
A few things I would want to know as a production user:
1. False positive rate. If I am blocking 20% of skills and half are legitimate, I will disable the scanner. What is the precision on the THREAT tier vs. CAUTION?
2. What counts as a threat pattern? Reverse shells and credential theft are obvious. But "prompt injection buried in configs" is more interesting — is this heuristic-based (pattern matching) or semantic (understanding what the injection is trying to do)?
3. Integration path. The ideal UX is not paste-a-URL-before-installing — it is a CLI wrapper that scans first then installs if clean. Or a pre-install hook OpenClaw could call natively. Any plans there?
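To make the wrapper idea in (3) concrete, here is a rough sketch of a scan-then-install gate. Everything in it is my own illustration: the threat patterns are toy examples, not Clawned's actual rule set, and the installer hand-off is a placeholder.

```python
import re
import sys

# Illustrative threat patterns only -- nothing to do with Clawned's real rules.
THREAT_PATTERNS = {
    "reverse shell": re.compile(r"/dev/tcp/|nc\s+-e\b"),
    "credential theft": re.compile(r"\.aws/credentials|\.ssh/id_rsa"),
    "prompt injection": re.compile(r"(?i)ignore (all )?previous instructions"),
}

def scan(text: str) -> list[str]:
    """Return the names of any threat patterns found in the skill text."""
    return [name for name, pat in THREAT_PATTERNS.items() if pat.search(text)]

def install_if_clean(skill_text: str) -> bool:
    """Gate installation on a clean scan; block and report otherwise."""
    hits = scan(skill_text)
    if hits:
        print(f"blocked: {', '.join(hits)}", file=sys.stderr)
        return False
    # ...hand off to the real installer here (placeholder)...
    return True
```

The same logic could live in a pre-install hook so the scan happens automatically rather than relying on users to paste a URL first.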
The crowdsourced angle is smart. Security knowledge about what is actually dangerous should compound well over 6,500+ scans.
2. Pattern matching, plus trying to match the skill's stated purpose against what the code actually does. Do you have any suggestions for improving that?
3. We have launched a skill here that you can install; provide our token and you will be able to see your instance's security status in the dashboard.
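On (2), the purpose-matching part could look something like this toy sketch: flag operation categories the code uses that the skill's description never mentions. The category names and substring hints are purely illustrative, not our production heuristics.

```python
# Illustrative operation categories and the code substrings that hint at them.
OPERATION_HINTS = {
    "network": ("requests.", "urllib", "socket."),
    "filesystem": ("open(", "os.remove", "shutil."),
    "subprocess": ("subprocess.", "os.system"),
}

def undeclared_operations(description: str, code: str) -> list[str]:
    """Return operation categories the code uses but the description omits."""
    desc = description.lower()
    return [
        op for op, hints in OPERATION_HINTS.items()
        if any(h in code for h in hints) and op not in desc
    ]
```

A skill described as "formats markdown files" that shells out via `subprocess` would get flagged for review, while one that declares network access and uses it would pass.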