Show HN: Secret Sanitizer – auto-masks secrets when you paste into AI chats
I kept pasting code with hardcoded API keys, database credentials, and auth tokens into ChatGPT while debugging. Copy a failing function, paste it into the chat, and realise your AWS secret key or Stripe token was right there in the snippet.

So I built (with some help from Claude) a simple Chrome extension that intercepts the paste event, detects secrets with local regex matching, and replaces them with [MASKED] before they reach the chat. The originals go into a local AES-256-encrypted vault so you can unmask them later.
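For anyone curious how the interception works in principle, here's a rough sketch of the idea, not the extension's actual code. The [MASKED] placeholder and pattern names follow the post; the regexes and everything else are illustrative:

    // Minimal sketch: mask matches before the page's own paste handler sees them.
    const PATTERNS = [
      { name: 'aws-access-key-id', re: /AKIA[0-9A-Z]{16}/g },      // illustrative regexes
      { name: 'github-pat',        re: /ghp_[A-Za-z0-9]{36}/g },
      { name: 'stripe-secret',     re: /sk_live_[A-Za-z0-9]{24,}/g },
    ];

    function maskSecrets(text) {
      return PATTERNS.reduce((out, { re }) => out.replace(re, '[MASKED]'), text);
    }

    // Capture phase, so this runs before the chat app handles the paste.
    document.addEventListener('paste', (event) => {
      const original = event.clipboardData.getData('text/plain');
      const masked = maskSecrets(original);
      if (masked === original) return;          // nothing sensitive, let the paste through

      event.preventDefault();
      // Insert the sanitized text at the caret instead of the raw clipboard contents.
      // (execCommand is deprecated but still the simplest cross-editor way to do this.)
      document.execCommand('insertText', false, masked);
    }, true);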

No servers. No network requests. No tracking. ~41 KB, zero dependencies. Don't take my word for it: 'grep -r "fetch\|XMLHttpRequest" content_script.js' returns nothing.
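The vault piece can be done with nothing but the browser's built-in Web Crypto API, which is how a zero-dependency, no-network design stays plausible. A hypothetical sketch using AES-GCM; the function names and storage format here are mine, not the extension's:

    async function createVaultKey() {
      // 256-bit AES-GCM key, kept non-extractable
      return crypto.subtle.generateKey({ name: 'AES-GCM', length: 256 }, false, ['encrypt', 'decrypt']);
    }

    async function lockAway(key, secret) {
      const iv = crypto.getRandomValues(new Uint8Array(12));   // fresh IV per entry
      const data = await crypto.subtle.encrypt(
        { name: 'AES-GCM', iv },
        key,
        new TextEncoder().encode(secret)
      );
      // Persist iv + ciphertext locally (e.g. chrome.storage.local); nothing leaves the machine.
      return { iv: Array.from(iv), data: Array.from(new Uint8Array(data)) };
    }

    async function unmask(key, entry) {
      const buf = await crypto.subtle.decrypt(
        { name: 'AES-GCM', iv: new Uint8Array(entry.iv) },
        key,
        new Uint8Array(entry.data)
      );
      return new TextDecoder().decode(buf);
    }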

Works on ChatGPT, Claude, Gemini, Grok, Perplexity, DeepSeek, and any custom site you add. Supports 30+ patterns: AWS keys, GitHub tokens, JWTs, Stripe keys, database URLs, private keys, and more. You can toggle individual patterns off if one triggers false positives.
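The per-pattern toggle is conceptually just a flag on each pattern entry, something along these lines (again illustrative, not the shipped code):

    // Only enabled patterns participate in masking; a noisy one can be switched off.
    const patterns = [
      { name: 'jwt',          enabled: true,  re: /eyJ[\w-]+\.[\w-]+\.[\w-]+/g },
      { name: 'database-url', enabled: false, re: /postgres:\/\/\S+/g },   // toggled off
    ];

    const activePatterns = patterns.filter((p) => p.enabled);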

Open source, MIT licensed. With the recent news about extensions harvesting AI conversations, I figured more devs could use this.

Would love feedback — especially on patterns I might be missing or edge cases you hit.
