The database updates automatically by monitoring chrome-stats for removed extensions and scanning security blogs. Currently tracking 1000+ known malicious extensions with extension IDs, names, and dates.
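Simplified, the update step boils down to deduplicating newly discovered entries against the existing CSV before appending them. Here's a minimal sketch of just that piece (the chrome-stats/blog scraping is left out, and the column names are illustrative rather than the exact published schema):

```python
# Sketch of the CSV update step only: merge new entries, deduplicating on
# extension ID. Column names ("extension_id", "name", "date_added") are
# illustrative, not a guaranteed schema.
import csv
from datetime import date

FIELDS = ["extension_id", "name", "date_added"]

def merge_entries(csv_path: str, new_entries: list[dict]) -> int:
    """Append entries whose extension_id isn't already in the CSV; return count added."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        known = {row["extension_id"] for row in csv.DictReader(f)}
    added = [e for e in new_entries if e["extension_id"] not in known]
    with open(csv_path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        for entry in added:
            entry.setdefault("date_added", date.today().isoformat())
            writer.writerow(entry)
    return len(added)
```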
I'm working on detection tools (GUI + CLI) to scan locally installed extensions against this database, but wanted to share the raw data first since maintained threat intelligence lists like this are hard to find.
The automation runs 24/7 and pushes updates to GitHub. Free to use for research, integration into security tools, or whatever you need.
Happy to answer questions about the scraping approach or data collection methods.
The CLI/GUI tools I'm building read your locally installed extensions, extract their IDs, and check them against the CSV (which you can clone/download). No data leaves your machine during the scan.
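In rough terms the scan is just a set intersection: collect the extension IDs from the local profile's Extensions folder and compare them against the IDs in the CSV. A simplified sketch of that idea (default profile paths only; the `extension_id` column name is illustrative):

```python
# Rough sketch: list locally installed Chrome extension IDs and check them
# against a local clone of the malicious-extension CSV. Only the default
# profile is covered, and the CSV column name is an assumption.
import csv
import sys
from pathlib import Path

# Default Chrome extension directories per platform (default profile only).
EXTENSION_DIRS = {
    "linux": Path.home() / ".config/google-chrome/Default/Extensions",
    "darwin": Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",
    "win32": Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions",
}

def installed_extension_ids() -> set[str]:
    """Collect extension IDs (32-char directory names) from the local profile."""
    ext_dir = EXTENSION_DIRS.get(sys.platform)
    if ext_dir is None or not ext_dir.is_dir():
        return set()
    return {p.name for p in ext_dir.iterdir() if p.is_dir() and len(p.name) == 32}

def known_bad_ids(csv_path: str) -> set[str]:
    """Load malicious extension IDs from a local copy of the CSV."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return {row["extension_id"].strip() for row in csv.DictReader(f)}

if __name__ == "__main__":
    hits = installed_extension_ids() & known_bad_ids("malicious_extensions.csv")
    for ext_id in sorted(hits):
        print(f"FLAGGED: {ext_id}")
```

Everything runs against files already on disk, which is how the "no data leaves your machine" property holds.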
The only "central" piece is the GitHub-hosted CSV itself, which is just a static file anyone can audit, fork, or host themselves. No API calls, no telemetry, no server lookups.
You're right that this design prevents the verification tool from becoming an attack vector. Even if my repo got compromised, the worst case is a bad CSV; your local scan process stays isolated.
I'm also looking at surfacing critical permissions for locally installed extensions: things like "access to all websites," "read clipboard," etc. That way users can make informed decisions about what to keep based on what's actually authorized, even if an extension isn't in the malicious database yet.
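As a rough illustration of what that could look like: read each installed extension's manifest.json and flag broad permissions. The "critical" list below is just an example, not a final policy:

```python
# Sketch of the permission-surfacing idea: scan manifest.json files under the
# Extensions folder and flag broad grants. The CRITICAL set here is
# illustrative, not a finished classification.
import json
from pathlib import Path

CRITICAL = {"<all_urls>", "clipboardRead", "clipboardWrite", "webRequest",
            "cookies", "history", "tabs", "nativeMessaging"}

def critical_permissions(ext_dir: Path) -> dict[str, set[str]]:
    """Map extension ID -> flagged permissions, from <id>/<version>/manifest.json."""
    findings: dict[str, set[str]] = {}
    for manifest in ext_dir.glob("*/*/manifest.json"):
        data = json.loads(manifest.read_text(encoding="utf-8-sig"))
        granted = set(data.get("permissions", [])) | set(data.get("host_permissions", []))
        flagged = {p for p in granted if p in CRITICAL}
        # Treat broad host patterns like "*://*/*" as all-site access too.
        flagged |= {p for p in granted if isinstance(p, str) and p.endswith("://*/*")}
        if flagged:
            findings[manifest.parent.parent.name] = flagged
    return findings
```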
Appreciate the security-minded feedback.