The key point: This doesn't touch Reddit's servers. Ever. Download the Pushshift dataset, run my tool locally, get a fully browsable archive. Works on an air-gapped machine. Works on a Raspberry Pi serving your LAN. Works on a USB drive you hand to someone.
What it does: Takes compressed data dumps from Reddit (.zst), Voat (SQL), and Ruqqus (.7z) and generates static HTML. No JavaScript, no external requests, no tracking. Open index.html and browse. Want search? Run the optional Docker stack with PostgreSQL – still entirely on your machine.
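The dump-to-static-HTML idea can be sketched in a few lines. This is my own illustration, not the tool's actual template or code — each line of a Pushshift-style dump is one JSON object, and I skip the .zst decompression step (the real tool presumably streams the compressed file):

```python
import html
import json

# Toy page template -- the real tool uses Jinja2 templates.
PAGE = "<html><body><h1>{title}</h1><p>by {author}</p></body></html>"

def render_post(line: str) -> str:
    """Render one JSON line from a Pushshift-style dump to static HTML."""
    post = json.loads(line)
    return PAGE.format(
        title=html.escape(post.get("title", "")),
        author=html.escape(post.get("author", "")),
    )

page = render_post('{"title": "Hello", "author": "alice"}')
```

Escaping every field before it lands in HTML is the one non-negotiable part; everything else is just templating.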
API & AI Integration: Full REST API with 30+ endpoints – posts, comments, users, subreddits, full-text search, aggregations. Also ships with an MCP server (29 tools) so you can query your archive directly from AI tools.
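Querying an API like that from a script is straightforward; the endpoint path and parameter names below are hypothetical (check the repo's API docs for the real routes), but the shape is typical:

```python
from urllib.parse import urlencode

def search_url(base: str, query: str, subreddit: str, limit: int = 25) -> str:
    """Build a full-text search URL against a local archive instance.
    The /api/search route and its params are illustrative, not the
    tool's documented API."""
    params = urlencode({"q": query, "subreddit": subreddit, "limit": limit})
    return f"{base}/api/search?{params}"

url = search_url("http://localhost:8080", "printer driver", "techsupport")
```

Since the whole stack runs locally, there are no API keys or rate limits to deal with — `base` is just your own machine.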
Self-hosting options: - USB drive / local folder (just open the HTML files) - Home server on your LAN - Tor hidden service (2 commands, no port forwarding needed) - VPS with HTTPS - GitHub Pages for small archives
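For the Tor option, the usual pattern (I don't know the exact two commands the README uses, so treat this as a generic sketch) is to point a hidden service at the local web server in `torrc` and restart Tor — the onion address is then generated for you, and since Tor dials out, no port forwarding is needed:

```
# /etc/tor/torrc -- map an onion address to the local archive server
HiddenServiceDir /var/lib/tor/redd-archiver/
HiddenServicePort 80 127.0.0.1:80
```

After restarting Tor, the onion hostname appears in `/var/lib/tor/redd-archiver/hostname`.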
Why this matters: Once you have the data, you own it. No API keys, no rate limits, no ToS changes can take it away.
Scale: Tens of millions of posts per instance. PostgreSQL backend keeps memory constant regardless of dataset size. For the full 2.38B post dataset, run multiple instances by topic.
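The "memory stays constant regardless of dataset size" property generally comes from streaming rows in fixed-size batches rather than loading the dump into RAM. A generic stdlib sketch of that pattern (the real tool presumably does this via PostgreSQL cursors and batched inserts):

```python
from itertools import islice
from typing import Iterable, Iterator, List

def batched(rows: Iterable[dict], size: int = 1000) -> Iterator[List[dict]]:
    """Yield fixed-size batches from any row stream, so peak memory is
    bounded by the batch size, not the dataset size."""
    it = iter(rows)
    while batch := list(islice(it, size)):
        yield batch

# A generator expression stands in for a multi-billion-row dump here.
batches = list(batched(({"i": i} for i in range(2500)), size=1000))
```

Each batch can be flushed to PostgreSQL and discarded before the next one is read.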
How I built it: Python, PostgreSQL, Jinja2 templates, Docker. Used Claude Code throughout as an experiment in AI-assisted development. Learned that the workflow is "trust but verify" – it accelerates the boring parts but you still own the architecture.
Live demo: https://online-archives.github.io/redd-archiver-example/
GitHub: https://github.com/19-84/redd-archiver (Public Domain)
Pushshift torrent: https://academictorrents.com/details/1614740ac8c94505e4ecb9d...
What I'd really like is a plugin that automatically pulls from archives somewhere and replaces deleted comments and those bot-overwritten comments with the original context.
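The core of such a plugin is simple if the archive is keyed by comment id. A rough sketch — the deletion/overwrite markers here are my own heuristic, not anything from the tool:

```python
# Restore overwritten comment bodies from an archived copy of the thread.
DELETED_MARKERS = {"[deleted]", "[removed]"}

def restore(live: list, archived: dict) -> list:
    """Replace deleted or protest-overwritten bodies in `live` with the
    archived originals, matched by comment id."""
    out = []
    for c in live:
        body = c["body"]
        overwritten = body in DELETED_MARKERS or "edited in protest" in body.lower()
        if overwritten and c["id"] in archived:
            c = {**c, "body": archived[c["id"]]}
        out.append(c)
    return out

fixed = restore(
    [{"id": "a", "body": "[deleted]"}, {"id": "b", "body": "still here"}],
    {"a": "fix: reinstall the driver and reboot"},
)
```

Detecting protest-overwritten (as opposed to genuinely edited) comments reliably is the hard part; the marker set above is only a starting point.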
Reddit is becoming maddening to use because half the old links I click have comments overwritten with garbage out of protest for something. Ironically the original content is available in these archives (which are used for AI training) but now missing for actual users like me just trying to figure out how someone fixed their printer driver 2 years ago.
In practice I just give them more page views because I have to view more threads before I find the answer.
Reddit's DAU numbers have only gone up since the protest.
And so has the bot activity.
https://github.com/ArthurHeitmann/arctic_shift/releases
Arctic Shift https://academictorrents.com/browse.php?search=RaiderBDev
Watchful1 https://academictorrents.com/browse.php?search=Watchful1
Sort of like forking a project.
registry readme: https://github.com/19-84/redd-archiver/blob/main/docs/REGIST...
register instances: https://github.com/19-84/redd-archiver/blob/main/.github/ISS...
reddit: https://github.com/19-84/redd-archiver/blob/main/tools/subre...
voat: https://github.com/19-84/redd-archiver/blob/main/tools/subve...
ruqqus: https://github.com/19-84/redd-archiver/blob/main/tools/guild...
data catalog readme: https://github.com/19-84/redd-archiver/blob/main/tools/READM...
reddit data: https://github.com/19-84/redd-archiver/blob/main/tools/subre...
You've probably come across this already, but there are alternative archives to Pushshift that may have differing sets of posts and comments (perhaps depending on removal-request coverage?).
One is Arctic Shift: https://github.com/ArthurHeitmann/arctic_shift/releases
Another is PullPush: https://pullpush.io/
There's no `.env.example` file to copy from. And even if the env vars are set manually, there are issues with the mentioned volumes not existing locally.
Seems like this needs more polish.
https://github.com/19-84/redd-archiver/commit/0bb103952195ae...
the docs have been updated with mkdir steps
https://github.com/19-84/redd-archiver/commit/c3754ea3a0238f...
This is still missing creating the `output/.postgres-data` dir, without which docker compose refuses to start.
After creating that manually, going to http://localhost/ shows a 403 Forbidden page, which makes it look like something went wrong.
This is before running `reddarchiver-builder python reddarc.py` to generate the necessary DB from the input data.
EDIT: Is there any cheap way to search? I have an MS TechNet archive which is useless without search, so I really want a way to do cheap local search without grepping everything.
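One cheap option (not specific to this tool) is SQLite's built-in full-text search: no server, a single index file, and it ships with Python's `sqlite3` module in most builds. A minimal sketch, assuming your build's bundled SQLite has the FTS5 extension enabled:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for a persistent index

# FTS5 virtual table: every column is full-text indexed.
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
conn.executemany(
    "INSERT INTO docs VALUES (?, ?)",
    [
        ("Printer fix", "reinstall the driver and reboot"),
        ("Unrelated", "nothing to see here"),
    ],
)

hits = conn.execute(
    "SELECT title FROM docs WHERE docs MATCH ?", ("driver",)
).fetchall()
```

For a TechNet-sized archive, indexing once and querying a file-backed database like this is far cheaper than repeated greps, and FTS5 supports phrase and boolean queries too.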
It's an open forum - similar to here: whatever I post is in the public forum, and therefore I expect it to be used / remixed however anyone wants.
Maybe Gallowboob
Gross. Why would anyone want to have an archive of Reddit For Neonazis?