Go to this repo (https://github.com/DOSAYGO-STUDIO/HackerBook): you can download it. BigQuery -> ETL -> npx serve docs - that's it. 20 years of HN arguments and beauty can be yours forever. So they'll never die. Ever. It's the unkillable static archive of HN, and it's in your hands. That's my year-end gift to you all. Thank you for a wonderful year, and have a happy and wonderful 2026. Make something of it.
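For the BigQuery step, here's a minimal sketch of what pulling a slice of items might look like, assuming the public bigquery-public-data.hacker_news.full dataset; the actual ETL in the repo may differ, and the column names are from the public dataset as I remember them:

  # Sketch only: pull one day of HN items from the public BigQuery dataset.
  # Assumes google-cloud-bigquery is installed and credentials are configured.
  from google.cloud import bigquery

  client = bigquery.Client()
  sql = """
      SELECT id, type, `by`, time, title, text, parent, score
      FROM `bigquery-public-data.hacker_news.full`
      WHERE DATE(timestamp) = DATE '2025-12-28'
      ORDER BY id
  """
  for row in client.query(sql).result():
      print(row.id, row.type, row.title)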
I watched it in the browser network panel and saw it fetch:
https://hackerbook.dosaygo.com/static-shards/shard_1636.sqlite.gz
https://hackerbook.dosaygo.com/static-shards/shard_1635.sqlite.gz
https://hackerbook.dosaygo.com/static-shards/shard_1634.sqlite.gz
As I paginated to previous days. It's reminiscent of that brilliant SQLite.js VFS trick from a few years ago: https://github.com/phiresky/sql.js-httpvfs - only that one used HTTP range headers, while this one uses sharded files instead.
The interactive SQL query interface at https://hackerbook.dosaygo.com/?view=query asks you to select which shards to run the query against; there are 1,636 in total.
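A rough sketch of what the client has to do per shard: fetch the gzipped SQLite file, decompress it, and open it as a database. Here it is from Python rather than the site's in-browser WASM build; it inspects sqlite_master instead of guessing table names, since I haven't looked at the actual schema:

  # Sketch: download one shard (URL pattern from the network panel) and open it.
  import gzip, sqlite3, urllib.request

  url = "https://hackerbook.dosaygo.com/static-shards/shard_1636.sqlite.gz"
  with urllib.request.urlopen(url) as resp:
      data = gzip.decompress(resp.read())

  with open("shard_1636.sqlite", "wb") as f:
      f.write(data)

  con = sqlite3.connect("shard_1636.sqlite")
  # list the tables first rather than assuming the schema
  for (name,) in con.execute("SELECT name FROM sqlite_master WHERE type='table'"):
      print(name)
  con.close()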
This is my VFS: https://github.com/ncruces/go-sqlite3/blob/main/vfs/readervf...
And using it with range requests: https://pkg.go.dev/github.com/ncruces/go-sqlite3/vfs/readerv...
And having it work with a Zstandard-compressed SQLite database is one library away: https://pkg.go.dev/github.com/SaveTheRbtz/zstd-seekable-form...
I’ve used it in production to self-host Australia-only maps on S3. We generated a single ~900 MB PMTiles file from OpenStreetMap (Australia only, up to Z14) and uploaded it to S3. Clients then fetch just the required byte ranges for each vector tile via HTTP range requests.
It’s fast, scales well, and bandwidth costs are negligible because clients only download the exact data they need.
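The range-request part is plain HTTP, which is why the bandwidth bill stays small: each tile lookup turns into a request for a small byte range. A sketch with a hypothetical bucket/key (the PMTiles header/directory parsing that tells a client which offset and length to ask for is omitted):

  # Sketch: fetch only the first 16 KiB of a large remote file via a Range header.
  # The URL is hypothetical; a PMTiles reader would compute the real offset+length
  # for the tile it wants and issue a request shaped like this one.
  import urllib.request

  req = urllib.request.Request(
      "https://example-bucket.s3.amazonaws.com/australia.pmtiles",
      headers={"Range": "bytes=0-16383"},
  )
  with urllib.request.urlopen(req) as resp:
      chunk = resp.read()
      print(resp.status, len(chunk))  # expect 206 Partial Content, 16384 bytes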
I want something like a db with indexes
Looks like it's still on PyPI though: https://pypi.org/project/sqlite-s3vfs/
You can see inside it with my PyPI package explorer: https://tools.simonwillison.net/zip-wheel-explorer?package=s...
https://github.com/simonw/sqlite-s3vfs
This comment was helpful in figuring out how to get a full Git clone out of the heritage archive: https://news.ycombinator.com/item?id=37516523#37517378
Here's a TIL I wrote up of the process: https://til.simonwillison.net/github/software-archive-recove...
From what I see in GitHub in your copy of the repo, it looks like you don’t have the tags.
Do you have the tags locally?
If you don’t have the tags, I can push a copy of the repo to GitHub too and you can get the tags from my copy.
git push --tags origin
https://simonwillison.net/2021/May/2/hosting-sqlite-database...
https://phiresky.github.io/blog/2021/hosting-sqlite-database...
https://just.billywhizz.io/sqlite/demo/#https://raw.githubus...
I believe that there are also indexing opportunities (not necessarily via e.g. hive partitioning), but frankly I'm kinda out of my depth on it.
There is also a file format to optimize this: https://cogeo.org/
Where did you get the 22GB figure from? On the site it says:
> 46,399,072 items, 1,637 shards, 8.5GB, spanning Oct 9, 2006 to Dec 28, 2025
The HN post title (:
How was the entirety of HN stored in a single SQLite database? In other words, how was the data acquired? And how does the page load instantly if there's 22GB of data having to be downloaded to the browser?
I was getting an error that the users and user_domains tables aren't available, but you just need to change the shard filter to the user stats shard.
> Next, we've got more than just two tables. The quote/paraphrase doesn't make it clear, but we've got two tables per thing. That means Accounts have an "account_thing" and an "account_data" table, Subreddits have a "subreddit_thing" and "subreddit_data" table, etc.
https://www.reddit.com/r/programming/comments/z9sm8/comment/...
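For anyone who hasn't seen that post, a rough reconstruction of the pattern it describes, with illustrative (not Reddit's actual) column names: the "thing" table holds a handful of fixed fields, and the "data" table is a key/value bag for everything else.

  # Illustrative only; not Reddit's real DDL.
  import sqlite3

  con = sqlite3.connect(":memory:")
  con.executescript("""
      CREATE TABLE account_thing (
          id INTEGER PRIMARY KEY,
          ups INTEGER, downs INTEGER, deleted INTEGER, spam INTEGER, date TEXT
      );
      CREATE TABLE account_data (
          thing_id INTEGER REFERENCES account_thing(id),
          key TEXT, value TEXT,
          PRIMARY KEY (thing_id, key)
      );
  """)
  con.execute("INSERT INTO account_thing VALUES (1, 0, 0, 0, 0, '2008-01-01')")
  con.executemany("INSERT INTO account_data VALUES (1, ?, ?)",
                  [("name", "example_user"), ("email", "user@example.com")])
  # Rebuilding a full account means pivoting its key/value rows back together.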
Question - did you consider tradeoffs between duckdb (or other columnar stores) and SQLite?
It has transparent compression built-in and has support for natural language queries. https://buckenhofer.com/2025/11/agentic-ai-with-duckdb-and-s...
"DICT FSST (Dictionary FSST) represents a hybrid compression technique that combines the benefits of Dictionary Encoding with the string-level compression capabilities of FSST. This approach was implemented and integrated into DuckDB as part of ongoing efforts to optimize string storage and processing performance." https://homepages.cwi.nl/~boncz/msc/2025-YanLannaAlexandre.p...
So you can dump e.g. all of Hacker News in a single multi-GB Parquet file somewhere and build a client-side JavaScript application that can run queries against that without having to fetch the whole thing.
You can run searches on https://lil.law.harvard.edu/data-gov-archive/ and watch the network panel to see DuckDB in action.
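The same Parquet-over-HTTP pattern, sketched from Python with the duckdb package rather than the browser build; the URL is hypothetical, and the column name assumes an HN-style dump with a "by" field.

  # Sketch: query a remote Parquet file without downloading all of it.
  # DuckDB's httpfs extension issues HTTP range requests under the hood,
  # so only the needed row groups and columns are fetched.
  import duckdb

  con = duckdb.connect()
  con.execute("INSTALL httpfs")
  con.execute("LOAD httpfs")
  rows = con.execute("""
      SELECT "by", count(*) AS n
      FROM read_parquet('https://example.com/hn_full.parquet')  -- hypothetical URL
      GROUP BY "by"
      ORDER BY n DESC
      LIMIT 10
  """).fetchall()
  print(rows)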
It's different in that it's tailored to analytics: among other things, storage is columnar, and it can run directly off some common data-analytics file formats.
duckdb is a 45M dynamically-linked binary (amd64)
sqlite3 is a 1.7M static binary (amd64)
DuckDB is a 6yr-old project
SQLite is a 25yr-old project
Doesn't scream columnar database to me.
That's too bad; I'd like to see the inner workings with a subset of data, even with placeholders for the posts and comments.
I did something similar. I built a tool[1] to import the Project Arctic Shift dumps[2] of Reddit into SQLite. It was mostly an exercise to experiment with Rust and SQLite (HN's two favorite topics). If you don't build an FTS5 index and import without WAL (--unsafe-mode), importing every Reddit comment and submission takes a bit over 24 hours and produces a ~10TB DB.
SQLite offers a lot of cool JSON features that would let you store the raw JSON and operate on that, but I eschewed them in favor of parsing only once at load time. That also lets me normalize the data a bit.
I find that building the DB is pretty "fast", but queries run much faster if I immediately vacuum the DB after building it. The vacuum operation is actually slower than the original import, taking a few days to finish.
[1] https://github.com/Paul-E/Pushshift-Importer
[2] https://github.com/ArthurHeitmann/arctic_shift/blob/master/d...
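For anyone curious what a "no WAL, unsafe mode" bulk import typically boils down to on the SQLite side, here's a generic sketch using standard pragmas; these aren't necessarily the exact settings Pushshift-Importer uses.

  # Generic fast-bulk-load settings: trade crash safety for import speed.
  import sqlite3

  con = sqlite3.connect("reddit.db")
  con.execute("PRAGMA journal_mode = OFF")     # no rollback journal / WAL
  con.execute("PRAGMA synchronous = OFF")      # don't fsync on every write
  con.execute("PRAGMA cache_size = -1000000")  # ~1 GB page cache
  con.execute("PRAGMA temp_store = MEMORY")

  with con:  # one big transaction instead of per-row commits
      con.execute("CREATE TABLE IF NOT EXISTS comments (id INTEGER PRIMARY KEY, body TEXT)")
      con.executemany("INSERT INTO comments VALUES (?, ?)",
                      [(1, "hello"), (2, "world")])
  con.close()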
> The VACUUM command may change the ROWIDs of entries in any tables that do not have an explicit INTEGER PRIMARY KEY.
means SQLite does something to organize rows by rowid, and that this reordering is doing most of the work.
Reddit post/comment IDs are 1:1 with integers, though expressed in a different base that is more friendly to URLs. I map decoded post/comment IDs to INTEGER PRIMARY KEYs on their respective tables. I suspect the vacuum operation sorts the tables by their reddit post ID and something about this sorting improves tables scans, which in turn helps building indices quickly after standing up the DB.
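The ID mapping itself is just a base-36 decode, assuming Reddit's usual short-ID encoding:

  # Reddit fullnames look like "t3_15bfi0" (submission) or "t1_c7x9aq2" (comment);
  # the part after the underscore is an integer written in base 36.
  def reddit_id_to_int(fullname: str) -> int:
      short_id = fullname.split("_", 1)[-1]
      return int(short_id, 36)

  print(reddit_id_to_int("t3_15bfi0"))  # usable directly as an INTEGER PRIMARY KEY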
Guess it's common knowledge that SharedArrayBuffer (SQLite WASM) does not work in FF without cross-origin isolation, a mitigation against cross-origin attacks (I just found out ;).
Once the initial chunk of data loads, the rest load almost instantly on Chrome. Can you please fix the GitHub link (currently a 404)? I'd like to peek at the code. Thank you!
Converting 22GB of uncompressed text into a video essay lands us at ~1PB, or 1000TB.
One could convert the Markdown/PDF to a very long image first with pandoc+wkhtml, then use ffmpeg to crop and move the viewport slowly over the image; this scrolls at 20 pixels per second for 30s, and with the mpv player one could change the speed dynamically through keys.
ffmpeg -loop 1 -i long_image.png -vf "crop=iw:ih/10:0:t*20" -t 30 -pix_fmt yuv420p output.mp4
Alternatively one could use a Rapid Serial Visual Presentation / speed-reading / Spritz technique to output to mp4, or use a dedicated RSVP program where one can change the speed.
One could also output to a braille 'screen'.
Scrolling mp4 text on the TV or laptop to read is a good idea for my mother and her macular degeneration, or perhaps I should make use of an easier-to-see/read magnification browser plugin instead.
Best locally of course to avoid “I burned a lake for this?” guilt.
I have always known I could scrape HN, but I would much rather take a neat little package.
I've been taking frequent "offline-only-day" breaks to consolidate whatever I've been learning, and Kiwix has been a great tool for reference (offline Wikipedia, StackOverflow and whatnot).
[0] https://kiwix.org/en/the-new-kiwix-library-is-available/
Such a DB might be entertaining to play with, and the threadedness of comments would be useful for beginners to practise efficient recursive queries (more so than the StackExchange dumps, for instance).
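For the recursive-query practice, a minimal sketch of walking a comment tree with a recursive CTE; the items(id, parent) schema here is a hypothetical stand-in for whatever the real archive uses.

  # Sketch: walk one thread from its root item down through all replies.
  import sqlite3

  con = sqlite3.connect(":memory:")
  con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, parent INTEGER)")
  con.executemany("INSERT INTO items VALUES (?, ?)",
                  [(1, None), (2, 1), (3, 1), (4, 2)])  # item 1 is the story

  query = """
      WITH RECURSIVE thread(id, depth) AS (
          SELECT id, 0 FROM items WHERE id = ?              -- the root story
          UNION ALL
          SELECT i.id, t.depth + 1
          FROM items i JOIN thread t ON i.parent = t.id     -- its replies
      )
      SELECT id, depth FROM thread ORDER BY depth, id
  """
  for row in con.execute(query, (1,)):
      print(row)  # (1, 0), (2, 1), (3, 1), (4, 2)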
I have a much simpler database: https://play.clickhouse.com/play?user=play#U0VMRUNUIHRpbWUsI...
Nonetheless, random access history is cool.
Thank you btw
It would be nice for the thread pages to show a comment count.
With all due respect, it would be great if there were an official HN public dump available (one not requiring stuff such as BigQuery, which is expensive).
2026 prayer: for all you AI junkies—please don’t pollute H/N with your dirty AI gaming.
Don’t bot posts, comments, or upvote/downvote just to maximize karma. Please.
We can’t identify anymore who’s a bot and who’s human. I just want to hang out with real humans here.
> 20 years of HN arguments and beauty can be yours forever. So they'll never die. Ever. It's the unkillable static archive of HN, and it's in your hands
I'm really sorry to have to ask this, but this really feels like you had an LLM write it?
I wonder if there's something like this going on here. I never thought it was LLM-written on first read, and I still don't, but when you take snippets and point at them, it makes me think maybe they are.
But it didn’t read as LLM-generated, IMO.
Ooh, I used “sequential”, ooh, I used an em dash. ZOMG AI IS COMING FOR US ALL
Also for reference: “this shortcut can be toggled using the switch labeled 'Smart Punctuation' in General > Keyboard settings.”
Always write what you want, however you want to write it. If some reader somewhere decides to be judgemental because of — you know — an em dash or an X/Y comparison or a compliment or some other thing that they think pins you down as being a bot, then that's entirely their own problem. Not yours.
They observe the reality that they deserve.
Ending a sentence with a question mark doesn’t automatically make your sentence a question. You didn’t ask anything. You stated an opinion and followed it with a question mark.
If you intended to ask if the text was written by AI, no, you don’t have to ask that.
I am so damn tired of the “that didn’t happen” and the “AI did that” people when there is zero evidence of either being true.
These people are the most exhausting people I have ever encountered in my entire life.
From the terms of use [0]:
"""
Commercial Use: Unless otherwise expressly authorized herein or in the Site, you agree not to display, distribute, license, perform, publish, reproduce, duplicate, copy, create derivative works from, modify, sell, resell, exploit, transfer or upload for any commercial purposes, any portion of the Site, use of the Site, or access to the Site. The buying, exchanging, selling and/or promotion (commercial or otherwise) of upvotes, comments, submissions, accounts (or any aspect of your account or any other account), karma, and/or content is strictly prohibited, constitutes a material breach of these Terms of Use, and could result in legal liability.
"""