Crawling a billion web pages in just over 24 hours, in 2025
66 points | 10 hours ago | 8 comments | andrewkchan.dev
bndr
43 minutes ago
I run a small startup called SEOJuice, where I need to crawl a lot of pages all the time, and I can say that the biggest issue with crawling is the blocking part and how much you need to invest to circumvent Cloudflare and similar, just to get access to any website. Bandwidth and storage are the smallest cost factors.

Even though, in my case, users add their own domains, it still took me quite a bit of time to reach a 99% success rate crawling a website, using a mix of residential proxies, captcha solvers, rotating user-agents, and stealth Chrome binaries; otherwise I would get a 403 immediately with no HTML being served.
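A minimal sketch of the rotation part of that setup, in Python with the requests library. The proxy endpoints and user-agent strings are placeholders, and a real setup layers captcha solving and headless-browser fallbacks on top:

    import random
    import requests

    # Hypothetical pools; substitute real residential proxy endpoints and UAs.
    PROXIES = [
        "http://user:pass@proxy1.example.com:8000",
        "http://user:pass@proxy2.example.com:8000",
    ]
    USER_AGENTS = [
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
    ]

    def fetch(url):
        # Rotate proxy and user-agent per request to dodge per-IP/per-UA blocks.
        proxy = random.choice(PROXIES)
        headers = {"User-Agent": random.choice(USER_AGENTS)}
        try:
            resp = requests.get(url, headers=headers,
                                proxies={"http": proxy, "https": proxy},
                                timeout=10)
        except requests.RequestException:
            return None
        if resp.status_code == 403:
            return None  # blocked; a real crawler retries through another proxy
        return resp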

dangoodmanUT
34 minutes ago
> because redis began to hit 120 ops/sec and I’d read that any more would cause issues

Suspicious. I don’t think I’ve ever read anything that says redis taps out below tens of thousands of ops…
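This is easy to sanity-check: a single Redis instance typically sustains tens of thousands of simple ops/sec even over synchronous round-trips. A rough benchmark sketch using the redis-py client, assuming a local Redis on the default port:

    import time
    import redis  # pip install redis

    r = redis.Redis(host="localhost", port=6379)

    N = 100_000
    start = time.perf_counter()
    for i in range(N):
        r.set(f"key:{i}", i)  # one synchronous round-trip per SET
    elapsed = time.perf_counter() - start
    print(f"{N / elapsed:,.0f} ops/sec")  # commonly tens of thousands, not ~120

Pipelining or multiple client connections pushes this far higher still.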

sunpolice
6 minutes ago
I was able to get 35k req/sec on a single node with Rust (custom HTTP stack, custom HTML parser, custom queue, custom KV database) with obsessive optimization. It's possible to scrape a Bing-sized index (say 100B docs) each month with only 10 nodes, for under $15k.
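A quick back-of-the-envelope check of that claim (my arithmetic, not the commenter's):

    # 100B docs/month across 10 nodes: required per-node request rate.
    docs = 100e9
    seconds_per_month = 30 * 24 * 3600      # 2,592,000 s
    total_rps = docs / seconds_per_month    # ~38,600 req/sec fleet-wide
    per_node_rps = total_rps / 10           # ~3,860 req/sec per node
    print(f"{total_rps:,.0f} total, {per_node_rps:,.0f} per node")

At the demonstrated 35k req/sec per node, that leaves roughly 9x headroom over the ~3.9k req/sec each node would need.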

Thought about making it public but probably no one would use it.

throwaway77385
59 minutes ago
> spinning disks have been replaced by NVMe solid state drives with near-RAM I/O bandwidth

Am I missing something here? Even Optane is an order of magnitude slower than RAM.

Yes, under ideal conditions SSDs can have very fast sequential reads, but IOPS and latency have barely improved in recent years, and that's what really makes the difference.

Of course, compared to spinning disks, they are much faster, but the comparison to RAM seems wrong.

In fact, for applications like AI, even using system RAM is often considered too slow, simply because of the distance to the GPU, so VRAM needs to be used. That's how latency-sensitive some applications have become.
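For concreteness, ballpark random-access latencies behind this comparison (rough orders of magnitude of my own, not vendor specs; exact figures vary a lot by device and generation):

    # Rough random-access latencies; illustrative only.
    latency_ns = {
        "DRAM": 100,                 # ~100 ns
        "Optane SSD": 10_000,        # ~10 us
        "NVMe SSD": 100_000,         # ~100 us random read
        "Spinning HDD": 10_000_000,  # ~10 ms seek
    }
    for name, ns in latency_ns.items():
        print(f"{name:>12}: {ns / latency_ns['DRAM']:>9,.0f}x DRAM")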

thefounder
1 hour ago
Well, the most important part seems to be glossed over, and that's the IP addresses. Many websites simply block (or want to block) anything that's not Google and is not a "real user".
finnlab
6 hours ago
Nice work, but I feel like it's not required to use AWS for this. There are small hosting companies with specialized servers (50 Gbit shared medium for under $10), so you could probably do this for under $100 with some optimization.
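Rough arithmetic behind that kind of estimate, under my own assumptions (~1B pages at ~75 KB average, and only a sustained 10 Gbit share of the 50 Gbit link actually usable):

    # Time to pull ~1B pages over a cheap dedicated server's link.
    pages = 1e9
    avg_page_bytes = 75e3                  # assumed average page size
    total_bytes = pages * avg_page_bytes   # ~75 TB
    usable_gbit = 10                       # assume 10 of the 50 Gbit is usable
    seconds = total_bytes * 8 / (usable_gbit * 1e9)
    print(f"{total_bytes / 1e12:.0f} TB in ~{seconds / 3600:.0f} hours")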
nurettin
1 hour ago
I did some crawling on Hetzner back in the day. They monitor traffic and make sure you don't automate publicly available data retrieval. They send you an email telling you that they are concerned because you got the IP blacklisted. Funny thing is: they own the blacklist that they refer to.
varispeed
1 hour ago
This. AWS is like a cash furnace, only really usable for VC backed efforts with more money than sense.
handfuloflight
1 hour ago
There was a time when being able to do this meant you were on the path to becoming a (m)(b)illionaire. Still is, I think.
ph4rsikal
1 hour ago
When I read this, I realize how small Google makes the Internet seem.