Website is served from nine Neovim buffers on my old ThinkPad
118 points | 8 hours ago | 8 comments | vim.gabornyeki.com
ordinaryradical
4 hours ago
[-]
This has to be one of my favorite hacking articles I’ve read on HN. The analysis, concept, and execution :chefskiss:
reply
giancarlostoro
7 hours ago
[-]
I like that the author put it on a subdomain, probably a smarter move. I have an old laptop that I keep installing Linux on without ever deciding what I want to do with it. Maybe I should build quirky web servers on it...
reply
NoahZuniga
7 hours ago
[-]
I'm no expert, but could one contributing factor to the speed be that Neovim keeps the files in RAM while nginx has to go to disk for every request?
reply
jerf
6 hours ago
[-]
Computers are fast. HTTP requests are not that hard. You have to go down to position 480-ish on the latest TechEmpower Fortunes benchmark [1] to find a framework serving ~10,000 requests per second on that simple benchmark, and since the benchmark machine is higher-spec, and possibly running more threads, than "this guy's random laptop he had lying around" (although by the time you get that low in the rankings I suspect we're into single-thread-only frameworks), you could probably take all but the last three entries and get comparable performance here. (Yes, this is not a comparable task; I'm making a point about the speed of HTTP in general, not static file serving.)

Also, as mentioned, nginx on a blog site will certainly not be hitting the disk.

Broadly speaking, in 2025, if a website is slow it is 100% the fault of the app-specific code being run in the web request. I've been HN'd before on a very small VPS, but since my blog is now all static content it doesn't even notice... even when it was making 4 or 5 DB reads per page it didn't notice. This web server is basically fast not because "it's fast" but simply because there's no reason for it to be slow. That's how computers are nowadays; you really have to give them a reason to be slow for a task like this.

You'd think everyone would know this, but I fight a surprising amount of rule-of-thumb estimates from coworkers based on the performance of systems in 2000 or 2010, even from developers who weren't developing then! It's really easy not to realize how much performance you're throwing away by using a scripting language, using multiple fancy runtime features with multiplicative runtime costs, making bad use of databases with too many queries, and failing to do even basic optimizations on said databases, and to come away thinking that 50 queries per second is a lot, when in fact in 2025 you hardly even need to consider the performance of the web requests themselves until you're into the many thousands per core... and that's just when you need to start thinking about it.

Depending on what you are doing, of course, you may need to be considering how your code runs well before that, if your web requests are intrinsically expensive. But you don't need to worry about the web itself until at least that level of performance, and generally it'll be your code struggling to keep up, not the core web server or framework.

[1]: https://www.techempower.com/benchmarks/#section=data-r23

reply
StopDisinfo910
6 hours ago
[-]
> when in fact in 2025 you hardly even need to consider the performance of the web requests themselves until you're into the many thousands per core... and that's just when you need to start thinking about it.

Pretending this is not the case is the bread and butter of so many companies nowadays that saying it is basically like screaming into the void.

You have no idea how many "cloud-native" applications I have seen throwing 10k a month at Databricks for things that a small server in a cupboard could have done just as efficiently with a proper architecture. The company’s architects did enjoy the conferences, though.

At that point, it’s probably better to keep pretending and enjoy the graft like everyone else. Unless you are the one paying, of course.

reply
TZubiri
4 hours ago
[-]
"Broadly speaking in 2025 if a website is slow it is 100% the fault of the app-specific code being run in the web request."

The other 100% is an oversubscribed VPS, shared hosting, disk reads, and network latency.

reply
troupo
5 hours ago
[-]
> I've been HN'd before on a very small VPS, but since my blog is now all static content it doesn't even notice... even when it was making 4 or 5 DB reads per page it didn't notice.

And even then you can have a default Cloudflare setup that will just cache most of the stuff.

I once had two articles hit the top spot on HN. Meh https://x.com/dmitriid/status/1944765925162471619

:)

reply
diffuse_l
7 hours ago
[-]
I'm pretty sure that the website will reside in cache in any case.
reply
cr125rider
5 hours ago
[-]
Yup! The kernel will pull the page from disk and keep it in its disk cache in RAM. Since the kernel is solely in control of what gets written to disk, it can be sure the cached copy never goes stale; it just gets marked “dirty” when the file is updated. The kernel then flushes it to disk, but still keeps active, hot pages in memory.
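
A quick way to see the page cache at work (a hypothetical sketch, not from the article; the file path is made up): time two back-to-back reads of the same file from inside Neovim. The first read may touch the disk; the second is normally served straight from RAM.

    -- Hypothetical sketch: time two consecutive reads of the same file.
    -- vim.uv is Neovim's embedded libuv (vim.loop on older versions).
    -- For a genuinely cold first read on Linux, drop the caches first
    -- as root: echo 3 > /proc/sys/vm/drop_caches
    local uv = vim.uv or vim.loop

    local function time_read(path)
      local t0 = uv.hrtime()                -- monotonic clock, nanoseconds
      local f = assert(io.open(path, "rb"))
      f:read("*a")                          -- read the whole file
      f:close()
      return (uv.hrtime() - t0) / 1e6       -- elapsed milliseconds
    end

    local path = "/var/www/index.html"      -- made-up path
    print(("first read:  %.3f ms"):format(time_read(path)))
    print(("second read: %.3f ms"):format(time_read(path)))

On a warm cache the two numbers usually differ by an order of magnitude or more.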
reply
miyuru
5 hours ago
[-]
For me it resolves to 198.74.55.216, which is a Linode USA IP. No IPv6.

No mention of why it needs to go through a Linode server.

reply
messe
5 hours ago
[-]
They might not have a static public IP; perhaps they're even behind CGNAT.
reply
conradev
4 hours ago
[-]
I have my public gateway on a Hetzner server routing traffic to an overlay network of my rinky-dink servers. Hetzner’s static IP is cheaper and more stable than one I could get from an ISP.
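
Roughly, the gateway half looks like this (an illustrative sketch, not my actual config; the overlay address and ports are made up): a dumb TCP forwarder on the public box relays every connection to a backend reachable over the overlay network. The same luv bindings that Neovim embeds are enough to write it.

    -- Hypothetical sketch of a public gateway: forward every TCP
    -- connection on the public port to a backend on an overlay network.
    -- (Binding port 80 needs root; all addresses below are made up.)
    local uv = vim.uv or vim.loop

    local LISTEN_HOST,  LISTEN_PORT  = "0.0.0.0", 80      -- public side
    local BACKEND_HOST, BACKEND_PORT = "10.0.0.2", 8080   -- overlay side

    -- Copy bytes from src to dst until EOF or error, then tear both down.
    local function pipe(src, dst)
      src:read_start(function(err, chunk)
        if err or not chunk then
          if not src:is_closing() then src:close() end
          if not dst:is_closing() then dst:close() end
        else
          dst:write(chunk)
        end
      end)
    end

    local server = uv.new_tcp()
    server:bind(LISTEN_HOST, LISTEN_PORT)
    server:listen(128, function(err)
      assert(not err, err)
      local client = uv.new_tcp()
      server:accept(client)
      local upstream = uv.new_tcp()
      upstream:connect(BACKEND_HOST, BACKEND_PORT, function(cerr)
        if cerr then client:close(); upstream:close(); return end
        pipe(client, upstream)    -- request bytes  -> backend
        pipe(upstream, client)    -- response bytes -> client
      end)
    end)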
reply
mvieira38
4 hours ago
[-]
This is the kind of stuff that made me like HN initially. Great job
reply
jrop
4 hours ago
[-]
This is what I love HN for. "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should".

This gets at something I've been thinking about Neovim for a while now: now that libuv is embedded, there's really no reason not to use it as a cross-platform application runtime (except for the fact that that's horrific).
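For the curious, here's a minimal sketch of that runtime angle (illustrative only, not the article's actual code; the port and page content are made up). Neovim exposes the embedded libuv as vim.uv (vim.loop on older versions), and the raw TCP bindings are enough to speak just enough HTTP/1.1 to serve a page.

    -- Minimal sketch: a one-page HTTP server on Neovim's embedded libuv.
    -- Source this with :luafile, then curl http://localhost:8080/
    local uv = vim.uv or vim.loop

    local body = "hello from inside nvim\n"      -- made-up page content

    local server = uv.new_tcp()
    server:bind("127.0.0.1", 8080)
    server:listen(128, function(err)
      assert(not err, err)
      local client = uv.new_tcp()
      server:accept(client)
      client:read_start(function(rerr, chunk)
        if rerr or not chunk then client:close(); return end
        -- No request parsing at all: any bytes get the same page.
        client:write(
          "HTTP/1.1 200 OK\r\n"
          .. "Content-Type: text/plain\r\n"
          .. "Content-Length: " .. #body .. "\r\n"
          .. "Connection: close\r\n\r\n"
          .. body,
          function() client:close() end   -- close once the write flushes
        )
      end)
    end)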

reply
jacquesm
4 hours ago
[-]
That's a fun article. As for the "is this even safe" angle: this article should be the go-to example for anybody who thinks their code will never run in the context of unchecked requests coming in over the network because "that would make no sense at all".
reply
yupyupyups
7 hours ago
[-]
Horrific
reply
yupyupyups
7 hours ago
[-]
Jokes aside, it's still cool you managed to do that.
reply
barbazoo
6 hours ago
[-]
I don’t get the joke here
reply
BirAdam
7 hours ago
[-]
It may be horrific, but it's wonderful too.
reply