Unix "find" expressions compiled to bytecode
104 points
15 hours ago
| 3 comments
| nullprogram.com
| HN
drob518
9 hours ago
[-]
From the article:

> I was later surprised all the real world find implementations I examined use tree-walk interpreters instead.

I’m not sure why this would be surprising. The find utility is totally dominated by disk IOPS. The interpretation performance of find conditions is totally swamped by reading stuff from disk. So, keep it simple and just use a tree-walk interpreter.

reply
Someone
7 hours ago
[-]
Is it truly simpler to do that? A separate “command line to byte codes” module would be way easier to test than one that also does the work, including making any necessary syscalls.

Also, decreasing CPU usage may not speed up find (much), but it would leave more time for running other processes.

reply
drob518
7 hours ago
[-]
If it was easier to interpret byte codes, nobody would use a tree-walk interpreter. There’s no performance reason to use a tree-walk interpreter. They all do it because it’s easy. You basically already have the expression in tree form, regardless of where you end up. So, stop processing the tree and just interpret it.
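To make the "you already have the expression in tree form" point concrete, a tree-walk interpreter for find-style predicates really is just a recursive function over the parsed expression. A minimal Python sketch, where the node classes and the shape of `entry` are invented for illustration (no real find implementation looks exactly like this):

```python
import fnmatch
from dataclasses import dataclass

# Illustrative AST node types for a few find primaries.
@dataclass
class Name:      # -name PATTERN
    pattern: str

@dataclass
class Type:      # -type f / -type d
    kind: str

@dataclass
class And:       # implicit -a
    left: object
    right: object

@dataclass
class Or:        # -o
    left: object
    right: object

@dataclass
class Not:       # !
    child: object

def evaluate(node, entry):
    """Walk the tree directly; entry is assumed to carry .name and .kind."""
    if isinstance(node, Name):
        return fnmatch.fnmatch(entry.name, node.pattern)
    if isinstance(node, Type):
        return entry.kind == node.kind
    if isinstance(node, And):    # short-circuits, like find's -a
        return evaluate(node.left, entry) and evaluate(node.right, entry)
    if isinstance(node, Or):
        return evaluate(node.left, entry) or evaluate(node.right, entry)
    return not evaluate(node.child, entry)   # Not
```

So `find . -type f -name '*.c'` becomes `And(Type("f"), Name("*.c"))` and the evaluator is called once per directory entry; there is no separate compile step to write or test.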
reply
maxbond
7 hours ago
[-]
File operations are a good candidate for testing with side effects since they ship with every OS and are not very expensive in a tmpfs, but you don't have to let it perform side effects. You could pass the eval function a delegate which it calls methods on to perform side effects and pass in a mocked delegate during testing.
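A minimal sketch of that delegate idea in Python; the class and method names here are invented, but real side-effecting actions like -delete would map onto the delegate's methods the same way:

```python
import os

class FsDelegate:
    """Production delegate: actually performs the syscalls."""
    def delete(self, path):
        os.remove(path)

class RecordingDelegate:
    """Test delegate: records what would have happened."""
    def __init__(self):
        self.deleted = []
    def delete(self, path):
        self.deleted.append(path)

def run_delete_action(paths, delegate):
    """The evaluator calls through the delegate, never the OS directly."""
    for p in paths:
        delegate.delete(p)
    return len(paths)
```

In tests you pass `RecordingDelegate()` and assert on `.deleted`; in production you pass `FsDelegate()` and the behavior is unchanged.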
reply
chubot
8 hours ago
[-]
Yeah that's basically what was discussed here: https://lobste.rs/s/xz6fwz/unix_find_expressions_compiled_by...

And then I pointed to this article on databases: https://notes.eatonphil.com/2023-09-21-how-do-databases-exec...

Even MySQL, DuckDB, and CockroachDB apparently use tree-walking to evaluate expressions, not bytecode!

Probably for the same reason - many parts are dominated by I/O, so the work on optimization goes elsewhere

And MySQL is a super-mature codebase

reply
drob518
6 hours ago
[-]
I was just reading a paper about compiling SQL queries (actually about a fast compilation technique that allows for full compilation to machine code that is suitable for SQL and WASM): https://dl.acm.org/doi/pdf/10.1145/3485513

Sounds like many DBs do some level of compilation for complex queries. I suspect this is because SQL has primitives that actually compute things (e.g. aggregations, sorts, etc.). But find does basically none of that. Find is completely IO-bound.

reply
hxtk
4 hours ago
[-]
Virtually all databases compile queries in one way or another, but they vary in the nature of their approaches. SQLite for example uses bytecode, while Postgres and MySQL both compile it to a computation tree which basically takes the query AST and then substitutes in different table/index operations according to the query planner.

SQLite talks about the reasons for each variation here: https://sqlite.org/whybytecode.html

reply
drob518
2 hours ago
[-]
Thanks for the reference.
reply
tasty_freeze
12 hours ago
[-]
That is a fun exercise, but I imagine the time to evaluate the conditional expression is a tiny fraction, just a percent or less, of the time it takes to make the file system calls.
reply
nasretdinov
11 hours ago
[-]
For many cases you don't even need to make a stat() call to determine whether or not the file is a directory (the d_type field in the dirent can tell you: https://man7.org/linux/man-pages/man3/readdir.3.html). That's what allows find(1) to be so quick
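Python's os.scandir exposes the same dirent information, so a stat()-free directory filter can be sketched like this (on filesystems that return DT_UNKNOWN, is_dir quietly falls back to a stat internally):

```python
import os

def walk_dirs(root):
    """Yield subdirectory paths under root without stat()ing every entry."""
    with os.scandir(root) as it:
        for entry in it:
            # On Linux this is usually answered from the getdents()
            # buffer's d_type field, with no extra syscall per entry.
            if entry.is_dir(follow_symlinks=False):
                yield entry.path
```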
reply
loeg
8 hours ago
[-]
You could imagine determining from the parsed expression whether or not stat'ing was required.

NFS has readdirplus, but I don't think it ever made its way into Linux/POSIX. (Some filesystems could efficiently return dirents + stat information.)
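The first idea — inspecting the parsed expression up front to decide whether stat() is needed at all — could be sketched like this; the predicate sets and the tuple-shaped AST are assumptions for illustration, not how any real find is structured:

```python
# Primaries that need data from stat()/statx(), vs. ones the dirent
# alone can answer. These sets are illustrative and incomplete.
NEEDS_STAT = {"-size", "-mtime", "-perm", "-user", "-newer"}

def needs_stat(expr):
    """expr is a nested tuple: ('-a'|'-o'|'!', *children) or (primary, arg)."""
    op = expr[0]
    if op in ("-a", "-o", "!"):
        return any(needs_stat(child) for child in expr[1:])
    return op in NEEDS_STAT
```

A walker could run this once after parsing and skip the per-entry stat() whenever it returns False.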

reply
nasretdinov
7 hours ago
[-]
> readdirplus

Well, it definitely does _something_, because on NFS the subsequent stat() calls after reading the directory names do indeed complete instantly :), at least in my testing.

reply
loeg
7 hours ago
[-]
I mean, readdirplus as a local filesystem API. Ultimately unix programs are just invoking getdents() (or equivalent) + stat() (or statx, whatever). Linux nfsclient probably caches the result of readdirplus for subsequent stat.
reply
CerryuDu
11 hours ago
[-]
... not to mention the time it takes to load directory entries and inodes when the cache is cold.
reply
burnt-resistor
22 minutes ago
[-]
I recently wrote a "du" summarizer of additional stats in C because it's faster than du, find, or any sort of scripting language tree walker. The latter is orders of magnitude slower, but ultimately it's bounded by iteration of kernel vfs structures and any hard IOPS that are spent to fetch metadata from slower media.

For archiving, I also wrote a parallel walker and file hasher that only does one pass of data and stores results to a sqlite database. It's basically poor-man's IDS and bitrot detection.
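The commenter's tools are in C, but the one-pass "parallel walk, hash, store in SQLite" idea can be sketched in Python (the schema and function names here are made up; hashing releases the GIL, so plain threads parallelize fine):

```python
import hashlib
import os
import sqlite3
from concurrent.futures import ThreadPoolExecutor

def hash_file(path):
    """Single pass over the file's data, 1 MiB at a time."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return path, h.hexdigest()

def index_tree(root, db_path):
    """Walk root, hash every regular file in parallel, store results."""
    files = [os.path.join(dirpath, name)
             for dirpath, _, names in os.walk(root)
             for name in names]
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS hashes"
                " (path TEXT PRIMARY KEY, sha256 TEXT)")
    with ThreadPoolExecutor() as pool:
        con.executemany("INSERT OR REPLACE INTO hashes VALUES (?, ?)",
                        pool.map(hash_file, files))
    con.commit()
    con.close()
```

Re-running it and diffing the stored hashes against fresh ones is the "poor man's bitrot detection" part.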

reply