I find the ways people extend or build on top of SQLite fascinating. I use it in a few apps, but not on the server (yet). Multi-writer for something like this would be amazing (incredibly difficult to do well, obviously). I work on a home-rolled distributed database (multi-writer), but it has numerous downsides/issues, so I love seeing how other people approach and solve these things.
I'm still waiting to see how they'll prevent accidental corruption from multiple writers; there's a PR implementing write leases, though I'm not sure that's the direction they'll take.
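For anyone unfamiliar with the idea, here's a minimal generic sketch of a write lease - purely illustrative, not necessarily how that PR works. The `try_cas` callback is a hypothetical compare-and-swap on shared storage (e.g. a conditional PUT against an object store):

    import time

    class WriteLease:
        """Generic write-lease sketch (illustrative only): a single holder may
        write; the lease carries an expiry and must be renewed before it
        lapses. try_cas(expected, new) atomically replaces the lease record
        only if it still equals `expected`, returning True on success."""

        def __init__(self, try_cas, node_id, ttl=10.0):
            self._cas = try_cas
            self._node = node_id
            self._ttl = ttl
            self._expires_at = 0.0  # local view of our lease expiry

        def acquire_or_renew(self, observed):
            """observed = (holder, expires_at) as last read from shared storage."""
            holder, expires_at = observed
            now = time.time()
            if holder not in (None, self._node) and expires_at > now:
                return False  # someone else holds a live lease; don't write
            proposed = (self._node, now + self._ttl)
            if self._cas(observed, proposed):
                self._expires_at = proposed[1]
                return True
            return False  # lost the race; re-read and retry later

        def may_write(self):
            # Refuse to write near expiry to leave margin for clock skew and
            # in-flight requests; a writer that can't renew must stop writing.
            return time.time() < self._expires_at - 1.0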
That said, pausing local polling when writes are enabled - i.e. assuming you're the only writer - makes sense; it's a good idea that hadn't occurred to me.
Ideally, I'd like to offer durability on fullfsync. I think this is feasible. In a concurrent system (single host), while a writer is waiting for durability confirmation, readers can continue reading the previous state, and the next writer can read the committed - but not yet durable - data and queue its writes to be batched. You can have as many pending writes as you're willing to have connections.
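Roughly what I have in mind, as a sketch with hypothetical apply/flush callbacks (apply writes into the committed-but-not-yet-durable state, flush stands in for the expensive fullfsync): the first writer to reach the flush lock syncs on behalf of everyone whose writes were applied by then, and later writers either piggyback on that or run the next batch.

    import threading

    class GroupCommitter:
        """Group-commit sketch: writes become visible to the next writer
        immediately, but only one fullfsync runs at a time, and each fsync
        makes every write applied so far durable, so pending writers batch
        up behind it instead of each paying for their own fsync."""

        def __init__(self, apply_fn, flush_fn):
            self._apply = apply_fn                 # e.g. append to the WAL
            self._flush = flush_fn                 # e.g. fsync / F_FULLFSYNC
            self._state_lock = threading.Lock()    # protects the counters
            self._flush_lock = threading.Lock()    # serializes fsyncs
            self._applied_seq = 0                  # committed, maybe not durable
            self._durable_seq = 0                  # known durable

        def commit(self, op):
            # Apply immediately: the next writer can read this data and queue
            # behind the in-flight fsync instead of waiting for it to finish.
            with self._state_lock:
                self._apply(op)
                self._applied_seq += 1
                my_seq = self._applied_seq

            # Block until some fsync covers our write. Whoever gets the flush
            # lock first syncs on behalf of everyone applied up to that point.
            with self._flush_lock:
                if self._durable_seq >= my_seq:
                    return                         # an earlier flush covered us
                with self._state_lock:
                    covered = self._applied_seq
                self._flush()                      # one fullfsync for the batch
                self._durable_seq = covered

Readers aren't shown here; the point is just that durability waits batch up rather than every commit serializing behind its own fsync.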
I'm not sure you can have readers see something different from what writers see. When SQLite promotes a read lock to a write lock under WAL, it checks whether any data has changed since the read snapshot was taken and fails the transaction (SQLITE_BUSY_SNAPSHOT) if it has.
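A quick way to see that failure mode with Python's sqlite3 (file path and table are just for illustration): a connection holding a read snapshot in WAL mode gets SQLITE_BUSY_SNAPSHOT - surfaced as "database is locked" - when it tries to write after another connection has committed.

    import sqlite3

    DB = "demo.db"  # WAL needs a file-backed database, not :memory:

    w = sqlite3.connect(DB, isolation_level=None)   # autocommit
    w.execute("PRAGMA journal_mode=WAL")
    w.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, v INT)")
    w.execute("INSERT INTO t (v) VALUES (0)")

    r = sqlite3.connect(DB, isolation_level=None)

    # Reader opens a deferred transaction; the first SELECT pins its snapshot.
    r.execute("BEGIN")
    print(r.execute("SELECT v FROM t").fetchall())

    # Another connection commits a write in the meantime (allowed under WAL).
    w.execute("UPDATE t SET v = v + 1")

    # Promoting the read transaction to a write transaction now fails because
    # the snapshot is stale (SQLITE_BUSY_SNAPSHOT / "database is locked").
    try:
        r.execute("UPDATE t SET v = 100")
    except sqlite3.OperationalError as e:
        print("write upgrade failed:", e)
    r.execute("ROLLBACK")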
Does anyone know whether you could use this to stitch together a bunch of .db files (that share the same schema) in an ad-hoc way?
For example, if I decided I wanted to synchronize my friend's .db file, could I enable this using Litestream? And what if my friend wanted to sync two of his friends' .db files, but I'm only interested in his changes, not theirs? I assume this kind of fan-out is not possible, but it would be fun if so.
To achieve what you describe, you should just be able to set up a Postgres replica that's set up on top of ZeroFS [0][1].
[0] https://github.com/Barre/ZeroFS
[1] https://github.com/Barre/ZeroFS?tab=readme-ov-file#postgresq...