100k TPS over a billion rows: the unreasonable effectiveness of SQLite
245 points | 5 hours ago | 22 comments | andersmurphy.com
weitendorf
4 hours ago
[-]
I've been working on a hybrid protobuf ORM/generic CRUD server based on sqlite

(code at https://github.com/accretional/collector - forgive the documentation. I'm working on a container-based agent project and also trialling heavy use of agents to write the individual features. It's working pretty well, but the agents have been very zealous about documenting things lol).

This is my first real project using sqlite and we've hit some similarly cool benchmarks:

* 5-15ms downtime to backup a live sqlite db with a realistic amount of data for a crud db

* Capable of properly queueing hundreds of read/write operations when temporarily unavailable due to a backup

* e2e latency of basically 1ms for CRUD operations, including proto SerDe

* WAL lets us do continuous, streaming, chunked backups!

Previously I'd only worked with Postgres and Spanner. I absolutely love sqlite so far - would still use Spanner for some tasks with an infinite budget but once we get Collector to implement partitions I don't think I would ever use Postgres again.

reply
bombela
3 hours ago
[-]
> * 5-15ms downtime to backup a live sqlite db with a realistic amount of data for a crud db

Did you consider using a filesystem with atomic snapshots? For example, SQLite with WAL on BTRFS. As far as I can tell, this should have decent mechanical sympathy.

edit: I didn't really explain myself. This is for zero-downtime backups. Snapshot, back up at your own pace, delete the snapshot.
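
Roughly, the flow would be (a minimal sketch, assuming the db lives on its own BTRFS subvolume mounted at /data; all paths and the backup destination are hypothetical):

    import subprocess
    import time

    # Hypothetical layout: the SQLite db lives on a BTRFS subvolume at /data.
    snap = f"/data/.snapshots/db-{int(time.time())}"

    # 1. Atomic, read-only snapshot (near-instant; no writer pause needed).
    subprocess.run(["btrfs", "subvolume", "snapshot", "-r", "/data", snap], check=True)

    # 2. Copy the frozen db (plus WAL, if present) out at your own pace.
    subprocess.run(["rsync", "-a", f"{snap}/", "/backups/latest/"], check=True)

    # 3. Drop the snapshot so COW overhead doesn't accumulate.
    subprocess.run(["btrfs", "subvolume", "delete", snap], check=True)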

reply
hedora
1 hour ago
[-]
If it’s at 5-15ms of downtime already, you’re in the space where the “zero” downtime FS might actually cause more downtime. In addition to pauses while the snapshot is taken, you’d need to carefully measure things like performance degradation while the snapshot exists (incurring COW costs) and while it’s being GCed in the background.

Also, the last time I checked, the Linux scheduling quantum was about 10ms, so it's not clear backups would even be the largest source of downtime while the system is healthy.

reply
matthewaveryusa
5 hours ago
[-]
The only caveat being this assumes all your data can fit on a single machine, and all your processing can fit on one machine. You can get a u-24tb1.112xlarge with 448 vCPUs and 24TB of RAM for $255/hour and attach 64TB of EBS -- that's a lot of runway.
reply
tuhgdetzhh
5 hours ago
[-]
Or rent a bare-metal machine from Hetzner with 2-3x the performance per core at 90% lower cost [1].

[1] Various HN posts regarding Hetzner vs AWS in terms of costs and perf.

reply
tetha
4 hours ago
[-]
In my experience, a decently managed database scales a very long way.

3x EX44 running Patroni + PostgreSQL would give you 64GB of working memory and at least 512 GB of NVMe for the dataset (configurable with more for a one-time fee), with HA plus one maintenance node. Practically speaking, that would have carried the first 5-10 years of production at the company I work at with ease, for 120 euros of hardware cost/month plus a decent sysadmin.

I also know quite a few companies who toss 3-4x $20k-$30k at Dell every few years to get an on-prem database cluster so that database performance ceases to be a problem (unless the application has bad queries).

reply
ethanwillis
3 hours ago
[-]
This might be true in terms of direct monetary costs.

I want to like Hetzner but the bureaucratic paper process of interacting with them and continuing to interact with them is just... awful.

Not that the other clouds don't also have their own insane bureaucracies so I guess it's a wash.

I'm just saying, I want a provider that leaves me alone and lets me just throw money at them to do so.

Otherwise, I think I'd rather simply deploy my own oversized server in a colo, even at today's insanely inflated hardware prices.

edit: And shortly after writing this comment I see: "Microsoft won't let me pay a $24 bill, blocking thousands in Azure spending" https://news.ycombinator.com/item?id=46124930

reply
earthnail
3 hours ago
[-]
Can you elaborate on the bureaucracy you experienced? I've been a Hetzner customer since last month and so far I thoroughly enjoy it. I haven't encountered any bureaucracy yet.
reply
ethanwillis
1 hour ago
[-]
I think I was still being a bit too harsh, even after noting in my comment that other providers aren't perfect either.

But basically, after the initial paperwork I had some issues with my account getting flagged, even though it sat idle 99.999% of the time. It's not a huge deal for me because I wasn't trying them out for anything serious. I just wondered how often that might happen if I were using it seriously, and what kind of headaches re-verifying everything with them could cause.

From people I know, if everything is going well then their service is great. Server performance is good, pricing is good, etc.

reply
wesammikhail
14 minutes ago
[-]
I was asked for a passport photo when I tried to open an account - literally right after the signup form. Like WHAT? I couldn't believe my eyes. The most insane shit I've ever seen.
reply
Nextgrid
3 hours ago
[-]
> I want a provider that leaves me alone and lets me just throw money at them to do so.

That’s been my experience with Hetzner.

A lot of people get butthurt that a business dares to verify who they're dealing with to filter out the worst of the worst (budget providers always attract those), but as long as you don't mind the reasonable requirement to verify your ID/passport, they're hands-off beyond that.

reply
ethanwillis
1 hour ago
[-]
That's fair and I don't have any major issues with that.

I guess my concern on the bureaucracy is if you are unlucky enough to get flagged as a false positive it can be an annoying experience. And I can't really blame them too hard for having to operate that way in an environment of bad actors.

You're definitely right that the budget providers do attract the types of people trying to do bad things/exploit them in some way.

reply
adityaathalye
4 hours ago
[-]
Heh, the documentation calls out the limits. Maximum (theoretical) DB size is 281TB: https://sqlite.org/limits.html

> This particular upper bound is untested since the developers do not have access to hardware capable of reaching this limit.

> However, tests do verify that SQLite behaves correctly and sanely when a database reaches the maximum file size of the underlying filesystem (which is usually much less than the maximum theoretical database size) and when a database is unable to grow due to disk space exhaustion.

reply
DenisM
5 hours ago
[-]
Scale-up solves a lot of problems for stable workloads. But elasticity is poor, so you either live with overprovisioned capacity (multiples, not percentages) or fail under spiky load, which is often the most valuable moment (viral traffic, Black Friday, etc).

No one has solved this problem. Scale-out is typically more elastic, at least for reads.

reply
kragen
4 hours ago
[-]
That's a good point, but when one laptop can do 102,545 transactions per second, overprovisioned capacity is a rather more reasonable thing to use than back when you needed an Amdahl mainframe to hit 100 transactions per second.
reply
DenisM
2 hours ago
[-]
As compute becomes cheaper your argument becomes more and more true.

But it only works if workloads remain fixed. If workloads grow at similar rates you’re back to the same problem.

reply
kragen
2 hours ago
[-]
Well, it doesn't work for the newly added workloads. But for the most part we instead have the same workloads performed less efficiently.
reply
CuriouslyC
2 hours ago
[-]
I love Hetzner for internal resources because those workloads aren't spiky. For external stuff I like to do co-processing: you can load-balance out to Cloudflare/AWS/GCP services like containers/Run/App Runner/etc.
reply
masterj
1 hour ago
[-]
I suspect that for a large number of orgs, accepting over-provisioning would be significantly cheaper than the headcount required for a more sophisticated approach, while allowing faster movement due to lower overall complexity.
reply
embedding-shape
5 hours ago
[-]
> The only caveat being this assumes all your data can fit on a single machine

Does my data fit in RAM? https://yourdatafitsinram.net/

Not sure using EC2/AWS/Amazon is a good example here; if you're squeezing for large single-node performance, you most certainly go for dedicated servers, or at least avoid vCPUs like the plague.

reply
jandrese
4 hours ago
[-]
That site is a bit questionable. I entered "64TB" and it was happy to show me a bunch of servers that max out at 6 or 8TB. Even the one server that listed 64TB of RAM might be questionable, since it leaves no room for the OS or your applications. That said, 64TB is a gargantuan amount of data, so I'm not too worked up over it not fitting in RAM. Lord help you if you have a power outage and have to reload the data from disk.
reply
ErroneousBosh
3 hours ago
[-]
Getting on for ten years' worth of forum posts on https://rangerovers.pub/ comes to about 32MB of SQL dump.

So yeah, easily.

reply
paulddraper
3 hours ago
[-]
How does 25 TiB fit in RAM when the max machine has 24 TB?
reply
kiitos
3 hours ago
[-]
and that your application doesn't need to be resilient to host or network faults
reply
CyberDildonics
2 minutes ago
[-]
Those numbers are just called a database.
reply
diath
4 hours ago
[-]
> Hopefully, this post helps illustrate the unreasonable effectiveness of SQLite as well as the challenges you can run in with Amdahl's law and network databases like postgres.

No, it does not. The article first says that normally you would run the application and the database on separate servers, and then starts measuring the performance of a locally embedded database. If you have to keep the initial requirement for your software, then SQLite is completely out of the equation. If you can change the requirement, then you can achieve similar performance by tuning a local PGSQL instance -- and then it also becomes an evaluation of features, not just raw throughput. I'm not saying SQLite is not an option either, but this article seems confusing in that it compares two different problems/solutions.

reply
sethev
3 hours ago
[-]
Right - but SQLite handily beats the case where postgres is on the same box as well. And it's completely reasonable to test technology in the configuration in which it would actually run.

As an industry, we seem to have settled on patterns that are actually quite inefficient. There's no problem that requires doing things inefficiently just because someone said databases should run on a different host.

reply
stickfigure
2 hours ago
[-]
If you're going to run on more than one piece of hardware, something is going to be remote to your single writer database.

As an industry, we've generally decided against "one big box", for reasons that aren't necessarily performance related.

reply
BugsJustFindMe
3 hours ago
[-]
> If you have to keep the initial requirement for your software, then SQLite is completely out of equation.

No it isn't? You can run a thin sqlite wrapping process on another server just fine. Ultimately all any DB service is, PostgreSQL included, is a request handler and a storage handler. SQLite is just a storage handler, but you can easily put it behind a request handler too.

Putting access to sqlite behind a serial request queue used to be the standard way of implementing multi-threaded writes. That's only spitting distance away from also putting it behind TCP.
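
As a toy illustration of that shape (a made-up one-statement-per-line protocol over TCP; a sketch, not a hardened server):

    import socketserver
    import sqlite3

    class SQLiteHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Toy wire protocol: one SQL statement per line, repr'd rows back.
            sql = self.rfile.readline().decode().strip()
            try:
                rows = self.server.conn.execute(sql).fetchall()
                self.server.conn.commit()
                self.wfile.write((repr(rows) + "\n").encode())
            except sqlite3.Error as e:
                self.wfile.write(f"ERR {e}\n".encode())

    if __name__ == "__main__":
        # TCPServer handles one request at a time, so access to the single
        # SQLite connection is naturally serialized.
        server = socketserver.TCPServer(("127.0.0.1", 8484), SQLiteHandler)
        server.conn = sqlite3.connect("app.db")
        server.serve_forever()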

reply
gunnarmorling
1 hour ago
[-]
You could do that, but you'd run into exactly the same bottleneck the author describes with a remote Postgres instance. The workload exposes high contention on hot rows. If transactions are kept open for several milliseconds because of remote network calls between the client and the DB server, throughput will be just as limited with SQLite.
reply
chasil
1 hour ago
[-]
As long as WAL mode is not enabled, connections over NFS/SMB or other file sharing protocols will work.

I'm not saying that this is a good idea, and it could fail in a spectacular manner, but it can be done. DML over this is just asking for trouble.

reply
Barathkanna
2 hours ago
[-]
Exactly. People forget that “SQLite can’t do X” often really means “SQLite doesn’t ship with X built in.” If you wrap it with a lightweight request handler or a queue, you essentially recreate the same pattern every other DB uses. The fact that PostgreSQL bundles its own coordinator doesn’t make SQLite fundamentally incapable. It just means you choose whether you want that layer integrated or external.
reply
formerly_proven
3 hours ago
[-]
Well that's just dqlite/rqlite.
reply
benjiro
7 minutes ago
[-]
> Well that's just dqlite.

Far from it: now you're not just dealing with the network but also with Raft consensus. So each write is not just a network round trip; it also has to be acknowledged by a quorum of nodes. And your reads go through the leader, which can mean that if somebody hits the app on node 1 but node 2 is the leader, well, ...

It's slower on reads and writes than the plain replication PostgreSQL does. And I don't mean async; even synchronous PostgreSQL replication will be faster.

The reason dqlite exists is that Canonical needed something to synchronize their virtualization cluster (LXD), and they needed a database with Raft consensus that is a library (not a full-blown server install like Postgres). Performance was not the focus, and its use case is quite different from what most people here need.

reply
BugsJustFindMe
3 hours ago
[-]
Dqlite and rqlite are primarily for building fault-tolerant clusters. But if you just take the network access part, then OK sure, but also, so what?
reply
otoolep
3 hours ago
[-]
rqlite[1] creator here.

Nit: dqlite is a library; it is not a network-exposed database like rqlite is. Sure, it requires connecting to other nodes over the network, but local access is in-process. In contrast, one connects to rqlite over the network - HTTP specifically.

[1] https://rqlite.io

reply
andersmurphy
4 hours ago
[-]
Paradoxically, raw throughput matters a lot more if you are going to scale on a single box. SQLite is 10x PG on a single box in this example. Given that databases tend to be the bottleneck, that can take you an order of magnitude further. PG on the same server will also be slower the more complex the transaction, as Unix sockets are still considerably slower than a function call.

The other thing to point out in this article is that the PG network example CANNOT scale horizontally due to the power law. You can throw a super cluster at the problem and still fundamentally do around 1000 TPS.
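
Back-of-envelope, using latencies mentioned in this thread: if every transaction must hold the same hot row's lock across one round trip, that row alone caps system-wide throughput, however many app servers you add.

    # The hot row's lock is the serial section: transactions on that row
    # cannot overlap, so lock-hold time alone bounds throughput.
    for lock_hold_ms in (1, 5, 10):
        print(f"{lock_hold_ms}ms lock hold -> max {1000 // lock_hold_ms} TPS on that row")
    # 1ms -> 1000 TPS, 5ms -> 200 TPS, 10ms -> 100 TPS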

reply
prisenco
3 hours ago
[-]
Also important is just how fast cheap hardware has gotten, which means vertical scaling is extremely effective. People could get a lot farther than they imagine with SQLite in WAL mode on a single box with an NVMe drive. It feels like our intuition has not caught up with the material reality of current hardware.

And now that there are solid streaming backup systems, the only real issue is redundancy, not scaling.

reply
barfoure
3 hours ago
[-]
> Paradoxically, raw throughput matters a lot more if you are going to scale on a single box.

There’s absolutely nothing paradoxical about any of this.

reply
embedding-shape
4 hours ago
[-]
> If you have to keep the initial requirement for your software, then SQLite is completely out of equation.

It'd be a very short article if so, don't you think? The full article would be something like: "Normally you'd have a remote connection to the database, and since we're supposed to test SQLite's performance, and SQLite is embedded, it doesn't compare. Fin"

reply
stonemetal12
4 hours ago
[-]
The table of data at the end of the article has 7 rows, and only one has data for both DBs. What was the point of setting up the comparison if there is no comparison made?
reply
andersmurphy
4 hours ago
[-]
Because it shows that a networked RDBMS cannot get you out of this predicament.
reply
fulafel
4 hours ago
[-]
What it says first is: "SQLite is for phones and mobile apps (and the occasional airliner)! For web servers use a proper database like Postgres!"

Though I'd say it's for a broader set of applications than that (embedded apps, desktop apps, low-concurrency server apps etc).

Phone and mobile app installations of course outnumber web app deployments, and it doesn't say what you paraphrased about servers.

reply
badmonster
4 hours ago
[-]
The real insight here is recognizing when network latency is your bottleneck. For many workloads, even a mediocre local database beats a great remote one. The question isn't "which database is best" but "does my architecture need to cross network boundaries at all?"
reply
andersmurphy
4 hours ago
[-]
(author here) Yes, 100% this. This was never meant to be a SQLite vs Postgres article per se; it's more about the fundamental limitations of network databases in some contexts. Admittedly, at times I struggled to convey this in the article.
reply
slashdave
4 hours ago
[-]
Sure. Now keep everything in memory and use redis or memcache. Easy to get performance if you change the rules.
reply
koakuma-chan
3 hours ago
[-]
You can use SQLite for persistence and a hash map as cache. Or just go for Mongo since it's web scale.
reply
Yodan2025
1 hour ago
[-]
yep, then add an AWS worker in-between
reply
jaketoronto
3 hours ago
[-]
Are you limiting your number of connections to Postgres to 8? Is this unnecessarily throttling your throughput? This seems like quite the bottleneck... connection pools are good when your app is overwhelming your DB, but in this case you really should be trying to put more load on Postgres. I'm concerned that this whole experiment is tainted by this choke point. I would love to see this tested again with a much larger connection pool. (Also, it might be nice for you to share what the CPU and thread usage on the Postgres side was during this.) (Side note: I'm a big fan of SQLite, but I've run into lots of performance bottlenecks caused by arbitrarily low connection pools and the like.)
reply
ch2026
18 minutes ago
[-]
Yeah I don’t buy this article. Something is off. I run vastly more TPS on a very barebones MySQL install over the network.
reply
jaketoronto
2 hours ago
[-]
To further explain:

You mention setting the conn pool to 8 to match your number of cores. That would be fine if you didn't have any sleeps inside your txns... but the moment you added the sleeps inside the txns, your limit of 8 kills your throughput, because no other thread can access the DB once 8 of them grab connections and start the 20ms of total sleep. Imagine instead if you had 64 connections... you would 8x your throughput. What if you were to go even higher? At some point you might start overloading the DB... at that point, you could consider tuning the DB to accept more connections... or maybe you've truly reached the DB's peak performance limit.

I just don't think that 8 connections represents that limit... you need to do everything you can to open up your client config until you reach PG's limitations.
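
For intuition, a quick back-of-envelope using the 20ms of in-transaction sleep described above (ignoring contention and server limits, which is exactly what the author's reply below pushes back on):

    # Each transaction pins a pooled connection for ~20ms of sleep, so
    # pool size divided by hold time bounds throughput.
    txn_hold_s = 0.020
    for pool_size in (8, 64, 256):
        print(f"{pool_size:>3} connections -> ~{pool_size / txn_hold_s:,.0f} TPS ceiling")
    # 8 -> ~400, 64 -> ~3,200, 256 -> ~12,800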

reply
andersmurphy
2 hours ago
[-]
A larger pool actually makes the numbers worse because it adds more contention. I tested it with 64 and all the results were worse. The last example, which was 348 TPS, drops to 164 TPS!

Tangentially I also highly recommend this article on pool sizing.

https://github.com/brettwooldridge/HikariCP/wiki/About-Pool-...

reply
tomasol
2 hours ago
[-]
The author is setting PRAGMA synchronous="normal", meaning fsync is not issued as part of every write tx, only eventually. To make the comparison fair it should be set to "full".
reply
andersmurphy
2 hours ago
[-]
PRAGMA synchronous="normal" is fine if you are in WAL mode. The database cannot be corrupted by power loss, unlike in journal mode.

> The synchronous=NORMAL setting provides the best balance between performance and safety for most applications running in WAL mode. You lose durability across power loss with synchronous NORMAL in WAL mode, but that is not important for most applications. Transactions are still atomic, consistent, and isolated, which are the most important characteristics in most use cases.
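
For reference, a minimal sketch of that configuration (the file name and schema are placeholders):

    import sqlite3

    conn = sqlite3.connect("app.db")  # placeholder path
    conn.execute("PRAGMA journal_mode=WAL")    # readers proceed during writes
    conn.execute("PRAGMA synchronous=NORMAL")  # fsync at checkpoints, not per commit
    conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
    with conn:  # atomic; survives app crash, may lose last commits on power loss
        conn.execute("INSERT OR REPLACE INTO kv VALUES ('hello', 'world')")
    conn.close()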

reply
tomasol
1 hour ago
[-]
fsync is the most expensive operation during a write. NORMAL mode means you don't care whether the last ~100ms of transactions before a process crash / VM restart are persisted or not. My suggestion is to either use synchronous="full" or disable `synchronous_commit` on Postgres, to avoid comparing apples to oranges.

Edit: Also, the example indicates financial transactions. Can you explain why you need serializability but not durability?

reply
tomconnors
4 hours ago
[-]
Cool stuff as usual, Anders. One of the nice things about running a networked DB is that it makes redeploying the application a bit simpler. You can spin up a new EC2 instance or whatever, and once it's online, kill the old one. That gets zero or close to zero downtime. If the DB is on the same instance, replacing it requires loading the DB onto the new instance, which seems more error-prone than just restarting the app on the original instance - but in my experience that typically incurs downtime or some complicated handoff logic. Have you had to deal with anything like that running SQLite in prod?
reply
cwillu
16 minutes ago
[-]
SQLite supports multi-process access, so as long as you've configured it to permit that, you can do the same "start the new process before retiring the old one" dance.
reply
andersmurphy
3 hours ago
[-]
Oh there are a bunch of considerations.

You're going to want persistent storage on your server, not ephemeral. You'll also want NVMe. A lot of the time you're going to end up on bare metal running a single server anyway.

You're going to have downtime for migrations unless you're very clever with your schema and/or replicas.

Litestream, for me at least, is what makes SQLite viable for a web app, as prior to it there wasn't a good replication story.

With Litestream it's much easier to have a backup on standby. That being said, where I have used it in production, some amount of downtime has been acceptable, so mileage may vary.

reply
kiitos
2 hours ago
[-]
> You're going to have downtime for migrations unless you're very clever with your schema and/or replicas.

probably worth stating these kinds of design considerations/assumptions up-front

i'm sure lots of applications are fine with "downtime for [database] migrations" but lots more are definitely not, especially those interested in synthetic metrics like TPS

reply
andersmurphy
1 hour ago
[-]
I'd argue the opposite: most applications are fine with an hour of downtime a month, and arguably much more downtime than that. The recent AWS and Cloudflare outages have proven that.

You can achieve zero downtime with SQLite if you really need to.

TPS is not a synthetic metric when you cap out at 100 TPS because of Amdahl's law and your users following a power-law distribution.

reply
janci
2 hours ago
[-]
How does SQLite handle HA setups? The minimum I want is reliable automatic failover in reasonable time for a user-facing service. Ideally an active-active setup.
reply
vhodges
2 hours ago
[-]
SQLite is just a library (in C).

A few projects:

  * https://github.com/rqlite/rqlite - Distributed, fault-tolerant cluster
  * https://litestream.io/ - Replication to S3 (or compatible); more disaster recovery than failover
  * https://fly.io/docs/litefs/ - Distributed replication (same author as Litestream); requires writes to be redirected to the primary
I am debating Postgres vs sqlite (probably with litestream) for a project right now.

And other than HW redundancy, I can get pretty far by scaling vertically on a single box. And for my app, I (and my users!) could probably live with some occasional downtime (as long as the data is replicated/backed up).

If I get 20-50K users it'll be a successful venture, so I don't need much these days, and it will be cheaper and easier to run as well.

reply
otoolep
1 hour ago
[-]
rqlite creator here, happy to answer any questions.
reply
nchmy
1 hour ago
[-]
There are various options now, but I'm most interested in Marmot, which is multi-master and just came out of a two-year hibernation with a big overhaul that introduced a native gossip mechanism to replace NATS/Raft.

https://github.com/maxpert/marmot/

reply
alexwennerberg
5 hours ago
[-]
Does anyone have examples of organizations that have leveraged SQLite and written about their experience? I've read a lot of theory and benchmarks about it lately and it seems extremely impressive, but I'm wondering if anyone has written about pushing it to its limits "in production"
reply
andersmurphy
4 hours ago
[-]
https://use.expensify.com/blog/scaling-sqlite-to-4m-qps-on-a...

It's linked at the bottom of the article and is the most extreme example I've seen.

reply
alexwennerberg
4 hours ago
[-]
Thank you! I read this a long time ago and could never remember it, thinking it was Shopify that wrote it.
reply
yolo3000
4 hours ago
[-]
Maybe they should have stuck with traditional tech and used their talent on something else. The stock is down 96%.
reply
ducktective
3 hours ago
[-]
Does anyone have rough numbers (max daily users, etc.) on the viability of SQLite vs PostgreSQL for a typical user-facing webapp or e-commerce application?

I know that due to some recent update, SQLite can support concurrent reads but still only a single writer. In which cases would this be a problem?

Some recommend it's better to start with Postgres anyway if you have any remote thoughts of scaling in mind...

reply
bob1029
2 hours ago
[-]
This is great until you encounter a customer with a hard RPO requirement of 0. SQLite has a few replication options, but I would never trust them in high-stakes domains over PGSQL/MSSQL/DB2/Oracle/etc.
reply
BenjiWiebe
2 hours ago
[-]
I'm curious, is an RPO of 0 truly expected or needed? I can easily believe that some places would "require" it.

What kind of data is so critical that the data from a quarter second before catastrophic destruction must be saved?

I guess weapons testing, at least... but even that wouldn't be streaming data of such importance for a very large % of the time.

reply
pmdulaney
4 hours ago
[-]
Prefer "surprising" to "unreasonable".
reply
jhoechtl
4 hours ago
[-]
But that wouldn't be cool
reply
jcadam
2 hours ago
[-]
I've been getting some good results from sqlite-vec on my current project. I need to look at geospatial extensions next.
reply
andrewvc
2 hours ago
[-]
The HN SQLite worship posts have gotten out of hand. What's next, a post on how appending to files is faster than Kafka?

It's great that some people have workloads that this is a fit for. What's more common is the use case that managed databases like RDS solve for: you have some quantity of data that you want to always be there, be available over a network for whatever app(s) need it, and you want backups, upgrades, access control, etc. solved for you.

I love SQLite and reach for it for hobby projects, but as a product for general business apps it is quite niche. It has the qualities that make for any popular product on HN: a great getting-started experience and a complex maintenance and operational experience.

reply
gausswho
3 hours ago
[-]
SQLite DBs do not reliably survive Syncthing across multiple devices. Until that's true, I'm always on the lookout for a DB design that does.
reply
shadowgovt
2 hours ago
[-]
Good. I feel like people keep discovering this principle that "when the engine does far, far fewer things, it's faster," but I certainly won't bash people for learning this, however they get there.
reply
westurner
5 hours ago
[-]
That's a helpful TPS Report.

TIL `SAVEPOINT` can occur inside a BEGIN ... END SQLite transaction, which helps with optimizing batch size on a particular node under a given load.
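
A small sketch of that pattern (table name and batch sizes made up): each batch gets its own `SAVEPOINT` inside one enclosing transaction, so a bad batch can be rolled back without abandoning the whole run.

    import sqlite3

    conn = sqlite3.connect("batch.db", isolation_level=None)  # manual txn control
    conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)")

    conn.execute("BEGIN")
    for batch in range(3):
        conn.execute("SAVEPOINT batch_sp")
        try:
            for i in range(1000):
                conn.execute("INSERT INTO events (payload) VALUES (?)", (f"row-{batch}-{i}",))
            conn.execute("RELEASE batch_sp")      # keep this batch
        except sqlite3.Error:
            conn.execute("ROLLBACK TO batch_sp")  # undo just this batch
            conn.execute("RELEASE batch_sp")
    conn.execute("COMMIT")  # SQLite accepts END as a synonym for COMMIT
    conn.close()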

Is there a solution for SQLite WAL corruption?

From https://news.ycombinator.com/item?id=45133444 :

> "PSA: SQLite WAL checksums fail silently and may lose data" https://news.ycombinator.com/item?id=44672902

> sqlite-parquet-vtable, [...]

reply
matharmin
5 hours ago
[-]
As mentioned in those threads, there is no SQLite WAL corruption if you have a working disk & file system. If you don't, then all bets are off - SQLite doesn't protect you against that, and most other databases won't either. And nested transactions (SAVEPOINT) won't have any impact on this - all they do in this form is reduce the number of transactions you have.
reply
westurner
4 hours ago
[-]
> working disk & file system

And a working ECC or non-ECC RAM bus, and [...].

How bad is recovery from WAL checksum / journal corruption [in SQLite] [with batching at 100k TPS]?

And should WAL checksums be used for distributed replication "bolted onto" SQLite?

>> (How) Should merkle hashes be added to sqlite for consistency? How would merkle hashes in sqlite differ from WAL checksums?

SQLite would probably still be faster over the network with proper Merkleization

reply
busymom0
4 hours ago
[-]
Previously, I had always used Postgres for the database and Rust or NodeJS for my backend. For my new website (https://limereader.com/), I used Swift for the backend, SQLite for the database, and Vapor as the web server in the Swift app, and I'm self-hosting the site on an old Mac mini.

An SQLite-related issue I ran into had to do with accessing the database from multiple threads. I found a solution easily: for multi-threaded use, SQLite needed to be initialized with the `SQLITE_OPEN_FULLMUTEX` flag. Since then, the website has been running seamlessly for about 3 weeks.

reply
maxmcd
4 hours ago
[-]
This will block threads while they wait for other threads to write. That might work great for your threading model, but I usually end up putting the writer in one thread and having other threads send writes to it.
reply
busymom0
4 hours ago
[-]
I do open 2 connections:

First one for writing with flags:

    SQLITE_OPEN_CREATE | SQLITE_OPEN_READWRITE | SQLITE_OPEN_FULLMUTEX
Second one for reading with flags:

    SQLITE_OPEN_READONLY | SQLITE_OPEN_FULLMUTEX
As you can see, I have SQLITE_OPEN_FULLMUTEX on both of them. Should I only have it on the writing one?
reply
wmanley
4 hours ago
[-]
Use a connection per-thread instead. By sharing a connection across threads you’ll be limiting concurrency - and transactions won’t work as you’d expect. SQLite connections are not heavy.

Also: use WAL mode and enable mmap.
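
A minimal sketch of that advice, one connection per thread with WAL and mmap enabled (the mmap size is an arbitrary example value):

    import sqlite3
    import threading

    _local = threading.local()

    def get_conn(db_path="app.db"):
        # One connection per thread; SQLite connections are cheap.
        if not hasattr(_local, "conn"):
            conn = sqlite3.connect(db_path)
            conn.execute("PRAGMA journal_mode=WAL")
            conn.execute("PRAGMA mmap_size=268435456")  # 256 MiB, example value
            _local.conn = conn
        return _local.conn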

reply
andersmurphy
4 hours ago
[-]
You don't need fullmutex if you manage your connections correctly at the application level, i.e. ensure each connection is only used from a single thread at a time. I also highly recommend having an MPSC queue for your batches/writes and making them go through a single connection, so you don't have to deal with SQLITE_BUSY or SQLITE_LOCKED.
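
A bare-bones sketch of that MPSC arrangement (names and schema are illustrative):

    import queue
    import sqlite3
    import threading

    # All writes funnel through one connection on one thread, so producers
    # never see SQLITE_BUSY / SQLITE_LOCKED.
    write_q = queue.Queue()

    def writer_loop(db_path):
        conn = sqlite3.connect(db_path)
        conn.execute("PRAGMA journal_mode=WAL")
        conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
        while True:
            item = write_q.get()
            if item is None:  # shutdown sentinel
                break
            sql, params = item
            conn.execute(sql, params)
            conn.commit()
        conn.close()

    threading.Thread(target=writer_loop, args=("app.db",), daemon=True).start()

    # Any thread can enqueue a write without touching the connection:
    write_q.put(("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("k1", "v1")))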
reply
dmezzetti
4 hours ago
[-]
I've used SQLite as the content storage engine for years with TxtAI. It works great. Also plenty of good add-ons for it such as sqlite-vec for storing vectors. It can take you pretty far and maybe it's all you need in many circumstances.
reply
koakuma-chan
3 hours ago
[-]
Alternative title: SQLite is All You Need
reply
Tractor8626
2 hours ago
[-]
SQLite is very cool. But what was the point of slowing Postgres down?

> But, wait our transactions are not serialisable, which they need to be if we want consistent transaction processing

You either don't know what serializable does or you're trying to mislead the reader. There is zero reason to use serializable here.

> Let's say you have 5ms latency between your app server and your database.

5ms latency is unrealistic, unless you use WiFi or your database is in another datacenter.

> I'm talking about transactions per second, specifically interactive transactions that are common when building web applications

No, they are not common at all. You probably invented them just to make PG look bad.

reply
gunnarmorling
1 hour ago
[-]
> No they are not common at all. You probably invented them just to make pg look bad.

Interactive transactions are a well-defined concept in database technology, exactly with the semantics described by the author: transactions with multiple queries, with application logic in between, for instance taking a result from one query, processing it, and running a second query with the outcome of that processing as input.

That said, the example in the blog post feels a bit contrived to me, the question being whether that transaction could be structured in a way that both updates run right after each other towards the end of the transaction, thus significantly reducing the duration of the row lock being held (which is not to say that the general problem described in the article doesn't exist; it does for sure).
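
For illustration, a sketch of that restructuring against a hypothetical accounts table (sqlite3 standing in for any DB-API driver): the read and application logic come first, and both balance updates are issued back to back at the end, so row locks are held for two statements rather than the whole read-compute-write span.

    import sqlite3

    def transfer(conn, src, dst, amount):
        with conn:  # one transaction
            # Read + application logic first, before the write locks matter.
            (balance,) = conn.execute(
                "SELECT balance FROM accounts WHERE id = ?", (src,)
            ).fetchone()
            if balance < amount:
                raise ValueError("insufficient funds")
            # Both writes together at the very end: the lock window is now
            # two statements, not the whole read-compute-write round trip.
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (amount, dst))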

reply
andersmurphy
1 hour ago
[-]
I wanted the example to be simple and relatable, so I went with something close to the transaction example on the Postgres site (which, to be honest, also feels contrived).

In practice I'd never implement a ledger like this these days; I'd much prefer an append-only model with some triggers. But, like you said, interactive transactions are very much a thing.

Out of curiosity do you have a better/less contrived example in mind?

reply
andersmurphy
2 hours ago
[-]
> You either don't know what serializable does or trying to mislead the reader. There is zero reason to use searializable here.

If you're processing financial transactions, you want your isolation level to be serialisable, as the order in which the transactions are processed matters.

> 5ms latency is unrealistic. Unless you use wifi or you database is in another datacenter.

Even with 1ms latency, Amdahl's law will still make you cap out at a theoretical 1000 TPS if you have 100% row-lock contention.

> No they are not common at all. You probably invented them just to make pg look bad.

I'm confused. I invented transactions? Are you saying you don't use transactions with rollback when you use PG?

reply
Tractor8626
55 minutes ago
[-]
> If you're processing financial transactions you want your isolation level to be serialisable. As the order in which the transactions are processed matters.

So you don't know what the serializable level is.

> Even with 1ms latency. Amdahl's law will still make you cap out at a theoretical 1000 TPS if you have 100% row lock contention.

So why use 5ms and 10ms for examples?

> I'm confused. I invented transactions?

"Interactive" transactions.

reply