What if we treated Postgres like SQLite?
73 points · 2 days ago · 17 comments · maragu.dev · HN

eirikbakke · 2 days ago
The PostgreSQL data directory format is not very stable or portable. You can't just ZIP it up and move it to a different machine, unless the new machine has the same architecture and "sufficiently" similar PostgreSQL binaries.

In theory the data directory works with any PostgreSQL binaries from the same major version of PostgreSQL, but I have seen cases where this fails e.g. because the binaries were from the same major version but compiled with different build options.

OutOfHere · 2 days ago
I would never ever zip up a PostgreSQL data directory and expect it to restore elsewhere. I would use a proper export. If the export is too slow, it could help to use streaming replication to write intermediate files which can be moved to a backup location.

Even with SQLite, for a simple file copy to work reliably, one has to set these three:

  "_pragma=synchronous(FULL)",
  "_pragma=checkpoint_fullfsync(ON)",
  "_pragma=fullfsync(ON)",

markusw · 2 days ago
Agreed about the Postgres export!

For SQLite, I would recommend `sqlite3 app.db '.backup backup.db'` though, if that's an option. Guaranteed consistent!

tracker1 · 2 days ago
Yeah, I was going to mention that just upgrading between PG versions can be a bit of a pain. Dump/restore really seems like a less-than-stellar option if you have a LOT of data. I mean, you can stream through gzip/bzip2 to save space, but still.
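
For example, streaming a plain-text dump through gzip looks roughly like this (the database names here are made up):

  # dump and compress
  pg_dump app | gzip > app.sql.gz
  # restore into a freshly created database
  createdb app_restored
  gunzip -c app.sql.gz | psql app_restored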

I often wish that Firebird had a license people found friendlier to use, as it always felt like a perfect technical option for many use cases, from embedded to a standalone server. PG has clearly eclipsed it at this point, though.

markusw · 2 days ago
On the other hand, and especially if migrating from SQLite, there's typically not _that_ much data. Even hundreds of GBs would probably be okay? Drives are so fast these days. Would be interesting to benchmark.

tracker1 · 2 days ago
Maybe that's fair... it just feels like a pain that when I want to update an app, I have to jump through a few hoops, as opposed to just updating the PostgreSQL version in a docker-compose or k8s config.

It would be nice if PG could at least automagically upgrade in place when it starts and the database is from a prior version.
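
For now the manual path is pg_upgrade, pointed at both sets of binaries and data directories (the paths here are illustrative):

  pg_upgrade \
    --old-datadir=/var/lib/postgresql/16/data \
    --new-datadir=/var/lib/postgresql/17/data \
    --old-bindir=/usr/lib/postgresql/16/bin \
    --new-bindir=/usr/lib/postgresql/17/bin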

markusw · 2 days ago
Yeah, totally agreed. I'll have to look into that. I've really enjoyed the stability of SQLite, down to the file format.

markusw · 2 days ago
Yeah, that really is a great thing about SQLite.

I wonder whether packaging everything in Docker (including a specific Postgres container identified by hash or whatever) and deploying on the same architecture would solve this?
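
Something like pinning the image by digest should at least keep the binaries identical across deploys (digest and password below are placeholders):

  docker run -d --name db \
    -e POSTGRES_PASSWORD=secret \
    -v pgdata:/var/lib/postgresql/data \
    postgres@sha256:<digest>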

edoceo · 2 days ago
I work on a "complex" web-app stack that we want to be self-hostable.

Our docker-compose.yaml has Postgres, Redis, OPA, Traefik and then our four services. Works a treat.

One thing we haven't solved for is how to make the PG upgrade work when we bump the major version in the image. Nice that 16 should work for a long while.

oulipo2 · 2 days ago
So the alternative would be to just pg_dump / pg_restore the content? Is it an issue?

eirikbakke · 1 day ago
That is the correct way to do it for PostgreSQL.

It does rule out some common SQLite use cases, such as using the database for app persistence. The app would never be able to upgrade the major version of PostgreSQL, unless it bundled PostgreSQL binaries for every major version ever used.

chucky_z · 2 days ago
If it's more than even like 10 GB, this is going to take _a while_.

I love the pg_* commands but they ain't exactly the speediest things around.

amtamt · 2 days ago
Logical replication would be a good option, with temporary extra hardware added.
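
Roughly like this (names and hosts are made up, and the schema must already exist on the new server, since logical replication doesn't copy DDL):

  # on the old server
  psql -d app -c "CREATE PUBLICATION move_all FOR ALL TABLES;"
  # on the new server
  psql -d app -c "CREATE SUBSCRIPTION move_all CONNECTION 'host=old-host dbname=app' PUBLICATION move_all;"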

freedomben · 2 days ago
I've been hit by this too, so it's definitely a risk. I've also had permissions on the files get broken, which is a bitch to debug.

mutagen · 2 days ago
Some apps do. The most used one I know of is Blackmagic's DaVinci Resolve, the video editor with a relatively full-featured free edition available. I think this has more to do with its roots in a high-end networked environment, but still, the local desktop version installs Postgres.

markusw · 2 days ago
Oh, interesting! But that's more of a desktop application now, right? I was thinking of web servers when writing the article, but I can see how that's not totally clear. :-)

emschwartz · 2 days ago
I think this is a neat direction to explore. I've wondered in the past whether you could use https://pglite.dev/ as if it were SQLite.

tensor · 2 days ago
Someone is working on libpglite, based around pglite. I think this is what will provide an actual SQLite alternative:

https://github.com/electric-sql/pglite-bindings

It would still be missing the standardized on-disk format aspect of SQLite, but otherwise it will be neat.

thruflo · 2 days ago
This is very much the point of https://pglite.dev

It's an embeddable Postgres you can run in-process as a local client DB; just like SQLite, but it's actually Postgres.

emschwartz · 2 days ago
For sure. I'm curious if anyone is using it in production as an alternative to SQLite and, if so, what the performance and experience are like.

OutOfHere · 2 days ago
I wish this were available for Go lang, perhaps via Wazero.

ncruces · 2 days ago
Me too. There's only so much time in life. I'd take it for a spin if someone else did it. :)

markusw · 2 days ago
Ah, I thought I recognized your username from a GitHub URL. :D Thank you for your contributions in the SQLite & Go space!

xarope · 1 day ago
You got my hopes up, but it's WASM for now, not something I could add into a Go [1] or Python app and have running like SQLite. OK, still hoping...!

[1] https://github.com/electric-sql/pglite/issues/89

markusw · 2 days ago
Interesting! I'm going to look into that. :-) Thank you for sharing.

srameshc · 2 days ago
When I want to treat my Postgres like SQLite, I use DuckDB with its PostgreSQL extension (https://duckdb.org/docs/stable/core_extensions/postgres.html), and this is just one of many extensions; there are more, and I love DuckDB for that.

markusw · 2 days ago
While I really like DuckDB, I wouldn't use it for OLTP workloads.

I'm curious, when do you want to treat your Postgres like SQLite? :-) That's basically the opposite of what I was thinking of in the article.

srameshc · 2 days ago
One of our web apps needs a small subset of data from Postgres, and we use DuckDB-WASM to keep it close to the front end. Yes, I agree with not using it for OLTP workloads. But instead of installing Postgres like the author mentions in the post, this is a light replacement that works perfectly if you need a Postgres-like DB in your app. You just have to add a new file:

  INSTALL postgres;
  LOAD postgres;
  ATTACH 'dbname=mydb user=dbname host=25.25.25.25 password=aaaa' AS db (TYPE postgres, READ_ONLY);
  CREATE TABLE my_duck_table_stream AS SELECT * FROM db.table_stream;

It's just fun.

markusw · 2 days ago
I had no idea that was possible. I'll keep that in mind if I need a similar workflow. DuckDB is awesome. :D

oulipo2 · 2 days ago
In what kind of scenarios are you using DuckDB from Postgres? And does it improve over, say, having a ClickHouse instance nearby?

bob1029 · 2 days ago
I think this is a great idea for testing. MSSQL has LocalDB which is used a lot throughout the .NET ecosystem:

https://learn.microsoft.com/en-us/sql/database-engine/config...

For heavy-duty production use (i.e., pushing the actual HW limits), I would feel more comfortable with the SQLite vertical. Unix sockets are fast, but you are still going across process boundaries. All of SQLite can be run on the same thread as the rest of the application. This can dramatically reduce consumption of memory bandwidth, etc.

bluGill · 2 days ago
Memory bandwidth I don't worry about much: most of the time you should set up a small database with just enough data for that test, which hopefully is fast. However, sockets and processes are a worry, as there are places things can go wrong that aren't related to your test, and then you have flaky tests nobody trusts.

HackerThemAll · 20 hours ago
> most of the time you should set up a small database with just enough data for that test, which hopefully is fast.

Yeah, this way you're not going to notice performance issues with your data model or application. Test databases should be of sizes comparable to what's expected in production, specifically after several years in production.

markusw · 2 days ago
Note: the last paragraph was about production use, not tests. :-)

markusw · 2 days ago
I am so tempted to benchmark some more… :D Would be great to get some numbers on this, super interesting.

marcobambini · 2 days ago
Instead of sqlite-vec you can take a look at the new sqlite-vector extension: https://substack.com/home/post/p-172465902

markusw · 2 days ago
Uuuuh, that looks super interesting! I hadn't heard of that. Thank you for sharing!

kissgyorgy · 2 days ago
The huge advantage of SQLite is not that it's on the same machine, but that it's in-process, which makes deployment and everything else just simpler.

markusw · 2 days ago
Yeah, totally agreed. An embedded Postgres would be sweet (see pglite elsewhere here in the comments, looks interesting).

kketch · 1 day ago
I've been using Postgres as a local database for one of my personal projects (a GUI app), and to run Python tests that depend on it without having to rely on installing it in my environment.

I created a Python package that downloads an embedded Postgres binary, sets up the database, and gives you a database URL: https://github.com/kketch/tinypg

It downloads PG binaries from this project: https://github.com/zonkyio/embedded-postgres-binaries. There are other similar projects listed on that page that provide this for Java, Go, Rust and Node.

kketch · 1 day ago
When not using Python, I've been using this script to create ephemeral Postgres databases for tests, but also persistent ones in my dev containers: https://github.com/eradman/ephemeralpg

I've wrapped it with yet another shell script to make it usable just like this:

`export DB_URL=$(./pgtest.sh)`

This version (pgtest.sh) just creates a disposable, ephemeral Postgres: https://gist.github.com/kketch/d4e19a7fb6ebc1cfc265af44c1b41...

This version (pgdev.sh) starts a Postgres instance (if not already running) and persists the DB. It also supports seeding with a SQL script (dev_seed.sql): https://gist.github.com/kketch/88bb5b766994e247a9f2c37f13306...

nu11ptr · 2 days ago
You can, just embed it in your Go app:

https://github.com/fergusstrange/embedded-postgres

OutOfHere · 2 days ago
Does this use an external binary or CGO or Wazero (Wasm) or is it rewritten in Go?

With SQLite, although all approaches are available, my fav is to use https://github.com/ncruces/go-sqlite3 which uses Wazero.

I try to avoid CGO if I can because it adds compile-time complexity, making it unfriendly for a user to compile.

nu11ptr · 2 days ago
> Does this use an external binary or CGO or Wazero (Wasm) or is it rewritten in Go?

Since Postgres is always a network connection, I don't believe any CGo is required.

> I try to avoid CGO if I can because it adds compile-time complexity, making it unfriendly for a user to compile.

Using Zig as your C compiler mostly fixes this. You can't 100% get rid of the complexity, but I've cross-compiled to Windows/Mac/Linux pretty easily via CGo using zig cc.
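
Something along these lines (the target triple is just an example):

  CGO_ENABLED=1 GOOS=windows GOARCH=amd64 \
    CC="zig cc -target x86_64-windows-gnu" \
    go build ./...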

tptacek · 2 days ago
This just runs Postgres as a process, right?

beckford · 2 days ago
Not OP, but I think it does run Postgres as a process. However, IMHO the general use case for SQL is for external actors (humans, machines) to get access to the underlying data in a structured way. So I'd see a benefit to a true in-process embedding of Postgres if the process exposed a Postgres TCP/IP port (5432, etc.), so you could hook your software up to a query tool, a reporting interface, and so on.

Beyond that, why care whether the "embedding" involves a spawned process? It still works great for integration tests, which I suspect is the main use case, and for specialized data-analysis software where a spawned process is no big deal.

stuaxo · 1 day ago
Can you have a socket that's only shared between a parent and child process?

This sounds like it could be pretty useful.

nu11ptr · 2 days ago
Yes, but it embeds it in your executable so it is transparent to the end user.

UPDATE: Actually, I see it is downloaded (and I think cached?). I can't recall if you can embed as an option or not.

8organicbits · 2 days ago
I do this using the Docker approach, especially for low-scale web apps that run on a single VM. I like that it's full Postgres versus the sometimes odd limits of SQLite. My usual approach uses a Traefik container for SSL, a Django+gunicorn web app, and a Postgres container, all running as containers on one VM. Postgres uses a volume, which I back up regularly. For testing I use `eatmydata`, which turns off sync and speeds up test cycles by a couple percent.
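
(eatmydata is an LD_PRELOAD wrapper that turns fsync and friends into no-ops, so it just gets prefixed onto the test command; the command here is only an example:)

  eatmydata python manage.py test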

I haven't tried the unix socket approach, I suppose I should try, but it's plenty performant as is. One project I built using this model hit the HN front page. Importantly, the "marketing page" was static content on a CDN, so the web app only saw users who signed up.

markusw · 2 days ago
Yeah, basically the same here, except it's Caddy in front instead of Traefik.

So you do periodic backups, not incremental on every write or something (read replica-like)?

It's important to me to not lose any committed data, if at all possible.

(For testing, I've sped everything up by running migrations on `template1` and every test gets a random database name. Works wonders.)
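
(That trick relies on createdb cloning template1 by default, so it's roughly this, with made-up names:)

  # run migrations once against the template
  psql -d template1 -f migrations.sql
  # every new database then starts out fully migrated
  createdb test_abc123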

8organicbits · 3 hours ago
Good catch. I am doing periodic, not incremental, backups on that system. It all depends on risk, cost, and tolerance for data loss.

whartung · 2 days ago
I don't recall the mechanics, but I do know that folks have bundled spinning up a local instance of PG solely for unit tests.

There’s a pre-install step to get PG on the machine, but once there, the testing framework stands it up, and shuts it down. As I recall it was pretty fast.

majewsky · 1 day ago
This is my version of it: https://pkg.go.dev/github.com/sapcc/go-bits/easypg#WithTestD...

The most annoying part of it is that Postgres absolutely detests running as UID 0 (root), which makes running `make check` in Docker containers an unnecessarily frustrating experience. I understand why Postgres rejects root by default, but I would really like for them to recognize strcmp(getenv("I_AM_IN_DOCKER_AND_I_DONT_CARE"), "true") or something.

nullzzz · 2 days ago
This is IMO best done by running PG in a container using docker-compose or similar.
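
For example (throwaway password, version pinned purely as an illustration):

  docker run --rm -d -p 5432:5432 -e POSTGRES_PASSWORD=test postgres:16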

OutOfHere · 2 days ago
SQLite is used more in resource-constrained environments. If I had much memory to waste, I would've used a local PostgreSQL installation to begin with.

Note that advanced vector-embedding query techniques for approximate nearest-neighbor search inevitably need an absurd amount of memory, so it typically doesn't make sense to use them in a resource-constrained environment to begin with.

markusw · 2 days ago
It's also used plenty in the cloud, for a variety of reasons. Check out the first linked article for something I wrote on that: https://www.maragu.dev/blog/go-and-sqlite-in-the-cloud

yndoendo · 2 days ago
I have used SQLite on embedded hardware with 2 MB flash and 8 MB RAM. The microSD card was used as swap for producing reports and serving them over HTTP.

The next embedded model version moved to 8 MB flash and 64 MB RAM with the same software infrastructure.

munchlax · 2 days ago
From what I've read about it, DuckDB comes close: regular files, like SQLite, but PG functionality.

OutOfHere · 2 days ago
If I am not mistaken, DuckDB is suitable for columnar analytics queries, less so for multi-column row extractions. Which PG-like functionality does it offer on top?

datadrivenangel · 2 days ago
DuckDB does aim to be Postgres compatible from a SQL syntax perspective, but you are 100% correct that it is not optimized for individual transactions. I'm a huge advocate of DuckDB, but strongly consider your life choices if you want to use it as a transactional database.

Imustaskforhelp · 1 day ago
Is there something like DuckDB but suited for a transactional database, while still being Postgres-compatible?

It might sound confusing, but I genuinely wish for something like a simpler Postgres (in SQLite form?), and then later, if need be, I could migrate to Postgres I suppose.

nullzzz · 2 days ago
No, it's not "pg functionality". It's close to SQL-standard compliance, but not close to what Postgres has to offer. Also, it allows a single writing transaction at a time, runs in-process, etc.

JodieBenitez · 2 days ago
> You can just install Postgres on your single big and beefy application server (because there’s just the one when you use SQLite, scaled vertically), and run your application right next to it.

Am I getting old? Seems obvious to me.

BiteCode_dev · 2 days ago
Yes, you are old.

I meet an unexpected number of young devs that:

- Find hypermedia to be exotic. JSON web API is the only way that makes sense to them.

- Say they use the cloud to "save money".

- Use third-party services for things as basic as authentication because "it's hard".

- Encode security stuff in the frontend code.

It's the cycle of life.

JodieBenitez · 2 days ago
Damn... looks like my job is secured for another 25 years.

omarqureshi · 2 days ago
Literally, prior to the cloud being a thing, this was the way for medium-sized web apps.

ok_computer · 2 days ago
I've always worked in a datacenter (non-cloud) with DB servers separate from the app servers. Besides network latency, what is the advantage of colocating the HTTP server and database server on one machine?

Seeing a dedicated DB server and app server has always given me a good separation-of-concerns feeling, and it doesn't seem like much overhead, given they are nearby machines in the datacenter.

Also, our main reason was sharing a database license to have a well-resourced multi-tenant/multi-app DB server serving peripheral web app servers.

markusw · 2 days ago
The biggest one is latency. Network latency will almost always be orders of magnitude bigger than I/O latency. There's a whole class of problems that goes away when latency becomes very small.

ok_computer · 2 days ago
Noted, that confirms my suspicion. Thanks.

markusw · 2 days ago
Hehe. Yes and no.

In the cloud, as you probably know, the usual way now is to spin up Postgres separately (RDS, Supabase, Planetscale, Crunchy Bridge, you name it). We've gotten so used to it that a different way of doing it is often not even considered.

But I think tooling has come a long way, and there have been a lot of learnings from the cloud, so it's time to swing the pendulum back and reconsider assumptions!

tptacek · 2 days ago
You'd still be incurring client/server IPC for individual database queries, which would cost you one of the benefits of using SQLite (IDGAF-queries that just hammer tables repeatedly because there's almost no cost to doing so).

nullzzz · 2 days ago
TL;DR: a dude ponders whether Postgres could be used instead of SQLite. Close to zero useful information, no learnings from serious production use.

apalmer · 2 days ago
I upvoted this because, while it was critical, it didn't feel mean-spirited and it was factually correct.

After fully reading the article, I came to understand it really was not referring to anything SQLite-specific; it was really 'what if you ran Postgres as an application on a server'. There really is nothing more to be gained from reading the article beyond this, and this is kind of the most basic deployment model for Postgres for like the last 40 years.

markusw · 2 days ago
Sure, you are correct! But I've already learned about pglite and sqlite-vector from the comments here alone. So if one reads the article AND the comments, I hope it's a net-positive for you, too, even if the article alone didn't give you anything.

And if not, I hope you didn't spend too long reading. :-)

markusw · 2 days ago
Indeed pondering! The dude here. I think writing a little piece like that often sparks some interesting discussions here and elsewhere. :-)

nullzzz · 2 days ago
I agree. Having reread my comment, I must apologize, as it came out too dismissive. Keep pondering, dude :)

markusw · 2 days ago
No worries. :-) Thank you for your follow-up, kind stranger on the internet.