The ecosystem is very active, and they have recently opened up "community extensions" to bring your own functions, data types and connections. A barrier at the moment is that extensions are written in C++, though this limitation should be removed soon.
I've been building a lot on top of DuckDB; two of the projects I'm working on are linked in the article:
- Evidence (https://evidence.dev): Build data apps with SQL + Markdown
- DuckDB GSheets (https://duckdb-gsheets.com): Read/Write Google Sheets via DuckDB
I can't think of any Advent of Code questions this year where a database would have been of any use.
Do tell us more.
So far parsing the inputs has not been nearly as much of a hassle as I thought. I really like how DuckDB supports directly selecting from files (e.g. `select * from 'file.csv'`). I thought I was going to be spending more time creating schemas, running `copy`, etc.
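For anyone curious, here's a minimal sketch of what that looks like from the Python side (file names and columns are made up):

```python
# Minimal sketch of querying input files directly with the duckdb Python module.
import duckdb

# DuckDB infers the schema from the file itself, no CREATE TABLE / COPY needed.
rows = duckdb.sql("select * from 'input.csv'").fetchall()

# For weirder inputs you can drop down to read_csv and spell out the options.
rel = duckdb.sql("""
    select *
    from read_csv('input.txt',
                  delim='|',
                  header=false,
                  columns={'a': 'INTEGER', 'b': 'INTEGER'})
""")
print(rows[:3])
print(rel.fetchall()[:3])
```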
Overall pretty nice quality of life experience so far, even if it is only to mess around with a few puzzles.
I cloned the CVE repository and ingested it into DuckDB without worrying about table structures, built a FastAPI wrapper to query CVEs like SQL, added a little bit of caching, and it was pretty fast. Done in a few hours.
Shame that their Rust crate wasn't as performant as the Python module.
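Roughly, that kind of setup can be as small as this (paths, field names and the endpoint are placeholders, not the actual project code):

```python
# Rough sketch of a DuckDB + FastAPI CVE wrapper; everything here is hypothetical.
from functools import lru_cache

import duckdb
from fastapi import FastAPI

app = FastAPI()
con = duckdb.connect("cves.duckdb")

# One-shot ingest: let DuckDB infer the nested JSON structure of the records.
con.execute("""
    create table if not exists cves as
    select * from read_json_auto('cvelistV5/cves/**/*.json')
""")

@lru_cache(maxsize=4096)
def lookup(cve_id: str):
    # Small cache in front of the query, as described above.
    return con.execute(
        "select * from cves where cveMetadata.cveId = ?", [cve_id]
    ).fetchall()

@app.get("/cve/{cve_id}")
def get_cve(cve_id: str):
    return {"results": lookup(cve_id)}
```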
Remember, all it takes is one employee to put that claim up there (although I do like evidence.dev).
https://gist.github.com/cpursley/c8fb81fe8a7e5df038158bdfe0f...
I mean, it's a lot more expensive than Postgres for very, very little benefit. Like, what exactly is the value proposition? Is it saving developer time? Is it faster? I guess none of those.
Today I have no idea about SQL Server, but if the support is as good today as it was back then, it can be worth it.
These days I am not so sure what the case would be, other than legacy or, very occasionally, some feature only SQL Server has. AD would be a big one.
Making a decision on a DB is somewhat foundational for a lot of system architecture, in that you expect to be able to use it for years. It is not like some UI framework you toss out every 18 months.
So the benefits of the new hyped thing may be outweighed by the unknowns & risk of flaming out.
This is the kind of hype cycle that gives me pause: a new hot thing like DuckDB actually ticks a TON of boxes for me, but it has attracted some of the usual suspects in my work network that I consider to be contra signals.
Here's why it is better:
1. User-facing analytics vs. business analytics. Pinot was designed for user-facing analytics, meaning the analytics result is consumed by the end user (for example, "what is the expected delivery time for this restaurant?"). The demands are much higher, including latency, freshness, concurrency and uptime.
2. Better architecture. To scale out, ClickHouse uses sharding, which means if you want to add a node you have to bring down the database, re-partition it and reload the data, then bring it back up. Expect downtime of 1 or 2 days at least. Pinot, on the other hand, uses segments, which are smaller (but self-contained) pieces of data, and there are lots of segments on each node. When you add a node, Pinot just moves segments around; no downtime needed. Furthermore, for high availability ClickHouse uses replicas: each shard needs 1 or 2 replicas for HA. Pinot does not have separate shard and replica nodes; instead each segment is replicated to 2 or 3 nodes. This is better for hardware utilization.
3. Pre-aggregation. OLAP cubes became popular in the 1990s. They pre-aggregate data to make queries significantly faster, but the downside is high storage cost. ClickHouse doesn't have the equivalent of OLAP cubes at all. Pinot has something better than OLAP cubes: Star trees. Like cubes, star trees pre-aggregate data along multiple dimensions, but don't need as much storage.
This is not true in practice. In a properly designed implementation you can add new shards without downtime. ClickHouse compression is so efficient that most installations don't need to shard in the first place--everything fits on a single server with a few TB of storage. (We've put up to 50 TB on single servers.)
1. If you are loading time series data you can often just wait for the shards to rebalance automatically as new data arrives and old data drops out.
2. If you are scaling to add new tenants, just put them on a separate shard and let them fill naturally.
3. It's not hard to move table parts if you want to redistribute data across shards. ClickHouse comes with commands to do this and it's fairly easy to script (rough sketch after this list).
4. ClickHouse can scale vertically for a long way. It works fine on VMs with 256GB of RAM and 50 TB of storage.
Just to be clear, it would be nice if open source ClickHouse could dispense with shards. In practice, though, it's not the biggest issue in most of the systems I've seen (which is now in the hundreds), and there are several ways to design around it. Changing table partition keys is a harder issue to solve. Allocating adequate RAM for queries is another problem that's difficult to address.
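For point 3 above, the partition shuffling can be a very short script; a hypothetical sketch with clickhouse-driver (hosts, table and ZooKeeper path are made up, and FETCH needs the source replica's actual path):

```python
# Hypothetical sketch of moving one partition between shards; not a drop-in tool.
from clickhouse_driver import Client

src = Client(host="shard1.example.com")
dst = Client(host="shard2.example.com")
partition = 202401  # assuming a toYYYYMM() partition key

# Fetch the partition into the destination's detached directory, attach it,
# then drop it from the source shard.
dst.execute(
    f"ALTER TABLE db.events FETCH PARTITION {partition} "
    "FROM '/clickhouse/tables/01/db.events'"
)
dst.execute(f"ALTER TABLE db.events ATTACH PARTITION {partition}")
src.execute(f"ALTER TABLE db.events DROP PARTITION {partition}")
```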
Also, do you have thoughts on StarRocks?
Looks like they don't configure any indexes for Pinot in their benchmarks, even though indexing is one of Pinot's main selling points on the performance front - https://github.com/ClickHouse/ClickBench/issues/37.
[Disclosure: I work at StarTree, and we're powered by Apache Pinot.]
We are currently considering / evaluating different methodologies to benchmark more realistic situations for real-time analytics. Potential consideration for your own benchmarking / POCs, or for a future industry benchmark spec:
1. Some sort of "freshness" (data latency) measurement: time for streaming ingestion / indexing / data ready for query. Is it consistent, or are there pauses in ingestion?
2. Some sort of "ingestion scaling" measurement: how many objects per second can you get to before you choke IO? What happens to ingested objects at different payload sizes? (This interacts with "freshness" above; i.e., you might [or might not] be able to throttle ingestion to improve freshness.)
3. Query concurrency/throughput: does your query capacity scale linearly or non-linearly? What happens at 10 QPS? 100 QPS? 1000 QPS? 10000 QPS? 100000 QPS? (Or when does it top out?)
4. Data volume: Are you querying against 1 TB? 10 TB? 100 TB? 1 PB? (More?) This interacts with query concurrency, because driving 100 QPS against 1 PB is a totally different case than driving 100000 QPS against 1 TB.
5. Data storage type: Are you running against local NVMe, EBS, or S3 buckets? (Maybe even HDD?) Is it all uniform storage, or is it in a tiered storage topology? If tiered, what's the % mix of the different storage types? This is just an increasing reality all vendors need to deal with. Customers want to optimize their spend per use case.
6. Query complexity: Before talking simple "latencies," you have to understand what sort of queries you're running. These aren't simple atomic row CRUD operations like a Redis or a ScyllaDB. How are you doing aggregates? Are you running queries against denormalized data in a single table, or are you doing single JOINs or multiple table complex JOINs?
7. Indexing: As pointed out by shadow28, indexes are vital for best performance. Which type of index was used? (Apache Pinot supports about a dozen different types).
And my personal favorite to throw into the mix:
8. Cluster resilience: Great. All of the above worked on a fully-provisioned stable cluster. Now knock out a node. Do it. See what happens. How long before the cluster rebalances and quiesces? What happens to your QPS and latencies during the rebalance, and then after quiescence? Measure that. Now knock out a 2nd node. Maybe a third. How many nodes can you bring down before performance goes non-linear, or the cluster is rendered utterly unreliable?
This latter I call the "Torpedo test," and I've been preaching about it for years[1]. How many "torpedoes" can your cluster take before it sinks under the waves? It's not specific to real-time OLAP. You can use this kind of methodology to test the resilience of any distributed system. And you should probably do this before you hit production.
[1] https://www.slideshare.net/slideshow/what-we-learned-about-a...
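The harness for this doesn't have to be fancy; a hypothetical skeleton in Python, where run_query and kill_node are stand-ins for your actual client and orchestration:

```python
# Hypothetical "torpedo test" skeleton: measure QPS and tail latency while
# nodes get knocked out. run_query()/kill_node() are stand-ins, not real APIs.
import statistics
import time

def run_query() -> None:
    ...  # issue one representative query against the cluster

def kill_node(i: int) -> None:
    ...  # e.g. docker kill / terminate the instance backing node i

def measure(duration_s: int = 60) -> tuple[float, float]:
    latencies = []
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        start = time.monotonic()
        run_query()
        latencies.append(time.monotonic() - start)
    qps = len(latencies) / duration_s
    p99 = statistics.quantiles(latencies, n=100)[98]
    return qps, p99

print("baseline (fully provisioned):", measure())
for node in range(3):
    kill_node(node)
    print(f"after losing {node + 1} node(s):", measure())
```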
Any idea if that would affect its performance?
The last time I checked there were several databases written in Go, which is also garbage-collected, but I never saw one in Java except Apache Derby.
HSQLDB:
https://en.m.wikipedia.org/wiki/HSQLDB
H2:
https://en.m.wikipedia.org/wiki/H2_(database)
PointBase:
https://en.m.wikipedia.org/wiki/PointBase
I briefly used one of the first two, I think it was HSQLDB, with Java, years ago.
PointBase was created by Bruce Scott, a co-founder of Oracle, and a few others, including an architect of Sybase. This was in the early days of Java.
All three still seem to exist, according to the links above.
I've never had a good experience with these implementations. If you say the DB isn't performing well, the only answer you get is "well, you forgot to tune these 1000 knobs," and good luck if your data doesn't fit in RAM.
For some reason I feel your DB-in-a-text-file would fare just as well as these, given the same memory and resources that these Apache/Java DB servers demand.
Something for me to look into next year, clearly.
Thanks for sharing! My choices are pretty coloured by personal experience, and I didn't want to re-tread anything from the book (Redis/Valkey, Neo4j etc) other than Postgres - mostly due to Postgres changing _a lot_ over the years.
I had considered an OSS Dynamo-like (Cassandra, ScyllaDB, kinda), or a Calvin-like (FaunaDB), but went with FoundationDB instead because to me, that was much more interesting.
After a decade of running DBaaS at massive scale, I'm also pretty biased towards easy-to-run.
maybe 2026, or some bonus content for 2025 :)
For document databases, I'm more interested in things like PoloDB and SurrealDB.
I completely agree with this statement from the author. In fact, many companies like Cloudflare are built with exactly this approach and it has scaled them pretty well without the need for any third database.
> Another reason I suggest checking out ClickHouse is that it is a joy to operate - deployment, scaling, backups and so on are well documented - even down to setting the right CPU governor is covered.
Another point mentioned by the author which is worth highlighting is the ease of deployment. Most distributed databases aren't so easy to run at scale; ClickHouse is much, much easier, and it has become even easier with efficient storage-compute separation.
My team at ClickHouse has been working hard to make the integration even more seamless. We work on PeerDB, an open source tool enabling seamless Postgres replication to ClickHouse (https://github.com/PeerDB-io/peerdb/). This integration is now also natively available in the Cloud through ClickPipes; the private preview was released just last week (https://clickhouse.com/cloud/clickpipes/postgres-cdc-connect...).
PS: I am also a fan of Postgres, and we are using it for our startup. But I don't know the answer if someone asks me "why not MySQL?" Hence asking.
Overall I think Postgres adoption, integrations, and thus community are much wider than MySQL's, which gives it a major advantage over MySQL. Also, looking at the number of database-as-a-service companies for Postgres vs. those for MySQL, we can immediately see that Postgres is much more widely adopted.
- MySQL performs a bit better when reading by primary key
- Postgres performs a bit better when doing random inserts/updates.
- With MySQL you don't need to worry about vacuums
- The MySQL query optimizer is nice because you can give it hints when it misbehaves. This can be a godsend during certain production incidents.
- Last I checked MySQL still has a nicer scaling story than Postgres, but I'm not sure what the latest here is.
- Connection pooling is still heavily in MySQL's favor, i.e. you don't need PgBouncer for lots of scenarios.
I don't know how many of that article's points are still valid.
The other point in favor of MySQL (in my opinion) is that there are lots of companies that use MySQL in production, so the access patterns and its quirks are very well defined. Companies like Square, YouTube, Meta, Pinterest, and now Uber all use MySQL. From Blind, Stripe was also thinking of moving all its fleet from Mongo to MySQL.
Perception-wise, it looks like companies needing internet-scale data are using MySQL.
Obviously there are alternatives like MariaDB, but Postgres is a quality, long-standing open source solution.
I agree so much with the paragraphs about "Dependencies" and "Tooling".
On the AI front, vector databases like Pinecone and Postgres extensions like pgvector are exciting, but I'd love to see something even more integrated with AI workflows. The possibilities are huge. Curious to hear what others think!
Unrelated, and not sure if it is just me, but ever since LLMs became popular, I've been seeing an enormous amount of special UTF-8 characters no one used regularly, like this em dash you used.
How is this simple to type? If you're on a phone keyboard, you have to switch to special characters, and then have to long-hold the dash and then slide to em dash.
LLMs are everywhere.
there are also bindings for bullets • and probably other things I'm forgetting (or that may be conflicting with other bindings I have set up)
I have an AutoHotkey script that auto-replaces "--" with "—" (and auto-disables it in contexts where it's likely intended, like SQL editors and my terminal).
I also auto-replace "`-" with "–" so I can conveniently do number ranges with the (objectively correct) n-dash to indicate a range.
Do you mean a database still? Or something like Langflow or Dify? Curious what "something even more integrated" would look like as just a DB.
https://news.ycombinator.com/item?id=42330710 and https://news.ycombinator.com/item?id=42330639
If you don't know about FDB, there's an amazing video about how they test it that really got me into learning more:
It's a bit of a vicious circle - because there is low exposure, no one is building those layers. Because no one is building the layers, there is no exposure.
Between DynamoDB, Cassandra, and Scylla, it seems like that problem set is somewhat solved? I know those products continue to move forward, but they all work really well at this point and solve the fundamental problem to a good degree.
Patroni for most cases. At Heroku we have our own control plane to manage HA and fencing which works very reliably. I also like the approach the Cloud Native PG folks have taken with implementing it in the k8s API via the instance manager[1].
Other options like Stolon or repmgr are popular too. Patroni is, despite the Jepsen testing, used well without issues in the majority of circumstances. I wouldn't overthink it.
[1]: https://cloudnative-pg.io/documentation/1.24/instance_manage...
And, from the same vendor, IBM still ships IMS for those who find Codd's ideas too exciting.
Full recovery mode defaults combined with log growth settings, query parallelism costs, maintenance and backup solutions being handed off to Ola, etc. - many of those things are fixed, but things like parameter sniffing are still absolutely causing pages to go off for someone today.
There is a lot of activity in the space, from things like TurboPuffer (proprietary), Postgres extensions like VectorChord and pgvector, to established players like Elastic getting in on the action. Not to mention things like Pinecone.
> what characteristics make for a best in class ai database
As I said before, I think the space is moving too fast on what is "best in class" - see all the vector indexing work that has happened in the last ~6 months or so. Big, centralised vector stores for some applications will have very different profiles to vector stores on the edge/on device, for example.
As mentioned, I'm a big fan of boring technology, so I'm going to happily wait until we have a boring "winner".
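For the pgvector route mentioned above, the moving parts are pretty small; a minimal sketch with psycopg (the table and 3-dimensional vectors are made up, and real code would parameterize the vectors, e.g. via the pgvector Python package):

```python
# Minimal pgvector sketch: create a vector column, index it, query by distance.
import psycopg

with psycopg.connect("dbname=app") as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS docs "
        "(id bigserial PRIMARY KEY, body text, embedding vector(3))"
    )
    # An ANN index (HNSW here) is what makes this usable beyond toy sizes.
    conn.execute(
        "CREATE INDEX IF NOT EXISTS docs_emb "
        "ON docs USING hnsw (embedding vector_l2_ops)"
    )
    conn.execute("INSERT INTO docs (body, embedding) VALUES ('hello', '[0.1, 0.2, 0.3]')")
    # <-> is L2 distance; pgvector also has <=> (cosine) and <#> (negative inner product).
    rows = conn.execute(
        "SELECT body FROM docs ORDER BY embedding <-> '[0.1, 0.2, 0.25]' LIMIT 5"
    ).fetchall()
    print(rows)
```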
Didn't it, like, go closed-source a while ago?
But it was relicensed to the "CockroachDB Software License", a form of BSL, to prevent reselling.
I feel like the article is written mainly for database power-users (= people that look beyond the horizon of the default DB choice of Postgres/MySQL), as well as for people getting into database engineering.
I would say for both of those audiences, Qdrant (or any of the dedicated vector databases) isn't too interesting. For the most part they are very normal (and usually less-featured) databases with a heavy focus on a special kind of index, which most DBs on the list can also handle either natively or via extensions.
The actual reason is that DuckDB's API, its integration into other places (e.g. Evidence), and its use of extensions (like the aforementioned gsheets one) give it priority for me.
Additionally, it's being used in a bunch more places, like pg_duckdb, which makes it more "worth it".
"CockroachDB gets its name from cockroaches, as they are known for being disaster-resistant.[5]"