Why DuckDB is my first choice for data processing
115 points
by tosh
9 hours ago
| 17 comments
| robinlinacre.com
| HN
mrtimo
3 hours ago
[-]
What I love about duckdb:

-- Support for .parquet, .json, .csv (note: Spotify listening history comes as multiple .json files, something fun to play with).

-- Support for glob reading, like: select * from 'tsa20*.csv' - so you can read hundreds of files (any type of file!) as if they were one file.

-- If the files don't have the same schema, union_by_name is amazing (see the sketch after this list).

-- The .csv parser is amazing. Auto assigns types well.

-- It's small! The Web Assembly version is 2mb! The CLI is 16mb.

-- Because it is small you can add duckdb directly to your product, like Malloy has done: https://www.malloydata.dev/ - I think of Malloy as a technical person's alternative to PowerBI and Tableau, but it uses a semantic model that helps AI write amazing queries on your data. Edit: Malloy makes SQL 10x easier to write because of its semantic nature. Malloy transpiles to SQL, like TypeScript transpiles to JavaScript.
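
To make the glob and union_by_name points concrete, here is a minimal sketch (file names are hypothetical):

    -- read hundreds of files as if they were one table
    SELECT * FROM 'tsa20*.csv';

    -- if the files don't share an identical schema, match columns
    -- by name and fill the missing ones with NULL
    SELECT * FROM read_csv('tsa20*.csv', union_by_name = true);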

reply
newusertoday
1 hour ago
[-]
It's 32 MB uncompressed and around 6 MB compressed, so it's not that small: https://cdn.jsdelivr.net/npm/@duckdb/duckdb-wasm/dist/

It is also difficult to customize compared to SQLite, so, for example, if you want to use your own parser for CSV it becomes hard.

But yes, it provides a lot of convenience out of the box, as you have already listed.

reply
arjie
1 hour ago
[-]
This is a great sell. I have this annoyingly manual approach with a SQLite import and so on. This is great. Thank you!
reply
oulipo2
1 hour ago
[-]
Malloy and PRQL (https://prql-lang.org/book/) are quite cool
reply
owlstuffing
27 minutes ago
[-]
100% agree.

> Writing SQL code

Language integration is paramount for med/lg projects. There's an experimental Java lang project, manifold-sql [1], that does the impossible: inline native DuckDB SQL + type-safety.

    """
    [.sql/] SELECT station_name, count(*) AS num_services
      FROM 'http://blobs.duckdb.org/train_services.parquet'
      WHERE monthname(date) = 'May'
      GROUP BY ALL
      ORDER BY num_services DESC
      LIMIT 3
    """
    .fetch()
    .forEach(row -> out.println(row.stationName + ": " + row.numServices));
1. https://github.com/manifold-systems/manifold/blob/master/doc...
reply
steve_adams_86
1 hour ago
[-]
It has become a favourite tool for me as well.

I work with scientists who research BC's coastal environment, from airborne observation of glaciers to autonomous drones in the deep sea. We've got heaps of data.

A while back I took a leap of faith with DuckDB as the data-processing engine for a new tool we're using to transform and validate biodiversity data. The goal is to take heaps of existing datasets and convert them to valid Darwin Core data. Keyword being valid.

DuckDB is such an incredible tool in this context. Essentially I dynamically build DuckDB tables from schemas describing the data, then import the data into those tables. If the import fails, it explains why on a row-by-row basis (as far as it's able to, at least). Once the raw data is in, transformations can occur. This is accomplished entirely in DuckDB as well. Finally, validations are performed using application-layer logic if the transformation alone isn't assurance enough.
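
A minimal sketch of that flow (not the actual code; table and column names are hypothetical): a typed table is generated from the schema, and the import fails with row-level errors when a value can't be cast.

    -- table generated from a schema describing the dataset
    CREATE TABLE occurrences (
        occurrence_id VARCHAR,
        event_date    DATE,
        decimal_lat   DOUBLE
    );

    -- import the raw data; a bad date or latitude produces an
    -- error message pointing at the offending row
    INSERT INTO occurrences
    SELECT * FROM read_csv('raw_occurrences.csv');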

I've managed to build an application that's way faster, way more capable, and much easier to build than I expected. And it's portable! I think I can get the entire core running in a browser. Field researchers could run this on an iPad in a browser, offline!

This is incredible to me. I've had so much fun learning to use DuckDB better. It's probably my favourite discovery in a couple of years.

And yeah, this totally could have been done any number of different ways. I had prototypes which took much different routes. But the cool part here is I can trust DuckDB to do a ton of heavy lifting. It comes with the cost of some things happening in SQL that I'd sometimes prefer happened elsewhere, but I'm content with that tradeoff. In cases where I'm missing application-layer type safety, I use parsing and tests to ensure my DB abstractions are doing what I expect. It works really well!

edit: For anyone curious, the point of this project is to allow scientists to analyze biodiversity and genomic data more easily using common rather than bespoke tools, as well as publish it to public repositories. Publishing is a major pain point because people in the field typically work very far from the Darwin Core spec :) I'm very excited to polish it a bit and get it in the hands of other organizations.

reply
uwemaurer
27 minutes ago
[-]
We use DuckDB to process analytics and feeds for Bluesky (https://bluefacts.app)

To get fast access to queries we use the Apache Arrow interface and generate the code directly from DuckDB SQL queries using the SQG tool (https://sqg.dev/generators/java-duckdb-arrow/).

reply
noo_u
3 hours ago
[-]
I'd say the author's thoughts are valid for basic data processing. Outside of that, most of the claims in this article, such as:

"We're moving towards a simpler world where most tabular data can be processed on a single large machine1 and the era of clusters is coming to an end for all but the largest datasets."

become very debatable. Depending on how you want to pivot/scale/augment your data, even datasets that seemingly "fit" on large boxes will quickly OOM you.

The author also has another article where they claim that:

"SQL should be the first option considered for new data engineering work. It’s robust, fast, future-proof and testable. With a bit of care, it’s clear and readable." (over polars/pandas etc)

This does not map to my experience at all, outside of the realm of nicely parsed datasets that don't require too much complicated analysis or augmentation.

reply
theLiminator
7 minutes ago
[-]
Yeah, I'm similarly confused.

> "SQL should be the first option considered for new data engineering work. It’s robust, fast, future-proof and testable. With a bit of care, it’s clear and readable." (over polars/pandas etc)

SQL has nothing to do with being fast. Not sure what makes it any more testable than Polars? Future-proof in what way? I guess they mean your SQL dialect won't have breaking changes?

reply
RobinL
2 hours ago
[-]
Author here. Re: 'SQL should be the first option considered', there are certainly advantages to other dataframe APIs like pandas or Polars, and arguably any one of them is better in the moment than SQL. At the moment Polars is ascendant and it's a high-quality API.

But the problem is the ecosystem hasn't standardised on any of them, and it's annoying to have to rewrite pipelines from one dataframe API to another.

I also agree you're gonna hit OOM if your data is massive, but my guess is the vast majority of tabular data people process is <10GB, and that'll generally process fine on a single large machine. Certainly in my experience it's common to see Spark being used on datasets that are nowhere near big enough to need it. DuckDB is gaining traction, but a lot of people still seem unaware how quickly you can process multiple GB of data on a laptop nowadays.

I guess my overall position is it's a good idea to think about using DuckDB first, because often it'll do the job quickly and easily. There are a whole host of scenarios where it's inappropriate, but it's a good place to start.

reply
noo_u
2 hours ago
[-]
I think both of us are ultimately wary of using the wrong tool for the job.

I see your point, even though my experience has been somewhat the opposite. E.g. a pipeline that used to work fast enough (or at all) up until some point in time because the scale of the data or the requirements allowed it. Then some subset of those conditions changes, the pipeline cannot meet them, and one has to reverse engineer obscure SQL views/stored procedures/plugins and migrate the whole thing to Python or some compiled language.

I work with high density signal data now, and my SQL knowledge occupies the "temporary solution" part of my brain for the most part.

reply
chaps
2 hours ago
[-]
Yeah, my experiences match yours and I very, very much work with messy data (FOIA data), though I use postgres instead of duckdb.

Most of the datasets I work with are indeed <10GB but the ones that are much larger follow the same ETL and analysis flows. It helps that I've built a lot of tooling to help with types and memory-efficient inserts. Having to rewrite pipelines because of "that one dataframe API" is exactly what solidified my thoughts around SQL over everything else. So much of my life time has been lost trying to get dataframe and non-dataframe libraries to work together.

Thing about SQL is that it can be taken just about anywhere, so the time spent improving your SQL skills is almost always well worth it. R and pandas much less so.

reply
gofreddygo
1 hour ago
[-]
I advocated for a SQL solution at work this week and it seems to have worked. My boss is wary of old-school SQL databases, with their constraints and their reputation for being a pain to work with. As a developer, these pains aren't too hard for me to get used to, automate, or document away, and I've never understood the undue dislike of SQL.

The fact that I can use SQLite / a local SQL db for all kinds of development and reliably use the same code (with minor updates) in the cloud-hosted solution is such a huge benefit that it outweighs anything else that any other solution has to offer. I'm excited about the SQL stuff I learned over 10 years ago being of great use to me in the coming months.

The last product I worked heavily on used a NoSQL database, and it worked fine until you started to tweak it just a little bit: splitting entities, converting data types, or updating IDs. Most of the data access layer logic dealt with converting data coming in from the database and with guardrails to keep data integrity in check while interacting with the application models. To me this is something so obviously solved years ago with a few lines of constraints. Moving over to SQL was totally impossible. Learned my lesson, advocated hard for SQL. Hoping for better outcomes.

reply
chaps
1 hour ago
[-]
I totally understand the apprehension towards SQL. It's esoteric as all hell and it tends to be managed by DBAs who you really only go to whenever there's a problem. Organizationally it's simpler to just do stuff in-memory without having to fiddle around with databases.
reply
hnthrowaway0315
3 hours ago
[-]
SQL is popular because everyone can learn it and start using it after a while. I agree that Python sometimes is a better tool, but I don't see SQL going away anytime soon.

From my experience, the data modelling side is still overwhelmingly in SQL. The ingestion side is definitely mostly Python/Scala though.

reply
alastairr
1 hour ago
[-]
I'm running duckdb over 500GB of parquet on a largish desktop (50GB RAM) and it's been smooth & fast. I guess OOM issues will matter at some point, but I think it's going to be in the top 1% of real-world use cases.
reply
jauntywundrkind
28 minutes ago
[-]
With a decent SSD (or eight), spilling to disk is really not bad these days! Yes!
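
Roughly speaking, it comes down to two settings (the values and path below are made up); once DuckDB hits the memory limit, larger-than-memory operators spill to the temp directory:

    SET memory_limit = '40GB';                -- cap on in-memory usage
    SET temp_directory = '/nvme/duckdb_tmp';  -- where spill files go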

And if that's still not enough, and you just need to crunch data a couple of times a week, it's not unreasonable to get a massive cloud box with ridiculous amounts of RAM or RAM+SSD: i7i or i8g instances. Alas, we have cheap older-gen Epycs and some amazing cheap motherboards, but RAM prices for DIY builds are unbelievably off the charts right now, so be it.

reply
efromvt
1 hour ago
[-]
DuckDB can't get enough love! Beyond being a really performant database, being so friendly (SQL and devx) is really underrated and hard to pull off well, and it's key to DuckDB being so fun: you get a compounding ecosystem because it's so easy to get started. Hoping they can manage to keep it vibrant without slowing down the pace of innovation at all.

The web/WASM integration is also fabulous. Looking forward to more "small engines" getting into that space to provide some competition and keep pushing it forward.

reply
DangitBobby
5 hours ago
[-]
Being able to use SQL on CSV and json/jsonl files is pretty sweet. Of course it does much more than that, but that's what I do most often with it. Love duckdb.
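
For anyone who hasn't tried it, it really is just this (file and column names hypothetical):

    -- query a JSON-lines file and a CSV directly, no import step
    SELECT status, count(*) AS n
    FROM read_json_auto('events.jsonl')
    GROUP BY status;

    SELECT * FROM 'orders.csv' LIMIT 10;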
reply
samuell
4 hours ago
[-]
Indeed! I generally like awk a lot for simpler CSV/TSV processing, but when it comes to cases where you need things like combining/joining multiple CSV files or aggregating over certain columns, SQL really shines IME.
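
A sketch of the join-plus-aggregate case that gets painful in awk but stays simple in SQL (file and column names are made up):

    SELECT c.region, sum(o.amount) AS total
    FROM 'orders.csv' AS o
    JOIN 'customers.csv' AS c ON o.customer_id = c.id
    GROUP BY c.region
    ORDER BY total DESC;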
reply
majkinetor
2 hours ago
[-]
Anybody with experience using duckdb to quickly select a page of filtered transactions from a single table with a couple of billion records and, say, 30 columns, where each column can be filtered using a simple WHERE clause? Let's say 10 years of payment order data. I am wondering since this is not an analytical scenario.

Doing that in Postgres takes some time, and even a simple count(*) takes a lot of time (with all columns indexed).
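
For reference, the kind of query I mean would look something like this (table and column names are made up); the question is whether it stays interactive at a couple of billion rows:

    SELECT *
    FROM payment_orders
    WHERE debtor_id = 12345
      AND order_date BETWEEN DATE '2020-01-01' AND DATE '2020-12-31'
    ORDER BY order_date DESC
    LIMIT 50 OFFSET 200;  -- page 5 of the filtered results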

reply
morganherlocker
1 hour ago
[-]
I've used duckdb a lot at this scale, and I would not expect something like this to take more than a few seconds, if that. The only slow duckdb queries I have encountered either involve complex joins or globbing across many files.
reply
tjchear
3 hours ago
[-]
I’ve not used duckdb before nor do I do much data analysis so I am curious about this one aspect of processing medium sized json/csv with it: the data are not indexed, so any non-trivial query would require a full scan. Is duckdb so fast that this is never really a problem for most folks?
reply
RobinL
2 hours ago
[-]
It is true that for json and csv you need a full scan, but there are several mitigations.

The first is simply that it's fast - for example, DuckDB has one of the best csv readers around, and it's parallelised.

Next, engines like DuckDB are optimised for aggregate analysis, where your single query processes a lot of rows (often a significant % of all rows). That means that a full scan is not necessarily as big a problem as it first appears. It's not like a transactional database where often you need to quickly locate and update a single row out of millions.

In addition, engines like DuckDB have predicate pushdown, so if your data is stored in parquet format you do not need to scan every row, because the parquet files themselves hold metadata about the values contained within them.

Finally, formats like parquet are columnar, so DuckDB only needs to scan the columns referenced in the query, rather than processing whole rows even though you may only be interested in one or two columns.
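
A concrete example of those last two points (file and column names are hypothetical): only the referenced columns are read, and row groups whose min/max metadata rules out the filter are skipped entirely.

    SELECT station_name, num_services
    FROM 'services/*.parquet'
    WHERE year = 2024;  -- pushed down to the parquet row-group metadata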

reply
biophysboy
3 hours ago
[-]
Zonemaps are created for columns automatically. I process somewhat large tables w/ duckdb regularly (100M rows) and never have any problems.
reply
riku_iki
2 hours ago
[-]
that's true for duckdb native tables, but the question was about json.
reply
biophysboy
52 minutes ago
[-]
They said json and csv - it handles both!
reply
riku_iki
49 minutes ago
[-]
Whether it handles them depends on size. But what I was trying to say is that there are no zonemaps for json.
reply
ayhanfuat
2 hours ago
[-]
If you are going to query it frequently, then json/csv might become an issue. I think the reason it doesn't become a problem for duckdb/polars users is that we generally convert them to parquet after the first read.
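
The one-off conversion is a single statement (file names hypothetical):

    COPY (SELECT * FROM read_csv('raw_data.csv'))
    TO 'raw_data.parquet' (FORMAT parquet);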
reply
simlevesque
3 hours ago
[-]
But when you convert your json or csv to a columnar format, if you have say 10 columns, each column is stored separately on your disk instead of all together. So a scan for one column only needs to read a tenth of the disk space used for the data. Obviously this depends on the columns' content.
reply
gdulli
2 hours ago
[-]
But you can have a surprisingly large amount of data before the inefficiency you're talking about becomes untenable.
reply
akhundelar
3 hours ago
[-]
Not a duckdb user, but I use polars a lot (mentioned in the article).

Depends on your definition of medium sized, but for tables of hundreds of thousands of rows and ~30 columns, these tools are fast enough to run queries instantly or near instantly even on laptop CPUs.

reply
mpalmer
3 hours ago
[-]
I guess the question is: how much is medium? DuckDB can handle quite a lot of data without breaking a sweat. Certainly if you prefer writing SQL for certain things, it's a no-brainer.
reply
biophysboy
3 hours ago
[-]
I think my favorite part of duckdb is its flexibility. It's such a handy little Swiss Army knife for doing analytical processing in scientific environments (messy data w/ many formats).
reply
film42
2 hours ago
[-]
Just 10 minutes ago I was working with a very large, semi-malformed Excel file generated by a mainframe. DuckDB was able to load it with all_varchar (just keep everything as a string) in under a second.

I'm still waiting for Excel to load the file.
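
For reference, the same trick via the CSV reader, where all_varchar is the option I'm sure is documented (the file here is a hypothetical CSV export rather than the original Excel file):

    -- keep every column as a string instead of inferring types
    SELECT * FROM read_csv('mainframe_export.csv', all_varchar = true);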

reply
oulu2006
5 hours ago
[-]
That's really interesting, I love the idea of being able to use columnar support directly within postgresql.

I was thinking of using Citus for this, but possibly using duckdb is a better way to do it. Citus comes with a lot more out of the box, but duckdb could be a good stepping stone.

reply
biophysboy
3 hours ago
[-]
It's a really handy tool. I've queried basically everything you can w/ duckdb - csv, json, s3 buckets, MS SQL servers, excel sheets, pandas dataframes, etc - and have had very few issues.
reply
smithclay
4 hours ago
[-]
Agree with the author, will add: duckdb is an extremely compelling choice if you’re a developer and want to embed analytics in your app (which can also run in a web browser with wasm!)

Think this opens up a lot of interesting possibilities, such as more powerful analytics notebooks like marimo (https://marimo.io/) … and that's just one example of many.

reply
canadiantim
3 hours ago
[-]
The wasm build is pretty heavy data-wise though; I'm hoping it'll eventually be lighter, for easier loading on not-so-good devices.
reply
efromvt
1 hour ago
[-]
I'd really love a minimalist version, though I'm not sure how much it's feasible for them to shrink it. As long as it doesn't get bigger and devices keep getting faster, I suppose?
reply
countrymile
1 hour ago
[-]
It's worth noting that R has a great duckdb API as well. It saved me a lot of time when dealing with a 29GB CSV file and splitting it into separate parquet files on a low-RAM ONS server a few months back.

https://r.duckdb.org/
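
In DuckDB SQL (callable from the R API as well), that kind of split can be one streaming COPY; the partition column and paths below are made up:

    COPY (SELECT * FROM read_csv('big_file.csv'))
    TO 'out_dir' (FORMAT parquet, PARTITION_BY (year));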

reply
s-a-p
3 hours ago
[-]
"making DuckDB potentially a suitable replacement for lakehouse formats such as Iceberg or Delta lake for medium scale data" > I'm a Data Engineering noob, but DuckDB alone doesn't do metadata & catalog management, which is why they've also introduce DuckLake.

Related question: curious as to your experience with DuckLake if you've used it. I'm currently setting up S3 + Iceberg + DuckDB for my company (a startup) and was wondering which to pick between Iceberg and DuckLake.

reply
nchagnet
2 hours ago
[-]
We're using ducklake with data storage on Google Cloud Storage and the catalog inside a Postgres database, and it's a breeze! It may not be the most mature product, but it's definitely a good setup for small to medium applications which still require a data lake.
reply
biophysboy
3 hours ago
[-]
DuckLake is pretty new, so I guess it would depend on whether you need a more mature, fully-featured app.
reply
pattar
2 hours ago
[-]
I went to a talk by the MotherDuck team about why they built DuckLake instead of leaning in more on Iceberg. The key takeaway is that instead of storing all the table metadata inside files on S3 and dealing with latency and file IO, they store all of that info inside a duckdb table. It seemed like a good idea and worked smoothly when I tried it; however, it is not quite in a stable production state yet, as it is still <1.0. They have a nice talk about it on YouTube: https://youtu.be/hrTjvvwhHEQ?si=WaT-rclQHBxnc9qV
reply
rustyconover
48 minutes ago
[-]
DuckDB is awesome and Robin is too!
reply
clumsysmurf
4 hours ago
[-]
DuckDB has experimental builds for Android ... I'm wondering how much work it would take to implement a Java API for it similar to sqlite (Cursor, etc).
reply