ggsql: A Grammar of Graphics for SQL
99 points | 1 hour ago | 11 comments | opensource.posit.co
anentropic
58 minutes ago
Maybe I skim-read it too fast, but I did not find any clear description in the blog post or website docs of how this relates to SQL databases

I was kind of guessing that it doesn't run in a database, and that it's a SQL-like syntax for a visualisation DSL handled by a front-end chart library.

That appears to be what is described in https://ggsql.org/get_started/anatomy.html

But then https://ggsql.org/faq.html has a section, "Can I use SQL queries inside the VISUALISE clause," which says, "Some parts of the syntax are passed on directly to the database".

The homepage says "ggsql interfaces directly with your database"

But it's not shown how that happens AFAICT

confused

tantalor
3 minutes ago
> SQL databases ... confused

"SQL" and "databases" are different things

SQL is a declarative language for data manipulation. You can use SQL to query a database, but there's nothing special about databases. You can also write SQL to query other non-database sources like flat files, data streams, or data in a program's memory.
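A toy illustration of that point (my own sketch, not from the thread): Python's stdlib sqlite3 module will happily run SQL over rows that exist only in program memory, and engines like DuckDB can likewise query flat files such as CSVs directly.

```python
import sqlite3

# SQL over data held entirely in program memory: sqlite3's ":memory:"
# mode never touches a database server or a file on disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (page TEXT, views INTEGER)")
conn.executemany(
    "INSERT INTO visits VALUES (?, ?)",
    [("home", 120), ("about", 45), ("home", 80)],
)
total = conn.execute(
    "SELECT page, SUM(views) FROM visits GROUP BY page ORDER BY page"
).fetchall()
print(total)  # [('about', 45), ('home', 200)]
```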

Conversely, you can query a database without SQL.

thomasp85
53 minutes ago
That is fair - it is a somewhat special concept.

ggsql connects directly with your database backend (if you wish - you can also run it with an in-memory DuckDB backend). Your visual query is translated into a SQL query for each layer of the visualisation and the resulting table is then used for rendering.

E.g.

VISUALISE page_views AS x FROM visits DRAW smooth

will create a SQL query that calculates a smoothing kernel over the data and returns points along it. Those points are then used to create the final line chart.
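The generated SQL isn't shown in the post, so purely as a sketch of the idea (stdlib sqlite3, with a centred moving average standing in for ggsql's actual smoothing kernel; requires SQLite 3.25+ for window functions), the statistic can be computed in-database so that only the smoothed points travel back to the renderer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (t INTEGER, page_views INTEGER)")
conn.executemany(
    "INSERT INTO visits VALUES (?, ?)",
    [(1, 10), (2, 30), (3, 20), (4, 40), (5, 30)],
)

# Crude stand-in for DRAW smooth: a window-function moving average,
# computed by the database rather than by the plotting front end.
points = conn.execute("""
    SELECT t,
           AVG(page_views) OVER (
               ORDER BY t ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING
           ) AS smoothed
    FROM visits
    ORDER BY t
""").fetchall()
print(points)  # [(1, 20.0), (2, 20.0), (3, 30.0), (4, 30.0), (5, 35.0)]
```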

georgestagg
40 minutes ago
ggsql has the concept of a "reader", which can be thought of as the way ggsql interfaces with a SQL database. It handles the connection to the database and generating the correct dialect of SQL for that database.

As an alpha, we support just a few readers today: DuckDB, SQLite, and an experimental ODBC reader. We have largely focused development on driving DuckDB with local files, though DuckDB has extensions to talk to some other types of database.

The idea is that ggsql takes your visualisation query and generates a selection of SQL queries to be executed on the database. It sends these queries using the reader, then builds the resulting visualisation with the returned data. That is how we can plot a histogram from very many rows of data: the statistics required to produce a histogram are converted into SQL queries, and only a few points are returned to us to draw bars of the correct height.
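A minimal sketch of that push-down (stdlib sqlite3, not ggsql's actual generated SQL): 100,000 raw rows stay in the database, and only ten (bin, count) rows come back to size the bars.

```python
import random
import sqlite3

random.seed(1)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (value REAL)")
conn.executemany(
    "INSERT INTO measurements VALUES (?)",
    [(random.uniform(0, 10),) for _ in range(100_000)],
)

# The histogram statistic runs in SQL: one (bin, count) row per bar
# crosses the wire, not the 100,000 underlying points.
bins = conn.execute("""
    SELECT CAST(value AS INTEGER) AS bin, COUNT(*) AS n
    FROM measurements
    GROUP BY bin
    ORDER BY bin
""").fetchall()
print(len(bins))  # 10 bins instead of 100,000 rows
```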

By default ggsql will connect to an in-memory DuckDB database. If you are using the CLI, you can use the `--reader` argument to connect to files on disk or an ODBC URI.

If you use Positron, you can do this a little more easily through its dedicated "Connections" pane, and the ggsql Jupyter kernel has a magic SQL comment that can be issued to set up a particular reader. I plan to expand a little more on using ggsql with these external tools in the docs soon.

nojito
17 minutes ago
Highly suggest leveraging ADBC. I would love to use this against our BigQuery tables.
password4321
55 minutes ago
Yes, this was my question as well; an example showing all the plumbing/dependencies needed to generate a graph from an external database server would be very helpful.
thomasp85
51 minutes ago
We certainly plan to create a few videos showing how to set it up and use it. If you use it in Positron with the ggsql extension, it can interact directly with the Connections pane to connect to the various backends you have there.
getnormality
31 minutes ago
I skimmed the article for an explanation of why this is needed, what problem it solves, and didn't find one I could follow. Is the point that we want to be able to ask for visualizations directly against tables in remote SQL databases, instead of having to first pull the data into R data frames so we can run ggplot on it? But why create a new SQL-like language? We already have a package, dbplyr, that translates between R and SQL. Wouldn't it be more direct to extend ggplot to support dbplyr tbl objects, and have ggplot generate the SQL?

Or is the idea that SQL is such a great language to write in that a lot of people will be thrilled to do their ggplots in this SQL-like language?

EDIT: OK, after looking at almost all the documentation, I think I've finally figured it out. It's a standalone visualization app with a SQL-like API that currently has backends for DuckDB and SQLite and renders plots with Vega-Lite. They plan to support more backends and renderers in the future. As a commenter below said, it's for SQL specialists who don't know Python or R.

nchagnet
36 seconds ago
I was quite psyched when I read this so maybe I can tell you why it's interesting to me, although I agree the announcement could have done a better job at it.

In my experience, the only thing the various data roles (analysts, scientists, and engineers) share is SQL. As you said, you could do the same in R, but your project may not be written in R or Python; it likely uses a SQL database and some engine to access the data.

Also, I've been using marimo notebooks a lot for analysis, where it's so easy to write SQL cells against the built-in DuckDB that plotting directly from SQL would be great.

And finally, I have found Python APIs for plotting really difficult to remember and get used to. The amount of boilerplate for a simple scatterplot in matplotlib is ridiculous, even with an LLM. So a unified grammar within the unified query language would be pretty cool.

nojito
15 minutes ago
It seems to be for SQL users who don't know Python or R.
nchagnet
6 minutes ago
I would even add that it fits into a more general trend where operations are done within SQL instead of in a script/program that would use SQL to load data. Examples of this are DuckDB in general, and BigQuery with all its LLM and ML functions.
kasperset
35 minutes ago
Will this ever integrate the rest of the ggplot2-dependent packages described here: https://exts.ggplot2.tidyverse.org/gallery/? Sorry if it is already mentioned somewhere.
thomasp85
10 minutes ago
I don't think we will get the various niche geoms that have been developed by the ggplot2 community anytime soon.

The point of this is not to supersede ggplot2 in any way, but to provide a different approach which can do a lot of the things ggplot2 can, and some that it can't. But ggplot2 will remain more powerful for a lot of tasks for many years to come, I predict.

hei-lima
3 minutes ago
Really cool!
efromvt
44 minutes ago
Love the layering approach - that solves a problem I’ve had with other SQL/visual hybrids as you move past the basic charts.
gh5000
33 minutes ago
Is it conceivable that this could become a DuckDB extension, such that it can be used from within the DuckDB CLI? That would be pretty slick.
thomasp85
12 minutes ago
That is conceivable. It's not a top priority, as we want to focus on making this a great experience for every backend, but it's certainly something we are thinking of.
kasperset
1 hour ago
Looks intriguing. Brings plotting to SQL instead of “transforming” SQL for plotting.
thomasp85
1 hour ago
The new visualisation tool from Posit. It combines SQL with the grammar of graphics known from ggplot2, D3, and plotnine.
zcw100
59 minutes ago
Don't forget Vega! https://vega.github.io/vega/
thomasp85
1 hour ago
I'm one of the authors - happy to take any questions!
mi_lk
1 hour ago
I don't think D3 uses the grammar of graphics model?
thomasp85
1 hour ago
I'd say it does, though it is certainly much more low-level than e.g. ggplot2. But the basic premise of the building blocks described by Leland Wilkinson is there.
radarsat1
1 hour ago
Wow, love this idea.
data_ders
14 minutes ago
ok, this is definitely up my alley. color me nerd-sniped and forgive the onslaught of questions.

my questions are less about the syntax, which i'm largely familiar with knowing both SQL and ggplot.

i'm more interested in the backend architecture. Looking at the Cargo.toml [1], I was surprised to not see a visualization dependency like D3 or Vega. Is this intentional?

I'm certainly going to take this for a spin and I think this could be incredible for agentic analytics. I'm mostly curious right now what "deployment" looks like, both currently and in a utopian future.

utopia is easier -- what if databases supported it directly?!? but even then I think I'd rather have databases spit out an intermediate representation (IR) that could be handed to a viz engine, similar to how vega works. or perhaps the SQL is the IR?!

another question, arising from composability: how distinct would a ggplot IR be from a metrics layer spec? could i use ggsql to create an IR that I then use R's ggplot to render (or vice versa maybe?)

as for the deployment story today, I'll likely learn most by doing (with agents). My experiment will be to kick off an agent to do something like: extract this dataset to S3 using dlt [2], model it using dbt [3], then use ggsql to visualize.

p.s. @thomasp85, I was a big fan of tidygraph back in the day [4]. love how small our data world is.

[1]: https://github.com/posit-dev/ggsql/blob/main/Cargo.toml

[2]: https://github.com/dlt-hub/dlt

[3]: https://github.com/dbt-labs/dbt-fusion

[4]: https://stackoverflow.com/questions/46466351/how-to-hide-unc...

thomasp85
3 minutes ago
Let me try to not miss any of the questions :-)

ggsql is modular by design. It consists of various reader modules that take care of connecting to different data backends (currently we have DuckDB, SQLite, and ODBC readers), a central plot module, and various writer modules that take care of the rendering (currently only Vega-Lite, but I plan to write my own renderer from scratch).
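ggsql itself is written in Rust, but the reader / plot / writer split can be sketched in a few lines of Python (every name below is illustrative, not ggsql's real API):

```python
import sqlite3
from typing import Protocol


class Reader(Protocol):
    """Hypothetical reader: connects to a backend and runs SQL for us."""
    def query(self, sql: str) -> list[tuple]: ...


class SQLiteReader:
    def __init__(self, path: str = ":memory:") -> None:
        self.conn = sqlite3.connect(path)

    def query(self, sql: str) -> list[tuple]:
        return self.conn.execute(sql).fetchall()


class TextWriter:
    """Hypothetical writer: renders result rows, here as an ASCII bar chart."""
    def render(self, rows: list[tuple]) -> str:
        return "\n".join(f"{x}: " + "#" * n for x, n in rows)


# The plot module would sit in the middle: SQL goes in via a reader,
# a rendered chart comes out via a writer.
reader = SQLiteReader()
reader.conn.executescript(
    "CREATE TABLE t (x TEXT, n INT);"
    "INSERT INTO t VALUES ('a', 3), ('b', 5);"
)
chart = TextWriter().render(reader.query("SELECT x, n FROM t ORDER BY x"))
print(chart)
```

In a design like this, swapping readers only changes the connection layer, and swapping writers only changes the rendering, which matches the modularity described above.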

As for deployment, I can only talk about a utopian future, since this alpha release doesn't provide much that is tangible in that area. The ggsql Jupyter kernel already allows you to execute ggsql queries in Jupyter and Quarto notebooks, so deployment of reports should kinda work already, though we are still looking at making it as easy as possible to move database credentials along with the deployment. I also envision deployment of single .ggsql files that result in embeddable visualisations you can reference on websites etc. Our focus in this area will be Posit Connect in the short term.

I'm afraid I don't know what IR stands for - can you elaborate?

dartharva
34 minutes ago
Would be awesome if somehow coupled into Evidence.dev
reply