Sq.io: jq for databases and more
606 points
2 months ago
| 29 comments
| sq.io
| HN
rout39574
2 months ago
[-]
I love JQ. But ... I'd never considered its query language to be particularly admirable. If I want to ask questions of some databases, I don't understand why I'd choose JQ's XPATH-like language to do it.
reply
VMG
2 months ago
[-]
What I love about `jq` is that I can edit my query (or "program") by appending tokens at the end, similar to Unix pipes. With plain SQL that is not easy.
reply
pkkm
2 months ago
[-]
Sounds like PRQL [1].

[1] https://prql-lang.org/

reply
snthpy
2 months ago
[-]
Some time ago I wrote pq (https://github.com/prql/prql-query) which aimed to be a simple CLI for PRQL to wrangle data on the command line, much like sq.

Unfortunately I haven't had time to maintain it so it is now archived and out of date. I hope that I might get a chance to update it again. More has happened since then and there are low hanging fruit to make it more usable, for example adding connector_arrow (https://github.com/aljazerzen/connector_arrow) support for other databases, etc...

Quick example of how things looked with pq:

    ```sh
    $ pq --from i=invoices.csv "from i | take 5"
    +------------+-------------+-------------------------------+-------------------------+--------------+---------------+-----------------+---------------------+-------+
    | invoice_id | customer_id | invoice_date                  | billing_address         | billing_city | billing_state | billing_country | billing_postal_code | total |
    +------------+-------------+-------------------------------+-------------------------+--------------+---------------+-----------------+---------------------+-------+
    | 1          | 2           | 2009-01-01T00:00:00.000000000 | Theodor-Heuss-Straße 34 | Stuttgart    |               | Germany         | 70174               | 1.98  |
    | 2          | 4           | 2009-01-02T00:00:00.000000000 | Ullevålsveien 14        | Oslo         |               | Norway          | 0171                | 3.96  |
    | 3          | 8           | 2009-01-03T00:00:00.000000000 | Grétrystraat 63         | Brussels     |               | Belgium         | 1000                | 5.94  |
    | 4          | 14          | 2009-01-06T00:00:00.000000000 | 8210 111 ST NW          | Edmonton     | AB            | Canada          | T6G 2C7             | 8.91  |
    | 5          | 23          | 2009-01-11T00:00:00.000000000 | 69 Salem Street         | Boston       | MA            | USA             | 2113                | 13.86 |
    +------------+-------------+-------------------------------+-------------------------+--------------+---------------+-----------------+---------------------+-------+
    $ # When there is only one input table then this automatically becomes the source relation, i.e. `from i | ` is prepended to the query
    $ # so this can be simplified to:
    $ pq --from invoices.csv "take 5"
    ...
    ```
reply
VMG
2 months ago
[-]
Kind of. However, at first glance it seems like the separator in PRQL is a newline instead of a `|`, which is less ergonomic in a shell.
reply
pie_flavor
2 months ago
[-]
Newline and pipe mean the same thing and are interchangeable.
reply
VMG
2 months ago
[-]
nice - are there any PRQL CLI tools for json?
reply
snthpy
2 months ago
[-]
Not directly, but DuckDB, ClickHouse, and GlareDB all support PRQL and they each have CLIs.

In a previous HN comment [1] I showed how you can leverage PRQL to make your RDBMS's JSON functionality more ergonomic. For example:

    ```sh
    > prqlc compile <<EOF - | duckdb
    let get = path obj -> s"""{obj} -> {path}"""
    let getstr = path obj -> s"""{obj} ->> {path}"""
    let extract = obj path -> s"""json_extract({obj}, {path})"""
    
    from [{data='{"duck": [1, 2, 3]}'}]
    select { (data | get '$.duck[0]'), (data | getstr '$.duck[1]'), extract data '$.duck[2]'}
    EOF
    ┌───────────────────────┬──────────────────────────┬───────────────────────────────────┐
    │ "data" -> '$.duck[0]' │ ("data" ->> '$.duck[1]') │ json_extract("data", '$.duck[2]') │
    │         json          │         varchar          │               json                │
    ├───────────────────────┼──────────────────────────┼───────────────────────────────────┤
    │ 1                     │ 2                        │ 3                                 │
    └───────────────────────┴──────────────────────────┴───────────────────────────────────┘
    ```
More details in that post.

Unfortunately I don't think this really addresses the grandparent comment though because you're still using jsonpath type expressions to unpack the JSON objects. If you really wanted to use PRQL for everything you would have to first convert and flatten your JSON data into relational tables.

1: https://news.ycombinator.com/item?id=37569946
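To make the flattening point concrete, here is a minimal sketch (names and schema invented for illustration) using Python's stdlib `sqlite3`: unnest the array into rows first, and then ordinary relational querying works on it.

```python
import sqlite3

# Hypothetical nested document, echoing the '{"duck": [1, 2, 3]}' example above.
doc = {"duck": [1, 2, 3]}

# Flatten: one row per array element, so a relational engine can query it.
rows = [(i, v) for i, v in enumerate(doc["duck"])]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE duck (idx INTEGER, val INTEGER)")
con.executemany("INSERT INTO duck VALUES (?, ?)", rows)

# Relational equivalent of extracting '$.duck[2]'.
print(con.execute("SELECT val FROM duck WHERE idx = 2").fetchone()[0])  # → 3
```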

reply
egeozcan
2 months ago
[-]
Any language with map/reduce/flatten can more or less do that, no? I agree with your point about SQL though. I find it personally horrible for data-shaping tasks and tend to load everything in memory (as long as feasible).
reply
pletnes
2 months ago
[-]
KQL has some good things, pipes is one of them I think.
reply
AlphaSite
2 months ago
[-]
I think for certain types of data manipulation and querying it's notably more succinct; SQL with CTEs is a little better, but still far more verbose than data piping.
reply
larodi
2 months ago
[-]
A reply by someone who sides with your (potentially unpopular) opinion. With all due respect to DSLs, IMHO only a few people can get to this APL-level of abstraction and cryptic choice of APIs, and would have the nerve to write it.

From a learning perspective, jq seems much more difficult than regex, for example; its learning curve is potentially steeper than that of CSS and XPath, which are other examples of languages for querying tree-like structures. While LLMs are welcome to write the jq for me, my work has relatively few JSON transformations, and for the most part handling these in Python/JS/Perl is okay.

Stating all this with much fascination for the jq language itself, as technology, but not as a tool that I find the need for on a daily basis. Besides, for me it is much easier to feed and transform data into Postgres (or even SQLite, which can also be challenging) rather than crunch it with pandas or R, where you can also find fourth-generation language capabilities but performance lags.

reply
hnbad
2 months ago
[-]
Presumably the target audience is people who already frequently use JQ and don't want to juggle different query languages when dealing with different data sources?
reply
neilotoole
2 months ago
[-]
Developer here.

That was exactly the target audience.

Note that sq doesn't just handle relational DBs, it also has (varying quality) support for CSV, JSON, Excel, and so on. At the time (2013) I wasn't aware of a convenient one-liner mechanism for munging all of those together from the command line.
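Not sq's implementation, just a rough standard-library sketch (file contents and schema invented) of the kind of cross-format munging described above: load a CSV source and a JSON source into one in-memory SQLite database and join them with a single query.

```python
import csv
import io
import json
import sqlite3

# Invented stand-ins for two heterogeneous sources: a CSV file and a JSON feed.
users_csv = "id,name\n1,Ada\n2,Grace\n"
orders_json = '[{"user_id": 1, "total": 9.5}, {"user_id": 1, "total": 3.0}]'

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, name TEXT)")
con.execute("CREATE TABLE orders (user_id INTEGER, total REAL)")
con.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [(int(r["id"]), r["name"]) for r in csv.DictReader(io.StringIO(users_csv))],
)
con.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(o["user_id"], o["total"]) for o in json.loads(orders_json)],
)
# Join across what started as two different formats.
row = con.execute(
    "SELECT u.name, SUM(o.total) FROM users u "
    "JOIN orders o ON o.user_id = u.id GROUP BY u.name"
).fetchone()
print(row)  # → ('Ada', 12.5)
```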

reply
fer
2 months ago
[-]
Same, if anything I'd look for a SQL-like language for JSON.
reply
lucideer
2 months ago
[-]
This!

JQ is great because JSON didn't have a query language & needed one. JQ isn't the best query language - it's just the one that won adoption.

DBs already have query languages that are mostly superior to JQ's.

reply
baq
2 months ago
[-]
honestly I'd rather have sql for json than jq for a database... guess this is exactly what clickhouse-local does
reply
dewey
2 months ago
[-]
Sometimes I wonder if it wouldn't be more efficient for people to just learn SQL instead of trying to build tools or layers on top of it that introduce more complexities and are harder to search for.
reply
remon
2 months ago
[-]
HN is inundated with posts announcing paper-thin abstractions on top of existing technology, or utilities that just move the goalposts of what knowledge you need to be effective. It's a weird trend that seems almost entirely motivated by people wanting open source projects on their resume, or seeking funding if it's a startup.
reply
neilotoole
2 months ago
[-]
> It's a weird trend that seems almost entirely motivated by people wanting open source projects in their resume

Developer here. I can't speak to what you see as the weird trend, but I can speak about sq's history:

- I created the first version of sq circa 2013 as my own personal dev tool to address some pain points (amongst other things: getting JSON out of various DBs from the command line in a consistent manner, and wanting an easy way to inspect DB schemas, i.e. "sq inspect")

- It was starting to be a minor PITA dealing with colleagues asking for the latest revision of the tool, so I open-sourced it. In 2016 or so I think?

- sq is FL/OSS and will remain so, no funding is sought, not even one of those "buy me a coffee" thingies

- I didn't create this HN post about sq, nor do I know the person who did. But thanks for sq's 15 mins of fame, kind stranger

reply
syndicatedjelly
2 months ago
[-]
Thank you for creating a composable, simple to grok Unix tool. sq is an example of what is so great about the Unix philosophy.

Personally, I can see a spot for sq in my developer toolbox. Instead of reaching for DBeaver’s bloated interface, or psql’s perplexing TUI, I like the idea of picking up an sq multitool to answer some simple questions quickly about my databases.

Thanks again!

reply
dlisboa
2 months ago
[-]
> It's a weird trend that seems almost entirely motivated by people wanting open source projects in their resume

That’s really harsh and misguided. If people didn’t do “paper thin abstractions” projects on their own time for the simple pleasure of doing it we wouldn’t have 90% of the successful projects we have today.

Let people have fun and don’t judge their motives when they’re making something Open Source. I can guarantee the person just thought “this would be cool to have” and implemented it.

reply
6LLvveMx2koXfwn
2 months ago
[-]
Unless you're the GP, your guarantee about the person's motivation is as meaningless as the post you're replying to.
reply
neilotoole
2 months ago
[-]
I'm not GP, but I am the sq developer.

> the person just thought “this would be cool to have” and implemented it.

Correct. This is true of much OSS, or at least I've always suspected so.

reply
ddispaltro
2 months ago
[-]
I think his point is that we should treat something given freely, charitably
reply
EGreg
2 months ago
[-]
Charitably?

This! Is! HN!

reply
raydev
2 months ago
[-]
Thankfully that doesn't apply to this post. sq is a great, full-featured tool.
reply
WhitneyLand
2 months ago
[-]
What a cynical take. Most people working on side projects do so because they find them interesting or useful rather than to just score resume points.

They’re not moving anyone’s goalposts; all usage is voluntary.

The fact that HN is mentioned is also confusing. If something had no value beyond paper thin abstraction I doubt we’d be seeing it on the front page with 500 votes.

Even if this project turned out to be less than hoped it seems counterproductive to complain about people creating and exploring. It’s a good thing to see and natural selection will do any sorting needed.

reply
goosejuice
2 months ago
[-]
There are several features here that go beyond 'learn SQL'. I don't see how you could jump to this conclusion if you read the docs.

Why does pgcli exist? What about DBeaver, DataGrip, pgAdmin, Postico, LINQ, ORMs? It's almost as if people value different interfaces to databases.

Maybe it's not for you, but it's not hard to imagine that someone, beyond the author, might find it useful. Maybe it's just me but to dismiss those people as individuals that should just 'learn SQL' is a pretty rude thing to say.

reply
krosaen
2 months ago
[-]
I know SQL pretty well and still find value in this kind of tool. Creating schemas and inserting data is a clunky part of SQL; the query language is where it really shines. So I can imagine using this to quickly insert some data or get familiar with the schema, and then dive in with normal SQL queries.
reply
8338550bff96
2 months ago
[-]
Lots of people struggle to understand the declarative programming paradigm. It is really pesky because, since you're declaring what kinds of results you expect rather than dictating what must be done, you're forced to define your boundary conditions up front. Much more fun and exciting to charge ahead without worrying about such things.
reply
raydev
2 months ago
[-]
Sometimes I wonder if it wouldn't be more efficient for people to just learn assembly instead of trying to build tools or layers on top of it that introduce more complexities and are harder to search for.
reply
fforflo
2 months ago
[-]
Paraphrasing Spencer:

Those who don't understand SQL are doomed to reinvent it, poorly

reply
EGreg
2 months ago
[-]
Actually, when it comes to SQL specifically, it almost demands using a language one level higher up.

Any language that features the ability to embed/interpolate unescaped quotes is not secure enough to be used directly by professional developers. Just not worth it to track down all the potential injection attacks.
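The quoting hazard is easy to show in a few lines. A sketch using Python's stdlib `sqlite3` with invented data; the mechanics are the same for any driver where unescaped quotes can be interpolated:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT)")
con.execute("INSERT INTO users VALUES ('alice')")

evil = "x' OR '1'='1"  # hypothetical malicious input

# Interpolation: the quote inside `evil` changes the query's structure.
unsafe = con.execute(f"SELECT count(*) FROM users WHERE name = '{evil}'").fetchone()[0]

# Placeholders: the driver escapes the value, so the structure stays fixed.
safe = con.execute("SELECT count(*) FROM users WHERE name = ?", (evil,)).fetchone()[0]
print(unsafe, safe)  # → 1 0
```

The interpolated version matches every row because the injected `OR '1'='1'` is parsed as SQL; the parameterized version correctly matches none.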

And if an abstraction lets you understand the structure of queries (eg for sharding them) and even writes the code for you and balances parentheses, flags other syntax errors at compile time etc. then it’s a net benefit.

And of course there is the Adapter Pattern to abstract away connection logic and differences between mysql, pg, etc.

I wrote the “Db” library 12 years ago in PHP and still use it.

I never released it on HN for reasons like this. But I might. Here it is, embedded in my larger library, but I might break it out:

https://github.com/Qbix/Platform/tree/main/platform/classes/...

Documentation:

https://qbix.com/platform/guide/database

https://qbix.com/platform/guide/models

One of the coolest features:

https://qbix.com/platform/guide/models#relations

If you read the above, please let me know your feedback!

Question: Should I release it as a Show HN?

reply
EGreg
2 months ago
[-]
I did it -- posting it here: https://github.com/Qbix/Db
reply
Hugsun
2 months ago
[-]
There's no reason not to post it.
reply
simplify
2 months ago
[-]
SQL itself is a reinvention of Prolog, poorly. Its core value is that we're stuck with it.
reply
bazoom42
2 months ago
[-]
SQL is based on relational algebra. Poorly some would argue.
reply
Xenoamorphous
2 months ago
[-]
It's funny because recently a "full stack" dev who's in reality 95% frontend was telling me he's not a fan of Tailwind and that he'd rather learn "proper" CSS.

And the irony is of course, he never wants to use a relational DB to avoid SQL, so No-SQL DB it is.

reply
freedomben
2 months ago
[-]
I've unfortunately met a number of "full stack" devs recently that are clearly 95% frontend as well, and have a very similar attitude as that person toward backend tech in general. We of course should not take such a small sample size and draw any sort of conclusions, but it's definitely a trend I'm looking out for now whenever someone claims to be "full stack." If the only backend stuff you've done is Firebase, or just a handful of serverless functions that don't do much, or if the "backend" you have in mind is just SSR-ing templates into HTML, then you really should qualify any claims about backend.

One recently after

reply
nikolamus
2 months ago
[-]
Wat
reply
matt_s
2 months ago
[-]
The sq and jq tools are both neat command-line gimmicks, but in my workflow their usefulness is very short-lived. I can't imagine using sq on a query involving a handful of tables and some inner/outer joins. How would I know it's outputting the correct SQL? If you mess up joins you end up with bad output.

Here's my theory: some developers see simple languages that are easy to learn and want to build something more complex to output that language. Maybe it's a subconscious thing.

HTML is another one: if explained simply, HTML and CSS basic use cases should be easy-ish for nearly anyone to pick up. The fact that we have so many over-engineered frameworks and libraries out there that generate these is evidence of over-complicating simple things. Maybe it's called resume-driven development? Maybe people see genuinely useful frameworks that get wide adoption and are wannabes?

reply
neilotoole
2 months ago
[-]
> How would I know its outputting the correct SQL? If you mess up joins you end up with bad output.

The generated SQL is output in sq's logs, when logging is enabled.

https://sq.io/docs/config#logging

reply
matt_s
2 months ago
[-]
If I have to go look at log files to see if the intermediary tool is doing the right thing, I might as well just write the SQL myself, right? jq has more use cases dealing with JSON that could come in handy. sq seems to handle trivial use cases, but for complicated data models it's going to be a hindrance.
reply
_hyn3
2 months ago
[-]
I tend to agree! But this seems to have a subtly different use case. It's actually very cool. I can see this being a good addition to my toolbox.
reply
zmmmmm
2 months ago
[-]
I think it'd be a moot point if SQL wasn't painful and awkward to work with in the first place. But database purists control it and won't let go, so we will have to live with everyone else inventing layers to make their lives easier.
reply
dewey
2 months ago
[-]
For me this feels like the complaints about error handling in Go. People who work with it all the time don't even think about it past the first week. If you are starting out it might bother you because you are not used to it.

Personally I really like working with SQL and find it quite elegant, I always encourage people to use it as it really is a job-superpower if you can just dig up issues directly in the DB quickly. It has a long history so for every question you have there's many answers or avenues to ask them.

reply
umanwizard
2 months ago
[-]
I spent 5 years working on a SQL database (materialize.com) and I think SQL is awful.

https://www.scattered-thoughts.net/writing/against-sql (by another former Materialize employee) is a good takedown.

reply
deergomoo
2 months ago
[-]
I agree with the GP’s point that people should probably just learn SQL, but there are things about SQL that are objectively bad, especially from the perspective of a software developer.

It’s not a very composable or consistent language, so I think it makes total sense that we see so many abstractions over it.

reply
jeltz
2 months ago
[-]
The issue with abstractions over SQL is that while they fix some problems, they always introduce a bunch of new ones, so in the end SQL is still preferable. I have yet to see an example where that is not the case.
reply
setr
2 months ago
[-]
The fundamental issue is that they have to generate SQL at the end of the day, so there’s a hard limit on how much you can really change. I don’t know why every database treats SQL as the only API, even Postgres. Even the extension systems, which have the opportunity to hook directly into DB internals (and has no standardization to bother meeting) end up with SQL as the API to do actual db operations.
reply
anarazel
2 months ago
[-]
> Even the extension systems, which have the opportunity to hook directly into DB internals (and has no standardization to bother meeting) end up with SQL as the API to do actual db operations.

FWIW, nothing forces an extension to do so. I'm pretty sure there are several that do DML using lower level primitives.

reply
deergomoo
2 months ago
[-]
For me query builders are the quintessential example. Not ORMs, just thin layers that allow you build up a query in pieces.

If you have cases where you might need to conditionally join, or have any reasonably complex filtering based on input, building up a plain SQL statement using string interpolation at the call site gets very messy very quickly.

reply
jeltz
2 months ago
[-]
I have used many query builders and experienced both the upsides and the painful downsides. By far the best one I have used has been jOOQ (Lukas Eder is a genius who really understands SQL), but even that often causes more pain than it helps. Compared to the issues caused by them, in most cases I prefer string interpolation of SQL.
reply
zmmmmm
2 months ago
[-]
Take the simple example of joining via foreign key to another table in a parent-child relationship. SQL makes you redundantly specify the other table even though it is fully defined by the foreign key relationship. Purists will say "what if you actually want to join on some other column" or "what if there are multiple foreign key relationships", and a plethora of other "what ifs", and all of these ignore the reality that 99.99% of the time it is completely unambiguous; they are just happy that millions of developers all around the world are uselessly typing redundant terms into queries. This is what leads people to eventually say "screw it, I'm scratching that itch", and we get "yet another abstraction layer".
reply
jimbokun
2 months ago
[-]
I like Go a lot and can work with its error-handling paradigm, but I still often wonder if the same semantics could be accomplished with less verbosity, in a way that makes the underlying algorithm clearer when reading the code.
reply
jahsome
2 months ago
[-]
I've always felt the "verbosity" is a feature.

Verbosity is in the eye of the beholder; to me, it's the verbosity of error handling that makes the algorithm clear. Of course I recognize that this opinion probably changes depending on whether the person reading a given bit of code views error handling as part of the algorithm.

reply
KronisLV
2 months ago
[-]
> People who work with it all the time, don't even think about it past the first week.

I think SQL is often quite awkward, at least when you look at people exploring alternatives, like https://prql-lang.org/

Not just “the standard” variety, but also all of the vendor specific varieties that you’ll use in practice (MySQL/MariaDB, PostgreSQL, SQLite, Oracle, SQL Server etc.) and how the features offered by each differ, how the support for custom types differs, how the procedural languages differ, the actual experience of using them, seemingly half baked error messages when compared to most imperative languages, varying support for getting the query optimiser to do what you want (e.g. hints), query plans that aren’t pleasant to stare at, often no support for automated analysis of how things are running and suggestions for indices (e.g. Oracle had that sort of feature, it is sometimes helpful, but the automatically generated indices are or at least last I checked were treated as their own special thing and you couldn’t easily delete them). Even things like varying support for working with geospatial data, JSON or time series data.

Not just that, but also the tooling (or the lack of it) - good luck debugging the execution of stored procedures in your DB or placing breakpoints in there, good luck hooking up observability/tracing solutions as easily as you would for your back end, good luck debugging why your database link calls take 100x longer when executed through JDBC in your app but not in the developer tooling.

Not that ORMs or NoSQL make everything much better, you’d just trade one set of annoyances for another, especially when you end up trying to generate dynamic SQL with the likes of myBatis XML mappers. Don’t get me started on people over fetching or ending up with N+1 problems with most ORMs, or executing bunches of queries against the DB just to retrieve some data that you might as well get in one go with a well crafted DB view. Not that you can’t make your DB schema a mess as well with a liberal application of EAV or OTLT (or just attempting to make polymorphic links).

I think SQL as a language makes sense, but PRQL often feels more sensible. I feel like the current RDBMSes out there are very powerful, but that there are also aspects about them (and the differences between the ones you’ll use) that absolutely suck. It feels like where back end languages are getting progressively better DX, databases instead gave us… CTEs? Some syntactic sugar for JSON in PostgreSQL? Feels like they move painfully slow.

For what it's worth, that's why I welcome every new tool or attempt at improving things, even if they won't replace the usual enterprise stacks of having some RDBMS, a JDBC/ODB/whatever driver and probably an ORM on the back end.

reply
consteval
2 months ago
[-]
I truly have never understood this mentality. To me, SQL is very convenient and quick to work with. It's extraordinarily painless.

I've done data processing in Perl and Python before. THAT is painful. What I think people don't realize is that doing procedural "querying" in a typical PL is, like, 10x the amount of code as an equivalent SQL query. I don't see how the alternative is much better.

SQL just doesn't work like a typical PL, but that's a good thing.

reply
quotemstr
2 months ago
[-]
Yeah. Alternatives to long-established and useful technologies have to meet a high bar before their adoption pays for its disruption.

Things that seem potentially worth it to me:

* seL4

* Google's SQL syntax tweak

* Rust

* GraalVM

* systemd

* Tree sitter

* LSP

* CMake

* Bazel

These all get you a step change improvement in comprehensibility, safety, or something else important.

Things that seem like more churn than they're worth:

* Noise protocol (relative to TLS)

* JMAP (compared to good old IMAP)

* Nim/Zig/etc.

* Wayland (fait accompli now, but still)

* Varlink

* Fish shell

* YAML/TOML

* Sq?

* Meson

I wish we, as an industry, invested more in improving existing technologies instead of continually abandoning and replacing working solutions.

reply
jeltz
2 months ago
[-]
While I agree with your general idea my lists are quite different.

I would move Zig (innovates a lot in language and compiler design) and JMAP (IMAP is horrible and needs to be replaced) up and CMake and Bazel down (I count them the same as Meson). I would say the jury is still out on Google's tweak and on seL4.

reply
mschuster91
2 months ago
[-]
SQL interop is where the pain is at. Using the standard tooling of most database systems, the best you can get is CSV, with all the pains this shithole of a data transfer format brings.
reply
seanhunter
2 months ago
[-]
That doesn't match my experience at all. Every database I'm aware of has multiple options for data export and import for batch and command processing in the standard tooling. CSV is almost never "the best you can get". What is going to be best depends on your use case, and indeed at a bare minimum you can almost always change the field and line separators and get something that works just fine on the command line and avoids most of the stuff people find hard about CSV.

Additionally, if people like jq, they don't need a special tool like this. They can just get the database to output json and use json/jq - another tool isn't needed. In postgres you can do something like this

   select array_to_json(array_agg(row_to_json (r))) from (
      ... put your sql query here...
   ) r;
...and postgres will output a json array for you where each item in the array is a json map of a single row of the output.

I'm sure other databases have similar functionality.
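For example, SQLite can do much the same through its JSON1 functions. A minimal sketch with an invented table, using Python's stdlib `sqlite3` (assumes a SQLite build with JSON support, which is standard in modern builds):

```python
import json
import sqlite3

# Emit a query result as one JSON array of row objects,
# analogous to the Postgres row_to_json/array_to_json snippet above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, "x"), (2, "y")])
out = con.execute(
    "SELECT json_group_array(json_object('a', a, 'b', b)) FROM t"
).fetchone()[0]
print(json.loads(out))  # each row becomes a JSON object in the array
```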

reply
itohihiyt
2 months ago
[-]
What's wrong with CSV? I love me a CSV file.
reply
adammarples
2 months ago
[-]
Having the control characters mixed in with the data, and having no defined encoding.
reply
mschuster91
2 months ago
[-]
Out of the random shit I had to deal with CSV file wrangling in the last two years:

- no defined encoding, so it may be anything from US-ASCII over ISO-8859-x to UTF-8 and it's always a guesswork what it actually is

- no definition of the separator sign, usually it can be a literal comma, a semicolon, or tabs

- escaping of the separator sign in column values is always fun

- escaping of newlines is even MORE fun

- line endings are not specified

Every piece of software there is has its own expectations of what specific kind of format it expects, and to make it worse, using "Microsoft Excel" as a target/source isn't foolproof either, because it behaves differently between the Mac and the Windows versions!

JSON in contrast is clearly defined in all of these issues and has battle-tested implementations in every programming language there is.
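A tiny illustration of the dialect problem (sample data invented): the same record parses correctly only if you already know, out of band, which delimiter and quoting convention the producer used.

```python
import csv
import io

# Two hypothetical exports of the same record; nothing in the bytes
# says which delimiter or quoting convention applies.
comma = 'name,notes\nAda,"likes ""quotes"", and commas"\n'
semi = "name;notes\nAda;likes semicolons\n"

for text, delim in [(comma, ","), (semi, ";")]:
    row = next(csv.DictReader(io.StringIO(text), delimiter=delim))
    print(row["notes"])
# Parse the comma file with the wrong delimiter and you silently
# get one big mangled column instead of an error.
```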

reply
seanhunter
2 months ago
[-]
Most of these issues, while real, don't actually arise in this case, because we're not trying to ETL some random file; we are the ones talking to the database, so we get to choose exactly how the data gets formatted on extract.

For example, here's your list fully handled in postgres:

    1. SET CLIENT_ENCODING TO 'value';  (e.g. 'UTF8')
    2. COPY ... with FORMAT CSV DELIMITER 'delimiter_character' QUOTE 'quote_character'

Now the output format is fully specified and everything just works fine (including for input into excel)

   The values in each record are separated by the DELIMITER character. If the value contains the delimiter character, the QUOTE character, the NULL string, a carriage return, or line feed character, then the whole value is prefixed and suffixed by the QUOTE character, and any occurrence within the value of a QUOTE character or the ESCAPE character is preceded by the escape character.
https://www.postgresql.org/docs/current/sql-copy.html
reply
mschuster91
2 months ago
[-]
Yeah, the problem is not every application can read such files. Particularly when newlines are involved, it's a hit and miss.
reply
seanhunter
2 months ago
[-]
Well, not every application can read any file, and either way "sq" isn't going to make any difference. Like I said, they are real issues but don't arise in this case.
reply
rvalue
2 months ago
[-]
Sometimes I wonder why these SQL standards cost so much to burn a hole in a millionaire's pocket.
reply
jasongill
2 months ago
[-]
This is interesting. I wonder if there is anything that does the opposite - takes JSON input and allows you to query it with SQL syntax (which would be more appealing to an old-timer like me)
reply
eproxus
2 months ago
[-]
This tool can do that:

    $ cat example.json | sq sql 'SELECT * FROM data'
    actor_id  first_name  last_name  last_update
    1         PENELOPE    GUINESS    2020-06-11T02:50:54Z
    2         NICK        WAHLBERG   2020-06-11T02:50:54Z
One of the data source types is 'json', and the command 'sq sql ...' lets you execute SQL directly instead of using sq's default query language (SLQ).
reply
jasongill
2 months ago
[-]
wow, I missed that in the docs but this is exactly what I was thinking!
reply
minikomi
2 months ago
[-]
reply
danielhep
2 months ago
[-]
I'm using DuckDB to parse GTFS data, which comes in a CSV format. It works wonderfully.
reply
kwailo
2 months ago
[-]
clickhouse-local is incredible and, in addition to JSON, supports TSV, CSV, Parquet, and many other input formats. See https://clickhouse.com/blog/extracting-converting-querying-l...
reply
ramraj07
2 months ago
[-]
Why is this better than DuckDB?
reply
kitd
2 months ago
[-]
Why is DuckDB better than clickhouse-local?
reply
snthpy
2 months ago
[-]
duckdb is shorter to type than clickhouse-local and at the command line brevity is king! Of course the winner here is chdb! (And don't talk to me about shell aliases) :-p

While on the topic, how exactly does chdb relate to clickhouse-local?

reply
zie
2 months ago
[-]
In PostgreSQL, you can just select against JSON and query away to your hearts content. They have JSON data types and functions to work on it.
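A rough SQLite analogue of that workflow (invented table; assumes a build with the JSON1 functions, standard in modern SQLite), again via Python's stdlib:

```python
import sqlite3

# Querying into a stored JSON document with a path expression,
# loosely analogous to Postgres's jsonb operators.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs (body TEXT)")
con.execute("""INSERT INTO docs VALUES ('{"duck": [1, 2, 3]}')""")
val = con.execute("SELECT json_extract(body, '$.duck[1]') FROM docs").fetchone()[0]
print(val)  # → 2
```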
reply
masklinn
2 months ago
[-]
That’s pretty much just jq though: you have bespoke JSON-querying capabilities in SQL, but you’re not SQL-querying your JSON.

At least not before Postgres 17, where json_table does provide this capability.

reply
zie
2 months ago
[-]
Thankfully PG17 is released and ready for production querying needs!
reply
nbk_2000
2 months ago
[-]
Octosql does this as well as a few other formats. I've found it useful several times.

https://github.com/cube2222/octosql

reply
snthpy
2 months ago
[-]
This question has come up a few times in this thread. However, I don't see how people expect this to be possible unless they are talking about ndjson with flat records. JSON in general is a deeply nested format, so languages based on relational algebra/calculus, like SQL and PRQL, are not going to be that useful unless the data is flattened and normalised first.
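To make that flattening step concrete, here is a minimal, hypothetical sketch (not any particular tool's actual mechanism): nested objects become flat records with dotted column names that a relational engine can then treat as ordinary columns. Arrays would additionally need unnesting, which this sketch skips:

```python
# Flatten a nested JSON object into a single flat record whose keys are
# dotted paths -- the shape a relational engine can work with.
import json

def flatten(obj, prefix=""):
    out = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            # Recurse into nested objects, extending the dotted path.
            out.update(flatten(value, name + "."))
        else:
            out[name] = value
    return out

doc = json.loads('{"invoice_id": 1, "customer": {"id": 2, "city": "Oslo"}}')
row = flatten(doc)
print(row)  # {'invoice_id': 1, 'customer.id': 2, 'customer.city': 'Oslo'}
```

Once every document is reduced to such a row, SQL or PRQL over the result is straightforward; the hard part, as the parent says, is deciding how to normalise arrays and uneven schemas.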
reply
ikari_pl
2 months ago
[-]
Postgres :) Load it into a jsonb column and the possibilities are endless, including indexing.
reply
beembeem
2 months ago
[-]
let me introduce you to this hot, open source, nosql database that's webscale...
reply
Summerbud
2 months ago
[-]
To be honest, jq is handy but it's so hard to maintain. I've found myself unable to fully read others' jq-related scripts.
reply
rurban
2 months ago
[-]
Better would be the reverse. SQL queries over json: octosql.
reply
renewiltord
2 months ago
[-]
Related is Google’s pipe syntax for SQL https://research.google/pubs/sql-has-problems-we-can-fix-the...
reply
robertclaus
2 months ago
[-]
More tools are always great! Even if it doesn't become the mainstream, it's always great to see people explore new ways of dealing with databases!
reply
prepend
2 months ago
[-]
More good tools are always great. But I don’t think random clutter is always good.

Fortunately we don’t have to see it, so it’s not like it blocks my vision.

But I just wanted to note that the idea that “anything is good” is not really true, and I don’t like its spread, as there’s opportunity cost. I think we need to spend more attention on evaluation, quality, and making good things than on the idea that creating lots of bad things is good in some way.

reply
pratio
2 months ago
[-]
Though I respect and applaud the effort that went into creating this and successfully releasing it, it has fewer features than duckdb at the moment.

DuckDB supports Postgres, MySQL, SQLite, and many other extensions.

Postgres: https://duckdb.org/docs/extensions/postgres

MySQL: https://duckdb.org/docs/extensions/mysql

SQLite: https://duckdb.org/docs/extensions/sqlite

You can try this yourself.

1. Clone this repo and create a postgres container with sample data: https://github.com/TemaDobryyR/simple-postgres-container

2. Install duckdb if you haven't and if you have just access it on the console: https://duckdb.org/docs/installation/index?version=stable&en...

3. Load the postgres extension:

       INSTALL postgres; LOAD postgres;

4. Connect to the postgres database:

       ATTACH 'dbname=postgres user=postgres host=127.0.0.1 password=postgres' AS db (TYPE POSTGRES, READ_ONLY);

5. List the tables:

       SHOW ALL TABLES;

6. Query:

       select * from db.public.transactions limit 10;

Trying to access SQL data without using SQL only gets you so far and you can just use basic sql interface for that.

reply
_hyn3
2 months ago
[-]
Duckdb is different, though. Not having tried SQ but it seems like a better tool for quick declarative data-munges/parsing/etc, while Duckdb is more of a real project tool with real SQL.

https://duckdb.org/docs/api/cli/

reply
mritchie712
2 months ago
[-]
not to mention the dozen+ other sources DuckDB supports (Iceberg, Parquet, CSV, Delta, JSON, etc.).

DuckDB extension support / dev experience is quite good now too. I've been working on some improvements (e.g. predicate pushdown) to the Iceberg extension and it's been pretty smooth.

reply
Gbox4
2 months ago
[-]
If "sq" is pronounced "seek", then is "jq" pronounced "jeek"?
reply
candiddevmike
2 months ago
[-]
This is neat but I'm not really seeing anything I can't do with standard SQL and CLI tools like psql. Seems like you'd learn more reusable things using standard SQL too.
reply
varenc
2 months ago
[-]
I find sq handy when you use it to accomplish things you can't (easily) do with just raw SQL. Things like: exporting certain rows to JSON or CSV, transforming rows into nicely formatted log lines for viewing, or reading in a CSV file and querying it the same way you'd query other databases. It's particularly easy to start using if you're already familiar with jq.

If you use things like `array_to_json(array_agg(row_to_json(....)))` in your psql commands to output some rows to JSON, then sq's `--json` or `--jsonl` is quite a bit easier IMHO. If you know the exact SQL query you want to run you can just do `sq sql '....'` as well, but I agree there's not much point in doing that if you aren't taking advantage of some other sq feature.

reply
maxfurman
2 months ago
[-]
You may know this already, but the SQLite CLI can actually read and query data directly from a csv file, with the right flags
reply
ec109685
2 months ago
[-]
They are Jedis at jq though.
reply
aurareturn
2 months ago
[-]
I agree with @candiddevmike. Might I add that you can do those things fairly easily now with ChatGPT/Claude. I doubt LLMs know how to use Sq.io that much.
reply
neilotoole
2 months ago
[-]
> I'm not really seeing anything I can't do with standard SQL and CLI tools like psql.

Developer here. There are a few features other than the query stuff that I still think are pretty handy.

The "sq inspect" stuff isn't easy to do with the standard CLI tools, or at least wasn't when I started working on sq back in 2013 or so.

https://sq.io/docs/inspect

I also regularly make use of the ability to diff the metadata/schema of different DB instances (e.g. "sq diff @pg_prod @pg_qa").

https://sq.io/docs/diff

reply
hvenev
2 months ago
[-]
The demo appears too stateful for me. The real power of `jq` is its reliability and the ability to reason about its behavior, which stateful tools inherently lack.
reply
peter_d_sherman
2 months ago
[-]
First there was shell scripting, then grep, then sed, then awk, later Perl... well, now there's 'sq'!

Looks like an absolutely great (and necessary!) utility, which will automate many future workflows and dataflows, save countless hours of time collectively for many people en masse, and therefore change the world (allow more people to get more done in less time!) much like Unix, shell scripting, grep, sed, awk and Perl gave the world...

Congratulations on writing what no doubt will become one of the major Unix/Windows/MacOS/Other OS/Linux shell scripting commands in the future, if it isn't already!

Well done!

reply
varenc
2 months ago
[-]
I love sq. It's handy for quickly performing simple operations on DBs and outputting that as CSV or JSON. Though my one wish is that the sq query language (SLQ) supported substring matching like SQL's `... LIKE "SOME_STRING%"`. Though you can just invoke SQL manually with `sq sql`
reply
neilotoole
2 months ago
[-]
Developer here. Thanks for the kind words.

Substring matching is on my short list (also totally open to a PR!).

reply
dartos
2 months ago
[-]
Wow what an expensive domain name.
reply
gampleman
2 months ago
[-]
It still seems to me that a better solution to these sorts of problems is a richer shell like nushell, which has structured data types, so you can use the same tool to manipulate files, processes, JSON, CSV, databases, and more.
reply
lightningspirit
2 months ago
[-]
Although the jq query style is not absolutely pleasant, I see many examples where this tool can be used, such as data transformation, import/export, and Linux pipelines that need access to databases.
reply
novoreorx
2 months ago
[-]
I really like the idea of https://github.com/dinedal/textql, which uses SQL to interact with file-based data stores. However, I don't understand why sq does the opposite, using a new DSL to access a database that already has a widely adopted and easy-to-use language: good old SQL.
reply
lnxg33k1
2 months ago
[-]
It is great; I installed it. One thing I'd suggest, probably minor, is to also extract the commands to install from the bash script and put them in the `Install` section directly. I don't run .sh scripts, especially if they need privileges, so I went through the bash script to take the commands for Debian. They're there, but they could also be listed outside the script for other kinds of people.
reply
neilotoole
2 months ago
[-]
You can already install sq using several of the common package managers, or build from (Go) source if you prefer.

https://sq.io/docs/install

reply
lionkor
2 months ago
[-]
For anyone else wondering; it's written in Go, and it keeps state inside its config file, for example sources (like a db connection string).
reply
wreq2luz
2 months ago
[-]
I was reading about something like json output coming to Postgres one day (https://www.postgresql.org/message-id/flat/ZYBdnGW0gKxXL5I_@...). Also the `.wrangle | .data` wraps on an iPhone 13 mini.
reply
tmountain
2 months ago
[-]
Even without a JSON column in Postgres, this is pretty trivial: SELECT jsonb_pretty(to_jsonb(employees)) FROM employees;
reply
nashashmi
2 months ago
[-]
> sq is pronounced like seek. Its query language, SLQ, is pronounced like sleek

As a person who is apart from the tech scene and lurks in the tech space out of interest, I appreciate this guidance. For the longest time I didn’t know nginx was pronounced Engine-X; I called it N-jinx.

reply
neilotoole
2 months ago
[-]
The theory at the time was that if "SQL" is pronounced like "sequel", and "sq" is just dropping the "L" from "SQL", then "sq" must be...

I suspect the uptake on the "seek" pronunciation is about 2%, if I'm being generous

reply
wvh
2 months ago
[-]
Don't sweat it. It's a running joke amongst guitar players that no two people pronounce D'Addario the same way, not to mention the tremolo bar, which technically should be called a vibrato bar. I surmise any scene has its trip-up words.
reply
deskr
2 months ago
[-]
D'Addario is of course pronounced "Dadda Rio", with emphasis on Rio and a slight Italian accent.
reply
tgmatt
2 months ago
[-]
Sorry, but I am pronouncing that as 'ess-cue' and there is nothing anyone can do about it.

Looks kinda neat for when I don't want or need anything more than bash for a script.

reply
mlhpdx
2 months ago
[-]
Dang, I wish I had this while I still had SQL databases.
reply
fforflo
2 months ago
[-]
I love the idea of pushing jq and other DSLs close to the database. I've written jq extensions for SQLite [0] and Postgres [1], but my approach basically involves embedding (pushing) the jq compiler into the db, so you can do `select jq(json, jqprogram)` as an alternative to jsonpath.

Trying to understand: is the main purpose of this to use jq syntax for cataloging-like functionality and/or cross-source queries? It's quite a few lines of code, but do you inspect the database catalogs and offer a layer on top of that? And how much data actually leaves the database?

[0] https://github.com/Florents-Tselai/liteJQ [1] https://github.com/Florents-Tselai/pgJQ

reply
mynameyeff
2 months ago
[-]
Wow, very cool. I was looking for something like this
reply
mrbluecoat
2 months ago
[-]
TSV support might be nice for Zeek logs
reply
franchb
2 months ago
[-]
Maybe integrate with https://github.com/brimdata/zed ?
reply
cassepipe
2 months ago
[-]
Not to be confused with the gpg alternative from sequoia-pgp also called sq : https://sequoia-pgp.org/
reply
neilotoole
2 months ago
[-]
That's an unfortunate naming clash. This "sq" (sq.io) predates the sequoia "sq" by several years I believe.
reply
doctorpangloss
2 months ago
[-]
At some point, why not package Python into a single executable, and symbolic link applications and modules into it for Unixy-ness?

Another POV is all the developers I know who thrive the most and have found the most success: they rate aesthetic concerns the lowest when looking at their tools. That is to say that the packaging or aesthetic coherence in some broader philosophy matters less than other factors.

reply
sweeter
2 months ago
[-]
It's written in Go...
reply