I'm a big fan of SQL in general (even if the syntax can be verbose, the declarative nature is usually pleasant and satisfying to use), but whenever dynamism creeps in it gets messy: conditional joins/selects/where clauses, etc.
How do folks who go all in on SQL-first approaches handle this? Home-grown dynamic builders are what I've seen various places I've worked implement in the past, but they're usually not built out as a full API and are kind of just cobbled together. Eventually they just swap to an ORM to solve the issue.
* [1] https://kysely.dev
I always wondered about this. How reliable is that in your experience? Thank you in advance.
it's not (really) addressed by sqlx (intentionally), in the same way most ORM features are not addressed
but to some degree this is what is so nice about sqlx: it mainly(1) provides the basic SQL functionality and then lets you decide what to use on top of it (or whether to use anything on top at all).
If you need more, e.g. the sea-* ecosystem (sea-query, sea-orm) might fulfill your needs.
(1): It can compile-time check "static" queries (i.e. only placeholders), which is a bit more than "basic" features, but some projects have 99+% static queries, in which case this feature can move SQLx from "a building block for other sql libs" to "all you need" and keep dependencies thinner.
https://github.com/DapperLib/Dapper/blob/main/Dapper.SqlBuil...
But this is conditional on either your database or your minimal abstraction layer having support for binding arrays of data with a single placeholder (which is generally true for Postgres).
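With SQLx on Postgres, for instance, that looks roughly like the sketch below (hypothetical `users` table; the `= ANY($1)` trick is the one SQLx's own FAQ recommends for `IN`-style queries):

    use sqlx::PgPool;

    // Bind a whole slice through one placeholder; Postgres treats $1 as an
    // array and `= ANY` does the membership test server-side.
    async fn users_by_ids(pool: &PgPool, ids: &[i64]) -> sqlx::Result<Vec<(i64, String)>> {
        sqlx::query_as("SELECT id, name FROM users WHERE id = ANY($1)")
            .bind(ids)
            .fetch_all(pool)
            .await
    }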
How do people who choose to use a no-dsl SQL library, like SQLx, handle dynamic queries? Especially with compile-time checking. The readme has this example:
...
WHERE organization = ?
But what if you have multiple possible where-conditions, let's say
"WHERE organization = ?", "WHERE starts_with(first_name, ?)", "WHERE birth_date > ?",
and you need some combination of those (possibly also none of them) based on query parameters to the API. I think that's a pretty common use case.

    WHERE organization = $1
      AND ($2 IS NULL OR starts_with(first_name, $2))
      AND ($3 IS NULL OR birth_date > $3)
With SQLx you would have all the params be Options and fill them according to the parameters that were sent to your API. Does that make sense?
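That pattern looks something like this sketch (assuming Postgres and sqlx's `chrono` feature; the table and column names are taken from the example above):

    use chrono::NaiveDate;
    use sqlx::PgPool;

    // Every optional filter is bound as NULL when absent; the SQL itself
    // decides whether each condition applies.
    async fn search(
        pool: &PgPool,
        organization: &str,
        name_prefix: Option<&str>,
        born_after: Option<NaiveDate>,
    ) -> sqlx::Result<Vec<(String,)>> {
        sqlx::query_as(
            "SELECT first_name FROM users
             WHERE organization = $1
               AND ($2 IS NULL OR starts_with(first_name, $2))
               AND ($3 IS NULL OR birth_date > $3)",
        )
        .bind(organization)
        .bind(name_prefix)
        .bind(born_after)
        .fetch_all(pool)
        .await
    }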
So, if I have this use-case I'd reach for a query builder library. To answer the question of "how to do dynamic queries without a query builder library", I don't think there's any other answer than "make your own query builder".
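For what it's worth, SQLx itself ships a minimal one, `sqlx::QueryBuilder`, so "your own" can be a thin layer over that. A rough sketch (made-up table and filters; note that queries built this way are not compile-time checked):

    use sqlx::{Postgres, QueryBuilder};

    // Append each filter only when it is present; push_bind keeps the values
    // parameterized instead of string-interpolated.
    fn search_query<'a>(
        organization: Option<&'a str>,
        min_age: Option<i32>,
    ) -> QueryBuilder<'a, Postgres> {
        let mut qb = QueryBuilder::new("SELECT first_name FROM users WHERE TRUE");
        if let Some(org) = organization {
            qb.push(" AND organization = ").push_bind(org);
        }
        if let Some(age) = min_age {
            qb.push(" AND age >= ").push_bind(age);
        }
        qb
    }

    // usage: search_query(Some("acme"), None).build().fetch_all(&pool).await?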
No compile-time checking, and integration tests.
in general sqlx only provides the most minimal string-based query building, so you can easily run into annoying edge cases you forgot to test. If your project needs a lot of that, libraries like sea-query or sea-orm are the way to go (though it's still viable without them, just a bit annoying).
in general SQLx "compile time query checking" still needs a concrete query and a running db to check if the query is valid. It is not doing a rem-implementation of every dialects syntax, semantics and subtle edge cases etc. that just isn't practical as sql is too inconsistent in the edge cases, non standard extensions and even the theoretical standardized parts due to it costing money to read the standard and its updates being highly biased for MS/Oracle databases).
This means compile time query checking doesn't scale that well to dynamic queries, you basically would need to build and check every query you might dynamically create (or the subset you want to test) at which point you are in integration test territory (and you can do it with integration tests just fine).
besides the sqlx-specific stuff, AFAIK some of the "tweaked sql syntax for better composability" experiments are heading for SQL standardization, which might make this way less of a pain in the long run, but I don't remember the details at all, so, uh, maybe not???
---
EDIT: Yes, there is an sqlx "offline" mode which doesn't need a live db; it works by basically caching results from the online mode. It is very useful, but still not "independent/standalone" query analysis.
I find writing sql in rust with sqlx to be far fewer lines of code than the same in Go. This server was ported from Go and the end result was ~40% fewer lines of code, less memory usage and stable cpu/memory usage over time.
It has the advantage that it implements the parsing and type checking logic in pure Go, allowing it to import your migrations and infer the schema for type checking. With SQLx you need to have your database engine running at compile time during the proc macro execution with the schema already available. This makes SQLx kind of a non-starter for me, though I understand why nobody wants to do what sqlc does (it involves a lot of duplication that essentially reimplements database features.) (Somewhat ironically it's less useful for sqlc to do this since it runs as code generation outside the normal compilation and thus even if it did need a live database connection to do the code generation it would be less of an impact... But it's still nice for simplicity.)
Comparing and contrasting, sqlc type checking happens via code generation, basically the only option in Go since there's nothing remotely like proc macros. Even with code generation, sqlc doesn't default to requiring an actual running instance of the database, though you can use an actual database connection (presumably this is useful if you're doing something weird that sqlc's internal model doesn't support, but even using PostgreSQL-specific features I hadn't really run into much of this).
The sqlc authors are to be applauded for making a static analyzer; that is no small feat. But if you can get away with offloading SQL semantics to the same SQL implementation you plan to use, I think that's a steal. The usability hit is basically free: don't you want to connect to a dev database locally anyway to run end-to-end tests? It's great to eliminate type errors, but unless I'm missing something, neither SQLx nor sqlc will protect you from value errors (e.g. constraint violations).
2. Sure, the database will probably be running locally, when you're working on database stuff. However, the trouble here is that while I almost definitely will have a local database running somehow, it is not necessarily going to be accessible from where the compiler would normally run. It might be in a VM or a Docker container where the database port isn't actually directly accessible. Plus, the state of the database schema in that environment is not guaranteed to match the code.
If I'm going to have something pull my database schema to do some code generation I'd greatly prefer it to be set up in such a way that I can easily wrap it so I can hermetically set up a database and run migrations from scratch so it's going to always match the code. It's not obvious what kinds of issues could be caused by a mismatch other than compilation errors, but personally I would prefer if it just wasn't possible.
I would definitely recommend writing a Compose file that applies your migrations to a fresh RDBMS and allows you to connect from the host device, regardless of what libraries you're using. Applying your migrations will vary by what tools you use, but the port forwarding is 2 simple lines. (Note that SQLx has a migration facility, but it's quite bare bones.)
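A sketch of what that can look like (the image and tool choices here are mine, not the parent's; the migration step assumes sqlx-cli, so swap in whatever migration tool you actually use):

    services:
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: dev
        ports:
          - "5432:5432"          # the two lines that expose the DB to the host
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -U postgres"]
          interval: 1s
          retries: 30
      migrate:                    # one-shot service: apply migrations, then exit
        image: rust:1
        depends_on:
          db:
            condition: service_healthy
        volumes:
          - ./migrations:/migrations
        environment:
          DATABASE_URL: postgres://postgres:dev@db:5432/postgres
        command: >
          sh -c "cargo install sqlx-cli --no-default-features --features postgres
                 && sqlx migrate run --source /migrations"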
Type inference was okay, since SQLite barely has any types. The bigger issue I had was dealing with migration files. The nice part about SQLx is that `cargo sqlx database setup` will run all necessary migrations, and no special tooling is necessary to manage migration files; the built-in migration support requires zero extra dependencies and satisfies my needs. sqlc, on the other hand, hard-codes support for specific Go migration tools, and each of the supported tools was either too opinionated for my use case or seemed unmaintained. Additionally, inferring types inside the actual database has its benefits: (1) no situations where subsets of valid query syntax are rejected, and (2) the DB may be used for actual schema validation.
For an example of why (2) may be better than sqlc's approach: databases like SQLite sometimes allow NULL primary keys; this gets reflected in SQLx when it validates inferred types against actual database schemas. When I last used sqlc, this potential footgun was never represented in the generated types. In SQLx, this footgun is documented in the type system whenever it can detect that SQLite allows silly things (like NULL primary keys when the PK satisfies certain conditions).
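Concretely, the quirk in question is documented SQLite behavior: PRIMARY KEY does not imply NOT NULL (except for INTEGER PRIMARY KEY, and in WITHOUT ROWID tables):

    CREATE TABLE users (id TEXT PRIMARY KEY, name TEXT);
    INSERT INTO users (id, name) VALUES (NULL, 'oops');  -- succeeds!
    SELECT id IS NULL FROM users;                        -- 1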
I ran sqlx / mysql on a 6M-MAU Actix-Web website with 100k qps at peak, with relatively complex transactions and queries. It was rock solid.
I'm currently using sqlx on the backend and on the desktop (Tauri with sqlite).
In my humble opinion, sqlx is the best, safest, most performant, and most Rustful way of writing SQL. The ORMs just aren't quite there.
I wish other Rust client libraries were as nice as sqlx. I consider sqlx to be one of Rust's essential crates.
Can you tell me why it's a non-starter for you?
For sqlc, it isn't really a big problem because you only need to run the code generation when you're actually modifying database things. Still, with that having been said, I think just passing a database URI and having analysis work based on that isn't ideal. Using an actual database isn't a huge problem, but having to manage the database instance out of band is the part that I think isn't great, because it allows the schema in the code to trivially desync from the schema used in analysis. If I used SQLx I'd probably be compelled to try to wire up a solution that spawns the database and migrates it up hermetically for the caching part. Likewise if I used this mode of sqlc.
I guess it might be possible for sqlc to add first class support for that sort of concept, but I can see holes in it. For one thing, you have to figure out where to grab binaries from and what version. An approach using Docker/Podman works, and at least partly solves this problem because you could allow specifying any OCI image, but that has caveats too, like requiring Docker or Podman to be installed. The most heroic effort would be to use some kind of solution using WASM builds of database engines: pulling down and running something like PGlite in process seems like it would be an almost ideal solution, but it sticks you to whatever things can actually be made to work in WASM in terms of features, extensions and versions, at least unless/until database servers and extension vendors miraculously decide that supporting WASM as a target is a good idea. Still, if you want some crazy ideas for how to make the UX better, I think either the Docker approach or the WASM approach could be made to work to some degree.
Barring that, though, I'd be most likely to have some kind of Docker setup for running sqlc with an ephemeral database instance. It's not pretty, but it works...
I don't think it would be a non-starter, though. I only really think that connecting to the database from within rustc invocations is a non-starter.
I preferred go-jet since it introspects the database for its code generation instead.
I've been quite happy with this setup!
FWIW, the compile-time query checking is entirely optional. If you don't use the query syntax checking then you don't need a live database and you don't need `sqlx prepare`.
That being said, as I understand it, SQLx does something very different. If you want dynamic queries, you'll basically have to build that module yourself. The power of SQLC is that anyone who can write SQL can work on the CRUD part of your Go backend, even if they don't know Go. Hell, we've even had some success with business domain experts who added CRUD functionality by using LLMs to generate SQL. (We do have a lot of safeguards around that, to make it less crazy than it sounds.)
If you want fancy LINQ, GraphQL, OData, or even a lot of REST frameworks, you're not getting any of that with SQLC, though. But that's typically not what you'd want from a Go backend in my experience; might as well build it with C# or Java then.
Let's compare:

SQLC:
- configuration file (yaml/json)
- schema files
- query files
- understanding the meta language in query-file comments to generate the code you want

SQLx:
- env: DATABASE_URL
Now, does that mean that SQLx is the best possible database framework? No, it does not. But because I didn't spend my time doing things that weren't related to the exact queries I had to write, I got more work done.
I want to appreciate the hard work the SQLx devs have put in to push the bar for a decent SQL developer experience. People give them a really hard time over certain design decisions, pending features, and bugs. I've seen multiple comments calling its compile-time query validation "gimmicky", and that's not nice at all. You can go to any other language and you won't find another framework that is as easy to get started with.
I would recommend using pg_dump for your schema file, which means it won't be tied to SQLC as such. This way it will be easier for you to maintain your DB; we use Goose, as an example. In our setup, part of the pipeline is that you write your Goose migration, and then an automated process updates the DB running in your local dev DB container, does a pg_dump from that, and then our dev-container instance of SQLC compiles your schema for you.
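In shell terms, that pipeline is roughly the following (paths and the connection string are placeholders):

    goose -dir ./migrations postgres "$DEV_DB_URL" up            # apply the new migration
    pg_dump --schema-only --no-owner "$DEV_DB_URL" > schema.sql  # refresh the schema file
    sqlc generate                                                # regenerate the Go code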
The configuration file is centralized as well, so you don't have to worry about it.
I agree with you on the SQLC meta language on queries; I appreciate that it's there, but we tend to avoid using it. I personally still consider the meta language a better way of doing things than in-code SQL queries. This is a philosophical sort of thing of course, and I respect that not everyone agrees with me on this. It's hard for me to comment on SQLx, however, as I haven't really used it.
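For readers who haven't seen it, the meta language in question is a comment annotation above each query, e.g. (invented table; `:one` tells sqlc the generated Go function returns a single row):

    -- name: GetUser :one
    SELECT id, first_name FROM users WHERE id = $1;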
What I like about SQLC is that it can be completely de-coupled from your Go code.
And of course now that I have it, the incremental cost of adding a new query is really low as well
You could compare it to people writing CSS, JavaScript and markup in separate files vs. having just one file in React/Svelte etc., which gives the user the option to combine everything into one.
There may be a lot of drawbacks to the latter approach, but it makes everything a hell of a lot easier for people to just get started building.
As far as building something fast, I'm with you. I always reach for Python with uv, Litestar and Advanced Alchemy when I want to build personal web projects. I don't think SQLC is bad as such; once you've written your SQL you can essentially compile that into a CRUD application which is ready to go. As you've pointed out, however, you'd need to slam something like a GraphQL engine on top of it if you wanted rich queries easily, and you'd still not have the auto-generated OpenAPI that comes with Python web frameworks.
SQLC is for code where you want a low amount of (or zero) external dependencies, which is a very "Go" thing to want. It does scale well, but that requires you to build various CLI tools to help maintain things, as well as your own Go modules to add quality-of-life features like dynamic routers and get-queries for low-traffic requests.
I'll try SQLx eventually when I get time to look more into Rust.
The more serious LoC offenders in Go were:
1. Marshalling/Unmarshalling code (for API responses, to/from external services, etc). In general, working with JSON in Go was painful and error prone. Rust's serde made this a complete non-issue.
2. Repetitive sql query code (query, scan for results, custom driver code for jsonb column marshalling/unmarshalling). Rust's sqlx made this a non-issue.
3. Go's use of context to share data through handlers was a real pain and error prone (type casting, nil checks, etc). Rust's actix-web made this a genuinely beautiful thing to work with. Need a "User" in your handler? Just put it as an argument to the handler, and the handler is only called if it's available. Need a db connection? Just put it as an argument to the handler (see the sketch after this list).
4. Go's HTML/Text templates required more data to be passed in and also required more safety checks. Rust's askama was overall more pleasant to use and provided more confidence when changing templates. In Rust, I'd catch errors at compile time. In Go, I'd catch them at runtime (or, a user would).
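To illustrate point 3, here's a minimal actix-web extractor sketch (names invented; a custom "User" extractor would implement FromRequest the same way the built-in extractors do):

    use actix_web::{get, web, App, HttpResponse, HttpServer, Responder};
    use serde::Deserialize;

    #[derive(Deserialize)]
    struct Params {
        name: String,
    }

    // actix-web runs the handler only if every extractor argument succeeds;
    // a missing/invalid `name` query parameter becomes a 400 automatically.
    #[get("/hello")]
    async fn hello(params: web::Query<Params>) -> impl Responder {
        HttpResponse::Ok().body(format!("hello {}", params.name))
    }

    #[actix_web::main]
    async fn main() -> std::io::Result<()> {
        HttpServer::new(|| App::new().service(hello))
            .bind(("127.0.0.1", 8080))?
            .run()
            .await
    }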
I must admit I was surprised. I thought Rust would have been more lines of code because it's a lower level language, but it ended up being ~40% less code. My general sentiment around working with the code is very different as well.
In the Rust codebase I have no hesitation to change things. I am confident the compiler will tell me when I'm breaking something. I never had that confidence in Go.
My issues with SQLx when I first tried it were that it was really awkward (nigh impossible) to abstract away the underlying DB backend. I expect those issues are fixed now, but for some simple apps it's nice to be able to start with SQLite and then swap in Postgres.
Then I wanted to dockerize an SQLx app at one point and it all became a hassle, since you need Postgres running at compile time, and trying to integrate that with docker compose was a real chore.
Now I don't use SQLx at all. I recommend other libraries like sqlite[1] or postgres[2] instead.
SQLx is a nice idea but too cumbersome in my experience.
[1]: https://docs.rs/sqlite/latest/sqlite/ [2]: https://docs.rs/postgres/latest/postgres/
[1]: https://docs.rs/sqlx/latest/sqlx/macro.query.html#offline-mo...
For needing a DB at compile time, there's an option to have it produce artefacts on demand that replace the DB, although you'll need to connect to a DB again each time your queries change. Even that is all optional, though; it only applies if you want it to compile-time check your queries.
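The workflow is roughly as follows (the connection string is a placeholder):

    # with a dev DB reachable, cache the query metadata into the repo
    DATABASE_URL=postgres://localhost/mydb cargo sqlx prepare
    # later builds (CI, Docker) can then check queries with no DB at all
    SQLX_OFFLINE=true cargo build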
Versus Python and Node often needing to properly link with the system they'll actually be running in.
Even in SaaS systems, once you get large enough with a large enough test suite, you'll want to tier those tests, starting with a lowest-common-denominator tier (sqlite) that doesn't incur network latency before getting into the serious integration tests.
The target DB can change as a project goes from something mildly fun to tinker with to something you think might actually be useful.
Also I personally find that SQLite is just nice to work with. No containers or extra programs, it just does what you ask it to, when you ask it to
For lower-level libraries there is also the more-downloaded SQLite library, rusqlite [2], whose maintainer also maintains libsqlite3-sys, which is what the sqlite library wraps.
The most pleasant ORM experience, when you want one, IMO is the SeaQl ecosystem [3] (which also has a nice migrations library), since it uses derive macros. Even with an ORM I don't try to make databases swappable via the ORM so I can support database-specific enhancements.
The most Rust-like in an idealist sense is Diesel, but its well-defined path is to use a live database to generate Rust code that uses macros to then define the schema-defining types which are used in the row structs' type/member checking. If the auto-detect does not work, then you have to use its patch_file system, which can't be maintained automatically just through Cargo [4] (I wrote a Makefile scheme for myself). You most likely will have to use the patch_file if you want to use chrono::DateTime<chrono::Utc> for timestamps with time zones, e.g., Timestamp -> Timestamptz for postgres. And if you do anything advanced like multiple schemas, you may be out of luck [5]. And it may not be the best library for you if you want large denormalized tables [6] because of compile times, and because a database that is not normalized [7] is considered an anti-pattern by the project.
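For reference, the patch_file hookup lives in diesel.toml; the patch itself is a hand-maintained diff applied to the generated schema.rs, which is exactly the maintenance burden described above (paths are illustrative):

    [print_schema]
    file = "src/schema.rs"
    patch_file = "src/schema.patch"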
If you are just starting out with Rust, I'd recommend checking out SeaQl. And then if you can benchmark that you need faster performance, swap out for one of the lower level libraries for the affected methods/services.
[1] https://github.com/launchbadge/sqlx/commit/47f3d77e599043bc2...
[2] https://crates.io/crates/rusqlite
[3] https://www.sea-ql.org/SeaORM/
[4] https://github.com/diesel-rs/diesel/issues/2078
[5] https://github.com/diesel-rs/diesel/issues/1728
int year = 2019;
. . .
for(Film film: "[.sql/] select * from film where release_year > :rel_year".fetch(year)) {
out.println(film.title);
}
1. https://github.com/manifold-systems/manifold/blob/master/man...

I've tried alternatives like Diesel and sea-orm. To be honest, I feel like full-blown ORMs really aren't a very good experience in Rust. They work great for dynamic languages in a lot of cases, but trying to tie a DB schema into Rust's type system often creates a ton of issues once you try to do anything more than a basic query.
It's got a nice little migration system too with sqlx-cli which is solid.
The problems I saw seem to be related to these issues:
https://github.com/launchbadge/sqlx/issues/3080
https://github.com/launchbadge/sqlx/issues/2510
The problems did not manifest until the application was under load with multiple concurrent sessions.
Troubleshooting the issue by changing the connection pool parameters did not seem to help.
I ended up refactoring the application's data layer to use a NoSQL approach to work around the issue.
I really like the idea of SQLx and appreciate the efforts of the SQLx developers, but I would advise caution if you plan to use SQLx with SQLite.
Thank you for sharing it!
[1] https://github.com/launchbadge/sqlx/blob/main/FAQ.md#how-do-...
I use sqlpp11 in C++.
I generate code and I can use it with strong typing by including some headers. This Rust crate seems to provide compile-time checking.
But will it give me code-completion? It is very nice that by pressing '.' you know what you potentially have.
* [1] https://sql-page.com/
```
It is required to mark left-joined columns in the query as nullable, otherwise SQLx expects them to not be null even though it is a left join. For more information, see the link below:
https://github.com/launchbadge/sqlx/issues/367#issuecomment-...
```
Did you have other problems beyond this, or are you referring to something different?
The issue above is a bit annoying but not enough that I'd switch to an ORM over it. I think SQLx overall is great.
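For anyone else hitting this: the workaround from that issue is SQLx's nullability override, a `?` suffix on the column alias inside the query string (sketch with invented tables):

    // `AS "bio?"` tells the query! macro to type the left-joined column as
    // Option, overriding the incorrect NOT NULL inference.
    let rows = sqlx::query!(
        r#"SELECT u.id, p.bio AS "bio?"
           FROM users u
           LEFT JOIN profiles p ON p.user_id = u.id"#
    )
    .fetch_all(&pool)
    .await?;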
* Or in your editor as you're writing code.
But the real problem is ergonomics. The better solution in almost any language is to leverage the syntax of your language to allow for as much (non-macro) type-safety and auto-completion as possible.
For example instead of:
    SELECT country, COUNT(*) AS count
    FROM users
    WHERE organization = ?
    GROUP BY country
That should be:

    select("country", count("*").as("count"))
      .from("users")
      .where("organization".=(yourVariable))
      .groupBy("country")
[Note that it matches SQL, not the language's collection-function names.] As you see, that's also nice because now you can actually use variables easily, and even use pure Rust to decide on things dynamically.
You can further increase typesafety if you want by doing things like `.from(table("users"))` and running extra checks on that table() part, similar to what the lib probably does. Also, sometimes you might have to make a compromise on your syntax and things like `"organization".=(yourVariable)` might have to be slightly rewritten.
Still, I think that people will rather end up with a library like I described, unless the SQL is very basic/static.
With a library such as SQLx, you can never really factor anything out. Or at least it's very hard, and you lose the actual typesafety. I've been there and done that with doobie [https://typelevel.org/doobie/], which is basically the same thing.
Dynamically constructing queries is awkward, but most of mine have very limited dynamic variation.
Once you have any kind of dynamic stuff (like a dynamic filter) you don't have any 100% pure SQL anymore anyways. If you don't have that, okay, this lib will be more convenient.