Show HN: Rocky – Rust SQL engine with branches, replay, column lineage
71 points
21 hours ago
| 4 comments
| github.com
Hi HN, I'm Hugo. I've been building Rocky over the past month, shipping fast in the open. The binary is on GitHub Releases, `dagster-rocky` on PyPI, and the VS Code extension on the Marketplace. I held off on a broader announcement until the trust-system surface was coherent enough to talk about as one thing. The governance waveplan — column classification, per-env masking, 8-field audit trail on every run, `rocky compliance` rollup, role-graph reconciliation, retention policies — landed end-to-end last week in engine-v1.16.0 and rounded out in v1.17.4 (tagged 2026-04-26). That's the milestone I'd been waiting for.

The pitch: keep Databricks or Snowflake. Bring Rocky for the DAG. Rocky is a Rust-based control plane for warehouse pipelines. Storage and compute stay with your warehouse. Rocky owns the graph — dependencies, compile-time types, drift, incremental logic, cost, lineage, governance. The things your current stack can't give you because it doesn't own the DAG.

A few things I think are interesting:

- Branches + replay. `rocky branch create stg` gives you a logical copy of a pipeline's tables (schema-prefix today; native Delta SHALLOW CLONE and Snowflake zero-copy are next). `rocky replay <run_id>` reconstructs which SQL ran against which inputs. Git-grade workflow on a warehouse (rough CLI sketch after this list).

- Column-level lineage from the compiler, not a post-hoc graph crawl. The type checker traces columns through joins, CTEs, and windows. VS Code surfaces it inline via LSP.

- Governance as a first-class surface. Column classification tags plus per-env masking policies, applied to the warehouse via Unity Catalog (Databricks) or masking policies (Snowflake). 8-field audit trail on every run. `rocky compliance` rollup that CI can gate on. Role-graph reconciliation via SCIM + per-catalog GRANT. Retention policies with a warehouse-side drift probe.

- Cost attribution. Every run produces per-model cost (bytes, duration). `[budget]` blocks in `rocky.toml`; breaches fire a `budget_breach` hook event.

- Compile-time portability + blast radius. Dialect-divergence lint across Databricks / Snowflake / BigQuery / DuckDB (12 constructs). `SELECT *` downstream-impact lint.

- Schema-grounded AI. Generated SQL goes through the compiler — AI suggestions type-check before they can land.

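To make the branch-and-replay loop concrete, here's a rough sketch of the CLI flow. `rocky branch create`, `rocky compile`, `rocky replay`, and `rocky compliance` are the commands named above; the `rocky run --branch` line is a hypothetical stand-in for however runs are actually kicked off, not documented CLI, and every flag shown is my guess at the shape.

    rocky branch create stg    # logical copy of the pipeline's tables (schema-prefix today)
    rocky compile              # type-check the DAG; resolve classification-to-masking
    rocky run --branch stg     # hypothetical command and flag: execute models against the branch schema
    rocky replay <run_id>      # reconstruct which SQL ran against which inputs
    rocky compliance           # governance rollup that CI can gate on
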
What Rocky isn't:

- Not a warehouse — it's the control plane on top.

- Not a Fivetran replacement. `rocky load` handles files (CSV/Parquet/JSONL); for SaaS sources use Fivetran, Airbyte, or warehouse-native CDC.

- Not dbt Cloud — no hosted UI, no managed scheduler. First-class Dagster integration if you need orchestration.

Adapters: Databricks (GA), Snowflake (Beta), BigQuery (Beta), DuckDB (local dev / playground). Apache 2.0.

I'd love feedback on the trust-system framing, the governance surface (particularly classification-to-masking resolution in `rocky compile` and the `rocky compliance` CI gate), the branches/replay design, the cost-attribution primitives, or anything else that catches your eye. Happy to go deep in the thread.

ramon156
2 hours ago
[-]
If your introduction message already includes a bunch of uncurated claims and LLM smells, then what does that say about the code I'm about to run?
reply
hugocorreia90
1 hour ago
[-]
Yeah, fair pushback, and yes, the intro was AI-assisted. Marketing is not my strength, nor am I a native English speaker. I built this in about a month with heavy LLM tooling, and the seed comment is part of that. I'm not going to pretend otherwise.

The code is what it is. `cargo test --workspace` runs across 19 crates. CI on 5 platforms (macOS ARM/Intel, Linux x86/ARM, Windows). JSON output schemas are codegen-checked in CI so docs can't drift from the binary.

If you want to skip the marketing copy and look at engine reasoning instead: PR #240 (audit trail), #241 (column classification + masking), #270 (failed-source surfacing in discover).

I'd rather hear "the code is bad" than "the post sounds AI-written".

reply
mollerhoj
2 hours ago
[-]
It's a bit confusing to claim that "The things your current stack can't give you because it doesn't own the DAG" and use Databricks as your example: Databricks includes jobs and pipelines, so it very much owns the DAG, no?
reply
hugocorreia90
1 hour ago
[-]
Fair point. Databricks owns a scheduling DAG (Workflows, DLT). What I meant by "owns the DAG" is the semantic DAG: model-to-model dependencies with column-level types that the compiler builds.

Workflows knows task A runs before task B. Rocky knows `dim_customer.email` flows from `raw_users.email_address` through three CTEs in `stg_customers`. Different layer, same word.

I'll be more careful with that framing.

reply
hasyimibhar
4 hours ago
[-]
Looks cool, I've been waiting for someone to build this since the dbt and SQLMesh acquisitions. It would be great to have model versioning and support for ClickHouse SQL.
reply
hugocorreia90
1 hour ago
[-]
Thanks. On model versioning — what's the use case you have in mind? A few options that map to different designs:

- dbt-style semantic-layer versions (v1/v2 of a model)

- schema migration history

- branch-based (Rocky already has branches + replay)

Different design choice for each, so it helps to know which problem you're trying to solve.

ClickHouse is tractable through the Adapter SDK without engine patching. If you can share roughly your model count and workload shape, I can put a real timeline on it. Open to community PRs too.

reply
kjuulh
1 hour ago
[-]
fyi, llm written comments are discouraged on hackernews.

https://news.ycombinator.com/item?id=47340079

Not saying yours are, but them -- dashes certainly looks like it ;)

reply
hugocorreia90
42 minutes ago
[-]
Fair. I just use it for tidying up my replies as I'm not a native English speaker.
reply
mergisi
4 hours ago
[-]
* * *
reply
hugocorreia90
1 hour ago
[-]
Thanks for the careful read. The "what breaks if I rename this column" question is exactly what column lineage from the compiler is meant to answer, and you said it better than I did in the post.

On the schema-grounded AI angle: agreed. The failure mode you describe — structurally valid SQL that joins on the wrong key or aggregates at the wrong grain because the model hallucinated a relationship — is exactly what the compiler is positioned to catch. AI-generated SQL runs through the type checker before it can land, so suggestions that don't validate against the actual DAG never reach the user. NL-to-SQL tools that integrate a compile step would close exactly the gap you're pointing at.

On your two questions:

1. Branch isolation for stateful models — mixed answer, and worth being honest about:

   - Incremental: isolated. The watermark `state_key` includes the resolved schema, and `rocky branch create` swaps the schema prefix. So a branch run reads/writes a different redb key than main and they don't advance each other.

   - Snapshot: not yet. Today `rocky branch create` only writes a branch record; it doesn't copy warehouse tables. A snapshot model on a branch starts with an empty table (CREATE TABLE IF NOT EXISTS in the branch schema) and accumulates from the first branch run, with no inherited history from main. That's the gap. The fix is the next wave: native Delta SHALLOW CLONE / Snowflake zero-copy at branch creation, which gives point-in-time snapshot semantics without a full data copy.
2. Cost attribution. Both bytes scanned and duration are captured per-model in the run record (`bytes_scanned` and duration on `RunRecord`). Budget gating today is on cost (USD) and duration — `max_usd` and `max_duration_ms` in `[budget]` blocks in `rocky.toml`, as independent thresholds. A direct bytes-scanned budget threshold isn't gateable today; the bytes are in the run record for analysis but you can't currently fail CI on "this run scanned more than N TB". Reasonable extension if there's demand.

   To your Snowflake point: the warehouse-size × duration credit model and the scan volume tell genuinely different stories, so they're tracked separately rather than rolled into a single number.
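
For anyone skimming, a minimal sketch of what that gating looks like in config, under my own assumptions: `[budget]`, `max_usd`, and `max_duration_ms` are the names used above, but the values and the exact placement in `rocky.toml` here are invented, not the documented schema.

    # Sketch only: append a [budget] block to rocky.toml; the keys come from the
    # post, the values are made up, and the real syntax/placement may differ.
    printf '[budget]\nmax_usd = 50.0\nmax_duration_ms = 900000\n' >> rocky.toml
    # Per the post, the two thresholds are evaluated independently and a breach
    # fires a budget_breach hook event.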
reply