Databases Were Not Designed for This
74 points
1 day ago
| 24 comments
| arpitbhayani.me
| HN
dherls
3 hours ago
[-]
Giving LLM agents direct, autonomous access to a real production database with write access seems insane to me.

NO ONE, agent or human, should have direct write access to production databases outside of emergency break-glass scenarios. This is why we have stored routines and API layers to pre-define what writes are allowed. The fact that agents CAN autonomously write to a database does not imply that they should.

For the point about query optimization: again, your agents should not be issuing random queries against a production database. We have had the concept of separate analytics databases with different architectures to support exploratory queries for decades.

reply
tee-es-gee
2 hours ago
[-]
I agree and hope this is the case for anything serious enough. I also don't see this changing any time soon.

There are ways to give safe access to the data, at least read-only, that don't involve production risk and don't sacrifice privacy. For example, database branches with anonymization. Instead of accessing the prod/staging db, the agent creates a branch and has read/write access to that.

(disclaimer: I work at Xata, where we offer copy-on-write branches for Postgres, and the agent use-cases are the most popular right now)

reply
exec7
3 hours ago
[-]
I totally agree! Even read access, especially when databases hold sensitive/personal information about users.
reply
i7l
3 hours ago
[-]
How does that even work in compliance-relevant scenarios where the audit trail shows some LLM messed with the data? Who, if anyone, is on the hook?
reply
mr-wendel
2 hours ago
[-]
My guess is that if the database is subject to auditing then LLM access (obviously writes in particular, but even reads come with exfiltration risks) will be a hard "no" and instant red flag. When it's a person, there is a sense of accountability and opportunity for remediation.

I suppose that LLMs will be treated as a code artifact, and liability will shift upstream towards whoever deployed/approved the access in the first place. Even though code is essentially deterministic (which makes that association fairly simple) and LLMs are not, it's going to boil down to the same paradigm.

Perhaps governance rules will evolve to even explicitly forbid it, but my gut feeling is that for what the future determines to be "practical" reasons (right or wrong) LLMs will warrant an entirely new set of rules to allow them to be in the chain at all.

+ EDIT: both my wife and I have experience in this area, and the current answer is that companies like KPMG don't have an answer yet. Existing rules do help (e.g. there had better be good documented reasons why it was used, and access had better be appropriately scoped), but there is enough ambiguity around these tools that they say "stay tuned, and take caution".

reply
joquarky
13 minutes ago
[-]
Whoever provided the authorization credentials to the agent is on the hook.
reply
bmurphy1976
2 hours ago
[-]
The dev who ran it. The manager who allowed it. The director/VP/CTO who enabled the culture. They all have some responsibility for it.
reply
coalstartprob
1 hour ago
[-]
Wherever there's a cockpit there's a pilot, and an airline to sue / hold accountable.
reply
sgarland
1 hour ago
[-]
I’m a DBRE. I spend a good portion of my day with a shell into one or more prod databases. The schema definitions in code are scattered between ORM model definitions, Alembic migrations, and Liquibase migrations, so the only reliable way I have of understanding a schema as it exists is to view it. Plus, I am very comfortable with SQL, and the various system catalogs of both MySQL and Postgres, so it’s a ton easier to work with.

Truly sensitive customer information is encrypted, and on an isolated DB cluster that no one has regular access to. I also operate with a read-only grant, because manual writes to a prod DB is generally a terrible idea.

reply
cowlby
3 hours ago
[-]
LLM agents are unlocking demand and supply for applications that wouldn't have been possible before due to time constraints though. There's a growing demand for single user or smaller scoped apps where giving LLM agents direct access means velocity. The failure/rollback model is much easier with these as long as we have good backup hygiene.
reply
argomo
12 minutes ago
[-]
I think a lot of the objections to your post could be answered by reminding folks of how Microsoft Access databases tend to pop up in small businesses as well as corporate environments outside of IT departments. Yes, they're not "proper" databases but they /get business done/ and often serve as v0 before a real app can be properly conceived of.

One can easily imagine an LLM-enabled database that lets a wider audience build meat-and-potatoes line-of-business apps for small team use with minimal compliance concerns.

reply
throw5
2 hours ago
[-]
> There's a growing demand for single user or smaller scoped apps where giving LLM agents direct access means velocity. The failure/rollback model is much easier with these as long as we have good backup hygiene.

This makes no sense to me. For anything that has sensitive payment or personally identifiable data, direct access to the DB is potentially illegal.

> The failure/rollback model is much easier with these as long as we have good backup hygiene.

Have you actually operated systems like this in production? Even reverting to a DB state that is only seconds old can still lose hundreds or thousands of transactions. Which means loads of unhappy customers. More realistically, recovery points are often minutes or hours behind once you factor in detection, validation and operational overhead.

DB revert is for exceptional disaster recovery scenarios, not something you want in normal day-to-day operations. If you are saying that you want to give LLM full access to prod DB and then revert every time it makes a mistake, you aren't running a serious business.

reply
2ndorderthought
2 hours ago
[-]
You are thinking way too hard. This person is a hazard that needs to learn the hard way.

If velocity means letting agents live edit a db, I'm fine being slow. Holy hell. Let these people crash and burn but definitely let me know the app name so I know never to use it first.

reply
steve_adams_86
2 hours ago
[-]
This narrative seems to come from people who haven't worked on meaningfully complex software systems. They're more like script kiddies than software developers. I don't mean that in a derogatory manner. They're right that LLMs are unlocking new possibilities in the realm of their work. They just don't realize that these new possibilities are constrained to relatively simple applications, or very thin slices of complex systems.

I use an LLM to access my database occasionally, but never in production and never with write access. It is genuinely useful. It would never be useful in a production setting, though.

It's worth noting too that people should be wary of what a read only user means in database land. There are plenty of foot guns where writes can occur with read-like statements, and depending on the schema, maybe this would be a rollback-worthy situation. You really need to understand your database and schema before allowing an LLM anywhere near it, and you should be reviewing every query.

reply
gmueckl
3 hours ago
[-]
But are those users allowed to see all the data in the database by law? Some privacy laws require that personal information must be hidden from employees unless they have a narrow and specific business reason to view it. Blanket full access to a database may be illegal for that reason.
reply
raincole
2 hours ago
[-]
> single user

If you're just vibe coding a tool for yourself, you don't have 'production database' at all even if you use database technology for storage. Just like many Android apps use local sqlite DBs but they're not production databases.

Of course in this case no traditional wisdom about production databases matters to you. In other words, it's off-topic.

reply
antonvs
3 hours ago
[-]
This makes no sense whatsoever.

It's not news that if you just give all developers at a company write access to the production databases, owner permissions on all resources, etc. that velocity can be increased. But at what cost?

The reason we don't do that in most cases is that "move fast and break things" only makes sense for trivial, non-critical applications that don't have any real importance, like Facebook.

reply
3form
3 hours ago
[-]
I think the argument is mostly about companies where such trivialities as proper auth were already given up to the maximum possible extent. I'm sure even some bigger ones are only gnashing their teeth over implementing security measures that are required by law, not seeing much point in them.
reply
SegfaultSeagull
3 hours ago
[-]
This comment is savage and I’m here for it.
reply
nophunphil
3 hours ago
[-]
1) Can you explain what demand and supply mean in this context?

2) In regards to having good backup hygiene, who is we?

reply
ripped_britches
2 hours ago
[-]
I’m all-in on agents but this is a “you’re holding it wrong” situation.

If you want to give your agents a DB for their own work as a scratchpad or something that’s great. They can not only go to town, but also analyze their own work and iterate on it.

If you are talking about a production database, agents should not be hitting it directly under any circumstances. There needs to be an API layer with defined usage patterns, rate limits, etc.

This is basically the same as saying “databases weren’t designed for interns to run live inline migrations in prod”. Yeah of course they aren’t.

reply
bloaf
2 hours ago
[-]
> The API failed silently because the database connection pool was exhausted downstream.

I work with a team that does stuff like this, returning a 200 and a body containing "error: I didn't do what you said because _insert error here_"

The problem is that you returned OK instead of ERROR when things were not OK and there was an ERROR.

It's a design that smells of teams trying to hit some kind of internal metric by slightly deceptive means.

reply
iambateman
3 hours ago
[-]
Giving an LLM write access is insane but I gave LLM’s read-only access to our database and it’s been a huge productivity win.

Executives who wouldn’t take the time to build a report are happy to ask an AI agent to do so.

reply
SOLAR_FIELDS
3 hours ago
[-]
I would hope that you're running this on a replica so that the massive table scan doesn't choke writes to the main db. Even then it's possible to bring the replica down and depending on the technology still create a problem (WAL backup for instance)

Another way to bring prod down even with read is depending on your atomicity settings, try starting a transaction and don’t commit or abort it, just leave it dangling. That’s a cute one

reply
mdavid626
1 hour ago
[-]
How do you prevent your customer data being used for training?
reply
Kwpolska
55 minutes ago
[-]
The same way everyone does, by not using free LLMs, but instead paying OpenAI/Microsoft/Anthropic for an enterprise subscription?
reply
mdavid626
47 minutes ago
[-]
I thought the way is not feeding customer data to the LLM.
reply
kgeist
3 hours ago
[-]
How do you validate that the reports are correct? What if an executive makes a wrong business decision because the LLM wrote a wrong SQL query?
reply
nananana9
3 hours ago
[-]
The same way we've always done it - glance at it and see if the numbers look like they're within an order of magnitude of what looks reasonable.
reply
tremon
2 hours ago
[-]
So as long as the LLM only makes errors in the single-digit percentage range, everything is peachy. Make number go up, but not by too much.
reply
fl4regun
1 hour ago
[-]
So what if some numbers in the report are actually an order of magnitude or two outside what you think is reasonable, because something went wrong, but the AI agent reports something that looks normal?
reply
Alex_L_Wood
3 hours ago
[-]
This article has all the correct conclusions and solutions based on one assumption that doesn’t have any hold in reality - that someone would be insane enough to allow direct DB access to an AI agent.
reply
dijksterhuis
2 hours ago
[-]
someone is potentially insane enough it would seem: https://news.ycombinator.com/item?id=47911579
reply
aleda145
3 hours ago
[-]
I totally agree on investing in a sane data model upfront. So many production systems have schemas that only made sense to the engineer that created them. I would be delighted if I can read a schema and understand what a column means without having to dig through a bunch of migration PRs.

I recently encountered `is_as BOOL` in an important table. After way too much invested time we found out it meant "is active service". </DDL rant>

reply
mamcx
1 hour ago
[-]
I integrate with many ERPs and this is the bane of my existence.

One of the worst has field names like `ft_0001...N` and table names like `UNCC_00001...N`, everything stored in `text` fields (even numbers!), zero FKs, almost no indexes, and views? Never heard of them.

The other has a funny field that is a blob needing decoding with a specific FreePascal version. The field? The one holding the price of the product.

Another has, in the same column, a mix of "," and "." as decimal separators, and I have to check the decimal places to deduce which is which.

FUN.

P.S.: I normalize all these ERPs into my own schema and have gotten praise for things like: my product table is called products.

reply
phillipcarter
2 hours ago
[-]
I think the best db schema I had the displeasure of working with was one where it was a requirement that every table and column name NOT have vowels, except for the few that could, and "the few that could" were governed entirely by a spreadsheet owned by the DB admin.

And so you got tables like LANDMRK and columns like RCR_RCRDR.

reply
aleda145
2 hours ago
[-]
Oh my. What could possibly be the justification for this?
reply
jimlawruk
2 hours ago
[-]
I work with an Oracle database like this. In the old days, there was a 30 character limit on column names, so you end up with conventions like no vowels. The limit no longer exists today, but the DBA continues to enforce the limit on new columns.
reply
phillipcarter
2 hours ago
[-]
I never got an answer when I asked. This same government agency also got extremely mad when our dev manager upgraded the ASP.NET version for one project because it had some really useful features we were developing with. They deleted his permissions to deploy to production from there until the end of time, requiring us to email someone each time we wanted to update the application. It was great.
reply
tomwheeler
2 hours ago
[-]
> I recently encountered `is_as BOOL` in an important table.

Sounds like a table designed by Forrest Gump.

reply
mulmen
2 hours ago
[-]
Postgres has COMMENT ON to help with this but descriptive names are helpful.
reply
lateforwork
3 hours ago
[-]
There are two broad types of databases: operational and analytical.

Operational databases store transactions and support day-to-day application workflows.

For analysis, data is often copied into separate analytical databases (data warehouses), which are structured for efficient querying and large-scale data processing. These systems are designed to handle complex, random queries and heavy workloads.

LLM agents are the best way to analyze data stored in these databases. This is the future.

reply
gib444
3 hours ago
[-]
> LLM agents are the best way to analyze data stored in these databases

Why, and how?

reply
hasyimibhar
2 hours ago
[-]
> Why

Based on my experience with Claude, it's pretty damn good at doing data analysis, if given the right curated data models. You still need to eyeball the generated SQL to make sure it makes sense.

> and how?

1. Replicate your Postgres into Snowflake/Databricks/ClickHouse/etc, or directly to Iceberg and hook it up to Snowflake/Databricks/ClickHouse/etc.

2. Give your agent read access to query it.

3. Build dimensional models (facts and dimensions tables) from the raw data. You can ask LLM for help here, Claude is pretty good at designing data models in my experience.

4. Start asking your agent questions about your data.

Keep steps 3-4 as a tight feedback loop. Every time your agent hallucinates or struggles to answer your questions, improve the model.

Side note: I'm currently building a platform that does all 3 (though you still need to do 2 yourself), you just need Postgres + 1 command to set it up: https://polynya.dev/

reply
DaiPlusPlus
1 hour ago
[-]
> Claude is pretty good at designing data models in my experience

Yesterday, Claude decided to go with nvarchar(100) for an IP address column instead of varbinary(16), and thinks RBAR triggers are just-as-good as temporal tables.

So, no. Claude is not good at designing data models in my experience.

reply
gib444
45 minutes ago
[-]
> Side note: I'm currently building a platform

Oh ok this comment is just an ad then

reply
hasyimibhar
1 hour ago
[-]
I'm not sure why you are giving your agents direct access to query your OLTP database, let alone write to it. The pattern that I use at work is:

- Read access through OLAP, not OLTP. You just need to set up a near real-time replication between your OLTP and OLAP.

- Write access through API, just like your application. You can add fancy things like an approval layer, e.g. your agent cannot "ban_user(id)", but it can "request_to_ban_user(id)", and the action only happens once you approve it.

reply
sgarland
1 hour ago
[-]
The article describes idempotency keys and then completely misses making them the PK. The example is already using UUIDv4 as a PK, so they’re clearly not optimizing for performance. If you’re using the first 32 characters of a SHA256 hash, congratulations, store it as BINARY(16) / BYTEA - it’s even half the size of an encoded UUID, to boot.

Also, the DB will most certainly not silently ignore a unique constraint violation: it will send an error back. EDIT: unless you’re using INSERT OR IGNORE, of course.

reply
zarzavat
2 hours ago
[-]
I don't understand the premise. Who is letting "agents" run arbitrary SQL against their database without human review?

Before redesigning your database, consider seeing a psychiatrist.

reply
PunchyHamster
2 hours ago
[-]
> Never let an agent hard-delete anything. Use soft deletes as a baseline for any table an agent can write to

How do you even enforce it?

And why are you even giving an agent access to a live DB in the first place?

reply
tremon
2 hours ago
[-]
> How you even enforce it ?

  DENY DELETE TO [agent];
(yes, the recommended way would be to simply grant only SELECT, INSERT, UPDATE, but if I were the DBA here I would definitely put in place an explicit deny)
reply
ak217
2 hours ago
[-]
> Connections are Brief

This doesn't make sense in the context of the author's chosen example (Postgres). Postgres connections are very heavy and there is a huge performance penalty for cycling them quickly, and a whole range of silly workarounds for this fact (pgbouncer). Maybe the author meant to say that sessions are brief.

reply
brunkerhart
1 hour ago
[-]
Databricks introduced Lakebase exactly for this purpose. Lakebase supports branching that makes querying, schema evolution and writes cheap. Each agent can run its own branch without affecting production data.
reply
shmeeny
2 hours ago
[-]
I think the spirit of this post has merit, but the premise is flawed. ORMs have been causing this same class of problem for decades. Furthermore, it's not at all uncommon for humans to create different queries for the same result, and for those to follow different review paths against the same underlying database.
reply
Bilal_io
2 hours ago
[-]
A query created by a human and reviewed by at least 1 other human becomes static after it's merged. But the query from an LLM is dynamic, it can change between two calls in the same session if the LLm sees a reason to change it, and there is no review pipeline and QA stage.
reply
red_admiral
1 hour ago
[-]
At one place, the saying was that databases can handle everything except a class of sophomores learning how to use databases.
reply
setr
1 hour ago
[-]
> The contract goes something like this: the caller is a human-authored application, running deterministic code, issuing predictable queries, reviewed by a developer before deployment. Writes are intentional. Connections are brief. When something goes wrong, a human notices. The database can be dumb and fast because the application layer is smart and careful.

This assumption is that of a non-DBA who happened to get a hold of a database.

When you have sufficient users, your expected set of queries is a complete shit show. Some will be efficient, many will be poorly tested and psychotic, and indistinguishable from a non-deterministic LLM.

Also who said humans can’t query the database directly in prod? If not direct sql access, business users have the next best thing with custom reports and such. And they will very much ask for ridiculous amounts of computation to answer trivial questions.

It was a foundational assumption of SQL that business users would directly access the database and write their own queries.

It’s why row level access and permissions exist. Use them

reply
bijowo1676
2 hours ago
[-]
Think of a flat database table as a projection (of all previous SQL queries). Should you give agents access to it?

Probably not; maybe only for analytical (OLAP) purposes in read-only mode.

For transactional OLTP loads, it is better to use Kafka-style durable queues: have agents create a change record to mutate the state, but not the projection itself, which can be recomputed at an arbitrary point in time via a time-travel mechanism, branched out into different versions, etc.

reply
pilgrim0
3 hours ago
[-]
Who the hell let agents directly use a database? Even humans don’t get this privilege. So, of all things, we forgot how to write APIs now? The article suggests creating a role for the agent directly in the database. What is wrong with you people? The very title of the article defeats its own purpose. They are not designed for this so don’t let them be used like this, ffs.
reply
estetlinus
2 hours ago
[-]
Uhm, everyone I know? To let it write to a production database, that’s on another level though.
reply
croisillon
2 hours ago
[-]
I wonder if that guy gets requests by email from his "talks" section.
reply
diavelguru
2 hours ago
[-]
Agree across the board.
reply
mdavid626
1 hour ago
[-]
Jesus Christ. All this is true only if you let them.

It’s a good idea to be defensive and design the system in a way that it can “fix” itself.

But for the love of god, don’t let an LLM do everything it wants.

reply
lowsong
1 hour ago
[-]
> None of this requires new technology. It requires treating the database as a defensive layer that assumes the caller might be wrong, might retry, and might not be watching the results.

This is one of those takes that is so close to understanding the problem, and then drawing an insane conclusion.

The problem is that AI agents and the code they output is untrustworthy, buggy, insecure, and lacking in any of the standards the industry has developed over the last 30 years. The solution to this is "don't use AI agents", not "change the rest of the stack to accommodate garbage".

reply
stavros
2 hours ago
[-]
I'm exasperated whenever I read articles like this. Anyone who underscores the difference between humans and agents by saying "[agents] write based on their current understanding of the task, which may be wrong" is clearly working with a different species of human than the one I've worked with.
reply
efficax
3 hours ago
[-]
Why are you connecting your agent to a database with write access? Are you out of your mind.
reply