Do you even need a database?
49 points | 4 hours ago | 31 comments | dbpro.app | HN
winrid
54 seconds ago
[-]
My recent project, a replacement for Codemasters' RaceNet, runs on flat files! https://dirtforever.net/

Just have to use locks to be careful with writes.

I figured I'd migrate it to a database after maybe 10k users or so.
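For anyone curious what "locks around writes" looks like in practice, here's a minimal sketch (not winrid's actual code) using POSIX advisory locks around a JSON flat file; the filenames and helper name are illustrative, and it assumes a POSIX system:

```python
import fcntl
import json
import os

def update_json(path, key, value):
    """Serialize writers to a shared JSON file with an exclusive advisory lock."""
    with open(path + ".lock", "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)  # blocks until any other writer finishes
        try:
            data = {}
            if os.path.exists(path):
                with open(path) as f:
                    data = json.load(f)
            data[key] = value
            with open(path, "w") as f:
                json.dump(data, f)
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)

update_json("users.json", "42", {"name": "ada"})
```

Readers that don't take the lock can still observe a half-written file, which is one of the gaps a database closes for you.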

reply
z3ugma
2 hours ago
[-]
At some point, don't you just end up making a low-quality, poorly-tested reinvention of SQLite by doing this and adding features?
reply
freedomben
2 hours ago
[-]
Sometimes yes, I've seen it. It even tends to happen with NoSQL databases as well. Three times I've seen apps start on top of DynamoDB and end up re-implementing relational databases at the application level anyway. Starting with Postgres would have been the right answer for all three. Initial dev went faster, but tech debt and complexity quickly soaked up those gains and left a hard-to-maintain mess.
reply
leafarlua
2 hours ago
[-]
This always confuses me, because we also have decades of SQL and all its issues: hundreds of experienced devs talking about the problems with SQL and the quirks of queries once your data is non-trivial.

One would think that for a startup, where things change fast and are unpredictable, NoSQL is the correct answer, and that once things are stable and the shape of your entities is known, moving to SQL becomes the natural path.

There are also cases for having both, and there are cases for graph-oriented databases or even column-oriented ones such as DuckDB.

It seems to me, with my very limited experience of course, that everything leads to the same boring fundamental issue: the problem rarely lies in the infrastructure; it's mostly bad design decisions and poor domain knowledge. Realistically, how often is the bottleneck actually the type of database, rather than the quality of the code and the implementation of the system design?

reply
marcosdumay
7 minutes ago
[-]
No. NoSQL is worse when things change fast and unpredictably than when they are well-known and stable.

NoSQL gains you no speed at all in redesigning your system. Instead, you trade a few hard tasks in data migration for an insurmountable mess of data inconsistency bugs that you'll never get to the end of.

> is mostly bad design decisions and poor domain knowledge

Yes, using NoSQL to avoid data migrations is a bad design decision. Usually created by poor general knowledge.

reply
mike_hearn
9 minutes ago
[-]
Disclaimer: I work part time on the DB team.

You could also consider renting an Oracle DB. Yep! Consider some unintuitive facts:

• It can be cheaper to use Oracle than MongoDB. There are companies that have migrated away from Mongo to Oracle to save money. This idea violates some of HN's most sacred memes, but there you go. Cloud databases are things you always pay for, even if they're based on open source code.

• Oracle supports NoSQL features including the MongoDB protocol. You can use the Mongo GUI tools to view and edit your data. Starting with NoSQL is very easy as a consequence.

• But... it also has "JSON duality views". You start with a collection of JSON documents and the database not only works out your JSON schemas through data entropy analysis, but can also refactor your documents into relational tables behind the scenes whilst preserving the JSON/REST oriented view e.g. with optimistic locking using etags. Queries on JSON DVs become SQL queries that join tables behind the scenes so you get the benefits of both NoSQL and SQL worlds (i.e. updating a sub-object in one place updates it in all places cheaply).

• If your startup has viral growth, you won't have DB scaling issues, because Oracle DBs scale horizontally and have a bunch of other neat performance tricks: automatically adding indexes you forgot you needed, materialized views, high-performance transactional message queues, etc.

So you get a nice smooth scale-up and transition from ad hoc "stuff some json into the db and hope for the best" to well typed data with schemas and properly normalized forms that benefit from all the features of SQL.

reply
freedomben
1 minute ago
[-]
I wanted to hate you for suggesting Oracle, but you defend it well! I had no idea
reply
dalenw
1 hour ago
[-]
It's almost always a system design issue. Outside of a few specific use cases with big data, I struggle to imagine when I'd use NoSQL, especially in an application or data analytics scenario. At the end of the day, your data should be structured in a predictable manner, and it most likely relates to other data. So just use SQL.
reply
greenavocado
1 hour ago
[-]
System design issues are a product of culture, capabilities, and prototyping speed of the dev team
reply
bachmeier
29 minutes ago
[-]
Based on what's in the article, it wouldn't take much to move these files to SQLite or any other database in the future.

Edit: I just submitted a link to Joe Armstrong's Minimum Viable Programs article from 2014. If the response to my comment is about the enterprise and imaginary scaling problems, realize that those situations don't apply to some programming problems.

reply
locknitpicker
25 minutes ago
[-]
> Based on what's in the article, it wouldn't take much to move these files to SQLite or any other database in the future.

Why waste time screwing around with ad-hoc file reads, then?

I mean, what exactly are you buying by rolling your own?

reply
bachmeier
19 minutes ago
[-]
You can avoid the overhead of working with the database. If you want to work with JSON data and prefer the advantages of text files, this solution will be better when you're starting out. I'm not going to argue in favor of a particular solution, because that depends on what you're doing. One could turn the question around and ask what's special about SQLite.
reply
pythonaut_16
14 minutes ago
[-]
If your language supports it, what is the overhead of working with SQLite?

What's special about SQLite is that it already solves most of what you need for data persistence without adding the kind of overhead or trade-offs of Postgres or other persistence layers, and it saves you from solving those problems yourself in your JSON text files...

Like by all means don't use SQLite in every project. I have projects where I just use files on the disk too. But it's kinda inane to pretend it's some kind of burdensome tool that adds so much overhead it's not worth it.

reply
ablob
7 minutes ago
[-]
So you trade the overhead of SQL for the overhead of JSON?
reply
locknitpicker
9 minutes ago
[-]
> You can avoid the overhead of working with the database.

What overhead?

SQLite is literally more performant than fread/fwrite.

reply
cleversomething
3 minutes ago
[-]
That's exactly what I was going to say. This seems more like a neat "look Ma, no database!" hobby project than an actual production recommendation.
reply
noveltyaccount
2 hours ago
[-]
As soon as you need to do a JOIN, you're either rewriting a database or replatforming onto SQLite.
reply
pgtan
30 minutes ago
[-]
Here are two checks using joins, one with SQLite and one with the join builtin of ksh93:

  check_empty_vhosts () {
    # Check which vhost adapter doesn't have any VTD mapped
    start_sqlite
    tosql "SELECT l.vios_name, l.vadapter_name
           FROM vios_vadapter AS l
           LEFT OUTER JOIN vios_wwn_disk_vadapter_vtd AS r
             USING (vadapter_name, vios_name)
           WHERE r.vadapter_name IS NULL
             AND r.vios_name IS NULL
             AND l.vadapter_name LIKE 'vhost%';"
    endsql
    getsql
    stop_sqlite
  }

  check_empty_vhosts_sh () {
    # same as above, but on the shell
    join -v 1 -t , -1 1 -2 1 \
      <(while IFS=, read vio host slot; do
          if [[ $host == vhost* ]]; then
            print ${vio}_$host,$slot
          fi
        done < $VIO_ADAPTER_SLOT | sort -t , -k 1) \
      <(while IFS=, read vio vhost vtd disk; do
          if [[ $vhost == vhost* ]]; then
            print ${vio}_$vhost
          fi
        done < $VIO_VHOST_VTD_DISK | sort -t , -k 1)
  }
reply
randyrand
5 minutes ago
[-]
“You Aren’t Gonna Need It” - one of the most important software principles.

Wait until you actually need it.

reply
whalesalad
21 minutes ago
[-]
Reminds me of the infamous Robert Virding quote:

“Virding's First Rule of Programming: Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang.”

reply
gorjusborg
2 hours ago
[-]
Only if you get there and need it.
reply
z3ugma
2 hours ago
[-]
but it's so trivial to embed SQLite in almost any app or language...there are plenty of ORMs to do the joins if you don't like working with SQL directly...the B-trees are built in so you don't need to reason about binary search, and your app doesn't have 300% test coverage with fuzzing like SQLite does

you should be squashing bugs related to your business logic, not core data storage. Local data storage on your one horizontally-scaling box is a solved problem using SQLite. Not to mention atomic backups?
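As a rough illustration of how little ceremony that involves, here's a join in Python's built-in sqlite3 module; the tables and data are invented for the example:

```python
import sqlite3

# In-memory database; pass a filename instead for a persistent one.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         user_id INTEGER REFERENCES users(id),
                         total REAL);
    INSERT INTO users  VALUES (1, 'ada'), (2, 'alan');
    INSERT INTO orders VALUES (1, 1, 9.5), (2, 1, 3.0);
""")
# The JOIN that flat files would make you hand-roll:
rows = con.execute("""
    SELECT u.name, COUNT(o.id), COALESCE(SUM(o.total), 0)
    FROM users u LEFT JOIN orders o ON o.user_id = u.id
    GROUP BY u.id ORDER BY u.id
""").fetchall()
print(rows)  # [('ada', 2, 12.5), ('alan', 0, 0)]
```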

reply
gorjusborg
2 hours ago
[-]
Honestly, there is zero chance you will implement anything close to sqlite.

What is more likely, if you are making good decisions, is that you'll reach a point where the simple approach will fail to meet your needs. If you use the same attitude again and choose the simplest solution based on your _need_, you'll have concrete knowledge and constraints that you can redesign for.

reply
hirvi74
2 hours ago
[-]
SQLite is also the only major database to receive DO-178B certification, which allows SQLite to legally operate in avionics environments and roles.
reply
moron4hire
2 hours ago
[-]
Came here to also throw in a vote for it being so much easier to just use SQLite. You get so much for so very little. There might be a one-time up-front learning effort for tweaking settings, but that is a lot less effort than what you're going to spend on fiddling with stupid issues with data files all day, every day, for the rest of the life of your project.
reply
9rx
2 hours ago
[-]
> and your app doesn't have 300% test coverage with fuzzing like SQLite does

Surely it does? Otherwise you cannot trust the interface point with SQLite and you're no further ahead. SQLite being flawless doesn't mean much if you screw things up before getting to it.

reply
RL2024
1 hour ago
[-]
That's true, but relying on a highly tested component like SQLite means you can focus your tests on the interface and your business logic, i.e. you can test that you are persisting to your datastore rather than testing that your datastore implementation is valid.
reply
9rx
1 hour ago
[-]
Your business logic tests will already, by osmosis, exercise the backing data store in every conceivable way to the fundamental extent that is possible with testing given finite time. If that's not the case, your business logic tests have cases that have been overlooked. Choosing SQLite does mean that it will also be tested for code paths that your application will never touch, but who cares about that? It makes no difference if code that is never executed is theoretically buggy.
reply
wmanley
42 minutes ago
[-]
Business logic tests will rarely test what happens to your data if a machine loses power.
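Crash behaviour can at least be approximated in tests. A sketch that simulates a dying process (a real power cut would also exercise fsync and the journal, which SQLite handles for you); the filename is illustrative:

```python
import sqlite3
import os

db = "demo_crash.db"
if os.path.exists(db):
    os.remove(db)

con = sqlite3.connect(db)
con.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, amount REAL)")
con.commit()

con.execute("INSERT INTO payments VALUES (1, 100.0)")
# Simulate the process dying before COMMIT: close without committing,
# which rolls the open transaction back.
con.close()

con = sqlite3.connect(db)
count = con.execute("SELECT COUNT(*) FROM payments").fetchone()[0]
print(count)  # 0 -- the half-finished write never became visible
```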
reply
9rx
34 minutes ago
[-]
Then your business logic contains unspecified behaviour. Maybe you have a business situation where power loss conditions being unspecified is perfectly acceptable, but if that is so it doesn't really matter what happens to your backing data store either.
reply
upmostly
2 hours ago
[-]
Exactly. And most apps don't get there and therefore don't need it.
reply
evanelias
2 hours ago
[-]
Your article completely ignores operational considerations: backups, schema changes, replication/HA. As well as security, i.e. your application has full permissions to completely destroy your data file.

Regardless of whether most apps have enough requests per second to "need" a database for performance reasons, these are extremely important topics for any app used by a real business.

reply
ozgrakkurt
22 minutes ago
[-]
You need a database if you need any kind of atomicity. Doing atomic writes is extremely fragile if you are building directly on top of the filesystem.

This is also why many databases have persistence issues and can easily corrupt on-disk data on a crash. RocksDB on Windows was a simple example a couple of years back: it regularly had corruption issues during development.
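The standard mitigation on a bare filesystem is write-to-temp, fsync, then rename, relying on rename() atomically replacing the destination. A sketch (the function name is mine, not from the article):

```python
import json
import os
import tempfile

def atomic_write_json(path, data):
    """Readers see either the old file or the new one, never a torn write."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f)
            f.flush()
            os.fsync(f.fileno())  # make the data durable before it becomes visible
        os.replace(tmp, path)     # atomic on POSIX and Windows
    except BaseException:
        os.unlink(tmp)
        raise

atomic_write_json("state.json", {"orders": 3})
```

Even this doesn't cover fsyncing the directory entry, which is exactly the kind of corner a battle-tested database already handles.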

reply
koliber
56 minutes ago
[-]
I love this article as it shows how fast computers really are.

There is one conclusion that I do not agree with. Near the end, the author lists cases where you will outgrow flat files. He then says that "None of these constraints apply to a lot of applications."

One of the constraints is "Multiple processes need to write at the same time." It turns out many early stage products need crons and message queues that execute on a separate worker. These multiple processes often need to write at the same time. You could finagle it so that the main server is the only one writing, but you'd introduce architectural complexity.

So while from the pure scale perspective I agree with the author, if you take a wider perspective, it's best to go with a database. And sqlite is a very sane choice.

If you need scale, cache the most often accessed data in memory and you have the best of both worlds.

My winning combo is sqlite + in-memory cache.
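That combo can be as small as a memoized lookup over SQLite. A toy sketch (table and names invented; invalidating the cache on writes, e.g. via get_setting.cache_clear(), is up to you):

```python
import sqlite3
from functools import lru_cache

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT)")
con.execute("INSERT INTO settings VALUES ('theme', 'dark')")

@lru_cache(maxsize=1024)
def get_setting(key):
    """First call per key hits SQLite; later calls are served from memory."""
    row = con.execute("SELECT value FROM settings WHERE key = ?", (key,)).fetchone()
    return row[0] if row else None

print(get_setting("theme"))  # dark (from SQLite)
print(get_setting("theme"))  # dark (from the in-process cache)
```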

reply
kabir_daki
2 hours ago
[-]
We built a PDF processing tool and faced this exact question early on.

For our use case — merge, split, compress — we went fully stateless. Files are processed in memory and never stored. No database needed at all.

The only time a database becomes necessary is when you need user accounts, history, or async jobs for large files. For simple tools, a database is often just added complexity.

The real question isn't "do you need a database" but "do you need state" — and often the answer is no.

reply
bevr1337
34 minutes ago
[-]
> The real question isn't "do you need a database" but "do you need state" — and often the answer is no.

This is a solid takeaway and applies to a lot of domains. Great observation

reply
ktzar
1 hour ago
[-]
Writing your own storage is a great way to understand how databases work (if you do it efficiently, keeping indexes, correct data structures, etc.) and to come to the conclusion that if your intention wasn't just tinkering, you should've used a database from day 1.
reply
rglover
40 minutes ago
[-]
A few months back I decided to write an embedded DB for my firm's internal JS framework, and learned a lot about how and why databases work the way they do. I still use things like memory-cached markdown files for static sites, but a database gives you certain things (chief among them, for me, query ergonomics: I loved MongoDB's query language but grew too frustrated with the actual runtime) that you'll miss once you move past a trivial data set.

I think a better way to ask this question is "does this application and its constraints necessitate a database? And if so, which database is the correct tool for this context?"

reply
shafoshaf
1 hour ago
[-]
Relational Databases Aren’t Dinosaurs, They’re Sharks. https://www.simplethread.com/relational-databases-arent-dino...

The very small bonus you get on small apps is hardly worth the time you spend reinventing the wheel.

reply
nishagr
28 minutes ago
[-]
The real question - do you really need to hack around with in-memory maps and files when you could just use a database?
reply
forinti
2 hours ago
[-]
Many eons ago I wrote a small sales web application in Perl. I couldn't install anything on the ISP's machine, so I used file-backed hashes: one for users, one for orders, another for something else.

As the years went by, I expected the client to move to something better, but he just stuck with it until he died, about 20 years later. The family took over and had everything redone (it now runs WordPress).

The last time I checked, it had hundreds of thousands of orders and still had good performance. The evolution of hardware made this hack keep its performance well past what I had expected it to endure. I'm pretty sure SQLite would be just fine nowadays.
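For the curious, the Perl pattern (a hash tied to a file, e.g. via DB_File) has a close analogue in Python's standard-library shelve; the data here is invented:

```python
import shelve

# A persistent dict backed by a dbm file on disk: no server, no schema.
with shelve.open("orders_demo") as orders:
    orders["1001"] = {"item": "lunar haircut calendar", "total": 19.90}

# Reopen later (e.g. after a restart) and the data is still there.
with shelve.open("orders_demo") as orders:
    print(orders["1001"]["total"])  # 19.9
```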

reply
da02
47 minutes ago
[-]
What type of product or service were they selling?
reply
forinti
43 minutes ago
[-]
A calendar for cutting your hair according to the phases of the moon.
reply
da02
34 minutes ago
[-]
Sounds like a tough business. The profit margins must have been razor thin.
reply
forinti
18 minutes ago
[-]
Jokes aside, the guy made an impressive amount of money with this.

I should have charged him a percentage. Even if I had charged 0.5%, I would have made more money.

reply
827a
14 minutes ago
[-]
I'm a big fan of using S3 as a database. A lot of apps can get a lot of mileage doing that for a good chunk of their data: anything that just needs lookup by a single field (usually ID, but it doesn't have to be).
reply
swiftcoder
24 minutes ago
[-]
I feel like someone who works for a DB company ought to mention at least some of the pitfalls in file-based backing stores (data loss due to crashes, file truncation, fsync weirdness, etc)
reply
vovanidze
3 hours ago
[-]
people wildly underestimate the os page cache and modern nvme drives tbh. disk io today is basically ram speeds from 10 years ago. seeing startups spin up managed postgres + redis clusters + prisma on day 1 just to collect waitlist emails is peak feature vomit.

a jsonl file and a single go binary will literally outlive most startup runways.

also, the irony of a database gui company writing a post about how you dont actually need a database is pretty based.
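That append-only JSONL pattern really is only a few lines; a sketch (in Python here for brevity; filenames are invented):

```python
import json

def append_signup(path, email):
    # One JSON object per line; appends are cheap, and a torn final
    # line after a crash loses at most one record.
    with open(path, "a") as f:
        f.write(json.dumps({"email": email}) + "\n")

def load_signups(path):
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

append_signup("waitlist.jsonl", "ada@example.com")
print(load_signups("waitlist.jsonl"))
```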

reply
upmostly
3 hours ago
[-]
The irony isn’t lost on us, trust me. We spent a while debating whether to even publish this one.

But yeah, the page cache point is real and massively underappreciated. Modern infrastructure discourse skips past it almost entirely. A warm NVMe-backed file with the OS doing the caching is genuinely fast enough for most early-stage products.

reply
vovanidze
2 hours ago
[-]
props for actually publishing it tbh. transparent engineering takes are so rare now, usually its just seo fluff.

weve basically been brainwashed to think we need kubernetes and 3 different databases just to serve a few thousand users. gotta burn those startup cloud credits somehow i guess.

mad respect for the honesty though, actually makes me want to check out db pro when i finally outgrow my flat files.

reply
upmostly
2 hours ago
[-]
I feel like I could write another post: "Do you even need serverless/cloud?" We've also been brainwashed into thinking we need to spend hundreds or thousands a month on AWS when a tiny VPS will do.

Similar sentiment.

reply
hooverd
35 minutes ago
[-]
Serverless is cheap as hell at low volumes; your tiny VPS can't scale to zero. If you're doing sustained traffic, your tiny VPS might win, though. The real value in cloud is turning capex into opex: you don't have to wait weeks or months to requisition equipment.
reply
hilariously
2 hours ago
[-]
You are both right, with the exception that it requires knowledge and taste to accomplish, both of which are in short supply in the industry.

Why set up a Go binary and a JSON file? Just use Google Forms and move on, or pay someone for a dead-simple form system so you can capture and communicate with customers.

People want to do the things that make them feel good - writing code to fit in just the right size, spending money to make themselves look cool, getting "the right setup for the future so we can scale to all the users in the world!" - most people don't consider the business case.

What they "need" is an interesting one because it requires a forecast of what the actual work to be done in the future is, and usually the head of any department pretends they do that when in reality they mostly manage a shared delusion about how great everything is going to go until reality hits.

I have worked for companies getting billions of hits a month, and for ones where I had to get the founder to admit there are maybe 10k users on earth for the product, and neither of them was good at planning based on "what they need".

reply
locknitpicker
10 minutes ago
[-]
> weve basically been brainwashed to think we need kubernetes and 3 different databases just to serve a few thousand users. gotta burn those startup cloud credits somehow i guess.

I don't think it makes any sense to presume everyone around you is brainwashed and you are the only soul cursed with reasoning powers. Might it be possible that "we" are actually able to analyse tradeoffs and see the value of, say, having complete control over deployments, with out-of-the-box support for things like deployment history, observability, rollback control, and infrastructure as code?

Or is it brainwashing?

Let's put your claim to the test. If you believe only brainwashed people could see value in things like SQLite or Kubernetes, what do you believe are reasonable choices for production environments?

reply
grep_it
1 hour ago
[-]
Except that eventually you'll find you lose a write when things go down because the page cache is write behind. So you start issuing fsync calls. Then one day you'll find yourself with a WAL and buffer pool wondering why you didn't just start with sqlite instead.
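For reference, SQLite's own WAL is a one-pragma opt-in, which is roughly the point where hand-rolled fsync choreography stops being worth it (the filename is illustrative):

```python
import sqlite3

con = sqlite3.connect("app_wal_demo.db")
# Write-ahead logging: readers no longer block writers, and committed
# transactions survive crashes without hand-rolled fsync choreography.
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
con.execute("PRAGMA synchronous=NORMAL")  # a common pairing with WAL
print(mode)  # wal
```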
reply
ghc
2 hours ago
[-]
I'm so old I remember working on databases that were designed to use raw devices, not files. I'm betting some databases still do, but probably only on mainframe systems nowadays.
reply
bob1029
2 hours ago
[-]
reply
Joeboy
1 hour ago
[-]
Don't know if it counts, but my London cinema listings website just uses static json files that I upload every weekend. All of the searching and stuff is done client side. Although I do use sqlite to create the files locally.

Total hosting costs are £0 ($0) other than the domain name.

reply
matja
1 hour ago
[-]
If you think files are easier than a database, check out https://danluu.com/file-consistency/
reply
gavinray
2 hours ago
[-]
Not to nitpick, but it would be interesting to see profiling info of the benchmarks

Different languages and stdlib methods can often spend time doing unexpected things, which makes what look like apples-to-apples comparisons not quite equivalent.

reply
randusername
2 hours ago
[-]
Separate from performance, I feel like databases are a sub-specialty that has its own cognitive load.

I can use databases just fine, but will never be able to make wise decisions about table layouts, ORMs, migrations, backups, scaling.

I don't understand the culture of "oh we need to use this tool because that's what professionals use" when the team doesn't have the knowledge or discipline to do it right and the scale doesn't justify the complexity.

reply
the_inspector
3 hours ago
[-]
In many cases, no. E.g., for caching with Python, diskcache is a good choice. For small amounts of data, a JSON file does the job (you pointed to JSONL as an option). But for larger collections that should be searchable/processable, Postgres is a good choice.

Memory of course, as you wrote, also seems reasonable in many cases.

reply
chuckadams
2 hours ago
[-]
I need a filesystem that does some database things. We got teased with that with WinFS and BeOS's BFS, but it seems the football always gets yanked away, and the mainstream of filesystems always reverts to the APIs established in the 1980s.
reply
jwitchel
2 hours ago
[-]
This is a great, incredibly well-written piece. Nice work showing the under-the-hood build-up of how a DB works. It makes you think.
reply
JohnMakin
2 hours ago
[-]
everyone thinks this is a great idea until they learn about file descriptor limits the hard way
reply
jbiason
2 hours ago
[-]
Honestly, I have been thinking about the same topic for some time, and I do realize that direct files could be faster.

In my (hypothetical, 'cause I never actually sat down and wrote that) case, I wanted the personal transactions in a month, and I realized I could just keep one single file per month, and read the whole thing at once (also 'cause the application would display the whole month at once).

Filesystems can be considered a key-value (or key-document) database. The funny thing about the example in the link is that one could simply create a structure like user/[id]/info.json and access the user by ID directly instead of scanning a file to find them. Again, that's just because of the examples used; search by name would be a pain, and that's one point where databases handle things better.
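A sketch of that filesystem-as-key-value layout (the paths follow the comment's user/[id]/info.json; the helper names are mine): lookup by ID is one path construction, while search-by-name means scanning every directory.

```python
import json
from pathlib import Path

ROOT = Path("data")

def save_user(user_id, info):
    d = ROOT / "user" / user_id
    d.mkdir(parents=True, exist_ok=True)
    (d / "info.json").write_text(json.dumps(info))

def load_user(user_id):
    # O(1) "primary key" lookup: the ID is the path.
    return json.loads((ROOT / "user" / user_id / "info.json").read_text())

def find_by_name(name):
    # The painful case: a full scan, which an index in a database avoids.
    return [p for p in (ROOT / "user").iterdir()
            if json.loads((p / "info.json").read_text())["name"] == name]

save_user("42", {"name": "ada"})
print(load_user("42"))  # {'name': 'ada'}
```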

reply
m6z
2 hours ago
[-]
I have found that SQLite can be faster than using text or binary files, confirming their claims here: https://sqlite.org/fasterthanfs.html
reply
freedomben
2 hours ago
[-]
I avoided DBs like the plague early in my career, in favor of serialized formats on disk. I still think there's a lot of merit to that, but at this point in my career I see a lot more use case for sqlite and the relational features it comes with. At the least, I've spent a lot less time chasing down data corruption bugs since changing philosophy.

Now that said, if there's value in the "database" being human-readable/editable, JSON is still well worth considering. Dealing with even SQLite is a pain in the ass when you just need to tweak or read something, especially if you're not the dev.

reply
giva
2 hours ago
[-]
> Dealing with even sqlite is a pain in the ass when you just need to tweak or read something, especially if you're not the dev.

How? With SQL it is super easy to search, compare, and update data. That's what it's built for.

reply
freedomben
2 hours ago
[-]
Pain in the ass was way too strong, I retract that. Mainly I meant relative. For example `nvim <filename>.json` and then /search for what I want, versus tracking down the sqlite file, opening, examining the schema, figuring out where the most likely place is that I care about, writing a SQL statement to query, etc.
reply
giva
2 hours ago
[-]
Well, you still need to track down the <filename> part and know what you want to search for, so you need to examine the schema anyway.

However, if all of your application state can be represented in a single JSON file of less than a dozen MB, then yes, a database can be overkill.

reply
freedomben
1 hour ago
[-]
> Well, you still need to track down the <filename> part and knowing what you want to search, so you need to examine the schema anyway.

Yes, agreed, but it's usually a lot easier to find the filename part, especially if the application follows XDG. SQLite databases are usually buried somewhere, because they aren't expected to be looked at.

reply
XorNot
2 hours ago
[-]
I've just built myself a useful tool which now really would benefit from a database and I'm deeply regretting not doing that from the get-go.

So my opinion has thoroughly shifted to "start with a database, and if you _really_ don't need one, it'll be obvious."

But you probably do.

reply
stackskipton
1 hour ago
[-]
SRE here. The "huh, neat" side of my brain is very interested. The SRE side is screaming "GOD NO, PLEASE NO."

Overhead in any project is understanding it and onboarding new people to it. Keeping to the "mainline" path is key to lowering friction here. All 3 languages have well-supported ORMs that support SQLite.

reply
srslyTrying2hlp
2 hours ago
[-]
I tried doing this with csv files (and for an online solution, Google Sheets)

I ended up just buying a VPS, putting openclaw on it, and letting it Postgres my app.

I feel like this article is outdated since the invention of OpenClaw/Claude Opus level AI Agents. The difficulty is no longer programming.

reply
fifilura
2 hours ago
[-]
Isn't this the same case the NoSQL movement made?
reply
ForHackernews
2 hours ago
[-]
Surprised to see this beating SQLite after previously reading https://sqlite.org/fasterthanfs.html
reply
ethan_smith
22 minutes ago
[-]
The SQLite "faster than filesystem" page is specifically about reading small blobs where the overhead of individual filesystem calls (open/read/close per blob) exceeds SQLite reading from a single already-open file. Once you're talking about reading one big JSON file sequentially, that overhead disappears and you're just doing a single read - which is basically the best case for the filesystem and the worst case for SQLite (which still has to parse its B-tree, check schemas, etc).
reply
MattRogish
1 hour ago
[-]
"Do not cite the deep magic to me witch, I was there when it was written"

If you want to do this for fun or for learning? Absolutely! I did my CS Masters thesis on SQL JOINS and tried building my own new JOIN indexing system (tl;dr: mine wasn't better). Learning is fun! Just don't recommend people build production systems like this.

Is this article trolling? It feels like trolling. I struggle to take an article seriously that conflates databases with database management systems.

A JSON file is a database. A CSV is a database. XML (shudder) is a database. PostgreSQL data files, I guess, are a database (and indexes and transaction logs).

They never actually posit a scenario in which rolling your own DBMS makes sense (the only pro is "hand rolled binary search is faster than SQLite"), and their "When you might need" a DBMS misses all the scenarios, the addition of which would cause the conclusion to round to "just start with SQLite".

It should basically be "if you have an entirely read-only system on a single server/container/whatever" then use JSON files. I won't even argue with that.

Nobody - and I mean nobody - is running a production system processing hundreds of thousands of requests per second off of a single JSON file. I mean, if req/sec is the only consideration, at that point just cache everything to flat HTML files! Node and Typescript and code at all is unnecessary complexity.

PostgreSQL (MySQL, et al) is a DBMS (DataBase Management System). It might sound pedantic but the "MS" part is the thing you're building in code:

concurrency, access controls, backups, transactions: recovery, rollback, committing, etc., ability to do aggregations, joins, indexing, arbitrary queries, etc. etc.

These are not just "nice to have" in the vast, vast majority of projects.

"The cases where you'll outgrow flat files:"

Please add "you just want to get shit done and never have to build your own database management system". Which should be just about everybody.

If your app is meaningfully successful - and I mean more than just like a vibe-coded prototype - it will break. It will break in both spectacular ways that wake you up at 2AM and it will break in subtle ways that you won't know about until you realize something terrible has happened and you lost your data.

Didn't we just have this discussion like yesterday (https://ultrathink.art/blog/sqlite-in-production-lessons)?

It feels like we're throwing away 50 years of collective knowledge, skills, and experience because it "is faster" (and in the same breath note that nobody is gonna hit these req/sec.)

I know, it's really, really hard to type `yarn add sqlite3` and then `SELECT * FROM foo WHERE bar='baz'`. You're right, it's so much easier writing your own binary search and indexing logic and reordering files and query language.

Not to mention you now need an AGENTS.md that says "We use our own home-grown database nonsense; if you want to query the JSON file in a different way, just generate more code." NOT using standard components that LLMs know backwards and forwards? Gonna have a bad time. Enjoy burning your token budget on useless, counter-productive code.

This is madness.

reply
fatih-erikli-cg
2 hours ago
[-]
I agree. Databases are useless. You don't even need to load the data into memory; reading it from disk whenever you need to read something should be fine. I don't buy the case that there are billions of records and the database therefore must be something optimized to handle them. That many records is most likely something like access logs, which I think shouldn't be stored at all in such cases.

Even if it's Postgres, it is still a file on disk. If there is a need for something like partitioning the data, it is much easier to write the code that partitions the data yourself.

If there is a need to add data through text inputs, checkboxes, etc., a database with its admin tools may be a good thing. If the data is something that gets imported and exported, a database may be a good thing too. But I still don't believe in such cases; in my ten-something years of software development, something like that has never happened.

reply
zeroonetwothree
1 hour ago
[-]
Poe’s law in action?
reply
Sharlin
2 hours ago
[-]
Not sure if sarcastic…
reply
bsenftner
2 hours ago
[-]
I worked as a software engineer for 30 years before being forced to use a database, and that was for a web site. I've been coding actively, daily, since the '70s. For decades, we just wrote proprietary files to disk, and that was the norm. Many a new developer can't even imagine writing their own proprietary file format; the idea literally scares them. The engineers produced today are a shadow of what they used to be.
reply
anonymars
2 hours ago
[-]
Yeah, it scares me because I'm experienced enough to know all the difficulties involved in keeping durable data consistent, correct, and performant
reply
vlapec
1 hour ago
[-]
>The engineers produced today are a shadow of what they used to be.

…and it won’t get better anytime soon.

reply