Jepsen: NATS 2.12.1
403 points | 21 hours ago | 19 comments | jepsen.io | HN

stmw | 18 hours ago
Every time someone builds one of these things and skips over "overcomplicated theory", aphyr destroys them. At this point, I wonder if we could train an AI to look over a project's documentation and predict whether it's likely to lose committed writes just based on the marketing / technical claims. We probably can.

PeterCorless | 8 hours ago
You can have DeepWiki literally scan the source code and tell you:

> 2. Delayed Sync Mode (Default)

> In the default mode, writes are batched and marked with needSync = true for later synchronization filestore.go:7093-7097 . The actual sync happens during the next syncBlocks() execution.

However, if you read DeepWiki's conclusion, it is far more optimistic than what Aphyr uncovered in real-world testing.

> Durability Guarantees

> Even with delayed fsyncs, NATS provides protection against data loss through:

> 1. Write-Ahead Logging: Messages are written to log files before being acknowledged

> 2. Periodic Sync: The sync timer ensures data is eventually flushed to disk

> 3. State Snapshots: Full state is periodically written to index.db files filestore.go:9834-9850

> 4. Error Handling: If sync operations fail, NATS attempts to rebuild state from existing data filestore.go:7066-7072

https://deepwiki.com/search/will-nats-lose-uncommitted-wri_b...

traceroute66 | 5 hours ago
> if you read DeepWiki's conclusion, it is far more optimistic

Well, it's an LLM ... of course it's going to be optimistic. ;-)

63stack | 3 hours ago
and your point is ...?

esafak | 1 hour ago
You can DIY without aphyr.

otterley | 55 minutes ago
But this example of DIY led to incorrect conclusions about data integrity.

awesome_dude | 17 hours ago
/me strokes my long grey beard and nods

People always think "theory is overrated" or "hacking is better than having a school education"

And then proceed to shoot themselves in the foot with "workarounds" that break well-known, well-documented, well-traversed problem spaces.

whimsicalism | 17 hours ago
Certainly a narrative that is popular among the greybeard crowd, yes. In pretty much every field I've worked in, the opposite problem has been much, much more common.

johncolanduoni | 14 hours ago
What fields? Cargo culting is annoying and definitely leads to suboptimal solutions and sometimes total misses, but I’ve rarely found that simply reading literature on a thorny topic prevents you from thinking outside the box. Most people I’ve seen work who were actually innovating (as in novel solutions and/or execution) understood the current SOTA of what they were working on inside and out.

ownagefool | 6 hours ago
I suspect they were more referring to curmudgeons not patching.

I was engaged after one of the world's biggest data leaks. The security org was hyper-worried about the cloud environment, which was in its infancy, despite the fact that their data leak was from an on-prem, mainframe-style system and they hadn't really improved their posture in any significant way despite spending £40m.

As an aside, I use NATS for some workloads where I've obviously spent little effort validating whether it's a great idea, and I'm pretty horrified by the report. (=

_zoltan_ | 16 hours ago
what's the opposite problem statement?

whimsicalism | 16 hours ago
People overly beholden to the tried-and-true 'known' way of addressing a problem space, not considering (or belittling) alternatives. Many of the things that have been most aggressively 'bitter lesson'-ed in the last decade fall into this category.

awesome_dude | 15 hours ago
Like this bug report?

The things that have been "disrupted" haven't delivered: blockchains are still a scam, food delivery services are worse than before (restaurants are worse off, the people making the deliveries are worse off), and taxis still needed to go back and vet drivers to ensure they weren't fiends.

hbbio | 15 hours ago
> Blockchains are still a scam

Did you actually look at blockchain node implementations as of 2025 and what's on the roadmap? Ethereum nodes/L2s with optimistic or zk-proofs are probably the most advanced distributed databases that actually work.

(not talking about "coins" and stuff obviously, another debate)

otterley | 14 hours ago
> Ethereum nodes/L2s with optimistic or zk-proofs are probably the most advanced distributed databases that actually work.

What are you comparing against? Aren't they slower, less convenient, and less available than, say, DynamoDB or Spanner, both of which have been in full-service, reliable operation since 2012?

hbbio | 5 hours ago
Ethereum is so good at being distributed that it's decentralized.

DynamoDB and Spanner are both great, but they're meant to be run by a single admin. That's a considerably simpler problem to solve.

derefr | 10 hours ago
I think they mean big-D "Distributed", i.e. in the sense that a DHT is Distributed. Decentralized in both a logical and political sense.

A big DynamoDB/Spanner deployment is great while you can guarantee some benevolent (or just not-malevolent) org around to host the deployment for everyone else. But technologies of this type do not have any answer for the key problem of "ensure the infra survives its own founding/maintaining org being co-opted + enshittified by parties hostile to the central purpose of the network."

Blockchains — and all the overhead and pain that comes with them — are basically what you get when you take the classical small-D distributed database design, and add the components necessary to get that extra property.

Agingcoder | 7 hours ago
Which are both systems with a fair amount of theory behind them!

drdrey | 14 hours ago
The big difference is the trust assumption: anyone can join or leave the network of nodes at any time.

charcircuit | 13 hours ago
I think you are being downvoted because Ethereum requires you to stake 32 ETH (about $100k), and the entry queue right now is about 9 days and the exit queue is about 20 days. So only people with enough capital can join the network, and it takes quite some time to join or leave, as opposed to being able to do it at any time you want.

drdrey | 9 hours ago
OK, but these are details; the point is that the operators of the database are external, selfish, and fluctuating.

j16sdiz | 14 hours ago
The traditional way is paper trails and/or WORM (write-once-read-many) devices, with local checksums.

You can have multiple replicas without extra computation for hashes and such.

MrDarcy | 16 hours ago
The ivory tower standing in the way of delivering value, I think.

colechristensen | 16 hours ago
To be more specific, goals of perfection where perfection does not at all matter.

johncolanduoni | 11 hours ago
What does bothering to read some distributed systems literature have to do with demanding unnecessary perfection? Did NATS have in their docs that JetStream accepted split-brain conditions as a reality, or that metadata corruption could silently delete a topic? You could maybe argue the fsync default was a tradeoff, though I think it's a bad one (not the existence of the flag, just the default being "false"). The rest are not the kind of bugs you expect to see in a five-year-old persistence layer.

stmw | 11 hours ago
Exactly, "losing data from acknowledged writes" is not failing to be perfect, it's failing to deliver on the (advertised) basics of storing your data.

LaGrange | 10 hours ago
Last time I was at school, requirements analysis was a thing, but do go off.

staticassertion | 1 hour ago
I don't have a "school education" and I know plenty of theory; I've certainly read the papers cited in this test.

mzl | 1 hour ago
You might not have a school education, but you have educated yourself. It is unfortunately common to hear people complain that the theory one learns in school (or by determined self-study) is useless, which I think is what the greybeard comment you replied to intends to say.

belter | 4 hours ago
The only post in this thread that actually summarized the core findings of the study, namely:

- ACKed messages can be silently lost due to minority-node corruption.

- A single-bit corruption can cause some replicas to lose up to 78% of stored messages.

- Snapshot corruption can propagate and lead to entire stream deletion across the cluster.

- The default lazy-fsync mode can drop minutes of acknowledged writes on a crash.

- A crash combined with network delay can cause persistent split-brain and divergent logs.

- Data loss even with “sync_interval = always” in presence of membership changes or partitions.

- Self-healing and replica convergence did not always work reliably after corruption.

…was not downvoted, but flagged... That is telling. Documented failure modes are apparently controversial. It also raises the question: what level of technical due diligence was performed by organizations like Mastercard, Volvo, PayPal, Baidu, Alibaba, or AT&T before adopting this system?

So what is next? Nominate NATS for the Silent Failure Peace Prize?

traceroute66 | 3 hours ago
> Nominate NATS for the Silent Failure Peace Prize?

One or two of the comments on GitHub by the NATS team in response to Issues opened by Kyle are also more than a bit cringeworthy.

Such as this one:

"Most of our production setups, and in fact Synadia Cloud as well is that each replica is in a separate AZ. These have separate power, networking etc. So the possibility of a loss here is extremely low in terms of due to power outages."

Which Kyle had to call them out on:

"Ah, I have some bad news here--placing nodes in separate AZs does not mean that NATS' strategy of not syncing things to disk is safe. See #7567 for an example of a single node failure causing data loss (and split-brain!)."

https://github.com/nats-io/nats-server/issues/7564#issuecomm...

dboreham | 17 hours ago
I've asked LLMs to do similar tasks and the results were very useful.

johncolanduoni | 14 hours ago
I can’t wait until it’s good enough to vibecode the next MongoDB.

lnenad | 5 hours ago
Aim for all three of CAP to really hit the right vibes.

jwr | 3 hours ago
For anyone dealing with databases, and especially distributed databases, I highly recommend reading the Jepsen page on consistency models: https://jepsen.io/consistency/models

It provides a dictionary of terms that we can use to have educated discussions, rather than throwing around terms like "ACID".

rishabhaiover | 17 hours ago
NATS be trippin, no CAP.

veverkap | 17 hours ago
Underrated

jessekv | 6 hours ago
A tiny bit more context here:

https://github.com/nats-io/nats-server/discussions/3312#disc...

(I opened this discussion 2.5 years ago and have gotten an email from GitHub every once in a while ever since. I had given up hope, TBH.)

johncolanduoni | 18 hours ago
Wow. I’ve used NATS for best-effort in-memory pub/sub, which it has been great for, including getting subtle scaling details right. I never touched their persistence and would have investigated more before I did, but I wouldn’t have expected it to be this bad. Vulnerability to simple single-bit file corruption is embarrassing.

vrnvu | 20 hours ago
Sort of related: Jepsen and Antithesis recently released a glossary of common terms, which is a fantastic reference.

https://jepsen.io/blog/2025-10-20-distsys-glossary

merb | 20 hours ago
> 3.4 Lazy fsync by Default

Why? Why do some databases do that? To have better performance in benchmarks? It would only be ok to do that with a safer default, or if you at least wrote a lot about it. But especially when you run stuff in a small cluster, you get bitten by things like that.

aaronbwebber | 19 hours ago
It's not just better performance on latency benchmarks, it likely improves throughput as well because the writes will be batched together.

Many applications do not require true durability, and many likely benefit from lazy fsync. Whether it should be the default is a lot more questionable, though.

johncolanduoni | 18 hours ago
It's like using a non-cryptographically-secure RNG: if you don't know enough to check whether the fsync flag is off yourself, it's unlikely you know enough to evaluate the impact of durability on your application.

traceroute66 | 16 hours ago
> if you don't know enough to check whether the fsync flag is off yourself,

Yeah, it should use safe defaults.

Then you can always go read the corners of the docs for the "go faster" mode.

Just like Postgres's infamous "non-durable settings" page... https://www.postgresql.org/docs/18/non-durability.html

tybit | 4 hours ago
I also think fsync before acking writes is a better default. That aside, if you were to choose async for batching writes, their default value surprises me. Two minutes seems like an eternity. Would you not get very good batching for throughput even at something like 2 seconds? Still not safe, but safer.

semiquaver | 14 hours ago
You can batch writes while at the same time not acknowledging them to clients until they are flushed; it just takes more bookkeeping.

senderista | 17 hours ago
For transactional durability, the writes will definitely be batched ("group commit"), because otherwise throughput would collapse.

otabdeveloper4 | 3 hours ago
> Many applications do not require true durability

Pretty much no application requires true durability.

millipede | 19 hours ago
I always wondered why the fsync has to be lazy. It seems like the fsyncs can be bundled together, and the notification messages held for a few millis while the write completes, similar to TCP corking. There doesn't need to be one fsync per consensus round.

aphyr | 18 hours ago
Yes, good call! You can batch up multiple operations into a single call to fsync. You can also tune the number of milliseconds or bytes you're willing to buffer before calling `fsync` to balance latency and throughput. This is how databases like Postgres work by default--see the `commit_delay` option here: https://www.postgresql.org/docs/8.1/runtime-config-wal.html

to11mtm | 17 hours ago
> This is how databases like Postgres work by default--see the `commit_delay` option here: https://www.postgresql.org/docs/8.1/runtime-config-wal.html

I must note that the default for Postgres is that there is NO delay, which is a sane default.

> You can batch up multiple operations into a single call to fsync.

I've done this in various messaging implementations for throughput, and it's actually fairly easy to do in most languages:

Basically, set up 1-N writers (depending on how you are storing data) that take a set of items containing the data to be written alongside a TaskCompletionSource (a Promise, in Java terms). When your code wants to write, it pushes the item onto that local queue; the worker(s) on the queue write out messages in batches based on whatever makes sense (i.e. tuned for write size, number of records, etc., for both throughput and guaranteeing forward progress), and when the write completes you either complete or fail the TCS/Promise.

If you've got the right 'glue' in your language/libraries it's not that hard; this example [0] from Akka.NET's SQL persistence layer shows how simple the actual write processor's logic can be. Yeah, you have to think about queueing a little bit, but I've found this basic pattern very adaptable (e.g. the queueing op can just send a bunch of ready-to-go bytes and you work off that for the threshold instead, add framing if needed, etc.).

[0] https://github.com/akkadotnet/Akka.Persistence.Sql/blob/7bab...
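
A rough sketch of that pattern in Go, for the curious (all names here are illustrative, not NATS or Akka.NET internals): each write carries a channel standing in for the TCS/Promise, and the writer only completes it after the batch containing that write has been written and fsynced.

    package main

    import (
        "fmt"
        "os"
    )

    // pendingWrite couples a payload with a "promise": the done channel
    // receives exactly one result, and only after the data is fsynced.
    type pendingWrite struct {
        data []byte
        done chan error
    }

    // writer drains the queue, writes everything currently available as one
    // batch, issues a single fsync, and only then acknowledges each write.
    func writer(f *os.File, queue <-chan pendingWrite) {
        for first := range queue {
            batch := []pendingWrite{first}
        drain:
            for len(batch) < 256 { // arbitrary cap; tune by bytes/count in real systems
                select {
                case w, ok := <-queue:
                    if !ok {
                        break drain
                    }
                    batch = append(batch, w)
                default:
                    break drain // nothing else queued right now
                }
            }
            var err error
            for _, w := range batch {
                if _, werr := f.Write(w.data); werr != nil {
                    err = werr // simplistic: fail the whole batch on any write error
                    break
                }
            }
            if err == nil {
                err = f.Sync() // one fsync amortized over the whole batch
            }
            for _, w := range batch {
                w.done <- err // ack (or fail) only after the fsync
            }
        }
    }

    func main() {
        f, _ := os.Create("journal.log")
        queue := make(chan pendingWrite, 1024)
        go writer(f, queue)

        w := pendingWrite{data: []byte("hello\n"), done: make(chan error, 1)}
        queue <- w
        fmt.Println("acked, err =", <-w.done)
    }

The 256-entry cap and the buffered queue are arbitrary; a real implementation would also bound batches by bytes and handle partial-write errors more carefully.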

aphyr | 17 hours ago
Ah, pardon me, spoke too quickly! I remembered that it fsynced by default, and offered batching, and forgot that the batch size is 0 by default. My bad!

to11mtm | 17 hours ago
Well, the write behavior is still tunable, so you are still correct.

Just wanted to clarify that the default is still at least safe, in case people perusing this thread for things to worry about were, well, thinking about worrying.

Love all of your work and writings, thank you for all you do!

loeg | 15 hours ago
In some contexts (interrupts) we would call this "coalescing." (I don't work in databases, can't comment about terminology there.)

kbenson | 16 hours ago
That was my immediate thought as well, under the assumption that the lazy fsync is for performance. I imagine in some situations delaying the confirmation until the write actually happens is okay (depending on the delay), but it also occurred to me that if you delay enough, the system is busy enough, and your time to send the message is small enough, the number of connections you need to keep open can be some small or large multiple of what you would need without delaying the confirmation message to actual write time.

senderista | 16 hours ago
In practice, there must be a delay (from batching) if you fsync every transaction before acknowledging commit. The database would be unusably slow otherwise.

thinkharderdev | 20 hours ago
> To have better performance in benchmarks

Yes, exactly.

mrkeen | 18 hours ago
One of the perks of being distributed, I guess.

The kind of failure that a system can tolerate with strict fsync but can't tolerate with lazy fsync (i.e. the software 'confirms' a write to its caller but then crashes) is probably not the kind of failure you'd expect to encounter on a majority of your nodes all at the same time.

johncolanduoni | 17 hours ago
It is if they’re in the same physical datacenter. Usually the way this is done is to wait for at least M replicas to fsync, but only require the data to be in memory for the rest. It smooths out the tail latencies, which are quite high for SSDs.
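
A hedged sketch of that ack rule in Go (hypothetical types, not how NATS or any particular system implements it): the leader hands the entry to all N replicas, but only acknowledges the client once M of them report that their copy has been fsynced; the rest merely need to have received it.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // replicaAck is what each replica reports back for a given entry.
    type replicaAck struct {
        durable bool  // true once the replica has fsynced the entry
        err     error // non-nil if the replica failed to store it at all
    }

    // commit waits until at least quorumDurable replicas report a durable
    // (fsynced) write, or until it runs out of replicas or time.
    func commit(acks <-chan replicaAck, replicas, quorumDurable int, timeout time.Duration) error {
        durable, received := 0, 0
        deadline := time.After(timeout)
        for received < replicas {
            select {
            case a := <-acks:
                received++
                if a.err != nil {
                    continue // a failed replica just doesn't count toward the quorum
                }
                if a.durable {
                    durable++
                }
                if durable >= quorumDurable {
                    return nil // safe to ack the client now
                }
            case <-deadline:
                return errors.New("timed out waiting for a durable quorum")
            }
        }
        return fmt.Errorf("only %d replicas reported durable writes, needed %d", durable, quorumDurable)
    }

    func main() {
        acks := make(chan replicaAck, 3)
        // Simulate 3 replicas: two fsynced, one only buffered in memory.
        acks <- replicaAck{durable: true}
        acks <- replicaAck{durable: false}
        acks <- replicaAck{durable: true}
        fmt.Println(commit(acks, 3, 2, time.Second)) // prints <nil>
    }

This smooths the tail because the ack never has to wait for the slowest disk, only the M fastest.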

loeg | 15 hours ago
> It smooths out the tail latencies, which are quite high for SSDs.

I'm sorry, tail latencies are high for SSDs? In my experience, the tail latencies are much higher for traditional rotating media (tens of seconds, vs 10s of milliseconds for SSDs).

johncolanduoni | 14 hours ago
They're higher relative to median latencies for each. A high-end SSD's P99/median ratio is higher than a high-end HDD's. That's the relevant metric for request hedging.

loeg | 13 hours ago
It's approximately a factor of 1000x for both.

senderista | 17 hours ago
You can push the safety envelope a bit further and wait for your data to only be in memory in N separate fault domains. Yes, your favorite ultra-reliable cloud service may be doing this.

cnlwsu | 16 hours ago
Durability through replication and distribution, and better throughput by building up more within the window of a lazy fsync.

dilyevsky | 18 hours ago
Massively improves benchmark performance. Like 5-10x

speedgoose | 18 hours ago
/dev/null is even faster.

formerly_proven | 18 hours ago
/dev/null tends to lose a lot more data.

onionisafruit | 17 hours ago
Just wait until the jepsen report on /dev/null. It's going to be brutal.

orthoxerox | 17 hours ago
/dev/null works according to spec, can't accuse it of not doing something it has never promised

mysfi | 16 hours ago
Curious about the differences between content on aphyr.com/tags/jepsen and jepsen.io/analyses. I recently discovered aphyr.com and was excited about the potential insights!

aphyr | 15 hours ago
Jepsen started as a personal blog series on nights and weekends; jepsen.io is when I started doing it professionally, about ten years ago.

bsaul | 7 hours ago
Curious: do you have a team of people working with you, or is it mostly solo work? Your work is so valuable, I would be scared for our industry if it had a bus factor of 1.

andersmurphy | 8 hours ago
Highly recommend you check out the interview series; they are a lot of fun.

> They will refuse, of course, and ever so ashamed, cite a lack of culture fit. Alight upon your cloud-pine, and exit through the window. This place could never contain you.

https://aphyr.com/posts/340-reversing-the-technical-intervie...

dangoodmanUT | 16 hours ago
Half-expected, TBH, but I didn't expect it to be this bad.

Just use Redpanda.

maxmcd | 19 hours ago
> > You can force an fsync after each messsage [sic] with always, this will slow down the throughput to a few hundred msg/s.

Is the performance warning in the NATS docs possible to improve on? Couldn't you still run fsync on an interval and queue up a certain number of writes to be flushed at once? I could imagine latency suffering, but batch throughput could be preserved to some extent?

scottlamb | 19 hours ago
> Is the performance warning in the NATS docs possible to improve on? Couldn't you still run fsync on an interval and queue up a certain number of writes to be flushed at once? I could imagine latency suffering, but batch throughput could be preserved to some extent?

Yes, and you shouldn't even need a fixed interval. Just queue up any writes while an `fsync` is pending; then do all those in the next batch. This is the same approach you'd use for rounds of Paxos, particularly between availability zones or regions where latency is expected to be high. You wouldn't say "oh, I'll ack and then put it in the next round of Paxos", or "I'll wait until the next round in 2 seconds then ack"; you'd start the next batch as soon as the current one is done.
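
For illustration, that loop in Go, under made-up names (it's the same group-commit idea as the channel sketch earlier in the thread, shown here as a swap-the-buffer loop): writes that arrive while one fsync is in flight simply become the next batch, and the next fsync starts as soon as the current one returns; no timer involved.

    package main

    import (
        "fmt"
        "os"
        "sync"
        "time"
    )

    // journal double-buffers writes: enqueue appends to `pending`, and the
    // sync loop swaps that slice out, writes it, and fsyncs. Anything that
    // arrives while an fsync is in flight rides along in the next batch.
    type journal struct {
        mu      sync.Mutex
        pending [][]byte
        kick    chan struct{} // wakes the sync loop when it is idle
    }

    func (j *journal) enqueue(p []byte) {
        j.mu.Lock()
        j.pending = append(j.pending, p)
        j.mu.Unlock()
        select {
        case j.kick <- struct{}{}:
        default: // a wakeup is already queued; this write will be picked up with it
        }
    }

    func (j *journal) syncLoop(f *os.File) {
        for range j.kick {
            j.mu.Lock()
            batch := j.pending
            j.pending = nil // new writes accumulate here during the fsync below
            j.mu.Unlock()
            if len(batch) == 0 {
                continue
            }
            for _, p := range batch {
                f.Write(p)
            }
            f.Sync() // one fsync for everything queued since the previous one
            fmt.Printf("synced %d writes\n", len(batch))
            // Acks to clients would be sent here, after the Sync, never before.
        }
    }

    func main() {
        f, _ := os.Create("journal.log")
        j := &journal{kick: make(chan struct{}, 1)}
        go j.syncLoop(f)
        for i := 0; i < 5; i++ {
            j.enqueue([]byte(fmt.Sprintf("entry %d\n", i)))
        }
        time.Sleep(100 * time.Millisecond) // let the toy example flush
    }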

ADefenestrator | 9 hours ago
Yes, this is a reasonably common strategy. It's how Cassandra's batch and group commit modes work, and Postgres has a similar option. Hopefully NATS will implement something similar eventually.

clemlesne | 20 hours ago
NATS is a fantastic piece of software, but the docs are impractical and half-baked. It's a shame to have to reverse-engineer the software from GitHub to understand the auth schemes.

rdtsc | 19 hours ago
> By default, NATS only flushes data to disk every two minutes, but acknowledges operations immediately. This approach can lead to the loss of committed writes when several nodes experience a power failure, kernel crash, or hardware fault concurrently—or in rapid succession (#7564).

I am getting strong early MongoDB vibes. "Look how fast it is, it's web-scale!". Well, if you don't fsync, you'll go fast, but you'll go even faster piping customer data to /dev/null, too.

Coordinated failures shouldn't be a novelty or a surprise any longer these days.

I wouldn't trust a product that doesn't default to safest options. It's fine to provide relaxed modes of consistency and durability but just don't make them default. Let the user configure those themselves.

Thaxll | 18 hours ago
I don't think there is a modern database that has the safest options all turned on by default. For instance, the default transaction isolation level for PG is read committed, not serializable.

One of the most used DBs in the world is Redis, and by default it fsyncs every second, not on every operation.

andersmurphy | 8 hours ago
SQLite is always serializable and by default has synchronous=FULL, so it fsyncs on every commit.

The problem is it has terrible defaults for performance (in the context of web servers). Just bad legacy options, not ones that make it less robust: cache size ridiculously small, temp tables not in memory, WAL off so no concurrent reads/writes, etc.
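
For what it's worth, here is roughly what that opt-in tuning looks like from Go (the mattn/go-sqlite3 driver is assumed, and the pragma values are illustrative; none of them relax durability):

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/mattn/go-sqlite3" // assumed driver; registers the "sqlite3" name
    )

    func main() {
        db, err := sql.Open("sqlite3", "app.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()
        // Pragmas apply per connection, so pin the pool to one connection here.
        db.SetMaxOpenConns(1)

        pragmas := []string{
            "PRAGMA journal_mode = WAL",  // concurrent readers alongside a single writer
            "PRAGMA synchronous = FULL",  // keep the safe default: fsync on every commit
            "PRAGMA cache_size = -64000", // ~64 MB page cache (negative means KiB)
            "PRAGMA temp_store = MEMORY", // temp tables and indices in RAM
            "PRAGMA busy_timeout = 5000", // wait up to 5s on a locked database
        }
        for _, p := range pragmas {
            if _, err := db.Exec(p); err != nil {
                log.Fatal(err)
            }
        }
    }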

jwr | 3 hours ago
FoundationDB provides strict serializability by default.

hxtk | 11 hours ago
CockroachDB is serializable by default, but I don’t know about their other settings.

hobs | 18 hours ago
Pretty sure SQL Server won't acknowledge a write until it's in the WAL (you can go the opposite way and turn on delayed durability, though).

lubesGordi | 18 hours ago
I don't know about JetStream, but Redis Cluster would only ack writes after replicating to a majority of nodes. I think there is some config on standalone Redis too where you can ack after fsync (which apparently still doesn't guarantee anything because of buffering in the OS). In any case, understanding what the ack implies is important, and I'd be frustrated if the JetStream docs were not clear on that.

akshayshah | 10 hours ago
At least per the Redis docs, clusters acknowledge writes before they're replicated: https://redis.io/docs/latest/operate/oss_and_stack/managemen...

The docs explicitly state that clusters do not provide strong consistency and can lose acknowledged data.

sk5t | 14 hours ago
To the best of my knowledge, Redis has never blocked for replication, although you can configure healthy replication state as a prerequisite to accept writes.

KaiserPro | 18 hours ago
NATS is very upfront in that the only thing that is guaranteed is the cluster being up.

I like that, and it allows me to build things around it.

When we used it back in 2018, it performed well and was easy to administer. The multi-language APIs were also good.

traceroute66 | 18 hours ago
> NATS is very upfront in that the only thing that is guaranteed is the cluster being up.

Not so fast.

Their docs make some pretty bold claims about JetStream...

They talk about JetStream addressing the "fragility" of other streaming technology.

And "This functionality enables a different quality of service for your NATS messages, and enables fault-tolerant and high-availability configurations."

And one of their big selling points for JetStream is the whole "store and replay" thing. Which implies the storage bit should be trustworthy, no?

KaiserPro | 18 hours ago
Oh, sorry, I was talking about NATS core, not JetStream. I'd be pretty sceptical about persistence.

billywhizz | 18 hours ago
The OP was specifically about JetStream, so I guess you just didn't read it?

KaiserPro | 17 hours ago
just imagine I'm claude,

smoke bomb

gopalv | 18 hours ago
> Well, if you don't fsync, you'll go fast, but you'll go even faster piping customer data to /dev/null, too.

The trouble is that you need to specifically optimize for fsyncs, because usually it is either no brakes or hand-brake.

The middle ground of multi-transaction group-commit fsync seems to not exist anymore because of SSDs and the massive IOPS you can pull off in general; now it is about syscall context switches.

Two minutes is a bit too much (also, fdatasync vs fsync).

senderista | 16 hours ago
IOPS only solves throughput, not latency. You still need to saturate internal parallelism to get good throughput from SSDs, and that requires batching. Also, even double-digit microsecond write latency per transaction commit would limit you to only 10K TPS. It's just not feasible to issue individual synchronous writes for every transaction commit, even on NVMe.

tl;dr "multi-transaction group-commit fsync" is alive and well

0xbadcafebee | 19 hours ago
Not flushing on every write is a very common tradeoff of speed over durability. Filesystems, databases, all kinds of systems do this. They have some hacks to prevent it from corrupting the entire dataset, but lost writes are accepted. You can often prevent this by enabling an option or tuning a parameter.

> I wouldn't trust a product that doesn't default to safest options

This would make most products suck, and require a crap-ton of manual fixes and tuning that most people would hate, if they even got the tuning right. You have to actually do some work yourself to make a system behave the way you require.

For example, Postgres' isolation level is weak by default, leading to race conditions. You have to explicitly enable serializable isolation to avoid them, which carries a performance penalty. (https://martin.kleppmann.com/2014/11/25/hermitage-testing-th...)
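
The opt-in is per session or per transaction; for example, in Go's database/sql it looks roughly like this (driver and connection string are placeholders):

    package main

    import (
        "context"
        "database/sql"
        "log"

        _ "github.com/lib/pq" // assumed Postgres driver
    )

    func main() {
        db, err := sql.Open("postgres", "postgres://user:pass@localhost/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Postgres defaults to READ COMMITTED; ask for SERIALIZABLE explicitly.
        tx, err := db.BeginTx(context.Background(), &sql.TxOptions{
            Isolation: sql.LevelSerializable,
        })
        if err != nil {
            log.Fatal(err)
        }
        // ... reads and writes go here; on a serialization failure
        // (SQLSTATE 40001), roll back and retry the whole transaction.
        if err := tx.Commit(); err != nil {
            log.Fatal(err)
        }
    }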

TheTaytay | 17 hours ago
> Filesystems, databases, all kinds of systems do this. They have some hacks to prevent it from corrupting the entire dataset, but lost writes are accepted.

Woah, those are _really_ strong claims. "Lost writes are accepted"? Assuming we are talking about "acknowledged writes", which the article is discussing, I don't think it's true that this is a common default for databases and filesystems. Perhaps databases or K/V stores that are marketed as in-memory caches might have defaults like this, but I'm not familiar with other systems that do.

I'm also getting MongoDB vibes from deciding not to flush except once every two minutes. Even deciding to wait a second would be pretty long, but two minutes? A lot happens in a busy system in 120 seconds...

0xbadcafebee | 15 hours ago
All filesystems that I'm aware of don't sync to disk on every write by default, and you absolutely can lose data. You have to intentionally enable sync. And even then the disk can still lose the writes.

Most (all?) NoSQL solutions are also eventually consistent by default, which means they can lose data. That's how Mongo works: it syncs a journal every 30-100 ms, and it syncs full writes at a configurable delay. Mongo is terrible, but not because it behaves like a filesystem.

Note that this is not "bad", it's just different. Lots of people use these systems specifically because they need performance more than durability. There are other systems you can use if you need those guarantees.

andersmurphy | 8 hours ago
I'd argue that with Mongo, a lot of people use it because it has fantastic marketing.

https://nemil.com/2017/08/29/the-marketing-behind-mongodb/

zbentley | 18 hours ago
I think "most people will have to turn on the setting to make things fast at the expense of durability" is a dubious assertion (plenty of systems, even high-criticality ones, do not have a very high data rate and thus would not necessarily suffer unduly from e.g. fsync-every-write).

Even if most users do turn out to want “fast_and_dangerous = true”, that’s not a particularly onerous burden to place on users: flip one setting, and hopefully learn from the setting name or the documentation consulted when learning about it that it poses operational risk.

hxtk | 11 hours ago
I always think about the way you discover the problem. I used to say the same about RNG: if you need fast PRNG and you pick CSPRNG, you’ll find out when you profile your application because it isn’t fast enough. In the reverse case, you’ll find out when someone successfully guesses your private key.

If you need performance and you pick data integrity, you find out when your latency gets too high. In the reverse case, you find out when a customer asks where all their data went.

to11mtm | 18 hours ago
In the defense of PG, for better or worse, as far as I know the "what is the RDBMS default" question falls into two categories:

- Read Committed default with MVCC (Oracle, Postgres, Firebird versions with MVCC, I -think- SQLite with WAL falls under this)

- Read committed with write locks one way or another (MSSQL default, SQLite default, Firebird pre MVCC, probably Sybase given MSSQL's lineage...)

I'm not aware of any RDBMS that treats 'serializable' as the default transaction level OOTB (I'd love to learn though!)

....

All of that said, "inconsistent read because you don't know the RDBMS and did not pay attention to the transaction model" has a very different blame direction than "we YOLO fsync on a timer to improve throughput".

If anything, it scares me that there are no other tuning options involved, such as number of bytes or number of events.

If I get a write-ack from a middleware I expect it to be written one way or another. Not 'It is written within X seconds'.

AFAIK there's no RDBMS that will just 'lose a write' unless the disk happens to be corrupted (or, IDK, maybe someone YOLOing with chaos mode on DB2?)

ncruces | 16 hours ago
> I -think- SQLite with WAL falls under this

No. SQLite is serializable. There's no configuration where you'd get read committed or repeatable read.

In WAL mode you may read stale data (depending on how you define stale data), but if you try to write in a transaction that has read stale data, you get a conflict, and need to restart your transaction.

There's one obscure configuration no one uses that's read uncommitted. But really, no one uses it.

hansihe | 17 hours ago
CockroachDB does Serializable by default

wseqyrku | 12 hours ago
> NATS only flushes data to disk every two minutes, but acknowledges operations immediately.

Wait, isn't that the whole point of acknowledgments? This is not an acknowledgment, it's "I'm a teapot".

CuriouslyC | 19 hours ago
NATS data is ephemeral in many cases anyhow, so it makes a bit more sense here. If you wanted something fully durable with a stronger persistence story, you'd probably use Kafka anyway.

nchmy | 19 hours ago
Core NATS is ephemeral. JetStream is meant to be persistent, and is presented as a replacement for Kafka.

traceroute66 | 18 hours ago
> NATS data is ephemeral in many cases anyhow, so it makes a bit more sense here

Dude ... the guy was testing JetStream.

Which, I quote from the first sentence of the first paragraph on the NATS website:

    NATS has a built-in persistence engine called JetStream which enables messages to be stored and replayed at a later time.

petre | 18 hours ago
So is MQTT, why bother with NATS then?

KaiserPro | 18 hours ago
MQTT doesn't have the same semantics. Request-reply (https://docs.nats.io/nats-concepts/core-nats/reqreply) is really useful if you need low latency but reasonably efficient queuing. (Make sure to mark your workers as busy when processing, otherwise you get latency spikes.)

RedShift1 | 18 hours ago
You can do request/reply with MQTT too, you just have to implement more bits yourself, whilst NATS has a nice API that abstracts that away for you.

KaiserPro | 18 hours ago
Oh indeed, and it clusters nicely.

sreekanth850 | 11 hours ago
This is absolutely shocking! Does Kafka do fsync on every write?

akshayshah | 10 hours ago
No. Redpanda has made a lot of noise about this over the years [0], and Confluent's Jack Vanlightly has responded in a fair bit of detail [1].

[0]: https://www.redpanda.com/blog/why-fsync-is-needed-for-data-s...

[1]: https://jack-vanlightly.com/blog/2023/4/24/why-apache-kafka-...

sreekanth850 | 10 hours ago
I think all modern systems, even ScyllaDB, do batched commits rather than fsync on every write; you either need throughput or durability, and both cannot fully coexist. The only thing Redpanda claims is that you have to replicate before fsync so your data is not lost if the node that took the write dies due to a power failure. This is how Scylla and Cassandra work, if I am not wrong: even if a node dies before the batch fsync, replication happens from the memtable before the fsync, so the other nodes provide the durability and data loss is no longer an issue in a replicated setup. Single node? Obviously 100% data loss. But this is the trade-off between a high-TPS system and a durable single-node system. It's about how you want to operate.

lionkor | 7 hours ago
the article says no )

shikhar | 16 hours ago
If you are looking for a serverless alternative to JetStream, check out https://s2.dev

Pros: unlimited streams with the durability of object storage – JetStream can only do a few K topics

Cons: no consumer groups yet, it's on the agenda

embedding-shape | 16 hours ago
Have you tried running Jepsen against it?

shikhar | 15 hours ago
We do deterministic simulation testing

https://s2.dev/blog/dst https://s2.dev/blog/linearizability

We have also adopted Antithesis for a more thorough DST environment, and plan to do more with it.

One day we will engage Kyle for a Jepsen analysis, too. I'm not sure when, though.

embedding-shape | 2 hours ago
I guess that's better than nothing. But now I'm unsure what your original comment was about: if your project doesn't use Jepsen for testing to "prove" it works fine, how is your project relevant to bring up on a submission about a Jepsen test of some other software?

If everyone who was making a database/message queue/whatever distributed system shared their projects on every Jepsen submission, we'd never have any discussions about the actual software in question.

Kinrany | 4 hours ago
NATS claims to use Antithesis as well, so that's nothing, comparatively speaking.

pdimitar | 6 hours ago
I'm not seeing full self-hosting yet, and a "Book a call" link is an instant nope for many techies.

I understand that you need to make money. But you'll have to have a proper self-hosting offering with paid support as well before you're considered, at least by me.

I'm not looking to have even more stuff in the cloud.

williamstein | 8 hours ago
For example, https://github.com/williamstein/nats-bugs/issues/5 links to a discussion I have with them about data loss, where they fundamentally don't understand that their incorrect defaults lead to data loss on the application side. It's weird.

I got very deep into using NATS last year, and then realized the choices it makes for persistence are really surprising. Another horrible example is that server startup time is O(number of streams), with a big constant; this is extremely painful to hit in production.

I ended up implementing from scratch something with the same functionality (for me, NATS server + JetStream), but based on socket.io and SQLite. It works vastly better for my use cases, since socket.io and SQLite are so mature.

PaoloBarbolini | 7 hours ago
There are many things they don't seem to understand about their own product.

https://github.com/nats-io/nats.rs/issues/1253#issuecomment-...

dzonga | 18 hours ago
NATS JetStream vs., say, Redis Streams - which one have people found easier to work with?

ViewTrick1002 | 17 hours ago
When I worked with bounded Redis streams a couple of years ago we had to implement our own backpressure mechanism which was quite tricky to get right.

To implement backpressure without relying on out-of-band signals (distributed systems beware), you need a deep understanding of the entire Redis Streams architecture and how the pending entries list, consumer groups, consumers, etc. work and interact, so as not to lose data by overwriting yourself.

Unbounded would have been fine if we could spill to disk and periodically clean up the data, but this is Redis.

Not sure if that has improved.

ubercore | 4 hours ago
I don't have a direct comment to add, but after working on the fringes of streams a bit, they've worked as advertised. The API surface area for them is full of cases where, as you say, you have to kind of internalize the full architecture to really understand what's going on. It can be a bit overwhelming.

gostsamo | 19 hours ago
Thanks, those reports are always a quiet pleasure to read even if one is a bit far from the domain.

selectodude | 20 hours ago
Definitely thought this was about aviation for a moment.

the__alchemist | 14 hours ago
Yea! I did a double-take, as in addition to Jeppesen, NATS is something I worked with in the past as a UK NOTAM service.

Infiltrator | 19 hours ago
Likewise. It took me a moment to realise Jepsen !== Jeppesen.

crote | 18 hours ago
It's named after Carly Rae Jepsen, of 2012 hit single "Call Me Maybe" fame.

loeg | 15 hours ago
I think Aphyr will insist it isn't actually named after Carly Rae for legal reasons, just a striking coincidence.

selectodude | 19 hours ago
And NATS being the North Atlantic tracks.