You don’t pay the cost up front, you pay it 2 years in when the business logic has changed 5 times, half your events are “v2” or “DeprecatedFooHappened”, and you realize your “facts” about the past were actually leaky snapshots of whatever the code thought was true at the time. The hard part isn’t appending events, it’s deciding what not to encode into them so you can change your mind later without a migration horror show.
There’s also a quiet tradeoff here: you’re swapping “schema complexity + migrations” for “event model complexity + replay semantics”. In a bank-like domain that genuinely needs an audit trail, that trade is usually worth it. In a CRUD-ish SaaS where the real requirement is “be able to see who edited this record”, a well-designed append-only table with explicit revisions gets you 80% of the value at 20% of the operational and cognitive overhead.
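To make that concrete, here's roughly what I mean by an explicit-revisions table, as a Go/Postgres sketch (the table and column names are made up):

```go
package revisions

import (
	"context"
	"database/sql"
)

// Hypothetical schema for an append-only revisions table:
//
//   CREATE TABLE document_revisions (
//       doc_id    bigint      NOT NULL,
//       revision  int         NOT NULL,
//       edited_by text        NOT NULL,
//       body      jsonb       NOT NULL,
//       edited_at timestamptz NOT NULL DEFAULT now(),
//       PRIMARY KEY (doc_id, revision)
//   );
//
// "Who edited this record" is then a plain SELECT; no replay machinery.

// SaveRevision appends a new revision; it never updates rows in place.
// Concurrent writers to the same doc will conflict on the primary key
// and should retry, rather than silently overwriting each other.
func SaveRevision(ctx context.Context, db *sql.DB, docID int64, editedBy string, body []byte) error {
	_, err := db.ExecContext(ctx, `
		INSERT INTO document_revisions (doc_id, revision, edited_by, body)
		VALUES ($1,
		        COALESCE((SELECT max(revision) FROM document_revisions WHERE doc_id = $1), 0) + 1,
		        $2, $3)`,
		docID, editedBy, body)
	return err
}
```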
Using Postgres as the event store is interesting because it pushes against the myth that you need a specialized log store from day one. But it also exposes the other myth: that event sourcing is primarily a technical choice. It isn’t. It’s a commitment to treat “how the state got here” as a first-class part of the domain, and that cultural/organizational shift is usually harder than wiring up SaveEvents and a Kafka projection.
I've already refrained, multiple times, from introducing event sourcing to tackle weird dependencies, just by juxtaposing the amount of discipline the team actually showed in getting to the current state against the discipline required to keep an event-sourced solution going.
Immutable history sounds like a good idea, until you're writing code to support every event schema you ever published. And all the edge cases that inevitably creates.
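Concretely, that support code ends up looking something like this (a Go sketch with invented event names); every schema you ever published keeps a branch alive:

```go
package events

import "encoding/json"

// Current shape of the event. Older published schemas never go away:
// anything that replays the stream has to cope with all of them.
type FooHappenedV2 struct {
	FooID  string `json:"foo_id"`
	Amount int64  `json:"amount_cents"` // v1 stored float dollars
}

type fooHappenedV1 struct {
	FooID   string  `json:"foo_id"`
	Dollars float64 `json:"dollars"`
}

// upcast turns any stored version of the event into the current one.
// Each published schema adds a case here, forever.
func upcast(eventType string, payload []byte) (FooHappenedV2, error) {
	switch eventType {
	case "FooHappened.v1":
		var v1 fooHappenedV1
		if err := json.Unmarshal(payload, &v1); err != nil {
			return FooHappenedV2{}, err
		}
		// Sketch only: real money conversion needs careful rounding.
		return FooHappenedV2{FooID: v1.FooID, Amount: int64(v1.Dollars * 100)}, nil
	default: // "FooHappened.v2"
		var v2 FooHappenedV2
		err := json.Unmarshal(payload, &v2)
		return v2, err
	}
}
```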
CQRS sounds good, until you just want to read a value that you know has been written.
Event sourcing probably has some legitimate applications, but I'm convinced the hype around it is predominantly just excellent marketing of an inappropriate technology by folks and companies who host queueing technologies (like Kafka).
This is for you and the author apparently: Practicing CQRS does not mean you're splitting up databases. CQRS is simply using different models for reading and writing. That's it. Nothing about different databases or projections or event sourcing.
This quote from the article is just flat out false:
> CQRS introduces eventual consistency between write and read models:
No it doesn't. Eventual consistency is a design decision made independent of using CQRS. Just because CQRS might make it easier to split, it doesn't in any way have an opinion on whether you should or not.
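To illustrate: here's a sketch of CQRS with fully synchronous reads — separate read and write models, updated in one Postgres transaction, no eventual consistency anywhere (names are illustrative):

```go
package orders

import (
	"context"
	"database/sql"
)

// Write model: a command with its own types and validation rules.
type PlaceOrder struct {
	OrderID string
	Total   int64
}

// Read model: a denormalized row shaped for queries, nothing more.
type OrderSummary struct {
	OrderID string
	Total   int64
	Status  string
}

// Handle updates both models in ONE transaction. Different models for
// reading and writing (that's the segregation), yet readers see the
// new summary the instant we commit.
func Handle(ctx context.Context, db *sql.DB, cmd PlaceOrder) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback()

	if _, err := tx.ExecContext(ctx,
		`INSERT INTO orders (order_id, total_cents) VALUES ($1, $2)`,
		cmd.OrderID, cmd.Total); err != nil {
		return err
	}
	if _, err := tx.ExecContext(ctx,
		`INSERT INTO order_summaries (order_id, total_cents, status)
		 VALUES ($1, $2, 'placed')`,
		cmd.OrderID, cmd.Total); err != nil {
		return err
	}
	return tx.Commit()
}
```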
> by folks and companies who host queueing technologies (like Kafka).
Well that's good because Kafka isn't an event-sourcing technology and shouldn't be used as one.
On the other hand, CQRS + the single-writer pattern on their own can be a massive performance win, because it allows for efficient batching of views and updates. It's also much simpler to implement than a full-blown event sourcing system.
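Roughly, the shape is a single goroutine that owns the view and drains a channel in batches. A sketch, with made-up names and bounds:

```go
package writer

import "time"

type Update struct{ Key, Value string }

// RunWriter is the single writer: it is the only goroutine that
// mutates the view, so the write path needs no locks. It accumulates
// updates from the channel and applies them as one batch, which is
// where the efficiency comes from.
func RunWriter(updates <-chan Update, flush func(batch []Update)) {
	ticker := time.NewTicker(10 * time.Millisecond) // latency bound
	defer ticker.Stop()

	var batch []Update
	for {
		select {
		case u, ok := <-updates:
			if !ok {
				if len(batch) > 0 {
					flush(batch)
				}
				return
			}
			batch = append(batch, u)
			if len(batch) >= 1000 { // batch size bound
				flush(batch)
				batch = nil
			}
		case <-ticker.C:
			if len(batch) > 0 {
				flush(batch)
				batch = nil
			}
		}
	}
}
```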
Almost all CQRS designs have some read view or projection built off consuming the write side.
If this is not the case, and you're just writing your "read models" in the write path, where is the 'S' in CQRS (S for segregation)? You wouldn't have a CQRS system here. You'd just be writing read-optimised data.
- Read side is a SELECT on a Postgres view
It's a real stretch to be describing a Postgres view as CQRS.
You can scale them independently in that you can control the rate at which your views are read and the batch size of your updates.
The whole big win with CQRS is it allows for very efficient batching.
That's EXACTLY what CQRS is.
I think you might struggle to understand CQRS.
This is flat out false.
Or segregate even.
Technically, sure, you can bolt an append-only table on Postgres and call it a day. But the hard part is living with the consequences of “events are facts” when your product manager changes their mind, your domain model evolves, or a third team starts depending on your event stream as an integration API.
Events stop being an internal persistence detail and become a public contract. Now versioning, schema evolution, and “we’ll just rename this field” turn into distributed change management problems. Your infra is suddenly the easy bit compared to designing events that are stable, expressive, and not leaking implementation details.
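Concretely, the moment a third team consumes your stream, every internal rename has to be absorbed by a translation shim, something like this (Go, names invented):

```go
package contract

// Internal event: free to change with the domain model.
type orderShipped struct {
	OrderID string
	SKUs    []string
}

// Published event: a frozen contract the third team consumes. Once
// it's out there, "we'll just rename this field" means versioning
// this type and coordinating every consumer.
type OrderShippedV1 struct {
	OrderID string   `json:"order_id"`
	Items   []string `json:"items"` // renamed internally long ago; wire name is fixed
}

// publish is the shim between the two. Every internal refactor now
// has to be absorbed here instead of leaking to consumers.
func publish(e orderShipped) OrderShippedV1 {
	return OrderShippedV1{OrderID: e.OrderID, Items: e.SKUs}
}
```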
And once people discover they can rebuild projections “any time”, they start treating projections as disposable, which works right up until you have a 500M event stream and a 6 hour replay window that makes every migration a scheduled outage.
Event sourcing shines when the business actually cares about history (finance, compliance, complex workflows) and you’re willing to invest in modeling and ops. Used as a generic CRUD replacement it’s a complexity bomb with a 12-18 month fuse.
Flip it on its head.
Would those domains be better off with simple CRUD? Did the accountants make a wrong turn when they switched from simple balances to single-entry ledgers?
> Events stop being an internal persistence detail and become a public contract.
You can't blame event sourcing for people not doing it correctly, though.
The events aren't a public contract and shouldn't be treated as such. Treating them that way will result in issues.
> Used as a generic CRUD replacement it’s a complexity bomb with a 12-18 month fuse.
This is true, but all you're really saying is "use the right tool for the right job".
You really can. If there's a technology or approach which the majority of people apply incorrectly, that's a problem with that technology or approach.
You can blame the endless stream of people who jump into these threads with hot takes about technologies they neither understand nor have experience with.
How many event sourced systems have you built? If the answer is 0, I'd have a real hard time understanding how you can even make that judgement.
In fact, half of this thread can't even be bothered to look up the definition of CQRS, so the idea that "Storing facts" is to blame for people abusing it is a bit of a stretch, no?
This thread appears to have stories from several people who have, though, and they have credible criticisms:
https://news.ycombinator.com/item?id=45962656#46014546
https://news.ycombinator.com/item?id=45962656#46013851
https://news.ycombinator.com/item?id=45962656#46014050
What's your response to the common theme that event sourcing systems are difficult to maintain in the face of constantly changing product requirements?
Perhaps not, but you can criticise articles like this that suggest that CQRS will solve many problems for you, without touching on _any_ of its difficulties or downsides, or the mistakes that many people end up making when implementing these systems.
It's not complicated or complex.
You get similar levels of historical insight, with the disadvantage that to replay things you might need to put together a little CLI or script to infer commands from the audit log (if you do that a lot, you can build a small library that makes those one-off tools quite simple to write; I've done that). But you avoid all the many well-documented footguns that come from trying to run an event-sourced system in a typical evolving business.
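For the curious, the one-off tools I mean look roughly like this (a Go sketch; the audit-row shape and command names are invented):

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// AuditRow is one line of the audit log (shape is illustrative).
type AuditRow struct {
	Entity string          `json:"entity"`
	Action string          `json:"action"`
	After  json.RawMessage `json:"after"`
}

// inferCommand maps an audit row back to the command that would
// reproduce it. Tools like this stay small because they only need
// to cover the actions you're actually replaying.
func inferCommand(r AuditRow) (string, error) {
	switch r.Action {
	case "update":
		return fmt.Sprintf("UpdateEntity(%s, %s)", r.Entity, r.After), nil
	case "delete":
		return fmt.Sprintf("DeleteEntity(%s)", r.Entity), nil
	default:
		return "", fmt.Errorf("unhandled action %q", r.Action)
	}
}

func main() {
	dec := json.NewDecoder(os.Stdin) // audit log as JSON lines on stdin
	for {
		var row AuditRow
		if err := dec.Decode(&row); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cmd, err := inferCommand(row)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Println(cmd)
	}
}
```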
Too much dry code for my taste and not many remarks/explanations. That's not bad, because for prose I'd recommend Martin Fowler's articles on event processing, but it _could be better_ ;-)
WRT the tech itself: personally I think Go is one of the best languages for event sourcing today (with Haskell maybe being second). I've been doing complexity analysis for ES in various languages, and the Go implementation was mostly free (due to Event being an interface and not a concrete structure).
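To give a flavour of what I mean, the core shape is roughly this (a sketch, not a library):

```go
package es

// Event is just a marker interface, so any struct can be an event
// without a shared base type or codegen.
type Event interface {
	EventName() string
}

type AccountOpened struct{ Owner string }
type MoneyDeposited struct{ Cents int64 }

func (AccountOpened) EventName() string  { return "AccountOpened" }
func (MoneyDeposited) EventName() string { return "MoneyDeposited" }

// Account is the aggregate state rebuilt by folding events.
type Account struct {
	Open    bool
	Balance int64
}

// Apply folds one event into the state; a type switch stands in for
// the visitor/pattern-matching machinery other languages need.
func (a *Account) Apply(e Event) {
	switch e := e.(type) {
	case AccountOpened:
		a.Open = true
	case MoneyDeposited:
		a.Balance += e.Cents
	}
}
```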
Can you explain this? Go has a very limited type system.
You really don't want your streams/aggs to come close to being that large.