Event Sourcing in Go: From Zero to Production (skoredin.pro)
45 points by tdom 4 days ago | 7 comments
quapster
1 hour ago
The funny thing about event sourcing is that most teams adopt it for the sexy parts (time travel, Kafka, sagas), but the thing that actually determines whether it survives contact with production is discipline around modeling and versioning.

You don’t pay the cost up front, you pay it 2 years in when the business logic has changed 5 times, half your events are “v2” or “DeprecatedFooHappened”, and you realize your “facts” about the past were actually leaky snapshots of whatever the code thought was true at the time. The hard part isn’t appending events, it’s deciding what not to encode into them so you can change your mind later without a migration horror show.

There’s also a quiet tradeoff here: you’re swapping “schema complexity + migrations” for “event model complexity + replay semantics”. In a bank-like domain that genuinely needs an audit trail, that trade is usually worth it. In a CRUD-ish SaaS where the real requirement is “be able to see who edited this record”, a well-designed append-only table with explicit revisions gets you 80% of the value at 20% of the operational and cognitive overhead.
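
To make the CRUD-ish alternative concrete, here's a minimal sketch of what I mean by an append-only table with explicit revisions (table and column names are invented, not from the article):

    package revisions

    import (
        "context"
        "database/sql"
    )

    // Hypothetical revisions table: every edit appends a new row; the current
    // state of a document is simply its latest revision.
    const schema = `
    CREATE TABLE document_revisions (
        document_id BIGINT      NOT NULL,
        revision    INT         NOT NULL,
        edited_by   TEXT        NOT NULL,
        edited_at   TIMESTAMPTZ NOT NULL DEFAULT now(),
        body        JSONB       NOT NULL,
        PRIMARY KEY (document_id, revision)
    );`

    // AppendRevision writes the next revision; nothing is ever updated in
    // place, so "who edited this record and when" falls out of the table for
    // free. Two writers racing for the same revision number are caught by the
    // primary key and can simply retry.
    func AppendRevision(ctx context.Context, db *sql.DB, docID int64, editedBy, bodyJSON string) error {
        _, err := db.ExecContext(ctx, `
            INSERT INTO document_revisions (document_id, revision, edited_by, body)
            SELECT $1, COALESCE(MAX(revision), 0) + 1, $2, $3
            FROM document_revisions
            WHERE document_id = $1`,
            docID, editedBy, bodyJSON)
        return err
    }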

Using Postgres as the event store is interesting because it pushes against the myth that you need a specialized log store from day one. But it also exposes the other myth: that event sourcing is primarily a technical choice. It isn’t. It’s a commitment to treat “how the state got here” as a first-class part of the domain, and that cultural/organizational shift is usually harder than wiring up SaveEvents and a Kafka projection.

pdhborges
16 minutes ago
I would upvote this comment more if I could.

I have already refrained from introducing event sourcing to tackle weird dependencies multiple times, just by juxtaposing the amount of discipline the team has shown on the way to the current state against the discipline required to keep an event-sourced solution going.

simonw
1 hour ago
This comment just made it finally click for me why event sourcing sounds so good on paper but rarely seems to work out for real-world projects: it expects a level of correct-design-up-front which isn't realistic for most teams.
zknill
3 hours ago
Anyone who's built, run, evolved, and operated any reasonably sized event sourced system will know it's a total nightmare.

Immutable history sounds like a good idea, until you're writing code to support every event schema you ever published. And all the edge cases that inevitably creates.
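
For anyone who hasn't lived it, this is the kind of code that means, sketched with invented event names and versions:

    package events

    import (
        "encoding/json"
        "fmt"
    )

    // Current shape of the event.
    type AccountOpened struct {
        AccountID string `json:"account_id"`
        Owner     string `json:"owner"`
        Currency  string `json:"currency"` // added in v2
    }

    // UpcastAccountOpened turns any historical version of the event into the
    // current struct. Every schema you ever published needs a branch here,
    // forever, because the old rows can't be rewritten.
    func UpcastAccountOpened(version int, payload []byte) (AccountOpened, error) {
        var e AccountOpened
        switch version {
        case 1:
            // v1 had no currency; pick a default, since the "fact" never
            // recorded one.
            var v1 struct {
                AccountID string `json:"account_id"`
                Owner     string `json:"owner"`
            }
            if err := json.Unmarshal(payload, &v1); err != nil {
                return e, err
            }
            e = AccountOpened{AccountID: v1.AccountID, Owner: v1.Owner, Currency: "USD"}
        case 2:
            if err := json.Unmarshal(payload, &e); err != nil {
                return e, err
            }
        default:
            return e, fmt.Errorf("unknown AccountOpened version %d", version)
        }
        return e, nil
    }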

CQRS sounds good, until you just want to read a value that you know has been written.

Event sourcing probably has some legitimate applications, but I'm convinced the hype around it is predominantly just excellent marketing of an inappropriate technology by folks and companies who host queueing technologies (like Kafka).

anthonylevine
1 hour ago
> CQRS sounds good, until you just want to read a value that you know has been written.

This is for you and the author apparently: Practicing CQRS does not mean you're splitting up databases. CQRS is simply using different models for reading and writing. That's it. Nothing about different databases or projections or event sourcing.

This quote from the article is just flat out false:

> CQRS introduces eventual consistency between write and read models:

No, it doesn't. Eventual consistency is a design decision made independently of using CQRS. Just because CQRS might make it easier to split doesn't mean it has any opinion on whether you should or not.
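
A sketch of what I mean (names and tables are made up): separate write and read models, one database, one transaction, nothing eventually consistent anywhere.

    package orders

    import (
        "context"
        "database/sql"
    )

    // Write model: what the command side needs to enforce invariants.
    type PlaceOrder struct {
        OrderID    string
        CustomerID string
        TotalCents int64
    }

    // Read model: what the query side hands to the UI.
    type OrderSummary struct {
        OrderID    string
        TotalCents int64
        Status     string
    }

    // Handle applies the command and updates the read table in the same
    // transaction, so a query issued right after sees the write immediately.
    func Handle(ctx context.Context, db *sql.DB, cmd PlaceOrder) error {
        tx, err := db.BeginTx(ctx, nil)
        if err != nil {
            return err
        }
        defer tx.Rollback()

        if _, err := tx.ExecContext(ctx,
            `INSERT INTO orders (id, customer_id, total_cents) VALUES ($1, $2, $3)`,
            cmd.OrderID, cmd.CustomerID, cmd.TotalCents); err != nil {
            return err
        }
        if _, err := tx.ExecContext(ctx,
            `INSERT INTO order_summaries (order_id, total_cents, status) VALUES ($1, $2, 'placed')`,
            cmd.OrderID, cmd.TotalCents); err != nil {
            return err
        }
        return tx.Commit()
    }

    // The query side reads its own model; still strongly consistent.
    func GetSummary(ctx context.Context, db *sql.DB, orderID string) (OrderSummary, error) {
        var s OrderSummary
        err := db.QueryRowContext(ctx,
            `SELECT order_id, total_cents, status FROM order_summaries WHERE order_id = $1`,
            orderID).Scan(&s.OrderID, &s.TotalCents, &s.Status)
        return s, err
    }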

> by folks and companies who host queueing technologies (like Kafka).

Well that's good because Kafka isn't an event-sourcing technology and shouldn't be used as one.

mrsmrtss
1 hour ago
Yes, I don't know where the misconception that CQRS or Event Sourcing automatically means eventual consistency comes from. We have built, run, evolved, and operated quite a few reasonably sized event sourced systems successfully, and these systems are running to this day without any major incidents. We added eventually consistent projections where performance justified it, fully aware of the implications, but kept most of the system synchronous.
anthonylevine
1 hour ago
I think people lump CQRS, Event Sourcing, and event-driven into a single concept and then use those words interchangeably.
andersmurphy
3 minutes ago
Yup. It's a shame: as amazing as event sourcing is, it does come with complexity.

On the other hand, CQRS + the single writer pattern on their own can be a massive performance win because it allows for efficient batching of views and updates. It's also much simpler to implement than a full-blown event sourcing system.

zknill
1 hour ago
Please explain how you intend to use different models for reading and writing without there being some temporal separation between the two?

Most all CQRS designs have some read view or projection built off consuming the write side.

If this is not the case, and you're just writing your "read models" in the write path, where is the 'S' in CQRS (S for segregation)? You wouldn't have a CQRS system here. You'd just be writing read-optimised data.

azkalam
1 hour ago
- Write side is a Postgres INSERT

- Read side is a SELECT on a Postgres view
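
Something like this, with made-up tables, where the view is the entire "read model":

    package store

    // Write side: one append-only table that the command path INSERTs into.
    const writeSchema = `
    CREATE TABLE payments (
        id      BIGSERIAL PRIMARY KEY,
        account TEXT   NOT NULL,
        amount  BIGINT NOT NULL
    );`

    // Read side: a plain view; the write path never touches it directly.
    const readSchema = `
    CREATE VIEW account_balances AS
    SELECT account, SUM(amount) AS balance
    FROM payments
    GROUP BY account;`

    const insertPayment = `INSERT INTO payments (account, amount) VALUES ($1, $2)`
    const selectBalance = `SELECT balance FROM account_balances WHERE account = $1`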

zknill
1 hour ago
I think you might struggle to "scale the read and write sides independently".

It's a real stretch to describe a Postgres view as CQRS.

andersmurphy
7 minutes ago
SQLite can scale CQRS to 100,000 events per second on a relatively small VPS. That's 10x what the author achieves with Postgres.

You can scale them independently in that you can control the rate at which your views are read and the batch size of your updates.

The whole big win with CQRS is that it allows for very efficient batching.
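
Roughly the shape of it, as a sketch (batch size, tick interval, and table names are arbitrary): one goroutine owns all writes, so appends and view updates can be flushed in large transactions.

    package writer

    import (
        "context"
        "database/sql"
        "time"
    )

    type Event struct {
        Stream  string
        Payload []byte
    }

    // Run is the single writer: it owns every write, so it can batch many
    // event appends and the matching view updates into one transaction.
    func Run(ctx context.Context, db *sql.DB, in <-chan Event) error {
        ticker := time.NewTicker(10 * time.Millisecond)
        defer ticker.Stop()

        var batch []Event
        flush := func() error {
            if len(batch) == 0 {
                return nil
            }
            tx, err := db.BeginTx(ctx, nil)
            if err != nil {
                return err
            }
            defer tx.Rollback()
            for _, e := range batch {
                if _, err := tx.ExecContext(ctx,
                    `INSERT INTO events (stream, payload) VALUES ($1, $2)`,
                    e.Stream, e.Payload); err != nil {
                    return err
                }
                // Any read views you maintain get updated here, in the same tx.
            }
            if err := tx.Commit(); err != nil {
                return err
            }
            batch = batch[:0]
            return nil
        }

        for {
            select {
            case e := <-in:
                batch = append(batch, e)
                if len(batch) >= 1000 {
                    if err := flush(); err != nil {
                        return err
                    }
                }
            case <-ticker.C:
                if err := flush(); err != nil {
                    return err
                }
            case <-ctx.Done():
                return ctx.Err()
            }
        }
    }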

anthonylevine
1 hour ago
Huh?

That's EXACTLY what CQRS is.

I think you might struggle to understand CQRS.

anthonylevine
1 hour ago
> Most all CQRS designs have some read view or projection built off consuming the write side.

This is flat out false.

mrkeen
1 hour ago
> Just because CQRS might make it easier to split

Or segregate even.

fleahunter
2 hours ago
The part people underestimate is how much organizational discipline event sourcing quietly demands.

Technically, sure, you can bolt an append-only table on Postgres and call it a day. But the hard part is living with the consequences of “events are facts” when your product manager changes their mind, your domain model evolves, or a third team starts depending on your event stream as an integration API.

Events stop being an internal persistence detail and become a public contract. Now versioning, schema evolution, and “we’ll just rename this field” turn into distributed change management problems. Your infra is suddenly the easy bit compared to designing events that are stable, expressive, and not leaking implementation details.

And once people discover they can rebuild projections “any time”, they start treating projections as disposable, which works right up until you have a 500M event stream and a 6 hour replay window that makes every migration a scheduled outage.

Event sourcing shines when the business actually cares about history (finance, compliance, complex workflows) and you’re willing to invest in modeling and ops. Used as a generic CRUD replacement it’s a complexity bomb with a 12-18 month fuse.

javier2
58 minutes ago
This. This is also a reason why it's so impressive that Google Docs/Sheets has managed to stay largely the same for so long.
mrkeen
1 hour ago
> Event sourcing shines when the business actually cares about history (finance, compliance, complex workflows)

Flip it on its head.

Would those domains be better off with simple CRUD? Did the accountants make a wrong turn when they switched from simple balances to single-entry ledgers?

mexicocitinluez
2 hours ago
> or a third team starts depending on your event stream as an integration API.

> Events stop being an internal persistence detail and become a public contract.

You can't blame event sourcing for people not doing it correctly, though.

The events aren't a public contract and shouldn't be treated as such. Treating them that way will result in issues.

> Used as a generic CRUD replacement it’s a complexity bomb with a 12-18 month fuse.

This is true, but all you're really saying is "Use the right tool for the right job".

simonw
1 hour ago
> You can't blame event sourcing for people not doing it correctly, though.

You really can. If there's a technology or approach that the majority of people apply incorrectly, that's a problem with that technology or approach.

anthonylevine
50 minutes ago
No you can't.

You can blame the endless number of people who jump into these threads with hot takes about technologies they neither understand nor have experience with.

How many event sourced systems have you built? If the answer is 0, I'd have a real hard time understanding how you can even make that judgement.

In fact, half of this thread can't even be bothered to look up the definition of CQRS, so the idea that "Storing facts" is to blame for people abusing it is a bit of a stretch, no?

simonw
21 minutes ago
I've not run an event sourcing system in production myself.

This thread appears to have stories from several people who have though, and have credible criticisms:

https://news.ycombinator.com/item?id=45962656#46014546

https://news.ycombinator.com/item?id=45962656#46013851

https://news.ycombinator.com/item?id=45962656#46014050

What's your response to the common theme that event sourcing systems are difficult to maintain in the face of constantly changing product requirements?

zknill
1 hour ago
> You can't blame event sourcing for people not doing it correctly, though.

Perhaps not, but you can criticise articles like this that suggest that CQRS will solve many problems for you, without touching on _any_ of its difficulties or downsides, or the mistakes that many people end up making when implementing these systems.

anthonylevine
1 hour ago
CQRS is simply splitting your read and write models. That's it.

It's not complicated or complex.

techn00
10 minutes ago
Good article! You might like my lib https://github.com/DeluxeOwl/chronicle - it covers a lot of event sourcing pains for Go.
liampulles
48 minutes ago
If you are considering event sourcing, run an event/audit log for a while and see if that does not get you most of the way there.

You get similar levels of historical insight, with the disadvantage that to replay things you might need to put together a little CLI or script to infer commands out of the audit log (and if you do that a lot, you can build a small library that makes those one-off tools quite simple - I've done that). But you avoid all the many well-documented footguns that come from trying to run an event-sourced system in a typical evolving business.
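
For illustration, the sort of thing I mean (the schema is made up): an append-only audit table written alongside normal CRUD, plus the small replay loop you'd wrap in a one-off CLI to infer commands back out of it.

    package audit

    import (
        "context"
        "database/sql"
        "time"
    )

    // One row per change, written alongside the normal CRUD update.
    type Entry struct {
        At     time.Time
        Actor  string
        Action string // e.g. "invoice.approved"
        Data   []byte // JSON blob of whatever context you had at the time
    }

    func Append(ctx context.Context, db *sql.DB, e Entry) error {
        _, err := db.ExecContext(ctx,
            `INSERT INTO audit_log (at, actor, action, data) VALUES ($1, $2, $3, $4)`,
            e.At, e.Actor, e.Action, e.Data)
        return err
    }

    // Replay is the "little CLI" part: walk the log in order and hand each
    // entry to a function that infers the command it represents.
    func Replay(ctx context.Context, db *sql.DB, apply func(Entry) error) error {
        rows, err := db.QueryContext(ctx,
            `SELECT at, actor, action, data FROM audit_log ORDER BY at`)
        if err != nil {
            return err
        }
        defer rows.Close()
        for rows.Next() {
            var e Entry
            if err := rows.Scan(&e.At, &e.Actor, &e.Action, &e.Data); err != nil {
                return err
            }
            if err := apply(e); err != nil {
                return err
            }
        }
        return rows.Err()
    }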

xlii
4 hours ago
I'm going to have a word with my ISP. It seems the site's SSL certificate has expired. That's not a good thing, but my ISP decided I'm an idiot and gave me a condescending message about accepting an expired certificate - unacceptable in my book. VPN helped.

Too much dry code for my taste and not many remarks/explanations - that's not bad, because for prose I'd recommend Martin Fowler's articles on event processing, but it _could be better_ ;-)

WRT the tech itself - personally I think Go is one of the best languages to go for Event Sourcing today (with Haskell maybe being second). I've been doing complexity analysis for ES in various languages and the Go implementation was mostly free (due to Event being an interface and not a concrete structure).
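
Roughly what "mostly free" means here, as a toy sketch (not code from the article): the store and replay machinery only see the interface, so adding a new event type is just adding a struct.

    package es

    // Event is the only thing the store and replay machinery know about.
    type Event interface {
        EventName() string
    }

    // Concrete events are plain structs; adding one doesn't touch the store.
    type FundsDeposited struct {
        AccountID string
        Amount    int64
    }

    func (FundsDeposited) EventName() string { return "funds_deposited" }

    type FundsWithdrawn struct {
        AccountID string
        Amount    int64
    }

    func (FundsWithdrawn) EventName() string { return "funds_withdrawn" }

    // Apply folds events into state; this type switch is the only place that
    // needs to know about the concrete event types.
    type Account struct{ Balance int64 }

    func (a *Account) Apply(e Event) {
        switch ev := e.(type) {
        case FundsDeposited:
            a.Balance += ev.Amount
        case FundsWithdrawn:
            a.Balance -= ev.Amount
        }
    }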

azkalam
2 hours ago
> Go is one of the best languages to go for Event Sourcing today

Can you explain this? Go has a very limited type system.

mrsmrtss
3 hours ago
Have you also considered C# for Event Sourcing? We've built many successful ES projects with C# and the awesome Marten library (https://martendb.io/). It's a real productivity multiplier for us.
azkalam
3 hours ago
How does event sourcing handle aggregates that may be larger than memory?
mexicocitinluez
2 hours ago
Smaller aggregates.

You really don't want your streams/aggs to come close to being that large.
