What functional programmers get wrong about systems
168 points | 7 hours ago | 27 comments | iankduncan.com | HN
anonymous908213
4 hours ago
This article, despite clocking in at nearly 10,000 words, appears to have been written in only two days. In fact the author has written five engineering articles in the last week, averaging 5,000+ words per article. What a spurt of productivity! Coincidentally, the article happens to be filled with pithy headers and constant, sensationally dramatic contrasts that are out of place in long-form technical writing, but which LLMs are known to spam ad nauseam because they are effective for hooking attention in clickbait rags. Curious, that.
stickfigure
3 hours ago
It would explain why the information density is so low. I got about halfway through hoping for something clever, then started scanning, then just gave up.

10k words for "migrating code and data is a headache". Yeah. Next.

ncgl
4 hours ago
Idk I'm halfway through it and am really resonating with its points AND missing the telltale signs of ai.

So either his prompt is making the writing more palatable, or maybe he's just prolific.

In either case, I would count this as a W.

Ronsenshi
3 hours ago
You know what triggered me as a likely LLM flag: the fonts, the style, the abundant comments.

Makes me think - what if all of this was written by an LLM agent?

TheTaytay
3 hours ago
I came away impressed with this article. Whatever he is prompting might have crossed the uncanny valley.
mpalmer
3 hours ago
I got the impression he tweaked bits of it. It's still extremely apparent though.
jibal
1 hour ago
There's a huge amount of valuable content in the article. People saying silly nonsense like '10k words for "migrating code and data is a headache"' missed out on all of it--which may well be fine for them because not everyone needs to know these things, but this is an excellent article for those who do. And rejecting this article because an LLM may have been involved is irrational ideology.

My biggest problem with the article is the title, which is somewhat the inverse of clickbait--functional programmers are his apparent intended audience but the content is much broader.

ccppurcell
1 hour ago
"This matters because..." - dead giveaway.
mb7733
3 hours ago
How do you know when the articles were written?
stronglikedan
3 hours ago
> appears to have been written in only two days

> the author has written five engineering articles in the last week

considering you can only see when the articles are published, not written, I'd say you're trying too hard...

Ronsenshi
3 hours ago
The author had all these massive articles, all on fairly different topics, ready to go... and then decided to pump out 4 of them in a span of 4 days.

If it looks like a duck...

debatem1
2 hours ago
...then it is clearly AI?

It isn't impossible that it's AI, but assuming writing and publishing happen at the same time would also lead you to conclude that Anne Frank wrote from the afterlife.

Ronsenshi
1 hour ago
Not clearly, but the world we live in makes me highly suspicious that AI was involved to one degree or another.

Plenty of people out there are happy to let AI do most of the work while claiming it as their own.

coolgoose
4 hours ago
What pithy headers?

Even if it is or isn't AI, the content is in fact correct.

There is no 'one' state of your application, unless you literally do maintenance-window deploys, have zero queues, and keep everything in sync.

anonymous908213
3 hours ago
These headers are quite exhausting:

> Message queues are version time capsules

> Event sourcing: the version problem as a way of life

> Temporal and bitemporal databases: time as a first-class citizen

> Semantic drift: the type didn’t change, but the meaning did

> Knowing what’s running changes everything

> What if the old code just kept working?

> The right tools, pointed at the wrong level

Presentation matters as much as content. Particularly if you want somebody to read 10,000 words, making that reading go down smoothly is a good thing to strive for. If this was by chance written by a human who happened to have absorbed LLM-like writing tendencies, I would still find fault in this article for how it is written, and would suggest they spend more time revising it rather than publishing a 5k-10k word technical article daily. Much like writing code, sheer lines written is not the goal; the actual goal is to succinctly and clearly represent your ideas in as refined a form as possible. This article dragged on and on and on, with fatiguing prose, for an idea that can be well expressed without such length.

kbenson
3 hours ago
Perhaps this is just a form of technical writing you're unfamiliar with? Those titles are pretty standard for what I consider good technical writing section headers. LLM writing tendencies are tendencies LLMs have integrated by encountering those tendencies. If your assessment standard for AI is just "common best practices for a subset of good writers", then I think perhaps you need to adjust how you assess to be a bit more nuanced.
anonymous908213
3 hours ago
For some reason people frequently suggest that my problem with LLM writing is that it's too good. Allow me to restate that I find fault with how the article is written, and that I do not in any way perceive this to be good writing. The flaws happen to manifest in a way that I would expect LLM flaws to manifest, which I also do not find to be good writing. I do not find LLMs to have absorbed good technical writing tendencies at all. Instead they absorb sensationalist tendencies that are likely both more common in their dataset and that are likely intentionally selected for in the reinforcement learning phase. Writing which is effective, in the same way that clickbait headlines and Youtube thumbnails are effective, but not good. I felt as though this article was, through its headers and overuse of specific rhetorical devices, constantly trying to grab my attention in that same shallow manner. This gets tiring at length, and good technical writing does not need to engage in such tendencies.

If you disagree and find this to be good writing, you are entitled to your opinion, but nonetheless this is my own feedback on the article.

Folcon
2 hours ago
Can you please share an example of what you perceive to be good writing so we can compare?
anonymous908213
1 hour ago
Sure, I guess? I feel like this is getting rather in the weeds and will not necessarily lead the conversation in any kind of particularly productive direction, but I will nonetheless take the opportunity to promote what I consider to be excellent writing. Dan Luu is a favorite of mine, and offers what I find to be a much more rewarding use of reading time. A sample picked basically at random: https://danluu.com/ftc-google-antitrust/
kbenson
3 hours ago
> For some reason people frequently suggest that my problem with LLM writing is that it's too good.

> I felt as though this article was, through its headers and overuse of specific rhetorical devices, constantly trying to grab my attention in that same shallow manner.

I think perhaps you're quick to assess a certain type of writing, which many see as done quite well and in a way that's approachable and good at retaining interest, as AI. Perhaps you just don't like this type of writing that many do, AI tries to emulate it, and you're keying on specific aspects of both the original and the emulation; because you don't appreciate either, it's hard for you to discern between them? Or maybe there is no difference between the AI and non-AI articles that utilize these devices, and it's just your dislike of them which colors your view?

I, for one, found the article fairly approachable and easy to read given the somewhat niche content and that it was half survey of the current state of our ability to handle change in systems like these. Then again, I barely pay any attention to section titles. I couldn't even remember reading the ones you presented. Perhaps I've trained myself to see them just as section separators.

In any case, nothing in this stuck out as AI generated to me, and if it was, it was well enough done that I don't feel I wasted any time reading it.

voidhorse
2 hours ago
I am a technical writer. This article is not good technical writing.

Good technical writing allows you to get to and understand the point in a minimum of time, has a clear and obvious structure, and organizes concepts so that their key relationships are readily apparent. In my opinion this article achieves none of these things. Its thesis is also confused and misleading in a very basic way: the relationship between functional programming philosophy and distributed systems design is far more aligned than it suggests, and it sets up a false dichotomy of FP versus systems when the real distinction is just one of different levels of design. One could write the exact same slop article about what OOP "gets wrong" about systems. Low-level programming paradigms and techniques are in fact about structuring programs, not systems, and systems design is largely up to designers. The thesis is basically "why don't these pragmatic program-level techniques help me design systems at scale?", or in other words, "why don't all these hammering techniques help me design a house?"

kbenson
1 hour ago
I would only loosely categorize this as technical writing, depending on how you categorize technical writing. It seems much more a survey of problems and discussion piece, with notes about projects making inroads on the problem. It's definitely not a "this is how you solve this problem, and these are the clear steps to do so" type of article. Maybe that's some of the disconnect in how we view it. If I was hoping that this communicated a clear procedure or how to accomplish something, I would be disappointed. I don't think that was their intention.

I came away with some additional understanding of the problem, and thinking there are various nascent techniques to address this problem, none of them entirely sufficient, but that it's being worked on from multiple directions. I'm not sure the article was aiming for more than that.

jibal
1 hour ago
I'm a highly literate reader and writer of technical topics, and there are a lot of bad technical writers who think they aren't bad. Except perhaps for the title, which is way too narrow, the article is excellent writing about a technical topic (which is quite different from technical writing). But then, I actually read it, so I know that he doesn't talk about a dichotomy between FP and systems but rather between single programs and systems, and he explicitly says that his points aren't restricted to FP; it's just that because FP addresses the single-program issues so well, FP programmers are particularly prone to missing the problem.
jibal
1 hour ago
The article is smooth reading and technically excellent.
akoboldfrying
3 hours ago
If this was written by AI, then AI is now capable of writing insightful long-form technical posts.
anon291
3 hours ago
meh... I have tons of articles that I write and leave hanging only to pick up later.
valenterry
1 hour ago
> Static types, algebraic data types, making illegal states unrepresentable: the functional programming tradition has developed extraordinary tools for reasoning about programs

Looks like the term "functional programming" has been watered down so much that now it is as useful as OOP: not at all.

Look, what matters is pure functional programming. It's about referential transparency, which means managing effects and being able to reason about code much the way you can in math. Static typing is very nice but orthogonal; ADTs and making illegal states unrepresentable are good things, but also orthogonal.

thmpp
5 hours ago
The challenges mentioned in the article are challenges known to every larger distributed system. They are not challenges solved, or to be solved, by functional programming.

But the solutions and tools functional programming provides help you get higher verifiability, fewer side effects, and more compile-time checks of the code in the deployment unit. That other challenges exist is fully correct. But without a functional approach, you have those challenges plus all the others.

So while FP is not a one-size-fits-all solution to distributed systems challenges, it does help solve a subset of them at the single-system level.

zozbot234
3 hours ago
> They are not challenges solved or to be solved by functional programming.

These challenges can be solved by the usual tools of FP, but this requires each version of the system to be explicitly aware of the data schema varieties used by all earlier versions that are still in use. Then it's a matter of interpreting earlier-versioned data correctly, and translating data to earlier schema varieties whenever they may have to be interpreted by earlier versions of the code.

(It may also be helpful to introduce minimally revised releases of the earlier code that simply add some forward compatibility for dealing with later-released schemas, while broadly keeping all other behavior the same to avoid unwanted breakage. These approaches are not too hard to implement.)
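For concreteness, a minimal Haskell sketch of that approach; the type and field names are invented for illustration, not taken from the article:

  {-# LANGUAGE LambdaCase #-}
  -- Each schema version is its own type; stored data is tagged with its
  -- version and upgraded on read.
  data UserV1 = UserV1 { v1Name :: String }
  data UserV2 = UserV2 { v2Name :: String, v2Email :: Maybe String }

  -- The storage representation admits every version still in use.
  data StoredUser = StoredV1 UserV1 | StoredV2 UserV2

  -- Interpreting earlier-versioned data is a total function into the
  -- latest schema; missing fields get explicit defaults.
  upgrade :: StoredUser -> UserV2
  upgrade = \case
    StoredV1 (UserV1 n) -> UserV2 n Nothing
    StoredV2 u          -> u

  -- Translating back down for readers still on the old code loses
  -- information, which the Maybe makes visible.
  downgrade :: UserV2 -> UserV1
  downgrade (UserV2 n _) = UserV1 n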

voidhorse
2 hours ago
This. The thesis statement is so confused that I cannot tell if it's deliberate clickbait or whether the author is legitimately that confused about the distinction between program-level language features and systems design, two closely related but really distinct things.
lmm
3 hours ago
This all sounds very clever, but in practice no, most of the problems are code problems and if you do functional programming you really can avoid getting paged at 3AM because everything broke. (Even if you don't solve the deployment problem the article talks about - you're not deploying at 3AM!). It's like those 400IQ articles about how Rust won't fix all your security problems because it can't catch semantic errors, whereas meanwhile in the real world 70% of vulnerabilities are basic memory safety issues.

And doing the things the article suggests would be mostly useless, because the problem is always the boundary between your neat little world and the outside. You can't solve the semantic drift problem by being more careful about which versions of the code you allow to coexist, because the vast majority of semantic drift problems happen when end users (or third parties you integrate with) change their mind about what one of the fields in their/your data model means. There are actually bitemporal database systems that will version the code as well as the data, that allow you to compute what you thought the value was last Thursday with the code you were running last Thursday ("bank python"). It solves some problems, but not as many as you might think.

mh2266
38 minutes ago
> It's like those 400IQ articles about how Rust won't fix all your security problems because it can't catch semantic errors, whereas meanwhile in the real world 70% of vulnerabilities are basic memory safety issues.

even setting aside the memory safety stuff, having sum types and compiler-enforced error handling is also not nothing...

troad
5 hours ago
> Static types, algebraic data types, making illegal states unrepresentable: the functional programming tradition has developed extraordinary tools for reasoning about programs.

But none of these things are functional programming? This is more the tradition of 'expressive static types' than it is of FP.

What about Lisp, Racket, Scheme, Clojure, Erlang / Elixir...

esafak
5 hours ago
I think the more relevant functional programming quote is

> The immutability of the log is the entire value proposition.

troad
5 hours ago
Agreed, but the article begins with the previous quote, and is entitled "what functional programmers get wrong", so I feel like there's some preliminary assumptions being made about FP that warrant examining.

In practice, much of the article seems to be about the problems introduced by rigid typing. Your quote, for instance, is used in the context of reading old logs using a typed schema if that schema changes. But that's a non-issue in the FP languages mentioned above since they tend towards the use of unstructured data (maps, lists) and lambdas. Conversely, reading state from old, schema-incompatible logs might be an issue in something like Java or C++, which certainly are not FP languages as the term is usually understood.

So overall, not really an FP issue at all, and yet the article is called "what functional programmers get wrong". The author's points might be very valid for his version of FP, but his version of FP seems to be 'FP in the ML tradition, with types so rigid you might want to consult a doctor after four hours'.

yakshaving_jgt
1 hour ago
> In practice, much of the article seems to be about the problems introduced by rigid typing … But that's a non-issue in the FP languages mentioned above since they tend towards the use of unstructured data

Haskell doesn’t have this problem. None of the “rigid typing” languages have this problem.

You are in complete control of how strictly or how leniently you parse values. If you want to derive a parser for a domain-specific type, you can. If you want to write one, you can. If you want to parse values into more generic types, you can.
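For instance, a minimal sketch assuming the aeson package; the type and its fields are invented for illustration:

  {-# LANGUAGE OverloadedStrings #-}
  import Data.Aeson
  import Data.Aeson.Types (Parser)

  data User = User { name :: String, email :: String }

  -- Strict: reject payloads missing either field.
  strictUser :: Value -> Parser User
  strictUser = withObject "User" $ \o ->
    User <$> o .: "name" <*> o .: "email"

  -- Lenient: tolerate old payloads that predate the email field.
  lenientUser :: Value -> Parser User
  lenientUser = withObject "User" $ \o ->
    User <$> o .: "name" <*> (o .:? "email" .!= "unknown@example.com")

Same type, two parsers; the strictness lives in the parser you choose, not in the type system.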

This is one of those fundamental misunderstandings that too many programmers have, and it seems like it’ll never die.

tombert
3 hours ago
I feel like Erlang is a functional language that's honest about systems, not just a program. You are almost required to write distributed apps with Erlang, just because it's not weird to spin up thousands of Erlang processes, and you don't have any real way of knowing the order they're going to run so you have to build your applications around async messaging.

It makes me a little sad that Erlang/Elixir is kind of the only platform that does this though...

roblh
1 hour ago
My experience with elixir, as a scrub who spends every day at work writing javascript, is pretty in line with that. The language forces you to work that way, and you spend half your time just architecting your supervision tree. But the language itself is so easy to write business logic in that it takes half as long as it would in another language. So it works out to the same total time investment but the return is so much higher cause your program is better and more predictable and has scaling for free.
threethirtytwo
2 hours ago
>It makes me a little sad that Erlang/Elixir is kind of the only platform that does this though...

It's not. These are called green threads, and under the hood they're not really threads. It's literally the same thing as async/await in Python and nodejs.

It's just different perspectives on the same concept of concurrency. Go and Erlang make the concurrency implicit, while in nodejs it's explicit.

threethirtytwo
2 hours ago
Erlang and Elixir have a huge hole, and that hole is no static checks.
voidhorse
2 hours ago
Goroutines don't force it but I find working with them equally pleasant.
egorelik
4 hours ago
I went into this expecting it to be a criticism in the context of low-level systems, and was a bit surprised when it ended up being about distributed systems. The mismatch the author is describing, I think, is not really about functional programmers so much as about everyone not working directly on modern distributed or web software. For being upfront about all the ways distributed programming is different, this is actually one of the best intros I have seen for the non-distributed programmer on what distributed programming is really like.

I still want to believe that future programming languages will be capable of tackling these issues, but exploring the design space will require being cognizant of those issues in the first place.

stingraycharles
4 hours ago
It’s about a specific branch of functional programming that values an almost mathematical rigor, though (like Haskell).

I don't think the author's criticism applies to LISP-style functional programming, which is much more accommodating to embracing chaos.

epgui
59 minutes ago
Yeah no, that criticism of Haskell is nonsense.
stingraycharles
31 minutes ago
It wasn’t intended as criticism of Haskell at all, I love that language. I have a whole bunch of open source Haskell projects under my name.

It was intended as a criticism of the article, that its whole assumption “FP == making invalid states impossible to represent” is incorrect.

I recognize that it’s very much possible to embrace chaos with Haskell, and I should probably have worded that better.

yakshaving_jgt
1 hour ago
Utterly ridiculous.

In Haskell you can “embrace chaos” exactly as much as you desire. It’s a general purpose programming language.

agentultra
3 hours ago
I often recommend developers learn and use at least one model-checking language for this purpose. They're really good at helping us reason about global properties of systems.

This isn't unique to Haskell or functional programming at all. I've seen programmers from JavaScript to C++ suffer the same fate: tied to the boundaries of a single process, they try to verify the correctness of their systems informally, by writing documents and holding long meetings, or by convincing themselves through prototyping. It's only a matter of time before they find the error in production.

Hopefully it's of no real consequence. The error gets fixed and it's a week or two of extra work. But sometimes it triggers an outage that causes SLOs to overshoot and SLAs to trigger, which costs the company money. That outage could potentially have been avoided.

But it's tough getting developers to see the benefit in model checking or any kind of formal methods. The hubris is often too great. It's not just functional programmers! Although it surprised me, when I was working with Haskell developers, that so many were resistant to the idea.

(If one has experience and understands functional programming, learning TLA+ should be rather straightforward.)

agentultra
3 hours ago
I also took a stab at the versioning challenge back in 2019 and wrote a library with a friend in a fevered evening for type-safe migrations in Haskell [0].

In practice it’s a bit tricky to use because it doesn’t enforce encoding the schema identifier in the serialized payload. Oops.

But between that, automated typing from examples, and refinement it might be possible to develop a system that can manage to validate the serialized data against the parser and have the serialization data appear in the type. At that point you could query the whole system.

[0] https://hackage.haskell.org/packages/tag/library
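For what it's worth, a rough aeson-style sketch of the missing piece described above, forcing the schema identifier into the serialized payload and dispatching on it when decoding; the types are invented and this is not the library's actual API:

  {-# LANGUAGE OverloadedStrings #-}
  import Data.Aeson

  data Doc = DocV1 { body :: String }
           | DocV2 { body :: String, author :: String }

  instance ToJSON Doc where
    toJSON (DocV1 b)   = object ["schema" .= (1 :: Int), "body" .= b]
    toJSON (DocV2 b a) = object ["schema" .= (2 :: Int), "body" .= b, "author" .= a]

  instance FromJSON Doc where
    parseJSON = withObject "Doc" $ \o -> do
      v <- o .: "schema"
      case (v :: Int) of
        1 -> DocV1 <$> o .: "body"
        2 -> DocV2 <$> o .: "body" <*> o .: "author"
        _ -> fail ("unknown schema version: " ++ show v)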

Aperocky
4 hours ago
An awesome article (if only we removed the "functional programmers" framing; I feel they are unfairly called out). It hit close to home on operations and pulled up a lot of (painful) memories.

Fundamentally, all your systems become distributed systems after a certain scale, and accuracy at the local level is usually not where systemic risks come from. A distributed system of fully accurate and correct local programs can still implode or explode. Migration, compatibility, and system stability remain surprisingly dependent on engineer experience to this day.

threethirtytwo
4 hours ago
It's true. But static, proof-based checks can happen across system boundaries; it's just that for historical and stupid reasons we fail to apply them there.

First, put all your services in a monorepo and have it all build as one unit under CI. That's a static check across the entire system. When you do a polyrepo you are opening up a mathematical hole where two repos can be mismatched.

Second, don't use databases. Databases don't type check and aren't compatible with your functional code written on servers.

The first one is stupid... almost everyone just makes this mistake, and most of the time, unless you were there in the beginning to enforce a monorepo, you have to live with someone deploying a breaking change that affects a repo somewhere else. I've been in places where they simply didn't understand the hole with polyrepos and went with them anyway, despite me diagramming the issue on a whiteboard (it's that pervasive). Polyrepos exist for people who like organizing things with names and don't understand static safety.

And it gets worse. So many people don't understand that this problem only grows over time. The more repos you have in a polyrepo setup, the more brittle everything becomes, and they don't even understand why.

The second one is also stupid, but you had to be there in the very, very beginning to have stopped it. The concept of databases should have been designed from inception such that any query you run is compiled against the database and thus statically checked. This could easily have been done but wasn't, and now the entire World Wide Web just has to make do with testing. Hello? Type checking in application code but not in query code? Why?

That's like half the problems with distributed systems completely solved. And these problems exist purely because of raw stupidity. Stuff like race conditions and deadlocks is solvable too, but much harder. Those issues are obvious, though. What's not obvious are the aforementioned problems, which almost nobody talks about even though they are completely solvable.
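To illustrate the monolanguage point with a toy Haskell sketch (invented names; typed-SQL libraries like persistent, esqueleto, and beam pursue this idea against real databases):

  -- If the table row is an ordinary type and a query is an ordinary
  -- function, a typo is a compile error, not a production incident.
  data User = User { userId :: Int, userName :: String, userEmail :: String }

  -- "SELECT user_email FROM users WHERE user_id > 0", as a checked function.
  activeEmails :: [User] -> [String]
  activeEmails = map userEmail . filter ((> 0) . userId)
  -- Writing `userEmial` here fails to compile; the same typo inside an
  -- SQL string fails at runtime.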

blandflakes
3 hours ago
> First put all your services in a monorepo have it all build as one under CI. That’s a static check across the entire system.

There are definitely benefits to this approach. My coworkers do fall into the trap of assuming that all the services will be deployed simultaneously, which is not something you can really guarantee (especially if you have to roll one of them back at some point), so the monorepo approach gives them confidence in some breaking changes that it shouldn't (like adding a new required field).

threethirtytwo
3 hours ago
I mean you talk about it as if it's a "benefit"

I'm talking about it in the same sense as the "benefit" of using typescript over javascript. Not just a "benefit" but it's the obvious path, the obvious better way.

Everything about monorepos versus polyrepos is basically just a debate about opinions and styles and preferences. But most people don't understand... the monorepo is definitively better. Don't think of it as a benefit: it makes that style of error literally impossible.

blandflakes
3 hours ago
Well, no, it doesn't. A monorepo does nothing to prevent you from making breaking changes, it just stops you from making changes that don't compile/test. You still have to understand that services aren't deploying as an atomic unit and make sure that your network calls are forward and backward compatible.
threethirtytwo
3 hours ago
I never said it stops you from making ALL breaking changes. But it makes a whole class of very common breaking changes impossible. That is a definitive benefit. Monorepo means far fewer errors, polyrepo means more; every other difference between the two is a debatable opinion, but this one is definitive.

>You still have to understand that services aren't deploying as an atomic unit and make sure that your network calls are forward and backward compatible.

The window between the start of a deploy and the end of a successful deploy isn't solved. But a monodeploy eliminates an entire class of errors outside the boundary of an atomic deploy. Think about what's inside that boundary and what's outside of it. How long does a deploy take? An hour? How long are you not deploying?

That's the key: static checking can't fix everything, and a monodeploy isn't a full guarantee of safety, but it does guarantee the impossibility of a huge class of errors in the interim time between successful monodeploys.

kbenson
4 hours ago
> Second don’t use databases. Databases don’t type check and aren’t compatible with your functional code written on servers.

That isn't very useful by itself. What's your suggested alternative that aligns with your advice of "don't"? How does it deal with destructive changes to data (e.g. a table drop)?

threethirtytwo
3 hours ago
There are no alternatives. My point is the whole concept was designed with flaws from the beginning.

>How does it deal with destructive changes to data (e.g. a table drop)?

How does type checking deal with this? What? I'm not talking about that. I'm talking about how something as simple as a typo in your SQL query can bring your system down, absent testing or a giant ORM that's synced with your database.

I'm not saying distributed systems are completely solved. I'm saying a huge portion of the problems exist because of preventable flaws. Why talk about the things that can't really be easily solved instead of the things that can?

kbenson
3 hours ago
Oh, I thought you were speaking more to the topic and content of the article in question, which goes to great lengths to describe the sorts of problems that are much, much harder to catch than simple compiling of queries and checking them against the database, or the message store.

Even if you were to reduce the database to a simple API, the question then remains how do you make sure to version it along with the other portions of the system that utilize it to prevent problems. The point of the article seems to be to point out that while this is a much harder problem (which I think you are categorizing as "things that can't really be easily solved"), there are actually solutions being developed in different areas that can be utilized, and it surveys many of them.

threethirtytwo
3 hours ago
>Oh, I thought you were speaking more to the topic and content of the article in question, which goes to great lengths to describe the sorts of problems that are much, much harder to catch than simple compiling of queries and checking them against the database, or the message store

Right. But we haven't even got square one solved, which is the easy stuff. That's my point.

>Even if you were to reduce the database to a simple API, the question then remains how do you make sure to version it along with the other portions of the system that utilize it to prevent problems.

I said monorepo and monodeploys in this thread. But you need to take it even further than that. Have your monorepo be written in a MONOLANGUAGE: no application language + SQL, just one language to rule them all. Boom. Then the static check is pervasive. That's a huge class of preventable mistakes that no longer exists, because the type that represents your table can never be out of sync.

I know it's not "practical", but that's not my point. My point is that a huge portion of the problems with "systems" are obviously solvable, with obvious solutions; it's just that I'm too lazy to write a production-grade database from scratch, sorry guys.

kbenson
1 hour ago
> I said monorepo and monodeploys in this thread

And how does that help when you are dealing with schema changes that need to be rolled out across AWS, your local DB, and a Kafka cluster? The whole point of this article was how to approach the problem when the system contains components that make a monorepo, and what it provides, infeasible or impossible.

> I know it's not "practical" but that's not my point. My point is that there's a huge portion of problems with "systems" that are literally obviously solvable and with obvious solutions it's just I'm too lazy to write a production grade database from scratch, sorry guys.

The article talks about database solutions that help with this problem.

I'm uncertain how to interpret your responses in light of the article, when they seem to be ignoring most of what the article is about, which is solving exactly these problems you are talking about. Is your position that we shouldn't look for solutions to the harder problems because some people aren't even using the solutions to the easy problems?

threethirtytwo
32 minutes ago
The article is about coping mechanisms for a world where we already accepted fragmented systems: polyrepos, heterogeneous languages, independently versioned databases, queues, infra, and time-skewed deployments. Given that world, yes, you need sophisticated techniques to survive partial failure, temporal mismatch, and evolution over time.

That is not what I’m arguing against.

My point is more fundamental: we deliberately designed away static safety at the foundation, and then act surprised that “systems problems” exist.

Before Kafka versioning, schema migration strategies, backward compatibility patterns, or temporal reasoning even enter the picture, we already punched a hole:

Polyrepos break global static checking by construction.

Databases are untyped relative to application code.

SQL is strings, not programs.

Deployments are allowed to diverge by default.

That entire class of failure is optional, not inherent.

When I say “we haven’t solved square one,” I’m saying: we skipped enforcing whole-system invariants, then rebranded the fallout as unavoidable distributed systems complexity.

So when you say “the article already offers solutions,” you’re misunderstanding what kind of solutions those are. They are mitigations for a world that already gave up on static guarantees, not solutions to the root design mistake.

I’m not claiming my position is practical to retrofit today. I’m claiming a huge portion of what we now call “hard systems problems” only exist because we normalized avoidable architectural holes decades ago.

You’re discussing how to live in the house after the foundation cracked.

I’m pointing out the crack was optional and we poured the concrete anyway.

I'm telling you this now so you are no longer uncertain, and are utterly clear about what I am saying and what my position is. If anything is still unclear, please point out exactly what, because the phrase "The article talks about database solutions that help with this problem" shows you missed the point. I am not talking about solutions that help with the problem; I am talking about solutions that make a lot of these problems non-existent within reality as we know it.

theLiminator
4 hours ago
> The second one is also stupid, but you had to be there in the very, very beginning to have stopped it. The concept of databases should have been designed from inception such that any query you run is compiled against the database and thus statically checked. This could easily have been done but wasn't, and now we all just make do with testing.

This is a good point, I've never really thought about this carefully before, but that makes so much sense.

threethirtytwo
3 hours ago
Yeah it's the stupidest thing but nobody even thinks about this. Everyone I talk to thinks SQL is the greatest thing ever. I mean it's the greatest because there's no obviously superior alternative.
akoboldfrying
3 hours ago
> First put all your services in a monorepo have it all build as one under CI. That’s a static check across the entire system.

That helps but is insufficient, since the set of concurrently deployed artifact versions can be different than any set of artifact versions seen by CI -- most obviously when two versions of the same artifact are actively deployed at the same time. It also appears to rule out the possibility of ever integrating with other systems (e.g., now you need to build your own Stripe-equivalent in a subdir of your monorepo).

> Second don’t use databases.

So you want to reimplement your own PostgreSQL-equivalent in another monorepo subdir too? I don't understand how opting not to use modern RDBMSes is practical. IIUC you're proposing implementing a DB using compiled queries that use the same types as the consuming application -- I can see the type-safety benefits, but this immediately restricts consumers to the same language or environment (e.g., JVM) that the DB was implemented in.

threethirtytwo
3 hours ago
>That helps but is insufficient, since the set of concurrently deployed artifact versions can be different than any set of artifact versions seen by CI

Simple, although I only mentioned repos should be mono, I should've also said deployment should be mono as well. I thought that was a given.

>So you want to reimplement your own PostgreSQL-equivalent in another monorepo subdir too?

I'm too lazy to do this, but in general I want an artifact that is built. All SQL queries need to be compiled and built, and the database runs that artifact. And of course all of this is part of a monorepo that is monodeployed.

>I don't understand how opting not to use modern RDBMSes is practical.

It's not practical. My point is there are some really stupid, obvious flaws we live with because the status quo is the only practical option.

blandflakes
3 hours ago
> Simple, although I only mentioned repos should be mono, I should've also said deployment should be mono as well. I thought that was a given.

Deploying your service graph as one atomic unit is not a given, and not necessarily even the best idea - you need to be able to roll back an individual service unless you have very small changes between versions, which means that even if they were rolled out atomically, you still run the risk of mixed versions sets.

threethirtytwo
2 hours ago
>Deploying your service graph as one atomic unit is not a given,

It's not a given because you didn't make it a given.

>and not necessarily even the best idea - you need to be able to roll back an individual service unless you have very small changes between versions, which means that even if they were rolled out atomically, you still run the risk of mixed versions sets.

It is the best idea. This should be the standard. And nothing prevents you from rolling back an individual service. You can still do that. And you can still do individual deploys too. But these are just for patch ups.

When you roll back an individual service your entire system is no longer in a valid state. It's in an interim state of repair. You need to fix your changes in the monorepo and monodeploy again. A successful monodeploy ensures that the finished deploy is devoid of a common class of errors.

Monodeploy should be the gold standard, with individual deploys and rollbacks reserved for emergencies.

wavemode
3 hours ago
> If you have a web application with more than one server, you have a distributed system.

Even if you have a web application with only one server, and an embedded database, and no other components, you still have a distributed system. The browser is part of that system.

zug_zug
4 hours ago
What did I just read? I liked the bits explaining the basics of the difficulties of versioning data, well known points but explained in a simple way. But then it just kept going without any clear central thesis.

Was the central thesis supposed to be that it's impossible for any tools to ever represent complex migrating systems with well-defined data? Because that's not true. Or that it's not practical or cost-effective to do so? Or was there a central point?

mpalmer
2 hours ago
You read a series of solid bullet points fleshed out semi-competently using AI and given a slight human touch. It should be at most a quarter of this length.
cfiggers
3 hours ago
I really enjoyed this article—I meant to just read the first few paragraphs and skim the rest. I ended up reading every word.

The core thesis sticks with me: existing tooling that helps programmers think about and enforce correctness (linters, compilers, type systems, test suites) operates primarily (and often exclusively) on one snapshot of a project, as it exists at a particular point in time. This makes those tools, as powerful as they can be, unable to help us think about and enforce correctness across horizons that are not visible from the standpoint of a single project at a single point in time (systems distributed across time, across network, across version history).

I feel like the issue of semantic drift is a bigger topic that probably has roots beyond systems architecture/computer engineering.

Also, I found the writing style very clear. It helpfully repeats the core ideas, but (IMHO, this is subjective) doesn't do so ad nauseam.

I'm interested in reading other writing from the same author.

lock1
1 hour ago

> This makes those tools, as powerful as they can be, unable to help us think about and enforce correctness across horizons that are not visible from the standpoint of a single project at a single point in time (systems distributed across time, across network, across version history).

Eh?

I don't think there's anything preventing you from constructing guardrails, with the type system and tests, that enforce correctness across versions.

I'm not buying the "unable to help us think about" part. I'd argue that Rust's `Option`/`Result`/sum types plus exhaustive match/switch are very valuable for proper error handling. You could define your own error type and exhaustively handle each case gracefully, with the bonus that your compiler is now also able to check it.
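The same idea in Haskell, since that's the thread's lingua franca; a minimal sketch with invented names:

  -- A closed error type plus exhaustive pattern matching; compiling with
  -- -Wincomplete-patterns turns a forgotten case into a build warning.
  data FetchError
    = NotFound
    | Timeout Int        -- seconds waited
    | BadPayload String  -- parser message

  describe :: FetchError -> String
  describe NotFound       = "no such record"
  describe (Timeout s)    = "timed out after " ++ show s ++ "s"
  describe (BadPayload m) = "undecodable payload: " ++ m
  -- Adding a constructor for a new failure mode forces every handler to
  -- acknowledge it before the code builds cleanly.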

epgui
58 minutes ago
The author is basically ChatGPT.
epgui
1 hour ago
This falls super flat for me, because the LLM that wrote this spends all of its time seemingly attacking a straw man.
w10-1
4 hours ago
This is really, really good as an overview of system-level issues. I write only to encourage others to read it.

Ideas that helped me situate many of the issues we struggled with at various companies:

- distinguishing programs (the code as designed) from (distributed) systems (the whole mess, including maintenance and migration, as deployed)

- versioning code (reversibly) and data (irreversibly)

- walk-through of software solutions for distributed systems, and their blind spots wrt data versioning; temporal databases

- semantic drift: schema-less change

- Limitations of type systems for managing invariants

- co-evolving code and data; running old code on old data

The real benefit of systems thinking is recognizing whether an issue is unsolvable and/or manageable (with the current approach, or warrants a technology upgrade or directional shift). It's like O(n) evaluation of algorithms, not for space/time but for issue tractability.

qouteall
4 hours ago
Normally the database and message queue are decoupled from the backend service. This decoupling makes management simpler and makes cross-language things easier.

But if the type system needs to cover all these components, then they start coupling again.

Coupling is not necessarily a bad thing as long as it gives a good developer UX. If the database is tightly coupled with the programming language, then it looks like an ORM, but better. And it can probably also reduce CRUD boilerplate, N+1 issues, etc.

Relatedly, there is SpacetimeDB, which makes the backend run within the database; the backend code is highly coupled with SpacetimeDB's own API.

josh2600
59 minutes ago
There’s so much in this article where I look at it and I’m like, is this ai slop?

I’m beginning to really appreciate short articles with a few bullet point takeaways.

With respect to version control across systems, when you get into serious stuff where mistakes are measured in lives and/or three commas, there’s just a lot more simplicity in the design of those systems than most people think.

Really big systems often have very simple design principles at their core which are echoed throughout the topology.

In secure code like the stuff Signal uses, having a hash of the code that is attested by a network of servers and chained back to self-signed identities on the client is the only way to go.

Here’s the hash of the code that’s supposed to be running on my server, here’s the proof that I verified it with all of the hardware and software tools at my disposal from the server, and here’s that hash and attestation embedded into the app on your phone or laptop that’s connecting to my box.

If there’s an easier way to get some semblance of “my device is talking to the right code on the right box,” please enlighten me.
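For the flavor of it, a very rough Haskell sketch of just the pinning step, assuming the cryptonite package; real attestation chains involve signatures and hardware roots of trust, not a bare hash:

  import Crypto.Hash (SHA256 (..), hashWith)
  import Data.ByteString (ByteString)

  -- The digest the client app ships with, embedded at build time
  -- (hypothetical value).
  pinnedDigest :: String
  pinnedDigest = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

  -- Does the code the server claims to be running match the pin?
  matchesPin :: ByteString -> Bool
  matchesPin code = show (hashWith SHA256 code) == pinnedDigest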

mbo
5 hours ago
This is an absolutely wonderful article. I helped build a system very similar to the one quoted below:

> A deploy pipeline that queries your orchestrator for running image tags, checks your migration history against the schema registry, diffs your GraphQL schema against collected client operations, and runs Buf’s compatibility checks: this is buildable today, with off-the-shelf parts.

that was largely successful at eliminating interservice compatibility errors, but it felt like we were scrambling around in a dark and dusty corner of software engineering that not many people really cared about. Good to see that there's a body of academic work, which I was not entirely aware of, looking into this.

The article successfully identifies that, yes, there are ways, with sufficient effort, to statically verify that a schema change is (mostly) safe or unsafe before deployment. But even with that determination, a lot of IDLs still make it very hard to evolve types! The compatibility checker will successfully identify that strengthening a type from `optional` to `required` isn't backwards compatible: _now what_. There isn't a safe pathway for schema evolution, and you need to reach for something like Cambria (https://www.inkandswitch.com/cambria/, cited in the article) to handle the evolution.

I've only seen one IDL try to do this properly by baking evolution semantics directly into its type model: typical (https://github.com/stepchowfun/typical), with its `asymmetric` type label, which makes a field required in the constructor but not the deserialiser. I would like to see these "asymmetric" types find their way into IDLs like Protocol Buffers so that their compatibility checkers, like Buf's `buf breaking` (https://buf.build/docs/reference/cli/buf/breaking/), can formally reason about them. I feel like there is a rich design space of asymmetric types that are poorly described (enum values that can only be compared against but never constructed, paired validations, fallback values sent over the wire, etc.).
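A hand-rolled Haskell sketch of what an asymmetric field amounts to (invented names; this is not typical's actual mechanism): required on the construct/write side, defaulted on the read side.

  data Profile = Profile { profileName :: String, profileLocale :: String }

  -- Constructors must supply the new field...
  mkProfile :: String -> String -> Profile
  mkProfile = Profile

  -- ...but the deserialiser accepts old payloads without it.
  readProfile :: Maybe String -> Maybe String -> Maybe Profile
  readProfile mName mLocale = do
    n <- mName                                -- name was always required
    pure (Profile n (maybe "en" id mLocale))  -- locale defaults when absent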

Another one that I think makes a pretty good attempt is the config language CUE and its type system (https://cuelang.org/docs/concept/the-logic-of-cue/), which allows you to do first-class version compatibility modelling. But I have yet to investigate whether CUE is fit for purpose for modelling compatibility over the wire / between service deploys.

cryptonector
3 hours ago
Terrible article. Nowhere in its litany of what one has to do to build systems that work does it say how functional programming is incompatible with any of that, or even how the mindset of the functional programmer is incompatible with any of that.

The formula for this article was: truism about X followed by unrelated stuff about Y that the reader is supposed to come away thinking X is incompatible with.

mpalmer
2 hours ago
I'm very puzzled at the positive comments it's getting. It's insanely, discourteously long for the number of distinct ideas in it. There are banal, LLM-ish smirking quips, very little personality, and a lot of repetition.

There are certainly no personal anecdotes about difficult problems, which would go a long way toward showing the author actually knows their stuff.

Krei-se
5 hours ago
The version problem is solved well in FP by catching the old function and translating it cleanly. This ofc also works with data representations.

I'm not sure what the point of the post is. Minimizing side effects in your code (keeping functions pure) is what gives you the flexibility he says FP misses?!

platinumrad
5 hours ago
If your gut response is "here's a way to make the code better" then, yeah, you missed the entire point of the post.
Krei-se
5 hours ago
I was hoping for discussion and got a zero-content knee-jerk response
hippo22
5 hours ago
The point of the article is that the system is not representable in code so all the tools FP provides are useless in systems programming.
Krei-se
5 hours ago
You, my friend, need more Milewski in your life, and should let go of the idea that code and data are different things.
tadfisher
5 hours ago
In this case, we don't have good data representations for code which produces or consumes structured data, where the structure of that data changes over time. We have some pieces figured out for certain domains, such as database schemas, but we do not have a "theory-of-everything" for enforcing types/structure across all domains of our systems. The domain addressed by GraphQL, HTTP APIs, is another instance of this same problem.

There are experimental systems like Unison that keep old versions of your code alive as data, which are fascinating solutions to the underlying problem.

Krei-se
5 hours ago
Did you just state that FP cannot solve the problem because some protocol won't allow you to specify types in that manner?
hippo22
4 hours ago
Did you read the article? The issue isn’t code vs. data, it’s code vs. system state. The system can be in states not model-able by code, e.g. versions 1 and 2 of a piece of software running concurrently. Other examples I can think of are things like nodes going down, database consistency, etc.
akoboldfrying
3 hours ago
Outstanding piece. It articulates the actually fundamental correctness problem of most software development:

> “given the actual set of deployments that exist right now, is it safe to add this one?”

This problem has been right in front of our faces for decades, but I haven't ever seen it laid out like this, or fully realised it myself. Which is surprising, given the obvious utility of types for program-level correctness.

I think the next big leap forward in software reliability will be the development/integration of both the tooling to accomplish this, and the cultural norms to use them. Automating this (as far as possible -- they do note that semantic drift is immune to automatic detection) will be as big a deal as the concept of version control for code, or CI/CD.

There's lots here that I need to dig into further.

sakesun
4 hours ago
Very thoughtful and well written article.
voidhorse
3 hours ago
What? Distributed systems 101 is to write workers that execute stateless, pure functions, to manage state centrally (basically monads in yet another guise, fan-in/fan-out), and to use functional operations to merge and transform data (map/reduce).

There are a lot of reasons people don't succeed in systems thinking or in writing good distributed architectures, but functional programming is not one of them. If anything, it's the inability to think functionally, leaking state and establishing brittle dependencies absolutely everywhere, that scuppers systems. Yes, FP has tons of techniques that aid good design at lower levels too; that doesn't mean it hampers you at higher levels. In fact, much of what this article puts forward is a false dichotomy. None of the lower-level FP principles and techniques, like strong and expressive type systems, are in conflict with the much more general principles of system design covered in the article (and they actually help you realize those principles: think version skew is bad in Haskell? Good luck figuring out wtf is even happening in a tangled JavaScript async-promises monstrosity undergoing version skew).

Also: slop, gross. At least edit some of that empty AI verbiage out of the final essay. It has the character and charm of a wet rag and adds zero information.

foxes
5 hours ago
Are there any other libraries / research / etc. that take a more functional approach to solving these sorts of problems?

For db stuff: what if we flipped the way we write schemas around, so that a schema is something you derive at the current state, rather than something you start by writing and then generate migrations from? Then you could reason over all of it rather than over a particular snapshot.

AlotOfReading
5 hours ago
You might be interested in the datomic model [0], where you never remove data and all previous views of the database remain accessible.

[0] https://docs.datomic.com/datomic-overview.html#information-m...

sgarland
5 hours ago
I thought MongoDB was the worst thing I’d ever heard of. Today, I learned there is a new champion. “Let us deliberately lean into EAV, and eschew types entirely.” What?!
AlotOfReading
4 hours ago
I haven't used datomic specifically, but the datalog model it implements is honestly a pretty good way to think about things. The worst part about learning it has been realizing how many systems out there are just worse reimplementations of the same idea.

Gameplay tags for example [0], and quite a lot of SQL.

[0] https://dev.epicgames.com/documentation/en-us/unreal-engine/...

anon291
4 hours ago
Datomic is not SQL, but it is not Mongo either. Mongo is completely unprincipled. SQL is halfway principled. Datomic and Datalog are actually completely principled.
sgarland
4 hours ago
What do you mean by “principled?”
anon291
4 hours ago
The semantics are simple and complete. SQL is based on a complete system (relational algebra), but its null handling results in degenerate cases.
Krei-se
5 hours ago
That's actually how it's done! You accept all schemas in your db and default-value or retrofit around old interfaces to keep compatibility. You never really write directly, ofc, so maybe the author is confused as to how FP handles this. For a category this is just another morphism, be it a simple schema or a more complicated function; it's just many ins and outs plugged together.

So when you call writeXY your caller has absolutely no need to know what actually happens. Catching and modifying old versions is just another morphism. You can even keep the layout and just accept version and payload as input.

foxes
5 hours ago
I use a lot of Elixir and Rust. `mix compile` does not necessarily reason across migrations for the app, with what limited type checking it does in 1.19.

You define an Ecto schema and write your own migrations, but the state of the app seems tied to that kind of snapshot of the schema. It's not like you have some lens into the overall chain of all the migrations together.

throwup238
5 hours ago
How do you derive constraints without a schema?

The value of a schema in a db like Postgres isn't in the description of the storage format or indexing, but in the invariants the database can enforce on the data coming in. That includes everything from uniqueness and foreign key constraints to complex functions that can pull in dozens of tables to decide whether a row is valid or should be rejected. How do you derive declarative rules from a Turing-complete language built into the database?

foxes
5 hours ago
The current state must satisfy all constraints?

E.g. some table Users -> you start with `add user_id`, `add org_id`, `remove org_id`, so then the current state is `Users{ user_id }`. But you trust the compiler to derive that, and then when you want to do something with Users you have to scope into it, or tell it how to handle different steps in that chain.

I'm not saying this isn't equivalent at the end of the day, just asking if anything surfaces it this way, or makes it more ergonomic.
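A tiny Haskell sketch of the idea, with invented names, matching the example above:

  -- The schema as a fold over the migration log, so the "current state"
  -- is derived rather than declared.
  data Migration = AddColumn String | RemoveColumn String

  type Schema = [String]  -- just column names, for the sketch

  step :: Schema -> Migration -> Schema
  step cols (AddColumn c)    = cols ++ [c]
  step cols (RemoveColumn c) = filter (/= c) cols

  currentSchema :: [Migration] -> Schema
  currentSchema = foldl step []

  -- currentSchema [AddColumn "user_id", AddColumn "org_id", RemoveColumn "org_id"]
  --   == ["user_id"]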

throwup238
4 hours ago
Ah I think I misunderstood. That sounds like event sourcing? As far as I know that has never been implemented at a language level.
anon291
4 hours ago
The issue is our current databases were not designed for proper schema resolution.

The correct answer here is that a database ought to be modeled as a data type. SQL treats data-type changes separately from the value transform. To say this is backwards is an understatement.

The actual answer is that the schema update is a typed function from schema1 to schema2. The type / schema of the db is carried in the types of the function. But the actual data is moved via the function computation.

Keeping multiple databases around is honestly a potentially good use of homotopy type theory extended with support for partial / one-way equivalences.
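A minimal Haskell sketch of that shape, with invented types:

  -- Each migration carries its before/after schema in its type, and the
  -- full upgrade is ordinary composition.
  newtype DbV1 = DbV1 [(String, String)]        -- name, email
  newtype DbV2 = DbV2 [(String, String, Bool)]  -- name, email, verified
  newtype DbV3 = DbV3 [(String, Maybe String)]  -- email becomes optional

  migrate1to2 :: DbV1 -> DbV2
  migrate1to2 (DbV1 rows) = DbV2 [ (n, e, False) | (n, e) <- rows ]

  migrate2to3 :: DbV2 -> DbV3
  migrate2to3 (DbV2 rows) = DbV3 [ (n, Just e) | (n, e, _) <- rows ]

  -- The composite type checks only if the chain actually lines up.
  migrateToLatest :: DbV1 -> DbV3
  migrateToLatest = migrate2to3 . migrate1to2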

anon291
4 hours ago
With all due respect to Duncan... The reason this is an issue at all is that the people who architected the internet were not thinking functionally. If they had been, they would have come up with something like Nix.

But yes, interfacing between formally proven code and the wild wild west is always an issue. This is similar to when a person from a country with a strong legal system suddenly finds themselves subject to the rule of the jungle.

The better question to ask is why we chose to build a chaotic jungle, and how we get out of it.

diebillionaires
6 hours ago
This website is incredibly unfriendly to scroll on Firefox mobile. I could barely read it; it jumps every time I scroll down.
the__alchemist
4 hours ago
Desktop too.
srik
5 hours ago
reader mode
nineteen999
4 hours ago
OMG. Sweet relief. This is one of the greatest things I've read in ages.

An FP programmer admitting they live in ivory towers. I never thought I'd see the day.

I love it. HN needs 10 times more of this.

This person has grown up - the rest of you? Get a grip.
