Process-Based Concurrency: Why Beam and OTP Keep Being Right
31 points
4 hours ago
| 6 comments
| variantsystems.io
| HN
joshsegall
3 hours ago
[-]
I think the practitioner angle is what makes this interesting. Too many BEAM advocacy posts are theoretical.

I would push back on the "shared state with locks vs isolated state with message passing" framing. Both approaches model concurrency as execution that needs coordination. Switching from locks to mailboxes changes the syntax of failure, not the structure. A mailbox is still a shared mutable queue between sender and receiver, and actors still deadlock through circular messages.

reply
IsTom
34 minutes ago
[-]
> actors still deadlock through circular messages

I've rarely seen naked sends/receives in Erlang; you mostly go through OTP behaviours. And if you do use them and get stuck (without an "after" clause), the fact that you can just attach a console to a running system and inspect processes' states makes it much easier to debug.

reply
stingraycharles
53 minutes ago
[-]
Stateless vs stateful concurrency management is very different, though; I can roll back / replay a mail box, while this isn’t possible with shared locks. It’s a much cleaner architecture in general if you want to scale out, but it has more overhead.
reply
mrngm
20 minutes ago
[-]
Related thread from 11 days ago: https://news.ycombinator.com/item?id=47067395 "What years of production-grade concurrency teaches us about building AI agents", 144 points, 51 comments.
reply
baud9600
1 hour ago
[-]
Very interesting. Reading this made me think of occam on the transputer: concurrent lightweight processes, message passing, dedicated memory! I spent some happy years in that world. Perhaps I should look at BEAM and see what work comes along?
reply
rapsey
2 hours ago
[-]
> Backpressure is built in. If a process receives messages faster than it can handle them, the mailbox grows. This is visible and monitorable. You can inspect any process’s mailbox length, set up alerts, and make architectural decisions about it. Contrast this with thread-based systems where overload manifests as increasing latency, deadlocks, or OOM crashes — symptoms that are harder to diagnose and attribute.

Sorry, but this is wrong. A growing mailbox is no kind of backpressure, as any experienced Erlang developer will tell you: doing backpressure properly is a massive pain in Erlang. By default, your system is almost guaranteed to break under load in random places that surprise you.

reply
mnsc
47 minutes ago
[-]
I wonder how much the roots of Erlang are showing now? Telephone calls had a very specific "natural" profile: high but bounded concurrency (number of persons alive), long process lifetime (1 min - hours), few state changes/messages per process (I know nothing of the actual protocol). I could imagine that the agentic scenario matches this somewhat, whereas other scenarios, e.g. HFT, would have a totally different profile, making BEAM a bad choice. But then again, that's just the typical right-tool-for-the-job challenge.
reply
Twisol
2 hours ago
[-]
Yes, this is missing the "pressure" part of "backpressure", where the recipient is able to signal to the producer that they should slow down or stop producing messages. Observability is useful, sure, but it's not the same as backpressure.
reply
IsTom
31 minutes ago
[-]
Sending message to a process has a cost (for purposes of preemption) relative to the current size of receiver's mailbox, so the sender will get preempted earlier. This isn't perfect, but it is something.
reply
librasteve
2 hours ago
[-]
Occam (1982-ish) shared most of BEAM's ideas, but strongly enforced synchronous message passing on both channel output and input … so backpressure was just there in all code. The advantage was that most deadlock conditions fell into the category of "if it can lock, then it will lock", which meant that debugging done at small scale would preemptively resolve issues before scaling up the process/processor count.
reply
baud9600
1 hour ago
[-]
Once you were familiar with occam you could spot deadlocks in code very quickly. It was a productive way to build scaled concurrent systems. At the time we laughed at the idea of using C for the same task.
reply
librasteve
58 minutes ago
[-]
I spreadsheeted out how many T424 die fit per Apple M2 (TSMC 3nm process) - that's about 400,000 CPUs (roughly a 600x600 grid) at say 1 GIPS each - so 400 TIPS per M2 die size. That's for 32-bit integer math - Inmos also had a 16-bit datapath, but these days you would probably up the RAM per CPU (8k, 16k?) and stick with the 32-bit datapath, but add 8- and 16-bit FP support. Happy to help with any VC pitches!
reply
gethly
1 hour ago
[-]
Go is good enough.
reply
socketcluster
1 hour ago
[-]
The Node.js community had figured this out long before BEAM or even Elixir existed.

People tried to introduce threads to Node.js but there was push-back for the very reasons mentioned in this article and so we never got threads.

The JavaScript language communities watch, nod, and go back to work.

reply
pentacent_hq
1 hour ago
[-]
> The Node.js community had figured this out long before BEAM or even Elixir existed.

Work on the BEAM started in the 1990s, over ten years before the first release of Node in 2009.

reply
masklinn
1 hour ago
[-]
And BEAM was the reimplementation of the Erlang runtime, the actual model is part of the language semantics which was pretty stable by the late 80s, just with a Prolog runtime way too slow for production use.
reply
hlieberman
41 minutes ago
[-]
Forget Node.js; _Javascript_ hadn't even been invented yet when Erlang and BEAM first debuted.
reply
leoc
1 hour ago
[-]
You may be thinking of some recent round of publicity for BEAM, but BEAM is a bit older than JavaScript.
reply
worthless-trash
42 minutes ago
[-]
I think the author is trying to be clever to parody what was written in tfa.
reply
xtoilette
1 hour ago
[-]
BEAM predates node js
reply