What years of production-grade concurrency teaches us about building AI agents
112 points
15 hours ago
| 15 comments
| georgeguimaraes.com
randomtoast
3 hours ago
[-]
I’ve built fairly large OTP systems in the past, and I think the core claim is directionally right: long-lived, stateful, failure-prone "conversations" map very naturally to Erlang processes plus supervision trees. An agent session is basically a call session with worse latency and more nondeterminism.

That said, a lot of current agent workloads are I/O-bound around external APIs. If 95% of the time is spent waiting on OpenAI or Anthropic, the scheduling model matters less than people think. The BEAM’s preemption and per-process GC shine when you have real contention or CPU-heavy work in the same runtime. Many teams quietly push embeddings, parsing, or model hosting to separate services anyway.

Hot code swapping is genuinely interesting in this context. Updating agent logic without dropping in-flight sessions is non-trivial on most mainstream stacks. In practice, though, many startups are comfortable with draining connections behind a load balancer and calling it a day.

So my take is: if you actually need millions of concurrent, stateful, soft real-time sessions with strong fault isolation, the BEAM is a very sane default. If you are mostly gluing API calls together for a few thousand users, the runtime differences are less decisive than the surrounding tooling and hiring pool.
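
To sketch the shape I mean, here's one GenServer per conversation under a DynamicSupervisor (module names are mine, not from the article):

    # One lightweight process per conversation; a crash takes down only that session.
    defmodule AgentSession do
      use GenServer

      def start_link(session_id) do
        GenServer.start_link(__MODULE__, session_id,
          name: {:via, Registry, {AgentRegistry, session_id}})
      end

      @impl true
      def init(session_id), do: {:ok, %{id: session_id, history: []}}

      @impl true
      def handle_call({:user_message, text}, _from, state) do
        # The model call would go here; a failure hits the supervisor, not other sessions.
        {:reply, :ok, %{state | history: [text | state.history]}}
      end
    end

    # Sessions live under a DynamicSupervisor, with a Registry for lookup by id.
    {:ok, _} = Registry.start_link(keys: :unique, name: AgentRegistry)
    {:ok, sup} = DynamicSupervisor.start_link(strategy: :one_for_one)
    {:ok, _pid} = DynamicSupervisor.start_child(sup, {AgentSession, "session-42"})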

reply
zknill
58 minutes ago
[-]
There are two parts to this article: the scheduler/preemption, and the transport over the network. The article is absolutely right that long-lived request/response HTTP connections with SSE-streamed responses suck.

The article touches very briefly on Phoenix LiveView and WebSockets. I wrote about why chatbots hate page refresh[1], and it's not solved by just swapping to WebSockets. By far the best mechanism is pub/sub, especially when you get multi-user/multi-device support, conversation hand-off, reconnection, history resume, and token compaction basically for free from the transport.
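
To make it concrete, the pub/sub shape looks roughly like this with Phoenix.PubSub (MyApp.PubSub and the topic naming are my stand-ins):

    # Start a PubSub instance (normally this lives in your app's supervision tree).
    {:ok, _} =
      Supervisor.start_link([{Phoenix.PubSub, name: MyApp.PubSub}], strategy: :one_for_one)

    topic = "conversation:42"

    # Every tab, device, or server process interested in the conversation subscribes.
    :ok = Phoenix.PubSub.subscribe(MyApp.PubSub, topic)

    # The agent broadcasts tokens as they arrive; a page refresh just means
    # re-subscribing and replaying stored history, not re-attaching to a dead stream.
    :ok = Phoenix.PubSub.broadcast(MyApp.PubSub, topic, {:token, "Hello"})

    receive do
      {:token, chunk} -> IO.puts("got: #{chunk}")
    end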

1: https://zknill.io/posts/chatbots-worst-enemy-is-page-refresh...

reply
znnajdla
3 hours ago
[-]
Is someone at Anthropic reading my AI chats? I literally came to this conclusion a few weeks ago after studying the right framework for building long-running browser agents. I have zero experience with Elixir, but as soon as I looked at its language constructs it just "clicked" immediately that this is exactly suited to AI agent frameworks. Another problem you solve immediately: distributed deployment without Kubernetes hell.
reply
amelius
1 hour ago
[-]
> Is someone at Anthropic reading my AI chats?

It is not forbidden by their EULA/ToS, I suppose.

reply
quadruple
1 hour ago
[-]
> The BEAM's "let it crash" philosophy takes the opposite approach. Instead of anticipating every failure mode, you write the happy path and let processes crash. The supervisor detects the crash and restarts the process in a clean state. The rest of the system continues unaffected.

Do I want this? If my request fails because the tool doesn't have a DB connection, I want the model to receive information about that error. If the LLM API returns an error because the conversation is too long, I want to run compaction or other context-engineering strategies; I don't want to restart the process just to run into the same thing again. Am I misunderstanding Elixir's advantage here?
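
Concretely, what I want looks more like error values flowing back into the context than restarts, something like this (all names invented):

    defmodule Agent.ToolCall do
      # Expected failures become observations the model can react to;
      # only genuinely unexpected bugs crash the process and hit the supervisor.
      def execute(tool_fun, context) do
        case tool_fun.() do
          {:ok, result} ->
            {:ok, result, context}

          {:error, reason} ->
            observation = %{role: "tool", content: "error: #{inspect(reason)}"}
            {:retry, context ++ [observation]}
        end
      end
    end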

reply
dumpsterdiver
58 minutes ago
[-]
Especially now that those workloads might have something to say about it… e.g. “Why did you make me this way?”
reply
veunes
53 minutes ago
[-]
The "let it crash" concept is perfect for deterministic bugs, but does it work for probabilistic errors?

If an LLM returns garbage, restarting the process (agent) with the same prompt and temperature 0 yields the same garbage. An Erlang supervisor restarts a process in a clean state, and for an agent, "clean state" means lost conversation context.

We don't just need supervision trees; we need semantic supervision trees that can change strategy on restart. The BEAM doesn't give you this out of the box; you still have to code it manually.
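
Hand-rolled, that "change strategy on restart" idea looks something like this (a sketch; every name is invented):

    defmodule SemanticRetry do
      # Each retry changes strategy instead of replaying the exact same call.
      @strategies [
        %{temperature: 0.0},
        %{temperature: 0.7},                 # add sampling noise
        %{temperature: 0.2, compact: true}   # compact the context first
      ]

      def call_with_fallbacks(prompt, validate) do
        Enum.reduce_while(@strategies, {:error, :exhausted}, fn opts, acc ->
          case validate.(llm_call(prompt, opts)) do
            {:ok, result} -> {:halt, {:ok, result}}
            {:error, _} -> {:cont, acc}
          end
        end)
      end

      # Stand-in for the real model call.
      defp llm_call(_prompt, _opts), do: {:error, :stub}
    end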

reply
mccoyb
11 hours ago
[-]
Broadly agree with the author's points, except for this one:

> TypeScript/Node.js: Better concurrency story thanks to the event loop, but still fundamentally single-threaded. Worker threads exist but they're heavyweight OS threads, not 2KB processes. There's no preemptive scheduling: one CPU-bound operation blocks everything.

This cannot be a real objection: ~100% of the time in agent frameworks is spent waiting for the agent to respond or for a tool call to execute. Almost no time is spent in the logic of the framework itself.

Even if you use heavyweight OS threads, I just don't believe this matters.

Now, the other points about hot code swapping ... so true, painfully obvious to those of us who have used Elixir or Erlang.

For instance, OpenClaw: how much easier would "in-place updating" be if the language runtime were designed with that ability in mind in the first place?

reply
znnajdla
2 hours ago
[-]
> Even if you use heavyweight OS threads, I just don't believe this matters.

It matters a lot. How many OS threads can you run on one machine? With Elixir you can easily run thousands of processes without breaking a sweat. But even if you only need a few agents on one machine, OS thread management is a headache if you have any shared state whatsoever (locks, mutexes, etc.). On Unix you can't even reliably kill dependent processes[1]. All of those problems just disappear with Elixir.
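
For anyone who wants to try it, this is the whole demo (note the default VM process limit is 262,144; raise it with the +P flag):

    # 100k concurrent processes spawn in well under a second on a laptop.
    pids =
      for i <- 1..100_000 do
        spawn(fn ->
          receive do
            :ping -> IO.puts("process #{i} says hi")
          end
        end)
      end

    send(Enum.random(pids), :ping)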

[1] https://matklad.github.io/2023/10/11/unix-structured-concurr...

reply
wqaatwt
15 minutes ago
[-]
Presumably, if you can afford to pay for all those tokens, the computational cost should be mostly insignificant?

Spending too much time optimizing away the 1% of extra overhead seems suboptimal.

reply
kibwen
1 hour ago
[-]
> How many OS threads can you run on 1 machine?

Any modern Linux machine should be able to spawn thousands of simultaneous threads without breaking a sweat.

reply
simianwords
7 hours ago
[-]
I don’t see the point of agent frameworks. Other than durability and checkpoints, how do they help me?

Claude Code already works as an agent that calls tools when necessary, so it’s not clear how an abstraction helps here.

I have been really confused by langchain and related tech because they seem so bloated without offering me any clear advantages.

I genuinely would like to know what I’m missing.

reply
veunes
15 minutes ago
[-]
You don't need frameworks for one-off scripts, but in prod you're going to need RAG, proper memory, tools, and orchestration anyway. Without standards you'll just end up writing your own janky framework on top of requests. LangChain is definitely a bloated mess, but it provides structure. The beauty of Elixir is that this structure (OTP) is baked into the language, not duct-taped on the side.
reply
d4rkp4ttern
1 hour ago
[-]
I ran into this question when thinking about the approach for a recent project. Yes, CLI coding tools are good agents for interactive use, but if you are building a product then you do need an agent abstraction.

You could package Claude Code into the product (via agents-sdk or Claude -p) and have it use the API key (with metered billing), but in my case I didn’t find it ergonomic enough for my needs, so I ended up using my own agent framework, Langroid, for this.

https://github.com/langroid/langroid

(No, it’s not based on that similarly named other framework; it’s a clean, minimal, extensible framework with good DX.)

reply
spoiler
3 hours ago
[-]
There are lots of things you could do. Imagine you're making a group chat bot (way more difficult than a 1-1 chat) where people can play social games by giving the LLM game rules. You can have an agent that only manages game state using natural language (controlled by the main LLM). You could have another agent dedicated to remembering the important parts of the conversation while not paying attention to chit-chat. See the sketch below.
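
Something like this, with one plain GenServer per concern (all names and message shapes invented):

    defmodule GameState do
      use GenServer
      def start_link(_), do: GenServer.start_link(__MODULE__, %{rules: []}, name: __MODULE__)
      def init(state), do: {:ok, state}

      # The main LLM drives this agent with natural-language rule updates.
      def handle_cast({:apply_rule, text}, state),
        do: {:noreply, %{state | rules: [text | state.rules]}}
    end

    defmodule Memory do
      use GenServer
      def start_link(_), do: GenServer.start_link(__MODULE__, [], name: __MODULE__)
      def init(notes), do: {:ok, notes}

      # Only messages flagged as important land here; chit-chat never arrives.
      def handle_cast({:remember, note}, notes), do: {:noreply, [note | notes]}
    end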
reply
koakuma-chan
2 hours ago
[-]
100% agreed
reply
mackross
1 hour ago
[-]
I’m a huge Elixir fan, but IMHO it doesn’t solve durable execution out of the box, which is a major problem that often gets swept under the rug by BEAM fanboys. Because ETS and supervision trees don’t play well with deployment-via-restart, you’ve got to write some level of execution state to a relational database or files. You can choose persistent ETS, Mnesia, etc. (which have their own tradeoffs and come with some gnarly data-loss scenarios buried deep in the documentation), but whatever you choose, in my experience you will need to spend a fair amount of time considering how your processes are going to survive restarts.

Alternatively, Oban is nice, but it’s a heavy layer that makes control flow more complex to follow. And yes, you can roll your own hot code deploy, run on persistent VMs/bare metal, and be a true BEAM native, but it’s not easy out of the box and comes with its own set of footguns.

If I’m missing something, I would love for someone to explain how to do things better, as I find this to be a big pain point whenever I pick up Elixir. I want to use the beautiful primitives, but I feel I’m always fighting durable execution in the event of a server restart. I wish a temporal.io client or something with similar guarantees was baked into the lang/frameworks. The sketch below shows the kind of boilerplate I mean.
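
To make it concrete, the minimum you end up hand-writing looks something like this (Store.save/2 and Store.load/1 stand in for whatever persistence layer you picked):

    defmodule DurableSession do
      use GenServer

      def start_link(id), do: GenServer.start_link(__MODULE__, id)

      @impl true
      def init(id) do
        # Rehydrate from storage instead of starting "clean" after a deploy.
        {:ok, Store.load(id) || %{id: id, history: []}}
      end

      @impl true
      def handle_cast({:event, event}, state) do
        new_state = %{state | history: [event | state.history]}
        # Write-through on every meaningful change; terminate/2 is not
        # guaranteed to run on kill -9 or node loss, so it can't be the only hook.
        :ok = Store.save(state.id, new_state)
        {:noreply, new_state}
      end
    end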
reply
veunes
47 minutes ago
[-]
Spot on. The BEAM is great at surviving process crashes, but if the whole cluster goes down or you redeploy, that in-memory state evaporates. It's not magic. For agents that might hang around for days, pure Elixir isn't enough; you still need a persistence layer. The ecosystem is catching up (Oban Pro, FLAME), but in reality we're still building hybrids: fast actors for active chats and a good old DB for history and long-running processes.
reply
codethief
45 minutes ago
[-]
@dang I think the original title is much better (more specific):

> Your Agent Framework Is Just a Bad Clone of Elixir: Concurrency Lessons from Telecom to AI

reply
vinnymac
1 hour ago
[-]
Been using Elixir with agents for over a year. Seemed like an obvious choice to me.

Node is great, but scaling Elixir processes is more so.

reply
nottorp
1 hour ago
[-]
I don't understand. Shouldn't we just task the "AI" agents to improve themselves?

What's that about years of experience? That's obsolete thinking!

reply
manojlds
2 hours ago
[-]
Surely they mean Erlang not Elixir
reply
christophilus
2 hours ago
[-]
Sounds to me like they mean “BEAM” rather than a specific language. But BEAM means Elixir for most newcomers.
reply
cyberpunk
2 hours ago
[-]
Which is a real shame, as if you actually spend some time with both you’ll probably realise Erlang is the nicer language.

Elixir just feels… like it’s a load of pre-compile macros. There’s not even a debugger.

reply
igravious
2 hours ago
[-]
This is addressed at the very top of the article:

   A note on terminology: Throughout this post I refer to "the BEAM." BEAM is
   the virtual machine that runs both Erlang and Elixir code, similar to how the
   JVM runs both Java and Kotlin. Erlang (1986) created the VM and the
   concurrency model. Elixir (2012) is a modern language built on top of it with
   better ergonomics. When I say "BEAM," I mean the runtime and its properties.
   When I say "Elixir," I mean the language we write.
reply
bitwize
9 hours ago
[-]
Ackshually...

Erlang didn't introduce the actor model, any more than Java introduced garbage collection. That model was developed by Hewitt et al. in the 70s, and the Scheme language was developed to investigate it (core insights: actors and lambdas boil down to essentially the same thing, and you really don't need much language to support some really abstract concepts).

Erlang was a fantastic implementation of the actor model for an industrial application, and probably proved out the model's utility for large-scale "real" work more than anything else. That and it being fairly semantically close to Scheme are why I like it.

reply
josevalim
8 hours ago
[-]
The team that built Erlang (Joe, Robert, Mike, and Bjorn) didn't know the actor model was actually a thing. They wanted to build reliable distributed systems and came up with the isolated processes model you find in Erlang today. Eventually (probably when Erlang was open sourced?), folks connected the dots that the actor model was the most accurate description of what was going on!
reply
bitwize
8 hours ago
[-]
Spontaneous evolution of the same idea. It actor-models when it's actor-model time!
reply
fud101
5 hours ago
[-]
This is all well and good, but Elixir with Phoenix and LiveView is super bloated, and you need a real reason to buy into such a monster for something you couldn't do with a simpler stack.
reply
monooso
2 hours ago
[-]
I'm not sure I agree with the "bloated" description, but I will say that I really like Elixir, and really dislike LiveView. Which is a shame, because the latter is pretty inescapable in Elixir world these days.
reply
ipnon
2 hours ago
[-]
Interesting take. I’m an Elixir fanboy because I find LiveView to be very slim. You’re just sending state diffs over a WebSocket and updating the DOM with morphdom. The simplicity is incomparable to the state of the art in JavaScript frameworks, in my humble opinion.
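
The whole thing fits in one module. The canonical counter example (MyAppWeb is a placeholder):

    defmodule MyAppWeb.CounterLive do
      use Phoenix.LiveView

      # Server-side state lives in the socket assigns.
      def mount(_params, _session, socket) do
        {:ok, assign(socket, count: 0)}
      end

      # Events arrive over the WebSocket; only the resulting diff goes back.
      def handle_event("inc", _params, socket) do
        {:noreply, update(socket, :count, &(&1 + 1))}
      end

      def render(assigns) do
        ~H"""
        <button phx-click="inc">Clicked <%= @count %> times</button>
        """
      end
    end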
reply
btreecat
1 hour ago
[-]
That seems more complicated than jQuery
reply
fud101
1 hour ago
[-]
I'd use something like a lightweight Python framework (take your pick) and pair it with htmx. You can run that on low-powered hardware or a cheap VPS. I can't even dev Elixir on my N100 mini PC; it's too demanding. Otherwise, Python with SolidJS or Preact will work perfectly for an SPA.
reply
koakuma-chan
2 hours ago
[-]
You Should Just Use Rust
reply