SBCL Fibers – Lightweight Cooperative Threads
147 points
20 days ago
| 10 comments
| atgreen.github.io
| HN
smallstepforman
20 days ago
[-]
256KB stack per fiber is still insane overhead compared to actors. If we surveyed the programming community, I'd guesstimate that less than 2% of devs even know what the Actor model is, and an even smaller percentage have actually used it in production.

Any program that has at least one concurrent task that runs on a thread (naturally there'll be more than one) is a perfect reason to switch to the Actor programming model.

Even a simple print() function can see a performance boost from running on a 2nd core. There is a lot of background work to print text (parsing font metrics, indexing screen buffers, preparing scene graphs, etc.) and it's really inefficient to block your main application doing all this work while background cores sit idle. Yet most programmers don't know about this performance boost. Sad state of our education and the industry.

reply
my-next-account
20 days ago
[-]
Actors are a model, I have no clue why you're saying that there is a particular memory cost to them on real hardware. To me, you can implement actors using fibers and a postbox.

I've no idea what the majority of programmers know or do not know about, but async logging isn't unknown and is supported by libraries like Log4j.

reply
vitaminCPP
20 days ago
[-]
Yeah, that was also my thought.

I always understood that if you give a thread to each actor you get the "active object" design pattern.

reply
zelphirkalt
20 days ago
[-]
I remember Joe Armstrong saying something like 2kB in his talks, for an Erlang process. That's 1/128 of 256kB.
reply
my-next-account
20 days ago
[-]
2KiB is a peculiar size. Typical page size is 4KiB, and you probably want to allocate two pages: one for the stack and one for a guard page for stack overflow protection. That means a fiber's minimal size ought to be 8KiB.
reply
surajrmal
19 days ago
[-]
You should look into how Go manages goroutines. It is indeed 2KiB stacks by default, without the need for guard pages. They use a different mechanism to detect overflows. Other runtimes can do similar things.
reply
my-next-account
19 days ago
[-]
Aha, yeah. It has the compiler insert stack checks into each function prologue, which compare against the remaining stack size.
reply
atgreen
20 days ago
[-]
256k is just a placeholder for now. The default will get reduced as we get more experience with the draft implementation. The proposal isn't complete yet.
reply
hrmtst93837
20 days ago
[-]
People fixate on stack size, but memory fragmentation is what bites as fiber counts grow, and actors dodge some of that at the cost of more message-passing overhead plus debugging hell once state gets hairy. Atomics or explicit channels cost cycles that never show up in naive benchmarks. If you need a million concurrent 'things' and they are not basically stateless, you're already in Erlang country, and the rest is wishful thinking.
reply
tinco
20 days ago
[-]
What is more expensive, copying the message, or memory fencing it, or do you always need both in concurrent actors? Are you saying the message passing overhead is less than the cost of fragmented memory? I wouldn't have expected that.
reply
hrmtst93837
20 days ago
[-]
Usually both, but they show up in different places.

You need synchronization semantics one way or another. Even in actor systems, "send" is not magic. At minimum you need publication of the message into a mailbox with the right visibility guarantees, which means some combination of atomic ops, cache coherence traffic, and scheduler interaction. If the mailbox is cross-thread, fencing or equivalent ordering costs are part of the deal. Copying is a separate question: some systems copy eagerly, some pass pointers to immutable/refcounted data, some do small-object optimization, some rely on per-process heaps so "copy" is also a GC boundary decision.

The reason people tolerate message passing is that the costs are more legible. You pay per message, but you often avoid shared mutable state, lock convoying, and the weird tail latencies that come from many heaps or stacks aging badly under load. Fragmentation is less about one message being cheaper than one fence. It is more that at very high concurrency, memory layout failures become systemic. A benchmark showing cheap fibers on day one is not very informative if the real service runs for weeks and the allocator starts looking like modern art.

So no, I would not claim actor messaging is generally cheaper than fragmented memory in a local micro sense. I am saying it can be cheaper than the whole failure mode of "millions of stateful concurrent entities plus ad hoc sharing plus optimistic benchmarks." Different comparison.

reply
praptak
20 days ago
[-]
The stack size is just mmapped address space. It only needs backing memory for the pages actually touched by the stack.
reply
20k
20 days ago
[-]
Fibers are primarily for when you have a problem which is easily expressible as thread-per-unit-of-work, but you want N > large. They can be useful for e.g. a job system as well, and in that case the primary advantage is the extremely low context-switch time, as well as the manual yielding.

There are lots of problems where I wouldn't recommend fibers though

reply
mark_l_watson
20 days ago
[-]
I am excited by the proposal and early work. SBCL Common Lisp is my second most used programming language - glad to see useful extensions. Most of my recent experiments with SBCL involve tooling to be called by LLMs/agents and high speed tooling to provide LLMs/agents with better long term memory and context. Fibers will be useful for most of that work.
reply
jjtheblunt
19 days ago
[-]
what's your first most used programming language?
reply
mark_l_watson
19 days ago
[-]
Python because so much of my paid work in the last decade has been in deep learning.
reply
nothrabannosir
20 days ago
[-]
I strongly recommend having a look at the mailing list to get some context:

https://sourceforge.net/p/sbcl/mailman/sbcl-devel/thread/CAF...

and

https://sourceforge.net/p/sbcl/mailman/sbcl-devel/thread/CAC...

This will certainly speak to some people taking part in some of the more controversial discussions taking place on HN recently, to put it mildly.

reply
anonzzzies
20 days ago
[-]
Hmm, must have missed that; tried to find it. There was an SBCL discussion a few days ago but I didn't read much controversial in that? I'm a fanboy though, so possibly I'm blind to these things.
reply
ragall
19 days ago
[-]
I think he was referring to "Begone, slop men", which is the right answer to this.
reply
beepbooptheory
20 days ago
[-]
Idk if I can quite place it, but by the time it gets to "I've created github issues for each section of your reviews.." in the second link, it's just so infuriating. Just want to shake them and say "for the love of god just talk to them"!
reply
HexDecOctBin
20 days ago
[-]
Is there a similar document for the memory arena feature? I tried searching the official documentation, but found scant references and no instructions on how and when to use it.
reply
dmpk2k
20 days ago
[-]
Huh, you're right.

Apparently it's still considered experimental (even though Google uses it in production) so it's not in the User Manual. There's this: https://github.com/sbcl/sbcl/blob/master/doc/internals-notes...

reply
nesarkvechnep
19 days ago
[-]
Every move in the concurrency direction is good but I really wanted to see preemptive scheduling and Erlang-like processes.
reply
justinhj
20 days ago
[-]
They should be called Anthony Green Threads. Seriously though, great to see.
reply
matthewfcarlson
20 days ago
[-]
I personally like the name fiber better than green threads. But everywhere I've worked with user-space cooperative threads, it's always been called green threads.
reply
lll-o-lll
20 days ago
[-]
They are different things perhaps? Fibers imply strictly cooperative behaviour; I have to explicitly "yield" to give the other fibers a go. Green threads are just runtime-managed threads?
reply
varjag
20 days ago
[-]
Green threads are cooperative threads. Preemption requires ability to handle hardware interrupts, which are typically handled by the OS.
reply
patrec
20 days ago
[-]
What do you mean by this?
reply
reverius42
16 days ago
[-]
Everything purely in userspace is by definition cooperative, because to be non-cooperative, you must dip down below userspace.
reply
pestatije
20 days ago
[-]
SBCL - Steel Bank Common Lisp
reply
theParadox42
20 days ago
[-]
I really thought this was gonna be a sick material science paper. Still cool though
reply
lukasb
20 days ago
[-]
Serious question - I thought LLMs were bad at balancing parentheses?
reply
cmrdporcupine
20 days ago
[-]
I had some ideas for extending the lem editor (emacs in common lisp) the other day and I am barely literate in Lisp. So I had Claude Code do it.

Fully awesome. No problems. A few paren issues, but it didn't really seem to struggle. It produced working code. It was also really good at analyzing the lem codebase.

I even had it write an agentic coding tool in Common Lisp using the RLM ideas: https://alexzhang13.github.io/blog/2025/rlm/

Lisp is a natural fit for this kind of thing. And it worked well.

(I also suspect if parens were really a problem... there's room here for MCP or other tooling to help. Basically paredit but for agents)

reply
atgreen
20 days ago
[-]
They are much better these days.
reply
ivanb
20 days ago
[-]
Besides, one can easily code a skill+script for detecting the problem and suggesting fixes. In my anecdotal experience it cuts down the number of times dumber models walk in circles trying to balance parens.
reply