Throwing in an async mutex to fix the lack of atomicity before yielding is basically telling me "I don't know when I'm calling await in this code, so I might as well give up." In my experience this is strongly correlated with the original designer not knowing what they are doing, especially in languages like JavaScript. Even if they did understand the problem, this can introduce difficult-to-debug bugs and deadlocks that would otherwise not appear. You also introduce an event-queue scheduling delay, which can be substantial depending on how often you're locking and unlocking.
IMO this stuff is best avoided and you should just write your cooperative multi-tasking code properly, but this does require a bit more advanced knowledge (not that advanced, but maybe for the JS community). I wish TypeScript would help people out here, but it doesn't: calling an async function (or even a normal function) does not invalidate type narrowing done on escaped variables, probably for usability reasons, but that is actually the wrong thing to do.
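To make the "write it properly" point concrete: keep the check-and-claim step synchronous, e.g. by caching the pending work instead of the finished value. Just a sketch; connect_db is a made-up stand-in for whatever expensive async setup you have.

import asyncio

_pending = None   # the Task for the expensive setup, once someone has started it

async def get_connection():
    global _pending
    # No await between the check and the assignment, so only one task ever
    # starts the work; every other caller awaits the same Task.
    if _pending is None:
        _pending = asyncio.create_task(connect_db())   # made-up expensive setup
    return await _pending
    # (error handling elided: in this sketch a failed Task stays cached)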
import asyncio

_lock = asyncio.Lock()
_instance = None

async def get_singleton():
    global _instance
    # Fast path: once the instance exists, no locking at all.
    if _instance:
        return _instance
    async with _lock:
        # Re-check under the lock: another task may have built the
        # instance while we were waiting to acquire it.
        if not _instance:
            _instance = await costly_function()
        return _instance
How do you suggest replacing this? It looks eerily similar to the Spring DI framework, yikes.
Developers should properly learn the difference between push and pull reactivity, and leverage both appropriately.
Many, though not all, problems where an async mutex might be applied can instead be simplified by using an async queue and the accompanying serialization of reactions (pull reactivity).
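For the singleton above, a sketch of what that looks like in asyncio (worker, _requests and the startup line are my names, nothing from the article): callers never touch the shared state directly; they put a request on a queue and a single worker task pulls requests off and answers them one at a time, so even the await inside the worker can't interleave two reactions.

import asyncio

_requests = asyncio.Queue()
_instance = None

async def worker():
    # The only task that ever touches _instance, so no lock is needed:
    # requests are handled strictly one at a time.
    global _instance
    while True:
        reply = await _requests.get()
        if _instance is None:
            _instance = await costly_function()
        reply.set_result(_instance)

async def get_singleton():
    # Callers push a request and await the reply instead of sharing state.
    reply = asyncio.get_running_loop().create_future()
    await _requests.put(reply)
    return await reply

# at startup: asyncio.create_task(worker())

For a one-off singleton this is arguably heavier than the double-checked lock, but the same shape scales nicely to state machines with many kinds of reactions.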
I got especially confused by the actor part; aren't actors share-nothing by design?
It does say there's only 1 actor, so I guess it has nothing to do with actors? They're talking about GenServer-like behavior within a single actor.
As I write this, I'm figuring out that maybe they mean it's like a single-actor GenServer sending messages to itself, where each message updates some global state. Even then, though, actors don't yield inside callbacks: if you send a message it gets queued to run, but it won't suspend the currently running callback, so the current handler runs to completion and then the next one runs.
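Something like this run-to-completion shape, to put it in (made-up) code: send() only enqueues, and because handle() never awaits, a handler can't be suspended halfway and come back to stale state.

import asyncio

class TinyActor:
    # Toy mailbox: handlers are plain functions and always run to completion.
    def __init__(self):
        self.mailbox = asyncio.Queue()
        self.state = {}

    def send(self, msg):
        # Sending never suspends whoever is currently running; it only enqueues.
        self.mailbox.put_nowait(msg)

    async def run(self):
        while True:
            msg = await self.mailbox.get()
            self.handle(msg)   # no await in here

    def handle(self, msg):
        key, value = msg
        self.state[key] = value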
Erlang actors are single-threaded, with serialized message processing. If I understand the article, that avoids the issue it brings up completely, since you can't be holding captured state that has gone stale by the time you resume.
Me as well. I thought actors were a bit like mailboxes that process and send messages.
It may seem like single-threaded execution doesn't need mutexes, but async/await is really just another implementation of userspace threads. And like all threads, it can lead to race conditions if you have yield points (await) inside a critical section.
People may say what they want about bad design and reactive programming, but the thread model is inherently easier to reason about.
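The textbook illustration of that yield-point problem, as a small self-contained sketch (asyncio.sleep(0) standing in for any real await):

import asyncio

balance = 100

async def withdraw(amount):
    global balance
    if balance >= amount:       # the check...
        await asyncio.sleep(0)  # ...a yield point: any other task may run here...
        balance -= amount       # ...and the action, based on a now-stale check

async def main():
    await asyncio.gather(withdraw(80), withdraw(80))
    print(balance)              # -60: both checks passed before either subtraction

asyncio.run(main())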
the entire Compaction needs to be wrapped in a mutex, and every callback needs to start with lock/unlock pair ... So explicit locking probably gravitates towards having just a single global lock around the entire state, which is acquired for the duration of any callback.
This is just dining philosophers. When I compact, I take read locks on victims A and B while I use them to produce C. So callers can still query A and B during compaction. Once C is produced, I atomically add C and remove A and B, so no caller should see double or missing records.
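A toy version of that swap in single-threaded asyncio terms (my sketch, not the article's code; merge_segments and the dict-shaped segments are made up): building C is the slow, awaited part while A and B stay queryable, and the manifest update itself contains no await, so no reader can see both the old pair and C, or neither.

import asyncio

segments = []   # ordered list of immutable, dict-shaped segments (newest last)

def query(key):
    # Readers walk whatever list is current; segments never mutate in place.
    for seg in reversed(segments):
        if key in seg:
            return seg[key]
    return None

async def compact(a, b):
    # Slow part: a and b stay in `segments`, so queries keep working.
    c = await merge_segments(a, b)   # made-up, I/O-heavy merge
    # Atomic part: no await between building the new list and publishing it.
    segments[:] = [c if s is a else s for s in segments if s is not b]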
The tl;dr is that it uses garbage collection to let readers see the older version of the tree they were walking while the latest copy still gets updated.
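The copy-on-write core of that idea, shrunk down to a toy (ordinary reference counting standing in for whatever reclamation scheme the real structure uses):

root = {"a": 1}   # current version, treated as immutable once published

def update(key, value):
    global root
    new_root = dict(root)    # build a fresh version off to the side...
    new_root[key] = value
    root = new_root          # ...and publish it with a single rebind

def reader():
    snapshot = root          # pin one version
    # Later updates rebind `root`, but this snapshot stays valid and only
    # gets reclaimed once the last reader drops its reference to it.
    return sum(snapshot.values())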
I read a paper about concurrent interval skip lists the other day as well, which was interesting.
https://en.wikipedia.org/wiki/Non-blocking_algorithm#Obstruc...