The paper manages to present previous work on Lévy's optimal beta reduction in a more streamlined fashion, generalizing duplicators to so-called replicators that avoid the need for separate book-keeping gadgets (like croissants and brackets). Its author also proves himself to be a skilled programmer, bringing his theory to life with this web-based evaluator.
The app contrasts traditional graph-based λ-calculus reduction (which replaces nodes with entire subgraphs) with interaction (which makes only local modifications and hence needs more steps), while showing that semantics is preserved.
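To make the contrast concrete, here is a minimal sketch (my own encoding, not the app's) of the traditional tree-based approach: normal-order β-reduction on de Bruijn terms, where contracting a redex copies the entire argument subterm — exactly the wholesale duplication that interaction-based optimal reduction replaces with local steps:

```typescript
// Minimal λ-terms with de Bruijn indices (no names, so no α-conversion needed).
type Term =
  | { tag: "var"; ix: number }
  | { tag: "lam"; body: Term }
  | { tag: "app"; fn: Term; arg: Term };

const v = (ix: number): Term => ({ tag: "var", ix });
const lam = (body: Term): Term => ({ tag: "lam", body });
const app = (fn: Term, arg: Term): Term => ({ tag: "app", fn, arg });

// Shift free variables >= cutoff by d.
function shift(t: Term, d: number, cutoff = 0): Term {
  switch (t.tag) {
    case "var": return t.ix >= cutoff ? v(t.ix + d) : t;
    case "lam": return lam(shift(t.body, d, cutoff + 1));
    case "app": return app(shift(t.fn, d, cutoff), shift(t.arg, d, cutoff));
  }
}

// Substitute s for variable j in t.
function subst(t: Term, j: number, s: Term): Term {
  switch (t.tag) {
    case "var": return t.ix === j ? s : t;
    case "lam": return lam(subst(t.body, j + 1, shift(s, 1)));
    case "app": return app(subst(t.fn, j, s), subst(t.arg, j, s));
  }
}

// One normal-order step: contract the leftmost-outermost β-redex,
// copying the whole argument subterm into every occurrence.
function step(t: Term): Term | null {
  if (t.tag === "app" && t.fn.tag === "lam")
    return shift(subst(t.fn.body, 0, shift(t.arg, 1)), -1);
  if (t.tag === "app") {
    const f = step(t.fn); if (f) return app(f, t.arg);
    const a = step(t.arg); if (a) return app(t.fn, a);
  }
  if (t.tag === "lam") {
    const b = step(t.body); if (b) return lam(b);
  }
  return null;
}

function normalize(t: Term): Term {
  let cur = t, next = step(t);
  while (next) { cur = next; next = step(cur); }
  return cur;
}
```

For example, `(λx. x x)(λy. y)` — written `app(lam(app(v(0), v(0))), lam(v(0)))` — normalizes to the identity in two steps, the first of which duplicates the argument.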
[1] https://news.ycombinator.com/item?id=46034355
I see Salvatore has a fork, so they are obviously aware of it. Unsure whether they're proposing the exact same thing without reference or citation…
Both approaches have been researched for a long time now, where HOC's subset is typically referred to as the "abstract algorithm". For example, a version of the lambda calculus in which any variable can be used at most once (the "affine lambda calculus") can be reduced optimally with interaction nets without requiring any bookkeeping.
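A quick way to see why the affine case needs no bookkeeping: when every bound variable occurs at most once, β-reduction never copies anything, so there is nothing for duplicators to track. A sketch of an affinity check on named terms (my own minimal encoding, not HOC's):

```typescript
// Named λ-terms.
type T =
  | { tag: "var"; name: string }
  | { tag: "lam"; param: string; body: T }
  | { tag: "app"; fn: T; arg: T };

const vr = (name: string): T => ({ tag: "var", name });
const lm = (param: string, body: T): T => ({ tag: "lam", param, body });
const ap = (fn: T, arg: T): T => ({ tag: "app", fn, arg });

// Count free occurrences of `name` in t.
function occurrences(t: T, name: string): number {
  switch (t.tag) {
    case "var": return t.name === name ? 1 : 0;
    case "lam": return t.param === name ? 0 : occurrences(t.body, name);
    case "app": return occurrences(t.fn, name) + occurrences(t.arg, name);
  }
}

// Affine: every binder's variable is used at most once in its body.
function isAffine(t: T): boolean {
  switch (t.tag) {
    case "var": return true;
    case "lam": return occurrences(t.body, t.param) <= 1 && isAffine(t.body);
    case "app": return isAffine(t.fn) && isAffine(t.arg);
  }
}
```

So `λx. x` passes, while the self-application `λx. x x` fails — and it is exactly terms like the latter that force the sharing machinery in the general case.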
The novel thing about Salvadori's work is that it develops a new (and better explained) bookkeeping mechanism.
The linked paper (https://arxiv.org/pdf/2505.20314) claims the squiggles it introduces are a model for solving Lévy-optimal parallel reduction of lambda terms.
But the author has no affiliation, it's very weird that they call this "lambda-reduction", and it heavily smells of AI slop.
I hope I'm wrong but it doesn't look right. Can anyone with expertise in this field chime in?
We have a paper from someone not working in the field, with no affiliation, and with an abstract that claims to "solve the longstanding enigma with groundbreaking clarity" (a sentence never before uttered by a human being in flesh and blood) and that takes 4 (four) citations to justify that the lambda calculus is Turing-complete, a fact well known to every undergraduate student.
I'm sorry if this gives reviewer #2 vibes, but this doesn't look right to me and I'm asking if someone with actual expertise in the field can clarify what's happening.
Can't comment on the delta-nets. If you're looking for a real person who's been plugging away at parallel & optimal reduction of lambda terms, this is where to look: https://github.com/VictorTaelin
I don't think "lambda-reduction" is a red flag. The "real" term would be "beta-reduction" (but that's the incumbent algorithm which TFA claims to replace or improve on - so why not give it a new name?)
But if I were to go sniffing for red flags:
From the first commit:
lambdacalc.ts: // The original lambda calculus introduced by Church was the 'relevant' lambda calculus which doesn't allow for weakening/erasure. This is why I add the '+' below to indicate that the lambda calculus started at 1936 but was extended afterwards.
What?
util.ts: Why is this full of Gaussian Elimination of Matrices? The paper doesn't mention it
That was my reaction as well, having only ever heard of β-reduction, α-conversion (renaming bound variables to prevent collisions), and η-reduction (rewriting λx. f x to f, provided x does not occur free in f). Sloppy use of nomenclature is absolutely a red flag.
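For reference, the η rule mentioned above is mechanical enough to fit in a few lines. A sketch on named terms (the encoding and helper names are mine, purely illustrative):

```typescript
// Named λ-terms.
type E =
  | { tag: "var"; name: string }
  | { tag: "lam"; param: string; body: E }
  | { tag: "app"; fn: E; arg: E };

const vr = (name: string): E => ({ tag: "var", name });
const lm = (param: string, body: E): E => ({ tag: "lam", param, body });
const ap = (fn: E, arg: E): E => ({ tag: "app", fn, arg });

// Does `name` occur free in t?
function free(t: E, name: string): boolean {
  switch (t.tag) {
    case "var": return t.name === name;
    case "lam": return t.param !== name && free(t.body, name);
    case "app": return free(t.fn, name) || free(t.arg, name);
  }
}

// η-reduce the top of a term: λx. f x  →  f, provided x is not free in f.
function etaTop(t: E): E {
  if (
    t.tag === "lam" &&
    t.body.tag === "app" &&
    t.body.arg.tag === "var" &&
    t.body.arg.name === t.param &&
    !free(t.body.fn, t.param)
  )
    return t.body.fn;
  return t;
}
```

The side condition is what makes it interesting: `λx. f x` reduces to `f`, but `λx. x x` is left alone because `x` occurs free in the function position.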
To be transparent: I don't understand this stuff all that well and it's entirely possible I'm missing something, but everything here is weird AF.
- Who is the author? Why do they have no affiliation?
- What is the main result of the paper? How does it improve on the state of the art? Even for stuff that's way beyond my pay grade, I can usually tell from the abstract. I'm completely baffled here.
- Why do they introduce graphical notation without corresponding formal definitions?
- Why is it written in this weird style where theorems are left implicit? Usually, there's at least a sketch of proof.
- Why does it not address that the thing they're claiming to do isn't elementary recursive as per https://doi.org/10.1006/inco.2001.2869?
Again, it's entirely possible that it's a skill issue on my part and I'd love to be corrected, but I'm completely baffled and I still have absolutely no idea of what I'm looking at. Am I the stupid one and it's obvious to everyone else?
α-conversion is not required in interaction nets. η-reduction is an additional rule not typically discussed, but see for example [2].
[1] https://arxiv.org/pdf/2505.20314
[2] https://www.sciencedirect.com/science/article/pii/S030439750...