Recursive Language Models
160 points | 3 days ago | 6 comments | arxiv.org
Legend2440
3 days ago
Isn't this just subagents? You call another LLM to go read a file and extract some piece of information or whatever, so that you don't clutter up the main context with the whole file.

Neat idea, but not a new idea.

adagradschool
2 days ago
Yes! Rather than anthropomorphizing subagents, I view them primarily as a way of managing context. I'm exploring this idea in Scope[0] to have observable subagents that recursively break down the task, to avoid having to compact. One thing I haven't been able to figure out is how to evaluate/improve this planning step. I am using markdown files to encode heuristics for planning, but it feels too unstructured for me to measure. Would love it if someone pointed me to existing literature/projects around this idea!

[0] https://github.com/adagradschool/scope

schmuhblaster
2 days ago
Hi, I stumbled on this article in my Twitter feed and posted it because I found it very practical, despite the somewhat misleading title (and I also don't like encoding agent logic in .md files). For my side project I am experimenting with describing agents / agentic workflows in a Prolog-based DML [1].

[1] https://www.deepclause.ai

wiesbadener
3 days ago
They state:

> RLMs are not agents, nor are they just summarization. The idea of multiple LM calls in a single system is not new — in a broad sense, this is what most agentic scaffolds do. The closest idea we’ve seen in the wild is the ROMA agent that decomposes a problem and runs multiple sub-agents to solve each problem. Another common example is code assistants like Cursor and Claude Code that either summarize or prune context histories as they get longer and longer. These approaches generally view multiple LM calls as decomposition from the perspective of a task or problem. We retain the view that LM calls can be decomposed by the context, and the choice of decomposition should purely be the choice of an LM.

nostrebored
2 days ago
lol this is literally one of the only reasons competent people are using subagents. it is literally

    @summarizable(recursive=True)
    def long_running_task(subagent):
        ...

on my long horizon tasks, where the hierarchy is determined at agent execution time…
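
A minimal sketch of what a decorator like that could look like (summarize_transcript and the subagent object are hypothetical stand-ins, not anything from the paper):

    import functools

    def summarizable(recursive=True):
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(subagent, *args, **kwargs):
                result = fn(subagent, *args, **kwargs)
                # Condense the subagent's transcript before it flows back
                # into the parent's context; with recursive=True, nested
                # subagents have already condensed theirs the same way.
                subagent.transcript = summarize_transcript(
                    subagent.transcript, recursive=recursive)
                return result
            return wrapper
        return decorator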

seeknotfind
3 days ago
Yeah, from the title it sounds like perhaps the entire operation is differentiable, and therefore trainable as a whole model, and that such training was actually done. On closer inspection, though, I can't find any evidence that anything more is done than calling the model repeatedly.

AlexCoventry
3 days ago
No, there's no training going on here, as far as I can tell. E.g., they use GPT-5 as their base model. Also, AFAICT from a quick skim/search there's no mention of loss functions or derivatives, FWIW.

alextheparrot
2 days ago
The derivative being a grad(ient) student sampling scaffolds against evals + qualitative observations: most prompt-based LLM papers

lelanthran
3 days ago
Unless that subagent you call can call subagents itself, which can call subagents themselves, ad infinitum, it's not recursive.

songodongo
3 days ago
The paper says they used a recursive depth of 1. Does that mean subagents or sub-subagents?

johnnyfived
3 days ago
A recursive depth of 1? So it's just subagents..? How exactly can this be described as recursive then?

daralthus
3 days ago
sub-agents that access (and manipulate) the SAME context (a file system or variables in the REPL)
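
A toy sketch of that shared-environment idea (llm() is a hypothetical model call; in the actual RLM setup the root model writes this kind of decomposition itself, inside a REPL where the context lives as a variable):

    def rlm(query, context, depth=0, max_depth=1, chunk=4000):
        # Base case: context is small enough (or we hit max depth)
        # to feed to the model whole.
        if depth >= max_depth or len(context) <= chunk:
            return llm(query + "\n\n" + context)
        # One possible decomposition: recurse over slices of the shared
        # context, then answer from the partial results. Here it is
        # hard-coded; in the paper the LM chooses the decomposition.
        parts = [rlm(query, context[i:i + chunk], depth + 1, max_depth, chunk)
                 for i in range(0, len(context), chunk)]
        return llm(query + "\n\nNotes from sub-calls:\n" + "\n".join(parts))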

bob1029
3 days ago
> The key insight is that long prompts should not be fed into the neural network (e.g., Transformer) directly but should instead be treated as part of the environment that the LLM can symbolically interact with.

How is this fundamentally different from RAG? Looking at Figure 4, it seems like the key innovation here is that the LLM is responsible for implementing the retrieval mechanism as opposed to a human doing it.

NitpickLawyer
3 days ago
Two differences that I see:

1. RAG (as commonly used) is more of a workflow; this thing is more "agentic"

2. The recursive nature of it

First, the way I see workflow vs. agentic: the difference is where the "agency" is. In a workflow, the coder decides the pipeline (e.g. question -> embed -> retrieve -> (optional) llm_call("rerank these parts with the question {q} in mind") -> select chunks -> llm_call("given question {q} and context {c}, answer the question to the best of your knowledge")).

The "agentic" stuff has the agent decide what to search for, how many calls to make and so on, and it then decides when to answer (i.e. if you've seen claude code / codex work on a codebase, you've seen them read files, ripgrep a repo, etc).

The second thing, recurrence, has been tried before (babyagi was one of the first I remember, ~'23), but the models weren't up to it, so there was a lot of glue around them to make them kinda sorta work. Now they do.
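
A crude way to see the first difference in code (embed, retrieve, and llm_call are all hypothetical helpers):

    # Workflow: the coder fixes the control flow ahead of time.
    def rag_workflow(q):
        chunks = retrieve(embed(q), k=10)
        return llm_call(f"Given question {q} and context {chunks}, answer.")

    # Agentic: the model picks the next action, so the number of
    # retrievals and the stopping point are decided at run time.
    def agentic_rag(q, max_steps=10):
        history = [f"Question: {q}"]
        for _ in range(max_steps):
            step = llm_call("Transcript:\n" + "\n".join(history) +
                            "\nReply with SEARCH: <query> or ANSWER: <answer>")
            if step.startswith("ANSWER:"):
                return step[len("ANSWER:"):].strip()
            history.append("Results: " + str(retrieve(embed(step), k=5)))
        return llm_call("Give a best-effort answer:\n" + "\n".join(history))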

alansaber
2 days ago
The terminology we use is rather imprecise; the interpretation of RAG inflates year on year.

zed31726
3 days ago
T̶u̶r̶t̶l̶e̶s̶ LLMs all the way down

downboots
2 days ago
attention is all you need but over and over and over and over... Precision is what we should ask for.

yawnxyz
3 days ago
here's a more readable version: https://alexzhang13.github.io/blog/2025/rlm/

mccoyb
3 days ago
My wishlist for 2026: Anthropic / OpenAI expose “how compaction is executed” to plugin authors for their CLI tools.

This technique should be something you could swap in for whatever Claude Code bakes in — but I don’t think the correct hooks or functionality is exposed.

rockwotj
3 days ago
Isn’t codex open source? You can just go read what they do.

I have read the gemini source, and it’s a pretty simple prompt to summarize everything when the context window is full.
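
The gist of that pattern as a sketch (not the actual gemini-cli code; llm is whatever model callable you have):

    # When the transcript gets too long, replace it with a summary turn.
    def maybe_compact(history, llm, char_limit=400_000):
        # Crude size proxy (~4 chars/token); real tools count actual tokens.
        text = "\n".join(m["content"] for m in history)
        if len(text) < char_limit:
            return history
        summary = llm("Summarize this conversation, preserving key "
                      "decisions, file paths, and open tasks:\n" + text)
        return [{"role": "user",
                 "content": "Summary of earlier context: " + summary}]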

MillionOClock
3 days ago
It should be noted that OpenAI now has a specific compaction API which returns opaque encrypted items. This is, AFAICT, different from deciding when to compact, and many open source tools should indeed be inspectable in that regard.

omneity
3 days ago
It's likely to either be an approach like this [0] or something even less involved.

0: https://github.com/apple/ml-clara

cubefox
3 days ago
Seems similar to this paper: https://arxiv.org/abs/2510.14826