Show HN: RLM-MCP – Optimize context in Claude Code using MIT's recursive LM paper
1 point | 1 hour ago | 1 comment
RLM-MCP – Analyze massive files in Claude Code using MIT's Recursive Language Models.

I built an MCP server that lets Claude Code analyze files that exceed its context window by implementing the Recursive Language Models approach from MIT (arXiv:2512.24601).

The problem: Claude Code can't fit a 10GB log file in its context. Traditional grep/read returns thousands of tokens of raw matches.

The solution: Instead of stuffing data into context, treat the file as an external environment. Claude writes Python code, the MCP server executes it on the full file, and only the results come back.

You: "Find all errors in this 5GB log"

Claude: loads file → writes regex → executes in REPL → returns matches

Result: 78% fewer tokens, same accuracy

Real benchmark (300KB log file):
- Grep/Read: ~12,500 tokens
- RLM: ~2,700 tokens
- Both found identical results

No API keys needed – works with Claude Code subscriptions. Claude itself is the "brain," the MCP server is just the "hands" executing Python.
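The "hands" side can be approximated in a few lines: take a snippet the model wrote, execute it against the full file, and hand back only what it printed. A rough sketch under my own assumptions (not the package's actual implementation, which would wrap this as an MCP tool):

```python
import io
import contextlib

def run_snippet(code: str, file_path: str) -> str:
    """Execute model-written Python and return only its printed output.
    The snippet opens the file itself, so the contents stay out of context."""
    namespace = {"FILE_PATH": file_path}  # snippet reads FILE_PATH directly
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, namespace)  # NOTE: a real server should sandbox/time-limit this
    return buf.getvalue()
```

A production server would add sandboxing, timeouts, and output-size caps on top of this; the point is that only `run_snippet`'s return value travels back into the conversation.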

Install: pip install rlm-mcp

  Add to Claude Code settings:
  {"mcpServers": {"rlm": {"command": "rlm-mcp"}}}

  GitHub: https://github.com/ahmedm224/rlm-mcp
  Paper: https://arxiv.org/abs/2512.24601
Would love feedback on the approach. The MIT paper tested with GPT-5 and Qwen; this adapts it to Claude Code's MCP architecture.
ahmedm24
1 hour ago
[-]
This is just an initial beta, but it works. I found the MIT paper really useful for analyzing logs and errors, and built the MCP server to solve a real pain point I have when using AI coding tools.
reply