Show HN: Graph-Oriented Generation – Beating RAG for Codebases by 89%
9 points
18 hours ago
| 2 comments
| github.com
LLMs are better at being the "mouth" than the "brain", and I can prove it mathematically. I built a deterministic graph engine that offloads reasoning from the LLM. It reduces token usage by 89% and lets a tiny 0.8B model trace enterprise execution paths flawlessly. Here are the white paper and a reproducible benchmark.
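
To make the claim concrete, here is a minimal sketch of the idea (the function names and graph are illustrative, not the repo's actual API): the engine walks a static call graph deterministically, and the model only verbalizes the precomputed path.

    import networkx as nx

    # Toy call graph: nodes are functions, edges are call sites.
    g = nx.DiGraph()
    g.add_edges_from([
        ("handle_request", "validate_input"),
        ("handle_request", "process_order"),
        ("process_order", "charge_card"),
        ("charge_card", "write_audit_log"),
    ])

    # The engine, not the model, discovers the execution path.
    path = nx.shortest_path(g, "handle_request", "write_audit_log")

    # The 0.8B model only narrates the path; it never searches the code.
    prompt = "Explain this execution path:\n" + " -> ".join(path)

Because the traversal is deterministic, the context window holds only the relevant path rather than retrieved chunks, which is where the token savings would come from.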
ysleepy
5 hours ago
[-]
Is that a paper without any citations?

Also, how does it differ from providing a language-specific LSP MCP server to the agent?

I dislike the ham-fisted way agents use grep to understand a statically typed codebase that has perfect code navigation in the IDE. So this is generally interesting, but it needs a comparison to existing approaches.
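
Something like this sketch is what I mean (official Python MCP SDK; query_language_server is a stub standing in for a real LSP client), giving the agent the same navigation the IDE already has:

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("lsp-nav")

    def query_language_server(method: str, symbol: str) -> str:
        # Stub; a real tool would speak LSP (e.g. textDocument/definition)
        # over stdio to gopls, rust-analyzer, etc.
        return f"(location of {symbol} via {method})"

    @mcp.tool()
    def find_definition(symbol: str) -> str:
        """Jump to a symbol's definition using the language server."""
        return query_language_server("textDocument/definition", symbol)

    if __name__ == "__main__":
        mcp.run()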

Also lol, seeking collaboration with frontier AI research labs.

This has major crank vibes.

reply
jaen
2 hours ago
[-]
Single-testcase benchmark, no citations, nonsense terms invented for trivial concepts like "Synaptic Plasticity", LLM-slop writing style.

Nobody in their right mind would publish this to arXiv. I suggest looking up and reading guides on how to write a research paper.

reply