Another requirement was keeping latency as low as possible (we managed to get < 5 seconds with 85%+ accuracy). Their approach seems to have very unpredictable latencies, sometimes up to thousands of seconds (may be fine for background tasks), and it scales poorly with corpus size.
Interesting research anyway, but I'd still stick with embedding/reranker-based retrieval (+BM25 for hybrid search), because you don't waste time wandering around blindly on every query, trying to find the minimal context to start from, when an index could have surfaced it immediately. Another issue is that research papers often implement subpar baselines for the approaches they compare against. When I was implementing retrieval, the straightforward implementation gave me 40% accuracy, and various tricks/parameter tuning pushed it to 85%+ without changing the overall architecture (took about a month of experimentation).
It was tuned for a specific set of open-source models we run ourselves on our own GPUs, so I can't share exact golden numbers (for example, if I replace those small models with Claude Haiku+Cohere Embed, the results get worse). A proper reranker helped tremendously because it removed noise; BM25 helped a lot too, because in many cases you want exact-match searches instead of fuzzy/vector search (so again, less noise).
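To make the hybrid part concrete, here's a minimal sketch of BM25 + vector search fused with reciprocal rank fusion. It's illustrative, not our production code: embed() is a placeholder for whatever embedding model you run, and the reranker would sit after this step.

```python
import numpy as np
from rank_bm25 import BM25Okapi  # real package; usage here is just a sketch

def embed(texts):
    # placeholder: call your embedding model here (small open-source model, Cohere, etc.)
    raise NotImplementedError

def hybrid_search(query, docs, k=20, rrf_k=60):
    # lexical side: BM25 over whitespace-tokenized docs (good for exact matches)
    bm25 = BM25Okapi([d.split() for d in docs])
    bm25_rank = np.argsort(-bm25.get_scores(query.split()))

    # semantic side: cosine similarity over embeddings (good for fuzzy matches)
    doc_vecs = np.asarray(embed(docs))
    q_vec = np.asarray(embed([query]))[0]
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    vec_rank = np.argsort(-sims)

    # reciprocal rank fusion: docs ranked highly by either signal float up
    scores = {}
    for ranking in (bm25_rank, vec_rank):
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (rrf_k + rank)
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return [docs[i] for i in top]  # these candidates then go to the reranker
```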
For small open-source models (we used them because we wanted speed), prompt engineering mattered too, especially in cross-language benchmarks where the model may get confused about which language it should respond in (the system prompt's language, the user query's language, or the documents' language). Even the order of fields in the output JSON schema mattered (in intermediate steps), because LLMs are autoregressive: if you order the fields incorrectly, the model may guess or hallucinate during extraction, since the first value in the schema can't be extracted reliably without first considering other dependent fields that should have been generated earlier (we don't use reasoning models, to save on speed).
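As a toy illustration of the field-ordering point (hypothetical field names, not our actual schema):

```python
# Bad: the model must commit to "answer" before it has written down the
# evidence and language it depends on, so it's more likely to guess.
SCHEMA_BAD = {
    "answer": "string",
    "supporting_quote": "string",
    "response_language": "string",
}

# Better: dependent fields come first, so by the time the model reaches
# "answer" the relevant context is already in its own output.
SCHEMA_GOOD = {
    "response_language": "string",   # pin the language before any free text
    "supporting_quote": "string",    # force the evidence to be extracted first
    "answer": "string",              # now conditioned on the fields above
}
```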
I used LLM-as-a-judge to quickly figure out what improved scores and what didn't. Then humans tested it manually as well and calculated their own scores, to check whether they diverged from the machine's. I think if I had to do it again, I'd probably use an agent (like autoresearch) to autonomously find the best configuration for the exact set of models via intelligent brute force (dunno if it would work, but interesting to try).
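The human-vs-judge comparison boiled down to something like this (a sketch; judge() stands in for the actual judge-model call):

```python
import numpy as np

def judge(query, retrieved, answer):
    # placeholder: prompt a judge model to return e.g. a 0-5 correctness score
    raise NotImplementedError

def judge_human_agreement(examples, human_scores):
    llm_scores = np.array([judge(e["query"], e["retrieved"], e["answer"]) for e in examples])
    human = np.array(human_scores, dtype=float)
    # simple Pearson correlation; if it drops, stop trusting the judge and re-check its prompt
    return np.corrcoef(llm_scores, human)[0, 1]
```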
We don't have 1B+ vectors; our system is split into tenants (organizations), a single tenant usually doesn't have that many vectors, and every document in the system has a specific hierarchical structure, so your mileage may vary.
How was this done before LLMs and AI? Can you share some examples of these documents?
But current IR methods, both lexical and semantic retrieval, definitely have bottlenecks, as pointed out in the obliq-bench paper (https://arxiv.org/abs/2605.06235).
In many cases cheap methods like grepping and BM25 just are not going to work well, so semantic similarity is the best initial retriever/filter, followed by LLM-as-judge as a second filter/reranker if you need the precision.
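In code, that two-stage idea is roughly the following (a sketch; vector_search() and llm_is_relevant() are placeholders for your own retriever and model call):

```python
def llm_is_relevant(query, passage):
    # placeholder: ask a small model "Does this passage help answer the query? yes/no"
    raise NotImplementedError

def retrieve_precise(query, vector_search, top_k=50, keep=10):
    candidates = vector_search(query, top_k)                          # cheap, high-recall stage
    kept = [p for p in candidates if llm_is_relevant(query, p)]       # expensive, high-precision stage
    return kept[:keep]
```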
Is anyone using small, low-latency, fast LLMs to implement stuff like search as a RAG alternative? Could be the perfect use case for that Llama3 8B ASIC some company showed off a few months ago.
But it still has to enumerate synonyms to find things.
I would assume it's very domain dependent: code or technical docs have more precise terminology that suits fixed-string search, while medical or legal text can have many, many ways to say the same thing.