Show HN: LexReviewer – Because "Chat with PDF" is broken for legal workflows
1 point | 2 hours ago | 1 comment | github.com
Hi HN!

Most “chat with PDF” tools work fine until you try using them for something that actually matters, like contracts.

The issue isn’t that they can’t answer questions. It’s that you can’t trust the answers. They return something that sounds correct, but don’t clearly show where it came from, or they miss context from referenced clauses and related documents.

Legal docs make this harder because questions aren’t uniform:

- sometimes you’re searching concepts
- sometimes exact clause IDs
- sometimes text from a different linked document

Most systems handle all of those the same way, which is where things break.

So I built LexReviewer, an open-source backend designed around a single rule: "an answer isn’t useful unless you can verify it instantly."
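To make that rule concrete, here is a minimal sketch of what a "verifiable answer" could look like. The `GroundedAnswer` type and `verify` helper are illustrative assumptions, not LexReviewer's actual API: the idea is simply that every answer carries the exact source span it claims to come from, so the claim can be checked mechanically.

```python
from dataclasses import dataclass

@dataclass
class GroundedAnswer:
    # Hypothetical shape for a verifiable answer: the generated text
    # plus a verbatim citation into the source document.
    text: str                    # the generated answer
    source_doc: str              # which document it was drawn from
    passage: str                 # verbatim span supporting the answer
    char_range: tuple[int, int]  # offsets of the passage in source_doc

def verify(answer: GroundedAnswer, documents: dict[str, str]) -> bool:
    """An answer checks out only if the cited span exists verbatim."""
    doc = documents.get(answer.source_doc, "")
    start, end = answer.char_range
    return doc[start:end] == answer.passage
```

An answer that quotes text not actually present at the cited offsets fails `verify`, which is one cheap way to catch citation drift before a human ever reads the output.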

Instead of treating every query identically, it adapts its search strategy based on what you’re asking and can follow references across documents when needed. The result is answers that stay grounded in real text and point directly to the source passage.
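As a rough illustration of that adaptive routing (names and patterns are my own hypothetical sketch, not the repo's actual code), the three query shapes from the list above could be dispatched like this:

```python
import re

# Illustrative query routing: pick a retrieval strategy based on the
# shape of the question. Patterns and strategy names are assumptions.
CLAUSE_ID = re.compile(r"\b(?:section|clause|§)\s*\d+(?:\.\d+)*\b", re.I)
CROSS_REF = re.compile(r"\b(?:exhibit|schedule|annex|appendix)\s+[A-Z0-9]+\b", re.I)

def route_query(query: str) -> str:
    """Return the retrieval strategy suited to this query's shape."""
    if CLAUSE_ID.search(query):
        return "exact_lookup"      # jump straight to the cited clause
    if CROSS_REF.search(query):
        return "follow_reference"  # resolve the linked document first
    return "semantic_search"       # fall back to concept-level retrieval
```

The point isn't these particular regexes; it's that an exact-ID lookup, a cross-document hop, and a fuzzy concept search are different operations, and collapsing them into one embedding search is where generic tools lose grounding.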

Repo: https://github.com/LexStack-AI/LexReviewer

-- Currently tested on 300+ page contracts with cross-references

Feedback I’d especially value:

- Where do current document-AI systems fail hardest for you?
- What’s been the biggest blocker to trusting AI outputs in production workflows?
- If you’ve built something similar, what design choices ended up mattering most?

sherebanuk
2 hours ago
Author here, happy to dive into technical details if anyone’s curious.

I’m especially interested in how others are solving:

- multi-document reasoning
- citation reliability
- retrieval accuracy in dense technical text

Would love to hear what’s worked (or failed) in real deployments.
