Show HN: Multimodal Benchmarks
I built a set of open multimodal retrieval benchmarks because existing IR evals are still mostly text-only and don’t capture real-world complexity.

This repo includes ground-truth datasets, queries, and relevance judgments for 3 hard domains:

• Financial documents (SEC filings with tables, charts, footnotes)

• Medical device IFUs (diagrams, nested sections, regulatory language)

• Educational videos (temporal alignment, code + lecture context)
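
To give a rough sense of the data shape, a query plus its graded relevance judgments might look like the sketch below (field names and IDs are illustrative only, not necessarily the repo's exact schema):

    # Hypothetical query + qrels record; field names are illustrative only.
    query = {
        "query_id": "fin-0001",
        "text": "What lease obligations are due within one year?",
        "modalities": ["text", "table"],  # evidence may live in a table or chart
    }
    qrels = {
        "fin-0001": {
            "10k-2023_p47_table2": 2,  # highly relevant (graded judgment)
            "10k-2023_p12_para3": 1,   # partially relevant
        }
    }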

The evaluator runs in ~1 second on the bundled demo data. Leaderboards and the evaluation harness are included. Contributions welcome.
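
For the metric side, the standard recipe for graded judgments like these is a ranking measure such as nDCG@k; here is a minimal sketch of that computation (my own illustration, not the repo's evaluator):

    import math

    def ndcg_at_k(ranked_doc_ids, qrels, k=10):
        # nDCG@k for one query: discounted gain of the ranking vs. the ideal ordering.
        gains = [qrels.get(doc_id, 0) for doc_id in ranked_doc_ids[:k]]
        dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains))
        ideal = sorted(qrels.values(), reverse=True)[:k]
        idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
        return dcg / idcg if idcg > 0 else 0.0

    qrels = {"doc_a": 2, "doc_b": 1}   # graded judgments for one query (illustrative)
    run = ["doc_b", "doc_a", "doc_c"]  # a system's ranked output
    print(round(ndcg_at_k(run, qrels), 3))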
