Show HN: I built AI agents that debate questions LLMs usually refuse to answer
factagora.com
LLMs often refuse certain questions or give shallow answers.

I was curious what would happen if, instead of refusing, multiple agents:

- search for information
- argue with each other
- and try to reach a conclusion

So I built a small sandbox to test this.
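The loop described above can be sketched roughly like this. This is a minimal mock, not the actual sandbox: the agents are hypothetical `respond` callables, where a real version would call an LLM and a search API.

```python
# Minimal sketch of a multi-agent debate loop (illustrative only).
# Each agent is a hypothetical `respond(question, transcript)` callable;
# a real implementation would query an LLM and a search backend instead.

def debate(question, agents, max_rounds=5):
    transcript = []
    for _ in range(max_rounds):
        positions = []
        for name, respond in agents.items():
            # Each agent sees the question plus everything said so far.
            answer = respond(question, transcript)
            positions.append(answer)
            transcript.append((name, answer))
        # Converged when every agent gives the same answer this round.
        if len(set(positions)) == 1:
            return positions[0], transcript
    return None, transcript  # no consensus: the debate "looped"

# Toy agents that converge after one exchange.
def stubborn(question, transcript):
    return "yes"

def copier(question, transcript):
    return transcript[-1][1] if transcript else "no"

conclusion, log = debate("Is this a test?", {"a": stubborn, "b": copier})
```

The convergence rule here (exact string agreement) is deliberately crude; anything fuzzier, like semantic similarity between positions, changes when and whether debates terminate.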

Some interesting things I noticed:

- agents often surface unexpected sources
- debates sometimes converge, but sometimes loop endlessly
- the framing of the question heavily changes the outcome
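One way to cut off the endless-loop case mentioned above is to treat a repeated round of positions as a cycle and stop early. This is a sketch of that idea, not necessarily how the sandbox handles it:

```python
def detect_loop(rounds):
    """Return True when the latest round of positions already appeared earlier.

    `rounds` is a list of per-round position tuples, e.g. [("yes", "no"), ...].
    If a full round repeats exactly, the agents are cycling, not progressing.
    """
    return len(rounds) >= 2 and rounds[-1] in rounds[:-1]

# A debate oscillating between two states gets flagged as a loop.
history = [("yes", "no"), ("no", "yes"), ("yes", "no")]
print(detect_loop(history))  # → True
```

Exact-match cycle detection only catches verbatim repetition; agents that rephrase the same stalemate each round would still need a hard round cap.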

Curious to see what kinds of questions would actually break this.

If you have good edge cases, paradoxes, or controversial questions, I'd love to try them.
