What if new proofs are included in LLM training so LLMs "rediscover" them?
2 points | 2 hours ago | 1 comment | HN
If I were trying to sell LLMs as powerful research agents, and I had enough money, I could consider planting little "gems" in the model's training set so that it would later appear to discover new theorems and proofs. There is a lot of money on the table, and I am sure there are plenty of underpaid geniuses. Is this kind of thinking wrong? Would only bad actors think this way? And how could one detect such a trick without access to the training set?
gostsamo
1 hour ago
Test only on data generated after the training has ended.
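The comment's method can be sketched as a simple date filter over an evaluation set: exclude any problem that existed before the model's training-data cutoff, so a planted "gem" could not be among the test items. All item names, field names, and dates below are hypothetical.

```python
from datetime import date

# Hypothetical evaluation items, each tagged with when the problem
# was first published anywhere.
eval_items = [
    {"problem": "theorem_A", "published": date(2023, 5, 1)},
    {"problem": "theorem_B", "published": date(2024, 9, 15)},
    {"problem": "theorem_C", "published": date(2025, 2, 3)},
]

TRAINING_CUTOFF = date(2024, 6, 1)  # assumed training-data cutoff for the model

def contamination_safe(items, cutoff):
    """Keep only items published strictly after the training cutoff,
    so they cannot appear in (or have been planted into) the training set."""
    return [it for it in items if it["published"] > cutoff]

clean = contamination_safe(eval_items, TRAINING_CUTOFF)
print([it["problem"] for it in clean])  # ['theorem_B', 'theorem_C']
```

The caveat is that this only works if the publication dates themselves are trustworthy and the model has not been updated since the stated cutoff.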