Worse. LLMs discard, lose, and mix data in the statistical "compression" they use to build their model. Over time, feeding outputs back in as training data is like making a JPEG of a JPEG that was itself made from another JPEG: each pass through this lossy loop degrades the source a little more.
Those faster (but worse) results will dilute genuinely valuable data and science at a rate that statistically crowds out well-done work on a regular, systematic basis.
IMHO.
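A toy sketch of that "JPEG of a JPEG" loop (my own illustration, not from the comment above): repeatedly re-quantize a signal with a little added noise, and the error against the original tends to grow with each generation.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(size=1000)  # stand-in for "real" data

def lossy_round_trip(x, levels=16):
    """Quantize to a few levels and add a little re-encoding noise (a crude lossy pass)."""
    lo, hi = x.min(), x.max()
    q = np.round((x - lo) / (hi - lo) * (levels - 1)) / (levels - 1) * (hi - lo) + lo
    return q + rng.normal(scale=0.05, size=x.shape)

gen = signal
for i in range(10):
    gen = lossy_round_trip(gen)
    err = np.mean((gen - signal) ** 2)
    print(f"generation {i + 1}: MSE vs. original = {err:.4f}")
```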
which contains Heathrow Terminals 1, 2, 3, 4 & 5 on the Piccadilly line. For about 15 seconds I imagined a world where Heathrow has had 5 terminals since 1933, then I read the map itself: "Recreated by Arthurs D". Phew.
Awesome example of improving information conveyance through abstractions though!
"Designing AI for Disruptive Science" is a bit market-ey, but "AI Risks 'Hypernormal' Science" is just a trimmed section heading "Current AI Training Risks Hypernormal Science".
Maybe they could be, but it seems pretty unlikely. The edges of a lot of scientific understanding are now past practical applicability; they're essentially models of things that are impossible to test. In fact, relativity was only recently fully backed up with experimental data.
I also think what's practically applicable changes constantly. Perhaps we're truly at the End of Science, but empirically we've been wrong every other time we've said that. My money is on there being more race to run.
I'm also a little skeptical about the practical value of the bleeding edge of both experimental and theoretical physics. Interesting? Sure.
Taking away some complexity comes at a price, and for some people it's hard to see that the practicality gained outweighs it.