AI Risks "Hypernormal" Science
28 points | 2 hours ago | 8 comments | asimov.press
drtgh
11 minutes ago
> AI could repeat this pattern at a larger scale — generating faster results within the existing paradigm, while the structural conditions for disruptive science remain unchanged or worsen.

Worsen. LLMs discard, lose, and mix data during the statistical "compression" that builds their vector representations. Over time, successive feedback will be analogous to creating a JPEG from a JPEG that was itself sourced from another JPEG, through this "Gaussian" loop.

Those faster (but worse) results will degrade genuinely valuable data and science at a rate that systematically, and on a regular basis, discards well-done science.
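The JPEG-of-a-JPEG analogy can be made concrete with a toy simulation. Here a 1-D "dataset" containing one sharp, distinctive feature is repeatedly re-encoded through a lossy 3-point moving average, which stands in for the statistical compression the comment describes (the kernel and the 50-generation count are arbitrary illustrative choices, not anything from the article):

```python
def smooth(signal):
    # One generation of lossy re-encoding: a circular 3-point moving
    # average (Python's negative indexing wraps the left edge).
    n = len(signal)
    return [(signal[i - 1] + signal[i] + signal[(i + 1) % n]) / 3
            for i in range(n)]

data = [0.0] * 10 + [1.0] + [0.0] * 10   # one sharp, distinctive feature
for _ in range(50):                       # 50 generations of re-encoding
    data = smooth(data)

# The total "mass" of the data is preserved exactly, but the distinctive
# feature is gone: the signal has blurred toward a uniform average.
```

Each pass keeps the aggregate statistics intact while destroying the fine structure, which is the degradation mode being claimed here.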

IMHO.

reply
piker
12 minutes ago
I got stuck for a minute on the caption "Harry Beck’s 1933 map of the London Underground" to: https://substackcdn.com/image/fetch/$s_!VsWm!,f_auto,q_auto:...

which contains Heathrow Terminals 1, 2, 3, 4 & 5 on the Piccadilly line. For about 15 seconds I imagined a world where Heathrow has had 5 terminals since 1933; then I read the map itself: "Recreated by Arthurs D". Phew.

Awesome example of improving information conveyance through abstractions though!

reply
boulos
1 hour ago
Please don't editorialize titles unless they're clearly clickbait.

"Designing AI for Disruptive Science" is a bit market-ey, but "AI Risks 'Hypernormal' Science" is just a trimmed section heading "Current AI Training Risks Hypernormal Science".

reply
cogman10
52 minutes ago
The article presumes that the models we have today, which describe nearly everything, could still be subject to a major paradigm shift.

Maybe they could be, but it seems pretty unlikely. The edges of a lot of scientific understanding are now past practical applicability; they are essentially models of things that are impossible to test. In fact, relativity was only recently fully backed up with experimental data.

reply
tech_ken
34 minutes ago
I don't think paradigm shifts have to be 'better' in some march-toward-progress sense; they can be lateral or even regressive in that way and still lead to longer-horizon improvements.

I think also what's practically applicable changes constantly. Perhaps we're truly at the End of Science, but empirically we've been wrong every other time we've said that. My money is that there's more race to run.

reply
throwaway27448
25 minutes ago
Physics is a bit of a special case. This certainly doesn't apply to, say, biology, medicine, or cognition, not to mention any of the social sciences, i.e. most research.

I'm also a little skeptical about the practical value of the bleeding edge of both experimental and theoretical physics. Interesting? Sure.

reply
ArRENCEAI
21 minutes ago
What's more alarming isn't that AI is limited to existing domain data, it's that when people push it to deviate outside those known data points it confidently hallucinates nonsense.
reply
vivid242
1 hour ago
I wasn’t aware of the map empire, thank you!

Taking away complexity comes at a price, and for some people it's hard to see that the practicality outweighs it.

reply
ortusdux
1 hour ago
Reminds me of the coastline paradox - https://en.wikipedia.org/wiki/Coastline_paradox
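The paradox (measured length depends on the ruler you use) can be sketched with the Koch curve, a standard idealized coastline. Each refinement below replaces every segment with four segments a third as long, so the measured length grows by a factor of 4/3 at every level of detail; the starting segment and the five levels shown are arbitrary choices for illustration:

```python
import math

def koch_refine(points):
    """One Koch refinement: replace each segment with 4 shorter ones."""
    out = []
    for (ax, ay), (bx, by) in zip(points, points[1:]):
        dx, dy = (bx - ax) / 3, (by - ay) / 3
        p1 = (ax + dx, ay + dy)            # 1/3 of the way along
        p3 = (ax + 2 * dx, ay + 2 * dy)    # 2/3 of the way along
        # Peak of the equilateral bump: the middle third rotated by 60 deg.
        px = p1[0] + dx * math.cos(-math.pi / 3) - dy * math.sin(-math.pi / 3)
        py = p1[1] + dx * math.sin(-math.pi / 3) + dy * math.cos(-math.pi / 3)
        out.extend([(ax, ay), p1, (px, py), p3])
    out.append(points[-1])
    return out

def length(points):
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

coast = [(0.0, 0.0), (1.0, 0.0)]   # a perfectly smooth "coastline"
lengths = []
for _ in range(5):
    lengths.append(length(coast))
    coast = koch_refine(coast)

# Each refinement multiplies the measured length by 4/3: 1, 4/3, 16/9, ...
```

As the measurement scale shrinks, the measured length diverges rather than converging to a "true" value, which is the paradox in a nutshell.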
reply
tech_ken
40 minutes ago
My hot take is that mathematical and scientific 'soundness' is ultimately more of an aesthetic preference than an objective quality of reality. Good science makes sense to humans, and 'what makes sense' is ultimately what fits satisfyingly in your brain.

There's nothing inherently wrong with an enormous epicycle model of reality from the perspective of the God of Math; so long as your formal system is consistent and expressive enough to represent everything then meh, it's a model. But the model that humans want to elevate to canonical status has far stricter requirements, and ultimately it's the one which the majority of sufficiently credentialed tastemakers decide is 'best'. Parsimony works well in physics where you have closed form expressions for all your stuff, but the biology cases are so much messier because it turns out that sometimes reality isn't parsimonious.

All this to say that good science is a matter of taste, and while AI can gist the broad strokes of taste, I've yet to see it take on the role of genuine tastemaker.
reply
bananaflag
54 minutes ago
I find it funny how people are so concerned that AI cannot innovate, that AI coding agents only give the most bland solutions to any problem, etc., when the next step in OpenAI's 5 stages to AGI is literally called "Innovators".
reply
jacquesm
53 minutes ago
It's marketing.
reply
munk-a
34 minutes ago
Do you mean to say my current AI workflow doesn't involve secret agents running around Bond-style sabotaging those that'd impede my efforts to build a super secret RSS forwarder that pig-latinifies the text before sending it to my client?
reply
thegrim33
39 minutes ago
My two step plan is to go to sleep and then wake up the next day and be a billionaire. Surely because that's my stated next step that means when I wake up tomorrow I'll be rich.
reply
jacquesm
31 minutes ago
At no risk to myself I will try your plan and if it doesn't work you owe me your billion.
reply