For a Turing machine that already solves its problem in n time and √n space (in other words, a lot of them!), the result doesn't say anything new.
If we're being pedantic, it's trading time for the space guarantee.
For algorithms, a little memory outweighs a lot of time - https://news.ycombinator.com/item?id=44055347 - May 2025 (139 comments)
Is it something like a reverse compiler that produces more space-efficient code by analyzing the program's flow in reverse?
Obvious example: the flickering stack frame of tail call elimination.
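A minimal sketch of what that looks like, written out by hand in Python (which does not perform tail call elimination itself): the tail-recursive call gets replaced by a loop that reuses a single frame's worth of state, which is the constant-space behavior TCO buys you.

```python
# Tail-recursive form: each call would normally push a fresh stack frame,
# even though the caller has nothing left to do after the call returns.
def total_rec(xs, acc=0):
    if not xs:
        return acc
    return total_rec(xs[1:], acc + xs[0])   # call in tail position

# The same function with the tail call "eliminated" by hand:
# one frame's variables are overwritten each iteration instead of
# stacking up -- the "flickering stack frame".
def total_iter(xs, acc=0):
    while xs:
        xs, acc = xs[1:], acc + xs[0]
    return acc

print(total_rec([1, 2, 3, 4]), total_iter([1, 2, 3, 4]))  # 10 10
```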
But even without it, you can emulate mutation in a pure language by threading a "heap" parameter through everything.
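A toy sketch of that heap-threading style, in Python rather than an actual pure language: the "heap" is a dict that is copied instead of mutated, and every function that "writes" takes the old heap in and hands a new one out.

```python
def heap_write(heap, addr, value):
    """Return a new heap with addr bound to value; the old heap is untouched."""
    new_heap = dict(heap)   # full O(n) copy for clarity; a persistent map
    new_heap[addr] = value  # would share structure and cost O(log n) instead
    return new_heap

def heap_read(heap, addr):
    return heap[addr]

def counter_bump(heap, addr):
    """'Increment a cell' with no mutation: read, then thread out a new heap."""
    return heap_write(heap, addr, heap_read(heap, addr) + 1)

h0 = {"x": 0}
h1 = counter_bump(h0, "x")
h2 = counter_bump(h1, "x")
print(h0["x"], h1["x"], h2["x"])  # 0 1 2 -- earlier heaps stay observable
```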
There's at most a log factor of extra space and time required in most computing models to "update" a persistent map (though I'm not sure of the best way to encode persistent maps directly on Turing machine tapes, which is the model this result is specifically about).
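Where that log factor comes from can be sketched with path copying on a binary trie over w-bit integer keys (a hypothetical minimal persistent map, not anything from the paper): an update copies only the w nodes on one root-to-leaf path and shares every other subtree, so each version costs O(w) = O(log of the key space) extra space, and all older versions remain readable.

```python
W = 8  # key width in bits; small here purely for illustration

def t_set(node, key, value, bit=W - 1):
    """Return a new trie mapping key -> value, sharing all unchanged subtrees."""
    if bit < 0:
        return value                      # leaf level: store the value itself
    left, right = node if node else (None, None)
    if (key >> bit) & 1:
        return (left, t_set(right, key, value, bit - 1))  # copy right spine
    return (t_set(left, key, value, bit - 1), right)      # copy left spine

def t_get(node, key, bit=W - 1):
    """Look up key; returns None for absent keys."""
    if node is None:
        return None
    if bit < 0:
        return node
    left, right = node
    return t_get(right if (key >> bit) & 1 else left, key, bit - 1)

v1 = t_set(None, 3, "a")
v2 = t_set(v1, 3, "b")        # "overwrites" in v2; v1 still maps 3 -> "a"
print(t_get(v1, 3), t_get(v2, 3))  # a b
```

Each `t_set` allocates exactly W new tuples, one per level, which is the logarithmic overhead per update.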