This seems to suffer from a finite-size effect. Wolfram's machines have a tiny state space (s ≤ 4, k ≤ 3). For some classes of NP problems, this will be insufficient to encode complex algorithms, and it is low-dimensional enough that it is unlikely to be able to encode hard ("worst case") instances of the problem class. The solution space simply cannot support them.
In this regime, hard problem classes only have easy instances; think random k-SAT below the satisfiability threshold, where algorithms like FIX (Coja-Oghlan) solve the decision problem in polynomial time with high probability. In random k-SAT, the "hardness" cannot emerge away from the phase transition, and by analogy (watch my hand wave in the wind so free) I can imagine that hard instances would not exist at small scales. Almost like the opposite of the overlap gap property.
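To make the "hardness lives near the threshold" picture concrete, here's a rough brute-force sketch (pure Python, entirely my own toy code, nothing from the article); it samples random 3-SAT instances at a few clause densities around the conjectured threshold α ≈ 4.27 and counts how often they come out satisfiable:

```python
# Toy empirical probe of the random 3-SAT transition. Brute force, so only
# feasible for small n; all parameter choices here are illustrative.
import random
from itertools import product

def random_ksat(n, m, k=3):
    """m random k-clauses over n variables; a clause is a tuple of signed literals."""
    clauses = []
    for _ in range(m):
        vars_ = random.sample(range(1, n + 1), k)
        clauses.append(tuple(v if random.random() < 0.5 else -v for v in vars_))
    return clauses

def satisfiable(n, clauses):
    """Exhaustive check over all 2^n assignments (fine for n around 15)."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

if __name__ == "__main__":
    n, trials = 14, 30
    for alpha in (3.0, 4.0, 4.27, 4.5, 5.0):
        m = int(alpha * n)
        sat = sum(satisfiable(n, random_ksat(n, m)) for _ in range(trials))
        print(f"alpha = {alpha:>4}: {sat}/{trials} satisfiable")
```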
Wolfram's implicit counter-claim seems to be that the density of irreducibility among small machines approximates the density in the infinite limit (...or something? Via his "Principle of Computational Equivalence"), but I'm not following that argument. I'm sure someone has brought this up to him; I just don't understand his response. Is there some way of characterizing / capturing the complexity floor of a given problem (for an NP-hard problem P, the reduced space needs to be at least as big as some S to, WHP, contain a few hard instances)?
The `s` stands for states and `k` for colors, and says nothing about tape length. One way to state the "Principle of Computational Equivalence" is "if it looks complex, it probably is". That is, TME is the norm rather than the exception.
If true, this probably means that you can make up for the clunky computational power of a small (s, k) machine by conditioning large swathes of the input tape. That is, you have unfettered access to the input tape and, with just a sprinkle of TME, you can eke out computation by fiddling with the input tape to get the (s, k) machine to run how you want.
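For concreteness, here's a toy (s, k) runner. The rule table is an arbitrary 2-state, 2-color example I made up (not one of Wolfram's enumerated machines), but it shows how much the behavior of one fixed tiny rule depends on how you condition the input tape:

```python
# Minimal (s, k) Turing machine runner: s states, k tape colors, and an
# input tape you are free to prepare however you like.
from collections import defaultdict

def run_tm(rule, tape, state=0, head=0, max_steps=10_000):
    """rule maps (state, color) -> (new_state, new_color, move); a missing
    key means halt. Returns (halted?, steps, distinct cells touched)."""
    cells = defaultdict(int, enumerate(tape))
    touched = set()
    for step in range(max_steps):
        key = (state, cells[head])
        if key not in rule:
            return True, step, len(touched)
        state, cells[head], move = rule[key]
        touched.add(head)
        head += move                        # move is +1 (right) or -1 (left)
    return False, max_steps, len(touched)   # did not halt within the budget

# Arbitrary s=2, k=2 rule; (state 1, color 1) is deliberately left out,
# so the machine halts only if it ever reads a 1 while in state 1.
rule = {(0, 0): (1, 1, +1), (0, 1): (0, 0, +1), (1, 0): (0, 1, -1)}

for tape in ([0] * 8, [1] * 8, [1, 0] * 4):
    print(tape, "->", run_tm(rule, tape))
# The all-0 and all-1 tapes hit the step budget without halting; the
# alternating tape halts after two steps. Same machine, different tape.
```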
So, if finite-size scaling effects were actually at play, they would only work in Wolfram's favor. If there's a profusion of small TME (s, k) machines, one would probably expect computation to only get easier as (s, k) increases.
I think you also have the random k-SAT business wrong. There's this idea that "complexity happens at the edge of chaos", and I think that's pretty clearly wrong.
Random k-SAT is, from what I understand, effectively almost surely polynomial-time solvable. Below the critical threshold, where an instance is almost surely satisfiable, I think something as simple as WalkSAT will find a solution. Above the threshold, it's easy to determine in the negative that the instance is unsatisfiable (I'm not sure if DPLL works, but I think something does?). Near, or even "at", the threshold, my understanding is that something like survey propagation effectively solves this [0].
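To be clear about what I mean by "something as simple as WalkSAT", here's a bare-bones sketch (my own simplification; real solvers add restarts, noise tuning, and smarter break counts). Clauses are tuples of signed integers, e.g. (1, -3, 4).

```python
# Bare-bones WalkSAT-style local search. Returns an assignment if it finds
# one; returning None is NOT a proof of unsatisfiability.
import random

def walksat(n, clauses, max_flips=100_000, p=0.5):
    assign = [random.random() < 0.5 for _ in range(n)]
    lit_true = lambda l: assign[abs(l) - 1] == (l > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(lit_true(l) for l in c)]
        if not unsat:
            return assign                          # satisfying assignment found
        clause = random.choice(unsat)
        if random.random() < p:                    # noisy step: random variable
            var = abs(random.choice(clause))
        else:                                      # greedy step: pick the flip
            def cost(v):                           # leaving the fewest clauses
                assign[v - 1] = not assign[v - 1]  # unsatisfied
                c = sum(not any(lit_true(l) for l in cl) for cl in clauses)
                assign[v - 1] = not assign[v - 1]
                return c
            var = min((abs(l) for l in clause), key=cost)
        assign[var - 1] = not assign[var - 1]
    return None
```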
k-SAT is a little clunky to work in, so you might take issue with my take on it being solvable, but take something like Hamiltonian cycle on Erdős–Rényi random graphs: Hamiltonicity has a phase transition, just like k-SAT (and a host of other NP-complete problems), but there is a provably almost-sure polynomial-time algorithm to determine Hamiltonicity, even at the critical threshold [1].
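You don't even need to implement the algorithm from [1] to see the transition: whp the Hamiltonicity threshold in G(n, p) coincides with the threshold for minimum degree ≥ 2, around p = (ln n + ln ln n)/n, so a crude min-degree check is a decent proxy. Sketch below; the sizes and scales are just my choices.

```python
# Empirical look at how sharply "min degree >= 2" (and hence, whp,
# Hamiltonicity) switches on in G(n, p) around p* = (ln n + ln ln n) / n.
import math
import random

def min_degree_ge_2(n, p):
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                deg[i] += 1
                deg[j] += 1
    return min(deg) >= 2

if __name__ == "__main__":
    n, trials = 800, 10
    p_star = (math.log(n) + math.log(math.log(n))) / n
    for scale in (0.8, 0.9, 1.0, 1.1, 1.2):
        hits = sum(min_degree_ge_2(n, scale * p_star) for _ in range(trials))
        print(f"p = {scale:.1f} * p*: min degree >= 2 in {hits}/{trials} samples")
```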
There's some recent work on choosing "random" k-SAT instances from different distributions, and I think that's more promising for finding genuinely difficult random instances, but I'm not sure there's actually been a lot of work in that area [2].
[0] https://arxiv.org/abs/cs/0212002
I appreciated the great explanation of space complexity, and it elucidated why some scientific authors don't include it in their analysis of algorithms. However, Wolfram found that "by successively investigating both larger inputs and longer runtimes, one can develop reasonable confidence that—at least most of the time—one is correctly identifying both cases that lead to halting, and ones that do not." There are exceptions like Machine 600720, with its exceptionally long runtime, but I gain a much better understanding of an algorithm if I'm also given its space complexity. It's still an open question in pure theory, but it could be understood from empirical results.
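As a rough sketch of what adding the space measurement to that empirical protocol could look like (the rule format and helper names are mine, not Wolfram's): run a machine under growing step budgets and also record how many distinct tape cells it touches.

```python
# Empirical halting probe that reports time (steps) and space (distinct
# tape cells touched). Budgets and rule encoding are illustrative only.
from collections import defaultdict

def run(rule, max_steps):
    """rule maps (state, color) -> (state, color, +/-1); blank tape; a
    missing key means halt."""
    tape, state, head, touched = defaultdict(int), 0, 0, set()
    for step in range(max_steps):
        key = (state, tape[head])
        if key not in rule:
            return True, step, len(touched)
        state, tape[head], move = rule[key]
        touched.add(head)
        head += move
    return False, max_steps, len(touched)

def classify(rule, budgets=(10**3, 10**5, 10**6)):
    """Escalate the step budget, as in the article's empirical approach."""
    for budget in budgets:
        halted, steps, cells = run(rule, budget)
        if halted:
            return f"halts after {steps} steps using {cells} tape cells"
    return f"no halt within {budgets[-1]} steps (inconclusive)"

# Example: a 2-state rule that halts quickly on a blank tape.
print(classify({(0, 0): (1, 1, +1), (1, 0): (0, 1, -1)}))
```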
"But what if one were to look at the question empirically, say in effect just by enumerating possible programs and explicitly seeing how fast they are, etc.?"
It is absolutely rammed with em dashes, which is not conclusive. For me, a bit of a clanger is that the writer might have decided to instruct the beastie to play fast and loose with grammar "norms". So we have loads and loads of sentences starting with a conjunction (and, but).
It just gets worse. The article is huge - it's over 17,000 words. I've skimmed it and it's awful.
Please don't do this.
> "But what if one were to look at the question empirically, say in effect just by enumerating possible programs and explicitly seeing how fast they are, etc.?"
I don't think much of Wolfram's writing, but this seems to me to be just the way that scientists write. I wouldn't blink if I encountered it in a scientific paper. (Well, I'm a mathematician, so I don't know for sure what experimental-science or even theoretical CS papers look like, but I certainly wouldn't blink if I encountered it in a math paper.)