The reality is that there are some great, mature solvers out there that work well enough for most cases. And while it might be possible to eke out more performance in specific problems, it would be very hard to beat existing solvers in general.
Theoretical developments like this, while interesting on their own, don't really contribute much to day-to-day users of linear programming. A lot of smart people have worked very hard to "optimize the optimizers" from a practical standpoint.
> The next logical step is to invent a way to scale linearly with the number of constraints. “That is the North Star for all this research,” she said. But it would require a completely new strategy. “We are not at risk of achieving this anytime soon.”
However, this requires solving a quadratic 'best direction' problem at each step, which IIRC reduces to a linear complementarity problem (LCP) (https://en.wikipedia.org/wiki/Linear_complementarity_problem). The LCP scales with the number of active constraints, which is at most the dimensionality N of the problem rather than the total number of constraints. So if you have P >> N constraints, you are golden (a rough sketch of a basic LCP solver is below).
Note that Dantzig also contributed to the theory of LCPs.
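For concreteness, here is a minimal sketch of one standard LCP method, projected Gauss–Seidel. The matrix M and vector q are made-up toy values, and the solver is illustrative rather than production-grade:

```python
import numpy as np

def solve_lcp_pgs(M, q, iters=500, tol=1e-10):
    """Projected Gauss-Seidel sketch for the LCP:
    find z >= 0 with w = M @ z + q >= 0 and z @ w = 0.
    Converges for e.g. symmetric positive-definite M."""
    z = np.zeros_like(q)
    for _ in range(iters):
        for i in range(len(q)):
            # residual of row i with z[i]'s own contribution removed
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
        w = M @ z + q
        if abs(z @ w) < tol and w.min() > -tol:   # crude complementarity check
            break
    return z

# Made-up 2x2 example (the kind of system an active-set QP step might produce)
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
z = solve_lcp_pgs(M, q)
print("z =", z, " w =", M @ z + q)   # expect z >= 0, w >= 0, z.w ~ 0
```

In an active-set QP step, M would be assembled from the currently active constraints only, which is why the cost tracks the active set rather than the full constraint count P.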
Obviously, any breakthrough in these basic methods is directly translatable into more efficient learning algorithms for training single-layer neural nets (perceptrons). Extending to multi-layer NNs is not far off from there...
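To make that concrete: for linearly separable data, training a single-layer perceptron can be posed as an LP feasibility problem, so a faster LP solver speeds up exactly this kind of formulation. A minimal sketch using scipy.optimize.linprog on a made-up toy dataset (not anyone's production method):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy data: two linearly separable classes, labels in {-1, +1}
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 2.0], [3.0, 2.5]])
y = np.array([-1, -1, 1, 1], dtype=float)

n, d = X.shape
# Find w, b with y_i * (w @ x_i + b) >= 1 for all i (pure feasibility).
# In linprog's A_ub @ v <= b_ub form with v = [w, b]:
#   -y_i * (x_i @ w + b) <= -1
A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])
b_ub = -np.ones(n)

res = linprog(c=np.zeros(d + 1),                 # dummy objective: any feasible point will do
              A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (d + 1))   # w and b are free variables
w, b = res.x[:d], res.x[d]
print("separating hyperplane:", w, "bias:", b)
```

Any feasible point of these constraints is a separating hyperplane for the toy data.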
Here "risk" seems odd (or it's a translation/language-nuance mistake).
There are a lot of problems like this. Traveling salesman, for example: exponential in the worst case, but fast in practice for the vast majority of instances.
Does this indicate progress on the P vs. NP question?
It is probably just a “git gud” situation. I even re-read Lars Blackmore’s “Lossless Convexification of Nonconvex Control Bound and Pointing Constraints of the Soft Landing Optimal Control Problem” from time to time, hoping I’ll find a way to apply a similar convexification idea to my problems. Given all that, I’m not that surprised that convex optimisation is not more widely known.
To me, convex optimization is more the domain of engineering, where continuous functions and/or stochastic processes are involved.
Much of signal processing and digital communication system design is built around convex optimization, because it’s actually a sensible way to concretely answer “was this designed right?”.
One can use basic logic to prove a geometry theorem, or to reason about the behavior of a distributed algorithm.
But if one wants to prove that a digital filter was designed properly for random/variable inputs, that leads to solving convex optimization problems (minimizing mean squared error or the like); a short example follows below.
Of course, whether the right problem is being solved is a different issue. MMSE is just mathematically extremely convenient but not necessarily the most meaningful characterization of behavior.
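To illustrate why MMSE is so convenient: FIR filter design by mean-squared-error minimization is an unconstrained convex (least-squares) problem with a closed-form solution. A hypothetical numpy sketch, with made-up signals and filter taps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: fit an FIR filter to match an unknown system's output in the MMSE sense
n, taps = 2000, 8
x = rng.standard_normal(n)                                        # input signal
h_true = np.array([0.5, -0.3, 0.2, 0.1, 0.05, 0.0, -0.02, 0.01])  # "unknown" system
d = np.convolve(x, h_true)[:n] + 0.01 * rng.standard_normal(n)    # noisy desired output

# Design matrix: column k is the input delayed by k samples
X = np.column_stack([np.concatenate([np.zeros(k), x[:n - k]]) for k in range(taps)])

# MMSE design: minimize ||d - X h||^2, an unconstrained convex problem with a
# closed-form (normal-equations) solution, computed here via least squares
h_mmse, *_ = np.linalg.lstsq(X, d, rcond=None)
print("estimated taps:", np.round(h_mmse, 3))
print("true taps:     ", h_true)
```

The convenience is exactly that the convex problem collapses to the normal equations; whether squared error is the right criterion for the application is the separate question raised above.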