Because It Doesn't Have To (blog.computationalcomplexity.org)
28 points by zdw 1 day ago | 5 comments
dzink
2 hours ago
Children learn by playing because not much is expected of the outcome in play. Improvement happens when you can play: when an AI has a play environment to learn in with reinforcement, or when entrepreneurs are allowed to try, fail, and do better. Doctors learn by practicing under supervision, or on cadavers, until they can do it for real. No straight line goes up without a jiggle at the beginning.
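
A toy sketch of that play-to-improve loop, using an epsilon-greedy bandit as the stand-in (the arm payoffs are made up for illustration):

    # Toy sketch: an epsilon-greedy bandit improves only because a slice
    # of its pulls is "play" -- exploration that's allowed to do worse
    # than the current best guess. The arm payoffs are made up.
    import random

    true_means = [0.2, 0.5, 0.8]      # hidden payoff per arm
    counts = [0] * 3
    estimates = [0.0] * 3
    epsilon = 0.1                     # fraction of pulls spent "playing"

    for t in range(10_000):
        if random.random() < epsilon:
            arm = random.randrange(3)               # explore: try anything
        else:
            arm = estimates.index(max(estimates))   # exploit the best guess
        reward = 1.0 if random.random() < true_means[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

    print(estimates)  # converges near [0.2, 0.5, 0.8]; arm 2 dominates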
chermi
3 hours ago
I like the networking perspective, but the ML perspective is such a loose analogy that it's hard to even judge. For one, we've known forever that softening constraints lets you reach otherwise unreachable solutions. And there's a gulf of difference between succeeding at something deterministic by allowing failure and doing good pattern matching by optimizing over a rough landscape of examples.
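
A classic concrete case of constraint softening is simulated annealing, which deliberately accepts worse candidates to escape local minima. A minimal sketch on an arbitrary toy objective:

    # Simulated annealing on a toy objective with a shallow local minimum
    # near x ~ 1.1 and the global minimum near x ~ -1.3. Accepting some
    # *worse* moves is what lets it cross the barrier between basins.
    import math, random

    def f(x):
        return x**4 - 3 * x**2 + x

    x, temp = 1.1, 2.0                 # start in the shallow basin
    while temp > 1e-3:
        cand = x + random.gauss(0, 0.5)
        delta = f(cand) - f(x)
        # always accept improvements; accept regressions with prob e^(-delta/T)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = cand
        temp *= 0.999                  # cool: tolerate fewer mistakes over time

    print(round(x, 2))                 # usually ends near -1.3, not stuck at 1.1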
Animats
7 minutes ago
> I like the networking perspective, but the ML perspective is such a loose analogy that it's hard to even judge.

Right. ML doesn't have to work well because it's used in situations where the cost of the errors falls on someone other than the service provider. Hallucinations require a business model where their cost is an externality, like pollution.

With an objective goal to check the results against, such as tests, a spec, or driving without hitting anything, it's possible to do better, of course.
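
Something like this loop, where propose() is a hypothetical stand-in for any stochastic generator and the test cases are invented for illustration:

    # Sketch of "objective goal to check against": sample candidates and
    # keep the first one that passes the tests. propose() is a hypothetical
    # stand-in for any stochastic generator; the test cases are made up.
    import random

    def passes_tests(sort_fn):
        cases = [[], [1], [3, 1, 2], [5, 5, 4]]
        return all(sort_fn(list(c)) == sorted(c) for c in cases)

    def first_passing(propose, n=10):
        for _ in range(n):
            candidate = propose()      # may be wrong; that's the point
            if passes_tests(candidate):
                return candidate       # errors are filtered, not forbidden
        return None

    def propose():
        # mock generator: half the time a broken "sort", half the time a real one
        return random.choice([sorted, lambda xs: xs])

    print(first_passing(propose) is sorted)  # almost always True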

The Internet only works because fiber optic bandwidth is cheap. As someone who was working on congestion in the early days, I could see that congestion in the middle of the network had no known solution. If congestion could be pushed out to the edges, there were strategies, but there were no good solutions in the middle. And, in fact, the whole Internet would sometimes go into congestion collapse in the early 1990s, with the big peering points at MAE-EAST and MAE-WEST losing well over half of the packets. What saved the Internet was cheap long-haul bandwidth and big hardware-supported switches. This kept congestion at the fringes.
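
The canonical edge strategy is TCP-style AIMD: probe for bandwidth additively, back off multiplicatively on loss. A toy sketch, with a made-up path capacity:

    # Edge strategy in miniature: TCP-style AIMD. The sender probes for
    # bandwidth additively and halves its window on loss, so congestion
    # control lives at the endpoints. The capacity of ~12 is made up.
    def aimd(window, lost, add=1.0, mult=0.5):
        """One round-trip update of a sender's congestion window."""
        return window * mult if lost else window + add

    w = 1.0
    trace = []
    for rtt in range(30):
        lost = w > 12                  # pretend the path chokes above ~12
        w = aimd(w, lost)
        trace.append(round(w, 1))
    print(trace)                       # sawtooth: climb to ~13, halve, repeat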

xg15
50 minutes ago
Yeah, I didn't find his initial take very convincing, but he lost me at the follow-up:

> For most cases I don't think having explainability is worth the trade offs in capability. That'll be a good topic for a future post.

nh23423fefe
2 hours ago
I'm not seeing how describing measures over a possibility space amounts to "allowing for mistakes."

Seems like content reverse-engineered from the title.

dataviz1000
2 hours ago
The LLM reasoning models behave strikingly similarly to superscalar out-of-order execution processors, with decomposition, verification, and error-correction steps.

Moreover, the LLM reasoning models are reliably consistent at solving the same task from the same prompt with different variables. This can be demonstrated.

Not everything has to be deterministic to be useful. Nonetheless, understanding how LLMs can be applied and where they are useful will help a lot of people be less frustrated and spend fewer tokens.
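
One way to see "consistent but not deterministic" is temperature sampling over next-token logits: at temperature 0 decoding is greedy and repeatable, while at moderate temperature the likely tokens still dominate. A minimal sketch with made-up logits:

    # "Consistent but not deterministic": temperature sampling over toy
    # next-token logits. T = 0 reduces to greedy (always the same token);
    # moderate T mostly picks the likely token but varies in detail.
    import math, random

    def sample(logits, temperature):
        if temperature == 0:
            return max(range(len(logits)), key=lambda i: logits[i])  # greedy
        scaled = [l / temperature for l in logits]
        m = max(scaled)                               # for numerical stability
        weights = [math.exp(s - m) for s in scaled]   # unnormalized softmax
        r = random.random() * sum(weights)
        for i, wgt in enumerate(weights):
            r -= wgt
            if r <= 0:
                return i

    logits = [2.0, 1.0, 0.1]                        # made-up next-token scores
    print([sample(logits, 0) for _ in range(8)])    # [0, 0, 0, 0, 0, 0, 0, 0]
    print([sample(logits, 0.7) for _ in range(8)])  # mostly 0, sometimes 1 or 2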

CSSer
35 minutes ago
I suppose another interesting thing about this observation is that it's true of the universe too! Einstein insisted God doesn't play dice with the universe; Bohr disagreed, and Bell-test experiments eventually proved Einstein wrong.

So either it's an interesting statement because it's infinitely generalizable, or it's not interesting for the same reason.

booleandilemma
3 hours ago
Interesting. I could apply this to some people I've worked with. They work so well because they don't have to.