Why question-space can't be baked into LLM weights (preprint)
2 points | 2 hours ago | 1 comment | zenodo.org | HN
h_hasegawa
2 hours ago
I've been building an external cognitive OS for LLMs called KIS (Knowledge Innovation System) for 18 months. The core argument:

As LLMs get smarter, they converge faster. This is the problem. Genuine inquiry requires non-convergent, open-ended exploration — which is structurally incompatible with how trained models work.

The math: question-space is a colimit (an open-ended, non-convergent expansion). Model weights implement closure operators (arising from Galois connections), which are idempotent: φ(φ(q)) = φ(q). Once a closure operator hits its fixed point, reapplying it produces nothing new, so the two structures are fundamentally incompatible. Scaling won't fix this.
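For readers who haven't met closure operators: the preprint's φ isn't spelled out in this comment, but every closure operator is idempotent and extensive. A toy Python sketch using transitive closure of a relation (my example, not from the paper) shows the fixed-point behavior the argument hinges on:

```python
def transitive_closure(pairs):
    """Closure operator on a set of ordered pairs: add (a, c) whenever
    (a, b) and (b, c) are both present, until a fixed point is reached."""
    closed = set(pairs)
    while True:
        new = {(a, d) for (a, b) in closed for (c, d) in closed if b == c}
        if new <= closed:          # fixed point: nothing new to add
            return closed
        closed |= new

q = {(1, 2), (2, 3), (3, 4)}
once = transitive_closure(q)        # first application expands q
twice = transitive_closure(once)    # second application changes nothing

assert twice == once    # idempotent: phi(phi(q)) == phi(q)
assert q <= once        # extensive: q is contained in phi(q)
```

Applying the operator a second time is a no-op; that's the structural contrast with a colimit-style process, which by construction never settles into such a fixed point.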

KIS operates upstream of the LLM, designing initial conditions before generation begins. It's currently operational as WebKIS, with an effect size of d ≈ 0.8 in invention-support experiments.

Preprint: https://zenodo.org/records/19305025

Happy to discuss the category theory or architecture.
