While RAG has become the standard way to add knowledge to LLMs, it often breaks down at scale: chunking introduces semantic noise and destroys the logical boundaries of the source material. Superfast instead treats memory as an architectural layer. It uses Louvain community detection to derive functional clusters, giving agents a deterministic "Logic Layer" that persists across sessions.
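To make the clustering idea concrete, here is a minimal, stdlib-only sketch of the first (local-move) phase of Louvain on an unweighted graph. This is an illustration of the general algorithm, not Superfast's actual implementation; the graph, edge list, and function name are all hypothetical.

```python
from collections import defaultdict

def louvain_one_level(edges):
    """One local-move phase of Louvain: greedily move each node to the
    neighboring community with the best modularity gain until stable.
    (Sketch only -- the full algorithm repeats this on a coarsened graph.)"""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = sorted(adj)                   # fixed order keeps the sketch deterministic
    deg = {n: len(adj[n]) for n in nodes}
    two_m = sum(deg.values())             # 2m for an unweighted graph
    comm = {n: n for n in nodes}          # start with singleton communities
    tot = {n: deg[n] for n in nodes}      # total degree per community

    improved = True
    while improved:
        improved = False
        for n in nodes:
            cur = comm[n]
            tot[cur] -= deg[n]            # temporarily detach n from its community
            links = defaultdict(int)      # edges from n into each neighboring community
            for nb in sorted(adj[n]):
                links[comm[nb]] += 1
            # modularity gain of joining community C, up to a constant 1/m factor:
            #   k_in(C) - k_n * tot(C) / 2m
            best = cur
            best_gain = links.get(cur, 0) - deg[n] * tot[cur] / two_m
            for c, k_in in links.items():
                gain = k_in - deg[n] * tot[c] / two_m
                if gain > best_gain:
                    best, best_gain = c, gain
            tot[best] += deg[n]
            if best != cur:
                comm[n] = best
                improved = True

    groups = defaultdict(set)
    for n, c in comm.items():
        groups[c].add(n)
    return sorted(groups.values(), key=min)

# Two triangles joined by a single bridge edge c-d: the local-move phase
# already recovers the two cliques as communities.
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("d", "e"), ("e", "f"), ("d", "f"),
         ("c", "d")]
print(louvain_one_level(edges))  # [{'a', 'b', 'c'}, {'d', 'e', 'f'}]
```

In practice you would reach for a library implementation (e.g. `networkx.community.louvain_communities`) rather than hand-rolling this, but the sketch shows why the resulting clusters respect connectivity structure instead of chunk boundaries.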
We’ve kept the strict TDD and Socratic discipline of the original framework but scaled it for environments like Microsoft Fabric and AWS Glue, where token waste is a primary bottleneck.
Check it out here: https://github.com/FastBuilderAI/superfast