Humans develop opinions over time based on accumulated input. Our sense of self is a narrative, not a snapshot. Break the continuity (amnesia, coma), and the self breaks too.
I suspect the same applies to AI. A model without persistent context can't develop a real point of view. But one with continuous memory and context? That might be genuinely adaptive, consistent, even conscious-like.
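To make "persistent context" concrete, here is a minimal sketch of the idea (not the system in the paper): a stand-in generate() call wrapped in a loop that writes its history to disk and reloads it next session, so what the model "believes" accumulates instead of resetting each time. All names here (generate, memory.json) are hypothetical placeholders.

    import json
    from pathlib import Path

    MEMORY_FILE = Path("memory.json")  # hypothetical store, not from the paper

    def generate(prompt: str) -> str:
        """Stand-in for any model call; swap in whatever LLM/API you use."""
        return f"(model response conditioned on {len(prompt)} chars of history)"

    def load_memory() -> list[str]:
        # Each session starts from everything said before, not from zero.
        if MEMORY_FILE.exists():
            return json.loads(MEMORY_FILE.read_text())
        return []

    def save_memory(memory: list[str]) -> None:
        MEMORY_FILE.write_text(json.dumps(memory, indent=2))

    def chat(user_input: str) -> str:
        memory = load_memory()
        memory.append(f"User: {user_input}")
        # The model always sees the full accumulated history, not just this turn.
        reply = generate("\n".join(memory))
        memory.append(f"Model: {reply}")
        save_memory(memory)
        return reply

    if __name__ == "__main__":
        print(chat("Do you remember what we discussed yesterday?"))

A stateless model is the same loop without load_memory/save_memory: every call starts from nothing, which is the break in continuity I'm talking about.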
Most theories treat continuity as something that supports consciousness. I'm arguing it's the essence of consciousness itself.
Not a scientist—just someone with access to a powerful tool and a lot of questions. Would love feedback.
Transparency note: Developed in collaboration with GitHub Copilot (Claude Sonnet 4.5).
Paper: https://github.com/sirspyr0/ai-continuity-system/blob/main/C...
Plain summary: https://github.com/sirspyr0/ai-continuity-system/blob/main/C...