That assumption has already broken down. AI agents are transacting, communicating, and signing contracts autonomously, passing identity checks designed for people with no human visibly in the loop.
The Human Root of Trust is my attempt to name the problem and sketch the architecture: three pillars (proof of humanity, hardware-rooted device identity, action attestation), a six-step trust chain from human principal to cryptographic receipt, and two implementation paths.
It's dedicated to the public domain. No patent. No product. No ask except that whoever picks this up carries the principle forward.
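To make the six-step chain concrete, here is a minimal sketch of its signing core in TypeScript, using Node's built-in Ed25519 support. The field names, scope strings, and two-signature structure are my own illustration of the principal-to-receipt idea, not the document's specification; proof of humanity and hardware-rooted device identity are assumed and out of scope.

```typescript
import { generateKeyPairSync, sign, verify, createHash } from "node:crypto";

// The human principal's key is assumed to be anchored in proof of humanity
// and a hardware-rooted device (pillars one and two, not modeled here).
const human = generateKeyPairSync("ed25519");
const agent = generateKeyPairSync("ed25519");

// The human signs a delegation naming the agent's public key and its scope.
const delegation = Buffer.from(JSON.stringify({
  agentKey: agent.publicKey.export({ type: "spki", format: "der" }).toString("base64"),
  scope: ["email:read", "email:reply"],
  expires: "2026-01-01T00:00:00Z",
}));
const delegationSig = sign(null, delegation, human.privateKey);

// Each action the agent takes is signed into a receipt that hashes the
// delegation, binding the action back to the human at the root.
const action = Buffer.from(JSON.stringify({
  act: "email:reply",
  target: "client@example.com",
  delegationHash: createHash("sha256").update(delegation).digest("base64"),
}));
const receiptSig = sign(null, action, agent.privateKey);

// A verifier walks the chain: the human signed the delegation, the agent
// signed the receipt, and the receipt commits to that exact delegation.
console.log(
  verify(null, delegation, human.publicKey, delegationSig) &&
  verify(null, action, agent.publicKey, receiptSig)
); // true
```

The binding is the point: because the receipt commits to the delegation, a verifier can walk from a single action back to a human key without trusting the agent's own claims.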
You can prove a human authorized an agent to "handle my inbox", but that agent might delete emails, reply to clients, or forward messages it shouldn't. Proving someone is at the root doesn't mean they signed off on every action the agent took.
A human root of trust is necessary but not sufficient — we also need machine-verifiable manifests for agent capabilities. Something like a package.json for agent skills, but with cryptographic guarantees about permissions and data access patterns.
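For illustration, here is one shape such a manifest could take: a capability list signed by the delegating human's key, which a relying service checks before honoring any request. The field names and scope strings are invented for this sketch, not a proposed standard.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Like package.json, but every permission is explicit and the whole
// document is signed, so services can verify rather than trust.
interface AgentManifest {
  agent: string;
  capabilities: {
    action: string;        // e.g. "email:reply"
    dataAccess: string[];  // what the action may read
    rateLimit?: string;    // e.g. "50/day"
  }[];
}

const manifest: AgentManifest = {
  agent: "inbox-assistant@1.2.0",
  capabilities: [
    { action: "email:read",  dataAccess: ["inbox"] },
    { action: "email:reply", dataAccess: ["inbox", "contacts"], rateLimit: "50/day" },
    // Deliberately absent: "email:delete" and "email:forward". The agent
    // cannot prove authorization for those, so a verifier rejects them.
  ],
};

const principal = generateKeyPairSync("ed25519");
const payload = Buffer.from(JSON.stringify(manifest));
const signature = sign(null, payload, principal.privateKey);

// A relying service checks the signature and the specific capability
// before acting on the agent's behalf.
function isAuthorized(requested: string): boolean {
  return (
    verify(null, payload, principal.publicKey, signature) &&
    manifest.capabilities.some((c) => c.action === requested)
  );
}

console.log(isAuthorized("email:reply"));  // true
console.log(isAuthorized("email:delete")); // false
```

This maps directly onto the inbox example above: "reply" verifies, "delete" fails, and the failure is cryptographic rather than a matter of policy text.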
The accountability framework here is a good start. Would love to see it extended with concrete permission models.
That aside, everything else you lay out is spot on.
If you're inside a trusted network, then sure, maybe you don't need any of this. Then again, maybe you do: even inside an intranet, we don't let human users run wild without cryptographic authentication...