I built a proof-of-concept captcha that uses motion rather than distortion or object recognition. Solution characters move along a shared path with coordinated momentum; noise characters may follow similar shapes, but out of sync. You watch, identify the group by how it moves, and type the code.

Rationale: static captchas are increasingly solved by vision models. Temporal reasoning across multiple frames is harder and more expensive for frame-by-frame AI analysis.
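The core idea can be sketched roughly like this (a hypothetical illustration, not the demo's actual code): every glyph follows the same path shape, but only solution glyphs share a common phase, while decoys get a random phase offset so their motion never locks in with the group.

```javascript
// Sketch of "coordinated vs. out-of-sync" motion. Names and the
// Lissajous-style path are illustrative assumptions, not the demo's code.

function makeGlyphs(solution, noise) {
  const glyphs = [];
  for (const ch of solution) {
    // Solution glyphs move in lockstep: zero phase offset.
    glyphs.push({ ch, phaseOffset: 0, isSolution: true });
  }
  for (const ch of noise) {
    // Decoys trace the same path shape but out of phase with the group.
    glyphs.push({ ch, phaseOffset: 0.25 + Math.random() * 0.5, isSolution: false });
  }
  return glyphs;
}

// Position of a glyph along a simple Lissajous-style path at time t (seconds).
function positionAt(glyph, t) {
  const phase = 2 * Math.PI * (t + glyph.phaseOffset);
  return {
    x: 100 + 80 * Math.cos(phase),
    y: 100 + 60 * Math.sin(2 * phase),
  };
}
```

A renderer would then draw each glyph at `positionAt(glyph, t)` every animation frame; because the solution glyphs share a phase, their relative motion stays coherent over time while the decoys drift against them.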
It's client-side only, so the solution is visible in memory. Accessibility caveats apply (motion sensitivity, vestibular disorders, seizures).
Curious what the HN crowd thinks.
Demo: https://betree.github.io/movement-captcha