kundan_s__r
since 1/4/2026
1 karma
Builder working on AI reliability and intent enforcement in LLM systems.

Interested in failure modes of agentic workflows, hallucination framed as constraint violation, and how to make probabilistic models behave predictably in production.

Currently exploring intent contracts, semantic drift detection, and validation layers that sit after generation rather than inside prompts.
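
To make the "after generation rather than inside prompts" part concrete, here's a minimal sketch of the kind of validation layer I mean. It's illustrative only: IntentContract and must_mention are made-up names for this example, not a real library or my actual implementation.

    # Tiny post-generation validation layer (illustrative names, Python 3.10+).
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class IntentContract:
        # Each check inspects the generated text and returns an error message or None.
        checks: list[Callable[[str], str | None]] = field(default_factory=list)

        def validate(self, output: str) -> list[str]:
            # Runs after generation, independent of whatever prompt produced the output.
            return [err for check in self.checks if (err := check(output)) is not None]

    def must_mention(term: str) -> Callable[[str], str | None]:
        # Hypothetical check: the output must reference a required term.
        return lambda out: None if term.lower() in out.lower() else f"missing required term: {term}"

    contract = IntentContract(checks=[must_mention("refund policy")])
    violations = contract.validate("Our refund policy allows returns within 30 days.")
    print(violations)  # [] means the output satisfies the contract; non-empty would trigger retry or repair

The point of the sketch is the placement: the contract never touches the prompt, it only judges the output, so the same checks work regardless of how the text was generated.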

I like reading HN threads where people disagree thoughtfully.