AI agents don't crash. They just quietly give wrong answers. You end up scrolling through traces one by one, trying to find a pattern across hundreds of sessions.
Kelet automates that investigation. Here's how it works:
1. You connect your traces and signals (user feedback, edits, clicks, sentiment, LLM-as-a-judge, etc.)
2. Kelet processes those signals and extracts facts about each session
3. It forms hypotheses about what went wrong in each case
4. It clusters similar hypotheses across sessions and investigates them together
5. It surfaces a root cause with a suggested fix you can review and apply
The key insight: individual session failures look random. But when you cluster the hypotheses, failure patterns emerge.
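To make step 4 concrete, here's a minimal sketch of the clustering idea in Python (plain TF-IDF plus k-means; this is just the shape of the idea, not Kelet's actual pipeline):

```python
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# One failure hypothesis per bad session (the output of step 3).
hypotheses = [
    "agent picked web search instead of the internal database tool",
    "retrieved documents were stale, so the answer cited old pricing",
    "agent chose web search when the user asked about internal data",
    "context window truncation dropped half of the system prompt",
    "wrong tool selected: web search instead of a SQL query",
]

# Vectorize the hypothesis text (an embedding model works the same way).
vectors = TfidfVectorizer().fit_transform(hypotheses)

# Cluster similar hypotheses across sessions.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# The largest cluster is the first candidate for a shared root cause.
for cluster, count in Counter(labels).most_common():
    example = next(h for h, l in zip(hypotheses, labels) if l == cluster)
    print(f"cluster {cluster}: {count} sessions, e.g. {example!r}")
```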
The fastest way to integrate is through the Kelet Skill for coding agents — it scans your codebase, discovers where signals should be collected, and sets everything up for you. There are also Python and TypeScript SDKs if you prefer manual setup.
It’s currently free during beta. No credit card required. Docs: https://kelet.ai/docs/
I'd love feedback on the approach, especially from anyone running agents in prod. Does automating the manual error analysis sound right?
There are roughly three reasons for this disconnect.
1. The agents aren't experts in your proprietary code. They can read logs and traces and make educated guesses, but there's no world model of your code in there.
2. The people building these apps are unqualified to review the output. I used to mock narcissists evaluating ChatGPT quality by asking it for their own biography, but at least they were testing a domain they're experts in. Your average MLE has no deep expertise in Kubernetes or the app itself. At best, they're demoing on some toy "known broken" app under what are basically ideal conditions, when part of the holdout set should be new outages in your own app.
3. SREs themselves are not so great at causal analysis. Many junior SREs take the "it worked last time" approach, but this embeds a presumption that whatever went wrong "last time" hasn't since been fixed in code. Your typical senior SRE takes a "what changed?" approach, which is depressingly effective (as it indicates most outages are caused by coworkers). At the highest echelons, I've seen research papers examining metastability and Granger-causality networks, but I'm pretty sure nobody in SRE, or building these RCA agents, can explain what they mean.
> The key insight: individual session failures look random. But when you cluster the hypotheses, failure patterns emerge.
My own insight is mostly Bayesian. Typical applications have redundancy of some kind, and you can extract useful signals by separating "good" from "bad". A simple Bayesian score of (100 + bad) / (100 + good) does a relatively good job of filtering out the "oh, that error log always happens" signals. There's also likely a path using ClickHouse-level data and Bayesian causal networks, but the problem is that traditional Bayesian networks are hand-crafted by humans.
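That smoothed score fits in a few lines; a minimal sketch, assuming log signatures have already been bucketed by good vs. bad sessions:

```python
from collections import Counter

# Log signatures seen in sessions labeled good vs. bad (toy data).
good = Counter(["conn_reset", "cache_miss", "cache_miss", "retry", "conn_reset"])
bad = Counter(["oom_kill", "oom_kill", "conn_reset", "oom_kill", "cache_miss"])

# (100 + bad) / (100 + good): signatures that fire everywhere score ~1.0,
# while ones concentrated in bad sessions float above it.
scores = {
    sig: (100 + bad[sig]) / (100 + good[sig])
    for sig in set(good) | set(bad)
}

for sig, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{sig:12s} {score:.3f}")
```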
So yeah, you can ask an LLM for 100 guesses and run some kind of k-means clustering on them, but you can probably do a better job by doing dimensional analysis first and passing that on to the agent.
Great points, but I think there's some domain confusion here. You're describing infra/code RCA. Kelet does quality RCA for AI agents: the agent returns a 200 OK but gives the wrong answer.
The signal space is different. We're working with structured LLM traces + explicit quality signals (thumbs down, edits, eval scores), not distributed system logs. Much more tractable.
Your Bayesian point actually resonates — separating good from bad sessions and looking for structural differences is close to what we do. But the hypotheses aren't "100 LLM guesses + k-means." Each one is grounded in actual session data: what the user asked, what the agent did, what came back, and what the signal was.
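To illustrate what "grounded" means here, a sketch of the record such a hypothesis might carry; the field names are my own invention, not Kelet's schema:

```python
from dataclasses import dataclass


@dataclass
class GroundedHypothesis:
    # Everything below comes from the session itself, not a free-floating guess.
    session_id: str
    user_request: str          # what the user asked
    agent_actions: list[str]   # what the agent did (tool calls, steps)
    observations: list[str]    # what came back
    signal: str                # thumbs down, edit, eval score, ...
    hypothesis: str            # the suspected failure, tied to the above


example = GroundedHypothesis(
    session_id="s-1042",
    user_request="What's our Q3 churn rate?",
    agent_actions=["web_search('Q3 churn rate')"],
    observations=["generic industry benchmarks, no internal data"],
    signal="thumbs_down",
    hypothesis="used web search where the internal analytics tool was needed",
)
```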
Curious about the dimensional analysis point — are you thinking about reducing the feature space before hypothesis generation?
Yep. We can integrate with any solution that supports OpenTelemetry :) so it's pretty native. Just use the integration skill:
npx skills add kelet-ai/skills
I'm so tired
Yes, I definitely had an LLM assist in writing it. Yeah, I should have stripped it better.
Yet it's f*ing painful to do error analysis and go through thousands of traces. Hope you can live with my human mistakes.
LangSmith/Langfuse/Braintrust collect traces, and then YOU need to look at them and analyze them (error analysis/RCA).
Kelet does that for you :)
Does that make any sense? If not, please tell me, I'm still trying to figure out how to explain it, lol.
Yep, good catch! "Kelet" means input/prompt in Hebrew :)
From what we've discovered analyzing ~33K sessions, most of the time when the agent selects the wrong tool, it's because the tool's description (i.e., its prompt) wasn't good enough, or a nuance was missing.
That falls squarely under Kelet's scope :)
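To make that concrete, here's a hypothetical before/after of a tool description (tool names and text are made up, shown as an OpenAI-style tool schema):

```python
# Before: the kind of one-line description that leads to wrong tool selection.
vague = {
    "name": "search_orders",
    "description": "Search orders.",
}

# After: spells out scope and the nuance the agent kept missing.
improved = {
    "name": "search_orders",
    "description": (
        "Search the internal orders database by customer, SKU, or date range. "
        "Use this for questions about existing orders. Do NOT use it for "
        "refunds; those go through the process_refund tool."
    ),
}
```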