Show HN: Rudel – Claude Code Session Analytics
144 points
23 days ago
| 29 comments
| github.com
We built rudel.ai after realizing we had no visibility into our own Claude Code sessions. We were using it daily but had no idea which sessions were efficient, why some got abandoned, or whether we were actually improving over time.

So we built an analytics layer for it. After connecting our own sessions, we ended up with a dataset of 1,573 real Claude Code sessions, 15M+ tokens, 270K+ interactions.

Some things we found that surprised us:

- Skills were only being used in 4% of our sessions
- 26% of sessions are abandoned, most within the first 60 seconds
- Session success rate varies significantly by task type (documentation scores highest, refactoring lowest)
- Error cascade patterns appear in the first 2 minutes and predict abandonment with reasonable accuracy (see the sketch below)
- There is no meaningful benchmark for 'good' agentic session performance, so we are building one.
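(For illustration, a toy sketch of what an early error-cascade check could look like; this is simplified, the event field names are made up, and it is not the actual detection logic.)

    # Toy sketch: flag sessions whose first two minutes contain a run of
    # consecutive failed tool calls. The event shape ("ts" in epoch seconds,
    # "type", "is_error") is assumed purely for illustration.
    def early_error_cascade(events, window_s=120, run_len=3):
        if not events:
            return False
        start = events[0]["ts"]
        run = 0
        for e in events:
            if e["ts"] - start > window_s:
                break
            if e["type"] != "tool_result":
                continue
            run = run + 1 if e.get("is_error") else 0
            if run >= run_len:
                return True  # several failed tool calls in a row -> cascade
        return False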

The tool is free to use and fully open source, happy to answer questions about the data or how we built it.

dmix
23 days ago
[-]
I've seen Claude ignore important parts of skills/agent files multiple times. I was running a cleanup SKILL.md over a hundred markdown files, manually in small groups of 5, and about half the time it listened and ran the skill as written. The other half it would spend two minutes trying to understand the codebase, looking for markdown stuff for no good reason, before coming back to what the skill said.

LLMs are far from consistent.

reply
cbg0
23 days ago
[-]
Try this: keep your CLAUDE.md as simple as possible, disable skills, and ask Opus to start a subagent for each of the files, processing at most 10 at a time (so you don't get rate limited). Give it the skill's instructions for whatever processing you're doing to the markdowns directly as the prompt, and see if that helps.
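A rough sketch of the kind of prompt this suggests (the file list, batch size, and wording are placeholders, not a tested recipe):

    For each markdown file listed below, launch a subagent that applies the
    cleanup rules that follow. Process at most 10 files at a time and wait
    for each batch of subagents to finish before starting the next batch.
    Cleanup rules: <paste the relevant instructions from SKILL.md here>
    Files: docs/a.md, docs/b.md, ...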
reply
conception
22 days ago
[-]
reply
keks0r
23 days ago
[-]
Yes, we had to tune the CLAUDE.md and the skill trigger quite a bit to get it to behave much better. But to be honest, 4.6 also improved it quite a bit. Did you run into your issues under 4.5 or 4.6?
reply
dmix
23 days ago
[-]
I was using Sonnet 4.6 since it was a menial task
reply
stpedgwdgfhgdd
22 days ago
[-]
Try the latest skill-creator; it has A/B testing.
reply
emehex
23 days ago
[-]
For those unaware, Claude Code comes with a built-in /insights command...
reply
loopmonster
23 days ago
[-]
/insights is straight ego fluffing: it just tells you how brilliant you are, and the only actionable insights are the ones hardcoded into the skill that appear for everyone. Things like: be very specific with the success criteria ahead of time (more than any human could ever possibly be), tell the LLM exactly what steps to follow to the letter (instead of doing those steps yourself), use more skills (here's an example you can copy-paste that has 2 lines and just tells it to be careful), and a couple of actually neat ideas (like having it use Playwright to test changes visually after a UI change).
reply
hombre_fatal
23 days ago
[-]
It gave you a couple neat ideas and you're complaining.
reply
fragmede
22 days ago
[-]
Some people just can't take a compliment, especially if it's generated. (I'm one of them.) Still, /insights did give useful help, but I wasn't able to target it at a specific repo or set of sessions.
reply
hombre_fatal
22 days ago
[-]
Isn't it using the sessions in the cwd where you're running it?
reply
keks0r
23 days ago
[-]
Ohh, this is exciting, I kinda overlooked it. I assume there are still a lot of differences, especially across teams. But I immediately ran it when I saw your comment. It's actually still running.
reply
evrendom
23 days ago
[-]
True, the best results come when you use Claude Code and Codex as a tag team.
reply
Aurornis
23 days ago
[-]
> 26% of sessions are abandoned, most within the first 60 seconds

Starting new sessions frequently and using separate new sessions for small tasks is a good practice.

Keeping context clean and focused is a highly effective way to keep the agent on task. Having an up-to-date AGENTS.md should let new sessions get into simple tasks quickly, so you can use single-purpose sessions for small tasks without carrying the baggage of a long past context into them.

reply
sethammons
22 days ago
[-]
this jumped out at me too. What counts as "abandoned"? How do you know the goal was not simply met?

I have longer threads that I don't want to pollute with side quests. I will pull up multiple other chats and ask one or two questions about completely tangential or unrelated things.

reply
eddythompson80
22 days ago
[-]
I abandon sessions when I ask for something and it spins for a minute, fills up 40% of the context window, and comes back with totally the wrong questions, having taken an approach I don't like. I don't answer any of the questions; I just kill the session and start a new one with a different prompt.
reply
longtermemory
23 days ago
[-]
I agree. In my experience, "single-purpose sessions for small tasks" is the key.
reply
zippolyon
19 days ago
[-]
Great work on the session analytics. The "error cascade in first 2 minutes predicts abandonment" finding is exactly the kind of signal that causal auditing can act on. We built K9 Audit for the complementary problem: not just when sessions fail, but why — recording every tool call as a CIEU five-tuple (intent vs actual outcome) with a hash chain. The "26% abandoned" stat likely hides silent deviations that looked like success. k9log causal --last traces root cause across steps in seconds. https://github.com/liuhaotian2024-prog/K9Audit
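(For readers unfamiliar with the idea, here is a generic sketch of a hash-chained audit record. It is illustrative only and is not K9 Audit's actual format or API.)

    # Generic illustration of a hash-chained audit log (not K9 Audit's
    # actual schema): each record commits to the previous record's hash,
    # so any later edit to the history breaks the chain.
    import hashlib, json

    def append_record(chain, intent, tool, args, outcome):
        prev = chain[-1]["hash"] if chain else "0" * 64
        body = {"intent": intent, "tool": tool, "args": args,
                "outcome": outcome, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        chain.append({**body, "hash": digest})
        return chain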
reply
mrothroc
23 days ago
[-]
Nice, I've been working on the same problem from a different direction. Instead of analyzing sessions after the fact, I built a pipeline that structures them. Stages (plan, design, code, review, same as you'd have with humans) with gates in between.

The gates categorize issues into auto-fix or human-review. Auto-fix gets sent back to the coding agent, it re-reviews, and only the hard stuff makes it to me. That structure took me from about 73% first-pass acceptance to over 90%.

What I've been focused on lately is figuring out which gates actually earn their keep and which ones overlap with each other. The session-level analytics you're building would be useful on top of this, I don't have great visibility into token usage or timing per stage right now.

I wrote up the analysis: https://michael.roth.rocks/research/543-hours/

I also open sourced my log analysis tools: https://github.com/mrothroc/claude-code-log-analyzer
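In case it helps others, a rough sketch of the gate loop described above; the stage names, gate objects, and agent interface here are placeholders rather than the actual pipeline:

    # Rough sketch of the stage/gate idea (placeholder names, not the real
    # pipeline). Each gate classifies issues; auto-fixable ones go back to
    # the coding agent, and only the rest are queued for human review.
    STAGES = ["plan", "design", "code", "review"]

    def run_pipeline(task, agent, gates):
        artifact = task
        for stage in STAGES:
            artifact = agent.run(stage, artifact)
            while True:
                issues = gates[stage].check(artifact)
                auto = [i for i in issues if i.auto_fixable]
                if not auto:
                    break
                artifact = agent.fix(artifact, auto)  # auto-fix loop
            human = [i for i in issues if not i.auto_fixable]
            if human:
                return {"stage": stage, "needs_review": human}
        return {"stage": "done", "artifact": artifact}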

reply
keks0r
23 days ago
[-]
This is great. How are you "identifying" these stages in the session? Or is it just different slash commands / skills per stage? If it's something generic enough, maybe we can build the analysis in so it works for your use case. Otherwise, feel free to fork the repo and add your additional analysis. Let me know if you need help.
reply
mrothroc
23 days ago
[-]
I use prompt templates, so in the first version of my analysis script on my own logs I looked for those. However, to make it generic, I switched to using Gemini as a classifier. That's what's in the repo.
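Roughly, the idea is something like this; a simplified sketch with a placeholder call_llm rather than the actual Gemini client code in the repo:

    # Simplified sketch of an LLM-based stage classifier for session chunks
    # (the real repo uses Gemini; call_llm here is a placeholder for
    # whatever client you wire in).
    STAGES = ["plan", "design", "code", "review", "other"]

    def classify_stage(chunk, call_llm):
        prompt = (
            "Classify this Claude Code session excerpt into exactly one of: "
            + ", ".join(STAGES) + ". Reply with the label only.\n\n" + chunk
        )
        label = call_llm(prompt).strip().lower()
        return label if label in STAGES else "other"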
reply
tmaly
22 days ago
[-]
I have seen numbers claiming tools are only called 59% of the time.

Saw another comment on a different platform where someone floated the idea of dynamically injecting context with hooks in the workflow to make things more deterministic.
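As I understand Claude Code's hooks, the injection part can be a small script wired up as a UserPromptSubmit hook in .claude/settings.json, with whatever it prints to stdout added to the context. The exact input/output contract may differ by version, so treat this as a hedged sketch rather than a recipe:

    # Sketch of a UserPromptSubmit hook that injects extra context.
    # Assumption: the hook receives JSON on stdin and anything printed to
    # stdout is appended to the context; check the current hooks docs,
    # since the exact contract may change between versions.
    import json, subprocess, sys

    hook_input = json.load(sys.stdin)  # includes the user's prompt, among other fields
    branch = subprocess.run(["git", "branch", "--show-current"],
                            capture_output=True, text=True).stdout.strip()
    print(f"Current git branch: {branch}. Follow the conventions in CLAUDE.md.")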

reply
evrendom
22 days ago
[-]
interesting, where did you see that?
reply
monsterxx03
22 days ago
[-]
I built something in a similar space: Linko (https://github.com/monsterxx03/linko), a transparent MITM proxy with a webui that lets you see what's actually being sent between Claude Code and LLM APIs in real time.

It's been really helpful for me to debug my own sessions and understand what the model is seeing (system prompts, tool definitions, tracing tool calls, etc.).
reply
jimmySixDOF
20 days ago
[-]
Note: Linko currently only supports macOS.
reply
152334H
23 days ago
[-]
is there a reason, other than general faith in humanity, to assume those '1573 sessions' are real?

I do not see any link or source for the data. I assume it is to remain closed, if it exists.

reply
keks0r
23 days ago
[-]
It's our own sessions, from our team, over the last 3 months. We used them to develop the product and learn about our usage. You are right, they will remain closed. But I am happy to share aggregated information if you have specific questions about the dataset.
reply
languid-photic
23 days ago
[-]
It's reasonable to note that without sharing the data, these findings can't be audited or built upon.

But I think the prior on 'this team fabricated these findings' is very low.

reply
dboreham
22 days ago
[-]
One potential reason for sessions being abandoned within 60 seconds, in my experience, is realizing you forgot to set something in the environment: a GitHub token is missing, the toolchain for the language isn't on the PATH, etc. Claude doesn't provide elegant ways to fix those things in-session, so I'll just exit, fix things up, and start Claude again. It does have the option to continue a previous session, but there's typically no point in these "oops, I forgot that" cases.
reply
c5huracan
22 days ago
[-]
The "no meaningful benchmark for good agentic session performance" point resonates. Success varies so much by task type that a single metric is almost meaningless. A 60-second documentation lookup and a 30-minute refactoring session could both be successes.

Curious what shape the benchmark takes. Are you thinking per-task-type baselines, or something more like an aggregate efficiency score?

reply
marconardus
23 days ago
[-]
It might be worthwhile to include at least part of an example run in your readme.

I scrolled through and didn't see enough to justify installing and running the thing.

reply
keks0r
23 days ago
[-]
Ah sorry, the readme is more about how to run the repo. The "product" information is on the website instead: https://rudel.ai
reply
blef
23 days ago
[-]
reply
keks0r
23 days ago
[-]
Our focus is a bit more cross-team, and in our internal version we also have some continuous improvement monitoring, which we will probably release as well.
reply
mentalgear
23 days ago
[-]
> A local-first desktop and web app for browsing, searching, and analyzing your past AI coding sessions. See what your agents actually did across every project.

Thx for the link - sounds great!

reply
KaiserPister
23 days ago
[-]
This is awesome! I’m working on the Open Prompt Initiative as a way for open source to share prompting knowledge.
reply
keks0r
23 days ago
[-]
Cool, what's the link? We have some learnings, especially in the "skill guiding" part of our example.
reply
alyxya
23 days ago
[-]
Why does it need login and cloud upload? A local cli tool analyzing logs should be sufficient.
reply
keks0r
23 days ago
[-]
We used it across the team, and when you want to bring metrics together across multiple people, it's easier on a server than locally.
reply
steve_adams_86
21 days ago
[-]
Does this comply with Anthropic's terms? I've been developing small apps here and there on top of Claude Code and each time I find I'm too uncomfortable with their terms to bother distributing it.
reply
mbesto
23 days ago
[-]
So what conclusions have you drawn or could a person reasonably draw with this data?
reply
avilesrafa
23 days ago
[-]
Hey, Rafa here, another Rudel AI developer. The ultimate goal is to make developers more productive. Suddenly everyone was having dozens of sessions per day and producing 10X more code; we were getting 10X more activity, but not necessarily 10X more productivity.

With this data, you can measure whether you are spending too many tokens on sessions, how successful sessions are, and what makes them successful. Developers can also share individual sessions where they struggled with their peers, exchange learnings, and avoid errors that others have already run into.

reply
evrendom
22 days ago
[-]
Yes, what Rafa said... aaand we can see who's wasting the $200 Claude subscription by not using it.
reply
ekropotin
23 days ago
[-]
> That's it. Your Claude Code sessions will now be uploaded automatically.

No, thanks

reply
keks0r
23 days ago
[-]
It will only be enabled for the repo where you ran the `enable` command. Or use the CLI `upload` command for specific sessions.

Or you can run your own instance, but we still need to add docs on how to configure the endpoint properly in the CLI.

reply
tgtweak
23 days ago
[-]
It's a big ask to expect people to upload their Claude Code sessions verbatim to a third party with nothing on the site about how it's stored, who has access to it, who they are, etc.
reply
keks0r
23 days ago
[-]
We don't expect anything; we put it out there, and hopefully we can build trust over time. Maybe you don't trust us, and that's fair. You can still run it yourself. We are happy about everyone trying it out, hosted or not. We host it just to make it easier for people who want to try it, but you don't have to. But you have a good point, we should probably put more about this on the website. Thanks.
reply
lgvdp
22 days ago
[-]
I see a lot of people with concerns about privacy and security. It's not mentioned in the post, but the GitHub repo shows how to self-host. No need to use a third party; you can just run your own.
reply
evrendom
22 days ago
[-]
yup!
reply
locma
20 days ago
[-]
What's the end goal here — using session data as a feedback loop to iteratively improve CLAUDE.md and agent workflows based on real usage patterns?
reply
ericwebb
22 days ago
[-]
I 100% agree that we need tools to understand and audit these workflows for opportunities. Nice work.

TBH, I am very hesitant to upload my CC logs to a third-party service.

reply
zippolyon
19 days ago
[-]
The hesitation about log upload is exactly why K9 Audit works differently — local by default, SHA256 hash-chained, zero data leaves your machine unless you explicitly configure a sync endpoint. pip install k9audit-hook and drop one JSON file in .claude/. https://github.com/liuhaotian2024-prog/K9Audit
reply
evrendom
22 days ago
[-]
you can host the whole thing locally :)
reply
ericwebb
22 days ago
[-]
I missed that important detail :) thanks
reply
swaminarayan
21 days ago
[-]
26% of AI coding sessions are abandoned within 60 seconds. Is this a prompt problem, a tooling problem, or a limitation of current models?
reply
alvarogar
21 days ago
[-]
The main reason we've seen is a bad first impression: the agent misunderstands the initial prompt, doesn't load the available tools, or starts doing something clearly wrong, and the user ctrl+c's instead of correcting it, since it's faster to just start a new one.

The learning is that it's fixable: better CLAUDE.md instructions, clearer initial prompts, and skill configurations that reduce uncertainty cut abandonment significantly on our team.

reply
evrendom
21 days ago
[-]
Evren here from the Rudel team, and that's a great question... I genuinely don't know. Let me find out.
reply
smallerfish
22 days ago
[-]
> content, the content or transcript of the agent session

Does this include the files being worked on by the agent in the session, or just the chat transcript?

reply
evrendom
22 days ago
[-]
File content is uploaded as well: https://github.com/obsessiondb/rudel?tab=readme-ov-file#secu...

If you don't trust us with that data though (which I can understand), you can host the whole thing locally on your machine.

reply
anthonySs
23 days ago
[-]
Is this observability for your Claude Code calls, or specifically for high-level insights like skill usage?

Would love to know your actual day-to-day use case for what you built.

reply
keks0r
23 days ago
[-]
The skill usage was one of those "I am wondering about..." things, and we just prompted it into the dashboard to understand it. We have several of these hunches where it's easier to analyze with everyone's sessions together, to see similarities as well as differences, and we answered a few of those one-off questions this way. Ongoing, we also make heavy use of our "learning" tracking, which isn't really usable by others right now because it integrates with a few of our other things, but we are planning to release it soon as well. The single-session view also sometimes helps to debug a session and then better guide a "learning". So it's a mix of different things. Since we have multiple projects, we can even derive how much we are working on each project, and it maps better than our Linear points :)
reply
mentalgear
23 days ago
[-]
How diverse is your dataset?
reply
keks0r
23 days ago
[-]
Team of 4 engineers, 1 data & business person, 1 design engineer.

I would say a roughly equal number of sessions between them (very roughly).

Also, maybe 40% of the coding sessions are in a large brownfield project, 50% greenfield, and the remaining 10% are non-coding tasks.

reply
lau_chan
23 days ago
[-]
Does it work for Codex?
reply
keks0r
23 days ago
[-]
Yes, we added Codex support, but it's not yet extensively tested. Session upload works, but we still have to QA all the analytics extraction.
reply
bool3max
22 days ago
[-]
Why is the comment calling out the biggest issue with this so heavily downvoted? Privacy is a massive concern with this.
reply
cluckindan
23 days ago
[-]
Nice. Now, to vibe myself a locally hosted alternative.
reply
vidarh
23 days ago
[-]
I was about to say they have a self-hosting guide, but I see they use third-party services that seem absolutely pointless for such a tiny dataset. For comparison, I have a project that happily analyzes 150 million tokens worth of Claude session data, with some basic caching, in plain text files on a $300 mini PC in seconds... If/when I reach billions, I might throw SQLite into the stack. Maybe once I reach tens of billions, something bigger will be worthwhile.
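To illustrate the point, a minimal sketch of that kind of in-process analysis over the local session logs; the ~/.claude/projects layout and per-message usage fields are what I have seen on disk, not a documented stable format:

    # Minimal sketch: sum token usage across local Claude Code session logs.
    # Assumes the ~/.claude/projects/**/*.jsonl layout and per-message
    # "usage" fields, which are observations rather than a stable API.
    import json
    from pathlib import Path

    total = 0
    for f in Path.home().joinpath(".claude", "projects").rglob("*.jsonl"):
        for line in f.open():
            try:
                usage = json.loads(line)["message"]["usage"]
                total += usage.get("input_tokens", 0) + usage.get("output_tokens", 0)
            except (json.JSONDecodeError, KeyError, TypeError):
                continue
    print(f"{total:,} tokens across local sessions")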
reply
keks0r
23 days ago
[-]
There is also a docker setup in there to run everything locally.
reply
vidarh
23 days ago
[-]
That's great. It's still over-engineered given processing this data in-process is more than fast enough at a scale far greater than theirs.
reply
keks0r
23 days ago
[-]
The docker-compose file contains everything you should need: https://github.com/obsessiondb/rudel/blob/main/docker-compos...
reply
longtermemory
23 days ago
[-]
From session analysis, it would be interesting to understand how crucial the documentation, the level of detail in CLAUDE.md, is. It seems to me that sometimes documentation (that's too long and often out of date) contributes to greater entropy rather than greater efficiency of the model and agent.

It seems to me that sometimes it's better and more effective to remove, clean up, and simplify (both from CLAUDE.md and the code) rather than having everything documented in detail.

Therefore, from session analysis, it would be interesting to identify the relationship between documentation in CLAUDE.md and model efficiency. How often does the developer reject the LLM output in relation to the level of detail in CLAUDE.md?

reply
avilesrafa
23 days ago
[-]
This is a great idea, documented and added to our roadmap.
reply
vova_hn2
23 days ago
[-]
It's sad that on top of black-box LLMs we also build all these tools that are pretty much black boxes as well.

It has become very hard to understand what exactly is sent to the LLM as input/context and how exactly the output is processed.

reply
keks0r
23 days ago
[-]
The tool does have a quite detailed view for individual sessions, which lets you understand input and output much better, but obviously it's still mysterious how the output is generated from that input.
reply