I've been using Claude Code heavily for autonomous agent workflows and hit a consistent pain point: zero visibility into what's running, what's stuck, and how close to context overflow each session is.
So I built claudedash. One command:
npx -y claudedash@latest start
Opens a local dashboard on port 4317. It passively watches ~/.claude/tasks/
(the files Claude Code writes natively) and renders:

- Live Kanban: pending → in_progress → completed → blocked
- Context health % per session (last-message tokens / model max)
- Worktree observability (per-branch agent status)
- Plan Mode: reads queue.md, renders a dependency graph, syncs execution.log
- MCP server, so Claude can query its own dashboard
- Cost tracker, hook event log, session history
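To give a feel for the Kanban side, here's a minimal sketch of grouping tasks into columns. The task shape is an assumption for illustration (a record with a `status` field), not claudedash's actual on-disk format:

```typescript
// Hypothetical task shape -- the real files under ~/.claude/tasks/
// may differ; this just illustrates bucketing tasks into Kanban columns.
type TaskStatus = "pending" | "in_progress" | "completed" | "blocked";

interface Task {
  id: string;
  subject: string;
  status: TaskStatus;
}

// Group a flat list of tasks into ordered Kanban columns.
function toKanban(tasks: Task[]): Record<TaskStatus, Task[]> {
  const columns: Record<TaskStatus, Task[]> = {
    pending: [],
    in_progress: [],
    completed: [],
    blocked: [],
  };
  for (const task of tasks) columns[task.status].push(task);
  return columns;
}
```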
Technical notes for the curious:

- Fastify + chokidar for the server
- SSE (one shared connection via a module-level singleton)
- Server-side mtime-based caching (no unnecessary file reads)
- Static Next.js export, no server rendering
- Zero telemetry, fully local
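The mtime-based caching idea is simple: stat the file first, and only re-read it when the modification time has changed. A self-contained sketch of that pattern (not the actual implementation -- the demo file and names are made up):

```typescript
import { mkdtempSync, readFileSync, statSync, utimesSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Cache file contents keyed by path, invalidated when mtime changes.
const cache = new Map<string, { mtimeMs: number; content: string }>();

function readCached(path: string): string {
  const { mtimeMs } = statSync(path); // cheap stat instead of a full read
  const hit = cache.get(path);
  if (hit && hit.mtimeMs === mtimeMs) return hit.content; // unchanged: serve cache
  const content = readFileSync(path, "utf8"); // new or changed: read once, cache
  cache.set(path, { mtimeMs, content });
  return content;
}

// Demo: the first read populates the cache; a write with a bumped
// mtime invalidates it and forces a fresh read.
function demo(): string[] {
  const file = join(mkdtempSync(join(tmpdir(), "mtime-")), "task.json");
  writeFileSync(file, "v1");
  const first = readCached(file);
  writeFileSync(file, "v2");
  utimesSync(file, new Date(), new Date(Date.now() + 1000)); // force a new mtime
  const second = readCached(file);
  return [first, second];
}
```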
The context health feature was trickier than expected: the naive approach of summing all input_tokens across the session gives wildly inflated numbers. The correct approach is to use only the last message's input_tokens + cache_read_input_tokens, since that's what actually occupies the context window right now.
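In code, that boils down to something like the sketch below. The `usage` field names follow the Anthropic API's token accounting; the 200k default window and the transcript shape are assumptions for illustration:

```typescript
// Per-message token accounting, as reported in API-style usage blocks.
interface Usage {
  input_tokens: number;
  cache_read_input_tokens?: number;
}

// Context health: what fraction of the window the *last* message occupies.
// Summing input_tokens over the whole session double-counts the history,
// because every request re-sends (or cache-reads) the prior context.
function contextHealthPct(
  messages: { usage: Usage }[],
  maxTokens = 200_000, // assumed model context limit
): number {
  const last = messages.at(-1);
  if (!last) return 0;
  const used = last.usage.input_tokens + (last.usage.cache_read_input_tokens ?? 0);
  return Math.min(100, (used / maxTokens) * 100);
}
```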
180 commits, MIT license.
GitHub: https://github.com/yunusemrgrl/claudedash
Website: https://yunusemrgrl.github.io/claudedash/
Happy to answer questions about the implementation or design decisions.
Built this because I kept coming back to long Claude Code sessions with no idea what had happened — which task was done, which was stuck, how close to context overflow.
The trickiest part technically: context health calculation. The naive approach (summing all input_tokens across the session lifetime) gives wildly inflated numbers. The correct approach is using only the LAST message's input_tokens + cache_read_input_tokens — that's what's actually in the context window right now.
Stack: Fastify + chokidar (file watcher) + SSE + Next.js static export. Zero telemetry, everything local.