Streams your prompt + completion usage in real time
Predicts whether you’ll hit the cap before the session ends
Runs 100% locally (no auth, no server)
Presets for Pro, Max × 5, Max × 20 — tweak a JSON if your plan’s different
GitHub: https://github.com/Maciek-roboblog/Claude-Code-Usage-Monitor
It’s already spared me a few “why did my run just stop?” moments, but it’s still rough around the edges. Feedback, bug reports, and PRs welcome!
If I can offer any advice, it's that heavy use of emojis in a project readme (at least for me) looks so unprofessional and makes me worry that a project was vibe-coded, in the sense that the AI was possibly not babysat to the extent I think it should be. That's just me, though
But hey — if it's stupid and it works, it ain't stupid.
As a separate comment, would it not be better to ask for your plan on first run and set up a config file to remember it, with a note on how to change it, rather than rely on command-line flags?
Also, shouldn't it be able to pick up the timezone from the local computer? Why would it "default" to a fixed timezone in Poland?
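Something like this is what I have in mind (paths and field names are hypothetical, not how the tool currently works):

# Hypothetical first-run setup: remember the plan once instead of passing --plan every time
mkdir -p ~/.config/ccusage-monitor
cat > ~/.config/ccusage-monitor/config.json <<'EOF'
{ "plan": "max20" }
EOF

# And the machine already knows its own timezone; on Linux with systemd, for example:
timedatectl show --property=Timezone --value   # prints e.g. Europe/Warsaw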
Need to write a document about converting a Rust project to Typescript? A picture of an abandoned warehouse full of expressionless baby doll heads fits perfectly.
The irony of comments like this on software designed entirely for AI coding...
Up until recently I tended to "trust" GitHub repos a bit more; now I feel like I need to have my guard up so I don't fall into a trap of using something like this. Funnily enough, a good first metric for me now is the number of emojis in the readme - the more emojis, the more likely you should stay away from it
That turns these meandering emoji fests into suitable documentation. YMMV
My total tokens used since I started using Claude Code on May 27th was 1,374,439,311 worth around $3397.34.
Do they have huge margins on the API or are they just losing money? I use it every day but I don't feel like I'm abusing it or anything
"Claude Code is also the most direct way to consume Sonnet for coding, rather than going through all the hidden prompting and optimization than the other products do. You will feel that right away, as the average spend per user is $6/day on Claude Code compared to $20/mo for Cursor, for example. Apparently, there are some engineers inside of Anthropic that have spent >$1,000 in one day!"
Link: [https://www.latent.space/p/claude-code](https://www.latent.space/p/claude-code)
To be honest, I'm a little scared to use this also. I feel like ideally each worktree would also run in a container, but that seems quite a bit harder to make work as smoothly as this does.
Or is this a Claude Code specific limit? I haven't used Claude Code extensively yet.
While my plan didn't pan out, cuz it was way too effective, I can confidently say that I'm going through 3-6k tokens per prompt on average, and usually get around 3 hours of usage before I'm hitting the rate limit.
The limit is probably closer to 300k than <10k
Also, the chat interface doesn't have a separate limit: once you hit it via Claude Code, you can't use the website anymore either.
Maybe it's a 7k limit per prompt? Dunno if I exceeded that before
Transforming… (212s · 26.1k tokens · esc to interrupt)
I reset just under 2 hours ago, probably been going at this pace for the last hour or so.
This seems like a problem if for example, you hit 90% usage, pass the window, then burn through the remaining 10% quickly and have to wait a long time.
Can this be installed with uv? https://github.com/astral-sh/uv
Edit:
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install the required CLI tool (Node.js)
npm install -g ccusage
# Clone and setup
git clone https://github.com/Maciek-roboblog/Claude-Code-Usage-Monitor.git
cd Claude-Code-Usage-Monitor
# Install Python deps with uv
uv add pytz
chmod +x ccusage_monitor.py
# Run it
uv run python ccusage_monitor.py --plan max20 --timezone America/New_York
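One caveat I'm not sure about: `uv add` expects a pyproject.toml in the project, so if the repo doesn't ship one, pulling the dependency in ad hoc works too:

# Ad-hoc alternative if `uv add` complains about a missing pyproject.toml
uv run --with pytz python ccusage_monitor.py --plan max20 --timezone America/New_York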
> pipx install git+https://github.com/Maciek-roboblog/Claude-Code-Usage-Monitor
> ccusage_monitor
I think there is a similar command for uv: uvx? Although I'm not sure whether uvx has the same functionality/purpose as pipx.
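For what it's worth, `uvx` is the counterpart of `pipx run` (one-off, ephemeral environment), while `uv tool install` is the persistent counterpart of `pipx install`. Assuming the repo exposes a `ccusage_monitor` entry point, as the pipx command above implies, the uv equivalents would look roughly like:

# Persistent install, like `pipx install`
uv tool install git+https://github.com/Maciek-roboblog/Claude-Code-Usage-Monitor
ccusage_monitor

# One-off run in a temporary environment, like `pipx run`
uvx --from git+https://github.com/Maciek-roboblog/Claude-Code-Usage-Monitor ccusage_monitor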
how about making it a tool that claude can use directly?
https://ma.rtin.so/posts/monitoring-claude-code-with-datadog...
Disclaimer - I work at Anthropic, but not on Claude Code; the team is responsive via GH issues though!
[0] https://docs.anthropic.com/en/docs/claude-code/monitoring-us...
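From memory (double-check the exact variable names against [0]), turning on the built-in OpenTelemetry export looks roughly like this:

# Enable Claude Code telemetry and point it at a local OTLP collector
export CLAUDE_CODE_ENABLE_TELEMETRY=1
export OTEL_METRICS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
claude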
Even then, this can happen from time to time. It's important to remember that you're using an extremely expensive tool which, despite what YouTubers and bloggers say, isn't magic.
I'm of an age where I rarely watch short-form videos, so I would read the article, but you'll miss an audience if that's all you produce.
I did a bisect and found this one to be consuming ~1.5 GB of memory on its own, and that's when I removed it. YMMV.
How many tokens used in a heavy vibe coding day?
Average US per capita emissions are something like 40 kg of CO2 per day
A single flight from JFK to LAX produces around 20,000kg of CO2. Using the 8.3g value means a flight is equivalent to 2.41 billion tokens.
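Spelling out that arithmetic (it only lands in the billions if the 8.3 g figure is per 1k tokens rather than per request):

# 20,000 kg per flight = 20,000,000 g; at 8.3 g per 1k tokens:
echo "scale=0; 20000 * 1000 / 8.3" | bc   # ~2,409,638 thousand-token units, i.e. ~2.41B tokens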
Plastic bags, paper straws (wrapped in plastic), most realities of recycling, vehicle selections, etc.
Leads to a lot of unpopular things.
Have you walked a beach in the last decade?
Finally, most of the (local, not even thinking about the developing world) pollution is not deliberate. It blows in from other places, usually. I live rural and I'm continually picking up plastic garbage from my ditch or back forest or fields that blows in from the nearby highway and roads, especially after garbage pickup day.
Production needs to be severely curtailed.
Ultimately, individual habits do add up. But with climate, one would be hard pressed to find evidence that conservation is the path forward. It does not work, unfortunately.
For now, sure, it might be ridiculously minor, but when it starts to ramp up, who's to say it won't be a ridiculous amount of energy? Maybe not even measuring the CO2, but I would love to graph the increase in energy spent over time.
So on that day you're 10x'ing the average US person's day
Repo is here if you're curious: https://github.com/Maciek-roboblog/Claude-Code-Usage-Monitor...
It produced a patch. Unfortunately it was for removing the emojis from the readme.
ccusage says I had 1k input tokens, 12k output and 1.2m cache create.
I'm not sure if that is 18.3g, 138.3g or 1213 * 8.3g.
At the highest number that's ~10 kg, or 25% of the average US daily per-capita emissions, or about 1 gallon of gas.
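Showing my work, taking the highest reading (every token, including cache-create, at 8.3 g per 1k tokens):

# ~1,213k tokens total (1k in + 12k out + 1,200k cache-create)
echo "scale=1; 1213 * 8.3 / 1000" | bc   # ~10 kg CO2, i.e. roughly 25% of a 40 kg per-capita day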
This article [0] mentions "8.3g CO2" but its linked source [1] uses a different number - 4.32g. Perhaps it was revised after publication.
I ran the search again and got different numbers... I'm sure the real numbers will be changing quite a bit over time too.
[0] https://ditchcarbon.com/blog/llm-carbon-emissions
[1] https://smartly.ai/blog/the-carbon-footprint-of-chatgpt-how-...
Fair enough.
For $200 per month they’ll only use carbon-free power sources for your prompts. At $100 they’ll use nuclear, then $20 per month for coal, and then there will be a free tier where your prompts are powered by throwing baby seals into a furnace.
I think that's by design
The tool also says I used 198% of my max5 plan %).
I guess it expects Opus usage and I was using Sonnet (after the first 20%, the auto-switch thing)
But man, do I just want a way to quickly glance at my API credits and to just occasionally chat with a model from those credits without LibreChat or OpenWebUI. Or set some limits, or see some usage metrics.
And please, please use "forever auth" with passkeys or something; what is up with that auth email that just takes ages for a quick glance?! It always takes me 3 attempts to find which address I used at sign-up...
Oh, and make it clearer why you have that API/credits system and a subscription. Why is it so difficult to understand, when you start using Claude, that they're 2 different, unrelated worlds?? The first time I started my subscription I just couldn't figure out where the API section was, until I realized it just wasn't there.
I feel like I'm "holding it wrong", but please make it easier to hold it right then.
As a person with the simple but brilliant technology of a freaking password manager, I LOATHE email-to-login / no-password websites. They are dreadful; we've somehow managed to come up with something worse in UX rather than moving forward.
If you're working on a product that does this, or wants to do this, please please PLEASE reconsider, it's such a PITA for technical users and normies alike.
The email auth flow is a simplified and more efficient way to achieve the same outcome.
So when I open it up on a new machine or after months, you have to go through that magic link bs multiple times! For all your accounts/channels! I did that 2 or 3 times and then I just stopped using it.
I don't even remember what I signed up for over the years. I know some of it was nice (like a LoRa IoT channel, an "AWS professionals" channel, something I set up for the LoRaWAN network in the previous city I lived in...). All just tuned out because of the login BS.
Oops, when I saw the /install-github-app command, I assumed that since I'm on a Max plan and Claude Code in my terminal is free, the GitHub integration would be free too.
So I hooked it up to my repo and tagged @claude in everything. It was a lot of fun tagging it in back-burner issues and seeing it solve issues I couldn't be bothered to do for years. Or just seeing what it would come up with on really low-effort, poorly explained issues.
But not worth spending 50+ cents every time.
On the other hand, you solved several years-old issues for under $50… which seems like a big win.
Currently in a gray area, but allowed per Anthropic's comment on it - https://github.com/anthropics/claude-code-action/issues/4#is...
I'll try it out right now.
Aren't Claude and the Anthropic API two separate platforms? How does it spend from your other account?
Your link and my link are just two views into the same underlying account (unless you registered separate accounts of course). At your link, you can manage your Claude subscription. At my link, you can manage API keys and API credits.
But they aren't siloed like you (or I) think.
I got Claude Code when it launched and it always charged API credits until it was included in the $100 max subscription (and then in the cheap base subscription). It moves fast and it's not well communicated.
It also just psychologically saps the fun out of something when there's a fee attached per invocation. And the UX is that I buy $50 of API credits, time passes, and then it breaks because I have to refill it again.
If it were literally a "fix issue for 50 cents" button that charges you 50 cents and that's that, then it would be different. But instead your API credits drain and you can't evaluate if it was worth it.
I guess I don't want to duplicate Python/Node for every tool, but I also don't want it to be fragile. And this wants a Node CLI tool installed globally, which I've found breaks easily with changing versions.
`uv tool install` doesn’t duplicate Python for every tool.
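Right - as far as I understand it, each tool gets its own small virtual environment, but they all reference uv's shared managed interpreters instead of copying Python per tool (using ruff here just as an example):

# Each tool lives in its own isolated environment...
uv tool install ruff
uv tool list
# ...but they all share uv's managed Python builds rather than duplicating the interpreter
uv python list --only-installed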