Show HN: Terminal dashboard that throttles my PC during peak electricity rates
97 points
1 day ago
| 13 comments
| naveen.ing
WattWise is a CLI tool that monitors my workstation’s power draw using a smart plug and automatically throttles the CPU & GPUs during expensive Time-of-Use electricity periods. Built with Python, uses PID controllers for smooth transitions between power states. Works with TP-Link Kasa plugs and Home Assistant.
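To make the PID part of that description concrete, here is a minimal sketch of what such a control loop could look like. This is not the actual optimizer code (which isn't open sourced yet), just one plausible shape using python-kasa and the simple-pid library; the plug address, target wattage, gains, and frequency bounds are all made-up illustrative values, and it assumes a Kasa plug that reports power.

    import asyncio
    import glob

    from kasa import SmartPlug        # python-kasa (older SmartPlug API)
    from simple_pid import PID        # simple-pid package

    PLUG_IP = "192.168.1.50"                    # hypothetical plug address
    TARGET_WATTS = 400                          # illustrative cap for peak hours
    FREQ_MIN_KHZ, FREQ_MAX_KHZ = 1_500_000, 3_700_000

    # PID output is interpreted as a max-frequency setpoint in kHz:
    # draw above the target pushes the cap down, draw below lets it rise.
    pid = PID(Kp=2000, Ki=200, Kd=0, setpoint=TARGET_WATTS,
              output_limits=(FREQ_MIN_KHZ, FREQ_MAX_KHZ))

    def set_max_freq(khz: int) -> None:
        """Cap scaling_max_freq on every core via cpufreq sysfs (needs root)."""
        for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq"):
            with open(path, "w") as f:
                f.write(str(khz))

    async def control_loop() -> None:
        plug = SmartPlug(PLUG_IP)
        while True:
            await plug.update()
            watts = plug.emeter_realtime["power"]   # current draw at the wall
            set_max_freq(int(pid(watts)))
            await asyncio.sleep(5)

    asyncio.run(control_loop())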
naveen_k
1 day ago
[-]
Quick update: Definitely wasn't expecting this to end up on the front page. I was more focused on publishing the dashboard than the power optimizer service I'm running. I'll take all the feedback into account and will open source an improved version of it soon. Appreciate all the comments!
reply
vondur
1 day ago
[-]
That's quite a beefy workstation you got there!
reply
PeterStuer
1 day ago
[-]
Had a quick look through the code but I can't find where he actually throttles the PC. Can anyone point me to it?

https://github.com/naveenkul/WattWise

reply
dartos
1 day ago
[-]
Yeah I don’t see anything that even suggests it throttles the PC.

Looks like it’s just a display.

reply
dylan604
1 day ago
[-]
You mean like the title of the submission suggesting that it throttles?
reply
dartos
1 day ago
[-]
I meant in the code……… obviously…
reply
naveen_k
1 day ago
[-]
Sorry, I only open sourced the dashboard part, as mentioned at the bottom of the blog post. I'm still working on improving the 'Power optimizer' service, so I'll open source that soon as well.
reply
PeterStuer
1 day ago
[-]
If it were up to me, I would switch complete performance profiles through something like tuned-adm rather than trying to change just CPU frequencies. There are too many interlinked things that can affect throughput efficiency.
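For illustration, a profile switch like that can be tiny. A sketch, assuming the tuned daemon is installed; "powersave" and "throughput-performance" are stock profiles shipped with tuned:

    import subprocess

    def apply_profile(peak_pricing: bool) -> None:
        # Swap the whole tuned profile instead of tweaking individual knobs.
        profile = "powersave" if peak_pricing else "throughput-performance"
        subprocess.run(["tuned-adm", "profile", profile], check=True)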
reply
naveen_k
1 day ago
[-]
Thanks, I'll check it out.
reply
russdill
1 day ago
[-]
If your computer is still doing the same bursty jobs during that period, it will use less power but just as much energy. Sure, you can reduce the power, but if you aren't also reducing what you ask it to do, it'll just run at the maximum allowed power for a longer period of time.
reply
PaulKeeble
1 day ago
[-]
All modern CPUs will boost into high clock speeds and voltages to get work done quicker, but at considerably higher power draw per operation. On that side of the equation, it's clear that boosting uses more energy. The problem is that the entire CPU package stays on longer if you don't boost, and that costs power too, so it's a trade-off between the two. Generally we consider there isn't much difference between them, but I'm not sure about that: having seen the insanity of 13th and 14th gen Intel chips consuming 250W when 120W gets about 95% of the performance, I think it's very likely that moving down to power save and avoiding that level of boosting saves small amounts of power.
reply
delusional
1 day ago
[-]
This is some pretty old analysis, but I remember when smartphones came out and people were thinking about throttling their applications to lower power consumption the general advice was to just "race to idle".

The consensus thus was that spending more time in lower power states (where you use ~0W) was much more efficient than spending a longer amount of time in the CPU sweetspot, but with all sort of peripherals online that you didn't need anyway.

I remember when Google made a big deal out of "bundling" idle CPU and network requests, since bursting them out was more efficient than having the radio and CPU trotting along at low bandwidth.

reply
wongarsu
1 day ago
[-]
However there are two factors that might make "race to idle" more valid on phones than on most other platforms:

Smartphone chips are designed to much stricter thermal and power limits. There is only so much power you can get out of the tiny battery, and only so much heat you can get rid of without making the phone uncomfortably hot. Even in a burst that puts a limit on the wastefulness. Desktop CPUs are very different: If you can get 10% more performance while doubling power draw, people will just buy bigger coolers and bigger power supplies to go along with the CPU. Notebook CPUs are somewhere in the middle: limited, but with much better cooling and much more powerful batteries than phones.

The other thing is the operating system: "race to idle" makes sense in phones because the OS will actually put the CPU into sleep states if there's nothing to do, and puts active effort into not waking the CPU up unnecessarily and cramming work into the time slots when the CPU is active anyways. Desktop operating systems just don't do that to the same degree. You might race to idle, but the rest of the system will then just waste power with background work once it's idle.

reply
PaulKeeble
1 day ago
[-]
The thing is, all these phones also come with 2 or 3 different classes of CPU cores in order to save power; those slower cores must achieve much better watts per op, otherwise they wouldn't bother and would just spend all that die space on faster cores. Clearly most mobile phones are not purely racing to idle at this point either, given they have been using a big/little strategy for over a decade.

What has happened is they realised that waking up CPUs for individual tasks was inefficient and kept them from sleeping, so they batched work together; the cost there is basically the overhead of going in and out of sleep, which presumably wastes a fair amount of power. This also explains why so many high-end phones now have one ultra-slow core that deals with the background stuff when the screen is off: the rest can all just sleep while this low-power core sits there on low clocks, sipping minimal power most of the time.

reply
jorvi
1 day ago
[-]
> Desktop CPUs are very different: If you can get 10% more performance while doubling power draw, people will just buy bigger coolers and bigger power supplies to go along with the CPU.

No they won't. OEMs will do that to put the 10% higher number on the box ('OC edition!'), but undervolting / power limiting has become increasingly popular. Taking a 2-5% haircut on clock frequency lets you reduce power draw by 20-50% for only a 1-3% loss in performance. Suddenly your CPU and GPU are whisper quiet on air cooling.

reply
bee_rider
1 day ago
[-]
Race to idle probably makes more sense in the context of smartphones where there’s at least some chance that “idle” means the screen might be turned off.

For a desktop, the usage… I mean, it is sort of different really. If I'm writing a TeX file, for example, slower compiles mean I'll get… fewer previews. The screen is still on. More previews are vaguely useful, but probably don't substantially speed up the rate at which I write—the main bottleneck is somewhere between my hat and my hands, I think.

reply
sockbot
1 day ago
[-]
Technology Connections just did a timely video on the very topic of power vs energy.

https://youtu.be/OOK5xkFijPc?si=Uya3fI5oy_JFfSqI

reply
toast0
23 hours ago
[-]
As with everything, it depends. If you are going to do the same jobs regardless of the amount of time it takes, then yeah, dropping the max power probably just spreads the energy use over time. That doesn't usually help you save money, unless you have a very interesting residential plan.

OTOH, if it's something like realtime game rendering without a frame limiter, throttling would reduce the frame rate, reducing the total amount of work done, and most likely the total energy expended.

reply
KennyBlanken
1 day ago
[-]
It is well known in the PC hardware enthusiast community that the last few percent of performance come at enormous increases in power consumption, as voltages are raised to prevent errors while clock speeds go up.

Manufacturers chase benchmark results from YouTubers and magazines. Even a few percent difference in framerate decides which motherboard, processor, or graphics card everyone tells each other to buy.

Amusingly, you often get better performance by undervolting and lowering the processor's power limits. This keeps temperatures low and thus you don't end up with the PC equivalent of the "toyota supra horsepower chart" meme.

1400W for a desktop PC is... crazy. That's a Threadripper processor plus a bleeding-edge, top-of-the-line GPU, assuming that's not just them reading the max power draw off the nameplate of the PSU.

If their PC is actually using that much power, they could save far more money, CO2, etc by undervolting both the CPU and GPU.

reply
PeterStuer
1 day ago
[-]
I myself massively overspec the PSUs for my builds, as I want to keep them in the optimal efficiency range rather than pushing their limits. For a typical 800W budget I usually go with a tier-1 1200W offering.
reply
creaturemachine
1 day ago
[-]
1400 is definitely the sticker on the side of the PSU. There is some theory behind keeping your PSU at 30-50% load for optimal efficiency, but considering the cost of these 1kW+ units you're probably better off right-sizing it.
reply
naveen_k
1 day ago
[-]
It's a 1600W PSU (Coolmaster 1600 V2 Platinum)
reply
naveen_k
1 day ago
[-]
I'm actually using a 1600W PSU. 1400W is my target max draw. This is a dual EPYC (64 core CPU each) system btw. The max draw by the CPU+MB+Drives running at peak 3700MHz without the GPU is 495W! Adding 4x 4090 (underclocked) will quickly get you to 1400W+.
reply
gorbypark
1 day ago
[-]
Pretty neat! I'm currently working on a project that uses an ESP32-C6 to expose a "switch" over Matter/Thread based on the Spanish electricity prices API. The idea is to have the switch be on during one of the cheapest hours of the day, and off otherwise; other automations can then be based on it. This was pretty trivial to do in Home Assistant, but I want something that's ultra low power and completely independent of anything else, for less technical users. My end goal is a small battery-powered device that wakes up from deep sleep once a day to check the day-ahead prices via WiFi. The C6 might be overkill for this, but once I have a proof of concept working I'll try to pick something that's super low power. Something that needs charging once or twice a year would be ideal.

The ideal form factor might be a smart plug itself, but I can’t find any with hackable firmware and also matter/thread/wifi.
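A sketch of the cheapest-hours decision described above. The price fetch is left as a hypothetical placeholder, since the exact Spanish API (ESIOS/REE or similar) isn't specified here:

    from datetime import datetime

    def fetch_day_ahead_prices() -> list[float]:
        """Return today's 24 hourly prices in EUR/kWh (hypothetical placeholder)."""
        raise NotImplementedError

    def switch_should_be_on(cheapest_hours: int = 6) -> bool:
        # On during the N cheapest hours of the day, off otherwise.
        prices = fetch_day_ahead_prices()
        cheap = sorted(range(len(prices)), key=lambda h: prices[h])[:cheapest_hours]
        return datetime.now().hour in cheap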

reply
naveen_k
1 day ago
[-]
That's actually pretty cool. ESPs are awesome little things.
reply
Symbiote
21 hours ago
[-]
Within the next year or two, I'm going to look at implementing something similar at my work.

We don't pay for electricity directly (it's included in the rackspace rental), but we could reduce our carbon footprint by adjusting the timing of batch processing, perhaps based on the carbon intensity APIs from https://app.electricitymaps.com/

Though the first step will be to quantify the savings. From being in the datacentre while batch jobs start, I have the impression that they cause a significant increase in power use, but I have no numbers.
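Gating batch jobs on grid carbon intensity might look roughly like this. A sketch only: the zone, threshold, and token are placeholders, and the endpoint shape should be checked against the Electricity Maps API docs:

    import requests

    API_URL = "https://api.electricitymap.org/v3/carbon-intensity/latest"

    def grid_is_clean(zone: str = "DK-DK1", max_gco2_per_kwh: float = 200) -> bool:
        # Defer batch jobs while the grid's carbon intensity is above the threshold.
        resp = requests.get(API_URL, params={"zone": zone},
                            headers={"auth-token": "YOUR_TOKEN"}, timeout=10)
        resp.raise_for_status()
        return resp.json()["carbonIntensity"] <= max_gco2_per_kwh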

reply
reaperman
19 hours ago
[-]
You’re probably already on top of it but if your company doesn't operate the datacenter you’ll also want to estimate the carbon cost of cooling in addition to the electricity that the machines consume.
reply
dehrmann
21 hours ago
[-]
Can you run the batch processing on other machines at off-peak hours?
reply
PeterStuer
1 day ago
[-]
Nice project, but would it not be more rational to have your system running underclocked/undervolted at the optimal perf/watt at all times, with an optional boost to max performance for a time critical task? Running it away from the optimum might save on instant consumption but increase your aggregate consumption.
reply
blitzar
1 day ago
[-]
Bring back the "turbo" button on the front of the PC.
reply
naveen_k
1 day ago
[-]
Thanks! That's an excellent point. You're right that there's likely a sweet spot that would be more efficient overall than aggressive throttling.

The current implementation uniformly sets max frequency for all 128 cores, but I'm working on per-core frequency control that would allow much more granular optimization. I'll definitely measure aggregate consumption with your suggestion versus my current implementation to see the difference.
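For illustration, per-core capping could go through the Linux cpufreq sysfs interface (one scaling_max_freq file per logical CPU). This is a sketch, not necessarily how WattWise will implement it; the core split and frequencies are purely illustrative:

    def cap_cores(cores: list[int], khz: int) -> None:
        # Write a per-core frequency cap via cpufreq sysfs (needs root).
        for core in cores:
            path = f"/sys/devices/system/cpu/cpu{core}/cpufreq/scaling_max_freq"
            with open(path, "w") as f:
                f.write(str(khz))

    # e.g. keep 16 cores at full speed for latency-sensitive work, cap the rest
    cap_cores(list(range(16)), 3_700_000)
    cap_cores(list(range(16, 128)), 1_500_000)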

reply
schiffern
21 hours ago
[-]
Zooming out, 80-90% of a computer's lifecycle energy use happens during manufacturing, not pulled from the wall during operation.[1] Optimizing lifetime energy efficiency therefore probably pushes toward extending hardware longevity (within reason, up to the breakeven point) and maximizing compute utilization.

Ideally these goals are balanced (in some 'efficient' way) against matching electricity prices. It's not either/or; you want to do both.

Besides better amortizing the embodied energy, improving compute utilization could also mean increasing the quality of the compute workloads, i.e. doing tasks with high external benefits.

Love this project! Thanks for sharing.

[1] https://forums.anandtech.com/threads/embodied-energy-in-comp...

reply
KennyBlanken
1 day ago
[-]
Please go learn about modern Ryzen power and performance management, namely Precision Boost Overdrive and Curve Optimizer - and how to undervolt an AM4/AM5 processor.

The stuff the chip and motherboard do, completely built-in, is light-years ahead of what you're doing. Your power-saving techniques (capping max frequency) are more than a decade out of date.

You'll get better performance and power savings to boot.

reply
naveen_k
1 day ago
[-]
Thanks for the suggestion! I'm actually using dual EPYC server processors in this workstation, not Ryzen. I'm not sure EPYC supports PBO/Curve Optimizer functionality that's available in AM4/AM5 platforms.

That said, I'm definitely interested in learning more about processor-specific optimizations for EPYC. If there are server-focused equivalents to what you've mentioned that would work better than frequency capping, I'd love to explore them!

reply
ac29
1 day ago
[-]
For people with Intel processors, check out raplcap: https://github.com/powercap/raplcap

It lets you set specific power consumption limits in watts instead of approximating the same thing by restricting maximum core frequencies (though frequency caps could still be useful in addition to overall power limits).
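As a complementary sketch, the same kind of package power limit can also be written directly through the kernel's powercap/RAPL sysfs interface, which raplcap wraps. The package index and the 150 W figure are illustrative:

    RAPL_PKG0 = "/sys/class/powercap/intel-rapl:0"   # package 0 domain

    def set_package_power_limit(watts: float) -> None:
        # The powercap interface expects microwatts; needs root.
        with open(f"{RAPL_PKG0}/constraint_0_power_limit_uw", "w") as f:
            f.write(str(int(watts * 1_000_000)))

    set_package_power_limit(150)   # illustrative 150 W package cap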

reply
csdvrx
1 day ago
[-]
Another suggestion: when you want to save power, use IRQ affinity via /proc/irq/$irq/smp_affinity_list to put all interrupts on one core.

That core will get to sleep less than the others, but the rest can then stay in deeper sleep states.

You can also use the CPU "geometry" (which cores share a cache) to raise the max frequency on neighbouring cores first, before recruiting the other cores.
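A sketch of that IRQ consolidation (needs root; some IRQs refuse affinity changes, so write errors are ignored):

    import glob

    def pin_irqs_to_core(core: int = 0) -> None:
        # Route every movable IRQ to a single core so the others can sleep deeper.
        for path in glob.glob("/proc/irq/*/smp_affinity_list"):
            try:
                with open(path, "w") as f:
                    f.write(str(core))
            except OSError:
                pass   # e.g. per-CPU or reserved IRQs can't be moved

    pin_irqs_to_core(0)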

reply
naveen_k
1 day ago
[-]
Thanks for the suggestion. Will check it out.
reply
throwaway3231
1 day ago
[-]
It's well established that completing the same task more slowly at a lower clock rate is actually less energy-efficient.
reply
yjftsjthsd-h
1 day ago
[-]
Right, "race to idle"
reply
nottorp
1 day ago
[-]
How is it with modern overclocked-by-default CPUs? If you cut power use by 50%, do you still get 80% of the performance?
reply
throwaway3231
1 day ago
[-]
It's usually more energy-efficient to finish a task quickly with a higher power draw, also known as race-to-idle.
reply
naveen_k
1 day ago
[-]
Good point. I'm often running multiple parallel jobs with varying priorities where uniform throttling actually makes sense. Many LLM inference tasks are long-running but not fully utilizing the hardware (often waiting on I/O or running at partial capacity).

The dual Epyc CPUs (128 cores) in my setup have a relatively high idle power draw compared to consumer chips. Even when "idle" they're consuming significant power maintaining all those cores and I/O capabilities. By implementing uniform throttling when utilization is low, the automation actually reduces the baseline power consumption by a decent amount without much performance hit.
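A rough sketch of utilization-gated throttling of that kind, assuming psutil; the 20% threshold and the frequencies are illustrative, not the actual optimizer's logic:

    import psutil

    def choose_frequency_cap(low_khz: int = 1_500_000,
                             high_khz: int = 3_700_000,
                             idle_threshold: float = 20.0) -> int:
        # Only cap the clocks when the machine is mostly idle, so demanding
        # jobs keep full frequency.
        util = psutil.cpu_percent(interval=5)   # average utilization over 5 s
        return low_khz if util < idle_threshold else high_khz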

reply
foobarian
17 hours ago
[-]
It seems it may be relatively accessible to take a few representative tasks and actually measure the soup-to-nuts energy consumed at the plug. Would be very interesting to see in tandem with the power optimizations!
reply
naveen_k
17 hours ago
[-]
That's exactly what I did first! I ran a CPU torture test at full clock speed and measured the power draw at the plug, then repeated the same test at the lowest clock speed setting. For the EPYC system, the power draw was about 225W lower at the reduced clock speed. Even at idle, capping the max frequency reduced the power draw by 20% or more.
reply
gitpusher
1 day ago
[-]
People have made valid criticisms about the basic effectiveness of your strategy. But in any case, this is a pretty awesome hacker project - nicely done! Love the appearance of your CLI tool. I am definitely bookmarking for future inspo
reply
naveen_k
1 day ago
[-]
Thanks! I initially just wanted to build a dashboard, with the power optimization part being a later addition. Based on the HN response, it seems that's the feature that resonated most with people. I'll be making improvements to the optimization component in the coming days and will publish what I have.
reply
Havoc
1 day ago
[-]
From what I've seen, price per token makes home generation uncompetitive in most countries. And that's just on electricity, never mind the cost of the gear.

Only really makes sense for learning or super confidential info

reply
gtirloni
1 day ago
[-]
Could you share how much you have saved in $?
reply
naveen_k
1 day ago
[-]
The power optimizer daemon has only been running for a few days, so it's hard to put a $ figure on it, but based on my peak pricing I'd estimate the savings at a few dollars so far.
reply
whalesalad
1 day ago
[-]
I wonder if a big UPS/power bank would be better? Charge it during periods when power is cheaper, and draw from it when power is more expensive. Then again, if you don't need full performance all the time, this is a cool solution.
reply
naveen_k
1 day ago
[-]
Definitely, I've been contemplating getting a 5-10kWh LFP battery backup with <10ms UPS switchover to run the workstation and home backup. This is an intermediate solution until then.
reply
ajsnigrutin
1 day ago
[-]
Why all this instead of a simple cronjob switching between performance and powersave profiles depending on the current time (= electricity price)?
reply
naveen_k
1 day ago
[-]
A cronjob would definitely work in most cases if the goal is just to automatically change frequency profiles during set ToU periods. I just wanted something more flexible, where the system can change profiles based on actual utilization so demanding tasks aren't slowed down.
reply
pests
1 day ago
[-]
I'm on a time-of-use rate plan, most expensive from 11am-7pm. However, they also have "Critical Peak Events" lasting up to 4 hours, which raise the rate about 10x, to over $1/kWh. Just saying it would be a bit more complex than just checking the time.
reply
ajsnigrutin
1 day ago
[-]
So how do you get that data (status) now (if(is-critical-peak-event){})? Do the smartplugs gather some smartgrid-style data?
reply
joshvm
1 day ago
[-]
It depends on your supplier, because they set the pricing and that information gets displayed on your meter. Octopus (UK) has a dynamically-priced service called Agile where you can query the API as a user; in some cases the API doesn't even need a login for regional pricing. You would have to build some logic on top for most smart plugs through something like Home Assistant. There are storage batteries which can react to pricing, and some which will also work in concert with your current solar power or other renewables.

https://octopus.energy/blog/agile-smart-home-diy/

https://www.zerofy.net/2024/03/26/meter-data.html for some more European info on meters, though mostly focused on accessing your own usage data.
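For reference, Agile's half-hourly unit rates can be pulled without a login. A sketch; the product and tariff codes below are placeholders that vary by region and date, so check the current ones against the Octopus API docs:

    import requests

    PRODUCT = "AGILE-24-10-01"           # placeholder product code
    TARIFF = "E-1R-AGILE-24-10-01-C"     # placeholder regional tariff code
    URL = (f"https://api.octopus.energy/v1/products/{PRODUCT}"
           f"/electricity-tariffs/{TARIFF}/standard-unit-rates/")

    rates = requests.get(URL, timeout=10).json()["results"]
    for slot in rates[:4]:               # a few of the half-hourly slots
        print(slot["valid_from"], slot["value_inc_vat"], "p/kWh")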

reply
pests
1 day ago
[-]
I do believe I've seen some plugs that connect to a data source to do this automatically, but I'd rather not give plugs access to the internet, haha. Our provider also gives advance notice via text or email, so it's totally possible to wire it up yourself.
reply
mythrwy
18 hours ago
[-]
This looks cool but I feel it should notify the user with a snip from the song "You Suffer" by "Napalm Death" when throttling occurs.
reply