Apple Silicon costs more than OpenRouter
132 points | 2 hours ago | 34 comments | williamangel.net | HN
dijit
50 minutes ago
[-]
Frontier AI companies are selling at a loss.

Setting aside everything else that u/bastawhiz said[0]; the obvious fact here is that Claude, OpenAI, Gemini et al. are quite literally burning through hundreds of billions of dollars and selling it back to you for pennies on the dollar, in the hopes that they get to be the only one left.

If I spend $10 growing oranges and sell them to you for $1, then of course it's more expensive for you to do the growing.

I feel like I'm taking crazy pills. These models will become more expensive over time, it's functionally impossible for them not to, they just want to capture the market before they have to stop selling at a huge loss.

[0]: https://news.ycombinator.com/item?id=48168433

reply
vanviegen
40 minutes ago
[-]
That seems unlikely. There are many providers for open models on OpenRouter, and it's hard to believe they are all throwing money away on each token they sell.

Also, there are good technical reasons for inference being much more efficient at scale.

reply
dijit
30 minutes ago
[-]
The providers on OpenRouter serving open models aren't "throwing money away", agreed.

But that's not the point I'm making (or, it kind of is, but it's higher level than that).

They're running spot and preemptible GPU instances (60-80% cheaper than on-demand), paying wholesale industrial electricity rates, and running at multi-tenant utilisation densities that make your MacBook look like a bonfire. Of course they're not individually loss-making on inference: they're aggregating cheap commodity compute and skimming a margin. On paper that makes it look like a good business, certainly not a loss leader, right?
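
Back-of-envelope, with illustrative numbers (all assumptions, not measured figures), the utilisation effect alone looks like this:

    def cost_per_mtok(gpu_hour_cost, tok_per_sec, utilisation):
        # $/hr for the hardware, divided by the Mtok it actually produces
        tokens_per_hour = tok_per_sec * 3600 * utilisation
        return gpu_hour_cost / (tokens_per_hour / 1e6)

    # Spot GPU, heavy batching, kept ~95% busy by many tenants
    print(cost_per_mtok(0.60, 2000, 0.95))  # ~$0.09/Mtok
    # Same $/hr, single-stream and mostly idle, laptop-style usage
    print(cost_per_mtok(0.60, 40, 0.05))    # ~$83/Mtok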

But zoom out a bit; the entire stack is swimming in VC money. OpenRouter itself just raised at a $1.3B valuation backed by a16z. The Chinese models that now account for 36% of all tokens routed through the platform (DeepSeek, Qwen) are priced the way they are because Beijing-adjacent capital has decided market share matters more than margin right now.

So yes, technically no single party is "throwing money away" on each token; they're just all simultaneously subsidising different parts of the stack for strategic reasons. The floor price you're seeing isn't a stable equilibrium, it's a pile of investor money that hasn't entirely finished burning yet.

reply
vlovich123
19 minutes ago
[-]
> The floor price you're seeing isn't a stable equilibrium, it's a pile of investor money that hasn't entirely finished burning yet.

All that says is that it gets more expensive in the future as competitors exit the market and sustainability becomes important. That’s why Uber and Lyft were so cheap until they killed taxis. One major difference, of course, is that some models will remain largely good enough, and the incremental cost of running them will keep dropping toward 0 over time, since the hardware needed doesn’t get more expensive and is already purchased.

reply
dijit
2 minutes ago
[-]
I think we agree.

I only object to taking current prices as if they are perpetual prices.

reply
brianwawok
23 minutes ago
[-]
So many more efficiencies are possible at scale, though. I cannot keep a local model 98% utilized 24/7, at least not with my current workload. A big cloud can. I can’t power my servers with DC; I have this AC-to-DC conversion nonsense. The list goes on.
reply
NicuCalcea
38 minutes ago
[-]
The blog compares the cost of running Gemma4 31b, which on OpenRouter is offered by small no-name inference providers, not by frontier AI companies. It seems like a fair comparison to me.
reply
OsrsNeedsf2P
25 minutes ago
[-]
The models have been dropping 10x in price for completing the same tasks, year over year. Even if you think Anthropic is losing money charging 10x more than everyone else for their 400B model, the prices will continue to go down based on model improvement alone
reply
tempest_
19 minutes ago
[-]
It is the model training that is dragging them down.

If the arms race stopped tomorrow, current prices would pay for the inference.

reply
Danox
1 minute ago
[-]
But isn’t training models a forever task, like iterating in tech, where you can never take a day off? And bringing humans into the equation: don’t humans train/teach themselves new skills over a lifetime? Isn’t one of the future selling points of this AI slop that your AI never goes to sleep and can always be trained, forever? The AI price for entry will only increase as we go on into the future.
reply
ianberdin
37 minutes ago
[-]
Do you have proof? Anthropic’s CEO said they are profitable. Same with OpenAI.
reply
dijit
35 minutes ago
[-]
Profitable on inference, if you completely ignore training costs and the fact that you absolutely must continuously train new models.
reply
vlovich123
32 minutes ago
[-]
Which is where your analogy breaks down and why you think you’re taking crazy pills. Inference is growing and selling the oranges in your analogy. Model building is growing the farm to sell larger, juicier, more addictive oranges.
reply
skippyboxedhero
9 minutes ago
[-]
The same mistake was made with Amazon, and a million other tech companies in the early 2010s.

Amazon was losing money, and it was losing money because it was growing and spent all of its cash flow on growth. It wasn't merely regarded as a hopelessly unprofitable business; it was regarded as potentially fraudulent. The share price collapsed in 2014 because, some thought, the profit would never come, investing in growth was pointless, etc.

Last year Amazon made nearly $100bn in profit. The stock is up 20x from then... and this is after AWS became known (everyone also said that was a massive fraud, could never be profitable... we know it was printing money from day one), after it became the world's biggest retailer, etc.

It is difficult to overstate how consistently people make this mistake, not just individually but in aggregate. You see the same thing with restaurants, consumer products, office leasing, so many businesses. This is not to say that the future will happen any particular way, but what Anthropic and co are doing is obviously rational and based upon very real cash flow. Anthropic's revenue growth is, I believe, unparalleled in modern corporate history. A slight difference in this case is that the economics of training these models is also improving exponentially over time.

reply
dijit
23 minutes ago
[-]
Are ya fuckin' serious mate?

The restaurant next to the mines was profitable up until the moment the mines themselves shut down: one doesn't exist without the other.

You can't ringfence inference as "the profitable bit" and then hand-wave away the training. Without continuous training there is no inference product.

Claude 3 Opus isn't sitting there making revenue in 2026 - the thing is just deprecated. The moment you stop spending billions on the next model, your "profitable" inference business is on borrowed time until someone else makes it obsolete.

Maybe I made a mistake in my analogy... They're not growing a farm and then selling oranges. They're on a treadmill where stopping is death, and the treadmill costs $10bn a year to keep running.

reply
vlovich123
10 minutes ago
[-]
> They're on a treadmill where stopping is death, and the treadmill costs $10bn a year to keep running.

You’re literally describing all companies. Google takes about $270bn/year to run. If they stopped spending that they’d die pretty darn quick. It’s also a description of working: unless you’ve built up significant savings, if you stop working you’re also going to die.

reply
spzb
30 minutes ago
[-]
And ignore capital costs, depreciation, user churn, etc.
reply
tiffanyh
34 minutes ago
[-]
Do you mind sharing source links to that profitability claim?

I’m struggling to find the quotes.

reply
miltonlost
22 minutes ago
[-]
If only they had their books open to do more than just "say"
reply
poly2it
46 minutes ago
[-]
Well, I'd be surprised if non-R&D inference providers were selling at a loss. There are plenty to choose from, and competition is quite healthy. Will they keep providing cheap tokens while the labs raise their prices? Probably, in which case I don't see how prices could be raised in the first place. And what timescale are you talking about? A couple of years? It is reasonable to assume inference will become more efficient over time. If you raise your prices before it's profitable (assuming it is unprofitable now), you are going to be outcompeted, which would be negligent. I don't see how this makes sense.
reply
vlovich123
34 minutes ago
[-]
Except that’s not what the analysis is. They’re spending < $1 to get $1 from you and the other $9 to figure out how to improve the model further and build up products on top of that to turn that $1 spend into $5 in the future.

In other words, inference is fairly profitable for them and the rest of the money is spent growing revenue as quickly as possible. Building models is still an expensive line item but the costs for that are going down with time.

There is also maybe a “capture the market” mentality but I don’t think that’s necessarily it - the tools and processes are largely fungible and that’s a huge problem. They need to figure out how to make it sticky for “capture the market”, but there’s also a very real “grow as big as possible as quickly as possible to take on Google”; Google has an existential threat here.

reply
EGreg
32 minutes ago
[-]
> These models will become more expensive over time, it's functionally impossible for them not to, they just want to capture the market before they have to stop selling at a huge loss.

They could have said the same about transistors. People keep inventing new ways to keep the costs down. Just look at the latest Qwen, DeepSeek, BitNet. Interesting tidbit: they’re all open, and as that leaked Google memo said in 2023: they have no moat.

reply
MattRix
39 minutes ago
[-]
The inference is absolutely not sold at a loss. The reason frontier model companies aren’t profitable is because training the models is so costly, not inference.
reply
MuffinFlavored
40 minutes ago
[-]
> Frontier AI companies are selling at a loss.

How big/deep of a loss?

I feel like I read, every day for years, that Uber was running this same "idiotic, losing" strategy (as it was pitched/discussed), and then one day we woke up and... without much fuss, boom, they were profitable seemingly overnight.

reply
brianwawok
21 minutes ago
[-]
Well, and Uber cut driver pay in half and doubled the price. They didn’t really find any efficiencies; robo drivers don’t exist yet. Also why I hardly touch them anymore.
reply
spzb
27 minutes ago
[-]
Ed Zitron discusses this as part of his post on AI economics: https://www.wheresyoured.at/ais-economics-dont-make-sense/
reply
ajross
36 minutes ago
[-]
> I feel like I'm taking crazy pills.

Why? It's no less crazy than when Uber and Lyft were doing the same thing. Or when the entire tech industry was doing it in the dot com boom.

Investment-driven market growth at a loss is like the least surprising thing in all of this. The tech is new and fascinating. The bubble is just another trip through the funhouse.

reply
bastawhiz
2 hours ago
[-]
This isn't a good analysis, and it's because it keeps rounding everything up. He rounds up the cost of electricity by 10%. He has a range of power use, takes the high end (which is 2x the low end) and multiplies it by the inflated electricity cost.

But then he talks about using a newly purchased Mac to do the inference, running at full capacity, 24/7. Why would you do that? Apple silicon is fast, but as the author points out, you're only getting 10-40 tokens per second. That's not bad, but it's not meant for this!

It's comparing apples to oranges. Yeah, data centers don't pay residential electricity rates. Data centers use chips that are power efficient. Data centers use chips that aren't designed to be a Mac.

Apple silicon works out pretty well if you're not burning tokens 24/7/365 and you're not buying hardware specifically to do it. I use my Mac Studio a few times a week for things that I need it for, but I can run ollama on it over the tailnet "for free". The economics work when I'm not trying to make my Mac Studio behave like an H100 cluster with liquid cooling. Which should come as no surprise to anyone: more tokens per watt on multi-tenant hardware with cheap electricity will pretty much always win.

reply
datadrivenangel
1 hour ago
[-]
Rounding everything down in the most optimistic setting got me to $0.40 per million tokens, and OpenRouter has the same model at $0.38/Mtok.
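
For reference, a sketch of that arithmetic using the post's optimistic inputs (hardware price and rates as quoted in the article and thread):

    # Optimistic inputs from the article: $4299 machine, 10-year life,
    # 50 W draw, $0.18/kWh, 40 tok/s, running 24/7.
    hardware_usd, years = 4299, 10
    watts, usd_per_kwh, tok_per_sec = 50, 0.18, 40

    hours = years * 365 * 24                           # 87,600 hours
    mtok = tok_per_sec * hours * 3600 / 1e6            # ~12,614 Mtok lifetime
    electricity = watts / 1000 * hours * usd_per_kwh   # ~$788
    print((hardware_usd + electricity) / mtok)         # ~$0.40 per Mtok
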
reply
650REDHAIR
1 hour ago
[-]
I’ll keep my data local over a $0.02/Mtok difference.
reply
quietsegfault
1 hour ago
[-]
It’s more than just data locality. OpenRouter is faster, no? I have an M4 pro, and anything but the smallest dumbest models are unusably slow for interactive use. I personally haven’t yet found a good use case for offline/non-interactive LLM work locally.
reply
datadrivenangel
4 minutes ago
[-]
Yeah. The speed is the biggest issue. The intelligence of open models is good enough for serious work (though still worse than the frontier models), but the cloud models are often 3-7 times faster, and you can get more parallelization and so get speeds on the order of hundreds of tokens per second, which makes things fast!
reply
statestreet123
24 minutes ago
[-]
Rounded up, yes, and oddly inefficient for someone obsessed with inefficiency. One could buy a brand new 64GB M5 MacBook for well over $4k. Another could buy a scratched-up but functioning 64GB M1 Max off of eBay for a little over $1k, and somehow get the same 10-20 t/s with the 31b that the author does with an M5. Or better yet, have a frontier model do the planning and judging, and have a local MoE model execute at 50 t/s. All of this achievable by a former English major with too much free time.
reply
faitswulff
1 hour ago
[-]
The article makes no sense. I can't use OpenRouter as a general purpose computing device. Why are we comparing a whole computer to a single purpose SaaS?
reply
tuwtuwtuwtuw
1 hour ago
[-]
I think it's because there are a lot of people writing articles about the benefits of running local models. I think it's fair to say that there are daily threads on HN singing the praises of local inference. I also see people buying new hardware where the main trigger is the ability to run local models.
reply
FuckButtons
10 minutes ago
[-]
But the people who want to do local inference are putting some amount of value on privacy that isn’t captured by the raw monetary figure, so comparing the price alone is somewhat beside the point. It’s also true that if you have, e.g., a Mac and you use it as your main computing device, then you would have spent the money anyway, so you can’t really compare its value to spending on something that isn’t general purpose.
reply
llm_nerd
10 minutes ago
[-]
Your post makes sense if you bought the hardware for other reasons, and maybe run models occasionally as a novelty.

That isn't the case for many, though, and there is a whole social media space where people are hyping up the latest homebrew options for running models, believing it frees them from the yoke of big AI.

Millions of people are buying maxed-out big-ticket hardware like the Mac Studios or DGX specifically to run LLMs. Someone rationally running the numbers is a good thing.

reply
dist-epoch
1 hour ago
[-]
Using it 24/7 brings the average cost down, not up.

The less you use a local LLM, the less sense it makes, since you paid a lot for hardware you don't use.

reply
bastawhiz
54 minutes ago
[-]
That's the point: why would you buy a device that's specifically not optimized for 24/7 inference? It's expensive hardware that isn't designed for that situation! The power use for inference isn't especially good, and you're not getting even a fraction of the benefit from the hardware that you're paying for.
reply
groundzeros2015
1 hour ago
[-]
The hardware has multiple uses for the same cost. The pay-per-use server does not.
reply
cyanydeez
1 hour ago
[-]
nothing about the current data center craze looks efficient.
reply
bastawhiz
52 minutes ago
[-]
Whether or not you think building data centers is a good idea, it's inarguable that the per-token efficiency (power, hardware, etc.) is FAR higher in a data center. That's literally what it's designed for.
reply
trollbridge
20 minutes ago
[-]
Probably because lots of data centres are being built (or half-built) which are sitting idle.
reply
applfanboysbgon
2 hours ago
[-]
Unless I'm misunderstanding, this is counting the entire laptop in the cost of generating tokens. The calculation seems to omit that, in addition to receiving LLM output, you have also received a laptop in exchange for your money. If you intend to put this machine in a dark corner and run it solely as a token-munching server, a laptop would be an exceptionally poor choice of technology for this purpose. But if you intend to use the laptop as a laptop, having a laptop is a pretty big benefit over not having a laptop.

You also get the benefit of privacy, freedom from censorship, and control over the model used (i.e. it will not be rugpulled on you in three months after you've built a workflow around a specific model's idiosyncrasies).

reply
andai
2 hours ago
[-]
Yeah, a better metric might be, the difference in cost between the laptop you need to run local models, and the laptop you would have bought anyway.
reply
fwipsy
43 minutes ago
[-]
The base 14" M5 MacBook Pro is $1700 with 16GB/1TB. The author's spec is $4300, i.e. $2600 more.

It depends on how often you use it (and your tolerance for slow inference) and whether you would have otherwise bought a higher spec. For my needs, this costs a LOT more.
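
A rough break-even sketch on that $2600 premium, assuming the thread's $0.38/Mtok hosted price for the same model and a mid-range 20 tok/s locally; electricity is ignored, which only flatters the local case:

    premium_usd = 4300 - 1700            # extra cost of the high-RAM spec
    openrouter_per_mtok = 0.38           # hosted price, from the thread
    breakeven_mtok = premium_usd / openrouter_per_mtok   # ~6,842 Mtok

    mtok_per_day = 20 * 86400 / 1e6      # ~1.7 Mtok/day at 20 tok/s, 24/7
    print(breakeven_mtok / mtok_per_day / 365)  # ~11 years to break even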

reply
dist-epoch
1 hour ago
[-]
> control over the model used

but you lose access to the most capable models, you can run only the small ones

reply
sleepyeldrazi
42 minutes ago
[-]
If you want a good dense model, use qwen3.6 27B instead; speed will be up, and if you don't take my word for it being smarter, let OpenRouter's prices for it against the bigger, slower, less memory-efficient gemma do the talking.

If you want a faster model, go for qwen3.6 35B (or gemma 4 26B if gemma models perform better for your tasks). There is a reason people (myself included) haven't shut up about those two (especially the 27B). It's small enough to run at a decent speed (especially with the built-in MTP that finally has official llama.cpp support), and for many workloads (every benchmark I have ever thrown at it) it is matching or surpassing models it has no right to.

A couple of days ago I woke up with my internet down, started the 27B in pi, told it to diagnose what's wrong by giving it my router's password, went to grab a coffee, and by the time I got back I had a full report with suggestions on how to proceed. I love OpenRouter and I use it for many things, but it is not cheaper.

Subjectivity and opinions based on personal experience with all those models implied, naturally. I assume the 31B gemma has cases in which it edges out; I've just failed to find any, and I have been running all 4 models mentioned nonstop for different tasks since hours after each of them dropped. Hell, for my hermes, I started getting better results once I switched from gemma 4 26B to qwen3.5 9B, not even the massively improved 3.6 series. It just feels outdated/cherry-picked not to use what by many accounts is the current consumer-hardware SOTA when doing such an analysis.

reply
trollbridge
14 minutes ago
[-]
Right. Qwen 3.6 45B (6 parameter) runs on a commodity 5090, which, if you're into video games, you probably already have. It is entirely usable for most code generation tasks. (Not all, but most.)

Likewise, DeepSeek V4 Flash is quite accessible locally, with DwarfStar 4 making it easy to run on a 96GB MacBook.

There's nothing wrong with paying for inference, but local models open up some pretty amazing possibilities, such as entirely offline usage, working on private PII or legally privileged data, or performing tasks with no concern given whatsoever to billing overruns.

The other possibility is being able to build a service which you can be 100% assured you can keep running, without worrying about a service going down or being end-of-lifed, which is currently a problem with frontier models. My local Qwen setup is entirely predictable. It can run as long as I can keep finding hardware to run it.

A sensible strategy uses both: have local inference tools available, and use both low-cost and high-cost cloud models. You can use GPT-5.5 and Opus-4.7 for the demanding reasoning tasks they excel at (including laundering the latter via a Claude subscription to make it cheaper), DeepSeek V4 Pro for slightly less demanding tasks, V4 Flash for most (not all) code generation, and local models for things where you want a local model.

reply
Jayakumark
1 hour ago
[-]
OP is comparing against Gemma everywhere but concludes that paying Anthropic makes more sense. Anthropic is $15 per million output tokens, which is 30-35x more expensive even on OpenRouter.

This is like comparing an e-bike at home with an e-bike rental and concluding we therefore need to rent a Toyota since it can go at similar speeds. Getting tired of bad posts getting this much attention.

reply
maho
2 hours ago
[-]
The author only compared output token costs -- but for typical agentic workloads, input tokens dominate the costs by a large margin. Running inference locally, input tokens are, to first order, free. (They only generate implicit costs through higher time-to-first-token, higher power use, and lower token output speed).
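
A hedged sketch of how much that can matter; the input price and input:output ratio below are illustrative assumptions, not any provider's actual numbers:

    out_price = 0.38   # $/Mtok output, the OpenRouter figure from the thread
    in_price = 0.10    # $/Mtok input -- hypothetical, varies by provider
    ratio = 20         # input:output token ratio, assumed for an agentic loop

    all_in = out_price + ratio * in_price   # $/Mtok of output, input included
    print(all_in, all_in / out_price)       # 2.38, ~6x the output-only figure
    # Locally, the extra input tokens cost time and watts, not a per-token bill.
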
reply
Wilya
30 minutes ago
[-]
Yeah, that completely invalidates his point.

I looked at a couple random agentic sessions in my openrouter activity, and the input cost is 10x the output cost.

Prompt caching on openrouter is complicated and unreliable. On local hardware with llama-cpp, it's mostly free.

reply
antirez
1 hour ago
[-]
Mmmm, nope, if you do the smart thing. A MacBook M5 Max 128GB is a premium laptop at $6k, but with it you can do many things, and it's a good daily driver. It can also run DeepSeek V4 Flash and perform non-trivial tasks locally, without censorship or limitations, even without an internet connection and on very privacy-sensitive data. That's a good deal. If you spend $25k on a dual Mac Studio 512GB setup to abandon OpenAI and company, you are going to be disappointed by both performance and cost.
reply
kamranjon
56 minutes ago
[-]
Yea, my M4 Max with 128GB has ended up making a lot of sense for me. I do video editing, I train ML models, I run large open models, I do 3D modeling, rendering, and CAD work. I never do all of this 100% of the time: I'll set up an ML training run overnight and check results in the morning, during work I'll set it up as a server and run local models, and on my own time I'll edit video and work on 3D modeling. It's an incredibly versatile machine, and all of this is done while keeping your data on your device and giving you full control over your workflows.
reply
clearstack
9 minutes ago
[-]
Apple services are ~27% of revenue and growing double-digits. The chip is a moat for that flywheel, not a standalone compute bet.
reply
konaraddi
52 minutes ago
[-]
A lot of comments here are about issues with the analysis in OP’s post, but most of them are “a distinction without a difference” with respect to the broader conclusion. When we look purely at cost and performance (setting aside privacy), it’s better for individual devs to pay for hosted inference than to self-host. Employers are paying for tokens on the job, and most devs find the $PREFERRED_PROVIDER $20/$100/$200/month subscription sufficient outside of work. Most devs don’t fall under the conditions where running local models makes sense purely on cost vs performance.

More critically, in practice, setting up local models seems more like a hobby, an educational exercise, or an act of privacy control than a means of cost cutting or productivity.

reply
regexorcist
2 hours ago
[-]
I simply can't go back to cloud AI. Privacy and full control are more important to me than speed and SOTA models.
reply
xyzzy123
1 hour ago
[-]
Also predictability, resilience, sovereignty. I'm not worried about other people's outages, that unexpected demand will impact me at an inconvenient time, that someone is watering down my model, that my costs will change unpredictably, or that some unforeseen error will lead to a huge bill.

It's in the same category as rooftop solar for me. It doesn't have to make strict economic sense if you're the particular type of person who gets peace of mind from control of infrastructure / reduced dependency.

reply
synthos
2 hours ago
[-]
How much does your data privacy cost?
reply
datadrivenangel
2 hours ago
[-]
As stated in the analysis, thousands of dollars. That said, the smart thing to do is target smaller models (a few billion parameters) and then use larger models for non-privacy tasks.
reply
michaelbuckbee
2 hours ago
[-]
Slightly different slice into a very similar situation (local vs OpenRouter AI inference).

But on _every_ metric other than privacy it was better to run via OpenRouter than a local model, and not by a small amount.

Direct link to the comparison charts:

https://sendcheckit.com/blog/ai-powered-subject-line-alterna...

reply
nu11ptr
2 hours ago
[-]
"Accelerated depreciation (if any) from shortening the lifespan of the device will be more expensive than the electricity"

Shortening the lifespan?

reply
Der_Einzige
1 hour ago
[-]
The FUD-driven notion that hardware depreciates in this manner is widely held. I blame Michael Burry of The Big Short, who is perpetuating these lies to the investor community today.

There's a bunch of retro hardware which should make people pause and realize it's stupid to assume hardware slows down on average even 5% twenty years later (it's probably closer to 2%, and I'm being generous).

HVAC/power delivery and generation are the major factors; if you didn't skimp or get defective parts, and you replace failed moving parts (usually fans), your hardware is basically the same 20 years down the line as it was on day one.

Also, using LLMs locally doesn't even induce sustained 100% GPU usage over significant periods of time for most real use-cases (agentic coding in OpenCode).

reply
perbu
44 minutes ago
[-]
For me, the appeal of local compute is first and foremost confidentiality, and having the ability to run my 200K documents through an LLM just to see what happens, without having to consider the cost.
reply
bilekas
2 hours ago
[-]
I don't hear people debating which is cheaper, local or cloud-run models. The conversation, at least as I hear it, is that a lot of the time users aren't burning an awful lot of tokens; those providers get paid even if you never use them. 80-90% of the work my team and I do with AI is grunt work: write tests for this, implement an FFT here, write the DB query for X. Nothing exhausting. Those who are using AI for whole-cloth "vibe coded" applications and services are definitely better suited to cloud. If a work laptop can run my local models and deliver the performance my work needs for development, why wouldn't I, as a company, prefer that?

Add to that the privacy improvements, data protection, and potentially further task-specific inference if needed, and it's a no-brainer.

Again, AI is a tool, and on the right-tool-for-the-job question I would wager, with no evidence looked up, that the majority of devs would be happy with 10-30 tokens per second locally.

reply
trvz
1 hour ago
[-]
Local LLMs aren’t about cost, but control.
reply
Havoc
2 hours ago
[-]
I like that the numbers were crunched, but the answer to these is always a bit of a foregone conclusion.

* Industrial power pricing

* Wholesale hardware pricing

* Utilization density of shared API

means API always wins a cost shootout.

Privacy & tinkering is cool too though

reply
Archit3ch
17 minutes ago
[-]
Except I already have a local Mac to run Xcode. OpenRouter cannot help with that, at any price.

> 64 gigs should run a model like Gemma 4 31b

No, it can run anything in the 70B range. It's a notable quality upgrade from the 30B, which isn't obvious because the famous flurry of April releases didn't contain any 70Bs.

It can also run 120B in UD-Q3. Or 230B disk-streamed.

reply
zkmon
56 minutes ago
[-]
Consider DeepSeek as well: about 50 cents per 1M tokens, for a >1T-parameter model.
reply
deadbabe
17 minutes ago
[-]
What would really elevate an article like this is if we could somehow quantify human brain’s equivalent outputs and compare the costs with local LLM and cloud LLMs.
reply
freakynit
1 hour ago
[-]
So I did the India-specific analysis for a tier-3 city. Here, electricity costs a third of the US price, and you also get a solar subsidy up to a certain amount.

https://shorturl.at/q6gRE

tldr;

Hardware depreciation costs are the major factor.

But if we assume ZERO hardware depreciation (not realistic), then local inference becomes super cheap: roughly 90%+ cheaper.

Third case: break-even happens only if we get, at the very least, 8.7 years of useful hardware life. A more realistic number, when working 8 hrs/day instead of 24 hrs/day, is around 25 years.

So, for now, local inference is preferable if you deeply care about privacy. From a cost perspective, it's still not there.
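
For anyone wanting to redo this with their own numbers, the break-even structure is simple; every input below is a placeholder to substitute:

    def breakeven_years(hardware_usd, cloud_per_mtok, mtok_per_year,
                        kwh_per_year, usd_per_kwh):
        # Years until avoided cloud spend, net of electricity, repays the hardware
        yearly_savings = cloud_per_mtok * mtok_per_year - kwh_per_year * usd_per_kwh
        return hardware_usd / yearly_savings

    # Going from 24/7 to 8 hrs/day cuts mtok_per_year by 3x, which is roughly
    # how the same model yields 8.7 years in one scenario and ~25 in the other.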

reply
brisket_bronson
1 hour ago
[-]
> Let's round up to $0.20 per kWh.

Next paragraph

> At ~50-100 watts and $0.18/kWh that's $0.009 or $0.018 per hour. $0.02 per hour. $0.48 cents per day for the electricity to be running inference at 100%.

lol

reply
jmyeet
1 hour ago
[-]
I've dug into this previously for one simple reason: NVidia segments the market by capping VRAM and Apple silicon uses a shared memory model that could challenge that but it currently doesn't. And I really wonder if Apple realizes the potential of what they have or if they even care.

So, for comparison, a 5090 has 32GB of VRAM and you can get one for ~$3000 maybe. To go beyond that memory with current generation (ie Blackwell) GPUs, you have to go to the RTX 6000 Pro w/ 96GB of VRAM and that's almost $10,000 for the GPU by itself. Beyond that you're in the H100/H200 GPUs and you're talking much bigger money.

Part of the problem here is the author is looking at laptops. That's the only place you'll find the M5 Max currently. The real problem here is that the Mac Studios haven't been updated in almost 2 years. There were configs of those with 256/512GB of RAM but they've been discontinued, possibly because of the RAM shortage and possibly because of they're reaching EOL. Apple hasn't said why. They never do.

Many expect M5 Ultra Mac Studios in Q3 and the M5 Ultra may well have >1TB/s of memory bandwidth (for comparison, the 5090 is 1.8TB/s). Memory bandwidth isn't the only issue. A 5090 will still have more compute power (most likely) but being able to run large models without going to a $10k+ GPU could be huge.

But yes, it's hard to compete with the scales and discounted electricity of a data center. Even H200 compute hours are kinda cheap if you consider the capital cost of what you're using.

I've looked into getting a 128GB M5 Max 16" MBP. That retails for $6k. You might be able to get it for $5400. But I don't think the value proposition is quite there yet. It's close though.

reply
gizajob
56 minutes ago
[-]
I think Apple really do care and know that Moore’s law is likely to position them as major winners in this race in 3-7 years time.
reply
brookst
50 minutes ago
[-]
This. The M5’s massive speed-up in prefill is a good sign.

Apple isn’t expecting wholesale adoption of on-device models this year or next. But all of their design and iteration suggests they see it coming.

reply
SecretDreams
2 hours ago
[-]
Will this cost structure always be this way and are there other benefits to not running your LLM on the cloud?

E.g.

Privacy

Uptime

Future cost structure controls

This is a field that has moved very quickly. And it has moved in a direction to try to trap users into certain habits. But these habits might not best align with what best benefits end users today or some time in the future.

reply
an0malous
2 hours ago
[-]
OpenRouter and other LLM platforms are being subsidized by VC investment to charge less than it costs them to run inference; the MacBook Pro is not.
reply
Kwpolska
2 hours ago
[-]
When the AI bubble inevitably pops, the author will find a new way to skew results in favor of cloud LLMs. Like including the price of a desk and a chair in the local token cost.
reply
datadrivenangel
1 hour ago
[-]
I really wanted the laptop to look better cost-wise, but it doesn't.
reply
an0malous
1 hour ago
[-]
I mean if you’re buying it just as an LLM inference server it’s not, but most people already have laptops, in which case it’s practically free
reply
JSR_FDED
2 hours ago
[-]
Wouldn’t a Mac Mini be a better comparison?
reply
650REDHAIR
1 hour ago
[-]
Also, after a few years you can sell and upgrade.

A 2022 Mac Studio w/ M1 Ultra and 128GB was ~$5200 new, and I see them selling for over $4k on eBay.

Can’t sell your used tokens…

reply
sgt
2 hours ago
[-]
Yes, or Mac Studio. Laptops with screens aren't made to run 24/7 heavy workloads.
reply
maxdo
1 hour ago
[-]
I'm surprised people are ignorantly talking about the advantages of buying a very expensive device, running it only sometimes, and aiming to beat cloud vendors.

If a small model is great, it will be hosted somewhere with good electricity costs and be utilized 24/7.

Isn't this the 2+2 of economics?

CPUs are a commodity, and we still buy CPUs and RAM from vendors for the same reason.

reply
throw1234567891
58 minutes ago
[-]
Put a cost on sending your intellectual property to a SaaS provider, who knows where. It's half a problem when it is just your IP; hopefully not the IP of your clients. Maybe it's fine if one is building yet another HTML page nobody really cares about.
reply
panny
2 hours ago
[-]
Your laptop AI costs too much? Speculative investors can help!
reply
christkv
1 hour ago
[-]
Bizarre. Running local models has nothing to do with cost. It's about privacy, first and foremost.
reply
anonym29
1 hour ago
[-]
The true advantage of locally self-hostable, open weight models isn't about monetary cost at all, it's about the CIA triad.

Running locally, you get the confidentiality of knowing your tokens are only ever processed by your own hardware. You get the integrity of knowing your model isn't being secretly or silently quantized differently behind the scenes, or having its weights updated in ways you don't want. And you get the availability of never having to worry about an API outage, or even an internet outage, for local inference capacity.

And this isn't even starting to address the whole added world of features and tunability you get when you control the inference stack. Sampling parameters, caching mechanisms, etc.

OpenRouter may be cheaper than frontier labs, but you still lose all of these benefits from open weight models the moment you decide to rely on someone else's hardware for your processing.

reply
SpyCoder77
2 hours ago
[-]
OpenRouter doesn't cost money per say, it depends on the providers' pricing
reply
moritzwarhier
2 hours ago
[-]
> OpenRouter has Gemma4 31b at ~38-50 cents per million tokens. This means that on the optimistic side (50 watts, 40 tokens per second, and 10 years) the pro max is as cheap as openrouter. On the pessimistic side (100 watts and 3 years at 10 tokens per second) the pro max is 10x the cost. I think ~3x the cost per million tokens is likely the right number for local inference on the pro max from an accounting perspective.

Apart from that, like detailed in the the article, pricing for local compute also depends on electricity prices.

By the way, I don't want to snark about it, my English is not very good, but it's "per se", not "per say". Just commenting on this petty thing because it seems to be a common misspelling, and it always trips me up a bit. Makes me wonder about another supposed meaning like "from hearsay".

reply
mnahkies
2 hours ago
[-]
They do take a cut of 5.5% (as they should).
reply
newsclues
1 hour ago
[-]
Local isn’t (just) about cost, it’s control and trust.
reply
Der_Einzige
1 hour ago
[-]
OpenRouter doesn't expose all the LLM sampling parameters/research that llama.cpp, vLLM, SGLang, et al. expose (so no high-temperature, highly diverse outputs). OpenRouter also doesn't let you use steering vectors, LoRA, or other per-request personalization techniques. And there are no true guarantees of ZDR/privacy/data sovereignty.
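
As an example of the difference, a minimal sketch of per-request control against your own stack, assuming a hypothetical local setup: llama.cpp's llama-server on localhost:8080 exposing its OpenAI-compatible endpoint:

    import requests

    # Knobs set per-request on your own inference server; a hosted router
    # typically clamps or ignores several of these.
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "messages": [{"role": "user", "content": "Brainstorm 20 product names"}],
            "temperature": 1.6,  # hotter than most hosted defaults permit
            "top_p": 0.99,
            "seed": 42,          # reproducible sampling on fixed hardware
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])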

Oh, and the author didn't mention anything at all related to inference optimization, so no idea if they even know about or enabled things like speculative decoding, optimized attention backends, quantization, etc.

At least AI slop would have hit on far more of the things I listed above. This is worse-than-AI.

reply
mrtimeman
2 hours ago
[-]
The full-amortization framing is doing a lot of work here. I bought my laptop because I needed a laptop, not as an inference box, and running a model on it is incidental to that. Once the hardware is sunk for other reasons, the only cost left is electricity plus whatever depreciation you accelerate by hammering the SoC, which the post actually acknowledges in one parenthetical before allocating the full $4299 to tokens anyway.

Also nobody I know picks local over OpenRouter on price. They pick it for offline, for data not leaving the machine, for no rate limits, for not having a provider go down mid-task. If $/Mtok is the only axis, sure, cloud wins.

In practice the pattern I see is leaving a small model running on easy background tasks while using the laptop normally, not a dedicated inference box hammered flat out for 5 years.

reply