Local AI needs to be the norm (unix.foo)
432 points by cylo 6 hours ago | 47 comments
TheJCDenton
3 hours ago
[-]
For the mainstream audience, the sentiment around local AI today is the same as the one they had around open source a few decades ago. For a few product categories, some paid solutions were so much more advanced that open source was very often completely overlooked. Why bother? And so on. Then we got captive SaaS and other platforms, and now that dismissal is obviously wrong for most of us.

The dependency we have on Anthropic and OpenAI for coding, for instance, is insane. Most accept it because either they don't care, or they just hope the Chinese labs will never stop releasing open weights. The business model of open weights is very new, involves power plays between countries and labs, and moves an absurd amount of money without any concrete oversight from most people.

It's a very dangerous gamble. Today incredible value is available to nearly everyone. But it may stop without any warning, for reasons outside our control.

reply
apublicfrog
1 hour ago
[-]
> It's a very dangerous gamble. Today incredible value is available to nearly everyone. But it may stop without any warning, for reasons outside our control.

What stops you from running the best open-weight LLMs currently available on consumer-grade hardware for the rest of time? They're good enough for 95% of use cases, and they don't have a use-by date. From what I can see, the "danger" is not having the next tier that comes out, but the impact of that is very low.
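(To be concrete: here's a minimal sketch of what "running it for the rest of time" looks like, using llama-cpp-python against a quantized GGUF you've already downloaded; the model path is illustrative.)

```
# Minimal local inference sketch. Assumes: pip install llama-cpp-python,
# plus a quantized GGUF file on disk; the path below is illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-open-model-q4_k_m.gguf",  # hypothetical file
    n_ctx=8192,       # context window; raise it if you have the RAM
    n_gpu_layers=-1,  # offload as many layers as fit onto the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF in two sentences."}],
)
print(out["choices"][0]["message"]["content"])
```

Once the weights and the runtime are on your disk, nothing upstream can take them away.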

reply
giobox
1 hour ago
[-]
> they don't have a use-by date

For quite a lot of use cases, the current systems arguably do get worse over time if not continually updated. The knowledge cutoff date will start to hurt more and more as the weights age in a hypothetical scenario where you are stuck with them forever.

Coding, one of the most popular use cases today, would not be great if, say, it only understood Java as of a version from years ago, etc.

https://en.wikipedia.org/wiki/Knowledge_cutoff

reply
throwyawayyyy
1 hour ago
[-]
One solution is not to advance anything, of course. I'm not even joking: is there going to be a successor to React? I suspect not; with the vast amount of training data for React now, it's going to look silly to move to something else with less support. What was the last new popular programming language, Rust? Will there be another one? I suspect not, for the same reason. The irony of all this AI acceleration talk is that it'll work best if we don't accelerate the underlying tech at all.
reply
mrtesthah
2 minutes ago
[-]
>Coding, one of the most popular use cases today, would not be great if, say, it only understood Java as of a version from years ago, etc.

This LLM, trained only and entirely on pre-1930s texts, was able to write Python programs when given only a short example:

https://talkie-lm.com/introducing-talkie

reply
rrvsh
1 hour ago
[-]
Nobody is unaware of the knowledge cutoff, and sharing the Wikipedia article is not helping anyone. Your point is easily rebutted by taking whatever open-weights/open-source model has an outdated cutoff and training or fine-tuning it on more data, which is always going to be viable given a modicum of compute.
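(A rough sketch of the cheap path, a LoRA refresh on newer text with HF transformers + peft; the base model name and dataset file are placeholders, and real runs need more care with hyperparameters.)

```
# Rough sketch: refresh an open-weights model on post-cutoff text via LoRA.
# Assumes: pip install transformers peft datasets. Names are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "some-org/open-model-7b"  # placeholder: any open-weights causal LM
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Train small low-rank adapters instead of all weights: hours, not weeks.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
))

data = load_dataset("text", data_files="post_cutoff_corpus.txt")["train"]
data = data.map(lambda x: tok(x["text"], truncation=True, max_length=1024))

Trainer(
    model=model,
    args=TrainingArguments("lora-refresh", per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
model.save_pretrained("lora-refresh")  # adapter weights are only tens of MB
```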
reply
tcp_handshaker
1 hour ago
[-]
You could learn how to code...a whole generation did it before...
reply
nullc
11 minutes ago
[-]
Small models are more useful for "doing stuff" than "knowing stuff" to begin with. And in an agentic harness, a small model can happily read more current information on demand (including from, e.g., a local Wikipedia snapshot).
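(A sketch of that pattern: an OpenAI-compatible local server is assumed on localhost:8080, which llama.cpp and friends expose, and wiki_lookup is a hypothetical stub you'd back with, e.g., a Kiwix/ZIM snapshot.)

```
# Sketch: a small local model that "does stuff" and looks facts up on demand.
# Assumes an OpenAI-compatible local server (llama.cpp, vLLM, ...) on :8080
# with tool calling enabled; wiki_lookup() is a hypothetical stub.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

def wiki_lookup(query: str) -> str:
    return "...article text from a local Wikipedia snapshot..."  # placeholder

tools = [{
    "type": "function",
    "function": {
        "name": "wiki_lookup",
        "description": "Search a local Wikipedia snapshot.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "Who won the 2024 Tour de France?"}]
resp = client.chat.completions.create(model="local", messages=messages, tools=tools)

# If the model asked for the tool, run it and feed the result back in.
calls = resp.choices[0].message.tool_calls
if calls:
    messages.append(resp.choices[0].message)
    messages.append({"role": "tool", "tool_call_id": calls[0].id,
                     "content": wiki_lookup(**json.loads(calls[0].function.arguments))})
    resp = client.chat.completions.create(model="local", messages=messages, tools=tools)
print(resp.choices[0].message.content)
```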
reply
turtlebits
1 hour ago
[-]
FOMO. A new model comes out weekly and the HN crowd debates over the minutiae of the changes.

Pockets are too deep; it will only change once everyone is out of money.

reply
nightski
1 hour ago
[-]
Hardware. Frontier labs are driving up demand so much that it's priced significantly above cost, making it far less affordable. Just look at Nvidia's profit margins.
reply
suika
1 hour ago
[-]
The use cases in the future will be nothing like the use cases from today.
reply
lxgr
1 hour ago
[-]
They’re really not good enough, unless you consider 64 GB of memory or more consumer grade.
reply
steve_adams_86
1 hour ago
[-]
I’m pretty happy with what a 32GB Mac Studio can do for a lot of tasks. They’re the kinds of things I’d throw a model like Haiku at, but still genuinely useful. We don’t have an answer to frontier models in the consumer range yet, but we’re not totally trapped.

Side note though: it’s the speed that bothers me more than the reasoning. Qwen 3.5 is awesome, but my Claude subscription can tear through similar workloads an order of magnitude faster than my local LLM, even when using Haiku. That’ll matter a lot to some people.

reply
datadrivenangel
2 minutes ago
[-]
Yeah, this is the real killer. Slower and more expensive is tough.
reply
ai_fry_ur_brain
17 minutes ago
[-]
95% of use cases? What are you smoking?
reply
oytis
3 hours ago
[-]
What is the business model of open weight AI? I don't think there is any. At best it can serve as an advertisement for the more advanced models you sell.

The huge difference to open source is that you can't just train an LLM with free time and motivation. You need lots of data and a lot of compute.

I sure want to be wrong on that; I definitely like the open-weight version of the future more.

reply
wood_spirit
3 hours ago
[-]
Meta released Llama just when OpenAI was so hot and its valuation was going through the roof. Speculating, but Meta probably thought the model not competitive enough to keep as a secret weapon, but still good enough to commercially damage OpenAI, who were a sudden competitor for most-valued-company status.

In the same way you can imagine the Chinese government pushing the release of deepseek etc to make sure no one thinks the US has “won” and to keep everyone aware that a foreign model might leapfrog in the short term future etc.

At some point, though, if OpenAI/Anthropic/Google plateau or go bust, then the open source sponsorship becomes less likely, as making it open source was a weapon, not a principle.

reply
2ndorderthought
2 hours ago
[-]
I disagree. I think DeepSeek, Qwen, and Kimi earn a lot of trust by open sourcing their models. While still profiting.

Effectively they are saying "yeah, don't crowd our data centers with small queries, go ahead and send your frontier questions to our frontier models. Oh, btw, those US models? You can run something about as good for free from us if you want, hah." It's a power and marketing move. It's also insanely smart to keep it up to remain sustainable as a brand. Especially given how small their investments into this are.

Look at Anthropic's growing pains. DeepSeek has other hosts spreading their brand for free while they grow. Brilliant, honestly. In my opinion it makes Anthropic and OpenAI look clueless on a lot of levels.

China is playing a different game here. To them this is commoditizing their complement and building goodwill. The Chinese economy doesn't teeter on the brink of collapse to deliver frontier-grade LLMs. Nope, Alibaba just made Qwen because it needs it. It needs efficient models. Similarly, China manufactures and automates so much more than the US ever could. LLMs to them are a topping, not the whole meal like they are in the US.

reply
HDBaseT
29 minutes ago
[-]
You can still make money on open weight models.

The compute required to run these models is still very far out of reach for the average consumer, and even for keen enthusiasts, so they still sell inference, whilst also getting consumer goodwill for providing open weights.

reply
datadrivenangel
1 minute ago
[-]
And the efficiency! Big accelerator cards are ~100x the throughput per watt in terms of raw processing power.
reply
try-working
1 hour ago
[-]
Correct. Open source is a PR and marketing strategy for new labs, regardless of origin.

https://try.works/#why-chinese-ai-labs-went-open-and-will-re...

reply
mystraline
1 hour ago
[-]
That's because the USA has really nothing big to export. Yay, designs.

China? I'm getting ready to watch the URKL (universal robot knockout league) go on. The USA is dicking around with failed robot dogs.

The USA has been a failed country, coasting on massive inertia. An article I can't find anymore showed the USA excelling in 8 of 64 tech areas; China was excelling in 56 of 64.

reply
2ndorderthought
1 hour ago
[-]
I believe it. The US intentionally lacks accountability to prop up the already wealthy in almost all of its ventures, which socializes losses and privatizes gains. It's an economic model that guarantees deterioration and stagnation.

Dodging politics: the power structures in US industry need serious revamping.

reply
mrleinad
17 minutes ago
[-]
China is going to be the next Germany: a loser in the new world without globalization
reply
sillysaurusx
33 minutes ago
[-]
If this is true, then why are most of the companies that change the world founded in the US?
reply
js8
2 hours ago
[-]
What is the business model of Wikipedia? I don't think there is any.

Not everything good in our society needs to have a "business model". People still work on it. It's FINE.

reply
avidphantasm
2 hours ago
[-]
Ultimately, information is a public good: it is non-excludable (you can’t stop people from using it) and it is non-rival (we can all use it at the same time). Public goods are often very useful, and because they are non-excludable and non-rival, ultimately can’t have a market-based business model. I would class open-weights AI models as public goods, and would support government expenditure to produce them.
reply
sroussey
2 hours ago
[-]
> What is the business model of Wikipedia?

Donations. Have you donated lately?

Wikipedia is cheap compared to creating and training models.

I don’t think donations will suffice at all.

As an example, we had millions of web developers download and install Firebug before browsers shipped their own dev tools. Donations over the course of multiple years would have paid my salary for a month if I were not a volunteer.

But from the “it’s fine” point of view, models will be baked into your OS.

Then later, models will be embedded into hardware. Likely only the OS makers' models.

reply
phainopepla2
2 hours ago
[-]
Training AI models is capital intensive, though. Unless there's some sort of mega-crowdfunding effort for open weight model training there needs to be a way to recoup that money on the other end. Either that or state sponsorship I guess
reply
try-working
1 hour ago
[-]
Open sourcing models is a marketing strategy. Chinese labs and small international labs have no name recognition or distribution, so unless they become a hot topic for a while, nobody is going to bother trying out their models. Open source gets them that, and is essentially a tax on newcomers. When you start out you simply have no other option but to open source your models.

So, the business model of open models is the same as closed models: Sell inference. Open source is marketing for that inference.

https://try.works/#why-chinese-ai-labs-went-open-and-will-re...

reply
kranke155
26 minutes ago
[-]
China’s long term goal might just be to own the chip layer alongside everything else, and outproduce the US in data centers.

Frontier US labs could still have an advantage for a long time, but many use cases would start gravitating towards Chinese models if they 10x the data centers and provide similar quality inference for a third of the cost.

reply
PAndreew
3 hours ago
[-]
Perhaps you can create a compelling UX around it and sell it as a subscription. "Normies" will not be able or willing to build it themselves. You can then patch the model and ship new features around it as it evolves. For example, I have built an ambient todo list / health data extractor using Gemma 4 2EB and Whisper. Nothing to brag about, but it does a fairly decent job even in foreign languages.
reply
karussell
3 hours ago
[-]
> What is the business model of open weight AI?

This is what I do not understand either, and advertising the expertise and the more advanced models is also the only thing that comes to my mind.

For the past month I have been using Gemma 4 locally on an MBP M2, successfully, for many search queries (Wikipedia-style questions), and it is really good, fast enough (30-40 t/s), and feels nice as it keeps these queries private. But I don't understand why Google does this, and so I think "we" need to find a better solution where the entire pipeline is open and the compute somehow crowdfunded. Because there will come a time when these local models get more closed, like Android is closing down. One restriction they might enforce in the future could be crippling the models for "sensitive" topics like cybersecurity or health. Or the government could even feel the need to force them to do so.

reply
2ndorderthought
3 hours ago
[-]
Why would you want to support all users' simple queries on your AI data center if they could run them on their own computers?

It also builds goodwill, and it shows research prowess.

For China it's different. They need to show Americans, who don't trust them at all because of propaganda, that they have no tricks up their sleeve. It also doesn't hurt when Chinese companies drop free models people can run at home that are about as good as Sonnet. Serious mic drop.

reply
TheJCDenton
2 hours ago
[-]
Very good point on using local AI to avoid data center costs.

Running AI models on local hardware was exploratory at first, and if it's so easy today it's thanks to open source. It's a little bit coincidental that we have this today, and that mainstream hardware has this capability. The fact that a phone can run very small models is exploratory, or some kind of marketing opportunity at best.

Why would hardware companies ship cards with more AI capabilities (like more VRAM) in the foreseeable future? On what grounds will the marketing for on-device AI keep generating interest? For something this important, it's very uncertain. But above all, it should not depend on such brittle justifications.

Showing goodwill in distribution and research prowess today is positive communication, but it can turn into exactly the opposite if/when an attack using those small models reaches a high-value target.

For China, the cultural difference is so huge it's difficult to say. I would think they first and foremost need to show everyone inside and outside of China that they match American models. Second, I would say that where Americans prefer a few very powerful companies from the get-go, because those can rapidly leverage a lot of capital to industrialize, China will prefer leveraging many smaller companies exploring lots of things simultaneously (so doing a lot of research), THEN creating legislation to let only the best (or a few) survive. In the end it's the same result (monopoly or oligopoly), but China may have a stronger core (research) and America stronger productive capital, which may prove obsolete... In the long run, on either side, it's a gamble, again.

reply
2ndorderthought
1 hour ago
[-]
They have already shown that their models match or exceed American ones in various cases. For cheaper, too.

I disagree on the second point. I think most Americans don't prefer less competition; that's a bit antithetical to the free market.

I doubt the Chinese government cares as much about controlling a few companies as you think it does.

China has a few things going for it beyond research. They are mission-driven, they actually have needs for this technology, and meeting those needs will push their entire economy forward, as they are the world's largest manufacturer. They are also huge exporters and need buckets of customer support across various languages.

China also has considerably stronger infrastructure for electricity, etc. Even with an Nvidia embargo they are doing more than showing up.

I don't think it's a matter of who "wins". There is no winning. I think China stands to gain far more from LLMs than the US does, and they have proven they don't need the US to do it, even with the US trying to sabotage their every move into the space. The game is already more or less over in my mind.

If anything, I see LLMs as having a huge market in China, and now the US can't even sell them there.

All I care about is: if I have to use this technology, let me run it locally to avoid the surveillance capitalism aspect. That seems to be the real reason the US has propped up its economy in anticipation of this technology. Yet it doesn't benefit the US, nor me, in the long term.

reply
karussell
3 hours ago
[-]
Indeed cost can be another factor. Maybe also the main reason why Chrome added an offline model.
reply
2ndorderthought
2 hours ago
[-]
That, and it's lucrative for Android/Chrome to have a text summarizer model embedded on your phone, probably for government contracts and data exfiltration, but we won't go there.
reply
majormajor
2 hours ago
[-]
> What is the business model of open weight AI? I don't think there is any. At best it can serve as an advertisement for the more advanced models you sell.

I don't think local will necessarily be open-weight. And then it's not that different from personal computing: you're giving up the big lucrative corporate mainframe, thin-client model for "sell copies to a ton of individuals."

So it'd be someone else (an Apple, or a present-day equivalent of 1976 Apple) who'd start eating into that. There are a few on-device things today, but not for much heavy lifting. At first it's a toy; it could become more realized on a still-toy-like basis, like a fully local Alexa; in the future it grows until it eats 80-90% of the OpenAI/Anthropic use cases.

Incumbents would always rather you pay a subscription or per-use forever, but if the market looks big enough, someone will try to disrupt it.

reply
treis
53 minutes ago
[-]
Compute has gone back and forth from mainframe/thin client to fat client a few times already. LLMs will probably follow at some point but I think it's going to take a long time.

The cost to transmit text is basically free and instantaneous. The rent (i.e. a GPU in a data center) vs. buy decision is going to favor rent until buy is a trivial expense. Like the $50-100 range.

Even then, an LLM that just works is easier than dealing with your own.

reply
zozbot234
50 minutes ago
[-]
Except that buy is a trivial expense because the hardware has been bought already. You've got a whole lot of iGPU and dGPU silicon that's currently sitting idle as part of consumer devices and could be working on local AI inference under the end user's control.
reply
sumeno
1 hour ago
[-]
If a local model hits critical mass the business model is to use it to shape opinions in a way that is advantageous for the company/owners.

Much like the current Twitter model, being able to put your thumb on the scale of "truth". Bake a stronger bias towards their preferred narrative directly into the model. Could be as "benign" as training it to prefer Azure over AWS. Could be much worse.

reply
worldsayshi
3 hours ago
[-]
It should be feasible to crowdfund training runs, right?
reply
dmd
3 hours ago
[-]
A training run costs somewhere in the neighborhood of a billion dollars. That’s a thousand millions.

How many crowdfunded projects do you know that have raised even one percent of that? Who’s going to be in charge of collecting that scale of money? Perhaps some sort of company formed for the benefit of humanity, which will promise to be a non-profit? Some sort of “Open” AI?

Oh, wait.

reply
iugtmkbdfil834
2 hours ago
[-]
<< That’s a thousand millions.

I can't say that you are lying, and you are not exactly exaggerating either. It is true that a new SOTA model -- from literal scratch -- would be expensive.

But, and it is not a small but, is the starting point really zero?

reply
dleslie
2 hours ago
[-]
This is where government funding can play a role.

Sometimes there are things where the public good is best served with public expenditure.

reply
CamperBob2
1 hour ago
[-]
"Government funding" these days would mean that Trump pays Elon Musk (or more likely vice versa) to make Grok 4.20 the only legal LLM for use by Americans.
reply
dleslie
1 hour ago
[-]
Outside of the USA it would not look like a wealth transfer to an oligarch.

Not every country is in a crypto-libertarian race to hoard power and wealth.

reply
fragmede
2 hours ago
[-]
The business model is avoiding the total lack of attention Qwen and Kimi would get if their models weren't downloadable. Before releasing the weights, basically zero attention was paid to them in the Western world, for whatever reason. By releasing the weights, they're relevant in the West. The business model is to get people in the West, who otherwise would never have heard of them, to pay to use their platform hosting their AI. As you said, advertising/marketing, essentially.
reply
ios-contractor
42 minutes ago
[-]
I don't think it should be local vs. cloud AI. I think local AI should be treated as a separate product: local AI should do the things that really don't need cloud AI, with cloud AI used as a fallback. That would cut a lot of costs.
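(That split is easy to sketch; both endpoints speak the OpenAI-style API here, and the escalation test is a deliberately dumb placeholder, since real routers use heuristics or a classifier.)

```
# Sketch: local-first with cloud fallback. Assumes a local OpenAI-compatible
# server on :8080 and a cloud key in OPENAI_API_KEY; the "needs cloud" test
# is a placeholder.
import os
from openai import OpenAI

local = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")
cloud = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def ask(prompt: str) -> str:
    msgs = [{"role": "user", "content": prompt}]
    # Try the cheap local model first...
    reply = local.chat.completions.create(model="local", messages=msgs)
    text = reply.choices[0].message.content
    # ...and escalate only when it punts (placeholder heuristic).
    if "i don't know" in text.lower():
        reply = cloud.chat.completions.create(model="gpt-4o-mini", messages=msgs)
        text = reply.choices[0].message.content
    return text
```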
reply
slicktux
2 hours ago
[-]
I’m just waiting for the US government to implement its own local AI. Which will eventually lead to them open sourcing it, because it's taxpayer funded, and given that the NSA has decades' worth of internet data to train on, the open weights would be just as good as any company's…
reply
aabhay
3 hours ago
[-]
Disagree with this. When cost becomes an important factor, or the free-but-worse option becomes compelling and accessible (i.e. an on-device agent via Apple-style UX), there has been a significant shift in user behavior towards local. Think about things like removing backgrounds from photos or OCR on PDFs: who uses paid services for casual usage of those?
reply
furyofantares
1 hour ago
[-]
What's the gamble here exactly? What agency do we have in it right now?
reply
iLoveOncall
2 hours ago
[-]
The mainstream audience does not have the faintest idea that "local AI" is even a thing.
reply
CamperBob2
1 hour ago
[-]
Just as their counterparts in 1975 had no idea that "personal computers" were even a thing.

Read through a 1970s-era issue of Popular Electronics or Byte, and then spend some time surfing /r/LocalLlama. You'll get a sense of real-time deja vu, like you're watching history unfold again.

reply
irishcoffee
1 hour ago
[-]
I own two 5070 Ti cards in a rig whose time I would gladly donate to a distributed model-training effort. The kicker is the training data. I would want to gate the data to anything from before 2022. I don’t know how to coordinate that, but I would really like to be involved in something like this. SETI, for LLMs.
reply
AlexCoventry
1 hour ago
[-]
Bandwidth is the killer, in distributed LLM training.
reply
irishcoffee
34 minutes ago
[-]
What’s the rush?
reply
pronik
2 hours ago
[-]
They will be, and that moment is not that far off. We've got the progression in place already: first, only large data centers could run performant LLMs; we are now firmly in "a bunch of servers with a couple of H100s each" territory, slowly moving into "128 GB VRAM on a MacBook Pro or a Strix Halo". Within the next year, the pattern of "expensive remote LLM for planning, local slow-but-faster-than-human LLM for execution" will become the norm for companies, slowly shifting to "using a local LLM for everything is good enough". And then we'll have the equilibrium we already have with the "classic cloud": you either self-host or pay for flexibility and speed. The question will be: how much of the current compute capacity craze will local hosting give the kiss of death to, and what will that mean for the market?
reply
reisse
53 minutes ago
[-]
> They will be, and that moment is not that far off.

It's here, right now. I'm running quantized Qwen and Gemma on a decent but three-year-old gaming rig (think RTX 3080 12GB and 32 GB RAM). Yes, it's slow, and it has a small context window. But it can (given a proper harness) run through my trip photos and categorize them. It can OCR receipts and summarize spending. It can answer simple questions, analyze code, and even write code when little context is required. I could probably get a half-decent autocomplete out of it if I bothered with VS Code integration. "128 GB VRAM on a MacBook Pro or a Strix Halo" is already a minimum viable setup for agentic coding, I think.

> And then we'll have the equilibrium we already have with the "classic cloud": you either self-host or pay for flexibility and speed.

Currently, it works exactly the other way around. The cloud versions are orders of magnitude cheaper than self-hosting, because sharing can utilize servers much more efficiently. A company can spend half a million bucks on a rig running GLM 5.1 and get data security, flexibility, and lack of censorship, but oh, it's so expensive compared to Anthropic per-seat plans.

reply
RataNova
2 hours ago
[-]
The biggest impact of local models may simply be that they prevent remote inference from becoming the only game in town
reply
mattlondon
3 hours ago
[-]
Yet there is another post a few rows down where people are losing their shit because Chrome ships a local LLM that uses a couple of GB of space for local inference.

Damned if they do, damned if they don't.

reply
dlcarrier
3 hours ago
[-]
Maybe don't use gigabytes of bandwidth and storage space, without asking.
reply
hparadiz
2 hours ago
[-]
Easy. Stop using Chrome.
reply
userbinator
1 hour ago
[-]
If I want a model I'll go download one. (And I did, not long ago, to play around with image generation.)
reply
bytecauldron
2 hours ago
[-]
This is a bit disingenuous. People aren't losing their shit about a local model being installed. It's the lack of user autonomy. Just give the option to download a model instead of a silent install. It's not that hard. This is how every other local option works.
reply
wmf
2 hours ago
[-]
AFAIK Apple and MS auto-download local models.
reply
aabhay
3 hours ago
[-]
This is a weird take. If it's not opt-in, or you're shoehorning it into a browser, then that sucks. Nobody is getting enraged that an app for running local LLMs downloads data to do so.
reply
avadodin
2 hours ago
[-]
Although you can opt out, and in some cases even disable the download feature at build time, most local LLM tools are too download-happy by default.
reply
fg137
2 hours ago
[-]
You might want to read the comments to understand what people are actually complaining about.

This comment is quite dishonest about the nature of the discussion.

reply
themafia
3 hours ago
[-]
If it was such a good and laudable idea why didn't they tell me about it before they activated it? It seems to me like they avoided it in the hopes that I wouldn't notice, because, presumably if I had, I would have IMMEDIATELY disabled it.

Also, why doesn't their task manager show that it's actually the one downloading? Why does it go out of its way to hide this activity?

Since I have conky on my desktop, I caught this immediately and took the action I preferred with my own computer, which was to _immediately_ disable it.

reply
StilesCrisis
2 hours ago
[-]
I'm guessing you immediately close the What's New Chrome tab when you update?

https://developer.chrome.com/blog/new-in-chrome-148#prompt-a...

https://www.google.com/chrome/ai-innovations/

They have absolutely not been shy about any of this.

reply
themafia
2 hours ago
[-]
I've never had a "What's new" tab ever open because I disable the customized home page where that's displayed. I'm guessing you're not aware that's an option.

Please show me where in either of those documents it explains it's going to download a 4GB model.

reply
crazygringo
1 hour ago
[-]
I use an extension that gives me a customized homepage, but I still always get the "what's new" tab on every major version upgrade.

It's a totally separate tab that opens. It's got nothing to do with what you use as your homepage.

reply
ekjhgkejhgk
3 hours ago
[-]
You don't understand the difference between "I run a local LLM because I chose to" vs "The browser chose to run a local LLM and I have no say"? You don't understand?

Not to mention that the LLM that I choose to run requires a monster machine and is infinitely more capable than whatever google chose to put on their browser?

I mean, none of this affects me because I don't use chrome, obviously, but you don't see the difference? Bewildering.

reply
StilesCrisis
2 hours ago
[-]
Did you opt into WebGPU? QUIC? Canvas 2D? Brotli? Browsers don't work that way.
reply
za_creature
2 hours ago
[-]
The size difference between the local LLM and all of the above is about... the size of the local LLM.
reply
deivid
16 minutes ago
[-]
Sounds great, but if you didn't cave to Apple/Google (e.g. graphene, lineage), models are not built in. Every app needs to ship its own models, and they are not tiny.

Is there a solution for this? I'm currently making users download onnx models if they want a feature, but it's not a smooth UX.

reply
Guillaume86
2 hours ago
[-]
I think we should separate the private AI discussion from the local AI discussion. The pragmatic choice for running big LLMs is one or several big servers online, but that doesn't mean private companies should be the only ones to run them.

A self-hosted inference solution that offers good tenant isolation guarantees (ideally zero trust) and is easy enough to deploy and maintain (think Plex for AI) would be my choice for privacy. Now, to be honest, I have done zero research on this and have zero idea how feasible it is; maybe it already exists and there are some Discord servers I should join?

Edit: I don't need to mention it here, but what's incredible is that open models are in the ballpark of the best commercial models, so supposedly the hardest part by far is already solved.

reply
FrasiertheLion
42 minutes ago
[-]
Another option is verifiably private inference with open source models running inside secure enclaves on the cloud (using NVIDIA confidential computing), where the enclave code is open source and verified via remote attestation upon connection, cryptographically proving that the inference provider cannot see any data. Tinfoil: https://tinfoil.sh/ is a good example of this (disclaimer: I'm the cofounder). You can read more about how this works here: https://docs.tinfoil.sh/verification/verification-in-tinfoil

>that open models are in the ballpark of the best commercial models

This is basically true for certain tasks. As an example, chat interfaces are not well poised to take advantage of model intelligence beyond what the best open-source models already provide. But coding harnesses still benefit from greater model intelligence, and even more so, the reinforcement learning that tightly interlinks the provider's coding harness (claude-code, codex) with the model's tool-calling interfaces is another reason for the discrepancy in effectiveness, even when controlling for model intelligence. The opencode founder (an open-source coding harness that supports different model providers) was recently complaining about the challenges of making the harness work well with different providers: https://x.com/thdxr/status/2053290393727324313

reply
wrxd
2 hours ago
[-]
The example in the post confirms my theory that for local models to succeed they need to be "good enough", not big enough to compete with frontier models.

They need to be able to do a small task well and they need to be able to run reasonably on consumer-class devices. Even better if they can run on mobile phones.

In my experiments with local LLMs, I noticed that while increasing the size of the model is nice, the real thing that turns a barely usable model into something useful is the ability to use tools. Giving my models the ability to search the web and fetch web pages did way more to solve hallucinations than getting a bigger model did. And the web doesn't have a training cutoff. Sure, the bigger model is probably better at using tools, but I often find the smaller models to be good enough.
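(The "give it tools" part can be as small as fetching a page and stuffing it into the context; here's a bare-bones version, with a local OpenAI-compatible endpoint assumed and the URL purely illustrative.)

```
# Bare-bones grounding: fetch a page, put it in the prompt, ask the local model.
# Assumes: pip install requests openai, plus a local server on :8080.
# Fetching raw HTML is crude; real harnesses extract the text first.
import requests
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

url = "https://en.wikipedia.org/wiki/Knowledge_cutoff"  # illustrative
page = requests.get(url, timeout=10).text[:20000]       # crude truncation

resp = client.chat.completions.create(model="local", messages=[
    {"role": "system", "content": "Answer ONLY from the provided page text."},
    {"role": "user",
     "content": f"Page:\n{page}\n\nQuestion: what is a knowledge cutoff?"},
])
print(resp.choices[0].message.content)
```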

reply
robot-wrangler
1 hour ago
[-]
Entrenched interests are going to do everything to stop local, but there's at least a few technical reasons to believe small and specialized models could be the norm eventually. If that does happen, local will follow.

TFA is focused on whether big models are necessary for what users want. There's some evidence they may never actually be reliable enough unless a) mechanistic interpretability matures far enough or b) our multi-agent systems all become multi-model.

For (a), advancement in MI might fix problems with big models, but would also mean we can maybe get unified representations, and just slice and dice the useful stuff out of huge models, getting only what we need without the junk. Ability to isolate problems won't really come without bringing the ability to isolate functional subsystems. Only want logic? Only vision? Just cut it out of the big monster and enjoy reduced costs and surface area for problems.

For (b), just look at stuff like the evil vector, or the category of hallucinations specific to tool use. Without a complete solution for helpful/honest/harmless alignment, it seems likely that creativity and rigor (and many other things) are fundamentally at odds. If you start to need many models for everything anyway, why do we need the huge expensive do-everything ones? So specialization also becomes a pressure to shrink everything towards minimal reliable experts.

reply
scriptsmith
2 hours ago
[-]
I've got some demos of what the new Prompt API in Chrome that uses a local model can do: https://adsm.dev/posts/prompt-api/#what-could-you-build-with...

As OP says, it shines in constrained environments where the model is transforming user-owned data. Definitely less useful for anything more open-ended.

reply
robot-wrangler
40 minutes ago
[-]
> I've got some demos of what the new Prompt API can do:

> Use surrounding context to rewrite your ad copy:

Yup, that's the plan. No local model, no webpage; more, better and cheaper adtech extortion/surveillance for vendors while everyone else pays for the juice and hardware degradation.

reply
2ndorderthought
2 hours ago
[-]
Yeah, I do not recommend treating Chrome's Prompt API as a good example of local LLMs. It's fine and all, but it's really weak. 8B models from a year ago are better in some ways. And a lot of the recent model drops are meaningfully better.
reply
scriptsmith
2 hours ago
[-]
It's based on a Gemma 3n model, and yeah it's not the best. But if you have a use case that needs constrained JSON output for example, it's pretty neat.

Maybe it would do better with the new Gemma 4 models, which the Chrome devs have been hinting at moving to. And why the API doesn't let you introspect / pick the model, I'm still not sure.

reply
dakolli
2 hours ago
[-]
So you're running an LLM, on a 1,000-watt power supply, to do data transformation that deterministic processes would be much better suited for. Wild.
reply
manlymuppet
48 minutes ago
[-]
People are trying to “make the best software”, though.

I think the quixotic accelerationists of AI are more or less a vocal minority of the people who make software, and the choice of online APIs over local systems is largely a choice made for users, rather than developers' laziness.

You can do more and better with proprietary AI today than with local models. There is no getting around that. Even if local AIs get better, being on the cutting edge of LLM performance is often a very worthy investment.

Most people won’t settle for a product if it’s not the very best and incredibly convenient. That’s a high bar, and local AI often doesn’t meet those standards.

HN’s insistence on treating all users like they are open-source, privacy-first, self-hosted Linux fanatics is painfully corny.

reply
timeattack
3 hours ago
[-]
My problem with LLMs (apart from the philosophical aspects and economic impact) is that it would be unlikely for any of us to be able to train something functional locally (toy-like LLMs, sure, but something really useful, no). Apart from requiring immense computing power, it also requires a dataset which is, for the most part, obtained illegally.
reply
kibwen
3 hours ago
[-]
This seems overly pessimistic.

I may personally be of modest intelligence, but to acquire the intelligence that I do have, I did not need to train on every book ever written, every Wikipedia article ever written, every blog post ever written, every reference manual ever written, every line of code ever written, and so on. In fact, I didn't train on even 1% of those materials, or even 0.00000000001% of those. The texts themselves were demonstrably not a prerequisite for intelligence.

At minimum, given that it only took me about 20 years of casual observation of my surroundings to approximate intelligence, this is proof positive that the only "dataset" you need is a bunch of sensors and the world around you.

And yes, of course, the human brain does not start from zero; it had a few million years of evolution to produce a fertile plot for intelligence to take root. But that fundamental architecture is fairly generic, and does not at all seem predicated on any sort of specific training set. You could feasibly evolve it artificially.

reply
krupan
2 hours ago
[-]
What does this even have to do with the parent? Your capabilities have nothing to do with LLM capabilities. The two work in completely different ways. The reason LLMs work is because they are huge and have been trained on vast amounts of data, full stop. Sure, there's potential someday to get something useful using less data, but we aren't there.
reply
avadodin
1 hour ago
[-]
You are right on the limitations of the architecture but I wouldn't call LLMs huge. Flagship models maybe but that's just because they don't scale very well.

A universal translator with image and voice recognition and a decent breadth of encyclopedic knowledge, in only a small fraction of an English Wikipedia dump (6GB vs. 20+GB), is not "huge".

It is probably closer to the theoretical limit than anyone could have expected.

reply
_heimdall
3 hours ago
[-]
You're also embodied and experiencing the world around you with more senses than only the ability to read text.
reply
rogerrogerr
2 hours ago
[-]
> the only "dataset" you need is a bunch of sensors and the world around you.
reply
dlcarrier
3 hours ago
[-]
Not the whole thing, at least with current technology, but LoRAs are really good at fine-tuning, and can be generated in a few hours on a high-end gaming computer, so as long as the base model is in your language, you likely have enough spare computing power, in whatever electronics you own, to train a few LoRAs a month.

In the future, when regular home computers have the capabilities of modern servers, we'll be able to train the entire LLM at home.

reply
krupan
2 hours ago
[-]
And this is important because even though you are running a model locally, it's still a proprietary model. You have no say in what it was trained on, how that training data is labeled, what the guardrails are, what biases it might have, none of that.
reply
pronik
2 hours ago
[-]
There is so much technology that we are unable to reproduce locally, I don't think LLMs are in any way different. There will be large LLM manufacturers, small LLM manufacturers, LLM artisanals, LLM enthusiasts and of course LLM consumers, just like with everything.
reply
woah
1 hour ago
[-]
Can you make your own CPU, locally?
reply
Ucalegon
3 hours ago
[-]
Depends on the domain. There are plenty of use cases where the data needed for training is available for personal or non-commercial use. At that point, it comes down to the compute/time to do the training, which, if you are willing to wait, consumer-grade hardware is perfectly capable of handling for useful models.
reply
RataNova
2 hours ago
[-]
That's a fair concern, but I'd separate training from inference here
reply
cyanydeez
3 hours ago
[-]
That sounds like government. So your problem is mostly that you expect to have a collective social effort, but not enough to pay for it as a public good.
reply
vb-8448
3 hours ago
[-]
> Use cloud models only when they’re genuinely necessary.

The problem is that it's much easier to use the SOTA models (especially if they are subsidized) than to spend time fiddling with the knobs on a local one.

I just realized this with coding agents: yeah, you probably shouldn't always use the latest version at xhigh, but you will end up doing it because you get the job done in less time, with less "effort", and basically at the same price.

I guess we'll see a real effort toward local AI only when the major vendors start billing based on actual token usage.

reply
lelanthran
2 hours ago
[-]
> The problem is that it's much easier to use the SOTA models (especially if they are subsidized) instead of spending time fixing the knobs with the local one.

That's not a problem, that's a feature; I have something like 8 tabs open to different free-tier providers. ChatGPT, Claude and Gemini are the SOTA ones.

I have no problem maxing one out, then moving to the next. I can do this all day, having them implement specific functions (or classes) in my code. The thing is, because I actually know how to write and design software, I don't need to run an agent in a loop to produce everything in a day; I can use the web chatbots with copy/paste to literally generate thousands of lines of code per hour while still having a strong mental model of the code, so that I can go in and change whatever I need to.[1]

---------------------

[1] Just did that this morning on a Python project: because I designed what I needed, each generation was me prompting for a single function. So when I needed to add something this morning, I didn't even bother asking a chatbot to do it; I just went directly to the correct place and did it.

You can't do that if you generate the entire thing from specs.

reply
vb-8448
2 hours ago
[-]
We are speaking about local AI, and having all these SOTA models basically for free is blocking the progress of local or independent third-party setups.
reply
lelanthran
1 hour ago
[-]
Maybe I should have clarified what the feature is (after re-reading my post, I see that I basically just ended it after adding the footnote).

The feature of using all these SOTAs to exhaustion on the free tiers is burning their VC money!

The more I use for free, the more of their money I burn, the closer we'll get to actual 3rd-party and independent setups (local or otherwise).

reply
RataNova
2 hours ago
[-]
The path of least resistance usually wins, especially when the pricing hides the real cost
reply
Analemma_
3 hours ago
[-]
I'm also just not seeing good performance from local models. Every time a thread about LLMs comes up, there are tons of people in the comments insisting that they're getting just as good results from the latest DeepSeek/qwen/whatever as with Opus, and that just hasn't been my experience at all: open-source models just fall over completely compared to Claude when asked to do anything remotely complicated.

I have a sneaking suspicion this is kinda like the situation with Linux in the 90s, where it kinda worked but it reeeeeally wasn't ready for the home user, but you had a lot of people who would insist to your face everything was fine, mostly for ideological reasons.

reply
lelanthran
2 hours ago
[-]
> Every time a thread about LLMs comes up, there are tons of people in the comments insisting that they're getting just as good results from the latest DeepSeek/qwen/whatever as with Opus, and that just hasn't been my experience at all: open-source models just fall over completely compared to Claude when asked to do anything remotely complicated.

Different usage patterns - you want to issue a single spec then walk away and come back later (when it has consumed $10k worth of API tokens inside your $200/m subscription) to a finished product.

Many people issue a spec for a single function, a single class, or similar. When you break it down like that, the advantage of SOTA models shrinks.

reply
vb-8448
1 hour ago
[-]
My experience is that in medium/big codebases, even with single functions, going with xhigh is basically better from a user perspective (faster to get the result, and you can trust it), while with lower models (e.g. Sonnet instead of Opus) you always have to carefully review the output, because 1 time in 10 it will hallucinate; you won't catch it immediately, and at some point it will bite you.
reply
lelanthran
1 hour ago
[-]
> My experience is that in medium/big codebases, even with single functions, going with xhigh is basically better from a user perspective (faster to get the result, and you can trust it), while with lower models (e.g. Sonnet instead of Opus) you always have to carefully review the output, because 1 time in 10 it will hallucinate,

What do you mean "trust it"? It sounds like you want to vibe-code (never look at the output), and maybe for that you need SOTA, but like I said in a different comment, I can easily generate 1000s of lines of code per hour just prompting the chatbots.

I don't, because I actually review everything, but I can, and some of those chatbots are actually SOTA anyway.

reply
vb-8448
1 hour ago
[-]
With SOTA models I can just set up the instructions (even if they're a little fuzzy), go away for 10 or 15 minutes, come back, and just check the result and adjust where necessary (and most of the time small adjustments are necessary, but the overall work is pretty good).

With subpar models I must be more careful about the instructions and check step by step, because the path it chose is wrong, or it did something I didn't ask for, or the agent got stuck in a loop somewhere.

reply
kgeist
2 hours ago
[-]
It depends a lot on how you run those models. I think a lot of the disagreement is because of that. A lot of people run local models with incredibly small context windows (which makes an agentic LLM go in circles), use very small quants (like 4-bit => huge degradation), don't set the recommended parameters (like top-p/temperature), or download GGUFs with broken chat templates. And then they claim model X is bad :)

I'm currently running both Sonnet 4.6 and Qwen 3.6-27b on the same codebase (via OpenCode, the parameters were carefully tuned to have a good quality/context size ratio), and on this project, they both struggle with complex non-trivial tasks, and both work flawlessly otherwise. Sonnet 4.6 understands the intent better if my task is ambiguously formulated, but otherwise the gap is pretty small for coding under a harness.
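(For anyone hitting the looping problem, the fix is usually just those knobs; llama-cpp-python shown below, and the sampling values are examples to be taken from your model's card, not universal constants.)

```
# The knobs that most often separate "model X is bad" from "model X is fine".
# Values are examples -- take temperature/top_p from YOUR model's card.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model-q5_k_m.gguf",  # prefer >=4-bit quants
    n_ctx=32768,      # agentic use dies with tiny context windows
    n_gpu_layers=-1,  # offload what fits to the GPU
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "..."}],
    temperature=0.7,  # the recommended sampling params matter a lot
    top_p=0.8,
)
```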

reply
bilbo0s
2 hours ago
[-]
This.

I’ve begun to suspect that most people are running different hardware. Sure, if you run the latest DeepSeek flash on your brand new M5 with 128GB, maybe you get acceptable performance?

But honestly, how many people have an extra $9000 laying around these days?

Right now, running with acceptable performance is kind of a luxury. I wish the people who always say - “This is great!” - would realize that not everyone has their hardware.

reply
vb-8448
1 hour ago
[-]
Actually, even with $9k hardware you won't get good enough performance. There is an interesting video from antirez on trying to run DeepSeek v4 flash at 2 bits on an M3 Max 128GB... and the result is kind of disappointing: as soon as the context starts growing you are around 20 tokens/s.
reply
kgeist
42 seconds ago
[-]
DeepSeek v4 Flash is around the same level as Qwen3.6-27b at agentic coding (according to artificialanalysis.ai), and Qwen3.6-27b runs very fast on my RTX 5090 ($4-5k) without noticeable degradation at 5 bits (Unsloth quants) and 8-bit KV cache (especially after llama.cpp's Hadamard transform commit): up to 262k context, 40-60 tok/sec generation. I'm currently successfully using it on a project, quality and speed are good.
reply
zozbot234
1 hour ago
[-]
Prefill performance used to be the real bottleneck on antirez's DS4, and that's been greatly improved by now; it doesn't perceivably slow down with growing context.
reply
hyfgfh
1 hour ago
[-]
Local LLMs are the only viable thing, and probably the only thing that will remain once the hype dies down.

A smaller, cheaper local model can deliver most of the value for coding, while we still use some services for code review and security compliance.

Once the VC money runs out and they start to charge the real price, the C-level will have to impose budgets or limits. The current pissing contest over who can spend the most tokens is both ridiculous and shortsighted.

reply
revolvingthrow
3 hours ago
[-]
A local Answer Machine is the dream, especially when the internet is decaying and generally on its last legs, but the hardware requirements seem like a huge mountain to climb. Things are progressing tremendously - DeepSeek v4 flash is very good for what it is - but even that goes beyond any reasonable local setup, which imo is 128 GB RAM + 16 GB VRAM. Four RAM slots on a consumer board crater RAM speed, 256 GB Macs are too expensive, and even then the inference is ungodly slow.

On the other hand… the v4 flash model is actual magic compared to what was available 2 years ago. If the rate of improvement stays as is, we'll get similar performance in a ~120B model in a year, which is viable (if expensive) for everyman hardware. Possibly you'll be able to run its equivalent on a ~$1200 laptop by 2028, which for me-in-2020 would sound straight out of a sci-fi movie. A good harness that lets the model fetch data from other sources, like a local Wikipedia copy from Kiwix, could do a lot for factual knowledge too; there's only so much you can encode in the model itself, but even a cheapish (pre-current-prices) 2TB drive can hold an immense amount of LLM-accessible data.

Big caveat: I don't see local models for programming or generally demanding agentic tasks being worth it anytime soon. You likely want bleeding-edge models for that, and speed is far more important. Chat at 20 tok/s is fine; working on even a small codebase at 20 tok/s, especially with a noticeably weaker model, is just a waste of time. Maybe it's PEBKAC, but I have no idea how people make any meaningful use out of Qwen 3.6.

reply
zozbot234
2 hours ago
[-]
> and even then the inference is ungodly slow.

This is the wrong way of putting it. Local inference with SOTA models is all about slowing down compute for the sake of fitting on bespoke repurposed hardware. You don't need to go fast if you have the whole machine to yourself 24/7. Cloud AI vendors can't match that kind of economics.

reply
Animats
2 hours ago
[-]
Question: for software development, how much of an AI do you need, and can it be run locally? Can someone train something that knows a lot about software but lacks comprehensive coverage of history, politics, and popular culture?
reply
mrkeen
2 hours ago
[-]
This is a good snapshot of things:

https://news.ycombinator.com/item?id=48050751

A specialist handrolls a cut-down framework to power a 1 or 2 bit quantised version of a cut-down sort-of-frontier model.

It can be yours if you have 128GB or 256GB of RAM.

reply
dd8601fn
2 hours ago
[-]
The ones that are good for more than elaborate auto-complete are pretty hefty, but it can be done. They’re still not Opus behind claude code.
reply
holtkam2
3 hours ago
[-]
I wish I could upvote this twice. We (devs) really REALLY need to consider on-device compute before going to the cloud for LLM inference.
reply
everlier
1 hour ago
[-]
There was never a better time to run LLMs locally. It's just a few commands from zero to a fully working LLM homelab.

```
harbor pull unsloth/Qwen3.6-35B-A3B-GGUF:UD-Q4_K_XL

# Open WebUI -> llama.cpp + SearXNG for web RAG + OpenTerminal as sandbox
harbor up searxng webui llamacpp openterminal
```

That's it; it's already better than Claude's or ChatGPT's app.

reply
hackyhacky
2 hours ago
[-]
I would like a standardized API for local AI to exist outside of the Apple ecosystem. The Prompt API in Chrome is halfway there.

* What is the answer to local AI for native apps on Windows?

* What is the answer to local AI for Linux?

This is a big opportunity for Linux, given the high quality of open-weight models. I hope some answer emerges before designs fracture and we get a dozen mutually incompatible answers.

reply
teravor
1 hour ago
[-]
> What is the answer to local AI for Linux?

run an ai api endpoint on a unix domain socket

reply
franze
2 hours ago
[-]
I researched that question for apfel https://github.com/Arthur-Ficial/apfel and the standardized API is the OpenAI API, so that's what I went with
reply
hackyhacky
1 hour ago
[-]
OpenAI's API is not local AI.
reply
zozbot234
1 hour ago
[-]
Most local AI servers expose that API.
reply
ksec
2 hours ago
[-]
While I agree that would be the goal, we are too early for that. Just like speech recognition used to require many servers in a datacenter, with you sending your data over; it now runs completely on-device.

We are at least 5 years away from that. And DRAM needs a substantial breakthrough in cost reduction.

reply
jjordan
3 hours ago
[-]
It feels like we're one technological breakthrough away from all of these data centers going up being deemed irrelevant.
reply
Lalabadie
3 hours ago
[-]
The cynical take is getting more and more to be the only rational one:

The promised mega-data center deals are meant to boost valuations today, not serve tons of customers three years from now.

reply
_heimdall
3 hours ago
[-]
It seems pretty clearly in line with the dotcom bubble to me. Every company claims to be a leading AI company, those building infrastructure are promising the moon and getting 1/3 of the way there, and no one knows how to monetize it to justify the hype or expense.
reply
jjordan
3 hours ago
[-]
oof, this bubble popping is gonna be brutal.
reply
krupan
2 hours ago
[-]
It took us only, what 70-ish years of computer and AI research to get to this point, so yeah, probably just one little thing and then we'll have it </sarcasm>

Seriously. I have never ever seen so many people so willingly drink the marketing kool-aid from companies selling their product before. It's scarier to me than any threats of AI actually disrupting society (because it is so far from being capable of doing that).

reply
i_love_retros
3 hours ago
[-]
What would that breakthrough be?
reply
Waterluvian
3 hours ago
[-]
Magic math and computer science that allows us to get the same quality response for a fraction of the GPU.
reply
intothemild
3 hours ago
[-]
That's already happening. Qwen3.6 and Gemma4.

Basically, small and medium models that are crazy well trained for their sizes.

Then we have a lot of speculative decoding stuff, like MTP and others, coming to speed up responses, and finally better quantisation to use less memory.

Local LLM is the future, and the larger labs know that the open models will eat their lunch once people realise that the gap is only a few months. If we were good with LLMs a couple of months ago, we're good with the open models now.

reply
krupan
2 hours ago
[-]
And how were those models developed and trained?
reply
lelanthran
2 hours ago
[-]
> And how were those models developed and trained?

That's irrelevant to my decision to use local or not.

reply
krupan
1 hour ago
[-]
That's not what this thread is about? We're saying some new breakthrough is needed, someone said it has already happened, and I'm asking if it really has. Has it? I don't think so; those models are not fundamentally different from other LLMs.
reply
lelanthran
1 hour ago
[-]
> We're saying some new breakthrough is needed, someone said it already has happened, and I'm asking if it really has.

I didn't read "and how were those models trained" as "Are we there yet?"

reply
YZF
3 hours ago
[-]
The current LLMs are also "magic" so anything is possible. AFAIK there is no proof that the current architecture is optimal. And we have our brains as a pretty powerful local thinking machine as a counter-example to the idea that thinking has to happen in data centers.
reply
_heimdall
3 hours ago
[-]
I want to ask what makes them magic, but even those building LLMs don't really know what happens when they run inference...

I have to assume current architectures aren't optimal though, the idea that we stumbled into the one and only optimal solution seems almost impossible.

reply
toufka
3 hours ago
[-]
I mean, the most cutting-edge iPhones, iPads and MacBook Pros _today_ are quite capable of running today's high-end local LLMs in real time.

If you project out that hardware just a couple of years, and the trained models out a couple of years, you end up in a place where it makes so much more sense to run them locally, for all sorts of latency, privacy, efficacy, and domain-specific reasons.

Not all that different from the old terminal & mainframe->pc shifts.

Finally - hardware has seemingly gotten out ahead of software that most folks use - watching YouTube, listening to music, playing a game or two. There was a time when playing an mp3 or watching a 4k video really taxed all but the nicest systems. Hardware fixed that problem, like it very well could this one.

reply
sofixa
3 hours ago
[-]
> I mean, the most cutting-edge iPhones, iPads and MacBook Pros _today_ are quite capable of running today’s high-end local LLMs in real time

Definitely not the high end local LLMs. The small ones, yes, absolutely.

> If you project out that hardware just a couple of years

One of the biggest bottlenecks for LLMs is memory capacity and bandwidth. With the current memory crunch, it's unlikely we'll see big advancements in average memory capacity or bandwidth on regular (not super high end) devices in the coming years.

Alternatively, it's possible we get dedicated SLMs for e.g. phone-specific use cases that are optimised and run well.

reply
_heimdall
3 hours ago
[-]
I'd assume it's a totally different architecture that isn't based on storing a compressed dataset of all digital human text.
reply
TechSquidTV
43 minutes ago
[-]
Local AI will catch up. Unless we can't get our hands on hardware anymore, which is a legitimate concern I have.
reply
FrasiertheLion
1 hour ago
[-]
Overall I'm bullish on standardized local APIs that ship with the browser or platform. Far more tractable than expecting end users to stand up their own local model instances, though r/LocalLLaMA is a fantastic community to follow if you want to go that route.

A useful framing over “local vs cloud AI” splits along two axes: does the task touch private data, and does it need frontier intelligence? You can use frontier models for developing the software (doesn’t touch data), but open-source models running locally for ops: maintenance, debugging and monitoring (touches data). If you need to fall back to frontier intelligence for a particularly hard-to-resolve problem, you can still rely on local models to pre-transform and filter the input in a way that's privacy-preserving or satisfies some constraint before it’s sent off to the cloud. OpenAI's privacy filter is a good example of a model that can run locally to mask PII and secrets before any data is sent externally for processing: https://openai.com/index/introducing-openai-privacy-filter/
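
A minimal sketch of that split (the regex rules here are a placeholder assumption standing in for a real local masking model, not the actual privacy filter API):

    import re

    def local_redact(text: str) -> str:
        """Stand-in for a local PII-masking step."""
        text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
        text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)
        return text

    def ask_frontier(prompt: str, cloud_complete) -> str:
        """Mask locally first; only the redacted text leaves the machine."""
        return cloud_complete(local_redact(prompt))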

Another framing for local vs frontier closed which the article mentions is whether the task saturates model capability. With certain tasks like PDF processing or voice or summarization, adding more intelligence isn't necessarily useful. Arguably we've approached that point for chat interfaces already with frontier open-source models. But for coding and ops through well structured tool use inside a coding capable harness, we're still a ways away.

Tangentially, a contrarian take here is that AI can actually enable more privacy-preserving software if you’re so inclined: it lowers the barrier to entry and the effort required to self-host, so you can just build personalized software. SaaS complexity often comes from scaling and supporting features for all types of customers, and if you're building software for personal use, you don't need all that additional complexity. Additionally, foundational and infra software that is harder to vibecode with AI is often already open source.

reply
RataNova
2 hours ago
[-]
I mostly agree, though I think local AI will need better UX around failure modes. Cloud models are often used not just because developers are lazy, but because they are more capable and easier to support consistently across devices.
reply
daishi55
2 hours ago
[-]
> We are building applications that stop working the moment the server crashes or a credit card expires

Isn’t this true of any application that accesses anything not running on your computer? This is just describing what it means to add an API call to your app. Nothing to do with AI (?)

reply
simonkagedal
1 hour ago
[-]
Furthermore, for the example given, it would have made a lot of sense to me to generate those article summaries on the backend. Once and for all: no need to burden each client device (which is going to need to download the content anyway), no need to tie yourself to a specific provider (Apple in this case), and you can have the same experience everywhere. Of course, the backend could use a local (to itself) model.

Not saying it’s _wrong_ either – maybe it doesn’t use a backend of its own (the client downloads content directly from some predefined set of sites), maybe there is functionality to adjust how the summaries work that benefit from doing it on device, etc. Just doesn’t convince me that ”local AI should be the norm”.

reply
msteffen
3 hours ago
[-]
> One of the current trends in modern software is for developers to slap an API call to OpenAI or Anthropic for features within their app.

Well there’s your problem, control needs to go the other way. If you want your app to be AI-enabled, you need to make it easy for AI to control your app. Have you used OpenClaw? It’s awesome!

reply
krupan
2 hours ago
[-]
Here I was hoping that this was some plea for us to get away from proprietary solutions that we have no control over and go back to open source, but no, not that at all.
reply
Galanwe
4 hours ago
[-]
I would love for local inference to be possible, but from my experience, Kimi 2.6 is the only model that would be worth it, and it's a $10k (M3 Ultra max spec'd - 30s TTFT, so kind of slow) to $30k (RTX 6000 / 700GB+ DDR5) upfront investment, noise / power consumption aside.
reply
mft_
4 hours ago
[-]
You're maybe missing the article's point, which is to use local models appropriately:

> “But Local Models Aren’t As Smart”

> Correct.

> But also so what?

> Most app features don’t need a model that can write Shakespeare, explain quantum mechanics, and pass the bar exam. They need a model that can do one of these reliably: summarize, classify, extract, rewrite, or normalize.

> And for those tasks, local models can be truly excellent.

reply
Galanwe
3 hours ago
[-]
This is a bit naive IMHO...

I have tried quite a bunch of local models, and the reality is that it's not just a matter of "it's a small model that should be hostable easily". It's also a matter of what your acceptable prefill TTFT and decode t/s are.

All the local models I used on a _consumer grade_ server (32GB DDR5, AMD Ryzen) have been mostly unusable interactively (no decent use as a coding agent possible), and even for things like classification, context size is immediately an issue.

I say that with 6 months' experience running various local models for classifying and summarizing my RSS feeds. Just offline summarizing and tagging the HN articles published on the front page barely keeps the queue sustainable and not growing continuously.

reply
mft_
2 hours ago
[-]
1) Again, I suspect you're missing the point of the article. The iPhone's on-device LLM is (apparently) ~3 Bn parameters - and runs well/fast enough to be used in the manner described. Of course, the iPhone has its GPU to leverage.

2) It's probably not the time/place to troubleshoot your "consumer grade server" LLM experience, but if you're running on CPU (you don't mention a GPU), then yeah, your inference speed will be slow.

3) Counterpoint: my consumer-grade Macbook Pro (M1 Max, 64GB) runs Qwen3.6-35B-A3B fast enough to be very usable for regular interactive coding support. (And it would fly with smaller models performing simpler tasks.)

reply
mikrl
3 hours ago
[-]
One of my hobbyist workflows involved transcribing ETF prospectuses into yaml for an optimizer to optimize over.

Used to take me maybe 10-20 minutes per sheet.

Then I got codex to whip up a script that sends each sheet to a fairly low-parameter, locally running LLM, and I have the yaml in a couple of seconds.
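
The gist of that script is roughly this (a sketch assuming an OpenAI-compatible local endpoint, which llama-server, ollama, vLLM and LM Studio all expose; the URL, model name and prompt are placeholders):

    from pathlib import Path
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

    def sheet_to_yaml(sheet_text: str) -> str:
        resp = client.chat.completions.create(
            model="local-model",  # whatever the local server has loaded
            messages=[
                {"role": "system",
                 "content": "Extract this prospectus sheet as YAML. Output YAML only."},
                {"role": "user", "content": sheet_text},
            ],
            temperature=0,  # deterministic extraction
        )
        return resp.choices[0].message.content

    for sheet in Path("sheets").glob("*.txt"):
        sheet.with_suffix(".yaml").write_text(sheet_to_yaml(sheet.read_text()))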

My dream is to bootstrap myself to local productivity with providers… I know I’ll never get there because hedonic treadmill etc, but I do feel there’s lots more juice to squeeze. I just need to invest more time into AI engineering…

reply
Salgat
1 hour ago
[-]
Local models are much less energy efficient right?
reply
HDBaseT
7 seconds ago
[-]
It's a good question, although I think a hard one to quantify.

If you are simply measuring watt cost per token, you are missing the mark drastically. You have to measure quality of output per watt.

It sounds reasonably difficult to benchmark this, maybe I'm wrong though.

reply
1a527dd5
2 hours ago
[-]
Consumer/private needs to be local.

Work? I don't want it local at all. I want it all cloud agent.

reply
rduffyuk
2 hours ago
[-]
Agree with the article, but from my experiments the limitation on local LLM usefulness is the limited scope. Eventually, context-heavy data pipelines require larger models, which consumer hardware can't deal with yet. The local model for a summary on a page like you describe could be done via code as well; I've found using an LLM isn't always the right choice. For example, I use NER tagging in my md docs for better indexing and LLM search capabilities. This is purely code-based, not via an LLM. I tried it with an LLM and the results were a lot worse. Augmenting tools so the LLM produces better outputs gives better results.
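
For example, the purely code-based tagging can be a few lines of spaCy (spaCy being just one assumed option for NER):

    import spacy

    nlp = spacy.load("en_core_web_sm")  # small English NER pipeline

    def tag_markdown(text: str) -> dict[str, list[str]]:
        """Group entities by label, e.g. for front-matter tags."""
        tags: dict[str, list[str]] = {}
        for ent in nlp(text).ents:
            tags.setdefault(ent.label_, []).append(ent.text)
        return tags
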
reply
prometheus1992
1 hour ago
[-]
Agreed, but the way RAM prices are going, I don't think we'll be able to afford hardware that can run any useful model.
reply
vegabook
3 hours ago
[-]
>> years ago I launched "The Brutalist Report"

proceeds to brutalise the reader with an 88-point headline font.

reply
eyk19
3 hours ago
[-]
Apple stock is going to skyrocket
reply
baal80spam
2 hours ago
[-]
Maybe. What about NVDA?
reply
ChoGGi
1 hour ago
[-]
Who can afford local AI?
reply
m463
1 hour ago
[-]
Who can afford to backup their own photos?

who can afford a house?

reply
agentifysh
3 hours ago
[-]
Until the hardware is economical and powerful enough, local AI that can compete with today's frontier models is still far off.

If we could even get something like GPT 5.5 running locally that would be quite useful.

reply
refulgentis
2 hours ago
[-]
The shitty thing here is, either everyone's shipping at least 800 MB with their binary, or you have to rely on the platform vendor anyway. I'm hoping there's enough external pressure that the OS vendors turn it into more of a repository than a blessed-model garden.
reply
wrxd
2 hours ago
[-]
To be fair, the author of the post is using the model Apple provides with the OS, so it doesn't add any extra binary size.
reply
wilg
3 hours ago
[-]
Two issues -

1. Local models are likely to be more power-expensive to run (per-"unit-of-intelligence") than remote models, due to datacenter economies of scale. People do not like to engage with this point, but if you have environmental concerns about AI, this is a pretty important one.

2. Using dumb models for simple tasks seems like a good idea, but it ends up being pretty clear pretty quick that you just want the smartest model you can afford for absolutely every task.

reply
manc_lad
2 hours ago
[-]
I think using the best model for every task makes sense while these models are subsidised. When the prices go up (assuming they do), this could trigger a more varied approach. Assuming the model doesn't self-select for you.
reply
dana321
3 hours ago
[-]
"NO AI" needs to be the norm, we should be working on better ways of sharing information and better documentation instead of fighting with computers for substandard results.
reply
williamtrask
4 hours ago
[-]
I wonder if a popularization moment for local AI will ultimately be the pin-prick that pops the AI bubble. Like the deepseek or openclaw moments but bigger/next.
reply
gdulli
3 hours ago
[-]
That's like wondering if enough people discovering local media streaming will disrupt commercial streaming services. It's not going to happen. Most people are not ambitious and will let themselves be controlled by the services of least resistance.

And you can't take comfort in knowing that you, personally, will remain in control of your own computing. The majority will let the range and direction of their thoughts and output be determined by the will of the tech giant whose AI they adopt. And that will shape society.

reply
williamtrask
2 hours ago
[-]
Yeah... probably right. I do hold out hope that this is mostly a timeframe thing. Like, the library, printing press, etc. all had their moments of centralization. But eventually they federated.
reply
krupan
2 hours ago
[-]
If you don't need a lot of smarts, do you even need an LLM? Aren't older machine learning techniques just as good, or like, you know, old-school algorithms?
reply
holoduke
2 hours ago
[-]
We need computers with 128GB or maybe even 192GB of memory before local use makes sense. From my own experience, 32B LLMs are the absolute minimum for proper tool use and decent output quality. But for local AI you also want vision models and maybe even various LLMs, plus some memory for the system, of course. On my 36GB M3 the 24B Gemma model is nice, but the entire system's memory gets allocated to that thing.
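
Back-of-envelope for the weights alone (KV cache and the OS come on top):

    for params_b, bits in [(32, 8), (32, 4), (24, 8)]:
        print(f"{params_b}B @ {bits}-bit ≈ {params_b * bits / 8:.0f} GB of weights")

That's 32, 16 and 24 GB respectively, which is why a 24B model swallows nearly all of a 36GB machine once you add context.
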
reply
jmyeet
1 hour ago
[-]
I've been looking into options for this and we are getting close. There are two main constraints: memory and memory bandwidth.

NVidia segments the market by limiting the amount of memory on GPUs. It currently tops out at 32GB (on a 5090), but with excellent memory bandwidth (~1.8TB/s). If you want more than that, you need to buy an RTX Pro (e.g. RTX 6000 Pro w/ 96GB for ~$10K), or you get into high-end solutions like the H100, H200, etc. that have significantly more memory and even higher bandwidth on HBM memory (e.g. 3.2TB/s+).

NVidia has released the DGX Spark w/ 128GB of memory for ~$4k. The problem is the memory bandwidth: it's only 273GB/s, which is less than the M5 Pro (307GB/s) but more than the M5. You can buy a 16" Macbook Pro with an M5 Max and 128GB of memory for $6k, and it has a bandwidth of 614GB/s. So the DGX Spark is a joke, really.

In case it wasn't clear, Apple is interesting in this space because it has a shared memory architecture so the GPU can use all the memory.

Many, myself included, expect there to be no refresh to the 5000 series consumer GPUs this year, which would otherwise happen based on product cycles. So no 5080 Super, for example. And I wouldn't expect a 6090 before 2028, realistically.

One thing Apple hasn't done yet is release the M5 Mac Studios, which are widely expected in Q3 this year. They are interesting because, for example, the M3 Ultra has a memory bandwidth of 819GB/s and previously had a max spec of 512GB but that got discontinued (and the 256GB version also got discontinued more recently).

So many expect an M5 Max Mac Studio with 1TB/s+ bandwidth and specs up to 256GB or 512GB, probably for ~$10k later this year.
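
A back-of-envelope way to see why bandwidth is the number that matters: a dense model has to stream every weight from memory once per generated token, so single-stream decode speed is roughly bounded by bandwidth / model size. Using the figures above and a hypothetical ~70B dense model quantized to ~35GB:

    model_gb = 35  # e.g. a ~70B dense model at 4-bit

    for name, gb_s in [("DGX Spark", 273), ("M5 Max", 614), ("M3 Ultra", 819)]:
        print(f"{name}: ~{gb_s / model_gb:.0f} tokens/s ceiling")

That's roughly 8, 18 and 23 tokens/s, which is why compute alone doesn't save the Spark for interactive use.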

You really have to use this hardware almost 24x7 for it to be economical because otherwise H100 computer hours are probably cheaper.

But what happens to the trillions in AI DC investment when the next generation of GPUs comes out? The existing hardware will halve in value. That's over $1 trillion in capex that will effectively disappear overnight.

I think Apple is the dark horse here because they have no interest in NVidia's pseudo-monopoly. I'm just waiting for them to realize it.

Now, CUDA is still an issue here, but I think as time goes on it's going to be less of one. Memory is still a huge constraint, both in price and in general supply, because NVidia can justify paying way more for it than you can, probably.

It's still sad to see that 128GB (2x64GB) DDR5 kits are almost $2k now and were $400 a year ago. Expect that to continue until this bubble pops (which IMHO it will) and we're likely in a global recession.

So the other issue is models. OpenAI and Anthropic are built on proprietary models; their entire valuation depends on that moat. I don't think it will last, so both companies are doomed, because open-source models are going to be sufficiently good.

We can already do some reasonably cool stuff on local hardware that isn't that expensive and even more so once you get to $5-10k hardware. That's going to be so much better in 2 years that I'm hesitant to spend any amount of money now.

Plus the code for running these things is getting better. Just in the last month there have been huge speed ups in local LLMs with MTP.

reply
regexorcist
1 hour ago
[-]
> So many expect an M5 Max Mac Studio with 1TB/s+ bandwidth and specs up to 256GB or 512GB, probably for ~$10k later this year.

This is what I'm really waiting for. It will enable models comparable to current SOTA at the enthusiast price range.

reply
zozbot234
1 hour ago
[-]
> So the DGX Spark is a joke, really.

Not at all sure about that. They have really good compute, and DeepSeek V4 (with antirez's 2-bit expert layer quant) may be able to leverage that compute via parallel inference - the jury is still out on that. Now if you had said Strix Halo/Strix Point or perhaps the Intel close equivalents, that would've been a slightly stronger case.

reply
hypfer
3 hours ago
[-]
Same as local compute.

Welcome back to 2014. Let us now continue yelling at the cloud.

reply
shmerl
3 hours ago
[-]
Depending on some remote AI provider is a major lock-in pitfall. But it's exactly what those AI providers want you to do.
reply
artursapek
3 hours ago
[-]
I'm someone who is trying to build a subscription-based business to cover underlying LLM costs, and I'm very hopeful I can one day just sell a permanent license to the software instead, with customers using local LLMs to power it.
reply
sgt
6 hours ago
[-]
I guess Google got that memo!
reply
cubefox
3 hours ago
[-]
Local AI is a bit like wind parks. Everyone is in favor, except if they are in your own backyard. There was recently a huge outcry when Chrome shipped a local 4 GB AI model: https://news.ycombinator.com/item?id=48019219

I have to conclude that people would like to have powerful local AI, but at the same time it should be only a tiny model. In which case it wouldn't be powerful.

reply
barrkel
3 hours ago
[-]
Local models are extraordinarily expensive if you're not maximizing throughput, and you're not going to be maximizing it.

Local models need to be resident in expensive RAM, the kind that has fat pipes to compute. And if you have a local app, how do you take a dependency on whatever random model is installed? Does it support your tool calling complexity? Does it have multimodal input? Does it support system messages in the middle of the conversation or not? Is it dumb enough to need reminders all the time?

Spend enough time building against local models and you'll see they're jagged in performance. You need to tune context size, trade off system message complexity with progressive disclosure. You simply can't rely on intelligence. A bunch of work goes into the harness.

Meanwhile, third party inference is getting the benefits of scale. You only need to rent a timeslice of memory and compute. It's consistent and everybody gets the same experience. And yes, it needs paying for, but the economics are just better.

reply
LPisGood
3 hours ago
[-]
> And if you have a local app, how do you take a dependency on whatever random model is installed?

Reading the tea leaves here, it will probably be common for OSes to have built-in models that can be accessed via API. Apple already does this.

reply
crazygringo
1 hour ago
[-]
I don't know why you are being downvoted. These are precisely the facts that advocates for local models completely ignore.

Local models are absolutely going to be the future for things like simple automation and classification tasks that run occasionally and don't need to rely on internet access.

But for all of the serious stuff where you are doing knowledge work, the models will simply continue to be too big, and too slow to run locally.

The article says:

> Use cloud models only when they’re genuinely necessary.

But at least for me, they're genuinely necessary for 99+% of my LLM usage.

At the end of the day, the constraint here really is efficiency and cost.

Privacy can be ensured with the legal system, the same way that businesses that compete with Google still have no problem storing their data in Google Workspace and Google Cloud. The contractual guarantees of privacy are ironclad, and Google would lose its entire cloud business overnight as its customers fled if it ever violated those contractual agreements (on top of whatever penalties they allow for).

reply
bheadmaster
3 hours ago
[-]
> And if you have a local app, how do you take a dependency on whatever random model is installed?

Why not ship your own model? In the age of Electron apps, 10GB+ apps are not unheard of.

reply
_heimdall
3 hours ago
[-]
Personally I wouldn't want a couple dozen apps installed all with their own model.

It seems easier to have industry specs that define a common interface for local models.

I also assume the OS can, or would need to, be involved in providing the models. That may not be a good thing depending on your views of OS vendors, but sharing a single local model does seem more like an OS concern.

reply
alex7o
3 hours ago
[-]
I mean, the OpenAI API is the industry standard for allowing apps to communicate with models: llama-server has it, oMLX has it, ollama has it, vLLM has it, LM Studio as well. I don't think this is such a hard thing to do, but it requires people to set it up.
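
E.g. an app can probe the conventional default ports and use whichever local server answers (a sketch; the ports are common defaults, not guarantees):

    import json, urllib.request

    CANDIDATES = [
        "http://localhost:11434/v1",  # ollama
        "http://localhost:8080/v1",   # llama-server
        "http://localhost:1234/v1",   # LM Studio
    ]

    def find_local_endpoint():
        for base in CANDIDATES:
            try:
                with urllib.request.urlopen(base + "/models", timeout=1) as r:
                    if json.load(r).get("data"):
                        return base
            except OSError:
                continue
        return None  # caller falls back to cloud, or asks the user
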
reply
_heimdall
3 hours ago
[-]
I don't know enough about that API surface to know if it's a particularly good one for the use cases we'd have, but yes, defining a universal spec for all implementors to support wouldn't be a big lift and is done in plenty of other areas already.
reply
alex7o
3 hours ago
[-]
There is no other way than shipping your own model, because you will want an abstracted API over the inference and you don't know what the user has installed. Also, you can ship a 9B fp4 model, but it all just depends.
reply
_heimdall
3 hours ago
[-]
Knowing what's installed would have to be an OS API, with LLMs providing a standard API surface to the OS, likely including metadata related to feature support.
reply
LPisGood
3 hours ago
[-]
You can know what the user has installed if the OS developer offers something.
reply