Ask HN: Better hardware means OpenAI, Anthropic, etc. are doomed in the future?
4 points
2 days ago
| 5 comments
| HN
This is something I don't understand, how will all these AI-as-a-service companies survive in the future when hardware gets better and people are able to run LLMs locally? Of course right now the rent vs. buy equation is heavily tilted towards rent, but eventually I could see people buying a desktop they keep at home, and having all their personal inference running on that one machine. Or even having inference pools to distribute load among many people.
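The rent-vs-buy question above comes down to break-even arithmetic. A minimal sketch, where every number (hardware price, power draw, electricity rate, API price, throughput) is an illustrative assumption rather than a real quote:

```python
# Back-of-the-envelope rent-vs-buy comparison.
# All figures are illustrative assumptions, not real prices.

HARDWARE_COST = 4000.0      # assumed one-time cost of a local-inference desktop (USD)
POWER_DRAW_KW = 0.5         # assumed average draw under load (kW)
ELECTRICITY_PRICE = 0.15    # assumed electricity price per kWh (USD)
API_COST_PER_MTOK = 10.0    # assumed blended API price per million tokens (USD)
TOKENS_PER_HOUR = 200_000   # assumed local throughput while in active use

def break_even_hours():
    """Hours of use at which owning beats renting, under the assumptions above."""
    api_cost_per_hour = (TOKENS_PER_HOUR / 1_000_000) * API_COST_PER_MTOK
    power_cost_per_hour = POWER_DRAW_KW * ELECTRICITY_PRICE
    saving_per_hour = api_cost_per_hour - power_cost_per_hour
    if saving_per_hour <= 0:
        return None  # renting stays cheaper under these assumptions
    return HARDWARE_COST / saving_per_hour

hours = break_even_hours()
print(f"break-even after ~{hours:.0f} hours of use")
```

Under these made-up numbers the machine pays for itself after roughly two thousand hours of heavy use; tilt any input (cheaper hardware, solar power, falling API prices) and the answer swings either way, which is exactly the uncertainty the thread is debating.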

do you think this is possible, and what are these companies' plans in that event?

farseer
2 days ago
[-]
The frontier of how good models are also keeps shifting, and it will stay ahead of local models unless we hit some dead-end limitation in the algorithms themselves: a ceiling, so to speak, on how good LLMs can get before the law of diminishing returns starts to apply.
reply
kart23
1 day ago
[-]
i don’t understand. all models are local models, they’re just not running on your machine.
reply
verdverm
2 days ago
[-]
1. Is it cheaper for me to buy hardware and electricity than to call an API? (doesn't seem like it right now)

2. The best models are still worth it, unclear when this changes

3. Average person doesn't have the skill to do this. They are afraid to run even simpler things

reply
kart23
1 day ago
[-]
definitely not right now. but I believe at some point model progress will plateau while hardware continues to get better. and maybe it would be cheaper then, especially if you have solar.

3. this is like saying the average person doesn’t have the skill to run gta over wine on their linux box. gaming consoles exist.

reply
throwaway5465
2 days ago
[-]
Young people have had even the concept of a filesystem conditioned out of them; to them, files just live in a 'folder' inside an app.

Local sovereignty isn't a pressing need for most users.

reply
freakynit
2 days ago
[-]
I do believe this is gonna get commoditized, like the internet was. Hardware obviously keeps getting better and cheaper as time goes by. Software in this case is already free/open-weights.

The moats these companies might end up having in near future:

1. Government and enterprise contracts;

2. Even better private models not released to public and only accessible through long-term/exclusive contracts;

3. Gatekeeping access to their millions of users, especially the non-technical ones, and charging a premium for it;

4. Becoming more and more like full-stack OSes to build on top of, by providing ready-made foundational layers like knowledge, memory, search/research, sandboxes, deployments, etc.;

5. Data/network effects from large-scale usage and feedback loops.

...

reply
fogzen
1 day ago
[-]
They won't survive. AI-as-a-service for frontier models will be relegated to military and research – if that. We're already at diminishing returns on model improvements. Latest improvements are on surrounding architecture, harnesses, agent systems, etc. Consumer hardware will be running the equivalent of ChatGPT 5.2 and IMO most interaction with personal computing devices will be done via natural language LLM personal assistants.

Maybe it takes a bit longer than 5 years, but that's where we're going. Already the only reason you're not interacting via a personal assistant for everything isn't really LLM capability but the lack of tooling.

reply