Ask HN: Who is using local LLMs in a production environment here?
7 points
6 hours ago
| 2 comments
I'm asking because it seems that nobody really does. Yes, there are some projects here and there, but ultimately everybody just jumps over to cloud LLMs. Everything is cloud. People pay for GPU usage somewhere in the middle of nowhere. But nobody really uses local LLMs long term. They say, "Well, it's so great. Local LLMs work on small devices; they even work on your mobile phone."

I have to say there's one exception for me, and that's Whisper. I actually do use Whisper a lot. But I just don't use local LLMs. They're just really, really bad compared to cloud LLMs.

And I don't know why, because to me, creating a speech-to-text model seems much more challenging than creating a model that just generates text.

But it seems that nobody can really close the gap and have these models run well on consumer computers. And so I, too, go back to cloud LLMs, privacy aside.

halJordan
2 hours ago
[-]
The federal government, especially the DoD, has adopted local LLMs. Now, they also have the big-iron closed models "locally," so that stretches your definition, I'm sure. But they use other models too.
reply
Haeuserschlucht
1 hour ago
[-]
Interesting, though not my government, as I am in Germany. But are those huge DeepSeek models worth it? It seems that only proprietary models can match up.

On the other hand, we need to talk specifics: how do they measure up, and on which benchmarks?

reply
websiteapi
2 hours ago
[-]
Things are changing too quickly for it to be worth it yet. Eventually LLMs won't really increase in capability or resource requirements anymore, and at that point, even if the hardware itself isn't becoming more optimized for LLM workloads, you'd see people do this.
reply