Here's that README from March 10th 2023 https://github.com/ggml-org/llama.cpp/blob/775328064e69db1eb...
> The main goal is to run the model using 4-bit quantization on a MacBook. [...] This was hacked in an evening - I have no idea if it works correctly.
Hugging Face have been a great open source steward of Transformers, I'm optimistic the same will be true for GGML.
I wrote a bit about this here: https://simonwillison.net/2026/Feb/20/ggmlai-joins-hugging-f...
I would like to see others being promoted to the top, rather than Simon's constant shilling for backlinks to his blog every time an AI topic is on the front page.
I generally try to include something in a comment that's not information already under discussion - in this case that was the link and quote from the original README.
I feel like you're making this statement in bad faith, rather than honestly believing the developers of the forum software here have built in a clause to pin simonw's comments to the top.
This does not happen. It didn't even happen back when pg first made the forum.
Attention is ALL You Need.
And for those who think it's just organic with all of the upvotes, HN absolutely does have a +/- comment bias for users, and it does automatically feature certain people and suppress others.
Exactly.
There are configurable settings for each account, which might be set automatically or manually (I'm not sure), that control the initial position of a comment in threads and how long it stays there. There might be a reward system where comments from high-karma accounts are prioritized over others, and accounts with "strikes", e.g. direct warnings from moderators, are penalized.
The difference in upvotes an account ultimately receives, and thus the impact on the discussion, is quite stark. The more visible a comment is, i.e. the closer to the top it sits, the more upvotes it can collect, which in turn keeps it at the top, and so on.
It's safe to assume that certain accounts, such as those of YC staff, mods, or alumni, or tech celebrities like simonw, are given the highest priority.
I've noticed this on my own account. After being warned for an IMO bullshit reason, my comments started to appear near the middle and quickly float down to the bottom, whereas before they would usually sit at the top for a few minutes. The quality of what I say hasn't changed, though the account's standing, and certainly the community itself, has.
I don't mind, nor particularly care about an arbitrary number. This is a proprietary platform run by a VC firm. It would be silly to expect that they've cracked the code of online discourse, or that their goal is to keep it balanced. The discussions here are better on average than elsewhere because of the community, although that also has been declining over the years.
I still find it jarring that most people vote on a comment depending on whether they agree with it, instead of engaging with it intellectually, which often pushes interesting comments to the bottom. This is an unsolved problem here, as much as it is on other platforms.
I'm old enough to remember when traffic was expensive, so I've no idea how they've managed to offer free hosting for so many models. Hopefully it's backed by a sustainable business model, as the ecosystem would be meaningfully worse without them.
We still need good value hardware to run Kimi/GLM in-house, but at least we've got the weights and distribution sorted.
They provide excellent documentation and they’re often very quick to get high quality quants up in major formats. They’re a very trustworthy brand.
Hypothetically my ISP will sell me unmetered 10 Gb/s service, but I wonder if they would actually make good on their word ...
If you stream weights in from SSD storage and freely use swap to extend your KV cache it will be really slow (multiple seconds per token!) but run on basically anything. And that's still really good for stuff that can be computed overnight, perhaps even by batching many requests simultaneously. It gets progressively better as you add more compute, of course.
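For a rough idea of what that looks like in practice, here's a minimal sketch using llama-cpp-python, which mmaps the GGUF file so weights can page in from the SSD on demand; the model path and sizes below are placeholders, not a recommendation:

```python
# Sketch: let the OS page weights in from SSD instead of loading them all into RAM.
# llama-cpp-python mmaps the GGUF by default (use_mmap=True), so a model larger
# than physical memory can still run -- just very slowly once it starts thrashing.
from llama_cpp import Llama

llm = Llama(
    model_path="models/some-huge-model-Q4_K_M.gguf",  # placeholder path
    n_ctx=8192,        # big contexts push the KV cache into swap too
    use_mmap=True,     # stream weights from disk on demand
    use_mlock=False,   # don't pin pages; let the OS evict them under pressure
    n_gpu_layers=0,    # pure CPU here; raise this as you add VRAM
)

out = llm("Summarize the overnight batch of tickets:", max_tokens=256)
print(out["choices"][0]["text"])
```

Batching the overnight requests through one long-lived process like this at least amortizes the (slow) model load across all of them.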
This is fun for proving that it can be done, but that's 100X slower than hosted models and 1000X slower than GPT-Codex-Spark.
That's like going from real time conversation to e-mailing someone who only checks their inbox twice a day if you're lucky.
The issue you'll actually run into is that most residential housing isn't wired for more than ~2 kW per room (a typical 15 A, 120 V circuit tops out around 1.8 kW).
Harder to track downloads then. Only when clients hit the tracker could it collect download stats, and forget about private repositories or the "gated" ones Meta/Facebook uses for its "open" models.
Still, if vanity metrics weren't so important, it'd be a great option. I've even thought of creating my own torrent mirror of HF to provide as a public service, since eventually access to models will be restricted, and it would be nice to be a bit better prepared for that moment.
It's a bit like any legalization question -- the black market exists anyway, so a regulatory framework could bring at least some of it into the sunlight.
But that'll only catch a small part: anyone could share the infohash, and if you're using DHT/magnet links without .torrent files or clicks on a website, no one can count those downloads unless they too scrape the DHT for peers reporting that they've completed the download.
Which can be falsified. Head over to your favorite tracker and sort by completed downloads to see what I mean.
Suppose HF did the opposite because the bandwidth saved matters more to them and they're not as concerned that you might download a different model from someone else.
How solid is its business model? Is it long-term viable? Will they ever "sell out"?
https://giftarticle.ft.com/giftarticle/actions/redeem/9b4eca...
To summarize, they rejected Nvidia's offer because they didn't want one outsized investor who could sway decisions. And "the company was also able to turn down Nvidia due to its stable finances. Hugging Face operates a 'freemium' business model. Three per cent of customers, usually large corporations, pay for additional features such as more storage space and the ability to set up private repositories."
GitHub is great -- huge fan. To some degree they "sold out" to Microsoft and things could have gone more south, but thankfully Microsoft has ruled them with a very kind hand, and overall I'm extremely happy with the way they've handled it.
I guess I always retain a bit of skepticism with such things, and the long-term viability and goodness of such things never feels totally sure.
Oh no, never. Don't worry, the usual investors are very well known for fighting for user autonomy (AMD, Nvidia, Intel, IBM, Qualcomm).
They are all very pro-consumer, and all backers are certainly here for your enjoyment only.
> but not quite anti-consumer either!
All of them are public companies, which means their default state is anti-consumer and pro-shareholder. By law they are required to do whatever they can to maximize profits. History teaches that shareholders can demand whatever they want, with the respective companies following orders, since nobody ever really has to suffer consequences and any potential fines are already priced in anyway.
Conversely, this is why Valve is such a great company. Valve is probably one of the only few actual pro-consumer companies out there.
Fun Fact! Rarely is it ever mentioned anywhere, but Valve is not a public company! Valve is a private company! That's why they can operate the way they do! If Valve was a public company, then greedy, crooked billionaire shareholders would have managed to get rid of Gabe a long time ago.
I know it's a nit-pick, but I hate that this always gets brought up when it's not actually true. Public corporations face pressure from investors to maximize returns, sure, but there is no law stating that they have to maximize profits at all costs. Public companies can (and often do) act against the interest of immediate profits for some other gain. The only real leverage that investors have is the board's ability to fire executives, but that assumes that they have the necessary votes to do so. As a counter-example, Mark Zuckerberg still controls the majority of voting power at Meta, so he can effectively do whatever he wants with the company without major consequence (assuming you don't consider stock price fluctuations "major").
But I say this not to take away from your broader point, which I agree with: the short-term profit-maximizing culture is indeed the default when it comes to publicly traded corporations. It just isn't something inherent in being publicly traded, and conversely, private companies often have the same kind of culture, so that's not a silver bullet either.
Valve is one of my top favorite companies right now. Love the work they're doing, and their products are amazing.
Can hardly wait for the Steam Frame.
I want this to be true, but business interests win out in the end. Llama.cpp is now the de facto standard for local inference; more and more projects depend on it. If a company controls it, that company controls the local LLM ecosystem. And yeah, Hugging Face seems nice now... so did Google originally. If we don't all want to be locked in, we either need a llama.cpp competitor (with a universal abstraction), or it should be controlled by an independent nonprofit.
It seems to me there's no chance local ML gets beyond toy status compared to the closed-source offerings in the short term.
That's interesting. I thought they would be somewhat redundant. They do similar things after all, except training.
Is my only option to invest in a system with more computing power? These local models look great, especially something like https://huggingface.co/AlicanKiraz0/Cybersecurity-BaronLLM_O... for assisting in penetration testing.
I've experimented with a variety of configurations on my local system, but in the end it turns into a makeshift heater.
For your Mac, you can use Ollama, or MLX (Apple-silicon specific; requires a different engine and a different on-disk model format, but is faster). Ramalama may help fix bugs or ease the process with MLX. Use either Docker Desktop or Colima for the VM + Docker.
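If you go the MLX route, the mlx-lm package is roughly this simple; a sketch, with the model repo name just an example of the MLX-converted checkpoints the mlx-community org publishes:

```python
# Sketch: running a 4-bit MLX-format model on Apple silicon with mlx-lm.
# pip install mlx-lm  (Apple-silicon Macs only)
from mlx_lm import load, generate

# MLX needs weights converted to its own on-disk format; the mlx-community
# org on Hugging Face hosts pre-converted checkpoints (example name below).
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

text = generate(model, tokenizer, prompt="Explain mmap in one paragraph.", max_tokens=200)
print(text)
```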
For today's coding & reasoning models, you need a minimum of 32GB VRAM combined (graphics + system), the more in GPU the better. Copying memory between CPU and GPU is too slow so the model needs to "live" in GPU space. If it can't fit all in GPU space, your CPU has to work hard, and you get a space heater. That Mac M1 will do 5-10 tokens/s with 8GB (and CPU on full blast), or 50 token/s with 32GB RAM (CPU idling). And now you know why there's a RAM shortage.
I picked up a second-hand 64GB M1 Max MacBook Pro a while back for not too much money for such experimentation. It’s sufficiently fast at running any LLM models that it can fit in memory, but the gap between those models and Claude is considerable. However, this might be a path for you? It can also run all manner of diffusion models, but there the performance suffers (vs. an older discrete GPU) and you’re waiting sometimes many minutes for an edit or an image.
Most people will not choose Metal if they're picking between the two moats. CUDA is far and away the stronger platform, not to mention better supported by the community.
https://www.reddit.com/r/LocalLLM/
Every time I ask the same thing here, people point me there.
https://www.docker.com/blog/run-llms-locally/
As for finding good models to run locally, I found this site recently and liked the data it provides:
How can I realistically get involved in the AI development space? I feel left out with what's going on, living in a bubble where AI is forced on me by my employer (GitHub Copilot). What is a realistic roadmap to kinda slowly get into AI development, whatever that means?
My background is full stack development in Java and React, albeit development is slow.
I’ve only messed with AI on very application side, created a local chat bot for demo purposes to understand what RAG is about to running models locally. But all of this is very superficial and I feel I’m not in the deep with what AI is about. I get I’m too ‘late’ to be on the side of building the next frontier model and makes no sense, what else can I do?
I know Python, next step is maybe do ‘LLM from scratch”? Or I pick up Google machine learning crash course certificate? Or do recently released Nvidia Certification?
I’m open for suggestions
I did use candle for WASM-based inference for teaching purposes - that was reasonably painless and pretty nice.
Hopefully this does not mean consolidation due to resources drying up, but a true fusion of the best of both.
I am somewhat anxious about "integration with the Hugging Face transformers library" and possible python ecosystem entanglements that might cause. I know llama.cpp and ggml already have plenty of python tooling but it's not strictly required unless you're quantizing models yourself or other such things.
Sounds like you're very serious about supporting local AI. I have a query for you (and anyone else who feels like donating) about whether you'd be willing to donate some memory/bandwidth resources p2p to hosting an offline model:
We have a local model we would like to distribute but don't have a good CDN.
As a user/supporter question: would you be willing to donate some spare memory/bandwidth via a simple dedicated browser tab you keep open on your desktop, one that plays silent audio (so it isn't backgrounded and unloaded) and then allocates 100 MB-1 GB of RAM and acts as a WebRTC peer serving checksummed models?[1] (Then our server only has to check that you still have the file from time to time, by sending you some salt and a part of the file to hash; your tab proves it still has it by doing so.) This doesn't require any trust, and the receiving user will also hash it and report if there's a mismatch.
Our server federates the p2p connections, so when someone downloads, they do so from a trusted peer (one who has contributed and passed the audits) like you. We considered building a binary for people to run, but we figured people couldn't trust our binaries, or that someone would target our build process; we are paranoid about trust, whereas a web model is inherently untrusted and safer. Why do all this?
The purpose of this would be to host an offline model: we successfully ported a 1 GB model from C++ and Python to WASM and WebGPU (you can see Claude doing so here, we livestreamed some of it[2]), but the model weights at 1 GB are too much for us to host.
Please let us know whether this is something you would contribute a background tab on your desktop to. It wouldn't impact you much and you could set how much memory to dedicate to it, but you would have the good feeling of knowing that you're helping people run a trusted offline model if they want - from their very own browser, no download required. The model we ported is fast enough for anyone to run on their own machine. Let me know if this is something you'd be willing to keep a tab open for.
[1] filesharing over webrtc works like this: https://taonexus.com/p2pfilesharing/ you can try it in 2 browser tabs.
[2] https://www.youtube.com/watch?v=tbAkySCXyp0 and some other videos
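For what it's worth, a minimal sketch of the salt-plus-chunk audit described above, written in Python purely to illustrate the protocol (the real peer side would run in the browser tab via SubtleCrypto; the chunk size and function names are made up):

```python
# Sketch of the storage audit: the server picks a random salt and a byte range,
# the peer hashes salt + that slice of the file it claims to hold, and the
# server compares against the same hash computed over its canonical copy.
import hashlib
import os
import random

CHUNK = 1 << 20  # 1 MiB challenge window (arbitrary choice)

def make_challenge(file_size: int) -> tuple[bytes, int]:
    salt = os.urandom(16)
    offset = random.randrange(0, max(1, file_size - CHUNK))
    return salt, offset

def respond(path: str, salt: bytes, offset: int) -> str:
    # What the peer (browser tab) would compute over its cached copy.
    with open(path, "rb") as f:
        f.seek(offset)
        return hashlib.sha256(salt + f.read(CHUNK)).hexdigest()

def verify(canonical_path: str, salt: bytes, offset: int, answer: str) -> bool:
    # Server-side check against the canonical copy of the model file.
    return respond(canonical_path, salt, offset) == answer
```

Because the salt is fresh each time, a peer can't pass the audit by caching old answers; it has to actually hold the bytes.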
We are not going to do what you suggest. Instead, our approach is to use the RAM people aren't using at the moment for a fast edge cache close to their area.
We've tried this architecture and get very low latency and high bandwidth. People would not be contributing their resources to anything they don't know about.
Finally, we would like the possibility of setting up market dynamics in the future: if you aren't currently using all your RAM, why not rent it out? This matches the p2p edge architecture we envision.
In addition, our work on WebGPU would allow you to rent out your GPU to a background tab whenever you're not using it. Why have all that silicon sit idle when you could rent it out?
You could also donate it to help fine tune our own sovereign model.
All of this will let us bootstrap to the point where we could be trusted with a download.
We have a rather paranoid approach to security.
What services would you need that Hugging Face doesn't provide?
That is not true. I am serving models off Cloudflare R2 at about 1 petabyte per month of egress, and I basically pay peanuts (~$200 everything included).
the space moved from Consumer to Enterprise pretty fast due to models getting bigger
but maybe I'm just slightly out of the loop
In either case - huge thanks to them for keeping AI open!
I think, for some definition of “banned”, that’s the case. It doesn’t stop the Chinese labs from having organization accounts on HF and distributing models there. ModelScope is apparently the HF-equivalent for reaching Chinese users.
Both are $0-revenue "companies", but they've created software that is essential to the wider ecosystem and has mindshare value: Bun for JavaScript and ggml for AI models.
But of course the VCs needed an exit sooner or later. That was inevitable.