April 2026 TLDR Setup for Ollama and Gemma 4 26B on a Mac mini
70 points | 3 hours ago | 7 comments | gist.github.com
redrove
2 hours ago
[-]
There is virtually no reason to use Ollama over LM Studio or the myriad of other alternatives.

Ollama is slower, and they started out as a shameless llama.cpp ripoff without giving credit. Now they’ve “ported” it to Go, which means they’re just vibe-coding a translation of llama.cpp, bugs included.

reply
faitswulff
50 minutes ago
[-]
Does LM Studio have an equivalent to the ollama launch command? i.e. `ollama launch claude --model qwen3.5:35b-a3b-coding-nvfp4`
reply
alifeinbinary
1 hour ago
[-]
I really like LM Studio when I can use it under Windows, but for people like me with Intel Macs + AMD GPUs, Ollama is the only option, because it can (unofficially) leverage the GPU via MoltenVK, aka Vulkan. We're still testing it and hoping to get Vulkan support into the main branch soon. It works perfectly with a single GPU, but some edge cases with multiple GPUs are unsupported until upstream support from MoltenVK comes through. But yeah, I agree, it wasn't cool to repackage Georgi's work like that.
reply
gen6acd60af
1 hour ago
[-]
LM Studio is closed source.

And didn't Ollama independently ship a vision pipeline for some multimodal models months before llama.cpp supported it?

reply
meltyness
1 hour ago
[-]
I feel like the READMEs for these 3 large, popular packages already illustrate the tradeoffs better than a Hacker News argument.
reply
iLoveOncall
2 hours ago
[-]
> There is virtually no reason to use Ollama over LM Studio or the myriad of other alternatives.

Hmm, the fact that Ollama is open-source, can run in Docker, etc.?
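
For instance, running it in Docker is one line (port 11434 is Ollama's default API port):

  docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama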

reply
lousken
1 hour ago
[-]
LM Studio is not open source, and you can't run it on a server and connect clients to it?
reply
jedisct1
1 hour ago
[-]
LM Studio can absolutely run as a server.
reply
walthamstow
41 minutes ago
[-]
IIRC it does so by default, too. I have loads of stuff pointing at LM Studio on localhost.
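
For anyone curious, the minimal version of that (port 1234 is LM Studio's default; the model name below is a placeholder for whatever you've loaded):

  # start the local server (listens on localhost:1234 by default)
  lms server start

  # it speaks the OpenAI-compatible API, so anything can point at it
  curl http://localhost:1234/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "local-model", "messages": [{"role": "user", "content": "Hello"}]}'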
reply
robotswantdata
1 hour ago
[-]
Why are you using Ollama? Just use llama.cpp

brew install llama.cpp

Use the built-in CLI, server, or chat interface, and hook it up to any other app.
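
A rough sketch of that flow (the GGUF path is a placeholder for whatever model you've downloaded):

  # one-shot prompt from the built-in CLI
  llama-cli -m ./model.gguf -p "Hello"

  # or serve an OpenAI-compatible API (plus a web chat UI) on port 8080
  llama-server -m ./model.gguf --port 8080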

reply
Bigsy
57 minutes ago
[-]
For MLX I'd guess.
reply
wronglebowski
18 minutes ago
[-]
That also comes upstream from llama.cpp https://github.com/ggml-org/llama.cpp/discussions/4345
reply
greenstevester
3 hours ago
[-]
Right. So Google released Gemma 4, a 26B mixture-of-experts model that only activates 4B parameters per token.

It's essentially a model that's learned to do the absolute minimum amount of work while still getting paid. I respect that enormously.

It scores 1441 on Arena Elo — roughly the same as Qwen 3.5 at 397B and Kimi k2.5 at 1100B.

Ollama v0.19 switched to Apple's MLX framework on Apple Silicon. 93% faster decode.

They've also improved caching so your coding agents don't have to re-read the entire prompt every time. About time, I'd say.

The gist covers the full setup: install, auto-start on boot, keep the model warm in memory.
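
Without clicking through, the Homebrew version of that presumably looks something like this (the model tag is a guess, and the gist may do it differently):

  # install, and have launchd start the server on boot
  brew install ollama
  brew services start ollama

  # fetch the model, then keep it warm in memory with --keepalive
  ollama pull gemma4:26b
  ollama run gemma4:26b --keepalive 24h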

It runs on a 24GB Mac mini, which means the most expensive part of your local AI setup is still the desk you put it on.

reply
krzyk
1 hour ago
[-]
By desk, you mean that "Mac mini"? Because it is pricey. In my country it's 1000 USD (from Apple, for the base M4 with 24GB). My desk was 1/5th of that price.

And considering that this Mac mini won't be doing anything else, is there a reason not to just buy a subscription from Claude, OpenAI, Google, etc.?

Are those open models more performant than Sonnet 4.5/4.6? Or do they at least have bigger context?

reply
lambda
7 minutes ago
[-]
Right now, open models that run on hardware that costs under $5000 can get up to around the performance of Sonnet 3.7. Maybe a bit better on certain tasks if you fine tune them for that specific task or distill some reasoning ability from Opus, but if you look at a broad range of benchmarks, that's about where they land in performance.

You can get open models that are competitive with Sonnet 4.6 on benchmarks (though some people say that they focus a bit too heavily on benchmarks, so maybe slightly weaker on real-world tasks than the benchmarks indicate), but you need >500 GiB of VRAM to run even pretty aggressive quantizations (4 bits or less), and to run them at any reasonable speed they need to be on multi-GPU setups rather than the now discontinued Mac Studio 512 GiB.

The big advantages: you have full control; you're not paying a $200/month subscription and still being throttled on tokens; you're guaranteed that your data is not being used to train models; and you're not financially supporting an industry that many people find questionable. Also, if you want to, you can use "abliterated" versions, which strip away the censoring that labs apply to make their models refuse certain questions, or you can use fine-tunes that adapt the model for other purposes, like improving certain coding abilities, making it better for roleplay, etc.

reply
easygenes
2 hours ago
[-]
Why is Ollama so many people's go-to? Genuinely curious; I've tried it, but it feels overly stripped down / dumbed down vs nearly everything else I've used.

Lately I’ve been playing with Unsloth Studio and think that’s probably a much better “give it to a beginner” default.

reply
DiabloD3
4 minutes ago
[-]
Advertising, mostly.

Ollama's org had people flood various LLM- and programming-related subreddits, Discords, and other places, claiming it was an 'easy frontend for llama.cpp', and tricked people.

Only way to win is to uninstall it and switch to llama.cpp.

reply
diflartle
54 minutes ago
[-]
Ollama is good enough to dabble with, and getting a model is as easy as `ollama pull <model name>`, vs figuring it out by yourself on Hugging Face and trying to make sense of all the goofy letters and numbers in the forty different names of each model. And you don't need a Hugging Face account to download.
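
Concretely, the whole happy path (tag illustrative):

  # fetch, then chat
  ollama pull gemma4:26b
  ollama run gemma4:26b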

So you start there, and eventually you want to get off the happy path; then you need to learn more about the server, and it's all so much more complicated than just using Ollama. You just want to try models, not learn the intricacies of hosting LLMs.

reply
polotics
1 hour ago
[-]
Ollama got some first-mover advantage at a time when actually building and git-pulling llama.cpp was a bit of a moat. The devs' Docker past probably made them overestimate how much mindshare they could lay claim to. However, no one really could have known how quickly things would evolve... Now I mostly recommend LM Studio to people.

What does Unsloth Studio bring on top?

reply
easygenes
1 hour ago
[-]
LM Studio has been around longer; I've used it for three years. I'd also agree it is generally a better beginner choice, then and now.

Unsloth Studio is more featureful (well-integrated tool calling, web search, and code execution are the headline features), and it comes from the people consistently making some of the best GGUF quants of all the popular models. It's also well documented, easy to set up, and has good fine-tuning support.

reply
aetherspawn
18 minutes ago
[-]
Which harness (IDE) works with this, if any? Can I use it for local coding right now?
reply
boutell
1 hour ago
[-]
Last night I had to install the v0.20 pre-release of Ollama to use this model, so I'm wondering if these instructions are accurate.
reply
logicallee
40 minutes ago
[-]
In case anyone wants to know what these are like on this hardware: I tested Gemma 4 32b (the ~20 GB model, the largest Gemma model Google published) and gemma4:e4b (the ~10 GB model) on this exact setup (Mac mini M4 with 24 GB of RAM, using Ollama), and I livestreamed it:

https://www.youtube.com/live/G5OVcKO70ns

The ~10 GB model is super speedy, loading in a few seconds and giving responses almost instantly. If you just want to see its performance: it says hello around the 2-minute mark in the video (and fast!), while the ~20 GB model says hello around 5 minutes 45 seconds in. The difference in their loading times and speed is substantial. I also had each of them complete a difficult coding task; both got it correct, but the 20 GB model was much slower. It's a bit too slow to use on this setup day to day, plus it would take almost all the memory. The 10 GB model fits comfortably on a Mac mini 24 GB with plenty of RAM left for everything else, and it seems like you can use it for small, useful coding tasks.

reply