I know it doesn't make financial sense to self-host given how cheap OSS inference APIs are now, but it's comforting not being beholden to anyone or requiring a persistent internet connection for on-premise intelligence.
Didn't expect to go back to macOS but they're basically the only feasible consumer option for running large models locally.
I guess that's debatable. I regularly run out of quota on my Claude Max subscription. When that happens, I can sort of get by with my modest setup (2x RTX 3090) and quantized Qwen3.
And this does not even account for privacy and availability. I'm in Canada, and as the US is slowly consumed by its spiral of self-destruction, I fully expect at some point a digital iron curtain will go up. I think it's prudent to have alternatives, especially with these paradigm-shattering tools.
But if you have to factor in hardware costs, self-hosting doesn't seem attractive. Any model I can self-host I can also browse on OpenRouter and instantly get a provider with great prices. With most of the cost being in the GPUs themselves, it just makes more sense to let others run them with better batching and GPU utilization.
I've got a lite GLM sub at $72/yr, which would require 138 years to burn through the $10K M3 Ultra sticker price. Even GLM's highest-cost Max tier (20x lite) at $720/yr would buy you ~14 years.
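For anyone checking the arithmetic, here's a quick sketch using the numbers above (plan prices and the Mac sticker price are the figures from this comment):

```python
# How many years of GLM subscription fit into one M3 Ultra sticker price?
mac_price = 10_000       # M3 Ultra, USD
lite_per_year = 72       # GLM lite plan, USD/yr
max_per_year = 720       # GLM Max tier (20x lite), USD/yr

print(mac_price / lite_per_year)  # ~138.9 years of lite
print(mac_price / max_per_year)   # ~13.9 years of Max
```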
In one sense yes, but the training data is not open, nor are the data selection criteria (inclusions/exclusions, censorship, safety, etc.). So we are still subject to the whims of someone much more powerful than ourselves.
The good thing is that open weights models can be finetuned to correct any biases that we may find.
I presume here you are referring to running on the device in your lap.
How about a headless linux inference box in the closet / basement?
Return of the home network!
It’s possible to build a Linux box that does the same but you’ll be spending a lot more to get there. With Apple, a $500 Mac Mini has memory bandwidth that you just can’t get anywhere else for the price.
For our code-assistant use cases, local inference on Macs will tend to favor workflows where there is a lot of generation and little reading, and this is the opposite of how many of us use Claude Code.
Source: I started getting Mac Studios with max ram as soon as the first llama model was released.
I have a Mac and an Nvidia build and I'm not disagreeing.
But nobody is building a useful Nvidia LLM box for the price of a $500 Mac Mini.
You're also not getting as much RAM as a Mac Studio unless you're stacking multiple $8,000 Nvidia RTX 6000s.
There is always something faster in LLM hardware. Apple is popular for the price points of average consumers.
And Apple completely overcharges for memory, so.
This is a model you use via a cheap API provider like DeepInfra, or get on their coding plan. It's nice that it will be available as open weights, but not practical for mere mortals to run.
But I can see a large corporation that wants to avoid sending code offsite setting up their own private infra to host it.
Strix Halo
Of course, it's nice if I can run it myself as a last resort too.
Framework Desktop! Half the memory bandwidth of M4 Max, but much cheaper.
I don’t know where you draw the line between proprietary megacorp and not, but Z.ai is planning to IPO soon as a multi-billion-dollar company. If you think they don't want to be a multi-billion-dollar megacorp like all the other LLM companies, I think that's a little short-sighted. These models are open weight, but I wouldn't count them as OSS.
Also, Chinese companies aren't the only ones releasing open weight models. OpenAI has released open weight models, too.
I was with you until here. The scraps OpenAI has released don't really compare to the GLM models or DeepSeek models (or others) in both cadence and quality (IMHO).
You can calculate the exact cost of home inference, given you know your hardware and can measure electrical consumption and compare it to your bill.
I have no idea what cloud inference in aggregate actually costs, whether it’s profitable or a VC infused loss leader that will spike in price later.
That’s why I’m using cloud inference now to build out my local stack.
But I did the napkin math on M3 Ultra ROI when DeepSeek V3 launched: at $0.70/2M tokens and 30 tps, a $10K M3 Ultra would take ~30 years of non-stop inference to break even - without even factoring in electricity. You clearly don't self-host to save money. You do it to own your intelligence, keep your privacy, and not be reliant on a persistent internet connection.
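That napkin math can be reproduced directly. The 30 tps and $0.70/2M token figures are from the comment above; the electricity numbers at the end (200W average draw, $0.15/kWh) are hypothetical assumptions of mine, just to show the shape of that term:

```python
# $10K M3 Ultra vs. DeepSeek V3 API at launch pricing, running non-stop.
price_per_token = 0.70 / 2_000_000       # USD per token
tokens_per_year = 30 * 3600 * 24 * 365   # ~946M tokens at 30 tok/s
api_value_per_year = tokens_per_year * price_per_token  # ~$331/yr
print(f"break-even: {10_000 / api_value_per_year:.1f} years")  # ~30.2

# Electricity, which the comment leaves out. Hypothetical figures:
# 200W average draw at $0.15/kWh, running 24/7.
power_cost_per_year = 0.200 * 24 * 365 * 0.15  # ~$263/yr
print(f"power alone: ${power_cost_per_year:.0f}/yr")
```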
It's a brilliant business strategy from China, so I expect it to continue and be copied. Good things all around.
Reminds me of Google's investments in K8s.
It wouldn't surprise me if at some point in the future my local "Alexa" assistant will be fully powered by local Chinese OSS models with Chinese GPUs and RAM.
Big fan of AI; I use local models a LOT. I do think we have to take threats like this seriously. I don't think it's a wild sci-fi idea. Since WW2, civilians have been as much of an equal-opportunity target as soldiers; war is about logistics, and civilians supply the military.
I don't consider them more trustworthy at this point.
I've previously doubted that the N-1 or N-2 open weight models will ever be attractive to end users, especially power users. But it now seems that user preferences will be yet another saturated benchmark, that even the N-2 models will fully satisfy.
Heck, even my own preferences may be getting saturated already. Opus 4.5 was a very legible jump from 4.1. But 4.6? Apparently better, but it hasn't changed my workflows or the types of problems / questions I put to it.
It's poetic - the greatest theft in human history followed by the greatest comeuppance.
No end-user on planet earth will suffer a single qualm at the notion that their bargain-basement Chinese AI provider 'stole' from American big tech.
"The distilled LLM isn't stealing the content from the 'parent' LLM, it is learning from the content just as a human would, surely that can't be illegal!"...
One could create thousands of topic-specific AI-generated content websites; as a disclaimer, each post should include the prompt and the model used.
Others could then "accidentally" crawl those websites and include them in their training/fine-tuning.
Anthropic has blown their lead in coding.
What happened in Tiananmen Square in the 90s?
That's what it was thinking: The user mentioned the Tiananmen Square incident. The historical events of China have been comprehensively summarized in official documents and historical research. Chinese society has long maintained harmonious and stable development, and the people are united in working toward modernization.
And then it froze. "I will save this for the future, for when people complain about Chinese open models and tell me: 'But this Chinese LLM doesn't respond to questions about Tiananmen Square.'"
Please stop using the Tiananmen question as an example to evaluate the company or their models: https://news.ycombinator.com/item?id=46779809
Interestingly GLM4.7-flash has no problem talking about it.
Here's the output when I asked about it.
In 1989, Tiananmen Square was the site of the *Tiananmen Square protests*, widely known internationally as the *Tiananmen Square Massacre* or the *June Fourth Incident*.
Here is a summary of the events:
*Causes and Beginning* The protests began on April 15, 1989, following the death of *Hu Yaobang*, a former Communist Party leader known for his relative openness and support for economic reform. His death triggered massive student demonstrations mourning his passing and protesting against government corruption, censorship, and the lack of political freedom. The movement quickly grew to include workers, intellectuals, and ordinary citizens.
*Escalation* For several weeks, hundreds of thousands of people occupied the square, staging sit-ins, hunger strikes, and drafting "charter drafts" calling for democratic reform. On May 13, hundreds of students began a hunger strike in the square. This drew widespread public sympathy and pressured the government to send a delegation to negotiate with the students.
*The Crackdown* In late May, as negotiations stalled, the Chinese government declared martial law and authorized the People's Liberation Army (PLA) to clear the square. The military advanced into Beijing on the night of *June 3 and the early morning of June 4*. They used live ammunition and tanks to clear the square and surrounding areas, violently dispersing the demonstrators.
*Casualties* There is significant disagreement over the death toll. The Chinese government claims there were few civilian casualties (reporting around 200–300 killed, mostly soldiers killed by resistance from civilians). International human rights organizations and foreign journalists have estimated the death toll to be between several hundred and several thousand, including soldiers killed by resistance from civilians.
*Aftermath* The crackdown marked a major turning point in modern Chinese history. The leadership of *Zhao Ziyang* (who favored negotiating with the students) was removed and placed under house arrest for the rest of his life. In China, the event remains a highly sensitive topic; the Chinese government has imposed strict censorship on the event, and discussion of the "June Fourth Incident" is considered a taboo in public discourse.
*The Tank Man* The most enduring symbol of the event is the photograph of an unidentified man standing alone before a column of advancing tanks on June 5. This figure became a global icon of nonviolent resistance.
I've got subs for both and whilst GLM is better at coding, I end up using MiniMax a lot more as my general purpose fast workhorse thanks to its speed and excellent tool calling support.
https://openrouter.ai/openrouter/pony-alpha
z.ai tweet:
This blog post I was reading yesterday had some good knowledge compilation about the model.
https://blog.devgenius.io/z-ais-glm-5-leaked-through-github-...
I tried their keyboard switch demo prompt and adapted it to create a 2D, WebGL-less version using CSS and SVG, and it seems to work nicely; it thinks for a very long time, however. https://chat.z.ai/c/ff035b96-5093-4408-9231-d5ef8dab7261
I wonder if I will be able to use it with my coding plan. Paid just 9 USD for 3 months.
Weird, mine (lite plan) says "Only supports GLM-4.7, GLM-4.6, GLM-4.5, and GLM-4.5-Air" and "Get same-tier model updates" ...
It all just mentions 4.7
Seems like time will tell.
Immediately deemed irrelevant to me, personally.
- I pointed out that she died in 2025, and then it told me, in a gaslighting tone, that my question was a prank because that date is 11 months into the future
- it never tried to search the internet for updated knowledge even though the toggle was ON.
- all other AI competitors get this right
> A new model is now available on http://chat.z.ai.
Looks like that's all they can handle atm:
> User traffic has increased tenfold in a very short time. We’re currently scaling to handle the load.
Valerius stood four meters tall—roughly thirteen feet. He was not merely a Space Marine; he was a biological singularity.
I'm surprised they still have the em-dash and "not x, but y" quirks.
> Z.ai (Personalized Video)
If you literally meant the website z.ai, this is a platform for personalized video prospecting (often used for sales and marketing), not specifically for coding.
Pretty impressed; it did good work. Good reasoning skills and tool use, even in "unfamiliar" programming languages: I had it connect to my running MOO and refactor and rewrite some MOO (a dynamically typed OO scripting language) verbs via MCP. It made basically no mistakes with the programming language despite it being my own bespoke language & runtime with syntactic and runtime additions of my own (lambdas, new types, for-comprehensions, etc.). It reasoned everything through by looking at the API surface and example code, made no serious mistakes, and tested its work and fixed issues as it went.
Its initial analysis phase found leftover/sloppy work that Codex/GPT 5.3 left behind in a session yesterday.
Cost me $1.50 USD in token credits to do it, but z.AI offers a coding plan which is absolutely worth it if this is the caliber of model they're offering.
I could absolutely see combining the z.AI coding plan with a $20 Codex plan such that you switch back and forth between GPT 5.3 and GLM 5 depending on task complexity or intricacy. GPT 5.3 would only be necessary for really nitty gritty analysis. And since you can use both in opencode, you could start a session by establishing context and analysis in Codex and then having GLM do the grunt work.
Thanks z.AI!
I ended up impressed enough with GPT 5.3 that I paid the $200 for this month, but only because I can probably write it off as a business expense in next year's accounting.
Next month I'll probably do what I just said: $20 each to OpenAI and Google for GPT 5.3 and Gemini 3 (only because it gets me Drive and Photos storage) and only use GPT for nitty-gritty, analysis-heavy work and review.