Kimi Released Kimi K2.5, Open-Source Visual SOTA-Agentic Model
381 points
11 hours ago
| 26 comments
| kimi.com
| HN
Tepix
10 hours ago
[-]
Huggingface Link: https://huggingface.co/moonshotai/Kimi-K2.5

1T parameters, 32B active parameters.

License: MIT with the following modification:

Our only modification part is that, if the Software (or any derivative works thereof) is used for any of your commercial products or services that have more than 100 million monthly active users, or more than 20 million US dollars (or equivalent in other currencies) in monthly revenue, you shall prominently display "Kimi K2.5" on the user interface of such product or service.

reply
endymi0n
7 hours ago
[-]
One. Trillion. Even on native int4 that’s… half a terabyte of vram?!

Technical awe aside at this marvel that cracks the 50th percentile of HLE, the snarky part of me says there’s only half the danger in giving away something nobody can run at home anyway…

reply
johndough
5 hours ago
[-]
The model absolutely can be run at home. There even is a big community around running large models locally: https://www.reddit.com/r/LocalLLaMA/

The cheapest way is to stream it from a fast SSD, but it will be quite slow (one token every few seconds).

The next step up is an old server with lots of RAM and many memory channels, maybe with a GPU thrown in for faster prompt processing (low double-digit tokens/second).

At the high end, there are servers with multiple GPUs with lots of VRAM or multiple chained Macs or Strix Halo mini PCs.

The key enabler here is that the models are MoE (Mixture of Experts), which means that only a small(ish) part of the model is required to compute the next token. In this case, there are 32B active parameters, which is about 16GB at 4 bits per parameter. This only leaves the question of how to get those 16GB to the processor as fast as possible.
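
As a rough back-of-the-envelope sketch in Python (the active-parameter count is from the model card; the bandwidth figures are assumed typical values, not measurements):

    # Decode speed is roughly bounded by how fast you can stream the active
    # parameters past the processor once per generated token.
    active_params = 32e9        # active parameters per token (from the model card)
    bytes_per_param = 0.5       # int4 = 4 bits = 0.5 bytes
    bytes_per_token = active_params * bytes_per_param   # ~16 GB read per token

    # Assumed, typical sustained read bandwidths in GB/s.
    bandwidths = {"fast NVMe SSD": 7, "12-channel server RAM": 400, "GPU HBM": 3000}

    for name, gbps in bandwidths.items():
        print(f"{name}: ~{gbps * 1e9 / bytes_per_token:.1f} tokens/s")

That works out to roughly 0.4, 25, and 190 tokens/s for the three tiers above, which is where the "one token every few seconds" to "low double digits" to "usable" ranges come from.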

reply
WhitneyLand
2 hours ago
[-]
It’s often pointed out in the first sentence of a comment how a model can be run at home, and then (maybe) towards the end of the comment it’s mentioned how it’s quantized.

Back when 4k movies needed expensive hardware, no one was saying they could play 4k on a home system, then later mentioning they actually scaled down the resolution to make it possible.

The degree of quality loss is rarely characterized, which makes sense because it’s not easy to fully quantify quality loss with a few simple benchmarks.

By the time it’s quantized to 4 bits, 2 bits or whatever, does anyone really have an idea of how much they’ve gained vs just running a model that is sized more appropriately for their hardware, but not lobotomized?

reply
zozbot234
51 minutes ago
[-]
> ...Back when 4k movies needed expensive hardware, no one was saying they could play 4k on a home system, then later mentioning they actually scaled down the resolution to make it possible. ...

int4 quantization is the original release in this case; it's not been quantized after the fact. It's a bit of a nuisance when running on hardware that doesn't natively support the format (it might waste some fraction of memory throughput on padding, specifically on NPU hardware that can't do the unpacking on its own), but no one here is reducing quality to make the model fit.

reply
WhitneyLand
9 minutes ago
[-]
Good point, thanks for the clarification.

The broader point remains, though, which is that “you can run this model at home…” comes with potentially substantial caveats.

It would be so incredibly slow…

reply
FuckButtons
1 hour ago
[-]
From my own usage, the former is almost always better than the latter, because it’s less like a lobotomy and more like a hangover, though I have run some quantized models that seem still drunk.

Any model that I can run in 128 GB at full precision is far inferior to the models that I can just barely get to run after REAP + quantization for actually useful work.

I also read a paper a while back about improvements to model performance in contrastive learning when quantization was included during training as a form of perturbation, to try to force the model to reach a smoother loss landscape. It made me wonder if something similar might work for LLMs, which I think might be what the people over at MiniMax are doing with M2.1, since they released it in FP8.

In principle, if the model has been effective during its learning at separating and compressing concepts into approximately orthogonal subspaces (and assuming the white box transformer architecture approximates what typical transformers do), quantization should really only impact outliers which are not well characterized during learning.

reply
WhitneyLand
5 minutes ago
[-]
Interesting.

If this were the case however, why would labs go through the trouble of distilling their smaller models rather than releasing quantized versions of the flagships?

reply
selfhoster11
1 hour ago
[-]
Except the parent comment said you can stream the weights from an SSD. The full weights, uncompressed. It takes a little longer (a lot longer), but the model at least works without lossy pre-processing.
reply
1dom
3 hours ago
[-]
> The model absolutely can be run at home. There even is a big community around running large models locally

IMO 1T parameters and 32B active is a different scale from what most people are talking about when they say local LLMs. Totally agree there will be people messing with this, but the real value in local LLMs is that you can actually use them and get value from them with standard consumer hardware. I don't think that's really possible with this model.

reply
zamadatix
2 hours ago
[-]
Local LLMs are just LLMs people run locally. It's not a definition of size, feature set, or what's most popular. What the "real" value is for local LLMs will depend on each person you ask. The person who runs small local LLMs will tell you the real value is in small models, the person who runs large local LLMs will tell you it's large ones, those who use cloud will say the value is in shared compute, and those who don't like AI will say there is no value in any.

LLMs whose weights aren't available are an example of when it's not a local LLM, not when the model happens to be large.

reply
1dom
1 hour ago
[-]
> LLMs whose weights aren't available are an example of when it's not a local LLM, not when the model happens to be large.

I agree. My point was that most people aren't thinking of models this large when they're talking about local LLMs. That's what I said, right? This is supported by the download counts on HF: the most downloaded local models are significantly smaller than 1T, normally 1-12B.

I'm not sure I understand what point you're trying to make here?

reply
zozbot234
3 hours ago
[-]
32B active is nothing special; there are local setups that will easily support that. 1T total parameters ultimately requires keeping the bulk of them on SSD. This need not be an issue if there's enough locality in expert choice for any given workload; the "hot" experts will simply be cached in available spare RAM.
reply
spmurrayzzz
2 hours ago
[-]
When I've measured this myself, I've never seen a medium-to-long task horizon with enough expert locality that you wouldn't be hitting the SSD constantly to swap layers (not to say it doesn't exist, just that in the literature and in my own empirics it doesn't seem to be observed in a way you could rely on for cache performance).

Over any task that has enough prefill input diversity and a decode phase that's more than a few tokens, it's at least intuitive that experts activate nearly uniformly in the aggregate, since they're activated per token. This is why, when you do anything more than bs=1, you see forward passes light up the whole network.

reply
zozbot234
2 hours ago
[-]
> hitting the SSD constantly to swap layers

Thing is, people in the local llm community are already doing that to run the largest MoE models, using mmap such that spare-RAM-as-cache is managed automatically by the OS. It's a drag on performance to be sure but still somewhat usable, if you're willing to wait for results. And it unlocks these larger models on what's effectively semi-pro if not true consumer hardware. On the enterprise side, high bandwidth NAND Flash is just around the corner and perfectly suited for storing these large read-only model parameters (no wear and tear issues with the NAND storage) while preserving RAM-like throughput.
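
In miniature, the mmap trick looks something like this (a sketch; the file name is made up, and real runners like llama.cpp memory-map their own weight files directly):

    import numpy as np

    # Map the (hypothetical) weight file without reading it into RAM up front.
    # Pages are faulted in from the SSD on first access and kept in the OS page
    # cache, so frequently used ("hot") experts stay resident in spare RAM.
    weights = np.memmap("kimi-weights.bin", dtype=np.uint8, mode="r")

    def read_expert(offset: int, size: int) -> np.ndarray:
        # Slicing the memmap only touches the pages that back this range.
        return np.array(weights[offset:offset + size])

    # Only the pages for this one block ever hit the SSD.
    block = read_expert(0, 16 * 1024 * 1024)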

reply
1dom
3 hours ago
[-]
I never said it was special.

I was trying to push back on the idea that a lot of people will be using models of this size locally because of the local LLM community.

The most commonly downloaded local LLMs are normally <30B (e.g. https://huggingface.co/unsloth/models?sort=downloads). The constraints you're describing, especially combined, make this model unusable for a lot of people in the local LLM community at the moment.

reply
GeorgeOldfield
1 hour ago
[-]
do you guys understand that different experts are loaded PER TOKEN?
reply
dev_l1x_be
3 hours ago
[-]
How do you split the model between multiple GPUs?
reply
evilduck
2 hours ago
[-]
With "only" 32B active params, you don't necessarily need to. We're straying from common home users to serious enthusiasts and professionals but this seems like it would run ok on a workstation with a half terabyte of RAM and a single RTX6000.

But to answer your question directly, tensor parallelism. https://github.com/ggml-org/llama.cpp/discussions/8735 https://docs.vllm.ai/en/latest/configuration/conserving_memo...
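
If you go the vLLM route, tensor parallelism is mostly a single argument. A sketch (I haven't verified that vLLM supports this exact checkpoint yet, so treat the model name as an assumption):

    from vllm import LLM, SamplingParams

    # Shard every layer across 8 GPUs; they exchange activations on each forward pass.
    llm = LLM(
        model="moonshotai/Kimi-K2.5",  # HF repo from the top comment
        tensor_parallel_size=8,
        trust_remote_code=True,
    )

    outputs = llm.generate(["Hello"], SamplingParams(max_tokens=32))
    print(outputs[0].outputs[0].text)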

reply
PlatoIsADisease
2 hours ago
[-]
>The model absolutely can be run at home.

There is a huge difference between "look I got it to answer the prompt: '1+1='"

and actually using it for anything of value.

I remember early on people bought Macs (or some marketing team was shoveling it) and proposed that people could reasonably run the 70B+ models on them.

They were talking about 'look it gave an answer', not 'look this is useful'.

While it was a bit obvious that 'integrated GPU' is not Nvidia VRAM, we did have one Mac laptop at work that validated this.

It's cool these models are out in the open, but it's going to be a decade before people are running them at a useful level locally.

reply
esafak
1 hour ago
[-]
Hear, hear. Even if the model fits, a few tokens per second make no sense. Time is money too.
reply
tempoponet
1 hour ago
[-]
Maybe for a coding agent, but a daily/weekly report on sensitive info?

If it were 2016 and this technology existed, but only at 1 t/s, every company would find a way to extract the most leverage out of it.

reply
wongarsu
5 hours ago
[-]
Which conveniently fits on one 8xH100 machine. With 100-200 GB left over for overhead, kv-cache, etc.
reply
Davidzheng
5 hours ago
[-]
that's what intelligence takes. Most of intelligence is just compute
reply
Imustaskforhelp
9 hours ago
[-]
Hey, have they open sourced all of Kimi K2.5 (thinking, instruct, agent, agent swarm [beta])?

Because I feel like they mentioned that agent swarm is available on their API, and that made me feel as if it wasn't open (weights). Please let me know if all are open source or not?

reply
XenophileJKO
6 hours ago
[-]
I'm assuming the swarm part is all harness. Well, I mean a harness and a way of thinking that the weights have just been fine-tuned to use.
reply
dheera
9 hours ago
[-]
> or more than 20 million US dollars (or equivalent in other currencies) in monthly revenue, you shall prominently display "Kimi K2.5" on the user interface of such product or service.

Why not just say "you shall pay us 1 million dollars"?

reply
vessenes
7 hours ago
[-]
? They prefer the branding. The license just says you have to say it was them if you make > $240mm a year on the model.
reply
viraptor
7 hours ago
[-]
Companies with $20M revenue will not normally have spare $1M available. They'd get more money by charging reasonable subscriptions than by using lawyers to chase sudden company-ending fees.
reply
laurentb
5 hours ago
[-]
it's monthly :) $240M revenue companies will absolutely find a way to fork over $1M if they need to. Kimi most likely sees the eyeballs of free advertising as more profitable in the grander scheme of things
reply
clayhacks
8 hours ago
[-]
I assume this allows them to sue for different amounts. And not discourage too many people from using it.
reply
bertili
7 hours ago
[-]
The "Deepseek moment" is just one year ago today!

Coincidence or not, let's just marvel for a second at the amount of magic/technology that's being given away for free... and how liberating and different this is from OpenAI and others that were closed to "protect us all".

reply
motoboi
5 hours ago
[-]
What amazes me is why would someone spend millions to train this model and give it away for free. What is the business here?
reply
whizzter
4 hours ago
[-]
A Chinese state that maybe sees open collaboration as the way to nullify any US lead in the field; concurrently, if the next "search winner" is built on their model, it carries the Chinese worldview that Taiwan belongs to China and the Tiananmen Square massacre never happened.

Also, their license says that if you have a big product you need to promote them; remember how Google "gave away" site search widgets, and that was perhaps one of the major ways they gained recognition as the search leader.

OpenAI/NVidia is the Pets.com/Sun of our generation, insane valuations, stupid spend, expensive options, expensive hardware and so on.

Sun hardware bought for 50k USD to run websites in 2000 is less capable than perhaps a 5 dollar/month VPS today?

"Scaling to AGI/ASI" was always a fools errand, best case OpenAI should've squirreled away money to have a solid engineering department that could focus on algorithmic innovations but considering that Antrophic, Google and Chinese firms have caught up or surpassed them it seems they didn't.

Once things blow up, those closed options that had somewhat sane/solid model research that handles things better will be left, along with a ton of new competitors running modern/cheaper hardware and just using models as building blocks.

reply
two_tasty
43 minutes ago
[-]
I love how Tiananmen square is always brought up as some unique and tragic example of disinformation that could never occur in the west, as though western governments don't do the exact same thing with our worldview. Your veneer of cynicism scarcely hides the structure of naivety behind.
reply
igneo676
4 minutes ago
[-]
The difference is that, in the west, there's an acceptable counter-narrative. I can tell you that Ruby Ridge and Waco never should've happened and were examples of government overreach and massacre of its own citizens. Or <insert pet issue with the government here>

You can't with Tiananmen square in China

reply
dev_l1x_be
3 hours ago
[-]
> Taiwan belongs to China

So they are on the same page as the UN and US?

The One China policy refers to a United States policy of strategic ambiguity regarding Taiwan.[1] In a 1972 joint communiqué with the PRC, the United States "acknowledges that all Chinese on either side of the Taiwan Strait maintain there is but one China and that Taiwan is a part of China" and "does not challenge that position."

https://en.wikipedia.org/wiki/One_China https://en.wikipedia.org/wiki/Taiwan_and_the_United_Nations

reply
9cb14c1ec0
1 hour ago
[-]
The One China policy is a fiction of foreign policy statecraft, designed to sideline the issue without having to actually deal with it. It is quite clear that apart from the official fiction there is a real policy that is not One China. This is made clear by the weapons sales to Taiwan that are specifically calibrated to make a Chinese military action harder.
reply
pqtyw
51 minutes ago
[-]
Existence of an independent and effectively sovereign state on the island of Taiwan (however one calls it) is a fact. Whatever doublespeak governments of other countries or international organizations engage in due to political reasons does not change that.
reply
zozbot234
4 hours ago
[-]
> "Scaling to AGI/ASI" was always a fools errand

Scaling depends on hardware, so cheaper hardware on a compute-per-watt basis only makes scaling easier. There is no clear definition of AGI/ASI but AI has already scaled to be quite useful.

reply
tokioyoyo
3 hours ago
[-]
Moonshot’s (Kimi’s owner) investors are Alibaba/Tencent et al. Chinese market is stupidly competitive, and there’s a general attitude of “household name will take it all”. However getting there requires having a WeChat-esque user base, through one way or another. If it’s paid, there’ll be friction and it won’t work. Plus, it undermines a lot of other companies, which is a win for a lot of people.
reply
Balinares
4 hours ago
[-]
Speculating: there are two connected businesses here, creating the models, and serving the models. Outside of a few moneyed outliers, no one is going to run this at home. So at worst opening this model allows mid-sized competitors to serve it to customers from their own infra -- which helps Kimi gain mindshare, particularly against the large incumbents who are definitely not going to be serving Kimi and so don't benefit from its openness.

Given the shallowness of moats in the LLM market, optimizing for mindshare would not be the worst move.

reply
ggdG
4 hours ago
[-]
I think this fits into a "Commoditize Your Complement" strategy.

https://gwern.net/complement

reply
WarmWash
2 hours ago
[-]
It's another state project funded at the discretion of the party.

If you look at past state projects, profitability wasn't really considered much. They are notorious for a "money hose until a diamond is found in the mountains of waste" approach.

reply
testfrequency
4 hours ago
[-]
Curious to hear what “OpenAI” thinks the answer to this is
reply
YetAnotherNick
4 hours ago
[-]
Hosting the model gets cheaper per token the more batched tokens you serve. So they have a big advantage here.
reply
jimmydoe
3 hours ago
[-]
It’s not coincidence. Chinese companies tend to do big releases before Chinese new year. So expect more to come before Feb 17.
reply
PlatoIsADisease
2 hours ago
[-]
I am convinced that was mostly just marketing. No one uses deepseek as far as I can tell. People are not running it locally. People choose GPT/Gemini/Claude/Grok if you are giving your data away anyway.

The biggest source of my conspiracy theory is that I made a reddit thread asking a question: "Why all the deepseek hype" or something like that. And to this day, I get odd 'pro-deepseek' comments from accounts only used every few months. It's not like this was some highly upvoted topic that sits in the 'Top'.

I'd put that deepseek marketing on-par with an Apple marketing campaign.

reply
logicprog
1 hour ago
[-]
I don't use DeepSeek, but I prefer Kimi and GLM to closed models for most of my work.
reply
mekpro
2 hours ago
[-]
Except that, on OpenRouter, DeepSeek always stays in the top 10 ranking. Although I don't use it personally, I believe their main advantage over other models is price/performance.
reply
catigula
2 hours ago
[-]
I mean, there are credible safety issues here. A Kimi fine-tune will absolutely be able to help people do cybersecurity related attacks - very good ones.

In a few years, or less, biological attacks and other sorts of attacks will be plausible with the help of these agents.

Chinese companies aren't humanitarian endeavors.

reply
jumploops
10 hours ago
[-]
> For complex tasks, Kimi K2.5 can self-direct an agent swarm with up to 100 sub-agents, executing parallel workflows across up to 1,500 tool calls.

> K2.5 Agent Swarm improves performance on complex tasks through parallel, specialized execution [..] leads to an 80% reduction in end-to-end runtime

Not just RL on tool calling, but RL on agent orchestration, neat!

reply
storystarling
49 minutes ago
[-]
1,500 tool calls per task sounds like a nightmare for unit economics though. I've been optimizing my own agent workflows and even a few dozen steps makes it hard to keep margins positive, so I'm not sure how this is viable for anyone not burning VC cash.
reply
zozbot234
44 minutes ago
[-]
"tool call" is just a reference to any elementary interaction with the outside system. It's not calling third-party APIs or anything like that.
reply
XCSme
4 hours ago
[-]
> Kimi K2.5 can self-direct an agent swarm

Is this within the model? Or within the IDE/service that runs the model?

Because tool calling is mostly just the agent outputting "call tool X", and the IDE does it and returns the data back to the AI's context

reply
mzl
4 hours ago
[-]
An LLM only outputs tokens, so this could be seen as an extension of tool calling, where it has been trained on the knowledge and use cases for "tool-calling" itself as a sub-agent.
reply
XCSme
3 hours ago
[-]
Ok, so agent swarm = tool calling where the tool is an LLM call and the argument is the prompt
reply
IanCal
1 hour ago
[-]
Yes largely, although they’ve trained a model specifically for this task rather than using the base model and a bit of prompting.
reply
dcre
2 hours ago
[-]
Sort of. It’s not necessarily a single call. In the general case it would be spinning up a long-running agent with various kinds of configuration — prompts, but also coding environment and which tools are available to it — like subagents in Claude Code.
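
A minimal sketch of the basic pattern (simplified to single calls rather than long-running agents; chat() is a placeholder for whatever completion endpoint you use, and none of these names are Kimi's or Claude's actual interface):

    from concurrent.futures import ThreadPoolExecutor

    def chat(prompt: str) -> str:
        # Placeholder for a call to your model endpoint (OpenAI-compatible, etc.).
        raise NotImplementedError

    def run_subagent(task: str) -> str:
        # Each sub-agent gets its own small, fresh context: just its task.
        return chat("You are a focused sub-agent. Complete this task:\n" + task)

    def swarm(goal: str) -> str:
        # 1. The orchestrator decomposes the goal into independent subtasks.
        plan = chat("Split this goal into independent subtasks, one per line:\n" + goal)
        tasks = [line for line in plan.splitlines() if line.strip()]

        # 2. Sub-agents run in parallel, each in its own short context.
        with ThreadPoolExecutor(max_workers=8) as pool:
            results = list(pool.map(run_subagent, tasks))

        # 3. The orchestrator aggregates the results into one answer.
        return chat("Combine these subtask results into one answer:\n" + "\n---\n".join(results))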
reply
mohsen1
6 hours ago
[-]
Parallel agents are such a simple, yet powerful hack. Using it in Claude Code with TeammateTool and getting lots of good results!
reply
esperent
5 hours ago
[-]
> TeammateTool

What is this?

reply
frimmy
4 hours ago
[-]
https://x.com/kieranklaassen/status/2014830266515382693 - agent swarms tool shipping w/ cc soon..
reply
jlu
4 hours ago
[-]
Claude Code hidden feature currently under a feature flag:

https://github.com/mikekelly/claude-sneakpeek

reply
vinhnx
7 hours ago
[-]
One thing that caught my eye is that besides the K2.5 model, Moonshot AI also launched Kimi Code (https://www.kimi.com/code), which evolved from Kimi CLI. It is a terminal coding agent; I've been using it for the last month with a Kimi subscription, and it is a capable agent with a stable harness.

GitHub: https://github.com/MoonshotAI/kimi-cli

reply
esafak
1 hour ago
[-]
Does it support the swarm feature? Does Opencode?
reply
forgotpwd16
3 hours ago
[-]
>Kimi Code CLI is not only a coding agent, but also a shell.

That's cool. It also has a zsh hook, allowing you to switch to agent mode wherever you are.

reply
vinhnx
3 hours ago
[-]
It is. Kimi Code CLI supports Zed's Agent Client Protocol (http://agentclientprotocol.com/), so it can act as an external agent that can run in any ACP-compatible client, e.g. Zed, JetBrains, Toad CLI, Minano Notebook. Also, it supports Agent Skills. The Moonshot AI developers actively update the agent and are very active. I really like their CLI.
reply
Imanari
3 hours ago
[-]
How does it fare against CC?
reply
Alifatisk
6 hours ago
[-]
Have you all noticed that the latest releases (Qwen3 Max Thinking, now Kimi K2.5) from Chinese companies are benching against Claude Opus now and not Sonnet? They are truly catching up, almost at the same pace.
reply
conception
2 hours ago
[-]
https://clocks.brianmoore.com

K2 is one of the only models to nail the clock face test as well. It’s a great model.

reply
DJBunnies
2 hours ago
[-]
Cool comparison, but none of them get both the face and the time correct when I look at it.
reply
WarmWash
2 hours ago
[-]
They distill the major western models, so anytime a new SOTA model drops, you can expect the Chinese labs to update their models within a few months.
reply
Alifatisk
39 minutes ago
[-]
Yes, they do distill. But saying that all they do is distill is not correct and actually kind of unfair. These Chinese labs have done lots of research in this field and publish it to the public, and some if not the majority contribute open-weight models, making a future of local LLMs possible! DeepSeek, Moonshot, MiniMax, Z.ai, Alibaba (Qwen).

They are not just leeching here; they took this innovation, refined it and improved it further. This is what the Chinese are good at.

reply
zozbot234
2 hours ago
[-]
This is just a conspiracy theory/urban legend. How do you "distill" a proprietary model with no access to the original weights? Just doing the equivalent of training on chat/API logs has terrible effectiveness (you're trying to drink from a giant firehose through a tiny straw) and gives you no underlying improvements.
reply
esafak
1 hour ago
[-]
They are, in benchmarks. In practice Anthropic's models are ahead of where their benchmarks suggest.
reply
HNisCIS
46 minutes ago
[-]
Bear in mind that lead may be, in large part, from the tooling rather than the model
reply
zozbot234
6 hours ago
[-]
The benching is sus, it's way more important to look at real usage scenarios.
reply
Reubend
9 hours ago
[-]
I've read several people say that Kimi K2 has a better "emotional intelligence" than other models. I'll be interested to see whether K2.5 continues or even improves on that.
reply
mohsen1
1 hour ago
[-]
I'll test it out on mafia-arena.com once it is available on Open Router
reply
Alifatisk
4 hours ago
[-]
Yup, I experience the same. I don't know what they do to achieve this but it gives them this edge, really curious to learn more about what makes it so good at it.
reply
storystarling
9 hours ago
[-]
yes, though this is highly subjective - it 'feels' like that to me as well (compared to Gemini 3, GPT 5.2, Opus 4.5).
reply
Topfi
8 hours ago
[-]
K2 0905, and K2 Thinking shortly after it, did impressively well in my personal use cases and were severely slept on. Faster, more accurate, less expensive, more flexible in terms of hosting, and available months before Gemini 3 Flash; I really struggle to understand why Flash got such positive attention at launch.

Interested in the dedicated Agent and Agent Swarm releases, especially in how that could affect third party hosting of the models.

reply
msp26
8 hours ago
[-]
K2 thinking didn't have vision which was a big drawback for my projects.
reply
throwaw12
6 hours ago
[-]
Congratulations, great work Kimi team.

Why is it that Claude is still at the top in coding? Are they heavily focused on training for coding, or is their general training so good that it performs well in coding?

Someone please beat the Opus 4.5 in coding, I want to replace it.

reply
symisc_devel
17 minutes ago
[-]
Gemini 3 Pro is way better than Opus, especially for large codebases.
reply
pokot0
3 hours ago
[-]
I don't think that kind of difference in benchmarks has any meaning at all. Your agentic coding tool and the task you are working on introduce a lot more "noise" than that small delta.

Also consider that they are all overfitting on the benchmark itself, so there might be that as well (which can go in either direction).

I consider the top models practically identical for coding applications (just personal experience with heavy use of both GPT5.2 and Opus 4.5).

Excited to see how this model compares in real applications. It's 1/5th of the price of top models!!

reply
Balinares
4 hours ago
[-]
I replaced Opus with Gemini Pro and it's just plain a better coder IMO. It'll restructure code to enable support for new requirements where Opus seems to just pile on more indirection layers by default, when it doesn't outright hardcode special cases inside existing functions, or drop the cases it's failing to support from the requirements while smugly informing you you don't need that anyway.
reply
MattRix
4 hours ago
[-]
Opus 4.5 only came out two months ago, and yes Anthropic spends a lot of effort making it particularly good at coding.
reply
zmmmmm
9 hours ago
[-]
Curious what would be the most minimal reasonable hardware one would need to deploy this locally?
reply
NitpickLawyer
9 hours ago
[-]
I parsed "reasonable" as in having reasonable speed to actually use this as intended (in agentic setups). In that case, it's a minimum of 70-100k for hardware (8x 6000 PRO + all the other pieces to make it work). The model comes with native INT4 quant, so ~600GB for the weights alone. An 8x 96GB setup would give you ~160GB for kv caching.

You can of course "run" this on cheaper hardware, but the speeds will not be suitable for actual use (i.e. minutes for a simple prompt, tens of minutes for high context sessions per turn).

reply
simonw
6 hours ago
[-]
Models of this size can usually be run using MLX on a pair of 512GB Mac Studio M3 Ultras, which are about $10,000 each so $20,000 for the pair.
reply
PlatoIsADisease
2 hours ago
[-]
You might want to clarify that this is more of a "Look it technically works"

Not a "I actually use this"

The difference between waiting 20 minutes to answer the prompt '1+1='

and actually using it for something useful is massive here. I wonder where this idea of running AI on CPU comes from. Was it Apple astroturfing? Was it Apple fanboys? I don't see people wasting time on non-Apple CPUs. (Although, I did do this for a 7B model)

reply
mholm
1 hour ago
[-]
The reason Macs get recommended is the unified memory, which is usable as VRAM for the GPU. People are similarly using the AMD Strix Halo for AI which also has a similar memory architecture. Time to first token for something like '1+1=' would be seconds, and then you'd be getting ~20 tokens per second, which is absolutely plenty fast for regular use. Token/s slows down at the higher end of context, but it's absolutely still practical for a lot of usecases. Though I agree that agentic coding, especially over large projects, would likely get too slow to be practical.
reply
zozbot234
1 hour ago
[-]
Not too slow if you just let it run overnight/in the background. But the biggest draw would be no rate limits whatsoever compared to the big proprietary APIs, especially Claude's. No risk of sudden rugpulls either, and the model will have very consistent performance.
reply
tucnak
1 hour ago
[-]
The Mac Studio way is not "AI on CPU", as M2/M4 are complex SoCs that include a GPU with unified memory access.
reply
tosh
6 hours ago
[-]
I think you can put a bunch of apple silicon macs with enough ram together

e.g. in an office or coworking space

800-1000 gb ram perhaps?

reply
spaceman_2020
10 hours ago
[-]
Kimi was already one of the best writing models. Excited to try this one out
reply
Alifatisk
7 hours ago
[-]
To me, Kimi has been the best at writing and conversing, it's way more human-like!
reply
simonw
6 hours ago
[-]
reply
simonw
2 hours ago
[-]
reply
mythz
6 hours ago
[-]
doesn't work, looks like the link or SVG was cropped.
reply
bavell
4 hours ago
[-]
No pelican for me :(
reply
Barathkanna
7 hours ago
[-]
A realistic setup for this would be a 16× H100 80GB with NVLink. That comfortably handles the active 32B experts plus KV cache without extreme quantization. Cost-wise we are looking at roughly $500k–$700k upfront or $40–60/hr on-demand, which makes it clear this model is aimed at serious infra teams, not casual single-GPU deployments. I’m curious how API providers will price tokens on top of that hardware reality.
reply
a2128
3 hours ago
[-]
You don't need to wait and see, Kimi K2 has the same hardware requirements and has several providers on OpenRouter:

https://openrouter.ai/moonshotai/kimi-k2-thinking https://openrouter.ai/moonshotai/kimi-k2-0905 https://openrouter.ai/moonshotai/kimi-k2-0905:exacto https://openrouter.ai/moonshotai/kimi-k2

Generally it seems to be in the neighborhood of $0.50/1M for input and $2.50/1M for output

reply
wongarsu
5 hours ago
[-]
The weights are int4, so you'd only need 8xH100
reply
reissbaker
6 hours ago
[-]
Generally speaking, 8xH200s will be a lot cheaper than 16xH100s, and faster too. But both should technically work.
reply
pama
2 hours ago
[-]
You can do it, and it may be OK for a single user with idle waiting times, but performance/throughput will be roughly halved (closer to 2/3) and free context will be more limited with 8xH200 vs 16xH100 (assuming decent interconnect). Depending a bit on use case and workload, 16xH100 (or 16xB200) may be a better config for cost optimization. Often there is a huge economy of scale with such large mixture-of-experts models, so that it can even be cheaper to use 96 GPUs instead of just 8 or 16. The reasons are complicated and involve better prefill cache and less memory transfer per node.
reply
bertili
6 hours ago
[-]
The other realistic setup is $20k, for a small company that needs a private AI for coding or other internal agentic use: two Mac Studios connected over Thunderbolt 5 RDMA.
reply
Barathkanna
6 hours ago
[-]
That won’t realistically work for this model. Even with only ~32B active params, a 1T-scale MoE still needs the full expert set available for fast routing, which means hundreds of GB to TBs of weights resident. Mac Studios don’t share unified memory across machines, Thunderbolt isn’t remotely comparable to NVLink for expert exchange, and bandwidth becomes the bottleneck immediately. You could maybe load fragments experimentally, but inference would be impractically slow and brittle. It’s a very different class of workload than private coding models.
reply
bertili
6 hours ago
[-]
People are running the previous Kimi K2 on 2 Mac Studios at 21 tokens/s, or 4 Macs at 30 tokens/s. It's still premature, but not a completely crazy proposition for the near future, given the rate of progress.
reply
NitpickLawyer
6 hours ago
[-]
> 2 Mac Studios at 21tokens/s or 4 Macs at 30tokens/s

Keep in mind that most people posting speed benchmarks try them with basically 0 context. Those speeds will not hold at 32/64/128k context length.

reply
zozbot234
6 hours ago
[-]
If "fast" routing is per-token, the experts can just reside on SSD's. the performance is good enough these days. You don't need to globally share unified memory across the nodes, you'd just run distributed inference.

Anyway, in the future your local model setups will just be downloading experts on the fly from experts-exchange. That site will become as important to AI as downloadmoreram.com.

reply
omneity
3 hours ago
[-]
RDMA over Thunderbolt is a thing now.
reply
YetAnotherNick
4 hours ago
[-]
Depends on whether you are using tensor parallelism or pipeline parallelism; in the second case you don't need any sharing.
reply
embedding-shape
6 hours ago
[-]
I'd love to see the prompt processing speed difference between 16× H100 and 2× Mac Studio.
reply
zozbot234
6 hours ago
[-]
Prompt processing/prefill can even get some speedup from local NPU use most likely: when you're ultimately limited by thermal/power limit throttling, having more efficient compute available means more headroom.
reply
Barathkanna
6 hours ago
[-]
I asked GPT for a rough estimate to benchmark prompt prefill on an 8,192 token input.

• 16× H100: 8,192 / (20k to 80k tokens/sec) ≈ 0.10 to 0.41s

• 2× Mac Studio (M3 Max): 8,192 / (150 to 700 tokens/sec) ≈ 12 to 55s

These are order-of-magnitude numbers, but the takeaway is that multi H100 boxes are plausibly ~100× faster than workstation Macs for this class of model, especially for long-context prefill.

reply
ffsm8
4 hours ago
[-]
You do realize that's entirely made up, right?

Could be true, could be fake - the only thing we can be sure of is that it's made up with no basis in reality.

This is not how you use LLMs effectively; that's how you give everyone who's using them a bad name by association.

reply
zozbot234
6 hours ago
[-]
That's great for affordable local use but it'll be slow: even with the proper multi-node inference setup, the thunderbolt link will be a comparative bottleneck.
reply
teiferer
5 hours ago
[-]
Can we please stop calling these models "open source"? Yes, the weights are open. So "open weights", maybe. But the source isn't open: the thing that allows you to re-create it. That's what "open source" used to mean. (Together with a license that allows you to use that source for various things.)
reply
Jackson__
8 hours ago
[-]
As your local vision nut, their claims about "SOTA" vision are absolutely BS in my tests.

Sure, it's SOTA at standard vision benchmarks. But on tasks that require proper image understanding (see for example BabyVision[0]), it appears very much lacking compared to Gemini 3 Pro.

[0] https://arxiv.org/html/2601.06521v1

reply
nostrebored
36 minutes ago
[-]
Gemini remains the only usable vision fm :(
reply
pu_pe
7 hours ago
[-]
I don't get this "agent swarm" concept. You set up a task and they boot up 100 LLMs to try to do it in parallel, and then one "LLM judge" puts it all together? Is there anywhere I can read more about it?
reply
vessenes
7 hours ago
[-]
You can read about this basically everywhere - the term of art is agent orchestration. Gas town, Claude’s secret swarm mode, or people who like to use phrases like “Wiggum loop” will get you there.

If you’re really lazy - the quick summary is that you can benefit from the sweet spot of context length and reduce instruction overload while getting some parallelism benefits from farming tasks out to LLMs with different instructions. The way this is generally implemented today is through tool calling, although Claude also has a skills interface it has been trained against.

So the idea would be for software development, why not have a project/product manager spin out tasks to a bunch of agents that are primed to be good at different things? E.g. an architect, a designer, and so on. Then you just need something that can rectify GitHub PRs and bob’s your uncle.

Gas town takes a different approach and parallelizes on coding tasks of any sort at the base layer, and uses the orchestration infrastructure to keep those coders working constantly, optimizing for minimal human input.

reply
IanCal
6 hours ago
[-]
I'm not sure whether there are parts of this done for claude but those other ones are layers on top of the usual LLMs we see. This seems to be a bit different, in that there's a different model trained specifically for splitting up and managing the workload.
reply
Rebuff5007
6 hours ago
[-]
I've also been quite skeptical, and I became even more skeptical after hearing a tech talk from a startup in this space [1].

I think the best way to think about it is that it's an engineering hack to deal with a shortcoming of LLMs: for complex queries, LLMs are unable to directly compute a SOLUTION given a PROMPT, but they are able to break down the prompt into intermediate solutions and eventually solve the original prompt. These "orchestrator" / "swarm" agents add some formalism to this, allow you to distribute compute, and then also use specialized models for some of the sub-problems.

[1] https://www.deepflow.com/

reply
jonkoops
7 hours ago
[-]
The datacenters yearn for the chips.
reply
rvnx
7 hours ago
[-]
You have a team lead that establishes a list of tasks that are needed to achieve your mission

then it creates a list of employees, each of them is specialized for a task, and they work in parallel.

Essentially hiring a team of people who get specialized on one problem.

Do one thing and do it well.

reply
XCSme
4 hours ago
[-]
But in the end, isn't this the same idea as MoE?

Where we have more specialized "jobs", which the model is actually trained for.

I think the main difference with agent swarm is the ability to run them in parallel. I don't see how this adds much compared to simply sending multiple API calls in parallel with your desired tasks. I guess the only difference is that you let the AI decide how to split those requests and what each task should be.

reply
zozbot234
4 hours ago
[-]
Nope. MoE is strictly about model parameter sparsity. Agents are about running multiple small-scale tasks in parallel and aggregating the results for further processing - it saves a lot of context length compared to having it all in a single session, and context length has quadratic compute overhead so this matters. You can have both.

One positive side effect of this is that if subagent tasks can be dispatched to cheaper and more efficient edge-inference hardware that can be deployed at scale (think nVidia Jetsons or even Apple Macs or AMD APU's) even though it might be highly limited in what can fit on the single node, then complex coding tasks ultimately become a lot cheaper per token than generic chat.

reply
XCSme
4 hours ago
[-]
Yes, I know you can have both.

My point was that this is just a different way of creating specialised task solvers, the same as with MoE.

And, as you said, with MoE it's about the model itself, and it's done at training level so that's not something we can easily do ourselves.

But with agent swarm, isn't it simply splitting a task into multiple sub-tasks and sending each one in a different API call? So this can be done with any of the previous models too, only the user has to manually define those tasks/contexts for each query.

Or is this at a much more granular level than this, which would not be feasible to be done by hand?

I was already doing this in n8n, creating different agents with different system prompts for different tasks. I am not sure if automating this (with swarm) would work well in most of my cases; I don't see how this fully complements Tools or Skills.

reply
zozbot234
3 hours ago
[-]
MoE has nothing whatsoever to do with specialized task solvers. It always operates per token within a single task, you can think of it perhaps as a kind of learned "attention" for model parameters as opposed to context data.
reply
XCSme
3 hours ago
[-]
Yes, specific weights/parameters have been trained to solve specific tasks (trained on different data).

Or did I misunderstand the concept of MoE, and it's not about having specific parts of the model (parameters) do better on specific input contexts?

reply
dev_l1x_be
3 hours ago
[-]
I've had weird situations where some models refuse to use SSH as a tool. Not sure if it was a coding-tool limitation or if it's baked into some of the models.
reply
striking
9 hours ago
[-]
reply
hmate9
6 hours ago
[-]
About 600GB is needed for the weights alone, so on AWS you need a p5.48xlarge (8× H100), which costs $55/hour.
reply
jdeng
6 hours ago
[-]
Glad to see open source models catching up and treating vision as a first-class citizen (a.k.a. native multimodal agentic models). GLM and Qwen models take a different approach, having a base model and a vision variant (glm-4.6 vs glm-4.6v).

I guess after Kimi K2.5, other vendors are going to go the same route?

Can't wait to see how this model performs on computer automation use cases like VITA AI Coworker.

https://www.vita-ai.net/

reply
monkeydust
7 hours ago
[-]
Is this actually good, or just optimized heavily for benchmarks? I'm hopeful it's the former based on the writeup, but I need to put it through its paces.
reply
pplonski86
9 hours ago
[-]
There are so many models; is there any website with a list of all of them and a comparison of performance on different tasks?
reply
Reubend
9 hours ago
[-]
The post actually has great benchmark tables inside of it. They might be outdated in a few months, but for now, it gives you a great summary. Seems like Gemini wins on image and video perf, Claude is the best at coding, ChatGPT is the best for general knowledge.

But ultimately, you need to try them yourself on the tasks you care about and just see. My personal experience is that right now, Gemini Pro performs the best at everything I throw at it. I think it's superior to Claude and all of the OSS models by a small margin, even for things like coding.

reply
Imustaskforhelp
9 hours ago
[-]
I like Gemini Pro's UI over Claude so much, but honestly I might start using Kimi K2.5 if it's open source & just +/- Gemini Pro/ChatGPT/Claude, because at that point I feel like the differences are negligible and we are getting SOTA open source models again.
reply
wobfan
6 hours ago
[-]
> honestly I might start using Kimi K2.5 if it's open source & just +/- Gemini Pro/ChatGPT/Claude, because at that point I feel like the differences are negligible and we are getting SOTA open source models again.

Me too!

> I like Gemini Pro's UI over Claude so much

This I don't understand. I mean, I don't see a lot of difference in both UIs. Quite the opposite, apart from some animations, round corners and color gradings, they seem to look very alike, no?

reply
Imustaskforhelp
5 hours ago
[-]
Y'know, I ended up buying Kimi's moderato plan, which is $19, but they had this unique idea where you can talk to a bot and it can reduce the price.

I made it reduce the price of the first month to $1.49 (it could go to $0.99 and my frugal mind wanted it haha, but I just couldn't make it do that lol)

Anyways, afterwards, for privacy purposes (I am a minor so I don't have a card), I ended up going to G2A to get a $10 Visa gift card, essentially, and used it. (I had to pay $1 extra but sure.)

Installed kimi code on my mac and trying it out. Honestly, I am kind of liking it.

My internal benchmark is creating web pomodoro apps in Go... Gemini 3 Pro has nailed it; I just tried the Kimi version and it does have some bugs, but it feels like it added more features.

Gonna have to try it out for a month.

I mean, I just wish it was this cheap for the whole year :< (as I could then move away from, say, using the completely free models)

Gonna have to try it out more!

reply
coffeeri
9 hours ago
[-]
reply
XCSme
4 hours ago
[-]
There are many lists, but I find all of them outdated or containing wrong information or missing the actual benchmarks I'm looking for.

I was thinking that maybe it's better to make my own benchmarks with the questions/things I'm interested in, and whenever a new model comes out, run those tests with that model using OpenRouter.

reply
pplonski86
8 hours ago
[-]
Thank you! Exactly what I was looking for
reply
DeathArrow
10 hours ago
[-]
Those are some impressive benchmark results. I wonder how well it does in real life.

Maybe we can get away with something cheaper than Claude for coding.

reply
oneneptune
10 hours ago
[-]
I'm curious about the "cheaper" claim -- I checked Kimi pricing, and it's a $200/mo subscription too?
reply
NitpickLawyer
10 hours ago
[-]
On OpenRouter, K2.5 is at $0.60/$3 per Mtok. That's Haiku pricing.
reply
storystarling
8 hours ago
[-]
The unit economics seem tough at that price for a 1T parameter model. Even with MoE sparsity you are still VRAM bound just keeping the weights resident, which is a much higher baseline cost than serving a smaller model like Haiku.
reply
mrklol
9 hours ago
[-]
They also have a $20 and $40 tier.
reply
esafak
1 hour ago
[-]
reply
Alifatisk
4 hours ago
[-]
If you bargain with their bot Kimmmmy (not joking), you can even get lower pricing.
reply
mohsen1
1 hour ago
[-]
tell me more...
reply
Alifatisk
1 hour ago
[-]
Go to Kimi chat and multiple suggested use cases will come up. One of them is the bargaining robot. If you download their mobile app, the bargaining challenge will probably pop up too!

Depending on how well you bargain with the robot, you can go as low as $0.99 (difficult). Either way, their moderate plan doesn't have to be $20. The agent wants a good reason for why it should lower the price for you.

Here’s the direct link to Kimmmmy:

https://www.kimi.com/kimiplus/sale

I’ll send an invite link too if you don’t mind:

https://www.kimi.com/kimiplus/sale?activity_enter_method=h5_...

reply
mangolie
10 hours ago
[-]
they cooked
reply
lrvick
10 hours ago
[-]
Actually open source, or yet another public model, which is the equivalent of a binary?

URL is down so cannot tell.

reply
Tepix
10 hours ago
[-]
It's open weights, not open source.
reply
typ
8 hours ago
[-]
The label 'open source' has become a reputation-reaping and marketing vehicle rather than an informative term since the Hugging Face benchmark race started. With the weights only, we cannot actually audit whether a model is a) contaminated by benchmarks, b) built with deliberate biases, or c) trained on copyrighted/private data, let alone allow other vendors to replicate the results. Anyways, people still love free stuff.
reply
Der_Einzige
7 hours ago
[-]
Just accept that IP laws don't matter and the old "free software" paradigm is dead. Aaron Swartz died so that GenAI may live. RMS and his model of "copyleft" are so Web 1.0 (not even 2.0). No one in GenAI cares AT ALL about the true definition of open source. Good.
reply
duskdozer
5 hours ago
[-]
Good?
reply
billyellow
11 hours ago
[-]
Cool
reply
rvz
8 hours ago
[-]
The chefs at Moonshot have cooked once again.
reply