I am aware I could Google it all or ask an LLM, but I'm still interested in a good human explanation.
Roughly, an LLM does two things with your input:
- apply learned knowledge from its parameters to every part of the input representation ("tokenized", i.e. chunkified, text).
- apply mixing of the input representation with other parts of itself. This is called "attention" for historical reasons. The original attention mixes (roughly) every token (say N of them) with every other token (N again), so we pay a compute cost proportional to N squared.
The attention cost therefore grows quickly, in both compute and memory, as the input/conversation gets long (or even includes whole documents).
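If it helps to see that quadratic part concretely, here is a minimal sketch of plain single-head attention in NumPy; the sequence length and dimensions are arbitrary made-up numbers, not any real model's:

```python
# Minimal sketch of vanilla ("all-to-all") attention. The score matrix is N x N,
# which is where the quadratic compute and memory cost comes from.
import numpy as np

N, d = 1024, 64                     # sequence length, per-head dimension (made up)
Q = np.random.randn(N, d)           # queries, one row per token
K = np.random.randn(N, d)           # keys
V = np.random.randn(N, d)           # values

scores = Q @ K.T / np.sqrt(d)       # shape (N, N): every token vs. every other token
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
mixed = weights @ V                 # each token becomes a weighted mix of all tokens

print(scores.shape)                 # (1024, 1024): doubling N quadruples this matrix
```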
Reducing the quadratic part to something cheaper is a very active field of research, but so far it has proven rather difficult, because, as you can see, it means giving up on mixing every part of the input with every other part.
Most of the time, mixing token representations that are close to each other matters more than mixing those that are far apart, but not always. That's why there are now many attempts to do away with most of the quadratic attention layers while keeping some.
What to do during mixing once you give up all-to-all attention is the big research question: many approaches seem to behave well only under some conditions, and we haven't established anything as good and versatile as all-to-all attention.
If you forgo all-to-all, you also open up many options (e.g. all-to-something followed by something-to-all as a pattern, where the "something" serves as a sort of memory or state that summarizes all inputs at once; you can imagine that summarizing all inputs well is a lossy abstraction, though).
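As a very rough sketch of that "memory or state" idea, in the spirit of linear attention (heavily simplified: no normalization, no gating or forgetting, made-up data):

```python
# Rough sketch of the "all-to-state, state-to-all" idea (linear-attention flavour).
# Instead of an N x N score matrix, we keep one fixed-size state S that summarizes
# everything seen so far; cost grows linearly in N, but the summary is lossy.
import numpy as np

N, d = 1024, 64
Q = np.random.randn(N, d)
K = np.random.randn(N, d)
V = np.random.randn(N, d)

S = np.zeros((d, d))                 # the "memory": its size does not depend on N
outputs = []
for t in range(N):                   # single pass over the sequence
    S += np.outer(K[t], V[t])        # write: fold token t into the summary
    outputs.append(Q[t] @ S)         # read: token t queries the summary, not all tokens
out = np.stack(outputs)              # (N, d), computed without any (N, N) matrix
```

Real methods in this family (DeltaNet-style updates, gating, decay, etc.) mostly differ in how carefully they write to and forget from that state.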
With self-attention you compare every token in a sequence with every other token in that same sequence, figuring out which word references which other word (e.g. in "George is sitting in the park. He's reading a book.", "He" would correlate with "George", letting the model know what it refers to). Of course these are also trained layers, so what the model thinks correlates with what, and how that info is used in the DNN/perceptron part, depends wholly on the training process.
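A toy illustration of that "He" → "George" link, with hand-picked 2-D vectors rather than a trained model, just to show the mechanics of the dot-product-plus-softmax step:

```python
# Hand-crafted (not trained) toy: the query for "He" points roughly in the same
# direction as the key for "George", so softmax puts the largest weight there.
import numpy as np

tokens = ["George", "is", "sitting", "in", "the", "park", ".", "He"]
keys = np.array([
    [1.0, 0.1],   # "George" (a person)
    [0.0, 1.0],   # "is"
    [0.1, 0.9],   # "sitting"
    [0.0, 1.0],   # "in"
    [0.0, 1.0],   # "the"
    [0.2, 0.8],   # "park"
    [0.0, 1.0],   # "."
    [0.9, 0.2],   # "He"
])
query_he = np.array([1.0, 0.0])      # what "He" is looking for: an earlier person mention

scores = keys @ query_he             # one score per token in the sentence
weights = np.exp(scores) / np.exp(scores).sum()
for tok, w in zip(tokens, weights):
    print(f"{tok:8s} {w:.2f}")       # "George" gets the largest weight
```

In a real model the query and key vectors come from learned weight matrices, which is the "trained layers" part above.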
There is no free lunch with this: with only 1/4 of layers having it, the model will perform significantly worse at identifying relevant info and will likely decohere a lot compared to having it on every layer. But since you get rid of the quadratic complexity, it'll be much faster. Think of the "I'm doing 1000 calculations per second and they're all wrong" meme. So far there have been lots of attempts at linear-ish attention (e.g. Google's sliding-window hackery that only computes part of the vectors and hopes for good locality, Mamba combinations with RNNs, Meta removing positional encodings in attention in the trainwreck that was Llama 4, etc.) and they've mostly failed, so the holy grail is finding a way to make it work, since you'd get the best of both worlds. The top-performing models today all use fully quadratic attention, or combine it with sliding windows in some layers to claw back some speed in long-context scenarios at the cost of some accuracy.
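For the sliding-window variant mentioned above, here is a rough sketch of the idea (not any particular model's implementation; the window size and shapes are made up):

```python
# Rough sketch of sliding-window attention: each token only attends to the last W
# tokens, so the work is roughly N * W instead of N * N.
import numpy as np

N, d, W = 1024, 64, 128              # sequence length, head dim, window (made up)
Q = np.random.randn(N, d)
K = np.random.randn(N, d)
V = np.random.randn(N, d)

out = np.zeros((N, d))
for t in range(N):
    lo = max(0, t - W + 1)                       # only the local window is visible
    scores = Q[t] @ K[lo:t + 1].T / np.sqrt(d)   # at most W scores instead of N
    w = np.exp(scores - scores.max())
    w /= w.sum()                                 # softmax over the window
    out[t] = w @ V[lo:t + 1]                     # tokens older than W are not attended to here
```

This is where the "hopes for good locality" criticism comes from: within one such layer, anything outside the window simply isn't seen.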
How likely are we to NOT see the AI data center apocalypse through better algorithms?
But so far this has just led to more induced demand. There are a lot of things we would use LLMs for if they were just cheap enough, and every increase in efficiency makes more of those use cases viable.
If anything, the US is massively underproducing.
Near certain IMO. Algorithmic improvements have outpaced hardware improvements for decades. We're already seeing the rise of small models and how simple tweaks can make small models very capable problem solvers, better even than state of the art large models. Data center scaling is nearing its peak IMO as we're hitting data limits which cap model size anyway.
Of course with Kimi there is fear because the Chinese government can easily pressure Moonshot AI into sharing the data, and other countries have to work to stealthily siphon data off without being caught by Chinese counterintelligence. As opposed to GPT5 where the American government can easily pressure OpenAI and every other country has to stealthily siphon data off without being caught by American counterintelligence. The only way to be reasonably certain that you aren't spied on is to run your own models or rent GPU time to run models.
The bigger worry imho is whether the models are booby-trapped to give poisoned answers when they detect certain queries or when they detect that you work for a competitor or enemy of China. But that would have to be reasonably stealthy to work.
Not doubting that they aren't spying on people, but regardless, how would you really know? Are you basing this on the fact that no Chinese police have visited you, or how would you really know whether it's "simply true" or not?
With that said, I use plenty of models coming out of China too with no fear, but I'm also using them locally, not cloud platforms.
The model, the tooling you use, and even which prompts you use with which model all have a big impact on the quality of the responses you get.
Consider the implications of increases in efficiency *when you hold compute constant*.
The win is far more obvious when it's "we can do more with what we have" instead of "we can do the same with less".
A lot of the optimizations are not some groundbreaking new way to program; they're techniques already known to any software engineer or systems engineer.
Hindsight is a bitch, huh? Everything looks simple once people have proved it kind of works, but I think you oversimplify how easy it is.
Lots of stuff in ML, particularly in the last ~5 years or so, hasn't been "implementing old algorithms", although of course everything is based on research that happened in the past; we're standing on the shoulders of giants and all that.
Honestly, it would take 24 hours just to download the 98 GB model if I wanted to try it out (assuming I had a card with 98 GB of RAM).
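For what it's worth, the rough arithmetic implied by those numbers:

```python
# 98 GB in 24 hours works out to roughly a 9 Mbit/s connection.
size_bytes = 98e9
seconds = 24 * 3600
print(size_bytes / seconds / 1e6)       # ~1.1 MB/s
print(size_bytes * 8 / seconds / 1e6)   # ~9 Mbit/s
```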
With an RTX 3070 (8 GB VRAM), 32 GB of RAM and an SSD, I can run such models at speeds tolerable for casual use.
> Can you explain what this means and its significance? Assume that I'm a layperson with no familiarity with LLM jargon so explain all of the technical terms, references, names. https://github.com/MoonshotAI/Kimi-Linear
Imagine your brain could only “look at” a few words at a time when you read a long letter. Today’s big language models (the AI that powers chatbots) have the same problem: the longer the letter gets, the more scratch paper they need to keep track of it all. That scratch paper is called the “KV cache,” and for a 1,000,000-word letter it can fill a small library.
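To put a rough number on that “small library” (a back-of-envelope only; the layer count and widths below are made-up illustrative values, not Kimi’s actual configuration):

```python
# Back-of-envelope KV-cache size: one key vector and one value vector stored
# per token, per layer. All numbers here are assumptions for illustration.
layers = 32            # assumed transformer depth
kv_dim = 4096          # assumed key/value width per layer
bytes_each = 2         # fp16/bf16 storage
tokens = 1_000_000     # ~1M-token context

cache_bytes = tokens * layers * kv_dim * 2 * bytes_each   # x2 for keys AND values
print(cache_bytes / 1e9, "GB")    # ~524 GB of "scratch paper" under these assumptions
```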
Kimi Linear is a new way for the AI to read and write that throws away most of that scratch paper yet still understands the letter. It does this by replacing the usual “look at every word every time” trick (full attention) with a clever shortcut called linear attention. The shortcut is packaged into something they call Kimi Delta Attention (KDA).
What the numbers mean in plain English
    51.0 on MMLU-Pro: on a 4,000-word school-test set, the shortcut scores about as well as the old, slow method.
    84.3 on RULER at 128,000 words: on a much longer test it keeps the quality high while running almost four times faster.
    6× faster TPOT (time per output token): when the AI is writing its reply, each new word appears up to six times sooner than with the previous best shortcut (MLA, multi-head latent attention).
    75% smaller KV cache: the scratch paper is only one-quarter the usual size, so you can fit longer conversations in the same memory.
What the terms mean in plain English
    Full attention: the old, accurate but slow “look back at every word” method.
    KV cache: the scratch paper that stores which words were already seen.
    Linear attention: a faster but traditionally weaker way of summarising what was read.
    Gated DeltaNet: an improved linear attention trick that keeps the most useful bits of the summary.
    Kimi Delta Attention (KDA): Moonshot’s even better version of Gated DeltaNet.
    Hybrid 3:1 mix: three layers use the fast KDA shortcut, one layer still uses the old reliable full attention, giving speed without losing smarts (see the rough arithmetic after this list).
    48 B total, 3 B active: the model has 48 billion total parameters but only 3 billion “turn on” for any given word, saving compute.
    Context length 1 M: it can keep track of about 1,000,000 words in one go, longer than most novels.
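Rough arithmetic behind the “75% smaller KV cache” claim, as implied by the 3:1 mix (a simplification: it ignores the small fixed-size state the KDA layers still keep, which does not grow with context length):

```python
# If only 1 layer in 4 keeps a growing KV cache, the growing part of the cache
# shrinks to ~25% of what an all-full-attention model would need, i.e. ~75% smaller.
total_layers = 32                            # assumed depth, for illustration only
full_attention_layers = total_layers // 4    # the "1" in the 3:1 hybrid mix
kda_layers = total_layers - full_attention_layers

kv_cache_fraction = full_attention_layers / total_layers
print(kv_cache_fraction)                     # 0.25
```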