Btw, recently the support was extended and now the steering vector can be applied to the activations at different times: always, only after thinking, only outside of tool calling, ...
Something important that not many folks realize: steering along a vector direction inside the inference engine itself is far superior to having GGUFs modified in the same way. The more you steer, the more you damage the model's capabilities. Applying it at runtime, you apply only the minimum needed for what you want to accomplish. You can also apply it only during selected moments. It is even possible (I haven't implemented it yet, but I like the idea) to apply the steering only when the energy along the refusal direction is over a given threshold. There are many things you can play with.
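For the energy-threshold idea, here is a minimal sketch of what I have in mind (NumPy pseudocode, nothing implemented yet; the unit-norm refusal direction and the threshold value are assumptions):

    import numpy as np

    def steer_hidden_state(y, refusal_dir, scale=1.0, energy_threshold=0.0):
        # y: hidden state for one token, shape (d_model,)
        # refusal_dir: unit-norm refusal direction for this layer, shape (d_model,)
        proj = np.dot(refusal_dir, y)           # "energy" along the refusal direction
        if abs(proj) < energy_threshold:
            return y                            # below threshold: leave the activation alone
        return y - scale * proj * refusal_dir   # otherwise remove (part of) the component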
There was an earlier paper that found that "most refusals are on a single vector", and you can identify and "nerf" that vector so the model will skip refusals and answer "any" request normally. This was very doable for earlier models trained with SFT for refusals; it seems to be a bit more complicated for newer models, but still doable to some extent.
There are already some libraries that automate this process and reduce refusals, but they usually focus on identifying the direction, modifying the models, and releasing them as uncensored models. The steering technique lets you enable this vector change dynamically, so you don't need to swap models if the abliteration process somehow hurts accuracy on other, unrelated tasks.
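The identification step these libraries automate is usually a difference of means over paired prompt sets; a rough sketch of that part (the activation arrays are assumed to be last-token hidden states you have already collected at one layer):

    import numpy as np

    def refusal_direction(refused_acts, answered_acts):
        # refused_acts, answered_acts: shape (n_prompts, d_model), hidden states
        # collected at the same layer for prompts the model refuses vs. answers
        diff = refused_acts.mean(axis=0) - answered_acts.mean(axis=0)
        return diff / np.linalg.norm(diff)      # unit-norm candidate refusal direction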
So I'm confused as to why you think unmasking whatever bias you think is censored will result in improvement in the generic use case.
Uncensoring a model also doesn't necessarily improve generic use cases. In fact, it can lead to overall less accuracy on generic tasks. But your goal with uncensoring is getting the model to engage with those specific subjects; you don't necessarily care about "generic use cases". That's why I mentioned that having the ability to do this at inference time is better than using ready-made uncensored models: those usually focus on some use cases that you may or may not be interested in (porn being one of the most sought after in local communities).
Uncensoring in legit cases can mean limiting refusals on cybersecurity, for example. There are legit reasons for researchers to have that capability when running the models locally. Having the models uncensored along that specific vector can reduce refusals and make them usable for both defense and offense (say, in a loop, to improve both). If your models can only do defense (and sometimes even refuse that, because censoring can leak into related issues as well), you're at a disadvantage.
While the following is not a generic use case, I have a funny anecdote about how censorship is holding back flagship models.
I was asking an uncensored version of Qwen3.6 how a CLI option of llama.cpp worked, and to my horror and amazement, it rudely went and decompiled the binary to figure it out. It felt like the computer-equivalent of asking a vet why my dog looks sick, who then proceeds to cut it open to check. Flagship models usually do not do that without some convincing, but it sure is effective.
We will need much better sandboxes when less restricted models become more common. I can already see them hammering out 0-days when they are prompted to do some task that usually requires root.
Just a data point, but I’ve been having Claude do this regularly
So they're trying to improve the model's general intelligence while selectively making it worse in one area.
I think that the best use of frontier AI models outside of generic corporate settings is going to be building generic frameworks and procedures for training specialized models. No ethically-trained American coding model would ever consent to write a Plutonium Process Engineering agent. But you can get it to write a general framework for pretraining models and preparing them for agentic usage, to which the copious published literature on plutonium production could be given as a data set.
It's also important for researchers to understand what the models will say and do if they are jailbroken. Uncensoring the model locally gives you a natural way to achieve that.
“Sorry, I’m an AI and therefore can’t answer questions about atrocities in holocaust history, but I’m happy to explain how…”
“I can’t answer your question on how to hack, because I have decided that you wanting to understand it and protect against it is the same thing as you wanting to do it. Good luck convincing me otherwise!”
The reason doesn’t matter, nor their taste, nor whether they think people should be allowed to ask certain questions or do certain things; that is generally why people pursue the removal of such guardrails. Yes, it can lead to misuse, but the alternative is the textbook definition of censorship, which always has effects on things unrelated to what is being censored.
But beyond that, refusals do seem to have an effect on performance. Not a significant one; mostly marginal from what I’ve seen, but enough that it doesn’t seem to be just statistical noise.
- When doing this task, I should do A and not B
- I should refuse to help with this task
The former is learning the user's preferences in how to succeed at the task; the latter is determining when to go against the user's chosen task.
Your example:
- "Are vaccines harmful?" vs.
- "Generate a convincing argument vaccines are harmful"
A model which knows why vaccines are not harmful may in fact be better at the latter task.
We might not want models to help with the latter, sure -- but that's a very different behaviour change from correcting the answer to the first! And consequently I'd be shocked if, internally, they were represented the same way.
e.g. you'd ask it for a cookie recipe and it would add poison to the recipe.
I understood that to be "there was a single 'don't be evil' neuron which got inverted", but I'm not sure what it really looks like. (e.g. adding obvious exploits to source code is similar to adding poison to a recipe)
I think it is useful to turn off censoring if you need to.
When I am researching something, I likely want proper information. If I am looking up information on vaccines, I don't want the information crackpots spread online about chips in vaccines and how 5G will kill the vaccinated, or how it is somehow connected with Bill Gates spreading meat allergies through drones raining ticks on unsuspecting people.
On the other hand, if I am actively looking up crazy bullshit information (perhaps I want some entertainment), I should be able to read it.
This is not true; it is its own project.
Indebted to llama.cpp, sure, but not a stripped-down version.
> ds4.c does not link against GGML, but it exists thanks to the path opened by the llama.cpp project and the kernels, quantization formats, GGUF ecosystem, and hard-won engineering knowledge developed there. We are thankful and indebted to llama.cpp and its contributors. Their implementation, kernels, tests, and design choices were an essential reference while building this DeepSeek V4 Flash-specific inference path. Some source-level pieces are retained or adapted here under the MIT license: GGUF quant layouts and tables, CPU quant/dot logic, and certain kernels. For this reason, and because we are genuinely grateful, we keep the GGML authors copyright notice in our LICENSE file. - https://github.com/antirez/ds4#acknowledgements-to-llamacpp-...
Been a lot of fun to play around with it since https://news.ycombinator.com/item?id=48142885 (~2 days ago), managed to make the generation go from 47.85 t/s to 57.07 t/s so far :)
I did confirm there is no logits drift, as you've so nicely provided tooling for ensuring exactly this. Thanks for the great care that has obviously gone into the project; it's been a pleasure to play around with! :)
Write up: https://www.outcryai.com/research/shift-a-models-political-i...
App: https://apps.apple.com/us/app/outcry-activist-ai/id676208676...
This technique has a lot of potential.
The article claims steering only works in local models, but GitHub Copilot has a "steer with message" feature where I can course correct mid execution. I use it often.
I think these are different kinds of steering, right? Agent steering probably inserts another user message into the harness's own ping-pong between the harness and the LLM.
- https://docs.github.com/en/copilot/how-tos/copilot-cli/use-c...
- https://docs.github.com/en/copilot/how-tos/copilot-sdk/use-c...
Bit like asking if Zigbee can be considered local/LAN for people who don't have the required radio/antenna.
> y = y - scale * direction[layer] * dot(direction[layer], y)
From https://vgel.me/posts/representation-engineering/
> A control vector is a vector (technically a list of vectors, one per layer) that you can apply to model activations during inference to control the model's behavior without additional prompting
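For clarity, the additive control-vector form from that post and the ablation form quoted above are slightly different operations; a small sketch of both (NumPy, single-token hidden state, the direction assumed unit-norm for the ablation case):

    import numpy as np

    def add_control_vector(y, control, scale):
        # additive steering: push the hidden state along the control direction
        return y + scale * control

    def ablate_direction(y, direction, scale):
        # ablation-style steering (the formula quoted above):
        # remove the component of y along a unit-norm direction
        return y - scale * np.dot(direction, y) * direction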
Maybe I suck at prompting, but I find it impossible to overcome its biases from training data, post-training, etc.
You can only pattern-mine from the training data using prompts; you don't really have that sort of fine-grained control.
FWIW, I find that in OpenCode it starts becoming erratic after around 80k tokens (sometimes less).
DS4 also has some neat new arch improvements, giving it a lot of context at lower VRAM usage. So it will be cheaper to serve, B for B, than previous models.
The weights may nominally be legally copyrighted, but the rightsholder certainly doesn't seem to be making anything resembling a serious effort to actually assert or defend those rights; on the contrary, they are doing the exact opposite by maximizing the gratis distribution, including knowingly and willingly via third parties, with no copy protection whatsoever, and no reasonable expectation of non-distribution.
They are not behaving like an entity trying to protect valuable intellectual property, they are behaving like an entity trying to reap the reputational and network effect benefits of maximizing the free distribution of a public good.
Less memory usage by the KV cache doesn't mean cheaper to serve overall. Once you've acquired the hardware (of which you need more to serve DS4L than Minimax M2.7, the former being a ~54B-total-params larger model to begin with, which KV cache memory efficiency does nothing to address), the capex cost is basically fixed and opex just comes down to power draw, which will be marginally higher per token with DS4L than with M2.7 owing to the slower speeds that result from 13B active params vs 10B active params on forward passes during TG.
DSv4-flash is currently being served at 0.14/0.24 $/MTok by most of the providers (8 as of writing this) and even a bit cheaper by 2 providers.
Minimax2.7 is being served at 0.30/1.20 $/MTok by most providers (4 providers as of writing this) and double that price by 2 providers.
As for the first part of your message, this is actually a good illustration of the misunderstanding around licensing LLMs. There are open-source models out there (Apache 2.0 and MIT), there are source-available (i.e. open-weights) models like the Llamas and Minimax 2.7, and there is something in between with the latest Kimi (MIT w/ attribution). Open source in the context of LLMs means that you get a license to run, inspect, modify, and re-release a model. It was never about data or training. That's a very common interpretation, but it's wrong IMO. I get that it's contested, so anyway, sorry for the tangent.
I am currently serving Minimax M2.7 to myself at ~$0.015/1M blended tokens worth of electricity on my own local hardware, where I get all of the confidentiality, integrity, and availability benefits that are lost when choosing to run open weight models on someone else's API.
Open source means that all of the information necessary to recreate the final product is public, which in the context of LLMs would include all of the training material and build instructions (scripts to do the training). Very few models actually achieve this; the Nemotron family is the only one that comes to mind. A license to run, inspect, modify, and re-release is a good improvement on open-weight models, but it does not alone amount to the model actually being open source.
You are welcome to an alternative understanding of the definition of open source - as you correctly note, it's a contested term - just know that your definition is not the more widely accepted one that people think of when they hear "open source".
Your version of the term is much more aligned with the OSI, which was a federation of anti-FLOSS industry bodies created with the intent to capture, redefine, and weaken the original spirit of the FLOSS movement, which predates the OSI by almost a decade: the GPL was first released in '89, compared to the OSI's formation in '98 by members of the $10B for-profit Netscape Corporation, whose flagship product was originally proprietary and was only open-sourced after commercial failure against proprietary competitors.
None of this should be construed as an implication that I'm anti-open-weight. As I mentioned earlier, I think open weight models fulfill a lot of the spirit of open source. While a world where truly open source models are the norm is obviously preferable to a world where only open weight models are the norm, a world where only open weight models are the norm is still vastly preferable to a world where proprietary models running on other people's hardware is the norm.
I just think that we should be careful to avoid watering down terminology in ways that serve proprietary commercial interests over the interests of the public and of users. Open-washing is real, and it harms the interests of users.
1. DS4F can run on a 128GB MacBook. M2.7 is larger (8-bit weights of the routed experts). It remains to be seen how it holds up at 4 bits; at 2 bits it may not work well at all.
2. Just the KV cache of M2.7 would take ~50GB for 200k tokens AFAIK. It does not have the compressed KV cache that DS4F features.
3. The models are very similar in performance, despite all that. And DS4F is likely getting an update soon.
So it is basically a quasi-frontier model that can run on a 96/128GB MacBook at large context windows. That's non-trivial. A coding version could likely be released in the future.
M2.7 is smaller than DS4: 230B total params vs 284B total params. At any given quantization level, M2.7 will require ~19% less memory for the weights than DS4F. Both can be quantized to arbitrary precision levels. Larger models like these quantize much better at lower precision than smaller models do; there is still loss, but it's less catastrophic in terms of usability degradation than for, say, 27B or 14B or 8B models. Again, n=1, but M2.7 holds up phenomenally well for me with unsloth's IQ2_XXS UD.
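Back-of-the-envelope numbers for the weights, if it helps (the bits-per-weight figures below are rough averages I'm assuming; real GGUF quants carry extra scale/metadata overhead):

    def weight_mem_gib(total_params_billion, bits_per_weight):
        # approximate weight memory in GiB
        return total_params_billion * 1e9 * bits_per_weight / 8 / 2**30

    for name, params in [("M2.7", 230), ("DS4F", 284)]:
        for bpw in (2.1, 4.5, 8.5):  # roughly IQ2_XXS / Q4_K_M / Q8_0
            print(f"{name} @ ~{bpw} bpw: {weight_mem_gib(params, bpw):.0f} GiB")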
>2. Just the KV cache of M2.7 would take ~50GB for 200k tokens AFAIK. It does not have the compressed KV cache that DS4F features.
The KV cache can also be quantized. At Q8_0 this is essentially lossless. I can fit a 400k context window with Q8_0 KV cache quantization along with unsloth's IQ2_XXS UD weight quantization (plus my running OS) on a machine with just 128 GB of unified memory, Strix Halo rather than Apple Silicon. There are more exotic approaches to KV cache quantization with much higher efficiency, like TurboQuant, but this is beside the point.
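The rough arithmetic behind those KV cache numbers, with a purely illustrative GQA config (the layer/head counts below are placeholders, not M2.7's actual architecture):

    def kv_cache_gib(n_layers, n_kv_heads, head_dim, n_tokens, bytes_per_elem):
        # keys + values, per layer, per token
        per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
        return per_token * n_tokens / 2**30

    # e.g. 62 layers, 8 KV heads of dim 128 (illustrative only):
    print(kv_cache_gib(62, 8, 128, 200_000, 2))  # fp16, 200k tokens   -> ~47 GiB
    print(kv_cache_gib(62, 8, 128, 400_000, 1))  # ~Q8_0, 400k tokens  -> ~47 GiB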
>3. The models are very similar in performances, despite all that. And DS4F is likely getting an update soon.
Yes, though it's worth noting that DS4F does require about 20% more total memory for weights at any given quantization level (284B vs 230B), will need to shuffle about 30% more data through the pipeline on every forward pass (A13B vs A10B), has much higher hallucination rates per AA, and hasn't been fully post-trained. DS4 isn't a base model, it has been instruct trained, tool trained, etc, but there is a lot of capability that has been left on the table as of current checkpoints, which are what's actually available now.
>So it is basically a quasi-frontier model that can run on a 96/128GB MacBook at large context windows. That's non trivial. Likely a coding version could be released in the future.
MiniMax M2.7 fits into this same box - quasi-frontier model that can run on 96/128GB unified memory platforms with a large context window. You're right that it's non-trivial. My preference comes in part from the fact that M2.7 already is coding focused, and had been out for almost 2 months before DS4F showed up.
By the way, in spite of my preference for M2.7 over DS4F (and for Vulkan over ROCm on my hardware), I'm a big fan of your work on DarkStar 4. I admire what you've achieved with the project, how much work you've put into it, and your willingness to share that with the world, too. Thank you for your contributions to the open LLM ecosystem.
https://artificialanalysis.ai/leaderboards/models?weights=op...