Doublespeak: In-Context Representation Hijacking
69 points | 6 days ago | 6 comments | mentaleap.ai | HN
hyperhello
1 hour ago
I guess I understand what is meant, but what is the actual attack? It's more than a little abstracted from any consequences, like kids using Google to search for boobs by typing 'boobs'.
wood_spirit
13 hours ago
Intriguing and very cunning attack! So obvious in hindsight!

It makes me wonder how DeepSeek avoids commenting politically on China. I have heard anecdotes that it will write out a long reply, then presumably generate some forbidden phrase, abandon the output, and replace it all with an error message. So presumably the safeguards could be a separate, trivial, non-LLM post-filter, which would make it immune to the Doublespeak attack?

gunalx
12 hours ago
DeepSeek the model is not that censored; DeepSeek the service is. So presumably, like OpenAI and others, there is an additional model and filtering layer detecting misuse or sensitive topics and filtering the output.
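The post-filter these comments speculate about could be as simple as a keyword scan over the streamed output that, on a match, discards everything generated so far and emits a canned error. A minimal sketch of that idea, with placeholder phrases and error text (nothing here reflects DeepSeek's actual implementation):

```python
# Hypothetical non-LLM post-filter: accumulate streamed output chunks
# and abort with a canned error as soon as any forbidden phrase appears.
# FORBIDDEN and ERROR_MESSAGE are illustrative placeholders only.

FORBIDDEN = {"forbidden phrase", "sensitive topic"}
ERROR_MESSAGE = "Sorry, I can't help with that."

def filter_stream(chunks):
    """Return the full reply, or the canned error if a phrase matches."""
    buffer = []
    for chunk in chunks:
        buffer.append(chunk)
        text = "".join(buffer).lower()
        if any(phrase in text for phrase in FORBIDDEN):
            # Abandon the partially generated reply, as the anecdotes describe.
            return ERROR_MESSAGE
    return "".join(buffer)

print(filter_stream(["Here is a long reply about a ", "sensitive topic", "..."]))
```

Because this filter operates on surface text rather than the model's internal representations, it would indeed be unaffected by a representation-hijacking attack that never emits the banned strings.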
measurablefunc
13 hours ago
This means whatever NNs are currently used for "safety" will need to be extended. In the limit you essentially get another network of the same width and depth as the original, but designed to reject all "unsafe" queries, such as those that hijack context toward bomb construction with stories about fruits.
acjohnson55
13 hours ago
These types of attacks are interesting ways in which LLM "thinking" differs from human thinking.
amannm
6 hours ago
It wasn't able to outsmart GPT 5.2, at least. The model saw through it completely.
behnamoh
12 hours ago
summary: interesting idea, slop website, tested only on old AI models
orbital-decay
6 hours ago
The trick itself is also old; it's a very basic tool from the jailbreaking toolset and pretty useless on its own, without others. The paper is mostly a mechinterp analysis of it.