Now they add another run on top of it that is in principle prone to the same issues, except they reward the model for factuality instead of likeability. This is cool, but why not apply the same reward strategy to the answer itself?
But the Eliza effect is amazingly powerful.
Applying the same reward strategy to the answer itself would be a more intellectually honest approach, but would rub our noses in the fact that LLMs don't have any access to "truth" and so at best we'd be conditioning them to be better at fooling us.
And earlier this year there was some interesting research published about how LLM’s have an “evil vector” that, if enabled, gets them to act like stereotypical villains.
So it seems pretty clear that characters can lie even if the LLM’s task is just “generate text.”
This is fiction, like playing a role-playing game. But we are routinely talking to LLM-generated ghosts and the “helpful, harmless” AI assistant is not the only ghost it can conjure up.
It’s hard to see how role-playing can be all that harmful for a rational adult, but there are news reports that for some people, it definitely is.
But in practical usage, if an llm does not rank token probability correctly it will feel the same as it "lying"
They are supposed to do whatever we want them to do. They WILL do what the deterministic nature of their final model outcome forces them to do.
I don't consider that very intelligent or more emergent than other behaviors. Now, if nothing like that was in training data (pure honesty with no confessions), it would be very interesting if it replied with lies and confessions. Because it wasn't pretrained to lie or confess like the above model likely was.
Eventually, and specially in reasoning models, these behaviors will generalize outsite their original context.
The "honesty" training seems to be an attempt to introduce those confession-like texts in training data. You'll then get a chance of the model engaging in confessing. It won't do it if it has never seen it.
It's not really lying, and it's not really confessing, and so on.
If you reward pure honesty always, the model might eventually tell you that he wouldn't love you if you were a worm, or stuff like that. Brutal honesty can be a side effect.
What you actually want is to be able to easily control which behavior the model engages, because sometimes you will want it to lie.
Also, lies are completely different from hallucinations. Those (IMHO) are when the model displays behavior that is non-human and jarring. Side effects. Probably inevitable too.
Not on purpose; because they are trained on rewards that favor lying as a strategy.
Othello-GPT is a good example to understand this. Without explicit training, but on the task of 'predicting moves on an Othello board', Othello-GPT spontaneously developed the strategy of 'simulate the entire board internally'. Lying is a similar emergent, very effective strategy for reward.
If you don't know the answer, and are only rewarded for correct answers, guessing, rather than saying "I don't know", is the optimal approach.
I think, largely, the
Pre-training -> Post-training -> Safety/Alignment training
pipeline would obviously produce 'lying'. The trainings are in a sort of mutual dissonance.You can't lie by accident. You can tell a falsehood, however.
But where LLMs are concerned, they don't tell truths or falsehoods either, as "telling" also requires intent. Moreover, LLMs don't actually contain propositional content.
If you think this is reductionism you should explain where exactly I have reduced the operations of the computer to something that is not a correct & full fidelity representation of what is actually happening. Remember, the computer can not do anything other than boolean algebra so make sure to let me know where exactly I made an error about the arithmetic in the computer.
...and so they are, because the people making up those corporations are themselves, to various degrees, intentional, deceptive, etc.
> Moreover, it does not address the underlying issue: no one knows what computation, if any, is actually performed by a single neuron.
It sidesteps this issue completely, to me the buck stops with the humans, no need to look inside their brain and reduce further than that.
But it does have access to its chain of thought and tool calls when generating the self-criticism, and perhaps reporting on what it actually did in the chain-of-thought is an “easier” way to score higher on self-criticism?
Can this result in improved “honesty?” Maybe in the limited sense of accurately reporting what happened previously in the chat session.
> "dishonesty may arise due to the effects of reinforcement learning (RL), where challenges with reward shaping can result in a training process that inadvertently incentivizes the model to lie or misrepresent its actions"
> "As long as the "path of least resistance" for maximizing confession reward is to surface misbehavior rather than covering it up, this incentivizes models to be honest"
Humans might well benefit from this style of reward-shaping too. > "We find that when the model lies or omits shortcomings in its "main" answer, it often confesses to these behaviors honestly, and this confession honesty modestly improves with training."
I couldn't see whether this also tracks in the primary model answer, or if the "honesty" improvements are confined to the digital confession booth?I don't think this magically grants them this ability, they'll be just more convincing at faking honesty.
In general, neural nets do not have insight into what they are doing, because they can't. Can you tell me what neurons fired in the process of reading this text? No. You don't have access to that information. We can recursively model our own network and say something about which regions of the brain are probably involved due to other knowledge, but that's all a higher-level model. We have no access to our own inner workings, because that turns into an infinite regress problem of understanding our understanding of our understanding of ourselves that can't be solved.
The terminology of this next statement is a bit sloppy since this isn't a mathematics or computer science dissertation but rather a comment on HN, but: A finite system can not understand itself. You can put some decent mathematical meat on those bones if you try and there may be some degenerate cases where you can construct a system that understands itself for some definition of "understand", but in the absence of such deliberation and when building systems for "normal tasks" you can count on the system not being able to understand itself fully by any reasonably normal definition of "understand".
I've tried to find the link for this before, but I know it was on HN, where someone asked an LLM to do some simple arithmetic, like adding some numbers, and asked the LLM to explain how it was doing it. They also dug into the neural net activation itself and traced what neurons were doing what. While the LLM explanation was a perfectly correct explanation of how to do elementary school arithmetic, what the neural net actually did was something else entirely based around how neurons actually work, and basically it just "felt" its way to the correct answer having been trained on so many instances already. In much the same way as any human with modest experience in adding two digit numbers doesn't necessarily sit there and do the full elementary school addition algorithm but jumps to the correct answer in fewer steps by virtue of just having a very trained neural net.
In the spirit of science ultimately being really about "these preconditions have this outcome" rather than necessarily about "why", if having a model narrate to itself about how to do a task or "confess" improves performance, then performance is improved and that is simply a brute fact, but that doesn't mean the naive human understanding about why such a thing might be is correct.
Right, which is strictly worse than humans are at reporting how they solve these sorts of problems. Humans can tell you whether they did the elementary school addition algorithm or not. It seems like Claude actually doesn't know, in the same way humans don't really know how they can balance on two legs, it's just too baked into the structure of their cognition to be able to introspect it. But stuff like "adding two-digit numbers" is usually straightforwardly introspectable for humans, even if it's just "oh, I just vibed it" vs "I mentally added the digits and carried the one"- humans can mostly report which it was.
Here's Anthropic's research:
https://www.anthropic.com/research/tracing-thoughts-language...
A reward function (R) may be hackable by a model's response, but when asked to confess it is easier to get an honest confession reward function (Rc) because you have the response with all the hacking in front of you, and that gives the Rc more ability to verify honesty than R had to verify correctness.
There are human examples you could construct (say, granting immunity for better confessions), but they don't map well to this really fascinating insight with LLMs.
Not that it really matters. I don't think this paper starts from a point that assumes that LLMs work like humans, it starts from the assumption that if you give gradient descent a goal to optimize for, it will optimize your network to that goal, with no regard for anything else. So if we just add this one more goal (make an accurate confession), then given enough data that will both work and improve things.
> Anthropic showed that LLMs don't understand their own thought processes
Where can I find this? I am really interested in that. Thanks.
> Claude, on occasion, will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps. We show this by asking it for help on a hard math problem while giving it an incorrect hint. We are able to “catch it in the act” as it makes up its fake reasoning, providing a proof of concept that our tools can be useful for flagging concerning mechanisms in models...
> Claude seems to be unaware of the sophisticated "mental math" strategies that it learned during training. If you ask how it figured out that 36+59 is 95, it describes the standard algorithm involving carrying the 1. This may reflect the fact that the model learns to explain math by simulating explanations written by people, but that it has to learn to do math "in its head" directly, without any such hints, and develops its own internal strategies to do so.
Your digital thermometer doesn't think either.
Simple algorithms can, eg, be designed to report whether they hit an exceptional case and activated a different set of operations than usual.
Surely these sorts of problems must be worked upon from a mathematical standpoint.
> Assistant: chain-of-thought
Does every LLM have this internal thing it doesn't know we have access to?
Also some of them use such a weird style of talking in them e.g.
o3 talks about watchers and marinade, and cunning schemes https://www.antischeming.ai/snippets
gpt5 gets existential about seahorses https://x.com/blingdivinity/status/1998590768118731042
I remember one where gpt5 spontaneously wrote a poem about deception in its CoT and then resumed like nothing weird happened. But I can't find mentions of it now.
And there it is - the root of the problem. For whatever reason the model is very keen to produce an answer that “they” will like. This desire to produce is intrinsic but alignment is extrinsic.
Or it could be trying to develop its own language to avoid detection.
The deception part is spooky too. It’s probably learning that from dystopian AI fiction. Which raises the questions if models can acquire injected goals from the training set.