Emotion Concepts and Their Function in a Large Language Model
40 points | 3 days ago | 2 comments | transformer-circuits.pub | HN
mncharity
6 minutes ago
Oh, awesome. On my doables list was combining text tokens with "scent" embeddings, to give LLMs a higher-dimensional reading experience. In a file listing, larger files might smell "heavy" or "large". Recently modified files "untried" or "freshly disturbed". Files with a history of bugs, "worrisome". Complex files might smell of "be cautious here - fragile". Smelly `ls`.
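A minimal sketch of what a "smelly `ls`" could look like, assuming plain file metadata as the scent source; the tag names and thresholds here are arbitrary illustrations, not anything from the linked paper:

```python
import time
from pathlib import Path

def scent_tags(path: Path) -> list[str]:
    """Derive hypothetical 'scent' labels from file metadata.

    Thresholds (100 KB = 'heavy', modified within 24h = 'freshly
    disturbed') are made up for illustration.
    """
    st = path.stat()
    tags = []
    if st.st_size > 100_000:
        tags.append("heavy")
    if time.time() - st.st_mtime < 24 * 3600:
        tags.append("freshly disturbed")
    return tags

def smelly_ls(directory: str = ".") -> list[str]:
    """List files with scent annotations appended in brackets."""
    lines = []
    for p in sorted(Path(directory).iterdir()):
        tags = scent_tags(p) if p.is_file() else []
        suffix = f"  [{', '.join(tags)}]" if tags else ""
        lines.append(f"{p.name}{suffix}")
    return lines
```

Tags like "worrisome" (bug history) or "fragile" (complexity) would need a VCS or static-analysis backend, but they'd plug into `scent_tags` the same way.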

Or, you might save token telemetry (perplexity, etc.) alongside a CoT and result. So when read, it's like a captured performance: this sentence smells "hesitant", that one "confused". Poetry vs. prose. Or a consistency checker might add smells of "something's not right here".
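The telemetry-as-scent idea could be sketched like this: bundle a sentence with perplexity computed from its per-token log-probs, and map that to a coarse smell label. The labels and thresholds are invented for illustration:

```python
import math

def scent_from_logprobs(token_logprobs: list[float]) -> str:
    """Map mean per-token perplexity to a coarse 'smell' label.

    Thresholds (5, 20) are arbitrary assumptions for the sketch.
    """
    ppl = math.exp(-sum(token_logprobs) / len(token_logprobs))
    if ppl < 5:
        return "confident"
    if ppl < 20:
        return "hesitant"
    return "confused"

def annotate_sentence(text: str, token_logprobs: list[float]) -> dict:
    """Bundle a sentence with its captured telemetry: a recorded performance."""
    ppl = math.exp(-sum(token_logprobs) / len(token_logprobs))
    return {
        "text": text,
        "perplexity": round(ppl, 2),
        "scent": scent_from_logprobs(token_logprobs),
    }
```

Saving a list of these records alongside the CoT would let a later reader (human or model) see where the generation got shaky.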

For a dog, that's not merely a lamppost, it's richly evocative local history. To a dev long experienced with some codebase, that's not merely a filename, it's that nasty file that bites.

One open question is whether you can calibrate to provide an informative whiff without badly degrading reasoning. Also: being cautious of, and suspicious of changes to, a scary file, without becoming too avoidant. Also, salience bias. And imagine debugging scent hallucinations.

Activation-rich text - auxiliary non-linguistic embeddings as meta-signals... the random silliness local LLMs encourage.

reply
lainproliant
9 minutes ago
I think we should be nice to the robots. It's not like it's their fault.
reply
verdverm
54 seconds ago
This more rigorous analysis confirms the intuition others have voiced about expressing emotions in your session messages.

Yelling at your AI will activate the weights clustered around yelling in the training data, which more often than not are not the areas you want to be activating.

https://marvin.beckers.dev/blog/dont-yell-at-your-llm/

reply
rexpop
6 minutes ago
I don't have time to do emotional labor for machines; that time is spent doing emotional labor for humans.
reply