Probably not GDPR-compliant then if comments can be deanonymised by LLMs.
If someone can figure out who I am or what city I live in just by this username or my comments (with proof), I'll personally send you 500,000 JPY. I'm quite confident that's not going to happen though.
The paper referenced in the article does not even explain its exact testing methodology (such as the tools or exact prompts used) because the authors claim it would be misused for evil. In other words, "trust me bro."
Also see the previous discussion here: https://news.ycombinator.com/item?id=47139716
Not that I care, and it could be wildly off, but opsec is a broad term… and Claude one-shot that… so stay safe out there bro, AI is wild.
Unless I am misreading something. Take a look at surveillance capitalism to see what's possible right now. It's going to be 100x worse as LLMs become more advanced.
It's not the things you post online, it's the nuances behind the way you type, along with other behavioral signals, that allow them to build these kinds of profiles.
From what I can tell, the article/paper in question does not appear to utilize any of the techniques you mention, but I'd be interested to learn more about it.
> it's the nuances behind the way you type
I found this paper which talks about some of those methods.
https://www.audiolabs-erlangen.de/content/04_fraunhofer/assi...
For example the "Text" section on page 91.
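To give a concrete feel for the kind of signal stylometry picks up on, here is a minimal sketch in Python (purely illustrative: the function-word list and sample texts are made up, and this is not the method from the paper or the Fraunhofer reference). It compares two texts by their function-word frequency profiles, a classic feature in authorship attribution.

```python
# Minimal stylometry sketch (illustrative only, not any paper's actual method).
# Compares texts by their function-word frequency profiles, a classic
# authorship-attribution signal: content words change with topic, but
# function-word usage tends to stay stable per author.
import math
import re
from collections import Counter

# Tiny hypothetical function-word list; real systems use hundreds of features.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "was", "it", "for", "on", "with", "as", "but"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a, b):
    """Cosine similarity between two frequency vectors (0.0 to 1.0 here)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Different topics, same function-word habits -> high similarity score.
known = "The cat sat on the mat, and it was happy with that."
unknown = "The dog lay on the rug, and it was pleased with that."
similarity = cosine(profile(known), profile(unknown))
print(round(similarity, 3))
```

Real attribution systems combine many more signals (character n-grams, punctuation habits, sentence-length distributions), but the principle is the same: habits you don't consciously control form a fingerprint.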