AI fakes duel over impeachment of Vice-President in Philippines
50 points
5 hours ago
| 9 comments
| factcheck.afp.com
anigbrowl
4 hours ago
[-]
It's the 'authenticity' issue of generative AI that troubles me rather than the content of the viewpoints.

If these same ideas were expressed by VTubers (virtual YouTubers: anime-like avatars for people who want to do to-camera video but are shy or protective of their privacy), it would not be troubling. Everyone understands that fictionalized characters are a form of puppetry and can focus on the content of the argument.

But using generative video to simulate ordinary people expressing those ideas is a way of hijacking people's neural responses. Just pick the demographic you wish to micro-target (young/middle/old, working/middle/upper class, etc. etc. etc.) and generate attractive-looking exemplars for messages you want to promote and ugly-looking exemplars for those you wish to discredit.

reply
AlienRobot
4 hours ago
[-]
I didn't have "I generated myself as the chad and you as the soyjak" turning out to be a valid psyop strat in my AI tech doomerism bingo.
reply
johnfn
4 hours ago
[-]
I wonder what recipients would think if I sent this sentence 10 years back via time machine.
reply
Dylan16807
3 hours ago
[-]
The basic framing of the sentence works fine if you know what those terms are. And while "soyjak" is a bit newer, "chad", "soy boy" and "wojak" were well established at that point, so I think it would be easy enough to figure out.
reply
madaxe_again
3 hours ago
[-]
Send it back a century and it’d even be “what’s a bingo card?”
reply
kylestanfield
5 hours ago
[-]
Realistic AI video generation will put us firmly in a post-truth era. If those videos were 25% more realistic they would be indistinguishable from a real interview.

The speed with which you’ll be able to create and disseminate propaganda is mind blowing. Imagine what a 3 letter agency could do with their level of resources.

reply
a_wild_dandan
3 hours ago
[-]
Content authenticity follows from the source's credibility. This is why chains of evidence are crucial. If squinting at pixels were the harbinger of a post-truth reality, then it's already many decades late.
reply
pkoird
3 hours ago
[-]
Good luck telling that to aging grandparents and people in third-world countries who are just getting acquainted with the internet.
reply
dingnuts
2 hours ago
[-]
even in first world countries grandparents are reading the print version of The Epoch Times lol
reply
JumpCrisscross
3 hours ago
[-]
> speed with which you’ll be able to create and disseminate propaganda is mind blowing

The problem isn’t the fakery, it’s this speed of dissemination on algorithmic social media. It’s increasingly looking like the modern West’s Roman lead pipes.

reply
andrepd
3 hours ago
[-]
Yet people on this very website are 200% hyped about GenAI because it makes it easier to generate slop frontend code or whatever.
reply
phkahler
3 hours ago
[-]
>> Realistic AI video generation will put us firmly in a post-truth era.

Verifiable authenticity might just be the next big thing.

reply
eloisius
1 hour ago
[-]
It’s already a thing in some cameras. New Leicas will embed a cryptographic signature in the photo.
reply
madaxe_again
3 hours ago
[-]
People don’t want authenticity. They want confirmation of what they already think.
reply
wizzwizz4
2 hours ago
[-]
People want authentic confirmation of what they already think, which makes it easy to give them the confirmation and lie to them about the authenticity.

If they can't have both, some people prefer confirmation to authenticity. But that's far from a universal preference.

reply
baxtr
4 hours ago
[-]
Text has been mass-fakeable since Gutenberg; photographs, for decades at least.

What makes you think fake videos will have an outsized impact?

reply
ofjcihen
3 hours ago
[-]
Text - “we have no idea if they actually said that”

Picture - “this could be out of context” (this is used constantly in politics and people fall for it anyway)

Video removes the question of context and of whether the person actually did it. So now, instead of writing about it or showing an awkward picture from an unflattering angle, I can generate a video of your favorite politician taking upskirt photos on a city bus.

As the tech gets more and more realistic, we're increasingly straining the average person's ability to maintain presence of mind and ask questions.

reply
baxtr
3 hours ago
[-]
Yes! And:

Video - “There’s no way to tell whether this is AI-faked or not.”

reply
ofjcihen
3 hours ago
[-]
Ha, I’d love to share the optimism that that’s what most people would say, but what we’ve seen is the opposite: the more convincing the medium (the format, that is), the more people are willing to believe it.

Don’t get me wrong. I hope this level of fake media causes people to stop taking things at face value and dig into the facts but unfortunately it seems we’re just getting worse at it.

reply
heylook
3 hours ago
[-]
A picture is worth a thousand words.
reply
rightbyte
4 hours ago
[-]
I don't agree. 3 letter agencies have been able to fake videos since the inception of videos. Even more so with CGI.

It has always been about trust in the authors.

The main difference is that petty fakes become cheap. E.g., my wife could be shown a fake portraying me for whatever malicious reason.

reply
heylook
3 hours ago
[-]
You're being obtuse. There's an obvious difference between "state-level actors can produce misleading films" and "anyone with an internet connection and 5 minutes can make anything they want".
reply
rightbyte
2 hours ago
[-]
The post I responded to wrote "Imagine what a 3 letter agency could do with their level of resources" and I don't think much changed in that regard.
reply
krapp
3 hours ago
[-]
Not in terms of effect. This might have been a gamechanger 20 years ago but nowadays people already trust TikTok memes more than they trust CNN. The bar for credibility is so low that this sort of thing is almost trying too hard.
reply
ofjcihen
3 hours ago
[-]
“People” aren’t a monolith. Certain people are definitely falling for low effort TikTok trash but now more people will fall for these more “credible” fakes.
reply
rightbyte
47 minutes ago
[-]
I think it will be like those "X celebrity is dead" fake articles that went viral on Facebook sometime in the 2010s. People, as in enough people to make gossip, will only get fooled 3 or 4 times.
reply
hcarvalhoalves
3 hours ago
[-]
The Philippines has a history of foreign influence on its local politics. It wouldn't be crazy to expect this is just the latest chapter, with the three-letter agencies using it as a laboratory.
reply
dyauspitr
3 hours ago
[-]
To be honest, I consider myself pretty savvy when it comes to identifying fakes, and the schoolboy ones would have fooled me. The videos were nearly flawless, and the accent and lip sync were spot on. I don’t even think you truly need that extra 25% for the casual observer.
reply
patrakov
4 hours ago
[-]
No, it won't.

Expected reaction: every camera manufacturer will embed chips that hold a private key used to sign and/or watermark photos and videos, thus attesting that the raw footage came from a real camera.
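
In sketch form, assuming a device-held Ed25519 key and Python's cryptography package (the function names are mine, purely illustrative):

  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  # In a real camera this key would be burned into a secure element and never leave it.
  device_key = Ed25519PrivateKey.generate()

  def sign_footage(raw_bytes: bytes) -> bytes:
      # Attests: "these exact bytes came off this device's sensor".
      return device_key.sign(raw_bytes)

  def verify_footage(raw_bytes: bytes, signature: bytes) -> bool:
      try:
          device_key.public_key().verify(signature, raw_bytes)
          return True
      except InvalidSignature:
          return False

Any alteration of the bytes breaks verification, which is both the feature and the usability problem: legitimate re-encoding or cropping breaks it too.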

Now it only remains to solve the analog hole problem.

reply
ghushn3
4 hours ago
[-]
I don't think that's as trivial and invulnerable as you think it is. You are talking about a key that exists on a device, but cannot be extracted from the device. It can be used to sign a high volume of data in a unique way such that the signature cannot be transferred to another video.

Now you have another problem -- a signature is unique to a device, presumably, so you've got the "I can authenticate that this camera took this photo" problem, which is great if you are validating a press agency's photographs but TERRIBLE if you are covering a protest or human rights violation.

reply
hansvm
3 hours ago
[-]
- Attach a screen to the camera. Bonus points for bothering to calibrate that contraption.

- Watermarking is nearly useless as a way of conveying that information, either visibly distorting the image or being sensitive to all manner of normal alterations like cropping, lightness adjustments, and screenshotting (see the toy example after this list).

- New file formats are hard to push to a wide enough audience for this to have the desired effect. If half the real images you see aren't signed, ignoring the signature becomes second-nature.

- Hardware keys can always be extracted in O(N) for an N-bit key. The constant factor is large, but not enough to deter a well-funded adversary. The ability to convincingly fake, e.g., video proof you weren't at a crime scene would be valuable in a hurry. I don't know the limits, but it's more than the 2-10 million dollars you need to extract a key.

- You mentioned the analog hole problem, but that's also very real. If the sensor is engineered as a separate unit, it's trivial to sign whatever data you want. That's hard to work around because camera sensors are big and crude, so integrating a non-removable crypto enclave onto one is already a nontrivial engineering challenge.

- If this doesn't function something like TLS with certificate transparency logs and chains of trust then one compromised key from any manufacturer kills the whole thing. Would the US even trust Chinese-signed images? Vice versa? The government you obey has a lot of power to steal that secret without the outside world knowing.

- Even if you do have CT logs and trust the company publishing to them to not publish compromised certs, a breach is much worse than for something like TLS. People's devices are effectively just bricked (going back to that 3rd point -- if all the images you personally take aren't appropriately signed, will a lack of signing seem like a big deal?). If you can update the secure enclave then an adversary can too, and if updating tries to protect itself by, e.g., only sending signed bytecode then you still have the problem that the upstream key is (potentially) compromised.

- Everyone's current devices are immediately obsolete, which will kill adoption. If you grandfather the idea in, there's still a period of years where people get used to real images not being signed, and you still have a ton of wasted money and resources that'll get pushback.

Etc. It's really not an easy problem.
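
To make the watermark-fragility point concrete, a toy example in Python (a naive least-significant-bit mark, nothing like a production scheme): a one-step brightness adjustment wipes it out.

  import numpy as np

  rng = np.random.default_rng(0)
  image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
  mark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)

  # Embed one watermark bit in each pixel's least significant bit.
  watermarked = (image & 0xFE) | mark
  # A trivial edit: brighten every pixel by 1.
  brightened = np.clip(watermarked.astype(np.int16) + 1, 0, 255).astype(np.uint8)

  print(((watermarked & 1) == mark).mean())  # 1.0  -- mark fully recoverable
  print(((brightened & 1) == mark).mean())   # ~0.0 -- mark destroyed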

reply
AlotOfReading
3 hours ago
[-]
Persistent media watermarking through the analog hole is a solved problem and has been for years. It's standard practice on films.

What does it even mean that hardware keys are extractable in O(N) time? If there's some reasonable multiple of N where you can figure out a key, your cryptosystem is broken, physical or not.

It's also very straightforward to attach metadata to media and wouldn't take a format change.

reply
card_zero
2 hours ago
[-]
The problem would be spurious watermarks, not vanishing ones. Create fake video, point camera at screen, re-record it. Now it's fake and authenticated as a genuine camera recording.
reply
perching_aix
2 hours ago
[-]
> Persistent media watermarking through the analog hole is a solved problem and has been for years. It's standard practice on films.

Can you expand on that a bit? Wikipedia's coverage on this seems mostly historical and copy protection focused.

reply
AlotOfReading
1 hour ago
[-]
The basic idea is that you apply a very, very large amount of error correction to the tag and inject it into the media so that enough survives the severe geometric, color, and luminance distortions of a camcorder to recover the data out the other end. You then download the pirated cams and sue the theater.

There's a fair bit of public information out there on theoretical techniques (e.g. https://www.intechopen.com/chapters/71851), but I'm not deeply familiar with what's actually used in industry, for example by imatag.
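
The coding idea fits in a few lines, assuming nothing about what imatag or anyone else actually ships: with a crude repetition code, a tag survives a "channel" that flips 40% of the embedded bits.

  import random

  random.seed(1)
  REPEAT = 501     # each tag bit embedded 501 times across the frames
  FLIP_RATE = 0.4  # the camcorder "channel" corrupts 40% of embedded bits

  tag = [random.randint(0, 1) for _ in range(32)]  # e.g. a 32-bit theater ID
  embedded = [b for b in tag for _ in range(REPEAT)]
  received = [b ^ (random.random() < FLIP_RATE) for b in embedded]

  # Majority vote per tag bit recovers the original with high probability.
  decoded = [
      int(sum(received[i * REPEAT:(i + 1) * REPEAT]) > REPEAT // 2)
      for i in range(len(tag))
  ]
  print(decoded == tag)  # True

Real systems use far stronger codes and smarter embedding, but the tradeoff is the same: redundancy buys robustness against distortion.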

reply
perching_aix
13 minutes ago
[-]
Interesting, and the paper is surprisingly accessible to read as well, thanks.

One critique I can lodge against this: the security model seems to trust the venue not to tamper with the projection equipment. That may not map well to everyday camera recordings, where the camera owner / operator may have both a vested interest in tampering with the camera itself and the capability to do so.

reply
blibble
4 hours ago
[-]
a capable university student with their lab equipment could easily extract that key

I don't think the CIA will have any problems

reply
kylestanfield
4 hours ago
[-]
I hope you’re right
reply
dimal
4 hours ago
[-]
Does anyone else feel like at some point reality turned into an unpublished Neal Stephenson novel?
reply
billy99k
4 hours ago
[-]
Well, we could use AI to write one.
reply
madaxe_again
3 hours ago
[-]
Or a published PKD novel, for that matter.
reply
Rodeoclash
2 hours ago
[-]
Was chatting to someone last week who was using AI to help teach their kids. A Young Lady's Illustrated Primer.
reply
decimalenough
2 hours ago
[-]
This seems pretty... mild? It's short clips of AI-generated schoolboys raising a pretty reasonable if obviously still politically motivated argument.

From the headline, I was expecting VP candidates slapping each other in the face with a glove and then facing off at dawn with loaded pistols.

reply
ninjaa
4 hours ago
[-]
Lol, students fake stuff. These actor-less statements have got to go, for humanity's sake.

IMO all digital content is going to have to be signed so the provenance trail can be crawled by an AI across devices.

https://aditya-advani.medium.com/how-to-defeat-fake-news-wit...
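
As a sketch of what a crawlable provenance trail could look like (illustrative only, not the scheme from the link): each step appends a record binding the new content's hash to the previous record's hash, and in practice each record would be signed by whoever performed the step.

  import hashlib
  import json

  def append_record(prev_hash: str, action: str, content: bytes) -> dict:
      # Each record binds an action and a content hash to the previous record,
      # forming a tamper-evident chain from raw capture to published file.
      entry = {
          "prev": prev_hash,
          "action": action,
          "content_sha256": hashlib.sha256(content).hexdigest(),
      }
      entry["entry_hash"] = hashlib.sha256(
          json.dumps(entry, sort_keys=True).encode()
      ).hexdigest()
      return entry

  capture = append_record("", "capture", b"raw sensor bytes")  # camera signs this
  crop = append_record(capture["entry_hash"], "crop", b"edited bytes")  # editor signs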

reply
vik0
4 hours ago
[-]
AI fakes, and AI in general, will push more and more people to interact with each other in real life. I, for one, can't wait for that. Sometimes, the more things change, the more they stay the same
reply
ghushn3
4 hours ago
[-]
I don't know that this is true. A lot of people are getting sucked into "this is my AI friend/girlfriend/boyfriend/waifu/husbando" territory.

In real life, other humans are not machines you can put kindness tokens into and get sex out of. AI, on the other hand, you can put any tokens at all into and get sex out of. I'm worried that people will stop interacting with humans because it's harder.

Sure, the results from a human relationship are 10,000x higher quality, but they require you to be able to communicate. AI will do what it's told, and you can tell it to love you and it will.

for some values of "will".

reply
logicchains
4 hours ago
[-]
That problem will naturally sort itself out through the magic of evolution: genetic and cultural traits that increase the chance of pairing with AI will be bred out (as such people won't have children), and traits that reduce it will be selected for.
reply
perching_aix
2 hours ago
[-]
Implying that this is all somehow a genetic trait?

Which gene do you think encodes for having the hots for AI models?

You remind me of a report I saw on Taiwanese schoolchildren's career goals. Most reported aiming for the semiconductor industry. Crazy how the local gene pool works, what a coincidence.

reply
fullshark
3 hours ago
[-]
Sounds like a plan for a healthy social order
reply
ncpa-cpl
5 hours ago
[-]
It could be a plot line in a Black Mirror episode
reply
john2x
4 hours ago
[-]
Once again the Philippines is leading the way with how these things will play out for other countries in the future[1]

[1]: https://www.bbc.com/news/blogs-trending-38173842

reply
roughly
5 hours ago
[-]
Welcome to the future, y’all. It’s gonna be a wild decade.
reply