Grok turns off image generator for most after outcry over sexualised AI imagery
74 points | 17 hours ago | 10 comments | theguardian.com | HN
dragonwriter
7 hours ago
[-]
More accurate: “After free demo proves demand (in the worst possible way), Grok makes image generation and editing a paid-only feature”.
reply
pjc50
15 hours ago
[-]
Presumably in response to https://www.telegraph.co.uk/business/2026/01/08/musks-x-coul... and others. I've seen a claim that Spain is referring X for prosecution over this as well.

It's just been restricted to paying customers, and that decision could be driven as much by cost as by outrage.

Edit: may also be linked to people making deepfakes of Renee Good, the woman murdered by US authorities in Minneapolis.

reply
richsouth
14 hours ago
[-]
So only PAYING customers can make CSAM and distribute it openly. Nice one.
reply
xiphias2
9 hours ago
[-]
No one can, but it's much easier to verify and prosecute people using credit cards (especially as credit card companies take it very seriously as well).
reply
rsynnott
13 hours ago
[-]
The dreaded bluetick becomes a shade ickier.
reply
rsynnott
13 hours ago
[-]
> Edit: may also be linked to people making deepfakes of Renee Good, the woman murdered by US authorities in Minneapolis.

Bloody hell, what the hell is wrong with people?

reply
pjc50
11 hours ago
[-]
Culture war.
reply
kyleee
9 hours ago
[-]
People have been this way since the dawn of time...
reply
Havoc
16 hours ago
[-]
Probably one of the most weak-ass responses to a crisis ever. How was this not done within hours? Or, if they can't manage that, at least within hours of it hitting mainstream news?
reply
pjc50
15 hours ago
[-]
Crisis? It was an intentional product launch. They assumed they'd be able to "get away with it" and that media outrage would not translate into effective legal action.
reply
soco
14 hours ago
[-]
Move fast and break things? Or, innovation at all costs? Or, business value here and now? Or... (add more marketing buzzwords)
reply
Havoc
15 hours ago
[-]
That does seem plausible given how blatant it was

CaaS

reply
rsynnott
9 hours ago
[-]
It's hard to believe that they didn't know that they had this problem before launching; given the volume of material, it's not like it can be difficult to drag out of the offending magic robot.

I'd assume they were just blindsided by the response; they're likely in real danger of getting either DNS-delisted or outright banned in several jurisdictions.

reply
praptak
15 hours ago
[-]
They first tried to manage it by putting the blame 100% on their pedophile users and obviously absolving themselves of any responsibility (cue tired analogies with knife makers not responsible for stabbings).

Fortunately this narrative did not gain traction.

reply
close04
15 hours ago
[-]
> cue tired analogies with knife makers not responsible for stabbings

The knife maker will be in hot water if you ask them for a knife, are very specific about how you'll break the law with it, and they just hand it over and do nothing about it (equivalent to the prompt telling the LLM exactly the illegal thing you want).

Even more so if the knife they made is itself illegal, like an automatic knife or a dagger (equivalent to the model containing the information necessary to create CSAM).

reply
fortranfiend
7 hours ago
[-]
Hmm, it just let me put Keir Starmer in a bikini.
reply
drcongo
14 hours ago
[-]
Willing to bet he got threats from Apple and Google (well, Apple at least) that the CSAM app formerly known as Twitter would be removed from the App Store.
reply
pjc50
14 hours ago
[-]
Everyone else just gets deleted instantly, with no one to appeal to. Twitter has long had favourable treatment despite the "adult content" rules of the app stores.
reply
duxup
12 hours ago
[-]
All the big companies give each other so much extra room to operate.

Facebook’s practices would have gotten any other dev banned from all stores long ago.

Meanwhile, any other dev is under a different microscope / standard.

reply
rchaud
6 hours ago
[-]
The walled garden never claimed to offer equal treatment under its laws.
reply
neko_ranger
9 hours ago
[-]
>Twitter has long had favourable treatment despite the "adult content" rules of the app stores.

Reddit as well

reply
Urahandystar
16 hours ago
[-]
Took them long enough. This was predictable and dangerous. It's a real shame because Elon's goals of allowing an unrestricted AI are somewhat noble even if the execution is haphazard and horrendous. The combination of X's userbase and that technology made this almost inevitable.
reply
ben_w
9 hours ago
[-]
> It's a real shame because Elon's goals of allowing an unrestricted AI are somewhat noble

Are those goals noble? This is the same guy who also said "with AI we are summoning the demon" and whose self-justification for getting a trillion-dollar Tesla bonus deal involved the phrase "robot army"?

reply
bakies
10 hours ago
[-]
Just like his guise of a "Platform of Free Speech", this is an intentional marketing tool and not at all a sign of nobility.
reply
nickmyersdt
15 hours ago
[-]
The goal itself is flawed, not just the execution.

If you build a system explicitly designed to have no content boundaries, and it produces CSAM, that's not a failure of execution - that's the system working as designed. You don't get credit for noble intentions when the outcome was entirely foreseeable.

Deciding to place no limits on what an AI will generate is itself a value judgment. It's choosing to enable every possible use, including the worst ones. That's not principled neutrality; it's moral abdication dressed up as libertarianism.

reply
maplethorpe
15 hours ago
[-]
> It's a real shame because Elon's goals of allowing an unrestricted AI are somewhat noble

When I was young it was considered improper to scrape too much data from the web. We would even set delays between requests, so as not to be rude internet citizens.

Now, it's considered noble to scrape all the world's data as fast as possible, without permission, and without any thought as to the legality of the material, and then feed that data into your machine (without which the machine could not function) and use it to enrich yourself, while removing our ability to trust that an image was created by a human in some way (an ability that we have all possessed for hundreds of thousands of years -- from cave painting to creative coding -- and which has now been permanently and irrevocably destroyed).

reply
westpfelia
16 hours ago
[-]
Now only paying Grok subscribers can make CSAM. Super cool.
reply
literalAardvark
15 hours ago
[-]
Paying subscribers are trivial to track down and convict if they're making CSAM.

In a way, leaving it open as a honeypot is the best action.

reply
janice1999
15 hours ago
[-]
Doubtful. The first thing Musk did was fire the safety team at Twitter.
reply
Hamuko
15 hours ago
[-]
Safety people are also quitting Xitter themselves.

https://bsky.app/profile/caseynewton.bsky.social/post/3mbwqh...

reply
Hamuko
15 hours ago
[-]
Makes perfect business sense. Where else would these users go to for their CSAM-generation needs? They have no other option but to pay!
reply
nickmyersdt
15 hours ago
[-]
Therefore we know that a proportion of paying Grok subscribers will cause harm to real victims. This isn't an abstract debate about free expression.

Non-consensual intimate imagery harms real people.

CSAM normalizes and facilitates abuse of real children.

Grok, and everyone involved in it or similar endeavours, facilitate abuse.

reply
DataDaemon
15 hours ago
[-]
Too late; let's wait for another 120M from the EU.
reply
ChoGGi
11 hours ago
[-]
Oh okay, so only a few pedophiles will have access to Elon Musk's pedophile picture generator?

"Random Braveheart quote"

Unless you use the grok app...

"Random Matrix quote"

I'll take those downvotes and see myself out.

reply
usrnm
15 hours ago
[-]
I understand that it's a very controversial take, but I don't really understand what's so terrible about computer-generated images depicting something like this. I mean, it's clearly wrong when it concerns actual children, but this is just pixels on the screen and bytes on disks. No living creature was actually hurt when producing these images and cannot be hurt by them. We as a society are totally ok with images depicting various horrible and outrageous stuff, so why is this example suddenly such a big issue?

Edit: I'm not talking about deepfakes of real people

reply
nicbou
15 hours ago
[-]
A woman posts on twitter. In the replies, people ask Grok to remove her clothes. Deepfakes are proliferating, sometimes out of personal interest, sometimes as a form of bullying. Fake images can and do hurt real people.

There is also a greater debate about giving people with harmful deviances a “victimless” outlet versus normalising their deviance. Sure there are no children harmed in making the images, but what if they generate images of child actors, or normalise conversation about child sexual abuse?

reply
scotty79
14 hours ago
[-]
That is super interesting culturally. Once video hosting became feasible to offer for free, people came up with the idea that posting their recordings online is a good idea. Which is fine, because to the casual observer their face is as anonymous as their online handle. You can extract value from millions of your viewers while they know only as much about you as you've told and shown them.

Publishing is just the first part. The other part is reactions. Most platforms let you disable them so your viewers don't see the disgusting things people say about you right there alongside your content. Yet many people who publish themselves decide to leave them on, because the vile comments actually help them exploit their viewers. There's a value to being told to kill yourself in a comment on your post.

If the capability to let users generate fake porn in the comments were left up to creators, many of them would leave that option on, for the same reason they leave the comments on. Ben Shapiro could benefit a lot if some terrible person commented on his video with a deepfake of him sucking someone off, both because of the outrage and because his viewer base is more homophobic, and homophobia correlates with homosexual arousal.

reply
pjc50
14 hours ago
[-]
> There's a value to being told to kill yourself in a comment on your post.

At this stage it might just be easier to seek out Satan directly, you'll probably get a better deal for your soul.

reply
scotty79
11 hours ago
[-]
Welcome to the media economy of 2026, where a prolific UK white supremacist is actually an Indian living in Dubai.

Have an interesting stay.

reply
w4rh4wk5
15 hours ago
[-]
I think this blew up right now because of how accessible it was.

However, I think it's clear that Pandora's box is now wide open and that you cannot close it. Sure, you can turn off that Grok integration, but the AI image generation capabilities are now widely available for basically anyone to use.

I wonder whether it'd be better to just "accept and live with it". I agree that this can cause a lot of harm, but I don't see a way this can be outlawed and prosecuted such that there's a net benefit for society. In the EU, many have been battling proposals like Chat Control for years now, not because they want to protect sex offenders, but because backdooring society's privacy on a grand scale is likely far more detrimental than the impact of sex offenders. (And here we aren't even talking about "real" CSAM content.)

reply
pjc50
14 hours ago
[-]
> I wonder whether it'd be better to just "accept and live with it".

I don't think a world where every female public figure gets nonconsensual porn of themselves shared _publicly_ is better.

Private is a separate matter, but only if it stays truly private.

reply
soraminazuki
9 hours ago
[-]
> I wonder whether it'd be better to just "accept and live with it".

Big tech's approach of move fast, break things, and gain a sh** load of money and influence has cost the world so much over the past two decades. So much so that the post-WWII rules-based international order is under threat. We're on the verge of sliding back towards a world where might makes right and the powerful get to kill, beat, steal, and sexually abuse whoever they want whenever they want. Worse, with the help of technology, they get to entertain the masses by turning those horrific acts into social media content.

It's largely due to the acts of big tech that we got into this mess. But instead of learning from this biggest mistake of our generation and taking proactive steps to prevent further harm, you propose that we all suck it up and accept whatever our tech billionaire overlords want to further inflict on this world? WTAF.

> proposals like Chat Control

Are you seriously comparing banning tools for openly forging nudes and sex pics of people to backdooring people's private communications?

reply
w4rh4wk5
5 hours ago
[-]
First, I am not proposing anything.

I feel like we are already past the point where influential people have to play by the same rules as everyone else. I dislike this as much as 99% of the population, but I don't realistically see a remedy given how our governments (EU) are operating.

> Are you seriously comparing banning tools for openly forging nudes and sex pics of people to backdooring people's private communications?

I am not comparing these two things; I mentioned Chat Control because tackling CSAM is one of its main selling points. These forging tools are out in the open, and banning access to and use of them is practically impossible. You could force platforms into setting up filters for public content, but this won't stop the nudes from being shared privately and likely still being accessible on the web _somewhere_. Just look at the bs on TikTok that's been publicly accessible and growing for years now...

IMHO public content filtering won't help much, and the steps that follow will likely involve tapping into people's private content and messages. And this is where I draw the line.

reply
danso
10 hours ago
[-]
You condition your take with “I’m not talking about deepfakes of real people”, but why do you think it is that Grok offers users the ability to generate pornographic imagery of entirely fake digital people — even tailoring them to be as visually flawless as one could desire — and yet so many users end up using it to generate deepfake porn of real people?

So with that in mind, why should we assume that the unrestricted proliferation of fake child porn would not produce significant harm downstream to society? Is the assumption that unlimited fake child porn would satiate the kind of people who now seek out real child porn?

reply
duxup
11 hours ago
[-]
I think it becomes an issue of what content and engagement you're selling when most of the replies to a pretty girl are "grok, post a pic of her with her clothes off" type posts.

The level of discourse on twitter is pretty terrible already but at that point what’s the platform even selling… and who is the platform for?

Just from a product standpoint, you've got problems.

reply
oktoberpaard
15 hours ago
[-]
I’d say it’s very easy to hurt someone with pixels on the screen by spreading these generated images of actual people online.
reply
Heapifying
10 hours ago
[-]
I do wonder why Grok has this capability in the first place.

Is it because it was pre-trained with real images? (which would be highly illegal and immoral, but I wonder if Twitter has a data-curation team somewhere)

Maybe some kind of distillation technique, such as "generate normal porn -> decrease body size -> generate a childish face and replace the original image's face with it"? That would prove there's an intent to explicitly allow this kind of generation.

Is it an emergent behaviour?

Why are there no better safeguards?

reply
jiggawatts
4 hours ago
[-]
I’ve heard anecdotes that AI image generators become better at illustrating clothed people if they’re also trained on nudes.

A good analogy is that human artists also often train by painting or drawing a nude model.

reply
DANmode
15 hours ago
[-]
1) Gateway-drug theory.

2) Inability to differentiate between real and fake - less theoretical.

reply
pjc50
14 hours ago
[-]
> Inability to differentiate between real and fake - less theoretical.

This feels like a deeper, much more important topic that I'm at risk of writing thousands of words on. It feels like the distinction used to be .. more clearly labelled? And now nobody cares, and entire media ecosystems live on selling things that aren't really true as "news", with real and occasionally deadly consequences.

reply
palmotea
5 hours ago
[-]
>> Inability to differentiate between real and fake - less theoretical.

> This feels like a deeper, much more important topic that I'm at risk of writing thousands of words on. It feels like the distinction used to be .. more clearly labelled? And now nobody cares, and entire media ecosystems live on selling things that aren't really true as "news", with real and occasionally deadly consequences.

I don't think he's talking about "media ecosystems" but rather enforcement. If fake CSAM proliferates (that can be confused for the real thing), it would create problems for investigating real cases. Investigative resources would be wasted investigating fake CSAM, and people may not report real CSAM because they falsely assume it's fake.

So it probably makes sense to make fake CSAM illegal, not because of the direct harm it does, but for the same reasons it's illegal to lie to a police officer or make a false crime report.

reply
usrnm
15 hours ago
[-]
That's the same argument used for banning video games: by killing monsters on the screen you somehow become more violent in real life. Which is complete bullshit.
reply
admash
15 hours ago
[-]
The difference being that sexuality is typically considered an innate desire, whilst the desire to commit violence is not.

Plus, we as a species have a biological imperative to protect our offspring, but apparently an immense capacity to ignore violence committed against others.

reply
scotty79
14 hours ago
[-]
Availability of pornography correlates with people having less sex not more on average.
reply
mrbombastic
11 hours ago
[-]
Sure, but there have also been numerous studies showing that it does affect sexual behavior and expectations for those who do have sex.
reply
scotty79
11 hours ago
[-]
Monkey see, monkey do. Nothing surprising here. Once people decide to do something, they model their actions on what they've seen, even for such innate and strong desires. So a completely hands-off approach, leaving it to market forces, might not be the best course of action. But banning doesn't seem like a silver bullet either.
reply
scns
15 hours ago
[-]
Those monsters are virtual, i.e. not real. The people harmed by this are real, breathing human beings.
reply
usrnm
14 hours ago
[-]
I've stated it several times already in this thread: my question is not about deepfakes; I can clearly see how those can be harmful. I'm talking about purely computer-generated content, not based on existing people.
reply
episteme
11 hours ago
[-]
I think I hold a similar view to you and have the same question, so maybe what I've been thinking about might be useful to you. Everyone is upset about CSAM, but when you talk about it, it's only about deepfakes.

I don’t think we can avoid a world where people can generate CSAM easily, so we have to separate the discussion between being able to do that privately and grok being able to do it.

It makes sense to me that we don’t want widely used websites to contain images of CSAM that you can’t easily avoid, it’s simply repulsive to almost everyone and that’s almost certainly a human instinct, I don’t think it needs to be much more complicated than that.

In terms of generating CSAM privately or even sharing it with other people, I think this is a much more interesting discussion. I think at this point it is an open question whether it is harmful. Could this replace the abuse that is happening to create some of the real content? Does the escalation argument hold water - will people be more likely to sexually assault children due to access to this material? I don't think we know enough about pedophilia to answer these questions, but given that I don't think there is any way to stop this content from being generated in 2026, we really need to answer them before we decide to simply incarcerate everyone doing it.

reply
muwtyhg
7 hours ago
[-]
> Edit: I'm not talking about deepfakes of real people

Then you are not talking about the article, so what was the point of your comment?

reply
gbil
15 hours ago
[-]
Is this some kind of a joke? People, including children, are getting bullied every day with fake nude images of themselves, sometimes even ending in suicide, and you are asking what is "so terrible"???

EDIT: OP seems to have clarified in another reply that they are talking about fully computer-generated images, but that is beside the point; the outcry here is that Grok generated fake images of actual people to start with!

reply
Handy-Man
15 hours ago
[-]
> No living creature was actually hurt when producing these images and cannot be hurt by them

Huh? People have been editing images of real women to depict them in bikinis (and that's the least offensive).

reply
usrnm
15 hours ago
[-]
I'm not talking about deepfakes of real people, I'm talking about computer-generated CSAM. Should've made myself clearer
reply
defrost
15 hours ago
[-]
Is that trained on real CSAM just as computer generated art is trained on real images?
reply
Hamuko
15 hours ago
[-]
This is sorta the wrong context to have this debate, since in Grok's case, most of them are edits of real pictures of real people. It's not just Grok generating some lolicon out of thin air.
reply
igleria
15 hours ago
[-]
You don't understand what is dangerous about being able to trivially generate believable images of events that actually did not happen?
reply
kyleee
7 hours ago
[-]
The cat is out of the bag
reply
array_key_first
4 hours ago
[-]
Yeah, that doesn't mean you need to give the cat a car bomb and a license to kill. Of course X has responsibility here; people deserve to go to jail for this.
reply
ndsipa_pomu
10 hours ago
[-]
> No living creature was actually hurt when producing these images

I would strongly dispute that as the amount of energy required by and pollution produced by AI systems does cause a lot of harm to living creatures and environments.

reply
_petronius
15 hours ago
[-]
Not so much controversial, as evidence that you completely lack capacity for empathy and you should do some serious self-reflection. This is just a really vile and amoral view to hold.

People are being hurt by this, because "just pixels on a screen and bytes on a disk" can constitute harm due to the social function that information serves.

It's like dismissing hurling insults at someone as "just words" because no physical violence has occurred yet. The words themselves absolutely can be harm because of the effect they have, and they also create an environment that leads to further, physical violence. Anyone who has experienced even mild bullying can attest to that.

Furthermore, women and girls are often subject to online harassment and humiliation. This is of course part of that -- we aren't talking about fictional images here, we are talking about photos of real people, many of whom are children, being manipulated to shame, humiliate, and harass them sexually, targeted overwhelmingly at women and girls.

Advocating for the freedom to commit that kind of harm against other people is gross, and you should reconsider your views and how much care you have for other people.

reply
drcongo
14 hours ago
[-]
The fact that this has downvotes speaks volumes about HN users.
reply
scotty79
11 hours ago
[-]
Who knew users of HACKER News wouldn't be in favor of suppressing technology and the exchange of information, whatever it might be, for mainstream morality reasons.
reply
nomel
14 hours ago
[-]
Or, it's that

> we aren't talking about fictional images here, we are talking about photos of real people, many of whom are children,

is not compatible with what GP actually said

> it's clearly wrong when it concerns actual children

> No living creature was actually hurt when producing these images and cannot be hurt by them.

making these overly dramatic character attacks seem mostly silly

> you completely lack capacity for empathy and you should do some serious self-reflection. This is just a really vile and amoral view to hold.

> Advocating for the freedom to commit that kind of harm against other people is gross, and you should reconsider your views and how much care you have for other people.

And everyone clapped.

reply
drcongo
8 hours ago
[-]
> is not compatible with what GP actually said

GP edited their post to add that after everyone pointed out that that's what the entire thread is actually about and GP realised how disgusting they looked. Keep up.

reply
nomel
56 minutes ago
[-]
> GP edited their post to add that

This is false. They only added the "edit: " text, not anything I quoted. I know because I quoted the same in a now-deleted reply before the "edit: " text was added.

reply
exodust
12 hours ago
[-]
It's just manipulated photos. No need to panic.

Everyone knows photos can be easily faked. The alarmist response serves no purpose. AI itself can be tasked to make and publish fake photos. Will you point pitchforks at the generators of the generators of the generators?

Fake content has a momentary fizz followed by a sharp drop-off and demotion to yesterday's sloppy meme, fading to nothing more than a cartoon you don't like. Let's not, I hope, go after "cartoons" or their publishers.

reply
admash
15 hours ago
[-]
Because it creates an appetite for that type of content which is expected to grow to include real images with real harm.
reply
Jigsy
5 hours ago
[-]
People say the same thing about anime artwork and manga, which is an excuse I don't buy into.

The only reason I personally can't condone photorealistic AI images is because they're indistinguishable from photographs.

And in this case, it's required the exploitation of another human being (their photograph, or a photograph of them) in order to undress/manipulate the image.

reply
nicbou
15 hours ago
[-]
I agree, but then again didn’t we have the same debate about violent video games? I don’t know why I am okay with simulated violence but repulsed by simulated sexuality.
reply
admash
15 hours ago
[-]
The difference being that sexuality is typically considered an innate desire, whilst the desire to commit violence is not. Plus, we as a species have a biological imperative to protect our offspring, but apparently an immense capacity to ignore violence committed against others.
reply
Jigsy
5 hours ago
[-]
> whilst the desire to commit violence is not.

Considering how many people want to murder people (or justify murdering people) simply for being attracted to children, are you categorically sure about that?

reply
raincole
15 hours ago
[-]
The argument is that virtual porn normalizes actual porn and virtual abuse normalizes actual abuse. You know, like how the Bible normalized burning people alive.
reply
Hamuko
15 hours ago
[-]
I would've never cast my first stone had it not been for the Bible.
reply
7bit
15 hours ago
[-]
I agree, let's finally ban the fucking Bible.
reply
cosmicgadget
4 hours ago
[-]
The kama sutra?
reply
wrecked_em
14 hours ago
[-]
Back that up with verse and context, please.
reply
willmarch
11 hours ago
[-]
There actually are explicit cases: Leviticus 20:14 and Leviticus 21:9 prescribe burning as a punishment in certain scenarios (ancient Israelite legal code).

Leviticus 20:14 (KJV)

“And if a man take a wife and her mother, it is wickedness: they shall be burnt with fire, both he and they; that there be no wickedness among you.”

Leviticus 21:9 (KJV)

“And the daughter of any priest, if she profane herself by playing the whore, she profaneth her father: she shall be burnt with fire.”

reply