Probably about half of us here remember photos before the cell phone era. They were rare, and special, and you'd have a few photos per YEAR to look back on. The feel of photos back then, was at least 100x stronger than now. They were a special item, could be given as a gift. But once they became freely available that same amount of emotion is now split across many thousands of photos. (not saying this is good or bad, just increased supply reducing value of each item)
With image/art generation the same thing will happen and I can already feel it happening. Things that used to be beautiful or fantastic looking now just feel flat and AI-ish. If claymation scenes can be generated in 1s, and I see a million claymation diagrams a year, then claymation will lose its charm. If I see a million fake Tom Cruise videos, then it oversaturates my desire for all Tom Cruise movies.
What a time to be alive.
Likewise with the sort of resurgence of vinyl, and the obsession over "old" point and shoot digicams.
Also for VHS camcorder footage
I don't think I fully agree. Sure, people take so many photos that they don't have the time or the will to start looking through them all.
You can't just whip out your phone and start scrolling through thousands of photos with friends. It would get so boring so fast.
But if you put some effort into making a nice little selection of the best photos, that emotion is 100% still there.
Yes, it’s crude, and you have to do the face tagging, but I think it’s a huge improvement over not having that.
I take a hundred photos on a trip, my phone uses AI (not even the new fancy AI, but old 5-10 year old stuff to detect smiling faces and people in frame) to pull out less than a dozen that are worth keeping. Once a month or so I get fed a reminder of some past trip.
This isn't any different than before. The number of photos taken is greater, but the overall number of worthwhile photos from a given trip is about the same.
And we were lucky if even 1 picture per roll was worth keeping long term. And my family almost never looks through those photo albums.
Digital picture frames with a curated rotation of old scans and new digital pictures are what made pictures great for my family.
"One of the primary properties of anything with Mana is a feeling of uniqueness. That one has never encountered something like this before, and therefore it is important. The uniqueness of the thing is a property that pulls you in to focus more closely, to attempt to understand more closely why the thing is unique."
Scott Alexander has written about it:
(except The Mandalorian, and I can't believe I'm using the word "content" :/)
edit: Totally forgot about Andor & Rogue One sorry, great film and two seasons of top-notch storytelling.
To each their own, but I think Andor is, by far, the best post-ROTJ output.
And that is the gist of the problem, isn't it? As we approach our forties and beyond, chances are we have lived more than half our lives. So do I really want to spend hours watching something I might hate and might leave a bad taste in my mouth? (See game of thrones season 8 or worse, Westworld the HBO series which I don't even want to know what happened in season 3 or 4). I am sure there are people who will enjoy those but for the average person it is highly unlikely.
You could ask "how many more movies should we make?" and the answer would be "there is no limit, I always want more"
"I like this thing therefore more of it is obviously better"
I think it takes maturity to say "I like this thing and I don't want more of it."
Even if there were a million fake Tom Cruise movies I would still like Edge of Tomorrow (even if it had been AI made).
I totally get this, but on the other hand, we have definitely benefited from being able to take more photos. I have some older friends (pushing 80 or so) who sucked at taking photos, so 9 of 10 photos they have from their prime adult years raising their family are blurry to the point of not recognizing the people if you don't already know who they are.
They have great photos from the last 15-20 years, but of course they do, phone cameras are vastly superior to the point-and-shoot cameras from the 70s, and when you reflexively shoot a dozen photos every time you pose for a picture your odds are way better that one will come out clear, everyone looking at the camera, smiling, etc.
I dare say, the feel of photos from back then is much stronger than of the photos taken today. See e.g.:
https://plfoto.com/zdjecie/413363/bez-tytulu?from=autor/beak...
https://plfoto.com/zdjecie/619173/bez-tytulu?from=autor/beak...
My generation generally only had photos from birthdays, holidays, vacations, weddings, graduations and reunions. We looked at the three albums which contained every family photo often and I know them all by heart.
My kid was born in 2009 and our family digital album has nearly 1,000 photos per year of her life. And she's seen virtually none of them and seems to have little interest in ever seeing them since she creates so many of her own photos every day which are ephemeral.
- https://en.wikipedia.org/wiki/On_Photography
- https://en.wikipedia.org/wiki/Regarding_the_Pain_of_Others
You said it too:
> If I see a million fake Tom Cruise videos, then it oversaturates my desire for all Tom Cruise movies.
The trick of course is to keep yourself from seeing that content.
The other nuance is that as long as real performance remains unique, which so far it is, we can appreciate more what flesh and blood brings to the table. For example, I can appreciate the reality of the people in a picture or a video that is captured by a regular camera; its AI version lacks that spunk (for now).
Note that the iPhone in its default settings is already altering reality, so AI generation is just further along that slippery axis.
Perhaps, AI and VR would be the reason why our real hangouts would be more appreciated even if they become rare events in the future.
I guess my stick-figure hand-drawn diagrams, or a doc with a few mistakes in grammar or spelling, would be seen as more worth reading as long as my ideas are sound. Right? :-)
I often call this over-saturation the media equivalent of semantic satiation. Anything commoditized or mass-manufactured isn't going to have emotional appeal.
Feels like what you described captures that inner personality trait better than anything I've heard before.
With respect to people with a consumptive addictive personality though - I really feel for them, it's a rough time to be alive.
Unimaginable abundance may sound good (it does to me), but scarcity has value too. We might just find out that its value is too important. I just hope that if we do, it’s not too late.
My parents took way more photos with film than I do with my cellphone camera.
Or a photo of my freshman dorm room during exam season. Subpar image quality, lousy lighting, etc. but so many memories, positive and negative, are elicited by that fleeting glimpse from an era of excitement, boredom, stress, uncertainty, and optimism, not knowing where I was going in life, when I'd ever look back at that snapshot, but deciding on a whim to grab it during a break from cramming topics now long forgotten.
But I roll my eyes at the idea of injecting my likeness into a short clip depicting random over-the-top action sequences, no matter how photorealistic, because I've never wanted to do that.
I have a photo of a friend I’ve since drifted from, it’s her in her army fatigues after basic. She had just gone through a horrible divorce and that was a shining achievement for her.
The story behind the photo is what makes it matter.
Not the format.
However I will agree AI is a poor substitute. You’ll have people creating AI photos of a fake marriage and fake pets in a big fake house, while they sleep in a bunk bed in a halfway house.
But I think it's more because the people who grew up with it now have PCs and money, not because people are rediscovering pixel games.
No, ALL CONTENT is asymptotically approaching 0. This includes photos, videos, stories, app features, even code. Code is now worthless. If you want better security from generated code, wait 2 months and it will be better. If you want a photo, you just prompt and it will generate it on the fly.
AI will be generating movies and videos on the fly, either legally or illegally infringing on IP. Do you want a movie where Deadpool fights The Hulk? Easy. And just like how ad technology knows your preferences, each movie will be individually tailored to YOUR liking just so that your engagement will increase. Do you like happy endings? Deadpool and Hulk will join forces and defeat Thanos. Do you prefer dark endings? Deadpool and Hulk fight until they float off into the Sun and get atomized but keep regenerating for eternity.
If you want to see a photo of you and your family from 15 years ago, it will generate slightly better versions of yourself and your wife and maximize how cute your kids look. This is the world we are facing now, where authenticity is meaningless. And while YOU may not prefer it, think about the kids who aren't born yet and will grow up in a world where this exists.
> If you want to see a photo of you and your family from 15 years ago, it will generate slightly better versions of yourself and your wife and maximize how cute your kids look.
Sure, but why would any of this media have any emotional significance?
The reason we enjoy media of friends and family is because it depicts a moment in the life of our loved ones. A fake image or video of them is of absolutely zero value to everyone.
The reason we enjoy cinema is because a talented group of people had an interesting story to tell and brought it to life in a memorable way. Me, or a random person with no filmmaking talent, prompting a tool to generate a particular scene wouldn't be interesting at all. Talented individuals will also rely on this technology, of course, but a demand for human creativity will still exist, possibly even stronger than today, once everyone is exhausted from the flood of shitty Deadpool vs Hulk videos.
I suspect the same will eventually happen with every other product these tools are currently commoditizing, including software.
All of this seems like a neat technology in search of a problem to solve, while actually introducing countless societal problems we haven't even begun to acknowledge, let alone address. But it sure is a great money and power grab opportunity for giant corporations to further extend their reach. And they have the gall to tell us it will bring world prosperity. Most of these sociopathic assholes should be prosecuted and jailed.
Well, the world changes dramatically. Connected old folks are like Neanderthals in a big city now, while those who aren't connected still live locally in their minds. Youngsters just accept the world as it is. Nobody is amused by computers and cameras anymore (at least in developed areas).
And with all that the worst is yet to come...
In my experience, a digital photo of myself and my partner used as the lock screen of my phone has the same emotional weight as the one sitting on my desk (which is a print out of a digital photo). Additionally, printing out a photo of you and your partner and gifting it to them has the same weight as going through childhood photo. A scrapbook of a recent vacation filled with printed digital photos evokes memories just as vividly as one from the 80s. On the flip side of this, a photo in a box in the basement has the same weight as a photo sitting in the cloud.
I'll offer you some more food for thought: are Aardman Animations films charming because they use claymation? Or is it the creative force of people like Nick Park and Peter Lord?
The one factory you refer to was the last one, and was purchased by the Impossible Project (now Polaroid BV). So they were able to save one set of machines. But the actual process of making the film was lost. So it’s an old set of machines making a new but similar product.
I see what you did there and know exactly the political economist you are talking about, but if you Speak His Name, the shrieking hordes descend.
Um yeah I don't know. I fully resonate with the _emotional_ appeal here, but realistically I remember going round to people's houses to be shown analog photo albums that nobody was that bothered about seeing, because they didn't really care -- they weren't their photos.
The special photos (a few a year) still exist in digital form.
1. The narrative/life of the artist becomes a lot more important. The most successful artists are ones that craft a story around their life and art, and don't just create stuff and stop. This will become even more important.
2. Originality matters more than ever. By design, these tools can only copy and mix things that already exist. But they aren't alive, they don't live in the world and have experiences, and they can't create something truly new.
3. Those that bother to learn the actual art skills, and not merely prompting, will increasingly be miles ahead of everyone else. People are lazy, and bothering to put in the time to actually learn stuff will stand out more and more. (Ditto for writing essays and other writing people are doing with AI.)
4. Taste continues to be the single most important thing. The vast, vast majority of AI art out there is...not very good. It's not going to get better, because the lack of taste isn't a technical problem.
5. Art with physical materials will become increasingly popular. That is, stuff that can't be digitized very well: sculpture, installation art, etc. Above all, AI art is uncool, which means it has no real future as a leading art form. This uncoolness will push people away from the screen and towards things that are more material.
The obvious ones stand out, but there are so many that are indiscernible without spending lots of time digging through it. Even then, there are some where you can at best guess that they're maybe AI-generated.
The positive aspect of this advance is that I've basically stopped using social media because of the creeping sense that everything is slop
a lot of these accounts mix old clips with new AI clips
or tag onto something emotional like a fake Epstein file image with your favorite politician, and pointing out its AI has people thinking you’re deflecting because you support the politician
Meanwhile the engagement farmer is completely exempt from scrutiny
It's fascinating how fast and in what unexpected directions this is going.
Soon many real OF models will be out of a job, when everyone is able to produce content to their personal taste from a few prompts.
A big part of it is also the feeling of "connection" with the creator via messages and whatnot, but that too can be replicated (arguably better) by AI. In fact, a lot of those messages are already being generated haha.
net positive to society
I still think, even with that, that like most predictions of AI taking over any content industries, the short-term predictions are overblown.
Also, I suspect that we'll soon see the same pattern of open weights models following several months behind frontier in every modality not just text.
It's just too easy for other labs to produce synthetic training data from the frontier models and then mimic their behavior. They'll never be as good, but they will certainly be good enough.
-They simply aren't into real women/men (so you couldn't even pay a model to do what they're looking for).
-They want to play out fantasies that would be hard to coordinate even if you could pay models (I guess this is more on the video side of things, but a string of photos can be put together into a comic)
-They want to generate imagery that would be illegal
Based on this, I would guess fetish artists are more at risk than OF models. However, AI isn't free. Depending on what you're looking for commissions might be cheaper still for quite a while...
That was the beginning of my journey into understanding what proper verification/vetting of a source is. It's been going on for a long time and there are always new things to learn. This should be taught to every child, starting early on.
Has this thought process ever worked in real life? I know plenty of seniors who still believe everything that comes out of Facebook, be it AI or not, and before that it was the TV, radio, newspapers, etc.
Most people choose to believe, which is why they have a hard time confronting facts.
And not just seniors. I see people of all ages who are perfectly happy to accept artificially generated images and video so long as it plays to their existing biases. My impression is that the majority of humanity is not very skeptical by default, and unwilling to learn.
New generations get unlimited brain rot delivered through infinite scroll, don't know what a folder is, think everything is "an app", and keep falling for the "technology will free us from work and cure cancer"
There was a sweet spot during which you could grow alongside the internet at a pace that was still manageable, when companies and scammers weren't trying so hard to rob you of your time, money, and attention
What in the world is a fake OF model?
Does "OF" stand for "of food"?
Also, using AI will not allow you to better express yourself. To use an analogy, it will not put your self-expression into any better focus, but just apply one of the stock IG filters to it.
Cameras are now "enhancing" photos with AI automatically. The contents of a 'real' photo are increasingly generated. The line is blurring and it's only going to get worse.
I suppose if the AI was able to tell me a true and compelling story, I might not even mind so much. I just don't want to be spoon fed drivel for 15 minutes to find it was all complete made up BS.
I think part of the issue with architects and designers today is that they use CAD too much. It's easy to design boxes and basic roof lines in CAD. It's harder to put in curves and more craftsman features. Nano Banana's renders have more organic design features IMO.
Our house is looking great and we're very happy how it's going so far with a lot of the thanks to Nano Banana.
Like... What are your inputs to the model? Empty renders of the space, or more fully decorated views/ photos? Do you have a light harness around this to help you discover the style you like and then stay consistent with it?
Do you find that giving a lot of context around the space you're designing helps (it hasn't in my attempts)?
It wouldn’t show me the exact things I wanted, but got close enough that I could test ideas and iterate quickly.
The "cubism" example seems like it would be a closer fit to something like stained glass or something. I don't think the thing really understands what cubism was all about. Cubist painters were trying to free themselves from the confines of a single integral plane of perspective by allowing themselves to show various parts of the image from different viewpoints, different times, different styles, etc.
The division of the image into geometric shapes is just a by-product of that quest, whereas the examples here have made it the sum total of the whole piece.
This feels to me like an example of how LLMs still don't "understand" what the art means, and are just aping its facade.
And actually, the link I saw a bit ago was this [0] which is more in-depth and has a lot more examples + prompts.
Now extrapolate to all other artforms. Sculpture seems safe, for now, but only barely so.
Artists aren't doing it for the money. With advanced tools like these they would've iterated much faster and created much grander designs.
Art is about pushing limits of what's possible and AI just raises those limits.
That is unlike any artist that I know and I know quite a lot of them. They love their work and the process but they also need to eat. And that included those mentioned above.
Agreed that if you are an artist, this is not going to be a big concern for you.
AI is well on the way to eliminating human-made art, since the skills to actually make art will be lost to the skill of being able to describe art. You know, since the only thing that matters is reducing costs.
That's engineering, if that.
Art isn't, and has never been about that.
The only thing AI art makes possible that wasn't possible before is the scale of slop
These days, through commissions, art is a much more viable profession than it ever was.
So you were making book covers? Ah, so sorry. Nobody really cared that it was you.
And you can probably extend that to what's between the covers...
AI is incompatible with capitalism, but the world isn't ready for that. So we'll have a prolonged period of intense aggregation where more and more value is attributed to systems of control that already have more than they could ever spend, long after the free parts could have provided for basic human needs.
In other words, the masters existed because they had benefactors and a market for their art and inventions. Today there are better artists and inventors toiling in obscurity, but they won't be remembered because they merely make rent. Which gets harder every day, so there's a kind of deification of the working class hero NPC mindset and simultaneously no bandwidth for ingenuity (what we once thought of as divine inspiration).
Terence McKenna predicted this paradox that the future's going to get weirder and weirder back in 1998:
People who actually care about art, if given a chance to see it, yes.
Of course, it being done by Davinci is not some random fact about the painting - as if a painting is a mere artifact.
You can argue things like code generation are an extension of the engineer wielding it. Image generation just seems like a net negative overall if it’s used at scale.
Edit: By scale, I mean large corporations putting content in front of millions. I understand the appeal for smaller businesses where they probably weren’t going to pay an artist anyway.
When a company sends an email or docu-sign, they don’t want to pay a courier.
Technology supplements or replaces jobs, often reducing costs. This is no different.
It's an ethical conundrum because we're not paying anyone, but we don't have the money to pay anyone, and it's good enough for our budget.
But we're getting used to the process of changing a part of the text in a few seconds without any artist involved and for 0$.
I guess that soon we'll be able to create voice samples from known personalities for a few dollars, with prices based on the popularity of the artist and some sanity checks based on the artist's preferences.
My thought is the large corps that could afford it, still won’t because it’s a cost they don’t need to incur. For them it’s not even a moral conundrum.
Much like the star bellied sneetches, when the quality of some ad format becomes untethered from the cost of production and placement, then marketers will flock to some alternative.
YouTube influencers fill[ed] that niche for a while because content milling SEO spam and fake reviews is a lot more expensive if you present the results in video form with good production values. (Not sure how long that will be true, since AI is getting better at short-term video).
This is like the last mile for online presence. The average barber out here doesn't use Squarespace, barely knows how to use Facebook and doesn't touch GenAi. But they can still cut your hair pretty well - tech savvyness doesn't have a huge connection to business competence out here.
Average person won't notice, and would not care either way.
Things that would take me an hour or so the old way take three minutes with NB.
But I can see this applying to small businesses. Something that some random person would have to spend an hour photoshopping can be done in a few minutes with NB.
Larian Studios most recently was under fire for this [1]. Like I can see a director going “what would X look like?” and then speeding over to the concept artists for a proper rendition if they liked it. I don’t think this is at scale though. Any large business is just going to get rid of the concept artists.
[1]: https://www.pcgamer.com/games/rpg/baldurs-gate-3-developer-l...
I'm torn on the scale thing. It definitely seems net negative. But I think we collectively underestimate just how deeply sick the existing thing already is. We're repulsed by image gen at scale because it breaks our expectation that images are at least somewhat based on reality, that they reflect the natural world or what we can really expect from a product, from a company, from the future. But that was already a bad expectation: when's the last time you saw a mcdonalds meal that looked like the advert? Or a sub-30$ amazon product that wasn't a complete piece of shit? Advertisements were already actively malicious fantasies to exploit the way our brains react to pictures. They're just fantasies that required whole teams of humans doing weird bullshit with lighting and photoshop, and I'm not sure that's much better. It was already slop. All the grieving we do about the loss of truth, or the extent to which corps will gleefully spray us with mind-breaking waterfalls of outright lies, I think those ships sailed a long time ago. The disgust, deceit, the rage we feel about genAI slop is the way we should have felt about all commercials since at least the 80s IMO.
This is a good point. My gut reaction is “well at least someone was paid to do it and can continue to keep society/the economy going ”.
I can see the other side where that’s a soulless job. Not sure which is worse: a soulless job where your skills apply, or even fewer jobs in a competitive industry.
You could easily say the same about anytime computers or robots or automation have taken a job away. We’ve been going down this road for decades.
Two prompts I'd consider "interesting" for image-gen testing. It did pretty well.
"A macro close-up photograph of an old watchmaker's hands carefully replacing a tiny gear inside a vintage pocket watch. The watch mechanism is partially submerged in a shallow dish of clear water, causing visible refraction and light caustics across the brass gears. A single drop of water is falling from a pair of steel tweezers, captured mid splash on the water's surface. Reflect the watchmaker's face, slightly distorted, in the curved glass of the watch face. Sharp focus throughout, natural window lighting from the left, shot on 100mm macro lens." - The only major problems I could find at a glance: the clasps probably don't make sense, and the drop of water inside the watch on the cog doesn't make sense / the cog is mangled into the tweezers.
"A candid photograph taken from behind an elderly woman sitting alone on a park bench in late autumn. She is gently resting one hand on the empty seat beside her, where a man's weathered flat cap and a folded newspaper sit untouched. Fallen golden leaves cover the path ahead. The low afternoon sun casts her long shadow alongside a second, fainter shadow that almost seems to be there, the suggestion of someone sitting next to her, visible only in the light on the ground. Muted, warm color palette, shallow depth of field on the background trees, photojournalistic style." - I don't know why, but it internal-errored twice on this one, then got there.
Here's some of my captions that tend to trip up even state-of-the-art models.
https://mordenstar.com/other/nb-pro-2-tests
So far it does feel more iterative than an entirely new leap in terms of capabilities, but I haven't run it through the more multimodal aspects such as editing existing images.
That being said, it actually managed the King Louie jump rope test which surprised me.
EDIT: after significant prompting, it actually solved it. I think it's the first one to do so in my testing.
And not a (botched) fake white/gray grid background that is commonly used to visualize transparency?
Nano Banana was technically impressive the first time, but after Seedance it's not really. It's all just an internet pollution machine anyway.
I guess even Google is running out of GPUs.
The Banana models (image) are different from the mainline models, but they confusingly leverage the same naming scheme.
I don't have inside info, but everything we've seen about Gemini 3.0 makes me think they aren't doing distillation for their models. They are likely training different arches/sizes in parallel. Gemini 3.0-flash was better than 3.0-pro on a bunch of tasks. That shouldn't happen with distillation. So my guess is that they are working in parallel, on different arches, and try out stuff on -flash first (since they're smaller and faster to train) and then apply the learnings to -pro training runs. (The same thing kinda happened with 2.5-flash, which got better upgrades than 2.5-pro at various points last year.) Ofc I might be wrong, but that's my guess right now.
Afaik the only real competitor is Riverflow V2.
we have user-preference rankings that put NB2 on top: https://arena.ai/leaderboard/text-to-image
- https://hunyuan.tencent.com/image/en?tabIndex=0
- https://seed.bytedance.com/en/seedream5_0_lite
someone shared benchmarks that differ from my experience though, so I may be biased
> I'm sorry, but I cannot fulfill your request as it contains conflicting instructions. You asked me to include the self-carved markings on the character's right wrist and to show him clutching his electromancy focus, but you also explicitly stated, "Do NOT include any props, weapons, or objects in the character's hands - hands should be empty." This contradiction prevents me from generating the image as requested.
My prompts are automated (e.g. I'm not writing them) and definitely have contained conflicting instructions in the past.
A quick google search on that error doesn't reveal anything either
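Since the prompts are assembled automatically, one cheap mitigation is to scan each assembled prompt for obviously contradictory directives before sending it to the API. A minimal sketch in Python; the `find_contradictions` helper, its rule list, and the regex patterns are made up for illustration and would need tuning against real prompt templates:

```python
import re

def find_contradictions(prompt: str) -> list[str]:
    """Flag prompt fragments that both demand and forbid the same thing.
    The (demand, forbid, description) rules below are illustrative only."""
    issues = []
    lowered = prompt.lower()
    rules = [
        # Asking for an object in the hands while also requiring empty hands
        (r"clutching|holding|in the character's hands?",
         r"hands? should be empty|do not include any props",
         "hand contents demanded and forbidden"),
    ]
    for demand, forbid, desc in rules:
        if re.search(demand, lowered) and re.search(forbid, lowered):
            issues.append(desc)
    return issues

prompt = ("Show him clutching his electromancy focus. "
          "Do NOT include any props - hands should be empty.")
print(find_contradictions(prompt))  # ['hand contents demanded and forbidden']
```

A keyword scan like this obviously can't catch every semantic conflict, but it would have flagged the exact contradiction the model complained about, before spending an API call.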
- Base pricing for a 1024x1024 image is almost 1.6x what normal Nano Banana is ($0.067 vs. $0.039), however you can now get a 512x512 image for cheaper, or a 4k image for cheaper than four 1k images: https://ai.google.dev/gemini-api/docs/pricing#gemini-3.1-fla...
- Thinking is now configurable between `Minimal` and `High` (was not the case with Nano Banana Pro)
- Safety of the model appears to be increased so typical copyright infringing/NSFW content is difficult to generate (it refused to let me generate cartoon characters having taken psychedelics)
- Generation speed is really slow (2-3min per image) but that may be due to load.
- Prompt adherence to my trickier prompts for Nano Banana Pro (https://minimaxir.com/2025/12/nano-banana-pro/) is much worse, unsurprisingly. For example I asked it to make a 5x2 grid with 10 given inputs and it keeps making 4x3 grids with duplicate inputs.
However, I am skeptical of their marquee feature: image search. Anyone who has used Nano Banana Pro for a while knows that it will strongly overfit on any input images, copy/pasting the subject without changes, which is bad for creativity, and I suspect this implementation does the same.
Additionally I have a test prompt which exploits the January 2025 knowledge cutoff:
Generate a photo of the KPop Demon Hunters performing a concert at Golden Gate Park in their concert outfits.
That still fails even with Grounding with Google Search and Image Search enabled, and with more charitable variants of the prompt.

tl;dr: the example images (https://deepmind.google/models/gemini-image/flash/) seem similar to Nano Banana Pro's, which is indeed a big quality improvement, but relative to base Nano Banana it's unclear if it justifies a "2" subtitle, especially given the increased cost.
Original Nano Banana (gemini-2.5-flash-image): $0.039 per image (up to 1024×1024px)
Nano Banana 2 (gemini-3.1-flash-image-preview): $0.045 per 512px image, $0.067 per 1K (1024×1024) image, $0.101 per 2K image, $0.151 per 4K image
Nano Banana Pro (gemini-3-pro-image-preview): $0.134 per 1K/2K image $0.240 per 4K image
So at the most common 1K resolution, NB2 is ~72% more expensive than the original NB ($0.067 vs $0.039), but still half the price of NB Pro ($0.134).
source: https://deepmind.google/models/model-cards/gemini-3-1-flash-...
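For anyone scanning quickly, the comparisons above check out with a few lines of arithmetic (prices in USD as quoted in this thread; treat them as point-in-time figures, not official documentation):

```python
# Per-image prices as quoted above (USD); point-in-time figures.
nb1_1k   = 0.039   # original Nano Banana, up to 1024x1024
nb2_1k   = 0.067   # Nano Banana 2, 1K
nb2_4k   = 0.151   # Nano Banana 2, 4K
nbpro_1k = 0.134   # Nano Banana Pro, 1K/2K

# NB2 vs original NB at 1K resolution
print(f"NB2 is +{nb2_1k / nb1_1k - 1:.0%} vs NB1")  # +72%

# NB2 at 1K vs NB Pro at 1K: exactly half the price
print(f"NB2 is {nb2_1k / nbpro_1k:.2f}x NB Pro")    # 0.50x

# One 4K NB2 image vs four separate 1K NB2 images
print(nb2_4k < 4 * nb2_1k)                          # True ($0.151 < $0.268)
```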
Previous nano banana frequently made speech attribution errors, the new one seems a lot more consistent.
Just think: we conceptually know what a brushless motor design looks like, and it's just pixels. I guess even if it did produce the image, we wouldn't know what it means.
You could generate "pregnant Elon Musk with four arms and three eyes doing yoga poses" because the image models have enough visual concepts of each of those individual things, but that specific image is (likely) not in any training dataset.