Art is something that cannot be generated like synthetic text, so it will have to be powered by human artists nearly forever, or else you will continue to end up with artifacting. It makes me wonder whether artists will just be downgraded to an "AI training" position. But that could be for the best: people could draw what they like and have that input feed into a model for training, which doesn't sound too bad.
While being very pro-AI in terms of any kind of trademarking and copyright, it still makes me wonder what will happen to all the people who provided us with entertainment, and whether the quality will continue to increase, or if we're going to start losing challenging styles because "it's too hard for AI" and everything will start 'feeling' the same.
It doesn't feel the same as people being replaced with computers and machines; this feels like the end of a road.
By the time my mom retired from being a translator, she had gone from typewriters to machine-assisted translation with centralised corpus databases. All the while the available work became scarcer, and the wages became lower and lower.
In the end, the work we do that is heavily robotic will be done by less expensive robots.
The output of her translations had no copyright. Language developed independently of translators.
The output of artists has copyright. Artists shape the space in which they’re generating output.
The fear now is that if we no longer have a market where people generate novel arts, that space will stagnate.
This means a book can be in public domain for the original text, because it's very old, but not the translation because it's newer.
For example, Julius Caesar's "Gallic War" in the original Latin is clearly not subject to copyright, but a recent English translation will be.
If not, that would put pressure on production companies to use machines so they don't have to pay future royalties.
Our current best technology for this, LLMs, is good enough for translating an email or a meeting transcript and getting the general message across. Anything more creative, technical, or nuanced, and they fall apart.
Meaning that for anything of value, like books, plays, movies, or poetry, humans will necessarily be part of the process: coaxing, prompting, correcting...
If we consider the machine a tool, it's easy, the work would fall under copyright.
If we consider the machine the creator, then things get tricky. Are only the parts reworked/corrected under copyright? Do we consider under copyright only if a certain portion of the work was machine generated? Is the prompt under copyright, but not its output?
Without even getting into the issue of training data under copyright...
There is some movement regarding copyright of AI art, legislation being drawn up and debated in some countries. It's likely translations would be impacted by those decisions.
No, but it will be derived work covered by the same copyright as original.
The quality of human translation is better, for now.
Copyright is a very messy and divisive topic. How exactly can an artist claim ownership of a thought or an image? It is often difficult to ascertain whether a piece of art infringes on the copyright of another. There are grey areas like "fair use", which complicate this further. In many cases copyright is also abused by holders to censor art that they don't like for a myriad of unrelated reasons. And there's the argument that copyright stunts innovation. There are entire art movements and music genres that wouldn't exist if copyright was strictly enforced on art.
> Artists shape the space in which they’re generating output.
Art created by humans is not entirely original. Artists are inspired by each other, they follow trends and movements, and often tiptoe the line between copyright infringement and inspiration. Groundbreaking artists are rare, and if we consider that machines can create a practically infinite number of permutations based on their source data, it's not unthinkable that they could also create art that humans consider unique and novel, if nothing else because we're not able to trace the output to all of its source inputs. Then again, those human groundbreaking artists are also inspired by others in ways we often can't perceive. Art is never created in a vacuum. "Good artists copy; great artists steal", etc.
So I guess my point is: it doesn't make sense to apply copyright to art, but there's nothing stopping us from doing the same for machine-generated art, if we wanted to make our laws even more insane. And machine-generated art can also set trends and shape the space they're generated in.
The thing is that technology advances far more rapidly than laws do. AI is raising many questions that we'll have to answer eventually, but it will take a long time to get there. And on that path it's worth rethinking traditional laws like copyright, and considering whether we can implement a new framework that's fair towards creators without the drawbacks of the current system.
There are very few laws that are not giant ambiguities. Where is the line between murder, self-defense and accident? There are no lines in reality.
(A law about spectrum use, or registered real estate borders, etc. can be clear. But a large amount of law isn’t.)
Something must change regarding copyright and AI model training.
But it doesn’t have to be the law, it could be technological. Perhaps some of both, but I wouldn’t rule out a technical way to avoid the implicit or explicit incorporation of copyrighted material into models yet.
These things are very well and precisely defined in just about every jurisdiction. The "ambiguities" arise from ascertaining the facts of the matter, and whether a given set of facts fits within a specific set of rules.
> Something must change regarding copyright and AI model training.
Yes, but this problem is not specific to AI, it is the question of what constitutes a derivative, and that is a rather subjective matter in the light of the good ol' axiom of "nothing is new under the sun".
Yes, we have lots of wording attempting to be precise. And legal uses of terms are certainly more precise by definition and precedent than normal language.
But ambiguities about facts are only half of it. Even when all the facts appear to be clear, human juries have to use their subjective judgement to match what the law says, which may be clear in theory but is often subjective at the borders, against those facts. And reasonable people often differ on how they match the two up in borderline cases.
We resolve both types of ambiguities case-by-case by having a jury decide, which is not going to be consistent from jury to jury but it is the best system we have. Attorneys vetting prospective jurors are very much aware that the law comes down to humans interpreting human language and concepts, none of which are truly precise, unless we are talking about objective measures (like frequency band use).
---
> it is the question of what constitutes a derivative
Yes, the legal side can adapt.
And the technical side can adapt too.
The problem isn't that material was trained on, but that the resulting model facilitates reproducing individual works (or close variations), and repurposing individuals' unique styles.
I.e. they violate fair use by using what they learn in a way that devalues others' creative efforts. Being exposed to copyrighted works available to the public is not the violation. (Even though the way training currently happens does produce models that violate fair use.)
We need models that one way or another, stay within fair use once trained. Either by not training on copyrighted material, or by training on copyrighted material in a way that doesn't create models that facilitate specific reproduction and repurposing of creative works and styles.
This has already been solved for simple data problems, where memorization of particular samples can be precluded by adding noise to a dataset. Important generalities are learned, but specific samples don't leave their mark.
Obviously something more sophisticated would need to be done to preclude memorization of rich creative works and styles, but a lot of people are motivated to solve this problem.
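To make the "noise precludes memorization" idea concrete, here is a minimal sketch in the spirit of differential privacy: clip each sample's influence and add calibrated noise before learning from it. This is an illustration of the general technique, not any specific training pipeline; the function name and parameters are made up for the example.

```python
import numpy as np

def privatize(dataset, clip=1.0, noise_scale=0.5, seed=0):
    """Bound each sample's influence via norm clipping, then add
    Gaussian noise so no single sample leaves an exact mark while
    aggregate statistics remain learnable."""
    rng = np.random.default_rng(seed)
    out = []
    for x in dataset:
        norm = np.linalg.norm(x)
        if norm > clip:
            x = x * (clip / norm)  # limit any one sample's contribution
        out.append(x + rng.normal(0.0, noise_scale, size=x.shape))
    return np.asarray(out)

data = np.array([[3.0, 4.0], [0.1, 0.2], [1.0, 0.0]])
noisy = privatize(data)
# The mean of `noisy` is still roughly informative about `data`,
# but no individual sample can be read back exactly.
```

Real systems (e.g. DP-SGD) apply the same clip-and-noise idea to per-sample gradients during training rather than to the raw data, which is what gives formal guarantees that specific samples can't be recovered.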
Which is an approach that very much respects both fair use and copyright.
Taking or obtaining value from works is OK, up until the point where damage to the value of the original works happens. That is not OK, because copyright protects that value to incentivize the creation and sharing of works.
The problem is that models are shipping that inherently make it easy to reproduce copyrighted works, and to apply specific styles lifted from a single author's copyrighted body of work.
I am very strongly against this.
Note that prohibiting copying of a recognizable specific single author's style is even more strict than fair use limits on humans. Stricter makes sense to me, because unlike humans, models are mass producers.
So I am extremely respectful of protecting copyright value.
But it is not the same thing as not training on something. It is worth exploring training algorithms that can learn useful generalities about bodies of work, without retaining biases toward the specifics of any one work, or any single authored style. That would be in the spirit of fair use. You can learn from any art, if it's publicly displayed, or you have paid for a copy, but you can't create mass copiers of it.
Maybe that is impossible, but I doubt it. There are many ways to train that steer important properties of the resulting models.
Models that make it trivial to create new art deco works, consistent with the total body of art deco works: OK. Models that make it trivial to recreate Erté works, or works in an accurately Erté style specifically: not OK.
This sounds like gate-keeping rather than genuine copyright concerns.
> Models that make it trivial to create new art deco works, consistent with the total body of art deco works: OK. Models that make it trivial to recreate Erté works, or works in an accurately Erté style specifically: not OK.
Yeah, again, sounds like gate-keeping more than an economic and incentives argument which are, in my opinion, the only legitimate concerns underpinning copyright's moral ground.
Every step of progress has made doing things easier and easier, to the point that now arguing with some stranger across the world seems trivial, almost natural. Surely there are some arguments to curtail this dangerous machinery that undermines the control of information flow and corrupts the minds of the naive! We must shut it down!
Jokes aside, "making things easier/trivial" is the name of the game of progress. You can't stop progress. Everything will be easier and easier as time goes on.
The catch here is that a human can use a single sample as input, but AI needs a torrent of training data. Also, when AI generates permutations of samples, do their statistics match the training data?
Humans have that torrent of training data baked in from years of lived experience. That’s why people who go to art school or otherwise study art are generally (not always of course) better artists.
> Language developed independently of translators.
And it also developed independently of writers and poets.
> Artists shape the space in which they’re generating output.
Not writers and poets, apparently. And so maybe not even artists, who typically mostly painted book references. Color perception and symbolism developed independently of professional artists, too. Moreover, all of the things you mention predate copyright.
> The fear now is that if we no longer have a market where people generate novel arts, that space will stagnate.
But that will never happen; it's near-impossible to stop humans from generating novel arts. They just do it as a matter of course - and the more accessible the tools are, the more people participate.
Yes, memes are a form of art, too.
What's a real threat is the lack of shared consumption of art. This has been happening for the past couple of decades now, first with books, then with visual arts. AI will make this problem worse, both by further increasing the volume of "novel arts" and by enabling personalization. The real value we're losing is the role of art as social objects: the ability to relate to each other by experiencing the same works of art, and thus being able to discuss and reference them. If no two people ever experienced the same works of art, there's not much about art they can talk about; if there's no shared set of art seen by most people in a society, a social baseline is lost. That problem does worry me.
I don't think having an AI partner that is trained from zero, from childhood to adulthood, with goals such as "make me laugh" is too far-fetched. The problem is you will never be able to connect with this child, because the AI is feeding it insanely obscure, highly specific videos that match the kid's neurons perfectly.
I never thought I'd be thankful for global, toy-pushing franchises, but they at least serve as a social object for kids, when the current glut of kids videos on YouTube doesn't.
Most translation work is simple, just as the day-to-day of many creative professions is rather uncreative. But translating a book, comic, or movie requires creative decisions on how to best convey the original meaning in the idioms and cultural context of a different language. The difference between a good and a bad translation can be stark.
I don't see too many people defending artists also calling for people to start buying handmade clothing and fabrics again.
That said and because people on here are feisty, I have many artist friends and I deeply appreciate their work at the same time as appreciating how cool diffusion models are.
The difference being of course that we live in a modern society and we should be able to find a solution that works for all.
That said, humans can't even get something as basic as UBI in place, and humans consistently vote against each other in favour of discriminating on skin colour, sex, sexuality, and culture. Meanwhile, the billionaires that are soon to become trillionaires are actively defended by many members of our species, sometimes even by the poor. The industrial age broke our evolved monkey brains.
Also, in the case of graphic and voice artists, a unique style looks more valuable than the output itself, but style isn't protected by copyright.
It will be like furniture.
A long time ago, every piece of furniture was handmade. It might have been good furniture, or crude, poorly constructed furniture, but it was all quite expensive, in terms of hours per piece. Now, furniture is almost completely mass produced, and can be purchased in a variety of styles and qualities relatively cheaply. Any customization or uniqueness puts it right back into the hand-made category. And that arrangement works for almost everyone.
Media will be like that. There will be a vast quantity of personalized media of decent quality. It will be produced almost entirely automatically based on what the algorithm knows about you and your preferences.
There will be a niche industry of 'hand made' media with real acting and writing from human brains, but it will be expensive, a mark of conspicuous consumption and class differentiation.
The only discernible difference that won't be replicable is a cryptographic signature, a "Certified 100% Human-Made!" sticker, which will probably become the mark of the niche industry.
Somewhat more accurate analogy would be the custom car market. Beautiful collectible convertibles with fine detailing everywhere, priced thousands of times higher than normal cars, that actually run far worse and basically break apart after a few thousand miles and are impossible to find parts for. Automated factories certainly could churn them out but they don't because they're impractical poorly-designed status items kept artificially scarce for the very rich to peacock with.
Except AI will probably still produce equivalent impractical stuff anyway, just because production (digital and physical) will eventually be easy enough that resources are negligible, and everyone can have flashy impractical stuff. So again, only that "100% Human!" seal will distinguish, eventually.
If people instead care about the creation story and influences (the idea of "behind the scenes" and "creator interviews" for on demand ai generated media is pretty funny) then this won't have much value.
Time will tell - it's an exciting, discouraging time to be alive, which has probably always been the case.
This addresses one axis of development.
Meanwhile, there's lots of people around willing to express themselves for advertisement money.
Like with translation: We're going to see tool-assisted work where the tools get more and more sophisticated.
Your example with furniture is good. Another is cars: From horses to robotaxis. Humans are in the loop somewhere still.
She was lucky to be able to retire when she did, as the job of a translator is definitely going to become extinct.
You can already get higher-quality translations from machine learning models than from the majority of commercial human translations (save for the occasional mistakes, which you still need editors to fix), and it's only going to get better. And unlike human translators, LLMs don't mangle translations because they're too lazy to actually translate and just rewrite the text since that's easier, or (unfortunately this is becoming more and more common lately) deliberately mistranslate because of their personal political beliefs.
It also varies by language. Every time I give an example here of machine translated English-to-Chinese, it's so bad that the responses are all people who can read Chinese being confused because it's gibberish.
And as for politics, as Grok has just been demonstrating, they're quite capable of whatever bias they've been trained to have or told to express.
But it's worse than that, because different languages cut the world at different joints, so most translations have to make a choice between literal correctness and readability — for example, you can have gender-neutral "software developer" in English, but in German to maintain neutrality you have to choose between various unwieldy affixes such as "Softwareentwickler (m/w/d)" or "Softwareentwickler*innen" (https://de.indeed.com/karriere-guide/jobsuche/wie-wird-man-s...), or pick a gender because "Softwareentwickler" by itself means they're male.
I personally have no strong opinion on this, FWIW, just confirming GP's making a good point there. A translated word or phrase may be technically, grammatically correct, but still not be culturally correct.
That's not how you get good translations from off-the-shelf LLMs! If you give a model the whole book and expect it to translate it in one shot, it will eventually hallucinate and give you bad results.
What you want is to give it a small chunk of text to translate, plus the previously translated context, so that it can keep the continuity.
And for the best quality translations what you want is to use a dedicated model that's specifically trained for your language pairs.
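The chunk-plus-context loop described above can be sketched as follows. `call_llm` is a placeholder for whatever model API you actually use, not a real library call, and the prompt wording is just one plausible way to phrase it:

```python
def translate_chunked(paragraphs, call_llm, context_window=3):
    """Translate paragraph by paragraph, feeding the last few
    translated paragraphs back in as context so the model keeps
    terminology and continuity consistent across chunks."""
    translated = []
    for para in paragraphs:
        context = "\n".join(translated[-context_window:])
        prompt = ("Previously translated context:\n" + context +
                  "\n\nTranslate the next paragraph, keeping continuity:\n" + para)
        translated.append(call_llm(prompt))
    return translated

# Toy stand-in for a real model, just to show the data flow:
fake_llm = lambda prompt: "EN: " + prompt.rsplit("\n", 1)[-1]
result = translate_chunked(["Kapitel 1", "Es war einmal"], fake_llm)
# result == ["EN: Kapitel 1", "EN: Es war einmal"]
```

The sliding `context_window` is the key design choice: it keeps each request small enough to avoid the long-input hallucination problem while still giving the model enough prior output to stay consistent.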
> And as for politics, as Grok has just been demonstrating, they're quite capable of whatever bias they've been trained to have or told to express.
In an open ended questions - sure. But that doesn't apply to translations where you're not asking the model to come up with something entirely by itself, but only getting it to accurately translate what you wrote into another language.
I can give you an example. Let's say we want to translate the following sentence:
"いつも言われるから、露出度抑えたんだ。"
Let's ask a general purpose LLMs to translate it without any context (you could get a better translation if you'd give it context and more instructions):
ChatGPT (1): "Since people always comment on it, I toned down how revealing it is."
ChatGPT (2): "People always say something, so I made it less revealing."
Qwen3-235B-A22B: "I always get told, so I toned down how revealing my outfit is."
gemma-3-27b-it (1): "Because I always get told, I toned down how much skin I show."
gemma-3-27b-it (2): "Since I'm always getting comments about it, I decided to dress more conservatively."
gemma-3-27b-it (3): "I've been told so often, I decided to be more modest."
Grok: "I was always told, so I toned down the exposure."
And how humans would translate it:
Competent human translator (I can confirm this is an accurate translation, but perhaps a little too literal): "Everyone was always saying something to me, so I tried toning down the exposure."
Activist human translator: "Oh those pesky patriarchal societal demands were getting on my nerves, so I changed clothes."
(Source: https://www.youtube.com/watch?v=dqaAgAyBFQY)
It should be fairly obvious which one is the biased one, and I don't think it's the Grok one (which is a little funny, because it's actually the most literal translation of them all).
> That's not how you get good translations from off-the-shelf LLMs! If you give a model the whole book and expect it to translate it in one-shot then it will eventually hallucinate and give you bad results.
You're assuming something about how I used ChatGPT, but I don't know what exactly you're assuming.
> What you want is to give it a small chunk of text to translate, plus previously translated context so that it can keep the continuity
I tried translating a Wikipedia page to support a new language, and ChatGPT rather than Google translate because I wanted to retain the wiki formatting as part of the task.
LLM goes OK for a bit, then makes stuff up. I feed in a new bit starting from its first mistake, until I reach a list at which point the LLM invented random entries in that list. I tried just that list in a bunch of different ways, including completely new chat sessions and the existing session, it couldn't help but invent things.
> In an open ended questions - sure. But that doesn't apply to translations where you're not asking the model to come up with something entirely by itself, but only getting it to accurately translate what you wrote into another language.
"Only" rather understates how hard translation is.
Also, "explain this in Fortnite terms" is a kind of translation: https://x.com/MattBinder/status/1922713839566561313/photo/3
My Chinese isn't good enough to explain the difference between ice cream and gelato to my in-laws but ChatGPT gave me a good-enough output in seconds, this far exceeds anything that has come before. A friend (who speaks zero Chinese) was able to have conversations with his in-laws using one of those in-ear translation devices.
Normal people would never ever hire translators in this type of situations and now our spouses can also relax on vacation :)
To paraphrase Frank Zappa: art just needs a frame. If you poo on a table, that's not art. If you declare "my poo on the table will last from the idea until the poo disappears", then that is art. Similarly, Banksy is just graffiti unless you understand (or not) the framing of the work.
I'm not even sure if bilingualism is real or if it's just an alternate expression for relatively benign forced split personality. Could very well be.
Downgraded to AI training? Nonsense. You forget artists do more than just draw for money, we also draw for FUN, and that little detail escapes every single AI-related discussion I've been reading for the last 3 years.
It's a fact of life that creative production companies will always attempt to optimize costs -- which means most efficiently using any human labor.
In the 00s/10s, animation studios especially tried to do this with... mixed results (coughToeicough)
More capable models should allow better keyframe-to-keyframe animation.
Those shows are cheap because they employ fewer people. They still need to employ some people, though. To me the greater tragedy is that they make a product that the people who make it do not care about. People are working to make things they don't like because they need income to survive.
The problem is not that AI is taking jobs, it is that it is taking incomes. If we really are heading to a world where most jobs can be done by AI (I have my doubts about most, but I'll accept many), we need a strategy to preserve incomes. We already desperately need a system to prevent massive wealth inequality.
We need to have a discussion about the future we want to have, but we are just attacking the tools used by people making a future we don't want. We should be looking at the hands that hold the tools.
Discussions like this often lead to talking about universal basic income. I think that is a mistake. We need a broader strategy than that. The income needs to be far better than 'basic'. Education needs to change to developing the individual instead of worker units.
Imagine a world where the only TV shows made were the ones that could attract people who care about the program enough to offer their time to work on it.
That too would generate a lot of poor quality content, because not everyone is good at the things they like to do. It would be heartless to call it slop though. More importantly those people who are afforded the lifestyle that enables them to produce low quality things are doing precisely the work they need to be doing to become people who produce high quality things.
Some of those hands learning to make high quality things may be holding the tools of AI. People making things because they want to make will produce some incredible things with or without AI. A lot of astounding creations we haven't even seen or perhaps even imagined will be produced by people creatively using new tools.
(This is what I get for checking HN when I let the dog out to toilet in the middle of night)
Doesn’t sound too bad? It sounds like the premise of a dystopian novel. Most artists would be profoundly unhappy making “art” to be fed to and deconstructed by a machine. You’re not creating art at that point, you’re simply another cog feeding the machine. “Art” is not drawing random pictures. And how, pray tell, will these artists survive? Who is going to be paying them to “draw whatever they like” to feed to models? And why would they employ more than two or three?
> it still make me wonder (…) if we're going to start losing challenging styles (…) and everything will start 'felling' the same.
It already does. There are outliers, sure, but the web is already inundated by shit images which nonetheless fool people. I bet scamming and spamming with fake images and creating fake content for monetisation is already a bigger market than people “genuinely” using the tools. And it will get worse.
That's the definition of commercial art, which is what most art is.
> “Art” is not drawing random pictures.
It's exactly what it is, if you're talking about people churning out art by volume for money. It's drawing whatever they get told to, in endless variations. Those are the people you're really talking about, because those are the ones whose livelihoods are being consumed by AI right now.
The kind of art you're thinking of, the art that isn't just "drawing random pictures", the art that the term "deconstruction" could even sensibly apply to - that art isn't in as much danger just yet. GenAI can't replicate human expression, because models aren't people. In time, they'll probably become so, but then art will still be art, and we'll have bigger issues to worry about.
> There are outliers, sure, but the web is already inundated by shit images which nonetheless fool people. I bet scamming and spamming with fake images and creating fake content for monetisation is already a bigger market than people “genuinely” using the tools. And it will get worse.
Now that is just marketing communications - advertising, sales, and associated fraud. GenAI is making everyone's lives worse by making the job of marketers easier. But that's not really the fault of AI, it's just the people who were already making everything shitty picking up new tools. It's not the AI that's malevolent here, it's the wielder.
Open it for all or nothing.
It’s a very emotionally loaded space for many, meaning most comments I read lean to the extremes of either argument, so seeing a comment like yours that combines both makes me curious.
Would be interesting to hear a bit more about how you see the role of copyright in the AI space.
I also think AI is the next evolution of humanity.
The role of the artist has always been to provide excellent training data for future minds to educate themselves with.
This is why public libraries, free galleries, etc are so important.
Historically, art has been ‘best’ when the process of its creation has been heavily funded by a wealthy body (the church or state, for example).
‘Copyright’, as a legal idea, hasn’t existed for very long, relative to ‘subsidizing the creation of excellent training data’.
If ‘excellent training data for educating minds’ genuinely becomes a bottleneck for AI (though I’d argue it’s always a bottleneck for humanity!), funding its creation seems a no-brainer for an AI company, though they may balk at the messiness of that process.
I would strongly prefer that my taxes paid for this subsidization, so that the training data could be freely accessed by human minds or other types of mind.
Copyright isn’t anything more than a paywall, in my opinion. Art isn’t for revenue generation - it’s for catalyzing revenue generation.
We are not aware of the implications of this sentence. This is it. The only "source" is play. Joyful play.
With AI tools artists will be able to push further, doing things that AI can't do yet.
Every artist worth anything strives to get better at their craft daily. If an artist gets discouraged because there's something "better", that artist is not good, because those negative emotions come from a competitive place instead of one of self-improvement and care for their craft or the audience. Art is only a competition with oneself, and artists who don't understand or refuse this fact are doomed from the start.
10 years ago: "real text cannot be generated like stock phrases, so writing will be nearly forever powered by human writers."
Obviously we have synthetic graphics (like synthetic text). So something else must be meant by "art" here.
The machine could also just produce lots of examples and test them on a large number of humans - in which case none of them individually is the artist, but the art is still being produced.
In terms of losing styles, that has already been happening for ages. Disney moved to xeroxing instead of inking, which changed the style, because inking was "too hard". In the late 90s/early 2000s we saw a burst of cartoons with a Flash animation style on TV because it was a lot easier and cheaper to animate in Flash.
They took the weaknesses of last year's style transfer models and used them as a style, working around and with their shortcomings and weaknesses. That is a far cry from "type a prompt and be done with it".
Secondly, I think the story is fun and the whole thing is fun, not in a "Will Smith eats spaghetti" kind of way, but fun as in an actually fun short film.
I think it shows that AI can be a tool that empowers creativity and creative work and more than a power point stock photo generator.
Fair. But I wouldn't say that automatically translates to "absolutely great" or that it should "by all accounts be called art". Though there is a high degree of subjectivity there, which is why I simply said I disagree.
> actually fun short film
Sure, you like what you like, no judgement. But again, “it’s a short exaggerated parody and we have a high tolerance for flaws in comedy”. Had they tried to make something more substantial, serious, provocative, emotional, or slower paced, I believe they would’ve fallen flat.
> I think it shows that AI can be a tool that empowers creativity and creative work
Plenty of actual artists disagreed, though. And the backlash wasn't just limited to the end product, but also to Corridor Crew's attitude towards it (creator intentions matter a lot when defining something as art) and their lack of understanding of the very real and very negative impacts on the industry.
Of course it can be, you're seeing it first hand with your very own eyes.
There's a difference, in my mind at least. "Art" is cultural activity and expression; there needs to be intent, creativity, imagination...
A printer spooling out wallpaper is not making art, even if there was artistry involved in making the initial pattern that is now being spooled out.
When I see generative-AI-produced illustrations, they'll usually be at least aesthetically pleasing. But sometimes they are already more than that. I've found lots and lots of illustrations that deliver higher-level experiences going beyond mere aesthetic quality: they deliver on the goal those aesthetics were employed for in the first place. Whether this happens through tedious prompting and "borrowed" illustrational techniques is difficult to debate right now, but based on what I've seen of this field so far, and given my views and definitions, I have absolutely zero doubt that AI will generate artworks that are more and more "legitimately" artful, that there's no hard dividing line one can draw between these and man-made art, and that what difference does exist now will gradually fade away.
I do not believe that humans are any more special than whatever the mere fact of being human grants them. Which, it seems, is ever-dwindling now.
AI is technically another tool, and it can be used poorly (what people refer to as "AI slop": using default settings, some LoRA, and calling it a day) and it can be used properly (forcing compositions, editing, fixing errors...) to convey an idea or emotion, or tell a story. A critical eye does the rest.
After all, the machine doesn't do anything on its own; it needs a driver. The quality of the output is directly proportional to the operator's passion.
Consider zero and single click deployments in IT operations. With single click deployments, you need to have everything automated, but the go sign is still given by a human. With zero click, you'll have a deployment policy instead - the human decision is now out of the critical path completely, and only plays part during the authoring and later editing of said policy. And you can also then generate those policies, and so on.
Same can be applied to AI. You can have canned prompts that you keep refining to encode your intent and preferences, then you just use them with a single click. But you can also build a harness that generates prompts and continuously monitors trends and the world as a whole for any kind of arbitrary criteria (potentially of its own random or even shifting choice), and then follows that: a reward policy. And then like with regular IT, you can keep layering onto that.
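The single-click vs. zero-click distinction above can be sketched in a few lines. This is a hypothetical illustration of the layering idea, not any real deployment system; all names are invented.

```python
from typing import Callable, Iterable, List

def single_click(deploy: Callable[[], str]) -> str:
    """Everything is automated, but a human still pushes the button."""
    return deploy()

def zero_click(events: Iterable[str],
               policy: Callable[[str], bool],
               deploy: Callable[[str], str]) -> List[str]:
    """The human decision is encoded once, as a policy, then applied
    automatically to every incoming event -- no human in the loop."""
    return [deploy(e) for e in events if policy(e)]

# Usage: a policy that acts only on tagged releases.
results = zero_click(
    ["commit:abc", "tag:v1.2", "commit:def", "tag:v1.3"],
    policy=lambda e: e.startswith("tag:"),
    deploy=lambda e: f"deployed {e}",
)
# results == ["deployed tag:v1.2", "deployed tag:v1.3"]
```

The canned-prompt analogy maps onto `deploy` (the encoded intent) and the trend-monitoring harness onto `policy`; each layer can itself be generated by the one above it.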
Because of this, I don't think intent is necessarily the point of differentiation, but rather the experience and shared understanding of human intent: that people have varying, individual, arbitrary preferences, live through lives of arbitrary and endless differences, and then draw from those to create. Indeed, this is never going to be replicated, exactly because of what I said: this is humans being human, and that gives them a unique, inalienable position by definition.
It's like if instead of planes we called aircraft "mechanical birds" and dunked on them for not flying by flapping their wings, despite their more than a century long history and claims of "flying". But just like I think planes do actually fly, I do also think that these models produce art. [0]
Is it? I have no knowledge of this product, but I recall Novel AI paid for a database of tagged anime-style images. It's not impossible for something similar to have happened here.
That isn't to say that they purchased everything they have ever used. Nor do I care if they have.
Examples:
Disney isn't going to start using AI art. But all those gacha games on the iOS app store are ABSOLUTELY going to. And I suspect gacha apps support at least 10-100x more artists than Disney staffs.
Staff engineers aren't going anywhere - AI can't tell leadership the truth. But junior engineers are going to be gutted by this, because now their already somewhat dubious direct value proposition - turning tickets into code while they train up enough to participate more in the creative and social process of Software Engineering - now gets blasted by LLMs. Mind you, I don't personally hold this ultra-myopic view of juniors - but mgmt absolutely does, and they pick headcount.
Hmm, y'know, I could actually see Big Books getting the "top" end eaten by AI instead of the bottom: all the penny dreadfuls you see lining the shelves of Barnes and Noble. Meanwhile, the truly creative work already happens at the bottom anyway, and is self-published.
Also, as someone who's watched copyright from the perspective of a GPL fanboy, good fucking luck actually enforcing anything copyright related. The legal system is pay to play and if you're a small (or even medium!) fry, you will probably never even know your copyright is being violated. Much less enforcing it or getting any kind of judgement.
The result will be less original art. They will simply stop creating it or publishing it.
IMO music streaming has similarly led to a collapse in quality music artistry, as fewer talented individuals are incentivised to go down that path.
AI will do the same for illustration.
It won’t do the same for _art_ in the “contemporary art” sense, as great art is mostly beyond the abilities of AI models. That’s probably an AGI complete task. That’s the good news.
I’m kinda sad about it. The abilities of the models are impressive, but they rely on harvesting the collective efforts of so many talented and hardworking artists, who are facing a double whammy: their own work is being dubiously used to put them out of a job.
Sometimes I feel like the tech community had an opportunity to create a wonderful future powered by technology. And what we decided to do instead was enshittify the world with ads, undermine the legal system, and extract value from people’s work without their permission.
Back in the day real hackers used to gather online to “stick it to the man”. They despised the greed and exploitation of Wall Street. And now we have become torch bearers for the very same greed.
Is there data for this? I feel there are more musicians than ever, more very talented musicians than ever, and the most famous ones are more famous than ever, so I would like to see whether that's correct.
I think there are more musicians with reach than ever.
I would say it’s very likely there are far fewer musicians making a living from their music than there were in the past. That’s the key difference.
And the truth is that for most people incentives matter, so not being able to make a living from music means very talented people who are financially motivated (ie most of them) do something else instead.
I wonder if there is a mitigation strategy for this. Is there a way to make (human-made-art) scraping robustly difficult, while leaving human discovery and exploration intact?
Art stealing is a thing. I've had my art stolen regularly. Multiple Doom mods use sprites I made, and only one person (the DRLA guy) asked for permission. I've had my art traced and even used in advertisements, with me only finding out by sheer chance. I've had people use it for coloring without crediting the source. This has happened for more than thirty years. You can only learn to live with it, lest you risk going absolutely insane. If you are popular, people will do stupid stuff with your stuff. And if you aren't popular, your art is not going to be used for training anyway (training sets are ordered by popularity and only the top stuff gets used; the one with 3 upvotes is not going in).
The rise of GPT slop is making it increasingly clear to me that this distinction doesn't exist, and it's just an under-appreciation of the skill that goes into good writing. That thing where LLMs generate overly-wordy mealy-mouthed text is just what bad writing looks like: the writing equivalent of a bad drawing. Subtle inaccuracies and ill-fitting metaphors are just the text version of visual artifacts.
Not to diminish the plight of art and artists, but it's the same as the plight of writers and writing. Writers are also having their copyrighted works used against their will to destroy their own industry. LLMs also need big human-written datasets to keep the magic running, that are drying up as they get poisoned by their own output.
1. Haruhi is based on light novels, so it has to actually perform to get a release. The Japanese market is upside down: the anime often goes free-to-air to support a manga release, where the real money is made (I have no idea how this works economically; it's just how it was explained to me). As there aren't any more manga or light novels to release, the likelihood of another season is low. It was sort of always a passion project.
2. The studio was firebombed. https://en.wikipedia.org/wiki/Kyoto_Animation_arson_attack
3. Season 2 was critically panned, but I dunno I thought it was pretty genius.
My suggestion: watch both series, then read the English translation of the novels.
Also don't forget to watch Disappearance after the 2 seasons.
The IP is likely DOA anyway, as it's on indefinite hiatus.
I don't know how anime series economics works these days -- AIUI traditionally the live late night TV broadcast was effectively an advert to get the hardcore fans to buy the extremely expensive Japanese market DVD/bluray sets, which were what brought in the money. But I expect streaming has changed things a lot.
I'd assume streaming services have changed the industry. Things like Devilman Crybaby wouldn't have been released if Netflix wasn't involved.
https://goto.isaac.sh/neon-anisora
Prompt: The giant head turns to face the two people sitting.
Oh, there is a docs page with more examples:
https://pwz4yo5eenw.feishu.cn/docx/XN9YdiOwCoqJuexLdCpcakSln...
> a variable-length training approach is adopted, with training durations ranging from 2 to 8 seconds. This strategy enables our model to generate 720p video clips with flexible lengths between 2 and 8 seconds.
I'd like to see it benched against FramePack which in my experience also handles 2d animation pretty well and doesn't suffer from the usual duration limitations of other models.
Looks incredibly impressive btw. Not sure it's wise to call it `AniSora` but I don't really know.
> This model has 1 file scanned as unsafe. testvl-pre76-top187-rec69.pth
Hm, perhaps I'll wait for this to get cleared up?
I wouldn't expect this from Bilibili's Index Team, though, given how high profile they are. It's probably(?) a false positive. Though I wouldn't use it personally, just to be safe.
The safetensors format should be used by everyone. Raw .pth files and pickle files should be shunned and abandoned by the industry; pickle is a bad format.
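The reason pickle-based checkpoints are flagged as unsafe is that unpickling can execute arbitrary code: a `__reduce__` method tells the unpickler what callable to invoke at load time. A minimal stdlib demonstration (the `record` function stands in for something malicious like `os.system`):

```python
import pickle

calls = []

def record(msg):
    """Stand-in for a malicious payload (imagine os.system instead)."""
    calls.append(msg)
    return msg

class Payload:
    """A 'model checkpoint' object that runs code when loaded."""
    def __reduce__(self):
        # pickle.loads() CALLS this (callable, args) pair to rebuild
        # the object -- that call is the code-execution hole.
        return (record, ("arbitrary code executed during load",))

blob = pickle.dumps(Payload())   # what a malicious .pth file contains
result = pickle.loads(blob)      # "loading the checkpoint" runs record()
```

safetensors avoids this by design: the file is just a JSON header plus raw tensor bytes, with nothing executable to evaluate at load time.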
https://huggingface.co/Disty0/Index-anisora-5B-diffusers
For the record, the dev branch of SD.Next (https://github.com/vladmandic/sdnext) already supports it.
Given that OpenAI call themselves "Open", I think it's great and hilarious that we're reusing their names.
There was OpenSora from around this time last year:
https://github.com/hpcaitech/Open-Sora
And there are a lot of other products calling themselves "Sora" as well.
It's also interesting to note that OpenAI recently redirected sora.com, which used to be its own domain, to sora.chatgpt.com.
Probably to share cookies.
We need cross-domain cookies. Google took them away so they could further entrench their analytics and ads platform. Abuse of monopoly power.
We use first-party cookies for session management.
We use APIs and signed tokens (JWT) to federate across domains without leaking user data.
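The signed-token approach can be sketched with the stdlib alone. This is an illustrative toy, not the parent's actual setup: a real deployment would use a vetted JWT library plus expiry and audience checks, and `SECRET` is assumed to be shared between the cooperating domains out of band.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-between-domains"  # assumption: provisioned out of band

def sign(claims: dict) -> str:
    """Encode claims and append an HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify(token: str):
    """Return the claims if the signature checks out, else None."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered, or signed with a different secret
    return json.loads(base64.urlsafe_b64decode(body))

token = sign({"user": "alice", "scope": "session"})
```

Because verification needs only the shared secret, domain B can trust a session minted by domain A without any cross-domain cookie ever being set.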
The ones hurt by the death of third-party cookies are ad tech parasites who refused to innovate imho...
Also: tech should be easier, not harder.
Building this shouldn't take more than an hour, yet somehow we did this to ourselves.
Current stance:
https://www.copyright.gov/newsnet/2025/1060.html
“It concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements”.
If it isn’t covered (after all, it’s the AI that drew all the pictures), then anyone using such a service to produce a movie would be screwed: anyone could copy it or its characters.
I’m leaving out the problem of whether the service was trained on copyright material or not.
In all seriousness I wonder where is this all headed? Are people long term going to be more forgiving of visual artifacts if it will mean that their favourite franchise gets another season? Or will generated imagery be shunned just like the not-so-subtle use of 3D models?
Toei Animation is looking to utilize AI in areas such as
storyboarding, coloring, and “color specification,” as
well as in-between animation and backgrounds.
The specific use cases mentioned include:
• Storyboarding: Leveraging AI to “generate simple
layouts and shooting of the storyboards.”
• Colors: Employing AI to “specify colors and
automatically correct colors.”
• In-betweens: Utilizing AI to “automatically correct
line drawings and generate in-betweens.”
• Backgrounds: Using AI to “generate backgrounds from
a photo.”
Source: https://www.japannihon.com/toei-animation-discusses-ai-use-i...

I think this is fine. The director will still make sure there are no visual artifacts. On the other hand, indies will be able to create their own works, maybe with some warts, but better than nothing.
But seriously, I had the same thought, considering the general lack of guardrails surrounding high-profile Chinese genAI models... Eventually, someone will know the answer... It's inevitable...
I know there is a huge market for those excited for infinite anime music videos and all things anime.
This is great for an abundance of content and everyone will become anime artists now.
Japan truly is embracing AI, and there will be new jobs for everyone thanks to the boom AI is creating, as well as Jevons paradox, which will create huge demand.
Even better if this is open source.
Nowadays it seems everyone is interested in the "anime style" of content, but most of what I see is terrible in terms of quality. It seems quantity has increased so much in the last 30 years that it only made the quality stuff more invisible, and we are inundated with anime-like trash.
YouTube channels like Mother’s Basement help with picking out something to watch. Geoff has routinely pointed out how he literally watches anime for a living and it’s still hard to watch everything worthy he finds.
Video titles are pretty self-explanatory. If you want to find something to watch, fire up one of “The BEST Anime of [season] [year]” and you’ll get plenty of recommendations, nicely ordered and with some short explanation of what it is about and why it’s noteworthy.
I find I don't like "fanservice" (boob jokes etc.), or at least find it to be a signifier of poor quality. So I use animefeminist.com for recommendations, since it's pretty effective at filtering out (or at least warning of in advance) that kind of red flag, and also for their ranking of seasonal anime. (If you use this method, make sure to look for their "recommendation," "digest," and "three-episode check-in" articles specifically). This improves things from about 1% chance of enjoyment to probably 1-in-15 chance of enjoyment. On average, each season has an anime I find is not bad, but only every 2-3 years or so is there an anime I unabashedly love.
(Of course, there may be shows the site rules out which I really loved -- for instance, my all-time favourite anime is Attack on Titan, which is blacklisted on anifem because someone once wrote an article on polygon about how its overtly pro-jewish anti-fascist allegory is anti-semitic somehow. I think that's a load of bull, but probably not enough of a problem for me to stop using this method of finding anime.)
Have you seen Psycho-Pass? I wonder what your site has to say about it, as it seemed kind of a feminist work to me.
You might find this site interesting:
Regarding the Japanese interpretation of the allegory -- I don't think it's apologist toward fascism really, because essentially every side in the conflict has fascist elements. In that sense, it's more "some situations just don't have any good solutions." But what's clear to me is that the situation itself has fascist roots. Regardless, the Polygon article in question has a much more surface-level reading -- it is clearly stating that it's antisemitic, which I simply can't see at all. Spoilers rot13: vg'f gehr gur gvgnaf ner ~wrjf, ohg guvf vtaberf gung gurl ner gur perngvbaf bs ~anmv rkcrevzragf, naq nyfb gung nyy gur cebgntbavfgf ner ~wrjf nf jryy.
I don't understand what I'm looking at with the site you linked, but I am intrigued.
> I don't understand what I'm looking at with the site you linked, but I am intrigued.
It’s kind of like an interactive bibliography with commentary?
And in fact we seem to have a once-in-a-decade alignment of talent (starting in 2023 with Season 1) with Frieren.
In the end, there will still be quality content but it will be much more expensive, available only to an elite.
Then the elite will know what's good quality and will be able to produce more good quality. Those will be hired.
The vast majority, only exposed to bad quality, will not be able to produce quality anymore. And won't be hired anymore.
And so here is your great quality divide.
I don't think they'd be artists, but AI-prompters, although you're right that there will be a huge flood of content.
Wan2.1 is great. Does this mean anisora is also 16fps?
There are increasingly more reports of foreign scalpers stocking up on copies of $5 doujinshi at weekend cons and demanding receipts, and authors are moving to block them. That's like mafias genuinely smuggling charity home-baked cookies. It shouldn't make sense. This astronomical gap between supply and demand, alone, should be enough to create incentives for people to mess up and ruin the market.
I haven’t heard about this. Do you have a link to some more info about this?
You could argue that those tools in the hands of skilled craftsmen will create amazing things faster, but we all know what will actually happen: an absolute flood of AI slop in every entertainment category.
South Park looks like MS Paint drawings hastily animated by someone without access to Adobe Animate. It still manages to be a good and beloved show because it shines in other ways.
The world of entertainment is big enough for both Studio Ghibli productions and South Park to exist. AI slop will find its niche too. It will consume some animation jobs, just as all the automation and tooling that came before has, but I'm of the strong belief there will still be a market for good handmade art.
You understand that China has a "different" view on copyright, licensing, etc., right?
It's the equivalent of Crunchyroll putting out a video generation model. If the rightsholders disagree with this usage, it'll come up during the negotiations for new releases.
How can you prove it, then? It's literally the same way OpenAI used Ghibli material, and they can't do anything about it.
Bilibili does have an existing business based on licensing Studio Ghibli content, so Studio Ghibli can threaten to refuse to sell them distribution rights for future releases, even without a lawsuit.
Then tell me why the Chinese government's stance on this matters, because I can tell that what Meta is doing is illegal, but I can't say the same about a Chinese company doing it in mainland China.