Art is something that cannot be generated the way synthetic text can, so it will have to be powered by human artists nearly forever, or else you will keep ending up with artifacting. It makes me wonder whether artists will just be downgraded to an "AI training" position, but that could be for the best: people could draw what they like and have that input feed into a model for training, which doesn't sound too bad.
While I'm very pro-AI when it comes to trademark and copyright, it still makes me wonder what will happen to all the people who provided us with entertainment, whether the quality will continue to increase, or if we're going to start losing challenging styles because "it's too hard for AI" and everything will start 'feeling' the same.
It doesn't feel the same as people being replaced by computers and machines; this feels like the end of a road.
By the time my mom retired as a translator, she had gone from a typewriter to machine-assisted translation with centralised corpus databases. All the while, the available work kept shrinking and the wages kept falling.
In the end, the work we do that is heavily robotic will be done by less expensive robots.
The output of her translations had no copyright. Language developed independently of translators.
The output of artists has copyright. Artists shape the space in which they’re generating output.
The fear now is that if we no longer have a market where people generate novel arts, that space will stagnate.
This means a book can be in public domain for the original text, because it's very old, but not the translation because it's newer.
For example, Julius Caesar's "Gallic War" in the original Latin is clearly not subject to copyright, but a recent English translation will be.
> Language developed independently of translators.
And it also developed independently of writers and poets.
> Artists shape the space in which they’re generating output.
Not writers and poets, apparently. And so maybe not even artists, who typically mostly painted book references. Color perception and symbolism developed independently of professional artists, too. Moreover, all of the things you mention predate copyright.
> The fear now is that if we no longer have a market where people generate novel arts, that space will stagnate.
But that will never happen; it's near-impossible to stop humans from generating novel arts. They just do it as a matter of course - and the more accessible the tools are, the more people participate.
Yes, memes are a form of art, too.
What's a real threat is the lack of shared consumption of art. This has been happening for the past couple of decades now, first with books, then with visual arts. AI will make this problem worse, both by further increasing the volume of "novel arts" and by enabling personalization. The real value we're losing is the role of art as a social object: the ability to relate to each other by experiencing the same works of art, and thus being able to discuss and reference them. If no two people ever experience the same works of art, there's not much about art they can talk about; if there's no shared set of art seen by most people in a society, a social baseline is lost. That problem does worry me.
Copyright is a very messy and divisive topic. How exactly can an artist claim ownership of a thought or an image? It is often difficult to ascertain whether a piece of art infringes on the copyright of another. There are grey areas like "fair use", which complicate this further. In many cases copyright is also abused by holders to censor art that they don't like for a myriad of unrelated reasons. And there's the argument that copyright stunts innovation. There are entire art movements and music genres that wouldn't exist if copyright was strictly enforced on art.
> Artists shape the space in which they’re generating output.
Art created by humans is not entirely original. Artists are inspired by each other, they follow trends and movements, and often tiptoe the line between copyright infringement and inspiration. Groundbreaking artists are rare, and if we consider that machines can create a practically infinite number of permutations based on their source data, it's not unthinkable that they could also create art that humans consider unique and novel, if nothing else because we're not able to trace the output to all of its source inputs. Then again, those human groundbreaking artists are also inspired by others in ways we often can't perceive. Art is never created in a vacuum. "Good artists copy; great artists steal", etc.
So I guess my point is: it doesn't make sense to apply copyright to art, but there's nothing stopping us from doing the same for machine-generated art, if we wanted to make our laws even more insane. And machine-generated art can also set trends and shape the space they're generated in.
The thing is that technology advances far more rapidly than laws do. AI is raising many questions that we'll have to answer eventually, but it will take a long time to get there. And on that path it's worth rethinking traditional laws like copyright, and considering whether we can implement a new framework that's fair towards creators without the drawbacks of the current system.
There are very few laws that are not giant ambiguities. Where is the line between murder, self-defense and accident? There are no lines in reality.
(A law about spectrum use, or registered real estate borders, etc. can be clear. But a large amount of law isn’t.)
Something must change regarding copyright and AI model training.
But it doesn’t have to be the law, it could be technological. Perhaps some of both, but I wouldn’t rule out a technical way to avoid the implicit or explicit incorporation of copyrighted material into models yet.
These things are very well and precisely defined in just about every jurisdiction. The "ambiguities" arise from ascertaining the facts of the matter, and whether those facts fit within a specific set of rules.
> Something must change regarding copyright and AI model training.
Yes, but this problem is not specific to AI, it is the question of what constitutes a derivative, and that is a rather subjective matter in the light of the good ol' axiom of "nothing is new under the sun".
The catch here is that a human can use a single sample as input, but AI needs a torrent of training data. Also, when AI generates permutations of samples, do their statistics match the training data?
Humans have that torrent of training data baked in from years of lived experience. That’s why people who go to art school or otherwise study art are generally (not always of course) better artists.
Most translation work is simple, just as the day-to-day of many creative professions is rather uncreative. But translating a book, comic or movie requires creative decisions about how to best convey the original meaning in the idioms and cultural context of a different language. The difference between a good and a bad translation can be stark.
Also, in the case of graphic and voice artists, a unique style looks more valuable than the output itself, but style isn't protected by copyright.
It will be like furniture.
A long time ago, every piece of furniture was handmade. It might have been good furniture, or crude, poorly constructed furniture, but it was all quite expensive, in terms of hours per piece. Now, furniture is almost completely mass produced, and can be purchased in a variety of styles and qualities relatively cheaply. Any customization or uniqueness puts it right back into the hand-made category. And that arrangement works for almost everyone.
Media will be like that. There will be a vast quantity of personalized media of decent quality. It will be produced almost entirely automatically based on what the algorithm knows about you and your preferences.
There will be a niche industry of 'hand made' media with real acting and writing from human brains, but it will be expensive, a mark of conspicuous consumption and class differentiation.
This addresses one axis of development.
Meanwhile, there's lots of people around willing to express themselves for advertisement money.
Like with translation: We're going to see tool-assisted work where the tools get more and more sophisticated.
Your example with furniture is good. Another is cars: From horses to robotaxis. Humans are in the loop somewhere still.
If people instead care about the creation story and influences (the idea of "behind the scenes" featurettes and "creator interviews" for on-demand AI-generated media is pretty funny), then this won't have much value.
Time will tell - it's an exciting, discouraging time to be alive, which has probably always been the case.
She was lucky to be able to retire when she did, as the job of a translator is definitely going to become extinct.
You can already get higher-quality translations from machine learning models than you get from the majority of commercial human translations (save for the occasional mistake, which you still need editors to fix), and it's only going to get better. And unlike human translators, LLMs don't mangle translations because they're too lazy to actually translate and just rewrite the text instead, or (unfortunately this is becoming more and more common lately) deliberately mistranslate because of their personal political beliefs.
It also varies by language. Every time I give an example here of machine translated English-to-Chinese, it's so bad that the responses are all people who can read Chinese being confused because it's gibberish.
And as for politics, as Grok has just been demonstrating, they're quite capable of whatever bias they've been trained to have or told to express.
But it's worse than that, because different languages cut the world at different joints, so most translations have to make a choice between literal correctness and readability — for example, you can have gender-neutral "software developer" in English, but in German to maintain neutrality you have to choose between various unwieldy affixes such as "Softwareentwickler (m/w/d)" or "Softwareentwickler*innen" (https://de.indeed.com/karriere-guide/jobsuche/wie-wird-man-s...), or pick a gender because "Softwareentwickler" by itself means they're male.
I personally have no strong opinion on this, FWIW, just confirming GP's making a good point there. A translated word or phrase may be technically, grammatically correct, but still not be culturally correct.
That's not how you get good translations from off-the-shelf LLMs! If you give a model the whole book and expect it to translate it in one shot, it will eventually hallucinate and give you bad results.
What you want is to give it a small chunk of text to translate, plus previously translated context so that it can keep the continuity.
And for the best quality translations what you want is to use a dedicated model that's specifically trained for your language pairs.
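For illustration, here's a minimal sketch of that chunk-plus-context loop, assuming the OpenAI Python client; the model name, chunking and prompt wording are placeholders, not a recommendation:

    # Sketch: translate a long text chunk by chunk, feeding the last few
    # translated chunks back in so the model can keep names, tone and
    # continuity consistent. Assumes the OpenAI Python client; the model
    # name and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def translate_chunks(chunks, source_lang="Japanese", target_lang="English", context_size=3):
        translated = []
        for chunk in chunks:
            # Only the most recent translations are passed along as context.
            context = "\n".join(translated[-context_size:])
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model
                messages=[
                    {"role": "system",
                     "content": f"You translate {source_lang} fiction into natural {target_lang}. "
                                "Preserve names, tone and continuity with the prior translation."},
                    {"role": "user",
                     "content": f"Previously translated context:\n{context}\n\nTranslate this passage:\n{chunk}"},
                ],
            )
            translated.append(resp.choices[0].message.content.strip())
        return "\n".join(translated)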
> And as for politics, as Grok has just been demonstrating, they're quite capable of whatever bias they've been trained to have or told to express.
In an open ended questions - sure. But that doesn't apply to translations where you're not asking the model to come up with something entirely by itself, but only getting it to accurately translate what you wrote into another language.
I can give you an example. Let's say we want to translate the following sentence:
"いつも言われるから、露出度抑えたんだ。"
Let's ask a general purpose LLMs to translate it without any context (you could get a better translation if you'd give it context and more instructions):
ChatGPT (1): "Since people always comment on it, I toned down how revealing it is."
ChatGPT (2): "People always say something, so I made it less revealing."
Qwen3-235B-A22B: "I always get told, so I toned down how revealing my outfit is."
gemma-3-27b-it (1): "Because I always get told, I toned down how much skin I show."
gemma-3-27b-it (2): "Since I'm always getting comments about it, I decided to dress more conservatively."
gemma-3-27b-it (3): "I've been told so often, I decided to be more modest."
Grok: "I was always told, so I toned down the exposure."
And how humans would translate it:
Competent human translator (I can confirm this is an accurate translation, but perhaps a little too literal): "Everyone was always saying something to me, so I tried toning down the exposure."
Activist human translator: "Oh those pesky patriarchal societal demands were getting on my nerves, so I changed clothes."
(Source: https://www.youtube.com/watch?v=dqaAgAyBFQY)
It should be fairly obvious which one is the biased one, and I don't think it's the Grok one (which is a little funny, because it's actually the most literal translation of them all).
To paraphrase Frank Zappa... art just needs a frame. If you poo on a table... not art. If you declare 'my poo on the table will last from the idea until the poo disappears', then that is art. Similarly, Banksy is just graffiti unless you understand (or not) the framing of the work.
I'm not even sure if bilingualism is real or if it's just an alternate expression for relatively benign forced split personality. Could very well be.
Downgraded to AI training? Nonsense. You forget artists do more than just draw for money, we also draw for FUN, and that little detail escapes every single AI-related discussion I've been reading for the last 3 years.
Those shows are cheap because they employ fewer people. They still need to employ some people, though. To me the greater tragedy is that they make a product that the people who make it do not care about. People are working to make things they don't like because they need income to survive.
The problem is not that AI is taking jobs, it is that it is taking incomes. If we really are heading to a world where most jobs can be done by AI (I have my doubts about most, but I'll accept many), we need a strategy to preserve incomes. We already desperately need a system to prevent massive wealth inequality.
We need to have a discussion about the future we want to have, but we are just attacking the tools used by people making a future we don't want. We should be looking at the hands that hold the tools.
Discussions like this often lead to talking about universal basic income. I think that is a mistake. We need a broader strategy than that. The income needs to be far better than 'basic'. Education needs to change to developing the individual instead of worker units.
Imagine a world where the only TV shows that got made were the ones that could attract people who care about the program enough to offer their time to work on it.
That too would generate a lot of poor-quality content, because not everyone is good at the things they like to do. It would be heartless to call it slop, though. More importantly, the people who are afforded a lifestyle that enables them to produce low-quality things are doing precisely the work they need to do to become people who produce high-quality things.
Some of those hands learning to make high quality things may be holding the tools of AI. People making things because they want to make will produce some incredible things with or without AI. A lot of astounding creations we haven't even seen or perhaps even imagined will be produced by people creatively using new tools.
(This is what I get for checking HN when I let the dog out to toilet in the middle of night)
Doesn’t sound too bad? It sounds like the premise of a dystopian novel. Most artists would be profoundly unhappy making “art” to be fed to and deconstructed by a machine. You’re not creating art at that point, you’re simply another cog feeding the machine. “Art” is not drawing random pictures. And how, pray tell, will these artists survive? Who is going to be paying them to “draw whatever they like” to feed to models? And why would they employ more than two or three?
> it still makes me wonder (…) if we're going to start losing challenging styles (…) and everything will start 'feeling' the same.
It already does. There are outliers, sure, but the web is already inundated by shit images which nonetheless fool people. I bet scamming and spamming with fake images and creating fake content for monetisation is already a bigger market than people “genuinely” using the tools. And it will get worse.
That's the definition of commercial art, which is what most art is.
> “Art” is not drawing random pictures.
It's exactly what it is, if you're talking about people churning out art by volume for money. It's drawing whatever they get told to, in endless variations. Those are the people you're really talking about, because those are the ones whose livelihoods are being consumed by AI right now.
The kind of art you're thinking of, the art that isn't just "drawing random pictures", the art that the term "deconstruction" could even sensibly apply to - that art isn't in as much danger just yet. GenAI can't replicate human expression, because models aren't people. In time, they'll probably become so, but then art will still be art, and we'll have bigger issues to worry about.
> There are outliers, sure, but the web is already inundated by shit images which nonetheless fool people. I bet scamming and spamming with fake images and creating fake content for monetisation is already a bigger market than people “genuinely” using the tools. And it will get worse.
Now that is just marketing communications - advertising, sales, and associated fraud. GenAI is making everyone's lives worse by making the job of marketers easier. But that's not really the fault of AI, it's just the people who were already making everything shitty picking up new tools. It's not the AI that's malevolent here, it's the wielder.
Open it for all or nothing.
It’s a very emotionally loaded space for many, meaning most comments I read lean to the extremes of either argument, so seeing a comment like yours that combines both makes me curious.
Would be interesting to hear a bit more about how you see the role of copyright in the AI space.
The role of the artist has always been to provide excellent training data for future minds to educate themselves with.
This is why public libraries, free galleries, etc are so important.
Historically, art has been ‘best’ when the process of its creation has been heavily funded by a wealthy body (the church or state, for example).
‘Copyright’, as a legal idea, hasn’t existed for very long, relative to ‘subsidizing the creation of excellent training data’.
If ‘excellent training data for educating minds’ genuinely becomes a bottleneck for AI (though I’d argue it’s always a bottleneck for humanity!), funding its creation seems a no-brainer for an AI company, though they may balk at the messiness of that process.
I would strongly prefer that my taxes paid for this subsidization, so that the training data could be freely accessed by human minds or other types of mind.
Copyright isn’t anything more than a paywall, in my opinion. Art isn’t for revenue generation - it’s for catalyzing revenue generation.
We are not aware of the implications of this sentence. This is it. The only "source" is play. Joyful play.
With AI tools artists will be able to push further, doing things that AI can't do yet.
Every artist worth anything strives to be better at their craft daily. If an artist gets discouraged because there's something "better", that artist isn't any good, because those negative emotions come from a competitive place rather than one of self-improvement and care for their craft or audience. Art is only a competition with oneself, and artists who don't understand or refuse to accept this fact are doomed from the start.
In terms of losing styles, that has already been happening for ages. Disney moved to xeroxing instead of inking, which changed the style, because inking was "too hard". In the late 90s/early 2000s we saw a burst of cartoons on TV with a Flash animation style, because it was a lot easier and cheaper to animate in Flash.
Of course it can be, you're seeing it first hand with your very own eyes.
There's a difference, in my mind at least. "Art" is cultural activity and expression; there needs to be intent, creativity, imagination...
A printer spooling out wallpaper is not making art, even if there was artistry involved in making the initial pattern that is now being spooled out.
When I see generative-AI-produced illustrations, they are usually at least aesthetically pleasing. But sometimes they are already more than that. I've found lots and lots of illustrations that already deliver higher-level experiences going beyond the quality of their aesthetics; they deliver on the goal those aesthetics were being used for in the first place. Whether this comes down to tedious prompting and "borrowed" illustrational techniques is difficult to debate right now, but based on what I've seen of this field so far, and given my views and definitions, I have absolutely zero doubt that AI will generate artworks that are more and more "legitimately" artful, that there's no hard dividing line one can draw between these and man-made art, and that whatever difference does exist now will gradually fade away.
I do not believe that humans are any more special than just the fact that they're human provides to them. Which is ultimately ever-dwindling now it seems.
AI is technically just another tool; it can be used poorly (what people refer to as "AI slop": using default settings and some LoRA and calling it a day) or properly (forcing compositions, editing, fixing errors...) to convey an idea or emotion, or to tell a story. A critical eye does the rest.
After all, the machine doesn't do anything on its own, it needs a driver. The quality of the output is directly proportional to the operator's amount of passion.
Consider zero- and single-click deployments in IT operations. With single-click deployments, you need to have everything automated, but the go sign is still given by a human. With zero-click, you have a deployment policy instead: the human decision is now completely out of the critical path and only plays a part during the authoring and later editing of that policy. And you can then generate those policies too, and so on.
The same can be applied to AI. You can have canned prompts that you keep refining to encode your intent and preferences, then use them with a single click. But you can also build a harness that generates prompts and continuously monitors trends and the world as a whole for any kind of arbitrary criteria (potentially of its own random or even shifting choice), and then follows that: a reward policy. And then, as with regular IT, you can keep layering onto that.
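To make the analogy concrete, here's a toy sketch of the two modes; run_model, fetch_trends and publish are hypothetical stand-ins, not any real API:

    # Toy illustration: a "single click" canned prompt versus a "zero click"
    # policy loop. run_model(), fetch_trends() and publish() are hypothetical
    # stand-ins for whatever generation backend and feeds you'd plug in.
    from dataclasses import dataclass

    CANNED_PROMPT = "A rainy neon street in flat-colour anime style, wide shot."

    def single_click(run_model):
        # A human presses the button; the intent lives in the canned prompt.
        return run_model(CANNED_PROMPT)

    @dataclass
    class Policy:
        keywords: tuple
        style: str

        def accepts(self, trend: str) -> bool:
            return any(k in trend.lower() for k in self.keywords)

        def render_prompt(self, trend: str) -> str:
            return f"{trend}, rendered in {self.style}"

    def zero_click(policy, fetch_trends, run_model, publish):
        # No human in the critical path: the policy decides what gets
        # generated and published; people only author and edit the policy.
        for trend in fetch_trends():
            if policy.accepts(trend):
                publish(run_model(policy.render_prompt(trend)))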
Because of this, I don't think intent is necessarily the point of differentiation; it's the experience and shared understanding of human intent. People have varying individual, arbitrary preferences, go through lives of arbitrary and endless differences, and then draw on those to create. That is never going to be replicated, exactly because of what I said: this is humans being human, and that gives them a unique, inalienable position by definition.
It's like if instead of planes we called aircraft "mechanical birds" and dunked on them for not flying by flapping their wings, despite their more than a century long history and claims of "flying". But just like I think planes do actually fly, I do also think that these models produce art. [0]
Examples:
Disney isn't going to start using AI art. But all those gacha games on the iOS app store are ABSOLUTELY going to. And I suspect gacha apps support at least 10-100x more artists than Disney staffs.
Staff engineers aren't going anywhere - AI can't tell leadership the truth. But junior engineers are going to be gutted by this, because now their already somewhat dubious direct value proposition - turning tickets into code while they train up enough to participate more in the creative and social process of Software Engineering - now gets blasted by LLMs. Mind you, I don't personally hold this ultra-myopic view of juniors - but mgmt absolutely does, and they pick headcount.
Hmm, y'know, I could actually see Big Books getting the "top" end eaten by AI instead of the bottom: all the penny dreadfuls you see lining the shelves of Barnes and Noble. Whereas the truly creative work already happens at the bottom anyway, and is self-published.
Also, as someone who's watched copyright from the perspective of a GPL fanboy, good fucking luck actually enforcing anything copyright related. The legal system is pay to play and if you're a small (or even medium!) fry, you will probably never even know your copyright is being violated. Much less enforcing it or getting any kind of judgement.
Is it? I have no knowledge of this product, but I recall NovelAI paid for a database of tagged anime-style images. It's not impossible that something similar happened here.
10 years ago: "real text cannot be generated like stock phrases, so writing will be nearly forever powered by human writers."
Obviously we have synthetic graphics (like synthetic text). So something else must be meant by "art" here.
The result will be less original art. They will simply stop creating it or publishing it.
IMO music streaming has similarly led to a collapse in quality music artistry, as fewer talented individuals are incentivised to go down that path.
AI will do the same for illustration.
It won’t do the same for _art_ in the “contemporary art” sense, as great art is mostly beyond the abilities of AI models. That’s probably an AGI complete task. That’s the good news.
I’m kinda sad about it. The abilities of the models are impressive, but they rely on harvesting the collective efforts of so many talented and hardworking artists, who are facing a double whammy: their own work is being dubiously used to put them out of a job.
Sometimes I feel like the tech community had an opportunity to create a wonderful future powered by technology. And what we decided to do instead was enshittify the world with ads, undermine the legal system, and extract value from people’s work without their permission.
Back in the day real hackers used to gather online to “stick it to the man”. They despised the greed and exploitation of Wall Street. And now we have become torch bearers for the very same greed.
Is there data for this? I feel there are more musicians than ever, more very talented musicians than ever, and the most famous ones are more famous than ever, so I would like to see if that's correct.
I think there are more musicians with reach than ever.
I would say it's very likely there are far fewer musicians making a living out of their music than there were in the past. That's the key difference.
And the truth is that for most people incentives matter, so not being able to make a living from music means very talented people who are financially motivated (ie most of them) do something else instead.
I wonder if there is a mitigation strategy for this. Is there a way to make (human-made-art) scraping robustly difficult, while leaving human discovery and exploration intact?
Art stealing is a thing. I've had my art stolen regularly. Multiple Doom mods use sprites I made, and only one person (the DRLA guy) asked for permission. I've had my art traced and even used in advertisements, only finding out by sheer chance. I've had people use it for coloring without crediting the source. This has happened for more than thirty years. You can only learn to live with it, lest you risk going absolutely insane. If you are popular, people will do stupid stuff with your stuff. And if you aren't popular, your art is not going to be used for training anyway (datasets are ordered by popularity and only the top stuff gets used; the one with 3 upvotes is not going in).
1. Haruhi is based on light novels, so it has to actually perform to get a release. The Japanese market is upside down: the anime often goes to free-to-air to support a manga release, where the real money is made (I have no idea how this works economically, this is just how it was explained to me). As there aren't any more manga or light novels to release, the likelihood of another season is low. It was sort of always a passion project.
2. The studio was firebombed. https://en.wikipedia.org/wiki/Kyoto_Animation_arson_attack
3. Season 2 was critically panned, but I dunno I thought it was pretty genius.
My suggestion: watch both series, then read the English translation of the novels.
Also don't forget to watch Disappearance after the 2 seasons.
The IP is likely DOA anyway, as it's on indefinite hiatus.
I don't know how anime series economics works these days -- AIUI traditionally the live late night TV broadcast was effectively an advert to get the hardcore fans to buy the extremely expensive Japanese market DVD/bluray sets, which were what brought in the money. But I expect streaming has changed things a lot.
I'd assume streaming services have changed the industry. Things like Devilman Crybaby wouldn't have been released if Netflix wasn't involved.
https://goto.isaac.sh/neon-anisora
Prompt: The giant head turns to face the two people sitting.
Oh, there is a docs page with more examples:
https://pwz4yo5eenw.feishu.cn/docx/XN9YdiOwCoqJuexLdCpcakSln...
> a variable-length training approach is adopted, with training durations ranging from 2 to 8 seconds. This strategy enables our model to generate 720p video clips with flexible lengths between 2 and 8 seconds.
I'd like to see it benched against FramePack which in my experience also handles 2d animation pretty well and doesn't suffer from the usual duration limitations of other models.
Current stance:
https://www.copyright.gov/newsnet/2025/1060.html
“It concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements”.
If it isn't covered (after all, it's the AI that drew all the pictures), then anyone using such a service to produce a movie would be screwed: anyone could copy it or its characters.
I’m leaving out the problem of whether the service was trained on copyright material or not.
In all seriousness, I wonder: where is this all headed? Are people, long term, going to be more forgiving of visual artifacts if it means their favourite franchise gets another season? Or will generated imagery be shunned, just like the not-so-subtle use of 3D models?
Toei Animation is looking to utilize AI in areas such as storyboarding, coloring, and “color specification,” as well as in-between animation and backgrounds. The specific use cases mentioned include:

• Storyboarding: Leveraging AI to “generate simple layouts and shooting of the storyboards.”
• Colors: Employing AI to “specify colors and automatically correct colors.”
• In-betweens: Utilizing AI to “automatically correct line drawings and generate in-betweens.”
• Backgrounds: Using AI to “generate backgrounds from a photo.”

Source: https://www.japannihon.com/toei-animation-discusses-ai-use-i...

I think this is fine. The director will still make sure there are no visual artifacts. On the other hand, indies will be able to create their own works, maybe with some warts, but better than nothing.
But seriously, I had the same thought, considering the general lack of guardrails surrounding high-profile Chinese genAI models... Eventually, someone will know the answer... It's inevitable...
Looks incredibly impressive btw. Not sure it's wise to call it `AniSora` but I don't really know.
> This model has 1 file scanned as unsafe. testvl-pre76-top187-rec69.pth
Hm, perhaps I'll wait for this to get cleared up?
https://huggingface.co/Disty0/Index-anisora-5B-diffusers
For the record, the dev branch of SD.Next (https://github.com/vladmandic/sdnext) already supports it.
I wouldn't expect this from Bilibili's Index Team, though, given how high profile they are. It's probably(?) a false positive. Though I wouldn't use it personally, just to be safe.
The safetensors format should be used by everyone. Raw pth files and pickle files should be shunned and abandoned by the industry. It's a bad format.
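Roughly what that means in practice (file paths below are placeholders): a pickle-based .pth can execute arbitrary code when loaded, while safetensors is just tensor data.

    # Why scanners flag .pth files: unpickling can execute code embedded in
    # the checkpoint, while safetensors stores only tensors and metadata.
    import torch
    from safetensors.torch import load_file

    # Risky: a full unpickle runs whatever the file was built to run.
    # state = torch.load("checkpoint.pth")

    # Safer with recent PyTorch: refuse anything that isn't plain weights.
    state = torch.load("checkpoint.pth", weights_only=True, map_location="cpu")

    # Safest: safetensors can only ever contain tensors.
    state = load_file("checkpoint.safetensors", device="cpu")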
Given that OpenAI call themselves "Open", I think it's great and hilarious that we're reusing their names.
There was OpenSora from around this time last year:
https://github.com/hpcaitech/Open-Sora
And there are a lot of other products calling themselves "Sora" as well.
It's also interesting to note that OpenAI recently redirected sora.com, which used to be its own domain, to sora.chatgpt.com.
Probably to share cookies.
We need cross-domain cookies. Google took them away so they could further entrench their analytics and ads platform. Abuse of monopoly power.
We use first-party cookies for session management.
We use APIs and signed tokens (JWT) to federate across domains without leaking user data.
The ones hurt by the death of third-party cookies are ad tech parasites who refused to innovate imho...
Also: tech should be easier, not harder.
Building this shouldn't take more than an hour, yet somehow we did this to ourselves.
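A rough sketch of that kind of cross-domain handoff with a short-lived signed token, assuming PyJWT; the claims and the HS256 shared secret are illustrative (in practice you'd use asymmetric keys):

    # Sketch of cross-domain session federation with a short-lived signed
    # token instead of third-party cookies. Assumes PyJWT; claims are
    # illustrative, and the shared secret stands in for a real key pair.
    import time
    import jwt

    SHARED_SECRET = "rotate-me"  # placeholder; use RS256 keys in practice

    def mint_handoff_token(user_id: str, target_domain: str) -> str:
        # Domain A signs a short-lived token describing only what B needs.
        now = int(time.time())
        claims = {"sub": user_id, "aud": target_domain, "iat": now, "exp": now + 60}
        return jwt.encode(claims, SHARED_SECRET, algorithm="HS256")

    def accept_handoff_token(token: str, expected_domain: str) -> str:
        # Domain B verifies signature, audience and expiry, then sets its own
        # first-party session cookie for the returned user id.
        claims = jwt.decode(token, SHARED_SECRET, algorithms=["HS256"], audience=expected_domain)
        return claims["sub"]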
Wan2.1 is great. Does this mean anisora is also 16fps?
You could argue that these tools in the hands of skilled craftsmen will create amazing things faster, but we all know what will actually happen: an absolute flood of AI slop in every entertainment category.
South Park looks like MS Paint drawings hastily animated by someone without access to Adobe Animate. It still manages to be a good and beloved show because it shines in other ways.
The world of entertainment is big enough for both Studio Ghibli productions and South Park to exist. AI slop will find its niche too. It will consume some animation jobs, just as all the automation and tooling that came before has, but I strongly believe there will still be a market for good handmade art.
I know there is a huge market for those excited for infinite anime music videos and all things anime.
This is great for an abundance of content and everyone will become anime artists now.
Japan truly is embracing AI, and there will be new jobs for everyone thanks to the boom AI is creating, as well as the Jevons paradox, which will create huge demand.
Even better if this is open source.
Nowadays it seems everyone is interested in "anime-style" content, but everything I see is terrible in terms of quality. Quantity has increased so much in the last 30 years that it has only made the quality stuff more invisible, and we are inundated with anime-like trash.
YouTube channels like Mother's Basement help with picking out something to watch. Geoff has routinely pointed out how he literally watches anime for a living and it's still hard to watch every worthy thing he finds.
Video titles are pretty self-explanatory. If you want to find something to watch, fire up one of “The BEST Anime of [season] [year]” and you’ll get plenty of recommendations, nicely ordered and with some short explanation of what it is about and why it’s noteworthy.
In the end, there will still be quality content, but it will be much more expensive, available only to an elite.
Then the elite will know what good quality is and will be able to produce more of it. Those people will be hired.
The vast majority, only exposed to bad quality, will no longer be able to produce quality, and won't be hired anymore.
And so here is your great quality divide.
I don't think they'd be artists, but AI-prompters, although you're right that there will be a huge flood of content.
You understand that China has a "different" view on copyright, licensing, etc., right?
It's the equivalent of Crunchyroll putting out a video generation model. If the rightsholders disagree with this usage, it'll come up during the negotiations for new releases.
How can you prove it, then? It's literally the same way OpenAI uses Ghibli material, and they can't do anything about it.
Bilibili does have an existing business based on licensing Studio Ghibli content, so Studio Ghibli can threaten to refuse to sell them distribution rights for future releases, even without a lawsuit.
Then tell me what the Chinese government's stance on this is, because that's what matters: I can say that what Meta is doing is illegal, but I can't say the same about a Chinese company doing it in mainland China.
There are increasingly more reports of foreign scalpers stocking up on $5 doujinshi at weekend cons and demanding receipts, and authors are moving to block them. That's like mafias genuinely smuggling charity home-baked cookies. It shouldn't make sense. This astronomical gap between supply and demand, alone, should be enough to create incentives for people to even just mess up and ruin the market.