OP replied and there's another in-depth reply to that below it
> I appreciate you citing the specific clauses. You are reading the 'Black Letter Law' correctly, but you are missing the Litigation Reality (how this actually plays out in court). The 'Primarily Designed' Trap: You argue this only targets a specific 'Arnold Bot.' blah blah blah.
I'm not entirely sure what to do about it, but the fact that some providers still release open models gives me hope.
Control and consistency, as well as support for specialized workflows, are more important, even if that comes at the expense of model quality.
1. AI succeeds, where success means they will be able to raise prices really high
2. AI fails and receives a bailout
But if there are alternatives I can run locally, they can't raise prices as high, because at some point I'd rather run stuff locally even if it's not state of the art. Personally, USD 200 a month is already too expensive.
And the prices can only go up, right?
For the long-term value of a horribly overpriced and debt-ridden corporation, it matters quite a lot whether open source catches up to "adequate" within a decade rather than taking several decades in some uses and even longer in others.
Either there's a massive reduction in inference and training costs due to new discoveries, in which case the big hardware hoarders have no moat anymore, or nothing new is discovered and they won't be able to scale forward.
Either way it doesn't sound good for them.
If the courts upheld the part in question, it would create a clear path to go after software authors for any crime committed by a user. Cryptocurrencies would become impossible to develop in the US. Holding authors responsible for the actions of their users basically means everyone has to stop distributing software under their real names. There would be a serious chilling effect, with most open source projects shutting down or going underground.
As the law specifically targets models made to duplicate an individual, it isn't hard to provide sufficient evidence to clear the hurdles required of restrictions on free speech: examples of the negative effects are well documented, and "speech model for an individual" isn't a broad category.
Also, I would point out that the First Amendment isn't what the US uses to protect software distribution. Instead, it is Congress explicitly encouraging it through the laws it passes.
It's a stretched metaphor at this point, but I hope that makes sense (:
If these roads had been designed differently, to naturally enforce the desired speeds, they would be safer roads in general and, as a side effect, less desirable getaway routes.
Again I agree we're really stretching here, but there is a real common problem where badly designed roads don't just enable but encourage illegal and potentially unsafe driving. Wide, straight, flat roads are fast roads, no matter what the posted speed limit is. If you want low traffic speeds you need roads to be designed to be hostile to high speeds.
But what I was trying to describe is a "mild mannered" getaway driver. Not fleeing from cops, not speeding. Just calmly driving to and from crimes. Should we punish the road makers for enabling such nefarious activity?
(it's a rhetorical question; I'm just trying to clarify the point)
Risk becomes bounded by the total value of the company, and you can start acting rationally.
The model (pardon) in my mind is like this:
* The forger of the banknote is punished, not the maker of the quill
* The author of the libelous pamphlet is punished, not the maker of the press
* The creep pasting heads onto scandalous bodies is punished, not the author of Photoshop
In this world view, how do we handle users of the magic bag of math? We've scarcely thought before that a tool should police its own use. Maybe we can say that because it's too easy to do bad things with, it has crossed some nebulous line. But it's hard to argue for that on principle, as it doesn't sit consistently with the more tangible and well-trodden examples.
With respect to the above, all the harms are clearly articulated in the law as specific crimes (forgery, libel, defamation). The square I can't circle with proposals like the one under discussion is that they open the door for authors of tools to be responsible for whatever arbitrary and undiscovered harms await from some unknown future use of their work. That seems like a regressive way of crafting law.
In this case the guy making the images isn't doing anything wrong either.
Why would we punish him for pasting heads onto images, but not punish the artist who supplied the mannequin of Taylor Swift for the music video for "Famous"?†
https://www.youtube.com/watch?v=p7FCgw_GlWc
Why would we punish someone for drawing us a picture of Jerry Falwell having sex with his mother when it's fine to describe him doing it?
(Note that this video, like the recent SNL "Home Alone" sketch, has been censored by YouTube and cannot be viewed anonymously. Do we know why YouTube has recently kicked censorship up to these levels?)
You mean like speed limits, driver's licenses, seat belts, vehicle fitness checks, and dedicated traffic police?
I still can't see a legitimate use for anyone cloning anyone else's voice. Yes, satire and fun, but there's a bunch of malicious uses as well. The same goes for non-fingerprinted video gen. It's already having a corrosive effect on public trust. Great memes, don't get me wrong, but I'm not sure that's worth it.
As a former VFX person, I know that a couple of shows are testing out how/where it can be used. (Currently it's still more expensive than traditional VFX, unless you are using it to make base models.)
Productivity gains in the VFX industry over the last 20 years have been immense. (I.e., a mid-budget TV show has more, and more complex, VFX work than most movies from 10 years ago, and it looks better.)
But does that mean we should allow any bad actor to flood the zone with fake clips of whatever agenda they want to push? No. If I, as a VFX enthusiast, get fooled by GenAI videos (still images are already a done deal; they're super hard to spot reliably), then we are super fucked.
The reason safe harbor clauses externalize risks onto the user is that the user gets the most use (heh) out of the software.
No developer is going to accept unbounded risk based on user behavior for a limited reward, especially not if they're working for free.
While both can be misused, to me the latter category seems to afford a far larger set of tertiary/unintended uses.
(shill)
https://www.youtube.com/watch?v=2HMsveLMdds
Which is of course the European Version, not the evil American Version.
Edit: Also does this mean OpenAI can bring back the Sky voice?
Chinese companies will be happy to drive innovation further while giants like Google and OpenAI go on with this to kill competition in the US.
US capitalism eats itself alive with this.
(not my interpretation, it's what the post states - personally that is not what I think of when I read "Open Source")
Why the hell can't these legislators stick to punishing the lawbreakers instead of creating unending legal pitfalls for innovation?
We have laws against murdering people with kitchen knives. Not laws against dinnerware. Williams Sonoma doesn't have to worry about lawsuits for its Guy Degrenne Beau Manoir 20-Piece Flatware Set.
Don't worry, the UK is on the case https://reason.com/2019/10/07/the-u-k-must-ban-pointy-knives...
I thought this was an exaggeration
google:
"Reason.com is a reputable source that adheres to journalistic standards, but its content is filtered through a specific, consistent libertarian lens."
If current tech had appeared all of a sudden in 1999, I am sure that as a society we would all have accepted it, but slow-boiling-frog theory, I guess.
Here is some background info on the act from Wikipedia: https://en.wikipedia.org/wiki/No_Fakes_Act
Good.
But what is frustrating to me is that the second-order effects of making the law more restrictive will do us all a big disservice. It will not stop this technology; it will just make it more inaccessible to normal people and put more power into the hands of the big corporations that the "they're stealing our data!" people would like to stop.
Right now I (a random nobody) can go on HuggingFace, download a model that is more powerful than anything that was available 6 months ago, and run it locally on my machine, unrestricted and private.
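Concretely, "run it locally" is about this much code. A minimal sketch, assuming the Hugging Face transformers library is installed; the model id below is just one example of a publicly downloadable open-weights model (any open model works, given enough RAM/VRAM):

    # Minimal sketch: download an open-weights model from HuggingFace and
    # run it locally. Assumes `pip install transformers torch`; the model
    # id is an example, not an endorsement of any particular model.
    from transformers import pipeline

    generate = pipeline("text-generation",
                        model="mistralai/Mistral-7B-Instruct-v0.2")

    out = generate("Open models matter because", max_new_tokens=50)
    print(out[0]["generated_text"])

No API key, no server-side filter: the weights sit on your own disk.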
Can we agree that's, in general, a good thing?
So now, if you make the model creators liable for misuse of the models, or make the models a derivative work of their training data, or anything along those lines - what do you think will happen? Yep. The model on HuggingFace is gone, and the only thing you'll have access to is a paywalled, heavily filtered and censored version provided by a megacorporation, while the megacorporation itself has unlimited, unfiltered internal access to that model.
The models come from overt piracy, and are often used to make fake news, slander people, or produce other illegal content. Sure, it can be funny, but fruit from a poisoned tree will always carry the taint of overt piracy.
I agree research is exempt from copyright, but people cashing in on unpaid artists' works for commercial purposes is a copyright violation predating the DMCA/RIAA.
We must admit these models require piracy, and can never be seen as ethical. =3
'"Generative AI" is not what you think it is'
'"Generative AI" is not what you think it is'
https://www.youtube.com/watch?v=ERiXDhLHxmo
And this paper demonstrated how absurd the bubble's hype is:
'Researchers Built a Tiny Economy. AIs Broke It Immediately'
https://www.youtube.com/watch?v=KUekLTqV1ME
It is true that bubbles driven by the irrational can't be stopped, but one may profit from people's delusions... and likely get discount GPUs when the economic fiction inevitably implodes. Best of luck =3
Maybe a different comparison you would agree with is Stingrays, the devices that track cell phones. Ideally nobody would have them but as is, I’m glad they’re not easily available to any random person to abuse.
...and the lawyers win. =3
Arguing regulatory capture versus overt piracy is a ridiculous premise. The "AI" firms have so much liquid capital now... they could pay the fines indefinitely in districts that cap damages, and they already settled with larger copyright holders as if it were just another nuisance fee. =3