The Generative AI Industry Is Fraudulent, Immoral and Dangerous
12 points | 20 hours ago | 2 comments | dianne.skoll.ca
whoknowsidont
19 hours ago
>The Generative AI Industry is based on Theft

This is the big thing that needs to be addressed. These models are nothing without that data. Code, art, music, just plain old conversations freely given for the benefit or entertainment of other humans.

Humans are accountable for what they use or "borrow."

These models, meanwhile, seem to be getting a free pass through the legal system.

Another way of looking at this: humans need credentials for many professions, and they have to pay, out of their own pocket and time, to be taught and certified.

Not only will these models copycat your work, but a lot of companies and industries seem very keen on ignoring the fact that these models have never had to pass any sort of exam.

The software has more rights and privilege than actual humans at this point.

mathgorges
19 hours ago
> The software has more rights and privilege than actual humans at this point.

That's been true for some time though, right? For example, if you have a community notice board in front of your store and someone pins illegal content to it, you're held to a different legal standard than if someone posts the same content to a social media platform.

I don’t think that’s right either, but this kind of “tech exceptionalism” has been baked into law for decades. AI is inheriting those privileges more than inventing new ones.

slowmovintarget
10 hours ago
I hear this, and I want to sympathize, but I always have trouble reconciling it with the philosophy pointed to by the phrase "information wants to be free."

Should people be compensated for their work? Yes. Is this piracy? Yes; we know that many of these big organizations just outright copied the work without proper payment. (No, it isn't theft: theft deprives the owner of the property itself, not just a portion of the revenue.)

But tokenization and vectorization of this data is a fundamental transformation. Information wants to be free. Again, I don't know if I can reconcile this with the fair use doctrine, but it smells like fair use, especially for books that would be in a library, or for images of paintings.

If we embodied these learning systems in a robot, sent it to the library, and had it read every book, would that be piracy? That's how humans learn: they take away a fundamental summary of the thing. When I read a book I'm affected by it; it shapes my vocabulary, even my writing style sometimes. The fact that these are machine systems capable of doing this at scale... does that make a difference?

Yizahi
10 hours ago
The fair use doctrine is explicitly for humans, for the better overall state of humankind. It is not "fair" in terms of some abstract logic, since it gives preference to one part of the population rather than treating everyone equally. The fairness is not in perfectly equal application of the law; it is precisely in the inequality, and that is what makes it fair. In short, fair use is a hack that we humans invented for ourselves, for our own better living.

There is no law of the universe saying that a hack we invented for ourselves must be extended to literally everything everywhere; we don't "owe" that to anyone. So we don't owe it to our computers that the fair use doctrine be extended to computer programs. The word "fair" in the fair use doctrine doesn't mean that every entity should automatically benefit from the law. For now, it is only for humans.

My point in this long rant is that making fair use universally "fair" automatically makes it unfair for humans, the original recipients of the law's benefits. And there is no compromise here: we either ignore human rights or computer-program rights.

slowmovintarget
10 hours ago
Fair use would apply to the humans who caused the work to be tokenized and vectorized. (The transformation.)

But I take your point about a robot doing it.
