I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?
What we should be doing is surfacing well defined points regarding AI regulation and discussing them, instead of fighting proxy wars for opaque groups with infinite money. It feels like we're at the point where nobody is even pretending like people's opinions on this topic are relevant, it's just a matter of pumping enough money and flooding the zone.
Personally, I still remain very uncertain about the topic; I don't have well-defined or clearly actionable ideas. But I'd love to hear what regulations or mental models other HN readers are using to navigate this topic. Sam Altman and Elon Musk have both gestured vaguely at how AI is somehow going to result in UBI and a communist utopia, but nobody has ever pressed them for details. If they really believe this, they could make some more significant legally binding commitments, right? Notice how nobody ever asks: who is going to own the models, robots, and data centers in this UBI paradise? It feels a lot like the Underpants Gnomes: (1) Build AGI, (2) ???, (3) Communist Utopia and UBI.
There's a vocal minority calling for AI regulation, but what they actually want often strikes me as misguided:
"Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.
"Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.
This is a really good point. If a country tries to "protect" jobs by blocking AI, it only puts itself at a disadvantage. Other countries that don't pass those restrictions will produce goods and services more efficiently and at lower cost, and they'll outcompete it anyway. So even with regulations, the jobs aren't actually saved.
The real solution is for people to upskill and learn new abilities so they can thrive in the new economic reality. But it's hard to convince people that they need to change instead of expecting the world around them to stay the same.
But why do I have to? Why should your life be dictated by the market and corporations that are pushing these changes? Why do I have to fear that my livelihood is at risk because I don't want to adapt to an ever-faster-changing market? The goal of automation and AI should be to reduce or even eliminate the need for us to work, not to further reduce people to their economic value.
So politicians are supposed to create "non bullshit" jobs out of thin air?
The job you've done for decades is suddenly bullshit because some shit LLM is hallucinating nice-sounding words?
And the quality of the debate remains very low as well. Most people barely understand the issues, and that includes many journalists, who are still mostly hung up on the whole "hallucinations can be funny" angle. There are a lot of confused people spouting nonsense on this topic.
There are special interest groups with lobbying power: media companies with intellectual property, actors worried about being impersonated, etc. Those have some ability to lobby for changes. And then you have the wider public, which isn't that well informed and has sort of caught on to the notion that ChatGPT is now definitely a thing that is sometimes mildly useful.
And there are the AI companies that are definitely very well funded and have an enormous amount of lobbying power. They can move whole economies with their spending so they are getting relatively little push back from politicians. Political Washington and California run on obscene amounts of lobbying money. And the AI companies can provide a lot of that.
Artists are not primarily in the 1%, though, and patents aren't the only form of IP at stake.
Doesn't quite align with UBI, unless he envisions the AI companies handing out the UBI directly (when has that ever happened?).
This would be a 19th century government, just the "regalian" functions. It's not really plausible in a world where most of the population who benefit from the health/social care/education functions can vote.
It's not the tech titans, it's Capitalism itself building the war chest to ensure its embodiment and transfer into its next host: machines.
We are just its temporary vehicles.
> “This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificially intelligent space that must assemble itself entirely from its enemy's resources.”
I see your “roko’s basilisk is real” and counter with “slenderman locked it in the backrooms and it got sucked up by goatse” in this creepypasta-is-real conversation
(disclaimer: I don't actually, I'm just memeing. I don't think we'll get AI overlords unless someone actively puts AI in charge of people (= people following directions from AI, which already happens, e.g. ChatGPT making suggestions), military hardware, and the entire chain of command in between.)
Speaking of IP, I'd like to see some major copyright reform. Maybe bring down the duration to the original 14 years, and expand fair use. When copyright lasts so long, one of the key components for cultural evolution and iteration is severely hampered and slowed down. The rate at which culture evolves is going to continue accelerating, and we need our laws to catch up and adapt.
Sure, I can give you some examples:
- deceiving someone into thinking they're talking to a human should be a felony (prison time, no exceptions for corporations)
- ban government/law-enforcement use of AI for surveillance, predictive policing or automated sentencing
- no closed-source AI allowed in any public institution (schools, hospitals, courts...)
- no selling or renting paid AI products to anyone under 16 (free tools only)
This is gonna be about as enforceable as the CAN-SPAM Act (i.e. you'll get a few big cases, but it's nothing compared to the overall situation).
How do you prove it in court? Do we need to record all private conversations?
Stricter IP laws won't slow down closed-source models with armies of lawyers. They'll just kill open-source alternatives.