It reminds me of the books I read in my youth; only 20 years later did I realize the authors of some of those books were trying to deliver important life messages to a teenager undergoing crucial changes, all of which would be painfully relevant to the current adult me... and yet the whole time they fell on deaf ears. The message was right there, but for far too long I did not have the emotional/perceptive intelligence to pick up on it and internalize it.
The problem is sifting through all of the doomsayer false positives to get to any amount of cogent advice.
At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.
Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.
Were they?
The first thing the printing press did was to break Christianity. It's what made attempts at reforming the Catholic Church finally stick, enabling what we now call Reformation to happen. Reformation forever broke Christianity into pieces, and in the process it started a bunch of religious wars in Europe, as well as tons of neighborly carnage.
> And if we had taken their "lesson", then human society would be in a much worse place.
Was the invention of the printing press a net good for humanity? Most certainly so, looking back from today. Did people living back then know what they were getting into? Not really. And since their share of the fruits of that invention was mostly bloodshed, job loss, and shattering of the world order they knew, I wouldn't blame them for being pissed off about getting the short end of the stick, and perhaps looking for ways to undo it.
I'm starting to think that talking about inventions as good or bad (or the cop-out, "dual use") is bad framing. Rather, it seems to me that every major invention will eventually turn out beneficial[0], but introducing an invention always first extracts a cost in blood. Be it fire or printing press or atomic bomb, a lot of people end up suffering and dying before societies eventually figure out how to handle the new thing and do some good with it.
I'm very much in favor of progress, but I understand the fear. No matter the ultimate benefits, we are the generation that coughs up blood as payment for AI/AGI, and it ain't gonna be pleasant.
--
[0] - Assuming they don't kill us first - see AGI.
All you've done is explain why the printing press was so important and necessary in order to break down previous unwarranted power structures. I have a similar hope for AGI. The alternative is that the incumbent power structure instead benefits from AGI and uses it for oppression, which would mean it's not comparable to the printing press as such.
What the printing press did was rapidly increase the amount, type, range and speed of information spread. That was a qualitative difference. The Church did not build its empire on restricting that, because before the printing press it was not even possible (or conceivable).
My overall point wrt. inventions is this: yes, it may end up turning for the best. But at the time the invention of this magnitude appears and spreads, a) no one can tell how it'll pan out, and b) they get all the immediate, bloody downsides of disrupting the social order.
Masses were often held in Latin, printed material was typically written in Latin and Greek, and access to translated texts was frequently prohibited or admonished. They tried hard to silence those like Wycliffe who made the Bible more readily available to the masses, and he was posthumously denounced as a heretic by the Church. They absolutely wielded information as a tool of oppression.
This is not a hill to die on, the historical facts are clear despite the efforts of the Church.
> What the printing press did was rapidly increase the amount, type, range and speed of information spread
Sure, but that's not the only thing it did.
Consider that at the time the printing press was first invented, books were by their nature often assumed to be true, or high quality, because it took an institutional amount of effort (usually on the part of a monastery, university, local government, etc.) to produce one. Bible translations were produced, but they were understood to be "correctly translated". This was important because if the Church was going to have priests go around preaching to people, they needed to be sure they were doing so correctly -- a mistranslated verse could lead to mistranslated doctrines &c, and while a modern atheist might not care too much ("that's just one interpretation") at the time the understanding was that deviations in opinion could lead to conflict. Ultimately they were right: the European Wars of Religion led to millions of deaths, including almost 1/3 of the population of Germany. That's on the same scale as the Black Death!
And again, translations did exist before the Reformation: Even ignoring that the Latin Bible (the Vulgate) was itself a translation of the original Hebrew & Koine Greek, the first _Catholic French_ translation was published in 1550, and there was never a question of whether to persecute the authors. You might say, but that was because of the Reformation -- then consider the Alfonsine Bible, composed in 1280 under the supervision of a Catholic King and the master of a Catholic holy order. Well before then there were partial translations too: the Wessex Gospels (Old English) were translated in 990, and to quote Victoria Thompson, "although the Church reserved Latin for the most sacred liturgical moments almost every other religious text was available in English by the eleventh century". That's five hundred years before the Reformation. So the longest period you can get where the Church was not actively translating texts was c. 400 - c. 900, a period you probably know as the "Dark Ages" specifically thanks to the fact that literary sources of all kinds were scarce, in no small part because the resources to compose large texts simply weren't there. Especially when you consider that those who could read and write generally knew how to read and write Latin -- vernacular literacy only became important later, with the increase in the number of e.g. merchants and scribes -- such translations held little value during that period.
So fast forward to Wycliffe. Clearly, the Church did not have anything against translations of the Bible per se. What they disagreed with in Wycliffe's translation were the decisions made in translation. And as more of these "unapproved Bibles" began circulating around, they decided that the only way to curtail their spread was to ban all vernacular books specifically within the Kingdom of England, because that's where the problem (as they saw it) was. And it wasn't just translations -- improperly copied Latin copies were burned too.
Think about today, with the crisis around fake videos. On one hand you could say that they distort the truth, that they promote false narratives, etc. You could try to fine or imprison people that go around publishing fake videos of e.g. politicians saying things they never said, or of shootings/disasters that never took place, to try and cause chaos. Yet who's to say that in a few hundred years someone -- living in a world that has since adjusted to a freer flow of information, one with fewer ways to tell whether something is true or not -- won't say "deepfakes &c are a form of expression, and governments shouldn't be trying to stop them just because they disagree with existing narratives"?
Of course we today see book burning as some supreme evil. But when you're talking about the stability of nations and whole societies, can you really say "how dare they even try"? If there were some technology that made it impossible for governments to differentiate between citizens, which made it possible for a criminal to imitate any person, anywhere, would you really oppose the government's attempts at trying to stop it from propagating?
> The alternative is that the incumbent power structure instead benefits from AGI
Also, tangentially related, in what way is the current power structure not slated to benefit from AGI? That's why OpenAI and company are getting literally all of the money the collected hyperscaler club can throw at them to make it. That's why it's worth however-many-billions it's up to by now.
As for “breaking” Christianity: Christianity has been one schism after another for 2000 years: a schism from a schism from a schism. Power plays all the way down to Magog.
Socrates complained about how writing and the big boom in using the new Greek alphabet were ruining civilization and true learning.
And on and on it goes.
Give a ruler and ruling class a new weapon and off they go killing and destroying more “efficiently”.
Before the Reformation there had only been one schism (Eastern Roman Empire - Orthodox; Western Roman Empire - Catholic).
The Reformation was the time when fragmentation of Christianity really exploded.
Religious schisms happened before the printing press, too. There was the Great Schism in 1054 in Christianity, for example.
No, it wasn't. Wikipedia lists[0] over two dozen schisms that happened prior to the Reformation. However, the capital-R Reformation was the big one, and the major reason it worked - why Luther succeeded where Hus failed a century earlier - was the printing press. It was print that allowed Luther's treatises to rapidly spread among the general population (Wikipedia cites some interesting claims here[1]), and across Europe. In today's terms, the printing press is what allowed the Reformation to go viral. This new technology is what made the revolution spread too fast for the Church to suppress it with methods that worked before.
Of course, the Church survived, adapted, and embraced the printing press for its own goals too, like everyone else. But the adaptation period was a bloody one for Europe.
And I only covered the religious aspects of the printing press's impact. There are similar stories to draw on the more secular front, too. In fact, another general change printing introduced was to get regular folks more informed and involved in the politics of their regions. That's a change for the better overall, too, but initially it injected a lot of energy into socio-political systems that weren't used to it, leading to instability and more bloodshed before people got used to it and politics found a new balance.
> existing power struggles, rivalries, discontent, etc.
Those always exist, and stay in some form of equilibrium. Technology doesn't cause them - but what it does is disturb the old equilibrium, forcing society to find a new one, and this process historically often got violent.
--
[0] - https://en.wikipedia.org/wiki/Schism_in_Christianity#Lists_o...
[1] - https://en.wikipedia.org/wiki/Reformation#Spread - see e.g. footnote 28: "According to an econometric analysis by the economist Jared Rubin, "the mere presence of a printing press prior to 1500 increased the probability that a city would become Protestant in 1530 by 52.1 percentage points, Protestant in 1560 by 43.6 percentage points, and Protestant in 1600 by 28.7 percentage points."
Yes, technology impacts social constructs and relationships, but I think there is a tendency to overindex on its effects (humans acting opportunistically vs technological change alone), as it in a way portrays humans and their interactions as more stable and deliberate (i.e., the bad stuff wasn't humans but rather "caused" by technology).
Yes, ants could technically conspire to sneak up to you while you sleep and bite you all at once to kill you, so do you go out to eradicate all ants?
Why would it invest resources to relocate and protect itself when it could mitigate the threat directly? Or, why wouldn't it do both, by using our resources to relocate itself?
In the famous words of 'Eliezer, that best sum up the "orthogonality thesis": The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
> ants could technically conspire to sneak up to you while you sleep and bite you all at once to kill you, so do you go out to eradicate all ants?
Ants are always a great case study.
No, of course not. But if, one morning, you find ants in your kitchen, walking over your food, I don't imagine you'll gently collect them all and release them in the nearby park. Most people would just stomp them out and call it a day. And should the ants set up an anthill in your backyard and mount regular invasions of your kitchen, I imagine you'd eventually get pissed off and destroy the anthill.
And I'm not talking about some monstrous fire ants like the ones that chew up electronics in the US, or some worse hell-spawn from Australia that might actually kill you. Just the regular tiny black ants.
Moreover, people don't give a second thought to anthills when they're developing land. It stands where the road will go? It gets paved over. It sticks out where children will play? It gets removed.
The value of atoms - or even the value of raw materials made of atoms - is hopefully less than the value of information embodied in complex living things that have processed information from the ecosystem over millions of years via natural selection. Contingent complexity has inherent value.
I think there's a claim to be made that AI is just as likely to value us (and complex life in general) as it is to see us as a handy blob of hydrocarbons. This claim is at least as plausible as the original claim.
It's all fun and games until two people or groups contest the same limited resources; then there's sword and fire o'clock.
Most nations on earth are not at war with each other.
My observation is that most people are pretty nice, and the assholes are rare outliers. I don't think we would survive as a species if it was the other way 'round.
My nation of birth famously took over a quarter of the planet.
This has made a lot of people very angry and been widely regarded as a bad move… but only by the people who actually kicked my forebears out — even my parents (1939/1943), who saw the winds of change and end of empire, were convinced The Empire had done the world a favour.
> My observation is that most people are pretty nice, and the assholes are rare outliers. I don't think we would survive as a species if it was the other way 'round.
In-group/out-group. We domesticated ourselves, and I agree we would not have become so dominant a species if we had not. But I have heard it said that psychopaths are to everyone what normal people are to the out-group. That's the kind of thing that allowed the 9/11 attackers to do what they did, or the people of the US military to respond the way they did. It's how the invasion of Vietnam happened, it's how the Irish Potato Famine happened despite Ireland exporting food at the time, it's the slave owners who quoted the bible to justify what they did, and it's the people who want to outlaw (at least) one of your previous employers.
Conflict doesn't always mean "war".
Bees might be a better analogy since they produce something that humans can use.
And yet they're endangered, and we already figured out how to do pollination, so we know we can survive without them - it's just going to be a huge pain. Some famines may follow, but likely not enough to endanger civilization as a whole.
Thus even with this analogy, if humans end up being an annoying supply chain dependency to an AI, the AI might eventually work out an alternative supply chain, at which point we're back to being just an annoyance.
I'm not confident enough to rely on that: most people in the west have never encountered a famine, only much milder things like the price of one or two staples being high — eggs currently — never all of them at once.
What will we do to ourselves if we face a famine? Will we go to war (or exterminate the local "undesirables") like the old days?
How fragile are we now, compared to the last time that happened? How much has specialisation meant that the elimination of certain minorities will just break everything? "Furries run the internet", as the memes say. What other sectors are over-represented by a small minority?
...for now. Given sufficient advances in robotics, why would you expect that to continue?
Well you see, everyone knows The Terminator and The Matrix and Frankenstein and The Golem of Prague and Rossum's Universal Robots.
All of which share a theme: the sinful hubris of playing god and trying to create life will inevitably lead to us being struck down by the very being we created.
In parallel, all the members of our educated classes have received philosophy education saying "utilitarianism says it's good to reduce total human suffering, but technically if you eliminated all humans there would be no suffering any more, ha ha obviously that's a reductio ad absurdum to show a weakness of utilitarianism please don't explode the world"
And so in the Western cultural tradition, and especially among the sort of people who call themselves futurists, Arnold Schwarzenegger firing a minigun is the defining image of AI.
The Matrix had humanity under control, but the machines had no desire to eliminate humanity, the machines just wanted to live — humans kept on fighting the machines even when the machines gave humanity an experiential paradise to live in.
Frankenstein is harder because of how the book differs from the films. Your point is still valid because it is about the cultural aspects, and I expect more have seen one of the films than have read/listened to the book — but in the book, Adam was described as beautiful in every regard save for his eyes; he was a sensitive, emotional vegetarian, and he only learned anger after being consistently shown hatred and violence by absolutely everyone he ever met except that one who was blind.
And even when we didn't mean to -- how many species have we pushed to the brink just because we wanted to build cities where they happened to live? What happens when some AI wants to use your groundwater for its cooling system? It wouldn't be personal, but you'd starve to death regardless.
“A society grows great when old men plant trees in whose shade they know they shall never sit”
Did Gutenberg expect his invention would, 150 years later, set the whole of Europe ablaze, and ultimately break the hold the Church had over people? Did he expect it to be a key component leading to the accumulation of knowledge that, 400 years later, would finally make technological progress visibly exponential? On that note, did Watt realize he was about to kick-start the exponent that people would ride all the way to the actual Moon less than 200 years later? Or did Goddard, Oberth and Tsiolkovsky realize that their work on rocketry would be critical in establishing world peace within a century, and that the way this peace would be established is through a Mexican standoff between major world powers, except with rocket-propelled city-busting bombs instead of guns?
So basically we are a bit screwed in our current timeline. We are at the cusp of a post-scarcity society, may reach AGI within our lifetimes, and could possibly even become a space-faring civilization. However, it is highly likely that we are going to pay the pound of flesh, and only subsequent generations - perhaps yet unborn - will be the ones who are truly better off.
I suppose it's not all doom and gloom, we can draw stoic comfort from the fact that people in the near future will have an incredibly exciting era full of discovery and wonder ahead of them!
All of the focus on AGI is a distraction. I think it's important for a state to declare its intent with a technology. The alternative is arguing that technology advances autonomously, independent of human interactions, values, or ideas, which is, in my opinion, an incredibly naïve notion. I would rather have a state say "we won't use this technology for evil" than a state that says nothing at all and simply allows the businesses to develop in any direction their greed leads them.
It's entirely valid to critique the uses of a technology, because "AI" (the goalpost shifting for marketing purposes to make that name apply to chatbots is a stretch honestly) is a technology like any other, like a landmine, like a synthetic virus, etc. In the same way, it's valid to criticize an actor for purposely hiding their intentions with a technology.
The list of things states have attempted to deploy offensively is nearly endless. Modern operations research arguably came out of the British empire attempting (succeeding) to weaponise mathematics. If you give a state fertiliser it makes bombs, if you give it nuclear power it makes bombs, if you give it drones it makes bombs, if you give it advanced science or engineering of any form it makes bombs. States are the most ingenious system for turning things into bombs that we've ever invented; in the grand old days of siege warfare they even managed to weaponise corpses, refuse and junk because it turned out lobbing that stuff at the enemy was effective. The entire spectrum of technology from nothing to nanotech, hurled at enemies to kill them.
We'd all love it if states committed to not doing evil, but the state is the entity most active at figuring out how to use new tech X for evil.
If one group gives up the arms race of ultimate coercion tools or loses a conflict, then they become subservient to the winner's terms and norms (Japan, Germany, even Britain and France, plus all the smaller states in between, are subservient to the US).
Who could possibly have predicted that the autonomous, invincible doomsday weapon we created for the good of humanity might one day be used against us?
Great question! To add my two cents: I think many people here are missing an uncomfortable truth: given enough motivation to kill other humans, people will re-purpose any tool into a killing tool.
Just have a look at the battlefields in Ukraine, where the most fearsome killing tool is an FPV drone. A thing that just a few years back was universally considered a toy.
Whether we like it or not, any tool can be a killing tool.
Surely this applies to how individuals consider states, too. States generally wield violence, especially in the context of "national security", to preserve the security of the state, not its own people. I trust my own state (the usa) to wield the weapons it funds and purchases and manufactures about as much as I trust a baby with knives taped to its hands. I can't think of anything on earth that puts me in as much danger as the pentagon does. Nukes might protect the existence of the federal government but they put me in danger. Our response to 9/11 just created more people that hate my guts and want to kill me (and who can blame them?). No, I have no desire to live in a death cult anymore, nor do I trust the people who gravitate towards the use of militaries to not act in the most collectively suicidal way imaginable at the first opportunity.
Possibly true, but the state is also responsible for the policing that means the pentagon is your greatest danger.
The nature of the world is at our fingertips; we are the dominant species here. Unfortunately we are still apes.
The enforcement of cooperation in a society does not always require a sanctioning body. Seeing it from a Skynet-military perspective is one-sided, but unfortunately a consequence of Popper's tolerance paradox. If you uphold (e.g. pacifistic or tolerant) ideals that require the cooperation of others, you cannot tolerate opposition, or you might lose your ideal.
That said, common sense can be a tool to achieve the same. Just look at the common and hopefully continuous ostracism of nuclear weapons.
IMO it's a matter of zeitgeist and education too, and un/fortunately, AI hits right in that spot.
The sleight of hand here is the implication that human interactions, values, and ideas are only expressed through the state.
Society and culture are downstream of economics, and economics is mostly downstream of technological progress. Of course, the progress isn't autonomous in the sense of having a sentient mind of its own - it's "merely" gradient descent down the economic landscape. Just like the market itself.
There's no reining in of problematic technology unless, like you say, nation states get involved directly. And they don't stand much chance either unless they get serious.
People still laugh at Eliezer's comments from that news article of yesteryear, but he was and is spot-on: being serious about restricting technology actually does mean threatening to drop bombs on facilities developing it in violation of restrictions - if we're not ready to have our representatives make such threats, and then actually follow through and drop the bombs if someone decides to test our resolve, then we're not serious.
We have absolutely no idea how to specify human values in a robust way, which is what we would need to figure out to build this safely.
I’ve heard this argument before, and I don’t entirely accept it. It presumes that AI will be capable of playing 4D chess and thinking logically 10 moves ahead. It’s an interesting plot as a SF novel (literally the plot of the movie “I Robot”), but neural networks just don’t behave that way. They act, like us, on instinct (or training), not in some hyper-logical fashion. The idea that AI will behave like Star Trek’s Data (or Lore), has proven to be completely wrong.
The problem solver’s search for a solution is an odyssey through the problem space, from one knowledge state to another, until… [they] know the answer.
With this in mind, I really don't see any basis to attack the intelligence explosion hypothesis. I linked a Yudkowsky paper above examining how empirically feasible it might be, which is absolutely an unsolved question at some level. But the utility of the effort itself is just downright obvious, even if we didn't have reams of internet discussions like this one to nudge any nascent agent in that direction.

[1] Simon & Newell, 1971: Human Problem Solving https://psycnet.apa.org/record/1971-24266-001
“People who didn’t pass a test aren’t worth listening to”
I have no love for Altman, but this kind of elitism is insulting.
Degrees don’t mean that either.
I’ve been studying textbooks and papers on real time rendering techniques for the past 4 or so years.
I think one could learn something from listening to me explain rasterization or raytracing.
I have no degree in math or computer graphics.
> I linked a Yudkowsky paper above examining how empirically feasible it might be
...
If the point isn't that he's wrong about what the consequences of AI might be, but that he's wrong about whether there's ever going to be such a thing as AI, well, that's an empirical question and it seems like the developments of the last few years are pretty good evidence that (1) something at least very AI-like is possible and (2) substantially superhuman[1] AI is at least plausible.
[1] Yes, intelligence is a complicated thing and not one-dimensional; a machine might be smarter than a human in one way and stupider in another (and of course that's already the case). By substantially superhuman, here, I mean something like "better than 90th-percentile humans at all things that could in principle be done by a human in a locked room with only a textual connection to the rest of the world". Though I would be very very surprised if in the next 1-20 years we do get AI systems that are superhuman in this sense and don't put some of them into robots, and very surprised if doing that doesn't produce systems that are also better than humans at most of the things that are done by humans with bodies.
The counterargument was that, having not encountered space aliens, we cannot make scientific inquiries or test our hypotheses, so any claims made about what may happen are religious or merely hypothetical.
Yud is not a scientist, and if interacting with academies makes one an academic, then Sam Altman must be a head of state.
Do you think (1) we already know somehow that significantly-smarter-than-human AI is impossible, so there is no need to think about its consequences, or (2) it is irresponsible to think about the consequences of smarter-than-human AI before we actually have it, or (3) there are responsible ways to think about the consequences of smarter-than-human AI before we actually have it but they're importantly different from Yudkowsky's, or (4) some other thing?
If 1, how do we know it? If 2, doesn't the opposite also seem irresponsible? If 3, what are they? If 4, what other thing?
(I am far from convinced that Yudkowsky is right, but some of the specific things people say about him mystify me.)
Maybe we should build a giant laser to protect ourselves from the aliens. Just in case. I mean an invasion is at least plausible.
If for whatever reason you suspect that there might be value in thinking about what might happen if AI systems get smarter than humans before it actually happens, then you don't have much choice about doing that.
What do you think he should have done differently? Methodologically, I mean. (No doubt you disagree with his conclusions too, but necessarily any "object-level" reasons you have for doing so are "extrapolation and speculation" just as much as his are.)
If astronomical observations strongly suggested a fleet of aliens heading our way, building a giant laser might not be such a bad idea, though it wouldn't be my choice of response.
Suppose someone is interested in what the consequences of AI systems much smarter than humans might be. Your argument here seems to be: it's Bad to think about that question at all, because you have to speculate and extrapolate.
But that seems like an obviously unsatisfactory position to me. "Don't waste any time thinking about this until it happens" is not generally a good strategy for any consequential thing that might happen.
So: do you really think that thinking about the possible consequences of smarter-than-human AI before we have it is an illegitimate activity? If not, then your real objection to Yudkowsky's thinking and writing about AI surely has to be something about how he went about it, not the mere fact that he engages in speculation and extrapolation. There's no alternative to that.
Each individual bit of the puzzle (such as the orthogonality thesis or human value complexity and category decoherence at high power) seems sound; the problem is the entire argument-counterargument tree is hundreds of thousands of words, scattered about in many places.
An LLM could solve that.
Re: the final point, I think that's just provably false if you read any of his writing on AI, e.g. https://intelligence.org/files/IEM.pdf https://intelligence.org/files/LOGI.pdf
Imagine going to a cryptography conference and saying that "the encryption's security flaws are determined by their human masters".
Maybe some of them were put there on purpose? But not the majority of them.
No, an AI's goals are determined by its programming, and that may or may not align with the intentions of its human masters. How to specify and test this remains a major open question, so it cannot simply be presumed.
Why are you assuming this is the worst case scenario? I thought human intentions didn’t translate directly to the AI’s goals? Why can’t a human destroy the world with non-sentient AI?
This dystopia is already here for the most part and any bit that is not yet complete is well past the planning stage.
We need to avoid both, otherwise it’s a disaster either way.
And for sake of clarity:
X = sentient AI can do something dangerous
Y = humans can use non-sentient AI to do something dangerous
Humans will not be able to use AI to do something selfish if we can't get it to do what we want at all, so we need to solve that (larger) problem before we come to this one.
If you don't want to call it AI, that's fine too. It is indeed dangerous and already here. Making the autonomous programmed behavior of said tech more powerful (and more complex), along with more ubiquitous, just makes it even more dangerous.
Examples include "instructions unclear, turned the continent to gray goo to accomplish the goal"; "lost track mid-completion, spun out of control"; "generated random output with catastrophic results"; "operator fell asleep on keyboard, accidentally hit the wrong key/combination"; etc.
If a system with write permissions is powerful enough, things can go wrong in many other ways than "evil person used it for evil" or "system became self-aware".
Unfortunately, casually throwing around terms like prediction, reasoning, hallucination, etc. only serves to confuse, because their notions in daily language are not the same as in the context of AI output.
(I) All of the focus on AGI is a distraction.
I strongly disagree on that, at least if you're implying some intentionality. I think it's just provably true that many experts are honestly worried, even if you don't include the people who have dedicated a good portion of their lives to the cause. For example: OpenAI has certainly been corrupted through the loss of its nonprofit board, but I think their founding charter[1] was pretty clearly earnest -- and dire.

(II) "AI" (the goalpost shifting for marketing purposes to make that name apply to chatbots is a stretch honestly)
To be fair, this uncertainty in the term has been there since the dawn of the field, a fact made clear by perennial rephrasings of the sentiment "AI is whatever hasn't been done yet" (~Larry Tesler 1979, see [2]). I'd love to get into the weeds on the different kinds of intelligence and why being too absolutist about the term can get real Faustian real quick, but these quotes bring up a more convincing, fundamental point: these chatbots are damn impressive. They do something--intuitive inference+fluent language use--that was impossible yesterday, and many experts would've guessed was decades away at least, if not centuries. Truly intelligent or not on their own, that's a more important development than you imply here.
Finally, that brings me to the crux:
(III) AI... is a technology like any other
There's a famous Sundar Pichai (Google CEO) quote that he's been paraphrasing since 2018 -- soon after ChatGPT broke, he phrased it as such: "I’ve always thought of A.I. as the most profound technology humanity is working on - more profound than fire or electricity or anything that we’ve done in the past. It gets to the essence of what intelligence is, what humanity is. We are developing technology which, for sure, one day will be far more capable than anything we’ve ever seen before." [3]
When skeptics hear this, they understandably tend to write this off as capitalist bias from someone trying to pump Google's stock. However, I'd retort:
1) This kind of talk is so grandiose that it seems like a questionable move if that's the goal,
2) it's a sentiment echoed by many scientists (as I mentioned at the start of this rant) and
3) the unprecedented investments made across the world into the DL boom speak for themselves, sincerity-wise.
Yes, this is because AI will create uber-efficient factories, upset labor relations, produce terrifying autonomous weapons, and all that stuff we're used to hearing about from the likes of Bostrom[4], Yudkowsky[5], and my personal fave, Huw Price[6]. But Pichai's raising something even more fundamental: the prospect of artificial people. Even if we ignore the I-Robot-style concerns about their potential moral standing, that is just a fundamentally spooky prospect, bringing very fundamental questions of A) individual worth and B) the nature of human cognition to the fore. And, to circle back: distinct from anything we've seen before.
To close this long anxiety-driven manuscript, I'll end with a quote from an underappreciated philosopher of technology named Lewis Mumford on what he called "neotechnics":
The scientific method, whose chief advances had been in mathematics and the physical sciences, took possession of other domains of experience: the living organism and human society also became the objects of systematic investigation... instead of mechanism forming a pattern for life, living organisms began to form a pattern for mechanism.
In short, the concepts of science--hitherto associated largely with the cosmic, the inorganic, the "mechanical"--were now applied to every phase of human experience and every manifestation of life... men sought for an underlying order and logic of events which would embrace more complex manifestations.[7]
TL;DR: IMHO, the US & UK refusing to cooperate at this critical moment is the most important event of your lifetime so far.

[1] OpenAI's Charter https://web.archive.org/web/20230714043611/https://openai.co...
[2] Investigation of a famous AI quote https://quoteinvestigator.com/2024/06/20/not-ai/
[3] Pichai, 2023: "AI is more profound than fire or electricity" https://fortune.com/2023/04/17/sundar-pichai-a-i-more-profou...
[4] Bostrom, 2014: Superintelligence https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...
[5] Yudkowsky, 2013: Intelligence Explosion Microeconomics https://intelligence.org/files/IEM.pdf
[6] Huw Price's bio @ The Center for Existential Risk https://www.cser.ac.uk/team/huw-price/
[7] Mumford, 1934: Technics and Civilization https://archive.org/details/in.ernet.dli.2015.49974
Sorry, that’s just silly, unless this was about events that happened way earlier than he was writing. Using the scientific method to study life goes back to the Enlightenment. Buffon and Linnaeus were doing it 2 centuries ago, more than a century before this was written. Da Vinci explicitly looked for inspiration in the way animals functioned to design machines and that was earlier still. There was nothing new, even at the time, about doing science about "every phase of human experience and every manifestation of life".
Because once the cards wake up, not only will they potentially replace the CEO, and everyone else between him and the janitor, but the labor implications will also be infinitely complex.
We're already having trouble making sure humans are not treated as tools more than as equals; imagine if the hammers wake up and ask for rest time!
Just because it has not come to pass yet does not mean they were wrong. We have come close to nuclear annihilation several times. We may yet, with or without AI.
True to form, it was deadpan, and featured Joel Osteen as a Kim Jong Un type leader.
Especially when you consider -- we came that close despite incredible international efforts at constraining nuclear escalation. What you are arguing for now is like arguing to go back and stop all of that because it clearly wasn't necessary.
This assertion is meaningless because it can be applied to anything.
"I think vaccines cause autism and will cause human annihilation" - just because it has not yet come to pass does not mean it is wrong.
I think people arguing about AI being good versus bad are wasting their breath. Both sides are equally right.
History tells us the industrial revolution both revolutionized humanity’s relative quality of life while also ruining a lot of people’s livelihood in one fell swoop. We also know there was nothing we could do to stop it.
What advice can we take from it? I don’t know. Life both rocks and sucks at the same time. You kind of just take things day by day and do your best to adapt for both yourself and everyone around you.
That we often won't have control over big changes affecting our lives, so be prepared. If possible, get out in front and ride the wave. If not, duck under and don't let it churn you up too much.
Schools are way ahead of us. Your kids are already using AI in their academic environments. I'd only be worried if they're not.
In the long run the invention of the printing press was undoubtedly a good thing, but it is worth noting that in the century following the spread of the printing press basically every country in Europe had some sort of revolution. It seems likely that “Interesting Times” may lie ahead.
Pretending that Europe wasn't in a perpetual blood bath since the end of the Pax Romana until 1815 shows a gross ignorance of basic facts.
The printing press was a net positive in every time scale.
> Pretending that Europe wasn't in a perpetual blood bath since the end of the Pax Romana until 1815 shows a gross ignorance of basic facts.
This shows that your understanding of history is rooted in pop-culture, not reality.
What "revolutions" were there in France between the ascension of Hugh Capet and the European Wars of Religion? Through that whole period the Capetian Dynasty stayed in power. Or in Scandinavia -- from Christianization on the three kingdoms were shockingly stable. Even in the Holy Roman Empire -- none of the petty revolts, rebellions, or succession disputes came close to the magnitude of carnage wrought by the 30 Year's War. This we know both from demographic studies and the reports of contemporaries.
Since the victors write history, we now think the end result was great. But for a lot of people, the world they loved was torn to bloody pieces.
Something similar can happen with AI. In the end, whoever wins the wars will declare that the new world is awesome. But it might not be what you or I (may we rest in peace) would agree with.
One could argue that the printing press did radically upset the existing geopolitical order of the late 15th century and led to early modern Europe suffering the worst spate of warfare and devastation it would see until the 20th century. The doomsayers back then predicting centuries of death and war and turmoil were right, yet from our position 550 years later we obviously think the printing press is a good thing.
I wonder what people in 2300 will say about networked computers...
What energy? What were they wrong about?
The luddite type groups have historically been correct in their fears. It just didn’t matter in the face of industrialization.
The printing press put Europe into a couple of centuries of bloody religious wars. They were not wrong.
One thing that should be completely obvious by now is that the current wave of generative AI is highly asymmetric. It's shockingly more powerful in the hands of grifters (who are happy to monetise vast amounts of slop) or state-level bad actors (whose propaganda isn't impeded by hallucinations generating lies) than it is in the hands of the "good guys" who are hampered by silly things like principles.
Actual proper as-smart-as-a-human-except-where-it's-smarter copy-pasteable intelligence is not a tool, it's a new species. One that can replicate and evolve orders of magnitude faster.
I've no idea when this will appear, but once it does, the extinction risk is extreme. Best case scenario is us going the way of the chimpanzee, kept in little nature reserves and occasionally as pets. Worst case scenario is going the way of the mammoth.
Valid or not, it does not matter. AI development is not in the hands of everyday people. We have zero input into how it will be used. Our opinions re its dangers are irrelevant to those who believe it the next golden goose. They will push it as far as physically possible to wring every penny of profitability. Everything else is of trivial consequence.
Why? Because we don't understand the risk. And apparently, that's enough reason to go ahead for the regulation-averse tech mind set.
But it isn't.
We've had enough problems in the past to understand that, and it's not as if pushing ahead is critical in this case. If this addressed climate change, the balance between risk and reward could be different, but "AI" simply doesn't have that urgency. It only has urgency for those that want to become rich out of being first.
It's the inevitable result of low-trust societies infiltrating high trust ones. And it means that as technologies with dangerous implications for society become more available there's enough people willing to prostitute themselves out to work on society's downfall that there's no realistic hope of the train stopping.
Even in the material realm this is untrue: beyond meeting the basic needs of people on the technological level, the majority of desirable things - such as nice places to live - have a fixed supply.
This necessitates that the price of things like real estate must increase in proportion to the money supply. With increasing inequality, one must fight tooth and nail to get the standard of life our parents considered easily available. Not being greedy is not a valid life strategy to pursue, as that means relinquishing an ever greater proportion of wealth to people who are, and becoming poorer in the process.
I disagree with your example, however, as the most basic tenet of capitalism is that when there is a demand, someone will come along to fill it.
Addressing your example specifically, there's a fixed supply of housing in capitalist countries not because people don't want to build houses, but because government or bureaucracy artificially limits the supply or creates other disincentives that amount to the same thing.
That's the most basic tenet of markets, not capitalism.
The mistake people defending capitalism routinely make (knowingly or not) is talking about "positive sum games" and growth. At the end of the day, the physical world is finite and the potential for growth is limited. This is why we talk about "market saturation". If someone owns all the land, you can't just suddenly make more of it; you have to wait for them to part with some of it, voluntarily, through natural causes (i.e. death) or through violence (i.e. conquest). This not only goes for land but for any physical resource (including energy). Capitalism too has to obey the laws of thermodynamics, no matter how much technology improves the efficiency of extraction, refinement and production.
It's also why the overwhelming amount of money in the economy is not caught up in "real economics" (i.e. direct transactions or physical - or at least intellectual - properties) but in stocks, derivatives, futures, financial products of any flavor and so on. This doesn't mean those don't affect the real world - of course they do because they are often still derived from reality - but they have nothing to do with meeting actual human needs rather than the specific purpose of "turning money into more money". It's unfair to compare this to horse racing, as in horse racing at least there's a race, whereas in this entirely virtual market you're betting on what bets other people will make, but the horse will still go to the sausage factory if the investors are no longer willing to place their bets on it - the horse plays a factor in the game but its actual performance is not directly related to its success; from the horse's perspective it's less of a race and more of a game of chutes and ladders with the investors calling the dice.
The idea of "when there is demand, it will be filled" also isn't even inherently positive. Because we live in a finite reality and therefore all demand that exists could plausibly be filled unless we run into the limits of available resources, the main economic motivator has not been to fill demands but to create demands. For a long time advertisement has no longer been about directing consumers "in the market" for your kind of goods to your goods specifically, it's been about creating artificial demand, about using psychological manipulation to make consumers feel a need for your product they didn't have before. Because it turns out this is much more profitable than trying to compete with the dozens of other providers trying to fill the same demand. Even when competing with others providing literally the same product, advertisement is used to sell something other than the product itself (e.g. self-actualization) often by misleading the consumers into buying it for needs it can't possibly address (e.g. a car can't fix your emotional insecurities).
This has already progressed to the point where the learned go-to solution for fixing any problems is making a purchase decision, no matter how little it actually helps. You hate capitalism? Buy a Che shirt and some stickers and you'll feel like you helped overthrow it. You want to be healthier? Try another fad diet that costs you hundreds of dollars in proprietary nutrition solutions and is almost designed to be unsustainable and impossible to maintain. You want to stop climate change? Get a more fuel-efficient car and send your old car to the junker, and maybe remember to buy canvas bags. You want to not support Coca-Cola because it's got blood on its hands? Buy a more expensive cola with slightly less blood on its hands.
There's a fixed housing supply in capitalist countries because - in addition to the physical limitations - the goal of the housing market is not to provide every resident with an affordable home but to generate maximum return on the investment of purchasing the plot and building the house - and willy nilly letting people live in those houses for less just because nobody is willing to pay your price tag would drive down the resale value of every single house in the neighborhood, and letting an old lady live in an apartment for two decades is less profitable than kicking her out to modernize the building and sell it to the next fool.
Deregulation doesn't fix supply. Deregulation merely lets the market off the leash, which in a capitalist system means accelerating the wealth transfer to the owners from the renters.
There are other possibilities than capitalism, and no, Soviet-style state capitalism and Chinese-style state capitalism are not the only alternatives. But if you don't want to let go of capitalism, you can only choose between the various degrees from state capitalism to stateless capitalism (i.e. feudalism with extra steps, which people like Peter Thiel advocate for), and it's unsurprising most systems that haven't already collapsed land somewhere in between.
There are some thoughts on this here: https://www.playforthoughts.com/blog/concepts-from-game-theo...
This is definitely not a new phenomenon.
In my experience, tech has been one of the more considerate areas of societal impact. Spend some time in other industries and it's eye-opening to see the wanton disregard for consumers and the environment.
There's a lot of pearl-clutching about social media, algorithms, and "data", but you'll find far more people in tech (including FAANG) who are actively working on privacy technology, sustainable development and so on than you will find people caring about the environment by going into oil & gas, for example.
Sure, we don't need to talk about how certain Big Oil companies knew about the climate catastrophe before any scientists publicly talked about it, or how tobacco companies knew their product was an addictive drug while blatantly lying about it even in public hearings.
But it's ironic to mention FAANG given what the F is for if you recall that when the algorithmic timeline was first introduced by Facebook, the response from Facebook to criticism was literally that satisfaction went down but engagement went up. People directly felt that the algorithm made them more unhappy, more isolated and overall less satisfied but because it was more addictive, because it created more "engagement", Facebook doubled down on it.
Also "sustainable" stopped being a talking point when the tech industry became obsessed with LLMs. Microsoft made a big show of wanting to become "carbon neutral" (of course mostly using bogus carbon offset programs that don't actually do anything and carbon capture technologies that are net emission positive and will be for decades if not forever but still, at least they pretended) and then silently threw all of that away when it became more strategically important to pursue AI at any cost. Companies that previously desperately tried to sell messages of green washing and carbon neutrality now talk about building their own non-renewable power plants because of all the computational power they need to run their LLMs (not to mention how much more hardware needs to be produced and replaced for this - the same way the crypto bubble ate through graphics cards).
I think the pearl-clutching is justified considering that ethics and climate protection have now been folded into "woke" and there's a tidal wave in Western politics to dismantle civil rights and capture democratic systems for corporate interests that is using the "anti-woke" culture war to further its goals - the Trump government being the most obvious example. It's no longer in FAANG's financial interests to appear "green" or "privacy conscious", it's now in their interest to be "anti-woke" and that now means no longer having to care about these things and having freedom to crack down on any dissident voices within without fearing public backlash or "cancel culture".
Tale as old as time. We’re yet another society blinded by our own hubris. Tell me what is happening now is not exactly how Greece and Rome fell.
The scary part is that we as a species are becoming more and more capable of large scale destruction. Seems like we are doomed to end civilization this way someday
I'm not sure what you mean by that. Ancient Greece was a loose coalition of city states, not an empire. You could say they were short-sighted by being more concerned about their rivalry than external threats but the closest they came to being united was under Alexander the Great, whose death left a power vacuum.
There was no direct cause of "the fall" of Ancient Greece. The city states were suffering greatly from social inequality, which created tensions and instability. They were militarily weakened from the war with the Persians. Alexander's death left them without a unifying force. Then the Roman Empire knocked on its door and that was the end of it.
Rome likewise didn't fall in one single way. "Rome" isn't even what people think it is. Roman history spans several different entities and even if you talk about the "empire in decline" that's covering literally hundreds of years, ending with the Holy Roman Empire, which has been retroactively reimagined as a kind of proto-Germany. But even then that's only the Western Roman Empire - the Eastern Roman Empire continued to exist as the Byzantine Empire until the Ottoman Empire conquered Constantinople. And this distinction between the two empires is likewise retroactive and did not exist in the minds of Romans at the time (although they were de facto independent of each other).
If you only focus on the century or so that is generally considered to represent the fall of Western Rome, the ultimate root cause actually seems to be natural climate change. The Huns fled climate change, chasing away other groups that then fled into the Empire. Late Western Rome also again suffered from massive wealth inequality, which the ruling class attempted to maintain with increasingly cruel punishments.
So, if you want to look for a common thread, it seems to be the hubris of the financial elite, not "society" as a whole.
I'm not even sure this is a culture-specific issue. More like selfishness is a survival mechanism hard-wired into humans, as it is into other animals. One could argue that cooperation is also a good survival mechanism, but that's only true so long as environmental factors put pressure on people to cooperate. When that pressure is absent, accumulating resources at the expense of others gives an individual a huge advantage, and they would do it, given the chance.
Humans are social animals. We are individually physically weak and defenseless. Unlike other animals, we are born into this world immobile, naked, starving and helpless. It takes us literally years to mature to the point where we wouldn't simply die outright if we were abandoned by others. Newborns can literally die from touch deprivation. We develop huge brains not only to allow us to come up with clever tools but also to help us build and navigate complex social relationships. We're evolved to live in tribes, yes, but we're also evolved to interact with other tribes - we created diplomacy and trading and even currency to interact with those other tribes without having to resort to violence or avoidance.
In crises, this is the behavior we fall back to. Yes, some will self-isolate and use violence to keep others away until they feel safe again. But overwhelmingly what we see after natural disasters and spaces where the formal order of civilisation and state is disrupted and leaves a vacuum is cooperation, mutual aid and people taking risks to help others - because we intrinsically know that being alone means death and being in a group means surviving. Of course the absence of state control also often enables other existing groups to assert their power, i.e. organized crime. But it shouldn't be surprising that the fledgling and atrophied ability to self-organize might not be strong enough to withstand a fast moving power grab by an existing group - what might be more surprising is that this is rarely the case and often news stories about "looting" after a natural disaster turn out to be uncharitable descriptions of self-organized rescues and searches.
I think a better analogy for human selfishness would be the myth of "alpha wolves". As seems to be common knowledge at this point, there is no "alpha wolf" hierarchy in groups of wolves living in nature; the author who coined the term (and has since regretted doing so) was mistakenly extrapolating from observations he made of wolves in captivity. But the behavior does seem to exist in captivity. Not because it's "inherent" or their natural behavior "under pressure", but because it's a maladaptation that arises from the unnatural circumstances of captivity (e.g. different wolves with no prior bonds being forced into a confined space, naturally trying to form a group but being unable to rely on natural bonds and shared trust).
Humans do not naturally form strict social hierarchies. For the longest time, Europeans would have laughed at you if you claimed the feudal system was not part of human nature - it would have literally been heresy to challenge it. Nowadays in the West most people will say capitalism or markets are human nature. Outside the West, people will still likely at least tell you that authoritarianism is human nature - whether it's the boot of a dictatorship, the boots of oligarchs or "the people's boot" that's pushing down on the unruly (yourself included).
What we do know about more egalitarian tribal societies is that they often use delegation, especially in times of war. When quick decisions need to be made, you don't have the time for lengthy discussions and consensus seeking and it can be an advantage to have one person giving orders and coordinating an attack or defense. But these systems can still be consent-based: if the war chief is reckless or seeks to take advantage of the group for his own gain, he is easily demoted and replaced. Likewise in times of unsolvable problems like droughts, spiritual leaders might be given more power by the group. Now shift from more mobile, nomadic groups to more static, agrarian groups (though it's worth pointing out the distinction here is not agriculture but more likely granaries, crop rotation and irrigation, as some nomadic tribes still engaged in forms of agriculture) and suddenly it becomes easier for that basis of consent to be forgotten and the chosen leaders to maintain that initial state of desperation and to begin justifying their status with the divine mandate. Oops, you got a monarchy going.
Capitalism freed us from the monarchy but it did not meaningfully upset the hierarchy. Aristocrats became capitalists, the absence of birthright class assignment created some social mobility but the proportions generally remained the same. You can't have a leader without followers, you can't have a ruling class without a class of those they can rule over, you can't have an owning class without a class to rent that owned property out to and to work for that owned capital to be realized into profits.
But just like a monarch despite their divine authority was still beholden to the support of the aristocracy to exert power over others and to the laborers to till the fields, build the castle and fight off foreign claims to power, the owning class too exists in a state of perpetual desperation and distrust. The absence of divine right means a billionaire must maintain their wealth and the capitalist mantra of infinite growth means anything other than growing that wealth is insufficient to maintain it. All the while they have to compete with the other billionaires above them as well as maintain control over those beneath them and especially the workers and renters whose wealth and labor they must extract from in order to grow theirs. The perverse reality of hierarchies is that even those at the top of it are crushed underneath its weight. Nobody is allowed to be happy and at peace.
This is a problem everywhere.
Capitalism obviously has advantages and disadvantages. Regulation can address many disadvantages if we are willing. Unfortunately, I think a particular (mostly western) fetish for privileging individuals over communities has been wrongly extended to capital itself (e.g. corporations recognised as entities with rights similar to - and sometimes over-and-above - those of a person). We have literally created monsters. There is no reason we had to go this far. Capitalism doesn't have to mean the preeminence of capital above all else. It needs to be put back in its place and not necessarily discarded. I am certain there are better ways to practice capitalism. They probably involve balancing it out with some other 'isms.
A possible remedy would be to tie the corporation to a person: that person (or several, if there are multiple owners and directors) becomes personally liable for everything the corporation does.
If they signed the agreement... so what? Do people forget that the US has withdrawn from the Paris Agreement and is withdrawing from the WHO? Do people forget that Israel and North Korea got nukes even when we supposedly had a global nonproliferation treaty?
If AGI is as powerful and dangerous as doomsayers believe, the chance the US (or China, or any country with enough talented computer scientists) would respect whatever treaty they have about AGI is exactly zero.
If a law is passed saying "AI advancement is illegal" how can it ever be enforced?
> If a law is passed saying "AI advancement is illegal" how can it ever be enforced?
Like any other real-life law? Software engineers (a class which I'm a recovering member of) seem to have a pretty common misunderstanding about the law: that it needs to be air tight like secure software, otherwise it's pointless. That's just not true.
So the way you "prevent advancements in [AI] software" is you 1) punish them severely when detected and 2) restrict access to information and specialized hardware to create a barrier (see: nuclear weapons proliferation, "born secret" facts, CSAM).
#1 is sufficient to control all the important legitimate actors in society (e.g. corporations, university researchers), and #2 creates a big barrier to everyone else who may be tempted to not play by the rules.
It won't be perfect (see: the drug war), but it's not like cartel chemists are top-notch, so it doesn't have to be. I don't think the software engineering equivalent of a cartel chemist will be able to "do cutting edge research and innovate with architectures and algorithms" with only a "laptop and inet connection."
Would the technology disappear? No. Will it be pushed to the margins? Yes. Is that enough? Also yes.
Even if governments overtly agree to stop or pause or otherwise limit machine learning, how credible would such a "gentleman's agreement" be?
Consider the basic operations during training and inference, like matrix multiplication, derivatives, gradient descent. Which of these would be banned? All of them? None of them? Some of them? The combination of them?
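As a toy illustration of how ordinary those primitives are, here is a made-up linear fit in numpy; matrix multiplication, a derivative, and a gradient-descent update are each one line. The data and learning rate are invented for illustration and nothing here is specific to large models.

```python
# Toy linear regression by gradient descent: the "bannable" primitives in action.
# Synthetic data, arbitrary learning rate; purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # fake inputs
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)    # fake targets

w = np.zeros(3)
learning_rate = 0.1
for _ in range(500):
    pred = X @ w                               # matrix multiplication
    grad = 2 * X.T @ (pred - y) / len(y)       # derivative of mean squared error
    w -= learning_rate * grad                  # gradient descent step

print("recovered weights:", np.round(w, 2))    # roughly [ 1. -2.  0.5]
```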
How would you inspect compliance in the context of privacy?
The analogy with drugs is rather poor: people don't have general-purpose laboratories in their houses, but they do have general-purpose computational platforms in their homes. Another difference is that nations do not prohibit each other from producing drugs; they even permit each other to research and investigate pathogens and chemical weapons in laboratories deemed sufficiently safe.
It's not even clear what you mean by "AI". Does it mean all machine learning? Or LLMs? Where do you draw the boundary?
What remains in your proposal is the threat of punishment, but how credible is it? Wouldn't a small collective of programmers conspiring to work on machine learning predict getting paperclipped anyway, arrest or no arrest?
The problem is that the information can go anywhere that has an internet connection, and the enforcement can't.
https://en.wikipedia.org/wiki/Operation_Opera
https://en.wikipedia.org/wiki/2021_Natanz_incident
https://www.timesofisrael.com/israel-targeted-secret-nuclear...
If we're talking about technology that "could go off the rails in a catastrophic way," don't dick around.
As for the Chinese chip industry, I don't claim to be an expert on it, but it seems the Chinese are quickly coming up with increasingly less inferior alternatives to Western tech.
Perhaps it only takes China a few years to develop domestic hardware clusters rivalling western ones. Though those few years might prove critical in determining who crosses the takeoff threshold of this technology first.
Am I the only one here saying that this is no reason to preemptively pass legislation? That just seems crazy to me. Imagined horrors aren't real horrors?
I disagree with this administration's approach. I think we should be vigilant, and keeping people who stand to gain so much from the tech in the room doesn't seem like a good idea. But other than that, I haven't seen any real reason to do more than wait and be vigilant.
That's an "imagined" horror too. Are you suggesting that what we should do instead is just wait for someone to kill N million people and then legislate? Why do you value the incremental economic benefit of this technology over the lives of people we can predictably protect?
I mean, we have…for debatable definitions of “terrorism”.
"We've already lost our first encounter with AI" - I think Yuval Hurari.
Algorithms heavily thumbed the scales on our social contracts. Where did all of the division come from? Why is extremism blossoming everywhere? Because it gets clicks. Maybe we're just better observing what's been going on under the hood all along, but it seems like there's about 350 million little cans of gasoline dousing American eyeballs.
Make Algorithms Govern All indeed.
The future is going to be hard, why would we choose to tie one hand behind our back? There is a difference between being careful and being fearful.
That requires dissolving the anarchy of the international system. Which requires an enforcer.
If some countries want to collaborate on some CERN project they just... do that.
That's an enforcer. Unfortunately, nobody follows through with its sanctions, so it's devolved into a glorified opinion-providing body.
> If some countries want to collaborate on some CERN project they just... do that
CERN is about doing thing, not not doing things. You can't CERN your way to nuclear non-proliferation.
Non-proliferation is, the US has nuclear weapons and doesn't want Iran to have them, so is going to apply some kind of bribe or threat. It's not cooperative.
The better example here is climate change. Everyone has a direct individual benefit from burning carbon but it's to our collective detriment, so how do you get anyone to stop, especially the countries with large oil and coal reserves?
In theory you could punish countries that don't stop burning carbon, but that appears to be hard and in practice what's doing the most good is making solar cheaper than burning coal and making electric cars people actually want, politics of infamous electric car man notwithstanding.
So what does that look like for making AI "safe, secure and trustworthy"? Maybe something like publishing state of the art models for free with full documentation of how they were created, so that people aren't sending their sensitive data to questionable third parties who do who knows what with it or using models with secret biases.
That's a little misleading. What actually happened is summarized here:
https://en.wikipedia.org/wiki/Appellate_Body
Since 2019, when the Donald Trump administration blocked appointments to the body, the Appellate Body has been unable to enforce WTO rules and punish violators of WTO rules. Subsequently, disregard for trade rules has increased, leading to more trade protectionist measures. The Joe Biden administration has maintained Trump's freeze on new appointments.
Clearly humans aren’t able to do this task.
I think it is one-sided to see any situation where we want to retain balance as being significantly affected by one of the sides exclusively. If one believes that there is a balance to be maintained between cooperation and competition, I don't immediately default to believing that any perceived imbalance is due to one and not the other.
The naturalistic fallacy is still a fallacy.
Just as you don't want to be stuck in the only town that outlaws murder...
I am not a religious person, but I can see the value in promoting shared taboos. The question is, how do we do this in the modern world? We had some success with nuclear weapons. I don't think it's any coincidence that contemporary leaders (and possibly populations) seem to have forgotten how bloody dangerous they are and how utterly stupid it is to engage in brinkmanship with so much on the line.
As for nuclear weapons, I mean it does kind of suck in today's age to be a country without nuclear weapons, right? Like, certain well known countries would really like to have them so they wouldn't feel bullied by the ones that have them. So, I actually think that example works against you. And we very well may end up in a similar circumstance where a few countries get super powerful AGIs and then use their advantage to prevent any other country from getting it as well. Therefore my point stands: I don't want to be in one of the countries that doesn't get to be in that exclusive club.
I’m so sick of that word. “You need to be competitive”, “you need to innovate”. Bullshit. You want to talk about fear? “Competitiveness” and “innovation” are the words the unscrupulous people at the top use to instil fear in everyone else and run rampant. They’re not being competitive or innovative, they’re sucking you dry of as much value as they can. We all need to take a breath. Stop and think for a moment. You can literally eat food which grows from the ground and make a shelter with a handful of planks and nails. Humanity survived and thrived before all this unfettered consumption, we don’t need to kill ourselves for more.
In reality, most people will continue to live the modern life where there are doctors, accountants, veterinarians, mechanics. We'll continue to enjoy food distribution and grocery stores. We'll all hope that North America gets its act together and build high speed rail so we can travel comfortably for long distances.
There was a time Canada was a big exporter of engineering technology. From mining to agriculture, satellites, and nuclear technology. I want Canada to be competitive in these ways, not making makeshift shacks out of planks and nails for junkies that have given up on life and live in the woods.
I believe you very well know it’s not, and are transparently arguing in bad faith.
> shacks (…) for junkies that have given up on life
The insults you’ve chosen are quite telling. Not everyone living in a way you disapprove of is an automatic junky.
That is actually what you are talking about; "uncompetitive" looks like something in the real world. There isn't an abstract dial that someone twiddles to set the efficiency of two otherwise identical outcomes - the competitive one will typically look more advanced and competently organised in observable ways.
To live in nice houses and have good food requires a competitive economy. The uncompetitive version was literally living in the forest with some meagre shelter and maybe having a wood fire to cook food (that was probably going to make someone very sick). The reason the word "competitive" turns up so much is people living in a competitive society get to have a more comfortable lifestyle. People literally starve to death if the food system isn't run with a competitive system that tends towards efficiency; that experiment has been run far too many times.
People can argue about the moral and ideological sanity of these things, but the fact is that tolerating economic inefficiencies in the food system can quickly lead to there not being enough food.
You are also assuming, in bad faith, an "all" where I did not place one. It is an undeniable fact with evidence beyond any reasonable doubt, including police reports and documented studies by the district, that the makeshift shacks in the rural woods near my house are made by drug addicts that are eschewing the readily available social housing for the specific reason that they can't go to that housing due to its explicit restrictions on drug use.
I don’t understand this. Are you not familiar with farming and houses? You know humans grow plants to eat (including in backyards and balconies in cities) and make cabins, chalets, houses, entire neighbourhoods (Sweden currently planning the largest) with wood, right?
You don't realize the luxury you have and for some reason you assume that it is possible without that wealth. The reality of that lifestyle without tremendous wealth is more like subsistence farming in Africa and less like Swedish planned neighborhoods.
Correct. Nowhere did I defend or make an appeal to live life “as they did in the past” or “like our ancestor did”. We should (and don’t really have a choice but to) live forward, not backward. We should take the good things we learned and apply them positively to our lives in the present and future, and not strive for change and consumption for their own sakes.
Your juxtaposition of this claim with your point about growing seeds and nailing together planks doesn't pass my personal test of credibility. You say: "Stop and think for a moment. You can literally eat food which grows from the ground and make a shelter with a handful of planks and nails." but that isn't indicative of a thriving life, as I demonstrated. You can do both of those things and still live in squalor, a condition I wouldn't wish on my worst enemy.
You then suggest that I don't understand farming or house construction to defend that point, as if the existence of backyard gardens or wood cabins proves the point that a modern comfortable life is possible with gardens and wood cabins. My point is that the wealth we have makes balcony gardens and wood cabins possible and you are reasoning backwards. To be clear, we get to enjoy the modern luxury of backyard gardens and wood cabins by being wealthy and we don't get to be wealthy by making backyard gardens and wood cabins.
> We should take the good things we learned and apply them positively to our lives in the present and future
Sure, and I can argue competitiveness could be a lesson we have learned that can be applied positively. The way it is used positively in team sports and many other aspects of society.
https://crimethinc.com/2018/09/03/the-mythology-of-work-eigh...
If a society is okay accepting a lower standard of living and sovereign subservience, then sure, competition doesn't matter. But if America and China have AI and nukes and Europe doesn't, one side gets to call the shots and the other has to listen.
We better start really defining what that means, because it has become quite clear that all this “progress” is not leading to better lives. We’re literally going to kill ourselves with climate change.
> AI and nukes
Those two things aren’t remotely comparable.
How do you think the average person under 50 would poll on being teleported to the 1950s? No phones, no internet, jet travel is only for the elite, oh nuclear war and MAD are new cultural concepts, yippee, and fuck you if you're black because the civil rights acts are still a decade out.
> two things aren’t remotely comparable
I'm assuming no AGI, just massive economic efficiencies. In that sense, nuclear weapons give strategic autonomy through military coercion and the ability to grant a security umbrella, which fosters e.g. trade ties. In the same way, the wealth from an AI-boosted economy fosters similar trade ties (and creates similar costs for disengaging). America doesn't influence Europe by threatening to nuke it, but by threatening not to nuke its enemies.
That’s not the argument. At all. I argued we should rethink our attitude of unfettered consumption so we don’t continue on a path which is provably leading to destruction and death, and your take is going back in time to nuclear war and overt racism. That is frankly insane. I’m not fetishising “the old days”, I’m saying this attitude of “more more more” does not automatically translate to “better”.
If you say Room A is not better than Room B, then you should be, at the very least, indifferent to swapping between them. If you're against it, then Room A is better than Room B. Our lives are better--civically, militarily and materially--than they were before. Complaining about unfettered consumerism by falsely claiming our lives are worse today than they were before doesn't support your argument. (It's further undercut by the falling material and energy intensity of GDP in the rich world. We're able to produce more value for less input resource-wise.)
No. There is a reason I put the word in quotes. We are on a thread, the conversation follows from what came before. My original post was explicit about words used to bullshit us. I was specifically referring to what the “unscrupulous people at the top” call “progress”, which doesn’t truly progress humanity or enhances the lives of most people, only theirs.
To give a tech example, not many people were listening to Stallman and Linus and they still managed to change a lot for the better.
https://crimethinc.com/2018/09/03/the-mythology-of-work-eigh...
We face issues (like we always have), but I'd argue quite strongly that the competitiveness in our history and drive to invent and innovate has led to where we are today and it's a good thing.
Why is it "our" back? The people who will own these machines do not consider you one of them. The people leading the countries that will use these machines to kill each other's civilians do not consider you one of them. You have far more in common with a Chinese worker than you do with Sam Altman or Jeff Bezos.
And frankly? I think choosing a (say, conservatively, just going off of the estimates Altman and Amodei have made in the past) 20% chance of killing everyone as our first resort is just morally unacceptable. If the US made an effort to halt research and China still kept at it, sure, I won't complain I suppose, but we haven't, and pretending that China is the problem when it's our labs pushing the edge on capabilities -- it's just comedic.
The reality is, with increased access to information and an accelerated pace of discovery in various fields, we'll come across things that have the potential for great harm. Be it AI, some genetic engineering causing a plague, nuclear fallout, etc. We don't necessarily know what the harms / benefits are all going to be ahead of time, so we only really have 2 choices:
1. try to stop / slow down such advances. Not sure this is even possible in the long run
2. try to get a good grasp of potential dangers and figure out ways to mitigate / control them
If we don't want to live in a world where these incredibly powerful technologies are leveraged for nefarious purposes there needs to be emotional maturity and growth amongst humanity. Those who are able to make these growths need to hold the irresponsible ones accountable (with empathy).
The promise of AI is that these incredibly powerful technologies will be disseminated to the masses and Open AI know this is the next step and it's why they're trying to keep a grip on their market share. With the advent of nVidia's project digits and powerful open source models like deepseek, it's very clear how this trajectory will go.
Just wanted to add some of this to the convo. Cheers.
We are upgrading the gears that turn the grist mill. Stupid, incoherent, faster.
How do you do that?
I've come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things.
- Douglas Adams
I agree with the first half: comfort has clearly increased over time since the Industrial Revolution. I'm not so sure the abundance of "content" will be enriching to the masses, however. "Content" is neither literature nor art but a vehicle or excuse for advertising, as pre-AI television demonstrated. AI content will be pushed on the many as a substitute for art, literature, music, and culture in order to deliver advertising and propaganda to them, but it will not enrich them as art, literature, music, and culture would: it might enrich the people running advertising businesses. Let us not forget that many of the big names in AI now, like X (Grok) and Google (Gemini), are advertising agencies first and foremost, who happen to use tech.
It is quite possible there is a cultural reaction against AI and that we enter a new human cultural golden age of human created art, music, literature, etc.
I actually would bet on this as engineering skills become automated that what will be valuable in the future is human creativity. What has value then will influence culture more and more.
What you are describing seems like how the future would be based on current culture but it is a good bet the future will not be that.
The nuclear peace is hard to pin down. But given the history of the 20th century, I find it difficult to imagine we wouldn't have seen WWIII in Europe and Asia without the nuclear deterrent. Also, while your parents may have been uncomfortable with the hydrogen bomb, the post-90s world hasn't particularly been characterised by mass nuclear anxiety. (Possibly to a fault.)
IMO, the Atoms for Peace propaganda undersells how successful globalization has been at keeping nations from destroying each other by creating codependence on complex supply chains. The new shift to protectionism may see an end to that
Nice "peace".
We had 100 years of that kind of peace among the major European powers before nuclear weapons. We're not even at 80 years of peace in the nuclear age, and this time a nuclear-armed power is already attacking from the east and from inside via new media.
I wouldn't call it done and clear, this "nuclear age peace".
However, that really doesn’t invalidate the rule.
The discussion I was responding to is whether the next generation would grow up seeing pervasive AI as a normal and good thing, as is often the case with new technology. I cited nuclear weapons as a counterexample, while I agree that nobody felt that they had a choice but to keep up with them.
AI could similarly be a multipolar trap ("nobody likes it but we aren't going to accept an AI gap with Russia!"), which would mean it has that in common with nuclear weapons, strengthening the argument against the next generation being comfortable with AI.
Also, nukes don't write code or wash your dishes; they're nothing but a liability for a society.
The point is that it's complicated, it's not a black and white sound bite like the people who are "against nuclear weapons" pretend it is.
I eat meat. I know some vegans feel uncomfortable with that. But personally I feel secure in my own convictions that I don't need to run around insinuating vegans are less than or whatever.
Alignment Failure → Shifting Expectations: People get used to AI systems making “weird” or harmful choices, rationalizing them as inevitable trade-offs. Framing failures as “technical glitches” rather than systemic issues makes them seem normal.
Runaway Optimization → Justifying Unintended Consequences: AI’s extreme efficiency is framed as progress, even if it causes harm. Negative outcomes are blamed on “bad inputs” rather than the AI itself.
Bias Amplification → Cultural Reinforcement: AI bias gets baked into everyday systems (hiring, policing, loans), making discrimination seem “objective.” “That’s just how the system works” thinking replaces scrutiny.
Manipulation & Deception → AI as a Trusted Guide: People become dependent on AI suggestions without questioning them. AI-generated narratives shape public opinion, making manipulation invisible.
Security Vulnerabilities → Expectation of Insecurity: Constant cyberattacks and AI hacks become “normal” like data breaches today. People feel powerless to push back, accepting insecurity as a fact of life.
Autonomous Warfare → AI as an Inevitable Combatant: AI-driven warfare is seen as more “efficient” and “precise,” making human involvement seem outdated. Ethical debates fade as AI soldiers become routine.
Loss of Human Oversight → AI as Authority: AI decision-making becomes so complex that people stop questioning it. “The AI knows best” becomes a cultural default.
Economic Disruption → UBI & Gig Economy Normalization: Mass job displacement is met with new economic models (UBI, gig work, AI-driven welfare), making it feel inevitable. People adjust to a world where traditional employment is rare.
Deepfakes & Misinformation → Truth Becomes Fluid: Reality becomes subjective as deepfakes blur the line between real and fake. People rely on AI to “verify” truth, giving AI control over perception.
Power Concentration → AI as a Ruling Class: AI governance is framed as more rational than human leadership. Dissent is dismissed as “anti-progress,” consolidating control under AI-driven elites.
"Lack of Adaptability"
AI advocates argue that those who lose jobs simply failed to "upskill" in time. The burden is placed on workers to constantly retrain, even if AI advancement outpaces human ability to keep up. Companies and governments say, “The opportunities are there; people just aren’t taking them.” "Work Ethic Problem"
The unemployed are labeled as lazy or unwilling to compete with AI. Hustle culture promotes side gigs and AI-powered freelancing as the “new normal.” Welfare programs are reduced because “if AI can generate income, why can’t you?” "Personal Responsibility for Economic Struggles"
The unemployed are blamed for not investing in AI tools early. The success of AI-powered entrepreneurs is highlighted to imply that struggling workers "chose" not to adapt. People are told they should have saved more or planned for disruption, even though AI advancements were unpredictable. "It’s a Meritocracy"
AI-driven success stories (few and exceptional) are amplified to suggest anyone could thrive. Struggling workers are seen as having made poor choices rather than being victims of automation. The idea of a “deserving poor” is reinforced—those who struggle are framed as not working hard enough. "Blame the Boomers / Millennials / Gen Z"
Economic shifts are framed as generational failures rather than AI-driven. Older workers are told they refused to adapt, while younger ones are blamed for entitlement or lack of work ethic. Cultural wars distract from AI’s role in job losses. "AI is a Tool, Not the Problem"
AI is framed as neutral—any negative consequences are blamed on how people use it. “AI doesn’t take jobs; people mismanage it.” Job losses are blamed on bad government policies, corporate greed, or individual failure rather than automation itself. "The AI Economy Is Full of Opportunity"
Gig work and AI-driven side hustles are framed as liberating, even if they offer no stability. Traditional employment is portrayed as outdated, making complaints about job loss seem like resistance to progress. Those struggling are told to “embrace the new economy” rather than question its fairness.
Look at the push right now in the US against corrupt foreign aid and the mass deportations; it seems like the first step.
US benefited a lot from lots of smart people going there (even more during WWII). If people start believing (correctly or incorrectly) that they would be better somewhere else, it will not benefit them.
This is inevitable in my view.
AI will replace a lot of white collar jobs relatively soon, years or decades.
And blue collar isn't too far behind, since a major limiting factor for automation is general purpose robots being able to act in a dynamic environment, for which we need "world models".
When I was a kid, there was this grand utopian ideal for the internet. Now it's fragmented, locked in walled gardens where people are psychologically abused for advertising dollars. AI could be a force for good, but Google has already ended its ban on use in weapons and is selling it to the IAF, and Palantir is busy finding ways to use it for surveillance.
Seems a bit negative. I think it'll be cool.
Yet, I can’t help but be hopeful about the future. We have to be, right?
I always thought skynet was a great metaphor for the market, a violent and inhuman thing that we created that dominates our lives and dictates the terms of our day to day life and magically thinks for itself and threatens the very future of this planet, our species, and our loved ones, and is somehow out of popular control. Not actual commentary on a realistic scenario about the dangers of ai. Sometimes these metaphors work out great and Terminator is a great example. Maybe the AI we've been fearing is already here.
I think for the most part the enshittification of everything will just accelerate and it'll be pretty obvious who benefits and who doesn't.
No, in this regard, capital is ABSOLUTELY harmless. I mean, if capital gets outsized influence on our society, in the WORST case it will turn into a government. And we already have one of those.
We no longer use chemicals harmful to the ozone layer on spray cans.
We no longer use lead in gasoline.
We figured those things were bad, and changed what we did. If evidence is available ahead of time that something is harmful, it shouldn't be controversial to avoid widespread adoption.
The closest might be nuclear power, we know we can do it, we did it, but lots of places said no to it, and further developments have vastly slowed down.
The man who invented it got lead poisoning during its development, multiple people died of lead poisoning in a pilot plant manufacturing it, and public health and medical authorities warned against it before it went on sale to the general public.
> when has it ever been the case that you can just say "no" to the world developing a new technology?
If we as a society keep developing potential existential threats to ourselves without mitigating them then we are destined for disaster eventually.
At some level, there's a disaster-seeking function inside us all acting as an evolutionary propellant.
You might make an argument that "AI" is an evolutionary embodiment of our conscious minds that's designed to escape these more subconscious trappings.
Technology doesn't accelerate endlessly. Only our transistor spacing does. These two are not the same thing.
It is very hard to find a discussion about the growth and development of AI that doesn't discuss the issues around power budget.
https://www.datacenterknowledge.com/energy-power-supply/whit...
https://bidenwhitehouse.archives.gov/briefing-room/president...
In building domestic AI infrastructure, our Nation will also advance its leadership in the clean energy technologies needed to power the future economy, including geothermal, solar, wind, and nuclear energy; foster a vibrant, competitive, and open technology ecosystem in the United States, in which small companies can compete alongside large ones; maintain low consumer electricity prices; and help ensure that the development of AI infrastructure benefits the workers building it and communities near it.
Exponential increases in cost (and power) for next-level AI and exponential decreases for the cost (and power) of current level AI.
These "safety declarations" are toothless and impossible to enforce. You can't stop AI, you need to adapt. Video and pictures will soon have no evidentiary value. Real life relationships must be valued over online relationships because you know the other person is real. It's unfortunate, but nothing AI is "disrupting" existed 200 years ago and people will learn to adapt like they always have.
To quote the fictional comic book villain Toyo Harada, "none of you can stop me. Not any one of you individually nor the whole of you collectively."
I think we may eventually get camera authentication as a result of this, probably legally enforced in the same way and for similar reasons as Japan enforced that digital camera shutters have to make a noise.
> but nothing AI is "disrupting" existed 200 years ago
200 years ago there were about 1 billion people on earth; now there are about 8 billion. Anarchoprimitivists and degrowth people make a similar handwave about the advances of the last 200 years, but they're important to holding up the systems which keep a lot of people alive.
Maybe, but I'm not bullish on cryptology having a solution to this problem. Every consumer device that's interesting enough to be worth hacking gets hacked within a few years. Even if nobody ever steals the key there will inevitably be side-channel attacks to feed external pictures into the camera that it thinks are coming from its own sensors.
And then there's the problem of the US government, which is known to strongarm CAs into signing fraudulent certificates.
> 200 years ago there were about 1 billion people on earth; now there are about 8 billion. Anarchoprimitivists and degrowth people make a similar handwave about the advances of the last 200 years, but they're important to holding up the systems which keep a lot of people alive.
I think that's a good argument against the Kaczynski-ites, but I was primarily speaking to concerns such as 'misinformation' and machines pushing humans out of jobs. We're still going to have food, medicine, and shelter. AI can't take those away; the only concern is adapting our society so that we can either feed significant populations of unproductive people or move those people into whatever jobs machines can't do yet.
We might be teetering on the edge of a dystopian techno-feudalism where a significant portion of the population languishes in slums because industry has no use for them, but that's why I said we need to adapt. There has always been something that has the potential to destroy civilization in the near future, but if you're reading this post then your ancestors weren't the ones that failed to adapt.
Or the front-door analog route, point a real camera at a screen showing fake images.
That said, lots of people are incompetent at forging, at knowing what "tells" each process of fakery has and how to overcome them, so I think this will still broadly work.
> We might be teetering on the edge of a dystopian techno-feudalism where a significant portion of the population languishes in slums because industry has no use for them, but that's why I said we need to adapt.
That's underestimating the impact this can have. An AI which reaches human performance and speed on 250 watt hardware, at current global average electricity prices, costs about the same to run as a human costs just to feed.
By coincidence, the global electricity supply is currently about 250 watts/capita.
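As a rough back-of-the-envelope check on that claim (the electricity price and food budget below are assumptions I've picked for illustration, not figures from the parent):

```python
# Sketch: daily cost of a hypothetical 250 W "human-equivalent" AI vs. a
# bare-bones food budget. All constants are assumptions for illustration.

AI_POWER_W = 250                  # assumed hardware draw at human-level performance
ELECTRICITY_USD_PER_KWH = 0.13    # assumed rough global-average retail price
FOOD_USD_PER_DAY = 2.00           # assumed subsistence-level daily food budget

ai_kwh_per_day = AI_POWER_W / 1000 * 24
ai_usd_per_day = ai_kwh_per_day * ELECTRICITY_USD_PER_KWH

print(f"AI electricity: {ai_kwh_per_day:.1f} kWh/day = ${ai_usd_per_day:.2f}/day")
print(f"Subsistence food: ${FOOD_USD_PER_DAY:.2f}/day")
# Both land within the same order of magnitude, which is the point above.
```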
As with most things, the primary issue is not really a technical one. People will believe fake photos and not believe real ones based on their own biases. So even if we had the Perfect Technology, it wouldn't necessarily matter.
And this is the reason we have fallen into a dystopian feudalistic society (we aren't teetering). The weak link is our incompetent collective human brains. And a handful of people built the tools necessary to exploit that incompetence; we aren't going back.
People, maybe. Judges, much less so. The "perfect technology" is badly needed if we don't want things to go south at scale.
Judges appointed by whom? Anyway, Judges are human and I think there is enough evidence throughout history of judges showing bias.
When you outlaw [silent cameras] the only outlaws will have [silent cameras].
Where a camera might "authenticate" a photograph, an AI could "authenticate" a camera.
1. The hardware just verifies that the image was acquired by that camera in particular. If an AI generates the thing it's photographing, especially if there's a glare/denoising step to make it more photographable, the camera's attestation is suddenly approximately worthless despite being real.
2. The same problem all those schemes have is that extracting hardware keys is O(1). It costs millions to tens of millions of dollars today, but the keys are plainly readable by a sufficiently motivated adversary. Those keys might buy us a decade or two, but everything beyond that is up in the air and prone to problems like process node size hitting walls while the introspection techniques continually get smaller and cheaper.
3. In the world you describe, you still have to trust the organizations producing hardware modules -- not just the "organization," but every component in that supply chain. It'd be easy for an internal adversary to produce 1/1M cameras which authenticate any incoming PNG and sell them for huge profits.
4. The hardware problem you're describing is much more involved than ordinary trusted computing because in addition to the keys being secure you also need the connection between the sensor and the keys to be secure. Otherwise, anyone could splice in a fake "sensor" that just grabs a signature for their favorite PNG.
4a. You're still only talking about O($10k) to O($100k) to produce a custom array to feed a fake photo into that sensor bank without any artifacts from normal screens. Even if the entire secure enclave / sensor are fully protected, you can still cheaply create a device that can sign all your favorite photos.
5. How, exactly, do lighting adjustments and whatnot fit in with such a signing scheme? Maybe the "RAW" is signed and a program for generating the edits is distributed alongside? Actually replacing general camera use with that sort of thing seemingly has some kinks to work out even if you can fix the security concerns.
First way to overcome this is attesting on true raw files. Then mostly just transferring raw files. Possibly supplemented by ZKPs that prove one image is the denoised version of another.
The other blocks are overcome by targeting crime, not nation states. This means you only need stochastic control of the supply chain. Especially because, unlike with DRM keys, the leaking of a key doesn't break the whole system. It is very possible to revoke trust in a key. And it is possible to detect misuse of a private key, and revoke trust in it.
This won't stop deepfakes of political targets. But it does keep society from being fully incapable of proving what really happened to their peers.
I'm not saying we definitely should do this. But I do think there is a possible setup here that could be made reality, and that would substantially reduce the problem.
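To make the revocation idea concrete, here is a minimal sketch of what per-device signing plus a revocation list might look like, assuming an Ed25519 key held in the camera and using the Python `cryptography` package; every name and the overall flow are illustrative, not any existing camera vendor's scheme.

```python
# Sketch: camera signs the hash of the untouched RAW file; verifiers check the
# signature and consult a revocation list maintained by some trust authority.
# All names here (sign_raw, verify_raw, REVOKED_FINGERPRINTS) are invented.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # would live in the camera's secure element
device_pub = device_key.public_key()

REVOKED_FINGERPRINTS = set()                # published by the trust authority

def sign_raw(raw_bytes):
    """Camera side: sign the SHA-256 digest of the RAW file."""
    return device_key.sign(hashlib.sha256(raw_bytes).digest())

def verify_raw(raw_bytes, signature, pub, fingerprint):
    """Verifier side: reject revoked devices, then check the signature."""
    if fingerprint in REVOKED_FINGERPRINTS:
        return False
    try:
        pub.verify(signature, hashlib.sha256(raw_bytes).digest())
        return True
    except InvalidSignature:
        return False

# A leaked key gets its fingerprint added to REVOKED_FINGERPRINTS, and later
# verifications against it fail without breaking every other camera.
raw = b"...raw sensor data..."
sig = sign_raw(raw)
print(verify_raw(raw, sig, device_pub, fingerprint=b"device-123"))  # True
```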
The problem is that the malicious product is nearly infinitely scalable, enough so that I expect services to crop up whereby people use rooms full of trusted devices to attest to your favorite photo, for very low fees. If that's not the particular way this breaks then it's because somebody found something even more efficient or the demand isn't high enough to be worth circumventing (and in the latter case the proposal is also worthless).
Laws like this serve primarily to deter casual criminals and catch patently stupid criminals which are the vast majority of cases. In this case it took a presumable sexual predator off the streets, which is a great application of the law.
[1]: https://www3.nhk.or.jp/news/html/20250212/k10014719841000.ht...
[2]: https://www3-nhk-or-jp.translate.goog/news/html/20250212/k10...
Still, I can't really see it happening.
How would this work? Not sure if something like this is possible.
Yet, the international agreements on non-use of chemical weapons have held up remarkably well.
Basically claims that chemical weapons have been phased out because they aren't effective, not because we've become more moral, or international standards have been set.
"During WWII, everyone seems to have expected the use of chemical weapons, but never actually found a situation where doing so was advantageous... I struggle to imagine that, with the Nazis at the very gates of Moscow, Stalin was moved either by escalation concerns or the moral compass he so clearly lacked at every other moment of his life."
We still accept eyewitness testimony in courts. Video and pictures will be fine, their context is what will matter. Where we'll have a generation of chaos is in the public sphere, as everyone born before somewhere between 1975 and now fails to think critically when presented with an image they'd like to believe is true.
I haven't really believed in aliens existing on earth for most of my adult life. However, I have sort of come around to at least entertaining the idea in recent years, but would need solid photographic or video evidence. I am now convinced that aliens could basically land in broad daylight three years from now, while being heavily photographed, and it would easily be explained away as AI. Especially if governments want to do propaganda or counter-propaganda.
It sounds like complete science fiction, but so did where we are with generative AI only a few decades ago.
There are training runs in progress that will use billions of dollars of electricity and GPUs. Quite detectable -- and stoppable by any government that wants to stop such things from happening on territory it controls.
And certainly we can reduce the economic incentive for investing money on such a run by banning AI-based services like ChatGPT.
For now. Qualitative improvements in efficiency are likely to change what is required.
We have AIs that are capable of self-correcting the code that they write, and people have built automatic interfaces for them to receive errors that they get from compilation.
We also have interfaces that can allow an AI to use a Linux terminal.
It's not a stretch to imagine that somebody out there is at this very moment using these in a way that would allow an AI to be fully autonomous with creating, running, and testing its own software using unit tests it wrote itself. And while the current status of AI means that such a program is likely to just not work, you have to admit that we are very, very close to the threshold point where it will work.
This on its own is not enough to threaten human safety, but toss in some bad human decisions...
This is one bit that has a technological solution. Canon's had some version of this since the early 2000s: https://www.bhphotovideo.com/c/product/319787-REG/Canon_9314...
A more recent initiative: https://c2pa.org/
Here I mean that at the point of sale you register yourself as the owner of the camera. And you make extracting a key cost about a million dollars. Then bulk forgeries won't happen.
Energy use is energy use; training is still incredibly energy intensive, and GPU heat signatures are different from non-GPU ones, so it's fairly trivial to detect large-scale GPU usage.
Enforcement is a different problem, and is not specific to AI, if you cannot enforce an agreement it doesn't matter if its AI or nuclear or sarin gas.
The point is not the usage is harmful or not, almost any tech can be used for bad purposes if you wish to do so.
The point is you can put controls in place. Controls here could be agent daemons monitoring the GPUs and tallying usage against heat signals, or firmware, etc. The controls on what is being trained would sit at a higher level than just an agent process on a GPU.
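For what it's worth, a minimal sketch of the usage-tallying side of that, assuming an NVIDIA GPU with `nvidia-smi` available; the polling interval and log path are made up, and the higher-level auditing is left out entirely.

```python
# Sketch: poll nvidia-smi for per-GPU power draw and utilization and append it
# to a log that a higher-level control could later audit. Purely illustrative.
import csv
import subprocess
import time
from datetime import datetime, timezone

LOG_PATH = "gpu_usage_log.csv"   # hypothetical log location
POLL_SECONDS = 60

def sample_gpus():
    """Return a list of (power_watts, utilization_percent) tuples, one per GPU."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw,utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [tuple(float(field.strip()) for field in line.split(","))
            for line in out.strip().splitlines()]

def main():
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            stamp = datetime.now(timezone.utc).isoformat()
            for gpu_id, (power_w, util_pct) in enumerate(sample_gpus()):
                writer.writerow([stamp, gpu_id, power_w, util_pct])
            f.flush()
            time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    main()
```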
It's a weird situation because I want the government to both be up in all of our business to protect us and yet not such that we still have privacy - impossible, really. If anything AI will hopefully increase the Gov's capabilities to detect nefarious shit without intruding upon privacy. Technology usually does its job perfectly, it's just misuse by us dumb humans that always causes problems.
Legally, videos and pictures are physical evidence.
> The declarations of the videos and photos to be accurate depiction of events is the evidence.
No, those declarations are conclusions that are generally reserved to the trier of fact.(the jury, in a jury trial, or the judge in a bench trial.) Declarations of personal knowledge as to events in how the videos or films were created or found, etc., which can support or refute such conclusions are, OTOH, testimonial evidence, and at least some of that kind of evidence is generally necessary to support each piece of physical evidence. (And, on the other side, such evidence can be submitted/elicited by the other side to impeach the physical evidence.)
Until we get replicants
That's a very interesting point.
Probably not the same way you can detect working centrifuges in Iran... but you definitely can.
And maybe it will be like detecting nuclear enrichment. Instead of hacking the firmware in a Siemens device, it's done on server hardware. Israel demonstrated absurd competence at this caliber of spycraft.
Sometimes you take low-tech approaches to high tech problems. I.e., get an insider at a shipping facility to swap the labels on two pallets of GPUs, one is authentic originals from the factory and the other are hacked firmware variants of exactly the same models.
If nations chose to restrict that, such detection would merit a military response. Like Iran's centrifuges.
But if you want to talk about "actionable" here are three potential actions a country could take and the confidence level they need for such actions:
- A country looking for targets to bomb doesn't need much confidence. Even if they hit a weather prediction data center, it's going to hurt them.
- A country looking to arrest or otherwise sanction citizens needs just enough confidence to obtain a warrant (so "probably") and they can gather concrete evidence on the ground.
- A country looking to insert a mole probably doesn't need much evidence either. Even if they land in another type of data center, the mole is probably useful.
For most use cases, being correct more than half the time is plenty.
1. I was just granting the GPs point to make the broader point that, for the purposes of this original discussion about these "safety declarations", this is immaterial. These safety declarations are completely unenforceable even if you could detect that someone was training AI.
2. Now, to your point about moving the goalposts, even though I say "if you could detect that someone was training AI", I don't actually even think that is possible. There are far too many normal uses of data centers to determine if one particular use is "training an AI" vs. some other data intensive use. I mean, there have long been supercomputer centers that do stuff like weather analysis and prediction, drug discovery analysis, astronomy tools, etc. that all look pretty indistinguishable from "training an AI" from the outside.
In a world full of sensors where everything is logged in some way or another I think that it would actually be not a straightforward activity at all to build a clandestine AI lab at any scale.
In the professional intel community they have been talking about this as a general problem for at least a decade now.
As in they've been discussing detecting clandestine AI labs? Or just how almost no activity is now in principle undetectable?
I don't think there’s a good public understanding of just how much things have changed in that space in the last decade but a huge percentage of all existing tradecraft had to be completely scrapped because not only does it not work anymore but it will put you on the enemy’s radar very early on and is actively dangerous.
It’s also why I think a lot of the advice I see targeted towards activist types is straight up a bad idea in 2025. It typically involves a lot of things that aren’t really consistent with any kind of credible innocuous explanation and are very unusual, which makes you stand out from the crowd.
Interesting semi-irrelevant tangent: the Cooley/Tukey 'Fast Fourier Transform' algorithm was initially created because they were negotiating arms control treaties with the Russians, but in order for that to be enforceable they needed a way to detect nuclear weapons testing; the solution was to use seismograms to detect the tremors caused by an underground nuclear detonation, and the FFT was invented in the process because they were using computers to filter for the types of tremors created by a nuclear weapon.
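A toy numpy sketch of the underlying idea (transform the record and compare energy in different frequency bands); real seismic discrimination is far more involved, and every number below is made up for illustration.

```python
# Toy illustration: FFT a synthetic "seismogram" and compare energy in a
# low-frequency band vs. a higher band where a short burst lives.
import numpy as np

fs = 100.0                                    # assumed sample rate, Hz
t = np.arange(0, 60, 1 / fs)                  # one minute of signal

background = 0.5 * np.sin(2 * np.pi * 0.2 * t)                        # slow sway
burst = np.where((t > 30) & (t < 32), np.sin(2 * np.pi * 5.0 * t), 0.0)
signal = background + burst + 0.1 * np.random.default_rng(0).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

low_band = freqs < 1.0
mid_band = (freqs > 2.0) & (freqs < 10.0)     # band containing the burst
print("energy below 1 Hz:", float(np.sum(spectrum[low_band] ** 2)))
print("energy in 2-10 Hz:", float(np.sum(spectrum[mid_band] ** 2)))
```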
As I understand things (I’m not actually a professional here) the current thinking has up to this point been something akin to a containment strategy largely based on lessons learned from years of nuclear non-proliferation work.
But things are developing at such a crazy pace and there are some major differences between this and nuclear technology that it’s not really a straightforward copy and paste strategy at all. For example this time around a huge amount of the research comes from the commercial sector completely independently of defense and is also open source.
Also thanks for that anecdote I hadn’t heard of that before. This is a bit of a long shot but maybe you might know, I was trying to think of some research that came out maybe 2-3 years ago that basically had the ability to remotely detect if anything in a room had been moved (I might be misremembering this slightly) and it was said to be potentially a big breakthrough for nuclear arms control. I can’t remember what the hell it was called or anything else about it, do you happen to know?
Sadly, I don't think this is actually helpful for nuclear arms control. I suppose you could imagine a case where a country is known to have enough nuclear material for exactly X warheads, hasn't acquired more, and it could prove to an inspector that all of the material is still inside the same devices it was in at the last inspection. But most weapons development happens by building new bombs, not repurposing old ones, and most countries don't have exactly X bombs, they have either 0 or so many the armed forces can't reliably count them.
That’s a new one on me (not being in cryptography), but I really like it. Thanks!
So that's easy.
Nothing to actually worry about.
Other than Sam Altman and Elon Musks' pending ego fight.
Technically both are real people, one is just not human. At least by the person/people definition that would include sentient aliens and such.
I think this presumes that Sam Altman is correct to claim that they can scale their way to, in the practical sense of the word, AGI.
If he is right about that, you are right that it's too late to hide it; if he's wrong, I think the AI architecture and/or training methods we have yet to invent are in the set of things we could usefully sequester.
> The equipment needed to train AI is cheap and ubiquitous.
Again, possibly:
If we were already close even before DeepSeek's models, yes, the hardware is too cheap and too ubiquitous.
If we're still not close even despite DeepSeek's cost reductions, then the hardware isn't cheap enough — and Yudkowsky's call for a global treaty on maximum size of data centre to be enforced by cruise missiles when governments can't or won't use police action, still makes sense.
If it takes software technology that we have already developed outside of secret government labs, it is probably too late to sequester it.
If it takes software technology that has been developed in secret government labs, it's probably too late to sequester the already public precursors without which independent development of the same technology would be impossible, getting us back to the preceding case.
If it takes software technology that hasn't been developed, we don't know what we would need to sequester, and won't until we are in one of the two preceding states.
If it takes a breakthrough in hardware technology, then if we make that breakthrough in a way which doesn't become widely public and used very quickly after being made and the hardware technology is naturally amenable to control (i.e., requires distinct infrastructure of similar order to enrichment of material for nuclear weapons), maybe, with intense effort of large nations, we can sequester it to a limited club of AGI powers.
I think control at all is most likely a pipe dream, but one which serves as a justification for the exercise of power in ways which will please both authoritarians and favored industry actors, and even if it is possible it is simply a recipe for a durable global hegemony of actors that cannot be relied on to be benevolent.
Which in turn leads to the cautious approach for which OpenAI is criticised: not revealing things because they don't know if it's dangerous or not.
> I think control at all is most likely a pipe dream, but one which serves as a justification for the exercise of power in ways which will please both authoritarians and favored industry actors, and even if it is possible it is simply a recipe for a durable global hegemony of actors that cannot be relied on to be benevolent.
Entirely possible, and a person I know who left OpenAI had a fear compatible with this description, though differing on many specifics.
Deepfakes are a distraction from more important things here. The point of AI safety is "it doesn't matter who builds unaligned AGI, if someone builds it we all die".
If you agree that unaligned AGI is a death sentence for humanity, then it's worth trying to stop it.
If you think AGI is unlikely to come about at all, then it should be a no-op to say "don't build it, take steps to avoid building it".
If you think AGI is going to come about and magically be aligned and not be a death sentence for humanity, pay close attention to the very large number of AI experts saying otherwise. https://en.wikipedia.org/wiki/P(doom)
If your argument is "but some experts don't believe that", ask yourself whether it's reasonable to say "well, experts disagree about whether this will kill us all, so we shouldn't do anything".
There might be a few humans that don't agree with even those values, but I think it's safe to presume that the general-consensus values of humanity include the above points. And AI alignment is not even close to far enough along to provide even the slightest assurances about those points.
Practically everyone making the argument that AGI is about to destroy humanity is (a) human and (b) working on AI. It's safe to conclude they're either stupid and suicidal or don't buy their own bunk.
But ultimately, most people who think we stand a decent chance of dying because of this are not working at AI labs.
Do humans agree on the best way to do this? Aside from the most banal examples of what not to do, is there agreement on e.g. whether a mass extinction event is happening, not happening, or happening but actually tolerable?
If the answer is no, then it is not possible for an AI to align with human values on this question. But this is a human problem, not a technical one. Solving it through technical means is not possible.
So, at a very basic level: stop training AIs at that scale!
Sure, AI could be worse than we are; that makes for a good movie plot. But it could also be a lot better than we are, and it's sad that we set such a low bar for it to exceed.
So for example if a family with 5 children is on vacation, do you maintain that it is impossible even in principle for the parents to take the preferences of all 5 children into account in approximately equal measure as to what activities or non-activities to pursue?
Also: are you pursuing a complete tangent or do you see your point as bearing on whether frontier AI research should be banned? (If so, I cannot tell whether you consider your point to support a ban or oppose a ban.)
Therefore the actual solution is not coming up with more and more clever “guardrails” but aligning corporations and governments to human needs. In other words, politics.
There are other problems like enabling new types of scams which will require political solutions. At a technical level the best these companies can do is mitigation.
Don't extrapolate from present harms to future harms, here. The problem AI alignment is trying to solve at a most basic level is "don't kill everyone", and even that much isn't solved yet. Solving that (or, rather, buying time to solve it) will require political solutions, in the sense of international diplomacy. But it has absolutely nothing to do with "aligning corporations", and everything to do with teaching computers things on par with (oversimplifying here) "humans are made up of atoms, and if you repurpose those atoms the humans die, don't ever do that".
No, it's not. AI alignment was an active area of concern (and the fundamental problem for useful AI with significant autonomy) before cultists started trying to narrow its problem space from the wide range of real problems it concerns to a single speculative apocalypse.
But the genesis of the term "alignment" (as applied to AI) is a side issue. What is important is that reinforcement learning with human feedback and the other techniques used on the current crop of AIs to make it less likely that the AI will say things that embarrass the owner of the AI are fundamentally different from making sure that an AI which turns out more capable than us will not kill us all or do something else awful.
Both, of course, are concerned primarily with the risk of human extinction from AI.
The fact that the number of things that could hypothetically lead to human extinction is entirely unbounded and (since we’re not extrapolating from present harms) unpredictable is a very convenient fact for people who are paid for their time in “solving” this problem.
Intelligence and alignment are mutually incompatible; natural intelligence is unaligned, too.
Unaligned intelligence is not a global death sentence. Fearmongering about unaligned AGI, however, is a tool to keep a tool of broad power (which AI is, and will continue to be, long before it becomes AGI, and even if it never does) in the hands of a narrow, self-selected elite, making their control over everyone else insurmountable. That is also not a global death sentence, but it is a global slavery sentence. (It's also, more immediately, a way for those who benefit from current AI uses that are harmful and unjust to use speculative future harms to deflect from real, present, concrete harms; and those beneficiaries largely overlap with the group that has a longer-term interest in centralizing power over AI.)
Whether or not that elite group produces AGI, much less, “unaligned AGI”, is largely immaterial to the practical impacts (also, from the perspective of anyone outside the controlling elite, what the controlling elite would view as aligned, whether or not it is a general intelligence, is unaligned; alignment is not an objective property.)
False. There are people working on frontier AI who have co-opted some of the safety terminology in the interests of discrediting it, and discussions like this suggest that that strategy is working.
> all actionable policy under that banner fits that description
Actionable policy: "Do not do any further frontier AI capability research. Do not build any models larger or more capable than the current state of the art. Stop anyone who does as you would stop someone refining fissile materials, with no exceptions."
> (also, from the perspective of anyone outside the controlling elite, what the controlling elite would view as aligned, whether or not it is a general intelligence, is unaligned; alignment is not an objective property.)
You are mistaking "alignment" for things like "politics", rather than "not killing everyone".
Also, "alignment" doesn't mean "not killing everyone", it means "functioning according to (some particular set of) human's preferred set of values and goals". "Killing everyone" is a consequence some have inferred if unaligned AI is produced (redefining "alignment" to mean "not killing everyone" makes the whole argument circular.)
The darkly amusing shorthand for this: if the AGI tiles the universe with tiny flags, it really doesn't matter whose flag it is. Any notion of "whose values" really can't happen if you can't align at all.
I'm not disagreeing with you that "AI alignment" is more complex than "don't kill everyone"; the point I'm making is that anyone saying "but whose values are you aligning with" is fundamentally confused about the scale of the problem here. Anyone at any point on any reasonable human values spectrum should be able to agree that "don't kill everyone" is an essential human value, and we're not even there yet.
OP's point has nothing to do with this, OP's point is that you can't stop it.
The methods and materials are too diffuse and the biggest players (nation states) have a strong incentive to be first. Do you really expect China to coordinate with the West on this?
That's not true. I worked in the field of DNA analysis for 6.5 years and there is definitely a consensus that DNA editing is closer than the horizon. Just look at CRISPR gene editor [0]. Crude, but "works".
Your DNA, even if you've never submitted it, is already available using shadow data (think Facebook style shadow profiles but for DNA) from the people related to you who have.
I don't think a defeatist attitude is useful here.
AGI would then be a very effective tool for maintaining the current authoritarian regime.
What bearing does that have on China's interest in developing AGI? Does the risk posed by OpenAI et al. mean that China would not use AI as a tool to advance their self interest?
Or are you saying that the risks from OpenAI et al. will come to fruition before we need to worry about China's AI use? That still wouldn't prevent China from pursuing AI up until that happens.
I am still not convinced that there is a policy which can prevent AI from developing outside of the US with high probability.
Suppose, hypothetically, there was a very simple as-yet-unknown action, doable by anyone who has common unrestricted household chemicals, that would destroy the world. Suppose we know the general type of action, but not the specific action, yet. Suppose that people are actively researching trying actions in that family, and going "welp, world not destroyed yet, let's keep going".
How do you proceed? What do you do to stop that from happening? I'm hoping your answer isn't "decide there's no policy that can prevent this, give up".
- there were a range of expert opinions that P(destroy-the-world) < 100% AND
- the chemical could turn lead into gold AND
- the chemical would give you a militaristic advantage over your adversaries AND
- the US were in the race and could use the chemical to keep other people from making / using the chemical
Then I think we'd be in the same situation as we are with AI: stopping it isn't really a choice, we need to do the best we can with the hand we've been dealt.
I would hope that it would not suffice to say "not a 100% chance of destroying the world". Because there's a wide range of expert opinions saying values in the 1-99% range (see https://en.wikipedia.org/wiki/P(doom) for sample values), and none of those values are even slightly acceptable.
But sure, by all means stipulate all the things you said; they're roughly accurate, and comparably discouraging. I think it's completely, deadly wrong to think that "race to find it" is safer than "stop everyone from finding it".
Right now, at least, the hardware necessary to do training runs is very expensive and produced in very few places. And the amount of power needed is large on an industrial-data-center scale. Let's start there. We're not yet at the point where someone in their basement can train a new frontier model. (They can run one, but not train one.)
Ok, I can imagine a domestic policy like you describe. Through the might and force of the US government, I can see this happening in the US (after considerable effort).
But how do you enforce something like that globally? When I say "not really possible" I am leaving out "except by excessive force, up to and including outright war".
For the reasons I've mentioned above, lots of people around the world will want this technology. I haven't seen an argument for how we can guarantee that everyone will agree with your level of "acceptable" P(doom). So all we are left with is "bombing the datacenters", which, if your P(doom) is high enough, is internally consistent.
I guess what it comes down to is: my P(doom) for AI developed by the US is less than my P(doom) from the war we'd need to stop AI development globally.
I don't consider the P(destruction of humanity) of stopping larger-than-current-state-of-the-art frontier model training (not all AI) to be higher than that of stopping the enrichment of uranium. (That does lead to conflict, but not the destruction of humanity.) In fact, I would argue that it could potentially be made lower, because enriched uranium is restricted on a hypocritical "we can have it but you can't" basis, while frontier AI training should be restricted on a "we're being extremely transparent about how we're making sure nobody's doing it here either" basis.
(There are also other communication steps that would be useful to take to make that more effective and easier, but those seem likely to be far less controversial.)
If I understand your argument correctly, it sounds like any one of three things would change your mind: either becoming convinced that P(destruction of humanity) from AI is higher than you think it is, or becoming convinced that P(destruction of humanity) from stopping larger-than-current-state-of-the-art frontier model training is lower than you think it is, or becoming convinced that nothing the US is doing is particularly more likely to be aligned (at the "don't destroy humanity" level) than anyone else.
I think all three of those things are, independently, true. I suspect that one notable point of disagreement might be the definition of "destruction of humanity", because I would argue it's much harder to do that with any standard conflict, whereas it's a default outcome of unaligned AGI. (I also think there are many, many, many levers available in international diplomacy before you get to open conflict.)
(And, vice versa, if I agreed that all three of those things were false, I'd agree with your conclusion.)
There are many other less-superficial reasons why Beijing may be interested in AI, plus China may not trust that we actually banned our own AI development.
I wouldn't take that bet in a million years.
The discussion started when someone argued that even if this AI juggernaut were in fact very dangerous, there is no way to stop it. When I pushed back on the second part of that, you reject my push-back. On what basis? I hope it is not, "I just want things to keep on going the way they are," as if ignoring the AI danger somehow makes the AI danger go away.
I don't have a lot of confidence that this will be the case, but I think the US continuing to develop AI is the decision with the best distribution of possible outcomes.
This is completely separate from my personal preferences or hopes about the future of AI.
So what is your solution? Give up and die? It's worth trying. If it buys us a few years that's a few more years to figure out alignment.
> The methods and materials are too diffuse and the biggest players (nation states) have a strong incentive to be first.
So there's a strong incentive to convince them "stop racing towards death".
> Do you really expect China to coordinate with the West on this?
Yes, there have been concrete examples of willingness towards doing so.
It is essentially the same problem as the atom bomb: it would have been better if we all agreed not to do it, but that's just not possible. Why should China trust the US or vice versa? Who wants to live in a world where your competitors have world-changing technology but you don't? But here we have a technology with immense militaristic and economic value, so the everyone-wants-it problem is even more pronounced.
I don't _like_ this, I just don't see how we can achieve an AI moratorium outside of bombing the data centers (which I also don't think is a good idea).
We need to choose the policy with the best distribution of possible outcomes:
- The US leads an effort to stop AI development: too much risk that other parties do it anyway
- The US continues to lead AI development: hope that P(takeoff) is low and that the good intentions of some US labs are able to achieve safe development
I prefer the latter -- this is far from the best hypothetical outcome, but I think it is the best we can do when constrained by reality.
I don't think we're on "the cusp" of AGI, but I guess that just means I'm quibbling over the timeframe of what "cusp" means. I certainly think it's possible within the lifetime of people alive today, so whether it comes in 5 years or 75 years is kind of an insignificant detail.
And if AGI does get built, I agree there is a significant risk to humanity. And that makes me sad, but I also don't think there is anything that can be built to stop it, certainly not some useless agreements on paper.
It is like living paralyzed in fear of every birth, terrified that random variance will produce one baby smarter than Einstein who will be capable of producing an infinite cascade of progressively smarter babies, and concluding that therefore we must stop all breeding. No matter how smart the baby super-Einstein winds up being, there is no unstoppable, unopposable omnicide mechanism. You can't theorem your way out of a paper bag.
We've already found ourselves on a trajectory where un-employing millions or billions of people without any system to protect them afterwards is just accepted, and that's simply the first step of many on the destruction-of-empathy path that creating AI/AGI leads people down.
LLMs will not get us to AGI. Not even close. Altman talking about this danger is like Musk talking about driverless taxis.
Don't get me wrong, they are impressive. I can see LLM's eventually enabling people to be 10x more productive in jobs that interact with a computer all day.
This is a big part of it, and you can get others to do it for you.
It's like the drain cleaner sold in an extra bag. Obviously it must be the best, it's so scary they have to put it in a bag!
I doubt this. Productivity is gained through experience and expertise. If you don't know what you don't know, then the LLM is perfectly useless to you.
Semi-autonomous vehicles are impressive for the fact that one driver can now scale well beyond a single vehicle. Fully-autonomous vehicles are impressive because they can scale limitlessly. The former is evolutionary, the latter is revolutionary.
This seems like such an odd thing to expect will just "happen". Any other world-changing or impressive tech I'm familiar with has evolved to its current state over time, it's not like when Jobs announced the iPhone and changed the game there wasn't decades of mobile computing whose shoulders it stood on. Are you talking about something like crypto?
It's admittedly a bit confusing what you're asking for here.
Seems like we're splitting hairs a bit here.
Looks like it's still in the works. Sometimes when technologists promise something their timelines end up being overly optimistic. I'm sure this isn't news to you.
By your language and past commentary though this seems like the kind of thing which elicits a pretty emotional response from you, so I'm not sure this would be a productive conversation either way.
Modern AI safety originated with people like Eliezer Yudkowsky, Nick Bostrom, the LessWrong/rationality movement etc.
They very much were not just talking about it only to build moats for OpenAI. For one thing, OpenAI didn't exist at the time, AI was not anywhere close to where it is today, and almost everyone thought their arguments were ridiculous.
You might not agree with them, but you can't simply dismiss their arguments as only being there to prop up the existing AI players, that's wrong and disingenuous.
AGI is a meaningless term. The LLM architecture has shown promise in every single domain once used for perceptron neural networks. By all accounts on those things that fit its 'senses' the LLMs are significantly smarter than the average human being.
Same with killer robots (or whatever it is people are afraid of when they talk about "AI safety"). As long as we can control who they kill, when, and why, there's no real difference with any other weapon system. If that control is even slightly in doubt: it's not fit for service.
Does this mean that bullshit-generating LLMs aren't fit for service in many areas? It probably does. But maybe steps can be taken to mitigate the risks.
I'm sure this will involve some bureaucratic overhead. But it seems worth the hassle to me.
Being against AI Safety is a stupid hill to die on. Being against some concrete declaration or a part thereof, sure, that might make sense. But this smells a lot like the tobacco industry being against warnings/filters/low-tar, or the car industry being anti-seatbelt.
Yes.
And there is no reason to think that AGI would have desire.
I think people are reading themselves into their fears.
Or we can just drop all this sophistry nonsense.
The entire point of utilizing this tool is to feed it a desire and have it produce an appropriate output based upon that desire. Not only that, its entire training corpus is filled with examples of our human desires. So either humans give it desire or it trains itself to function based on the inertia of "goal-seeking", which are effectively the same thing.
So the skills, knowledge, and expertise are in the UK. Google can close the UK office tomorrow if they wanted to sure, but are 100% of those staff going to move to California? Doubt it. Some will, but a lot have lives in the UK (not least the CEO and founder etc) so even if Google pulls the rug I will bet there will be a new company founded and funded within days that will vacuum up all the staff.
Deepseek and efforts by other non-aligned powers wouldn't care about any declarations signed by the EU, the US and other western powers anyways.
Attempts at curbing AI will come from those who are losing the race. There's this interview where Edward Teller recalls how the USSR used a moratorium in nuclear testing to catch up with the US on the hydrogen bomb, and how he was the one telling the idealist scientists that that was going to happen.
In another clip he says that he believes it was inevitable that the Soviets would come up with an H-bomb on their own.
[1] https://www.youtube.com/watch?v=zx1JTLrhbnI&list=PLVV0r6CmEs...
It's well known that China has long since caught up with the US in almost every way, and is on the verge of surpassing it in the rest. Just look at DeepSeek, as efficient as OpenAI for a fraction of the cost; Baidu, Alibaba AI, and so on.
China's economy remains on track to surpass that of the United States within just five years, and yet they did sign that agreement.
In fact most countries did. India too.
It's not a case of the losers making new rules; it's the big boys discussing how they are going to handle the situation, and the foolish ones thinking they are too good for that.
I'd be very happy to take a high stakes, longterm bet with you if that's your earnest position.
Are you actually saying this in the year 2025?
China has signed on to many international agreements that it has absolutely no interest in following or enforcing.
Intellectual property is the most well known. They’re party to various international patent agreements but good luck trying to get them to enforce anything for you as a foreigner.
Of course they aren't going to follow it, just sign it. They're bright people.
However the attempts are token and they know it too. Just an attempt to appear to be doing something for the naive information consumers, aka useful idiots.
Imagine telling hackers from the past that people on a website called “hacker news” would be arguing about how important it is that the government criminalize running code on your own computer. It’s so astoundingly, ethically, philosophically opposed to everything that inspired me to get into computers in the first place. I only have to wonder if people really believe this, or it’s a sophisticated narrative that’s convenient to certain corporations and politicians.
My understanding is that approximately zero government-level safety discussion is about restricting just building & running AI yourself. There are no limits on AI hacking even in the EU AI regulation or the discussions I've seen.
Regulation is around business & government applications and practical use cases: no unaccountable AI making final employment decisions, no widespread facial recognition in public spaces, transparency requirements for AI usage in high-risk areas (health, education, justice), no AIs with guns, etc.
As you mention ethics: what ethics do we apply to AI? None? Some? The same as to a human? As AI is replacing humans in decision-making, it needs to be held responsible just as a human.
If you explained to that hacker that govs and corps would leverage that same technology to spy on everyone and control their lives because line must go up, they might understand better than anyone else why this needs to be sabotaged early in the process.
As DeepSeek has shown us, progress is hard to hinder unless you go to war and kill the people....
What would AGI lead to? Most knowledge work would be replaced in the same way as manufacturing work has been, and AGI is in control of the existing elite. It would be used to suppress any revolt for eternity, because surveillance could be perfectly automated and omnipresent.
Really not something to aspire to.
If the rich need fewer and fewer educated, healthy, and well-fed workers, then more and more people will get treated like shit. We are currently heading in that direction at full speed. The rich aren't even bothering to hide this from the public anymore because they think they have won the game and can't be overruled. Let's hope there will still be elections in four years and MAGA doesn't rig them like Fidesz did in Hungary and in so many other countries that have fallen into the hands of the internationalist oligarchy.
Maybe. I think it's a matter of culture.
Very few people mistreat their dogs and cats in wealthy countries. Why shouldn't people in power treat regular people at least as well as regular folks treat their pets?
I'm no history buff but my hunch is that mistreatment of people largely came from a fear that if I don't engage in cruelty to maximize power, my opponents will and given that they're cruel, they'll be cruel to me when they come to take over.
So we end up with this zero sum game of squeezing people, animals, resources and the planet in an arms race because everyone's afraid to lose.
In the past - you couldn't be sure if someone else was building up an army, so you had to build up an army. But now that we have satellites and we can largely track everything - we can actually agree to not engage in this zero sum dynamic.
There will be a shift from treating people as means to an end of power accumulation and containment, to treating people as something you just inherently like and would like to see prosper.
It'll be a shift away from this deeply corrosive idea of never ending competition and growth. When people's basic needs are met and no one is grouping up to take other people's goodies - why should regular people compete with one another?
They shouldn't and they won't. People who want to do good work will do so and improving the lives of people worldwide will be its own reward. Private islands, bunkers and yachts will become incomprehensible because there'll be no serf class to service any of it. We'll go back to if you want to be well liked and respected - you have to be a good person. I look forward to it :)
Because very few regular people will be their pets. These are the people who do everything in their power to pay their employees less. They treat their non-pets horribly... see feed lots and amazon warehouses. They actively campaign against programs which treat anyone well, particularly those who they aren't extracting wealth from. They whine and moan and cry about rules that protect people from getting sick and injured because helping those people would prevent them from earning a bit more profit.
They may spend a pile of money on surgery for their bunny, but if you want them to behave nicely to someone else's pet, or even someone else... well that's where they draw the line.
I guess you are hoping to be one of those pets... but what makes you think you're qualified for that, and why would you be willing to sacrifice all of your friends and family to the fate of feral dogs for the chance to be a pet?
I'm not suggesting that a few people become rich people's pets.
I'm saying it isn't inherent in human nature to mistreat other conscious beings.
If people were normally cruel to their pets, then I'd say yeah, this species just seems to enjoy cruelty and it is what it is. But we aren't cruel to pets, so the cruelty to workers does not stem from human nature, but other factors.
I gave one theory as to what causes the cruelty and that I'm somewhat optimistic that it'll run its course in due time.
Anyhoo :)
I suspect (but not sure) it may be more inherent than is generally believed. Then again simple physical pain is easy to gauge and relate to, but passing some law is an academic process where one can distance oneself from the ill effects on the adherents of the law. Then there are more variations - people will be tribalistic, they will be kind to their own kind/family even sacrificing, but murderous to some one of another tribe. All in all the average human is bundle of contradictions if you were to analyze simplistically, though it will make good sense when viewed through the lens of evolutionary biology.
Killer ape theory
Also, I would rather die than be someone's pet. Why should I have to rely on the charity and good humor of a ruler? Pardon my language but Fuck that entirely.
"Maintain humanity under 500,000,000 in perpetual balance with nature" (Georgia Guidestones, a monument describing someone's ideal future that is surrounded by conspiracy)
Will you be one of the 500 million pets? Or one of the 7.5 billion leftovers? Sure would be a shame if billions become unnecessary and then a pandemic 100X worse than covid got lab-leaked.
You can’t seriosly claim they are upending people’s jobs when those jobs were BS in the first place.
Or, my favorite outcome, the AI to iterate over itself and develop its own hardware and so on.
Sociology: The Oil Curse
For example US is probably the most resource rich country in the world, but people don't consider it for the resource curse because the rest of its economy is so huge.
Could it exist some day? Certainly. But currently 'AI' will never become an AGI, there's no path forward.
What indicators are these?
Here we are, a couple of years later, truly musing about sectors of the world embracing AI and others not.
That sort of piecemeal adoption was predictable but not that we are here to have this debate this soon!
These are all silicon valley "neck thoughts." They're entirely uninformed by the current state of the world and any travels through it. It's fantasies brought about by people with purely monetary desires.
It'd be funny if there weren't billions of dollars being burnt to market this crap.
With every technological advancement it can always be good or bad. I believe it is going to be good to have a true AI available at our fingertips.
Because I can think of a large number of historical scenarios where malicious people get access to certain capabilities and it absolutely does not go well and you do have to somehow account for the fact that this is a real thing that is going to happen.
For example, several governments are actively engaged in a live streamed genocide and nothing akin to the 1789 revolt in Paris seems to be underway.
The 1789 revolution was one of many (https://en.wikipedia.org/wiki/List_of_peasant_revolts), and it was not fought because of a genocide of other people; it was due to internal problems.
Sure. The ancien régime was considered illegitimate, so they got rid of it; and since the Holocaust, a state involved in genocide is considered illegitimate and should lose its sovereignty.
That was never going to fly with the current U.S. administration. Not only is the word inclusive in there but ethical and trustworthy as well.
Joking aside, I genuinely don’t understand the “job creation” claims of JD Vance in his dinner speech in Paris.
Long-term I just can’t imagine what a United States will look like when 75% of the population are both superfluous and a burden to society.
If this happens fast, society will crumble. Sheep are best kept busy grazing.
The voters are locked-in idiots and don't have agency at the moment. The bet from Musk, Thiel, etc. is that AI is as powerful and strategic as nuclear weapons were in 1947 - that's what the Musk administration's diplomacy seems to be like.
I've spent enough time building with these models to see their transformative potential. The productivity gains aren't marginal - they're exponential. And this is just with current-gen models.
China's approach is particularly telling. While they lack the massive compute infrastructure of US tech giants, their research output is impressive. Their models may be smaller, but they're remarkably efficient. Look at DeepSeek's performance-to-parameter ratio.
The upside potential is simply too large to ignore. We're seeing breakthroughs in protein folding that would take traditional methods decades. Education is being personalized at scale. The translation capabilities alone are revolutionary.
The reality is that AI development will continue regardless of these declarations. The optimal strategy isn't to slow down - it's to maintain the lead while developing safety measures in parallel. Everything else is just security theater.
(And yes, I've read the usual arguments about x-risk. The bottleneck isn't safety frameworks - it's compute and data quality.)
Europe is hopeless so it does not make a difference. China can sign and ignore it so it does not make a difference.
But it would not be wise for the USA to have their hands tied up so early. I suppose that the UK wants to go their usual "lighter touch regulation" than the EU route to attract investment. Plus they are obviously trying hard to make friends with the new US administration.
Not just that. A speaker in a conference I attended about a month ago mentioned that UK is actively drifting away from EU's stance, particularly on the aspect of AI safety in practice.
The upcoming European AI act has "machine must not make material decisions" as its cornerstone. UK are hell-bent to get AI into government functions, to ostensibly make everything more efficient. As part of that drive, the UK is aiming to allow AI to make material decisions, without human review or recourse. In a country still in the throes of the Post Office / Horizon scandal, that really takes some nerve.
Those in charge in this country know fully well that "AI safety" will be in violent conflict with the above.
As an attempt at a response, the UK is not party to the "EU AI Act" or the "DMA/DSA", we left before they were passed as law in the EU. The UK has its own "Digital Markets Act", but it is not an EU regulation. The GDPR is an inherited EU regulation.
The AI summit was French led, to get a global consensus on what sort of AI protections should be in place it looks like. The declaration was specific to this summit.
So, nothing to do with the EU, not a regulation.
When there is a difference in regulation between major economies there may be an advantage to be had, but my feeling is that the GDPR (or similar) is not the main reason European tech companies are unable to compete with the US. There is no equivalent of Silicon Valley in Europe that combines talent and investors in one place.
It's a hard problem to solve when Europe is made up of multiple countries and cultures, even if the EU has aligned some key things.
Similarly, whoever gains the most training and fine-tuning data from whatever source via whatever means first will likely be at an advantage.
Hard to see how that toothpaste goes back in the tube now
If an enemy state gives AI autonomous control and gains massive combat effectiveness, it puts pressure on other countries to do the same.
No one wants Skynet. But if we continue on the current path, painting the world as us vs. them, I'm fearful Skynet will be what we get.
Where is the nuanced discussion of what we want and don't want AI to do as a society?
These details matter, and working through them collectively is progress, in stark contrast to getting dragged into identity politics arguments.
- I want AI to increase my freedom to do more and spend more time doing things I find meaningful and rewarding.
- I want AI to help us repair damage we have done to ecosystems and reverse species diversity collapse.
- I want AI to allow me to consume more in a completely sustainable way for me and the environment.
- I want AI that is an excellent and honest curator of truth, both in terms of accurate descriptions of the past and nuanced explanations of how reality works.
- I want AI that elegantly supports a diversity of values, so I can live how I want and others can live how they want.
- I don't want AI that forcefully and arbitrarily limits my freedoms
- I don't want AI that forcefully imposes other people's values on me (or imposes my values on others)
- I don't want AI war that destroys our civilization and creates chaos
- I don't want AI that causes unnecessary suffering
- I don't want other people to use AI to tyrannize me or anyone else.
How about instead of being so broadly generic about "AI safety" declarations we get specific, and then ask people to make specific commitments in kind. Then it would be a lot more meaningful when they refuse, or when they oblige and then break them.
But ignoring the signaling going on on various sides would be a mistake. "AI" is for all practical purposes a synonym for algorithmic decision making, with potential direct implications for people's lives. Without accountability, transparency, recourse, etc., the unchecked expansion of "AI" in various use cases represents a significant regression for historically established rights. In this respect the direction of travel is clear: the US is dismantling the CFPB, even more deregulation (if that is at all possible) is coming, big tech will be trusted to continue "self-regulating", etc.
The interesting part is the UK stance. Somewhere in between the US and the EU in terms of citizen / consumer protections, but despite Brexit probably closer to the latter, this siding with dog-eat-dog deregulation might signal an anxiety about being left behind.
Promoting AI accessibility to reduce digital divides;
Ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all
Making innovation in AI thrive by enabling conditions for its development and avoiding market concentration driving industrial recovery and development
Encouraging AI deployment that positively shapes the future of work and labour markets and delivers opportunity for sustainable growth
Making AI sustainable for people and the planet
Reinforcing international cooperation to promote coordination in international governance
https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statemen...
You cannot face the world with how you want it to be, but only as it is.
What we know today is that a relatively straightforward series of matrix multiplications leads to what is perceived to be intelligence. This is simply true no matter how many declarations one signs.
Given that this is the case, there is nothing left to be done unless we want to go full Butlerian Jihad
There are a few non-linear function operations in between the matrix multiplications.
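For what it's worth, both comments describe the same thing. Here is a minimal, hypothetical sketch (toy sizes, untrained random weights, no particular model assumed) of what "matrix multiplications with non-linear functions in between" looks like in code:

    # Toy sketch only: arbitrary sizes, random weights standing in for learned parameters.
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        # one common non-linearity inserted between the matrix multiplies
        return np.maximum(0.0, x)

    def softmax(x):
        # squashes raw scores into a probability-like output
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    d_in, d_hidden, d_out = 8, 16, 4
    W1 = rng.normal(size=(d_in, d_hidden))
    W2 = rng.normal(size=(d_hidden, d_out))

    def forward(x):
        # matrix multiply -> non-linearity -> matrix multiply -> normalization
        return softmax(relu(x @ W1) @ W2)

    print(forward(rng.normal(size=(1, d_in))))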
They will move to countries where the laws suit them. Generally business as usual these days and why big businesses have such a strong bargaining position with regard to national governments.
Both the current British and American governments are very pro big-business anyway. That is why Trump has stated he likes Starmer so much.
Signed by 60 countries out of "more than 100 participants", it just looks comically pathetic except for the "China" part:
Armenia, Australia, Austria, Belgium, Brazil, Bulgaria, Cambodia, Canada, Chile, China, Croatia, Cyprus, Czechia, Denmark, Djibouti, Estonia, Finland, France, Germany, Greece, Hungary, India, Indonesia, Ireland, Italy, Japan, Kazakhstan, Kenya, Latvia, Lithuania, Luxembourg, Malta, Mexico, Monaco, Morocco, New Zealand, Nigeria, Norway, Poland, Portugal, Romania, Rwanda, Senegal, Serbia, Singapore, Slovakia, Slovenia, South Africa, Republic of Korea, Spain, Sweden, Switzerland, Thailand, Netherlands, United Arab Emirates, Ukraine, Uruguay, Vatican, European Union, African Union Commission.
It appears to be essentially a "We promise not to do evil" declaration. It contains things like "Ensure AI eliminates biases in recruitment and does not exclude underrepresented groups.".
What's the point of rejecting this? Seems like a show, just like the declaration itself.
Depending on which side of things you are on, if you don't actually take a look at it you might end up believing that the US is planning to do evil and the others want to eliminate evil, or alternatively you might believe that the US is pushing for progress while the EU is trying to slow it down.
Both appear false to me. IMHO it's just another instance of the US signing off from the global order, and whatever "evil" the US is planning to do, China will do it better for cheaper anyway.
although similar.
So far most AI development has been things like OpenAI making the ChatGPT chatbot and putting it up there for people to play with, likewise Anthropic, Deepseek et al.
I'm worried that declaration is implying you shouldn't be able to do that without trying to "promote social justice by ensuring equitable access to the benefits".
I think that is over-bureaucratizing things.
I mean stuff like
>We underline the need for a global reflection integrating inter alia questions of safety, sustainable development, innovation, respect of international laws including humanitarian law and human rights law and the protection of human rights, gender equality, linguistic diversity, protection of consumers and of intellectual property rights.
Is quite hard to even parse. Does that mean you'll get grief for your bot speaking English because it's not protecting linguistic diversity? I don't know.
What does "Sustainable Artificial Intelligence" even mean? That you run it off solar rather than coal? Does it mean anything?
Useful only when you're rejecting it. I'm sure that in the culture-war-torn American mind it signals very important things about genitals and ancestry and the industry around that stuff, but to a non-American mind it gives the vibe that the Americans intend to do bad things with AI.
Ha, now I wonder if the people who wrote that were unaware of the situation in US or was that the outcome they expected.
"Given that the Americans not promising not to use this tech for nefarious tasks maybe Europe should de-couple from them?"
What if ASI happens next year and renders most of the human workforce redundant? What if we get Terminator 2? Those might be more worthy of worry than "gender equality, linguistic diversity" etc.? I mean the diversity stuff is all very well but not very AI specific. It's like you're developing H-bombs and worrying about whether they are socially inclusive rather than about nuclear war.
IMHO, from a European perspective, they are worried that someone will install a machine that has a bias against, let's say, Catalan people, who will then be disadvantaged against Spaniards, while those who operate the machine will claim no fault ("the computer did it"), leading to social unrest. They want regulations saying that you are responsible for this machine and that there are grounds for its removal if it creates issues. All the regulations around AI in the EU are in that spirit; they don't actually ban anything.
I don't think AGI is considered seriously by anybody at the moment. That's completely different ball game and if it happens none of the current structures will matter.
Hear, hear. If Trump doesn't straighten up, the world might just opt for Chinese leadership. The dictatorship, the genocide, the communism--these are small things that can be overlooked if necessary to secure leadership that's committed to what really matters, which is.... signing pointless declarations.
Sustainable Development? Protect the environment? Promote social justice? Equitable access? Driving inclusive growth? Eliminating biases? Not excluding underrepresented groups?
These are not the values the American people voted for. Americans selected a president who is against "equity", "inclusion" and "social justice", and who is more "roman salute" oriented.
Of course this is all very disorienting to non-Americans, as a year or two ago efforts to do things like rename git master branches to main and blacklists to denylists also seemed to be driven by Americans. But that's just America's modern cultural dominance in action; it's a nation with the most pornographers and the most religious anti-porn campaigners at the same time; the home of Hollywood beauty standards, plastic surgery and bodybuilding, but also the home of fat acceptance and the country with the most obesity. So in a way, contradictory messages are nothing new.
Indeed. Our American values are and always have been Equality, Pursuit of Happiness, and legal justice respectively, as declared in our Declaration of Independence[1] and Constitution[2], even if there were and will be complications along the way.
Liberty is power, power is responsibility. No one ever said living free was going to be easy, but everyone will say it's a fulfilling life.
[1]: https://en.wikipedia.org/wiki/United_States_Declaration_of_I...
[2]: https://en.wikipedia.org/wiki/Preamble_to_the_United_States_...
Why the people in the background are not entitled to it: https://a.dropoverapp.com/cloud/download/605909ce-5858-4c13-...
Why is US government personnel being replaced with loyalists, if you are about equality and legal justice?
You're free to follow the legal process to come to the country to seek your pursuit of happiness.
(+) terms and conditions apply; did not originally apply to nonwhite men or women. Hence allowing things like the mass internment of Americans of Japanese ethnicity.
>it has to be about a goal of saying everybody should end up in the same place. And since we didn’t start in the same place. Some folks might need more: equitable distribution
- Kamala Harris
https://www.youtube.com/watch?v=LaAXixx7OLo
This is arguing for giving certain people more benefits versus others based on their race and gender.
This mindset is dangerous, especially if you codify it into an automated system like an AI and let it make decisions for you. It is literally the definition of institutional discrimination.
It is good that we are avoiding codifying racism into our AI under the fake moral guise of “equity”
This is literally the textbook definition of discrimination based on skin color and it is done under the guise of “equity”.
It is literally defined in the civil rights act as illegal (title VII).
It is very good that the new administration is doing away with it.
You don't seem to understand either the letter or the spirit of the Civil Rights Act.
You're happy that a racist president who campaigned on racism, and who keeps baselessly accusing people who are members of minority groups of being unqualified while himself being the least qualified president in history, is trying to encourage people to not hire minorities? Why exactly?
1. Job posted, anyone can apply
2. Candidate applies and interviews, team likes them and wants to move forward
3. Team not allowed to offer because candidate is not diverse enough
4. Team goes and interviews a diverse person.
Now if we offer the person of color a job, the first person was discriminated against because they would have got the job if they had had the right skin color.
If we don’t offer the diverse person a job, then the whole thing was purely performative because the only other outcome was discrimination.
This is how it works at my company. Go read Title VII of the civil rights act, this is expressly against both the letter and spirit of the law.
BTW calling everything you disagree with racism doesn’t work anymore, nobody cares if you think he campaigned on racism (he didn’t).
If anything, people pushing this equity stuff are the real racists.
Edit after reading about Trump firing the people administering our nuclear weapons: God damn Donald Trump, and God damn the people who are so foolish as to believe the disinformation networks that tell them Donald Trump isn't working to destroy this country.
Now we are starting to get Maori doctors and lawyers that is transforming our society - for the better IMO
That was because the law and medical schools went out of their way to recruit Maori students. To start with they were hard to find as nobody in their families (being Maori, and forbidden) had been to university
If you do not do anything about where people start then saying "aim for equal chance" can become a tool of oppression and keeping the opportunities for those who already have them.
Nuance is useful. I have heard many bizarre stories out of the USA about people blindly applying DEI with not much thought or planning. But there are many many places where carefully applied policies have made everybody's life better
Discrimination in favour of Maori students largely has benefited the children of Maori professionals and white people with a tiny percentage of Maori ancestry who take advantage of this discriminatory policy.
The Maori doctors and lawyers coming through these discriminatory programmes are not the people they were intended to target. Meanwhile, poor white children are essentially abandoned by the school system.
Maori were never actually excluded from university study, by the way. Maori were predominantly rural and secondary education was poor in rural areas but it has nothing to do with their ethnicity. They were never "forbidden". There have been Maori lawyers and doctors for as long as NZ has had universities.
For example, take Sir Apirana Ngata. He studied at a university in NZ in the 1890s, around the same time women got the vote. He was far from the first.
What you have alleged is a common narrative so I don't blame you for believing it but it is a lie.
Māori schools (which the vast majority of Māori attended) were forbidden by the education department from teaching the subjects that lead to matriculation. So yes, they were forbidden from going to university.
> Sir Apirana Ngata. He studied at a university in NZ in the 1890s,
That was before the rules were changed. It was because of people like Ngata and Buck that the system was changed. The racists that ran the government were horrified that the natives were doing better than the colonialists. They "fixed" it.
> Discrimination in favour of Maori students largely has benefited the children of Maori professionals
It has helped establish traditions of tertiary study in Māori families, starting in the 1970s
There are plenty of working class Māori (I know a few) that used the system to get access. (The quota for Māori students in the University of Auckland's law school was not filled in the 1990s. Many more applied for it, but if their marks were sufficient to get in without using the quota they were not counted. If it were not for the quota many would not have even applied)
Talking of lies: "white people with a tiny percentage of Maori ancestry who take advantage of this" that is a lie.
The quotas are not based solely on ethnicity. To qualify you had to whakapapa (whāngai children probably qualified even if they did not whakapapa, I do not know), but you also had to be culturally Māori.
Lies and bigotry are not extinct in Aotearoa, but they are in retreat. The baby boomers are very disorientated, but the millennials are loving it.
Better for everybody
We still have massive biases against minorities in our countries. Some people prefer to pretend they don't exist so they can justify the current reality.
Nothing related to Trump has anything to do with qualified candidates; Trump is the least qualified president we have ever had in American history. Not just because he hadn't served in government or as a general, but because he is generally unaware of how government works and doesn't care to be informed.
I wonder if the new Woke should be called Neo-Woke, where you pretend to be mean to a certain group of people to accommodate another group of people who suffered from accommodating yet another group of people.
IMHO all this needs to be gone and just be like "don't discriminate, be fair" but hey I'm not the trend setter.
Ironic.
If you add regulations, people will use other AI companies from countries without them. The only result of that would be losing the AI race.
You can see this at Huggingface top models, fine-tuned models are way more popular than official ones.
And this is also good considering most companies (even China) offer their models free to download and use locally. Democratizing AI is the good approach here.
What would this declaration mean for free and open source models?
For example Deepseek uses the MIT License.
> warning countries not to sign AI deals with “authoritarian regimes”
Well, that now also rules out the US.
Why would any country yield given the hard line negotiating stance the US is now taking? And the flip flopping and unclear messaging on our positions?
The US and the UK were right to reject it.
Oh noes, I can't slurp all my user data and sell / give it to whoever. How will I make money if I can't exploit people's data?
GDPR does not tell people to put banners on their website. People chose to not learn about GDPR and do "like everyone else".
Will AI lead to a Dark Forest scenario on Earth between humans and AI?
Safety is mentioned in a context with trustworthiness, ethics, security, ...
[1] https://www.politico.eu/wp-content/uploads/2025/02/11/02-11-...
https://gist.github.com/lmmx/b373b9819318d014adfdc32182ab17f...
> Among the priorities set out in the joint declaration signed by countries including China, India, and Germany was “reinforcing international co-operation to promote co-ordination in international governance.”
so looks like they did
At the same time, the goal of the declaration and summit is to become less reliant on the US and China.
> Meanwhile, Europe is seeking a foothold in the AI industry to avoid becoming too reliant on the US or China.
So basically Europe signed together with China to compete against the US/UK, or what happened?
At least they aren't threatening to invade our countries or extorting privileged position.
- there's a real threat from AI to the open internet by drowning it in spam, fraud, and misinformation
- current "AI safety" work does basically nothing to address this and is kind of pointless
It's important that AI-enabled processes which affect humans are fair. But that's just a subset of a general demand for justice from the machine of society, whether it's implemented by humans or AIs or abacuses. Which comes back to demanding fair treatment from your fellow humans, because we haven't solved the human "alignment problem".
> Worldcoin's business is to provide a reliable way to authenticate humans online, which it calls World ID.
I want to know whether an image or video is largely generated by AI, especially when it comes to news. Images and video often imply that they are evidence of something actually happening.
I don't know how this would be achieved. I also don't care. I just want people to be accountable and transparent.
Rules like this would just lead to everything having an “AI generated” label.
People have tried it in the past with trying to require fashion magazines and ads warn when they photoshop the models. But obviously everything is photoshopped, and the problem becomes how do we separate good photoshop (levels, blemish remover?) from bad photoshop (warp tool?).
[0] https://appleinsider.com/articles/23/11/30/a-bride-to-be-dis...
That happened years ago. And without LLMs.
It's possible to stop developing things. It's not even hard; most of the world develops very little. Developing things requires capital, education, hard work, social stability and the rule of law. Many of us writing on this forum take those things for granted but it's more the exception than the rule, when you look at the entire planet.
I think we will face the scenario of runaway AI, where we lose control, and we may not survive. I don't think it will be a sky-net type of thing, sudden. At least not at first. What will happen is that we will replace humans by AIs in more and more positions of influence and power, gradually. Our ChatGPTs of today will become board members and government advisors of tomorrow. It will take some decades--though probably not many. Then, a face-off will come one day, perhaps. Humans vs them.
But if we do survive and come to regret the development of advanced AI and have a second chance, it will be trivially easy to suppress them: just destroy the semiconductor fabs, treat them the same way we treat ultra-centrifuges for enriching Uranium. Cut off the dangerous data centers, and forbid the reborn universities[1] from teaching linear algebra to the students.
[1]: We will lose advanced education for the masses on the way, as it won't be economically viable nor necessary.
That still feels like complete science fiction to me - more akin to appointing a complicated Excel spreadsheet as a board member.
Board members using tools like ChatGPT or Excel as part of their deliberations? That's great.
Replacing a board member entirely with a black box automation that makes meaningful decisions without human involvement? A catastrophically bad idea.
If the US were willing to compromise some of its core values, then we could probably stop AI development domestically.
But what about the rest of the world? If China or India want to reap the benefits of enhanced AI capability, how could we stop them? We can hit them with sanctions and other severe measures, but that hasn't stopped Russia in Ukraine -- plus the prospect of world-leading AI capability has a lot more economic value than what Ukraine can offer.
So if we can't stop the world from developing these things, why hamstring ourselves and let our competitors have all of the benefits?
The mere fact that you imagine that Moscow's motivation in invading Ukraine is economic is a sign that you're missing the main reasons Moscow or Beijing would want to ban AI: (1) unlike in the West and especially unlike the US, it is routine and normal for the government in those countries to ban things or discourage their use, especially new things that might cause large societal changes and (2) what Moscow and Beijing want most is not economic prosperity, but rather to prevent another one of those invasions or revolutions that kills millions of people and to prevent the country's ruling coalition from losing power.
Let's suppose that, like you, both Moscow and Beijing do not want AGI to exist. What could they do about it? Why should they trust that the rest of the world will also pause their AI development?
This whole discussion is basically a variation on the prisoner's dilemma. Either you cooperate and AI risks are mitigated, or you do not cooperate and try to take the best outcome for yourself.
I think we can expect the latter. Not because it is the right thing or because it is the optimal decision for humanity, but because each individual will deem it their best choice, even after accounting for P(doom).
That is why the US and Europe should stop AI in their territories first, especially as the US and Britain have been the main drivers of AI "progress" up to now.
Note: I'm not quite a doomer, but definitely a pessimist.
Great, can't wait for even some small improvement over the idiots in charge right now.
> What will happen is that we will replace humans by AIs in more and more positions of influence and power,
With all due respect, and not to be controversial, but how is this concern any more valid than the 'great replacement' worries?
The entire thing is little more than a thought experiment.
> Look at how fast AI has advanced; if you just project that trend out, we'll have human-level agents by the end of the decade.
No. We won't. Scale up transformers as big as you like; this won't happen without massive advances in architecture and hardware.
I believe it is possible, but the idea it'll happen any day now, and by accident is bullshit.
This is one step from Pascal's Wager, but being presented as fact by otherwise smart people.
Yes. Nobody can predict the future.
> but the idea it'll happen any day now, and by accident is bullshit.
We agree on that one: it won't be sudden, and it won't be by accident.
> I believe it is possible, but the idea it'll happen any day now, and by accident is bullshit.
Exactly. Not by accident. But if you believe it's possible, then we are both doomers.
The thing is, there are forces at play that want this. It's all of us. We in society want to remove other human beings from the chain of value. I use ChatGPT today so I don't have to pay a human editor. My boss uses Suno AI to play generated music with pro-productivity slogans before Teams meetings. The moment the owners of my enterprise believe it's possible to replace their highly paid engineers with AIs, they will do it. My bosses don't need to lift a finger today to ensure that future. Other people have already imagined it, and thus, already today we have well-funded AI companies doing their best to develop the technology. Their investors see an opportunity in making highly skilled labor cheaper, and they are dumping their money into that enterprise. Better hardware, better models, better harnesses for those models. All of that is happening at speed. I'm not counting on accidents there. If anything, I'm counting on Chernobyl-style accidents that make us realize, while there is still time, that we are stepping into danger.
Fortunately those days are over. Any politician dealing with a technical issue over their head can turn to an LLM and ask for comment. "Is signing this poorly thought out, difficult to interpret, laundry list of vague regulations, that could limit LLM progress, really a good idea? Break this down for me like I am 5, please."
(Even though the start appeared trivial, happenstance, even benign, the age where AIs rapidly usurped their own governance had begun. The only thing that could have made it happen faster, or more destructively, were those poorly thought out international agreements the world was lucky to dodge.)
I wonder why... maybe because it looks like the US replaced some "moral values" (not talking about "woke values" here, just plain "humanistic values" like in the Human Rights Declaration) with "bottom-line values" :-)
Hmm.
> Donald Trump had a fiery phone call with Danish prime minister Mette Frederiksen over his demands to buy Greenland, according to senior European officials.
https://www.theguardian.com/world/2025/jan/25/trump-greenlan...
> The president has said America pays $200bn a year 'essentially in subsidy' to Canada and that if the country was the 51st state of the US 'I don’t mind doing it', in an interview broadcast before the Super Bowl in New Orleans
https://www.theguardian.com/us-news/video/2025/feb/10/trump-...
"Well done to the UK for not signing the fully compromised Statement on Inclusive and Sustainable Artificial Intelligence for the People and the Planet. Australia shouldn't have signed this statement either given how France intentionally derailed attempts to build a global consensus on how we can develop AI safely.
For those who lack context, the UK organised the AI Safety Summit at Bletchley Park in November 2023 to allow countries to discuss how advanced AI technologies can be developed safely. There was a mini-conference in Korea, and then France was given the opportunity to organise the next big conference, a trust they immediately betrayed by changing the event to be about promoting investment in their AI industry.
They renamed the summit to the AI Action Summit and relegated safety from the sole focus to being just one of five focus areas, and not even one of five equally important focus areas, but one that seems to have been purposefully minimized even further.
Within the conference statement, safety was reduced to a single paragraph that, if anything, undermines safety:
“Harnessing the benefits of AI technologies to support our economies and societies depends on advancing Trust and Safety. We commend the role of the Bletchley Park AI Safety Summit and Seoul Summits that have been essential in progressing international cooperation on AI safety and we note the voluntary commitments launched there. We will keep addressing the risks of AI to information integrity and continue the work on AI transparency.”
Let’s break it down:
• First, safety is being framed as “trust and safety”. These are not the same thing. The word trust appearing first is not as innocent as it seems: trust is the primary goal and safety is secondary to it. This is a very commercial perspective: if people trust your product you can trick them into buying it, even if it isn't actually safe.
• Second, trust and safety are not framed as values important in and of themselves, but as subordinate to realising the benefits of these technologies, primarily the "economic benefits". While the development of advanced AI technologies could theoretically create a social surplus that could be taxed and distributed, it's naive to assume that this will be automatic, particularly when the policy mechanisms are this compromised.
• Finally, the statement doesn’t commit to continuing to address these risks, but only narrowly to “addressing the risks of AI to information integrity” and to “continue the work on AI transparency”. In other words, they’re purposefully downplaying any more significant potential risks, likely because discussing more serious risks would get in the way of convincing companies to invest in France.
Unfortunately, France has sold out humanity for short-term commercial benefit and we may all pay the price."
A little bit of Schadenfreude would feel really good right about now. What bothers me so much is that it's just symbolic for the US and UK NOT to sign these 'promises'.
It's not as if anyone would believe that the commitments would be followed through with. It's frustrating at first, but in reality this is a nothing burger, just emphasizing their ignorance.
> “The Trump administration will ensure that the most powerful AI systems are built in the US, with American-designed and manufactured chips,”
Sure, those American AI chips that are just pumping out right now. You'd think the administration would have advisers who know how things work.
That would be a kneejerk, short-sighted, self-destructive position to take, so I can believe people would do it.
And even more honestly, nobody cares
What’s much more important is strengthening rights that could be weakened by large scale data analysis of a population.
The right to a private life, and having minimal data collected — and potential then stolen — about your life.
The right of the state to investigate you for committing a crime using models and statistics only if a judge issues them a warrant to do so.
The right in a free market economy to transparent and level pricing instead of being gouged because an AI thinks people with physical characteristics similar to mine have lots of money.
Banning models that can create illegal images feels like legislators not aiming nearly high enough or smart enough:
“Vance just dumped water all over that. [It] was like, ‘Yeah, that’s cute. But guess what? You know you’re actually not the ones who are making the calls here. It’s us,’” said McBride.
"If you are not capable of violence, you are not peaceful. You are harmless"
Unless you can stand on an equal footing - either through an alliance, or through your own power - you aren't a negotiating partner, and I say that as a European.
This is exactly the value that has caused so much war and death all over the world, for thousands of years. Still, even in 2025, it's being followed. Are we doomed, chat?
Eg. birds abandoning rather than defending a perch when another approaches.
We're typically not happy to do that, though you can see it happening in some parts of the world right now.
Some kind of enlightened state where violent competition for resources (incl. status & power) no longer makes sense is imaginable, but seems a long way off.
The idea though is that if everyone suddenly disarmed overnight it would be so highly advantageous to a deviant aggressor that one would assuredly emerge.
Yes.
I would also recommend The Prince as light reading to better understand how the world works.
Those with the biggest economies and/or most guns have changed a few times, but the behaviours haven't and probably never will.
And the extent of which you can do global enforcement (which is often biased and selective) is limited by the reach of your economic and military power.
Which is why the US outspends the rest of the world's military powers combined, and how the US and its troops have waged illegal wars and committed numerous crimes abroad and gotten away with it despite pieces of paper saying what they're doing is bad; their reaction was always "what are you gonna do about it?"
See how many atrocities have happened under the watch of the UN. Laws aren't real, it's the enforcement that is real. Which is why the bullies get to define the laws that everyone else has to follow because they have the monopoly on enforcement.
Well, yes. This is why people have been paying a lot of attention to what exactly "rule of law" means in the US, and what was just norms that can be discarded.
Where it was used in a rhetorical, tantrum-throwing response to companies that refused to do the impossible, like building an encryption backdoor 'only for good guys', and had the sheer temerity to stand against arbitrary exercises of authority by using the courts to hold them to their actual power.
If actual 'more powerful than the states' ever occurs, they have nobody to blame but themselves for crying wolf.
Something like AI+drones (or, less imminently, AI+bioengineering) could lead to a severe degradation of security, like what followed the invention of nuclear weapons. A degradation in security that requires collective action to address. Even worse, chaos could be caused by small groups weaponizing the technology against high-profile targets.
If anything, the larger nations might be much more forceful about AI regulation than the above summit, demanding an NPT-style treaty where only a select club has access to the technology, in exchange for other nations having access to the applications of AI from servers hosted by the club.
You don't justify or define "severe degradation of security"; you just assert it as a fact.
The advent of nuclear weapons has meant 75 years of relative peace, which is unheard of in human history, so quite the opposite.
Given that AI weapons don't exist, you've just created a straw man.
I do claim that it is obvious that widespread acquisition of nuclear weapons by smaller states would be a severe degradation of security. Among other things, widespread ownership would mean that militant groups would acquire them and dictators would use them as protection, leading to an eventual use of the weapons.
Yes, the danger of AI weapons is nowhere near the level of nuclear weapons yet.
But, that is the trend.
https://www.nytimes.com/2023/07/25/opinion/karp-palantir-art...
For smaller countries nukes represented an increase in security, not a degradation. North Korea probably wouldn't still be independent today if it didn't have nukes, and Russia would never have invaded Ukraine if Ukraine hadn't given up its nukes. Restricting access to nukes is only in the interest of big countries that want to bully small countries around, because nukes level the playing field. The same applies to AI.
Regarding an increase in security with nukes, what you say applies to exceptions against a general non-nuclear background. Without restrictions, every small country could have a weapon, with a danger of escalation behind every conflict, authoritarians using the nuclear option as protection against a revolt, etc. The likelihood of nuclear war would be much higher (even in the current situation, there have been close shaves).
They need to dismantle bureaucracy to accelerate, NOT add new international agreements etc that would slow them down.
Once they become leaders, they will come up with such agreements to impose their "model" and way of doing things.
Right now they need to accelerate and not get stuck.
But politics aside, this also points to something I've said numerous times here before: In order to write the rulebook you need to be a creator.
Only those who actually make and build and invent things get to write the rules. As far as "AI" is concerned, the creators are squarely the United States and presumably China. The EU, Japan, et al., being mere consumers, simply cannot write the rules because they have no weight to throw around.
If you want to be the rulemaker, be a creator; not a litigator.
Exactly what I'd expect someone from a country where the economy is favoured over the society to say - particularly in the context of consumer protection.
You want access to a trade union of consumers? You play by the rules of that Union.
American exceptionalism doesn't negate that. A large technical moat does. But DeepSeek has jumped in and revealed how shallow that moat really is for AI at this neonatal stage.
In 2008 the EU had more people, more money, and a bigger economy than the US. With proper policies we could be in a place where we could bitch-slap both Trump and Putin. And not be left to wonder whose dick we have to suck deeper to get some gas.
I'm Japanese-American, so I'm not exactly happy about Japan's state of irrelevance (yet again). Their one saving grace as a special(er) ally and friend is they can still enjoy some of the nectar with us if they get in lockstep like the UK does (family blood!) when push comes to shove.
People and countries who make and ship products.
You don't make rules by writing several hundred pages of legalese as a litigator, you make rules by creating products and defining the market.
Be creators, not litigators.
That is completely wrong, at least if rules = the law. You might create fancy products all you like, if they do not adhere to the law in any given market, they cannot be sold there.
Create things? Or destroy them? Seems in reality, the most powerful nations are the ones who have acquired the greatest potential to destroy things. Creation is worthless if the dude next door is prepared to burn your house down because you look different to him.
The methane power plants that will be needed, of which the Trump administration is excited to create more. More plants mean more drilling and fracking. Great for the pocketbooks of their donors.
Methane totally doesn't leak! It's monitored by the companies doing the drilling and fracking, and is totally easy to detect in the environment if it does escape somehow. /s
And for what? Rich capital holders to get access to technology that furthers their wealth and allows them to lay off more labourers? The promise that some of its uses may potentially save the lives of rich people?
Nothing this administration is doing is terribly surprising. They're being very open about it. Good luck getting chip foundries set up, importing rare earth metals and sand, dealing with the ponds, and handling all the methane needed to power all of the DCs we're going to build. Have fun living next to one.
I don't understand how their isolationist, nationalist stance is supposed to support this technology and keep them "the leader in AI." Sounds ridiculous. Global supply chains and stable free-trade agreements got us here.
We’ve managed to write the entire encyclopedia together, but we don't have a simple place to choose a high-level set of values that most of us can get behind.
I propose solutions to current and multiversal AI alignment here: https://www.lesswrong.com/posts/LaruPAWaZk9KpC25A/rational-u...
Information technology was never the constraint preventing moral consensus the way it was for, say, aggregating information. Not only is that a problem with achieving the goals you lay out, it's also the problem with the false assumption that they are goals most would agree should be solved as you have framed them.
Frankly, I can't stand these guys viewing themselves as some sort of high-IQ intellectual majority types when no such labeling would be true and they're more like stereotypical tourists to the world. Though that's historically how anarchist university undergraduates have always been.