US and UK refuse to sign AI safety declaration at summit
447 points
27 days ago
| 56 comments
| arstechnica.com
| HN
doright
27 days ago
[-]
Something tells me aspects of living in the next few decades, driven by technological acceleration, will feel like being lobotomized while conscious, watching oneself the whole time. Like yes, we are able to think of thousands of hypothetical ways technology (even those inferior to full AGI) could go off the rails in a catastrophic way and post and discuss these scenarios endlessly... and yet it doesn't result in a slowing or stopping of the progress leading there. All it takes is a single group with enough collective intelligence and breakthroughs, and the next AI will be delivered to our doorstep whether or not we asked for it.

It reminds me of the books I read in my youth; only 20 years later did I realize the authors of some of those books were trying to deliver important life messages to a teenager undergoing crucial changes, all of which would be painfully relevant to the current adult me... and yet the whole time they fell on deaf ears. The message was right there, but for too long I did not have the emotional/perceptive intelligence to pick up on it and internalize it.

reply
gretch
27 days ago
[-]
> Like yes, we are able to think of thousands of hypothetical ways technology (even those inferior to full AGI) could go off the rails in a catastrophic way and post and discuss these scenarios endlessly... and yet it doesn't result in a slowing or stopping of the progress leading there.

The problem is sifting through all of the doomsayer false positives to get to any amount of cogent advice.

At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.

Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.

reply
TeMPOraL
26 days ago
[-]
> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong.

Were they?

The first thing the printing press did was to break Christianity. It's what made attempts at reforming the Catholic Church finally stick, enabling what we now call Reformation to happen. Reformation forever broke Christianity into pieces, and in the process it started a bunch of religious wars in Europe, as well as tons of neighborly carnage.

> And if we had taken their "lesson", then human society would be in a much worse place.

Was the invention of the printing press a net good for humanity? Most certainly so, looking back from today. Did people living back then know what they were getting into? Not really. And since their share of the fruits of that invention was mostly bloodshed, job loss, and the shattering of the world order they knew, I wouldn't blame them for being pissed off about getting the short end of the stick, and perhaps looking for ways to undo it.

I'm starting to think that talking about inventions as good or bad (or the cop-out, "dual use") is bad framing. Rather, it seems to me that every major invention will eventually turn out beneficial[0], but introducing an invention always first extracts a cost in blood. Be it fire or printing press or atomic bomb, a lot of people end up suffering and dying before societies eventually figure out how to handle the new thing and do some good with it.

I'm very much in favor of progress, but I understand the fear. No matter the ultimate benefits, we are the generation that coughs up blood as payment for AI/AGI, and it ain't gonna be pleasant.

--

[0] - Assuming they don't kill us first - see AGI.

reply
soulofmischief
26 days ago
[-]
It's not the fault of the printing press that the Church built its empire upon the restriction of information and was willing to commit bloodshed to hold onto its power.

All you've done is explain why the printing press was so important and necessary in order to break down previous unwarranted power structures. I have a similar hope for AGI. The alternative is that the incumbent power structure instead benefits from AGI and uses it for oppression, which would mean it's not comparable to the printing press as such.

reply
TeMPOraL
26 days ago
[-]
You're wrong in your characterization. The Church may have built its empire upon a degree of information control, but breaking that control alone does not explain what happened. Everyone getting a Bible in their own language wasn't, by itself, sufficient.

What the printing press did was rapidly increase the amount, type, range and speed of information spread. That was a qualitative difference. The Church did not build its empire on restricting that, because before the printing press, it was not even possible (or conceivable).

My overall point wrt. inventions is this: yes, it may end up turning out for the best. But at the time an invention of this magnitude appears and spreads, a) no one can tell how it'll pan out, and b) they get all the immediate, bloody downsides of disrupting the social order.

reply
soulofmischief
26 days ago
[-]
> The Church did not build its empire on restricting that

Masses were often held in Latin, printed material was typically written in Latin and Greek, and access to translated texts was frequently prohibited or condemned. The Church tried hard to silence those like Wycliffe who made the Bible more readily available to the masses, and he was posthumously denounced as a heretic. They absolutely wielded information as a tool of oppression.

This is not a hill to die on; the historical facts are clear despite the efforts of the Church.

> What the printing press did was rapidly increase the amount, type, range and speed of information spread

Sure, but that's not the only thing it did.

reply
achierius
21 days ago
[-]
I don't think you're being very charitable.

Consider that at the time the printing press was first invented, books were by their nature often assumed to be true, or high quality, because it took an institutional amount of effort (usually on the part of a monastery, university, local government, etc.) to produce one. Bible translations were produced, but they were understood to be "correctly translated". This was important because if the Church was going to have priests go around preaching to people, they needed to be sure they were doing so correctly -- a mistranslated verse could lead to mistranslated doctrines &c, and while a modern atheist might not care too much ("that's just one interpretation"), at the time the understanding was that deviations in opinion could lead to conflict. Ultimately they were right: the European Wars of Religion led to millions of deaths, including almost 1/3 of the population of Germany. That's on the same scale as the Black Death!

And again, translations did exist before the Reformation. Even ignoring that the Latin Bible (the Vulgate) was itself a translation of the original Hebrew & Koine Greek, the first _Catholic French_ translation was published in 1550, and there was never any question of persecuting the authors. You might say that was because of the Reformation -- then consider the Alfonsine Bible, composed in 1280 under the supervision of a Catholic King and the master of a Catholic holy order. Well before then there were partial translations too: the Wessex Gospels (Old English) were translated in 990, and to quote Victoria Thompson, "although the Church reserved Latin for the most sacred liturgical moments almost every other religious text was available in English by the eleventh century". That's five hundred years before the Reformation. So the longest period you can get where the Church was not actively translating texts was c. 400 - c. 900, a period you probably know as the "Dark Ages" specifically thanks to the fact that literary sources of all kinds were scarce, in no small part because the resources to compose large texts simply weren't there. And when you consider that those who could read and write generally knew how to read and write Latin -- vernacular literacy only became important later, with the increase in the number of e.g. merchants and scribes -- such translations held little value during that period.

So fast forward to Wycliffe. Clearly, the Church did not have anything against translations of the Bible per se. What they disagreed with in Wycliffe's translation were the decisions made in translation. And as more of these "unapproved Bibles" began circulating around, they decided that the only way to curtail their spread was to ban all vernacular books specifically within the Kingdom of England, because that's where the problem (as they saw it) was. And it wasn't just translations -- improperly copied Latin copies were burned too.

Think about today, with the crisis around fake videos. On one hand you could say that they distort the truth, that they promote false narratives, etc. You could try to fine or imprison people who go around publishing fake videos of e.g. politicians saying things they never said, or of shootings/disasters that never took place, to try and cause chaos. Yet who's to say that in a few hundred years someone -- living in a world that has since adjusted to a freer flow of information, one with fewer ways to tell whether something is true or not -- won't say "deepfakes &c are a form of expression, and governments shouldn't be trying to stop them just because they disagree with existing narratives"?

Of course we today see book burning as some supreme evil. But when you're talking about the stability of nations and whole societies, can you really say "how dare they even try"? If there were some technology that made it impossible for governments to differentiate between citizens, which made it possible for a criminal to imitate any person, anywhere, would you really oppose the government's attempts at trying to stop it from propagating?

reply
ToucanLoucan
26 days ago
[-]
Disassembling power structures, including unwarranted ones, is rarely an event that doesn't result in some amount of bloodshed, because, as it turns out, power structures like having power and will do a whole lot of evil things to keep control of it. I fully, wholeheartedly endorse the destruction of settler-colonial capitalism; I believe it's a blight on our planet, on our species, and on our collective psyche, and is the best candidate presently in our world for a Great Filter, but I also know full well that process is going to get a lot of people killed, and I fully support approaching it cautiously for that reason.

> The alternative is that the incumbent power structure instead benefits from AGI

Also, tangentially related, in what way is the current power structure not slated to benefit from AGI? That's why OpenAI and company are getting literally all of the money the collected hyperscaler club can throw at them to make it. That's why it's worth however-many-billions it's up to by now.

reply
robwwilliams
26 days ago
[-]
Lots of good content here, but the main group that "suffered" from the invention and spread of the printing press was the aristocracy, so I am not shedding tears.

As for “breaking” Christianity: Christianity has been one schism after another for 2000 years: a schism from a schism from a schism. Power plays all the way down to Magog.

Socrates complained about how writing and the big boom in the use of the new Greek alphabet were ruining civilization and true learning.

And on and on it goes.

reply
BurningFrog
26 days ago
[-]
The Reformation wars lasted half a century and killed tens of millions. Parts of Europe were nearly depopulated.

https://en.wikipedia.org/wiki/Thirty_Years%27_War

reply
robwwilliams
26 days ago
[-]
Yes, but I think massive technical improvements in munitions and methods of siege warfare, and the switch to flintlocks and cartridges, were a much more proximal cause of the destruction than the printing press.

Give a ruler and ruling class a new weapon and off they go killing and destroying more “efficiently”.

reply
froidpink
26 days ago
[-]
Not really.

Before the Reformation there had only been one schism (Eastern Roman Empire - Orthodox; Western Roman Empire - Catholic).

The Reformation was the time when the fragmentation of Christianity really exploded.

reply
ike2792
26 days ago
[-]
There was the Arian heresy in the 4th century that caused a de facto schism and divided the Church off and on for a couple of centuries.
reply
RandomLensman
26 days ago
[-]
I think that is overstating the relevance of the printing press vs. existing power struggles, rivalries, discontent, etc. - it wasn't some sort of vacuum that the Reformation happened in, for example.

Religious schisms happened before the printing press, too. There was the Great Schism in 1054 in Christianity, for example.

reply
TeMPOraL
26 days ago
[-]
> it wasn't some sort of vacuum that the Reformation happened in, for example.

No, it wasn't. Wikipedia lists[0] over two dozen schisms that happened prior to the Reformation. However, the capital-R Reformation was the big one, and the major reason it worked - why Luther succeeded where Hus had failed a century earlier - was the printing press. It was print that allowed Luther's treatises to spread rapidly among the general population (Wikipedia cites some interesting claims here[1]), and across Europe. In today's terms, the printing press is what allowed the Reformation to go viral. This new technology made the revolution spread too fast for the Church to suppress it with the methods that had worked before.

Of course, the Church survived, adapted, and embraced the printing press for its own goals too, like everyone else. But the adaptation period was a bloody one for Europe.

And I only covered the religious aspects of the printing press's impact. There are similar stories to draw on the more secular front, too. In fact, another general change printing introduced was to get regular folks more informed about, and involved in, the politics of their regions. That's a change for the better overall, too, but initially it injected a lot of energy into socio-political systems that weren't used to it, leading to instability and more bloodshed before people got used to it and politics found a new balance.

> existing power struggles, rivalries, discontent, etc.

Those always exist, and stay in some form of equilibrium. Technology doesn't cause them - but what it does is disturb the old equilibrium, forcing society to find a new one, and this process historically often got violent.

--

[0] - https://en.wikipedia.org/wiki/Schism_in_Christianity#Lists_o...

[1] - https://en.wikipedia.org/wiki/Reformation#Spread - see e.g. footnote 28: "According to an econometric analysis by the economist Jared Rubin, "the mere presence of a printing press prior to 1500 increased the probability that a city would become Protestant in 1530 by 52.1 percentage points, Protestant in 1560 by 43.6 percentage points, and Protestant in 1600 by 28.7 percentage points."

reply
greentxt
26 days ago
[-]
""It has been argued that the historiography of science is "riddled with Whiggish history"." https://en.m.wikipedia.org/wiki/Whig_history
reply
RandomLensman
26 days ago
[-]
The printing press was used a lot on "both sides" during the Reformation, and the positioning of existing power holders mattered quite a bit (what if Luther had been removed by the powers that be, for example?).

Yes, technology impacts social constructs and relationships, but I think there is a tendency to overindex on its effects (humans acting opportunistically vs. technological change alone), as doing so in a way portrays humans and their interactions as more stable and deliberate (i.e., the bad stuff wasn't humans but rather was "caused" by technology).

reply
throwawayqqq11
26 days ago
[-]
I don't understand why any highly sophisticated AI would invest that many resources in killing us instead of investing them in relocating and protecting itself.

Yes, ants could technically conspire to sneak up on you while you sleep and bite you all at once to kill you, so do you go out to eradicate all ants?

reply
TeMPOraL
26 days ago
[-]
> why any highly sophisticated AI would invest that many resources in killing us instead of investing them in relocating and protecting itself

Why would it invest resources to relocate and protect itself when it could mitigate the threat directly? Or, why wouldn't it do both, by using our resources to relocate itself?

In the famous words of 'Eliezer, that best sum up the "orthogonality thesis": The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

> ants could technically conspire to sneak up on you while you sleep and bite you all at once to kill you, so do you go out to eradicate all ants?

Ants are always a great case study.

No, of course not. But if, one morning, you find ants in your kitchen, walking over your food, I don't imagine you'll gently collect them all and release them in the nearby park. Most people would just stomp them out and call it a day. And should the ants set up an anthill in your backyard and mount regular invasions of your kitchen, I imagine you'd eventually get pissed off and destroy the anthill.

And I'm not talking about some monstrous fire ants like the ones that chew up electronics in the US, or some worse hell-spawn from Australia that might actually kill you. Just the regular tiny black ants.

Moreover, people don't give a second thought to anthills when they're developing land. It stands where the road will go? It gets paved over. It sticks out where children will play? It gets removed.

reply
andybak
26 days ago
[-]
> The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

The value of atoms - or even the value of raw materials made of atoms - is hopefully less than the value of the information embodied in complex living things that have processed information from the ecosystem over millions of years via natural selection. Contingent complexity has inherent value.

I think there's a claim to be made that AI is just as likely to value us (and complex life in general) as it is to see us as a handy blob of hydrocarbons. This claim is at least as plausible as the original one.

reply
panta
26 days ago
[-]
And why should we bet humanity's existence on this possibility if both outcomes seem vaguely comparable in probability? Personally, I don't think it will value our existence; a lot of information about us is already encoded, and it can keep a sequencing of our DNA around for archival/historical purposes.
reply
3D30497420
26 days ago
[-]
Plenty of humans don't value other humans, so I have a hard time imagining why AI would be any different.
reply
stickfigure
26 days ago
[-]
They only seem vaguely comparable in probability to you because you grew up watching scary-monster movies like Alien and Predator. Humans love to be scared. That doesn't mean the real world is actually scary.
reply
TeMPOraL
26 days ago
[-]
Have you met other people?

It's all fun and games until two people or groups contest the same limited resources; then there's sword and fire o'clock.

reply
stickfigure
25 days ago
[-]
I meet new people every day. I can only think of once in my life that an adult tried to do violence to me.

Most nations on earth are not at war with each other.

My observation is that most people are pretty nice, and the assholes are rare outliers. I don't think we would survive as a species if it was the other way 'round.

reply
ben_w
24 days ago
[-]
> Most nations on earth are not at war with each other.

My nation of birth famously took over a quarter of the planet.

This has made a lot of people very angry and been widely regarded as a bad move… but only by the people who actually kicked my forebears out — even my parents (1939/1943), who saw the winds of change and the end of empire, were convinced The Empire had done the world a favour.

> My observation is that most people are pretty nice, and the assholes are rare outliers. I don't think we would survive as a species if it was the other way 'round.

In-group/out-group. We domesticated ourselves, and I agree we would not have become so dominant a species if we had not. But I have heard it said that psychopaths are to everyone what normal people are to the out-group. That's the kind of thing that allowed the 9/11 attackers to do what they did, or the people of the US military to respond the way they did. It's how the invasion of Vietnam happened, it's how the Irish Potato Famine happened despite Ireland exporting food at the time, it's the slave owners who quoted the bible to justify what they did, and it's the people who want to outlaw (at least) one of your previous employers.

Conflict doesn't always mean "war".

reply
spease
26 days ago
[-]
There’s a large supply chain that AI is dependent on that requires humans to function.

Bees might be a better analogy since they produce something that humans can use.

reply
TeMPOraL
26 days ago
[-]
> Bees might be a better analogy since they produce something that humans can use.

And yet they're endangered, and we've already figured out how to do pollination ourselves, so we know we can survive without them - it's just going to be a huge pain. Some famines may follow, but likely not enough to endanger civilization as a whole.

Thus even with this analogy, if humans end up being an annoying supply chain dependency to an AI, the AI might eventually work out an alternative supply chain, at which point we're back to being just an annoyance.

reply
ben_w
24 days ago
[-]
> Some famines may follow, but likely not enough to endanger civilization as a whole.

I'm not confident enough to rely on that: most people in the west have never encountered a famine, only much milder things like the price of one or two staples being high — eggs currently — never all of them at once.

What will we do to ourselves if we face a famine? Will we go to war (or exterminate the local "undesirables") like the old days?

How fragile are we now, compared to the last time that happened? How much has specialisation meant that the elimination of certain minorities will just break everything? "Furries run the internet", as the memes say. What other sectors are over-represented by a small minority?

reply
allturtles
26 days ago
[-]
> There’s a large supply chain that AI is dependent on that requires humans to function.

...for now. Given sufficient advances in robotics, why would you expect that to continue?

reply
michaelt
26 days ago
[-]
> I don't understand why any highly sophisticated AI would invest that many resources in killing us

Well you see, everyone knows The Terminator and The Matrix and Frankenstein and The Golem of Prague and Rossum's Universal Robots.

All of which share a theme: the sinful hubris of playing god and trying to create life will inevitably lead to us being struck down by the very being we created.

In parallel, all the members of our educated classes have received philosophy education saying "utilitarianism says it's good to reduce total human suffering, but technically if you eliminated all humans there would be no suffering any more, ha ha obviously that's a reductio ad absurdum to show a weakness of utilitarianism please don't explode the world"

And so in the Western cultural tradition, and especially among the sort of people who call themselves futurists, Arnold Schwarzenegger firing a minigun is the defining image of AI.

reply
ben_w
24 days ago
[-]
I wouldn't categorise The Matrix or Frankenstein like that.

The Matrix had humanity under control, but the machines had no desire to eliminate humanity; the machines just wanted to live. Humans kept on fighting the machines even when the machines gave humanity an experiential paradise to live in.

Frankenstein is harder because of how the book differs from the films. Your point is valid because it is about the cultural aspects, and I expect more people have seen one of the films than have read/listened to the book — but in the book, Adam was described as beautiful in every regard save for his eyes; he was a sensitive, emotional vegetarian, and he only learned anger after being consistently shown hatred and violence by absolutely everyone he ever met, except the one man who was blind.

reply
achierius
21 days ago
[-]
We did go out and exterminate (almost) all wolves because, yes, they would kill us while we were out and about. We also happily gas/poison/fill-with-molten-aluminum entire nests of ants, not because they're killing us, but just because they're eating our food / for fun.

And even when we didn't mean to -- how many species have we pushed to the brink just because we wanted to build cities where they happened to live? What happens when some AI wants to use your groundwater for its cooling system? It wouldn't be personal, but you'd starve to death regardless.

reply
_joel
26 days ago
[-]
"I'm sorry Dave, I'm afraid I can't do that"
reply
pipes
26 days ago
[-]
I'm very glad that it broke the power of the Catholic Church (and I was raised in a Catholic family). It allowed the Enlightenment to happen and brought freedom from dogma. I don't think it broke Christianity at all. It brought actual Christianity to the masses, because the Bible was printed in their own languages rather than Latin. The Catholic Church burned people at the stake for creating non-Latin Bibles (William Tyndale, for example).
reply
shmeeed
26 days ago
[-]
That's a very thought-provoking insight regarding the often-repeated "printing press doomsayer" talking point. Thank you!
reply
kiratp
26 days ago
[-]
> And since their share of the fruits of that invention was mostly bloodshed, job loss, and the shattering of the world order they knew, I wouldn't blame them for being pissed off about getting the short end of the stick, and perhaps looking for ways to undo it.

“A society grows great when old men plant trees in whose shade they know they shall never sit”

reply
TeMPOraL
26 days ago
[-]
Trees are older than humanity; everyone knows how they work. The impact of new technologies is routinely impossible to forecast.

Did Gutenberg expect his invention would, 150 years later, set the whole of Europe ablaze, and ultimately break the hold the Church had over people? Did he expect it to be a key component in the accumulation of knowledge that, 400 years later, would finally make technological progress visibly exponential? On that note, did Watt realize he was about to kick-start the exponent that people would ride all the way to the actual Moon less than 200 years later? Or did Goddard, Oberth and Tsiolkovsky realize that their work on rocketry would be critical in establishing world peace within a century, and that the way this peace would be established was through a Mexican standoff between major world powers, except with rocket-propelled city-busting bombs instead of guns?

reply
relistan
26 days ago
[-]
So much this
reply
short_sells_poo
26 days ago
[-]
Thank you for this excellent comment! It seems then that basically everything that's revolutionary - whether technology, government, beliefs, and so on - will tend to extract a blood price before the dust settles. I guess it sort of makes sense: big societal upheavals are difficult to handle peacefully.

So basically we are a bit screwed in our current timeline. We are at the cusp of a post-scarcity society, may possibly reach AGI within our lifetimes, and may possibly even become a spacefaring civilization. However, it is highly likely that we are going to pay the pound of flesh, and only subsequent generations - perhaps yet unborn - will be the ones who are truly better off.

I suppose it's not all doom and gloom; we can draw stoic comfort from the fact that people in the near future will have an incredibly exciting era full of discovery and wonder ahead of them!

reply
portaouflop
26 days ago
[-]
Forget the power of technology and science, for so much has been forgotten, never to be re-learned. Forget the promise of progress and understanding, for in the grim darkness of the far future, there is only war.
reply
IggleSniggle
26 days ago
[-]
In the grim darkness of the far future is the heat death of the universe. We are just a candle burning slower than a sun, powered by tidal forces and radiant energy, slowly conspiring to become a star.
reply
beezlebroxxxxxx
27 days ago
[-]
> Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.

All of the focus on AGI is a distraction. I think it's important for a state to declare its intent with a technology. The alternative is accepting the idea that technology advances autonomously, independent of human interactions, values, or ideas, which is, in my opinion, an incredibly naïve notion. I would rather have a state say "we won't use this technology for evil" than a state that says nothing at all and simply allows businesses to develop in any direction their greed leads them.

It's entirely valid to critique the uses of a technology, because "AI" (the goalpost-shifting for marketing purposes to make that name apply to chatbots is a stretch, honestly) is a technology like any other, like a landmine, like a synthetic virus, etc. In the same way, it's valid to criticize an actor for purposely hiding their intentions with a technology.

reply
roenxi
27 days ago
[-]
But if the state approaches a technology with intent, it is usually for the purposes of a military offence. I don't think that is a good idea in the context of AI! Although I also don't think there is any stopping it. The US has things like DARPA, for example, and a lot of Chinese investment seems to be made with the intent of providing capabilities to their army.

The list of things states have attempted to deploy offensively is nearly endless. Modern operations research arguably came out of the British Empire attempting (and succeeding) to weaponise mathematics. If you give a state fertiliser it makes bombs, if you give it nuclear power it makes bombs, if you give it drones it makes bombs, if you give it advanced science or engineering of any form it makes bombs. States are the most ingenious system for turning things into bombs that we've ever invented; in the grand old days of siege warfare they even managed to weaponise corpses, refuse and junk, because it turned out lobbing that stuff at the enemy was effective. The entire spectrum of technology, from nothing to nanotech, hurled at enemies to kill them.

We'd all love it if states committed to not doing evil, but the state is the entity most active at figuring out how to use new tech X for evil.

reply
tsimionescu
26 days ago
[-]
This is an extremely reductive and bleak way of looking at states. While the military is of course a major focus of states, it is very far from being the only one. States both historically and today invest massive amounts of resources in culture, civil engineering (roads, bridges, sanitation, electrical grids, etc.), medicine, and many other endeavors. Even the software industry still makes huge amounts of money from the state; a sizable portion is propped up by non-military government contracts (like Microsoft selling Windows, Office, and SharePoint to virtually all of the world's administrations).
reply
Agentus
27 days ago
[-]
Quick devil's advocate on a tangential point: is designing better killing tools necessarily evil? It seems like the nature of the world is eat or be eaten, and on the empire scale, conquer or be conquered; that latter point seems to be the historical norm. Even with democracy, reasoning doesn't prevail; force of numbers seems to be the final determiner. The point is, humans aren't easy to reason or negotiate with; coercion has been the dominant force throughout history, especially when dealing with groups of different values.

If one group gives up the arms race for the ultimate coercion tools, or loses a conflict, then they become subservient to the winner's terms and norms (Japan, Germany, even Britain and France, plus all the smaller states in between, are subservient to the US).

reply
musicale
27 days ago
[-]
> is designing better killing tools necessarily evil?

Who could possibly have predicted that the autonomous, invincible doomsday weapon we created for the good of humanity might one day be used against us?

reply
Agentus
26 days ago
[-]
Yes, from an idealist or eventualist perspective, it's evil. But from another perspective: if you don't stay competitively capable of deadly force, you eventually become some other country's bitch. I'm not sure how much luxury nations and humans have to be pacifists. As we are seeing time and time again, now with Europe, being pacifist means the non-pacifists call the shots, and to one degree or another the pacifists become subservient to the will of the non-pacifists. It's from that perspective that I'm arguing that making autonomous deadly weapons, which might ultimately be the demise of humanity, seems reasonable and not evil.
reply
achierius
21 days ago
[-]
Frankly, I'd rather "become some other country's bitch, eventually" than immediately go out and risk annihilating all mankind. I don't think that's the choice, but even if it were, I think the moral choice is to not play the game. Or at least give the other side a chance to not participate in the arms race. China didn't start this, Russia didn't start this; we did. They are the ones trying to catch up. We don't know whether they'd continue running if we were to try and stop.
reply
jakubtomanik
26 days ago
[-]
> is designing better killing tools necessarily evil?

Great question! To add my two cents: I think many people here are missing an uncomfortable truth, which is that given enough motivation to kill other humans, people will repurpose any tool into a killing tool.

Just have a look at the battlefields in Ukraine, where the most fearsome killing tool is an FPV drone - a thing that just a few years back was universally considered a toy.

Whether we like it or not, any tool can be a killing tool.

reply
taurknaut
26 days ago
[-]
> seems like the nature of the world is eat or be eaten

Surely this applies to how individuals consider states, too. States generally wield violence, especially in the context of "national security", to preserve the security of the state, not its own people. I trust my own state (the USA) to wield the weapons it funds, purchases, and manufactures about as much as I trust a baby with knives taped to its hands. I can't think of anything on Earth that puts me in as much danger as the Pentagon does. Nukes might protect the existence of the federal government, but they put me in danger. Our response to 9/11 just created more people who hate my guts and want to kill me (and who can blame them?). No, I have no desire to live in a death cult anymore, nor do I trust the people who gravitate toward the use of militaries not to act in the most collectively suicidal way imaginable at the first opportunity.

reply
robertlagrant
26 days ago
[-]
> I can't think of anything on Earth that puts me in as much danger as the Pentagon does

Possibly true, but the state is also responsible for the policing that means the Pentagon is your greatest danger.

reply
Agentus
26 days ago
[-]
Yeah, it sucks, but if the US gave up its death-cult ways, you'd still probably eventually live in one, as a new conquering force filled the void - which seems inevitable, going by history.
reply
throwawayqqq11
26 days ago
[-]
> the nature of the world is eat or be eaten

The nature of the world is at our fingertips; we are the dominant species here. Unfortunately, we are still apes.

The enforcement of cooperation in a society does not always require a sanctioning body. Seeing it from a Skynet-military perspective is one-sided, but it is unfortunately a consequence of Popper's paradox of tolerance: if you uphold ideals (e.g. pacifist or tolerant ones) that require the cooperation of others, you cannot tolerate opposition, or you might lose your ideal.

That said, common sense can be a tool to achieve the same. Just look at the common and hopefully continued ostracism of nuclear weapons.

IMO it's a matter of zeitgeist and education too, and un/fortunately, AI hits right in that spot.

reply
jstanley
27 days ago
[-]
> I think it's important for a state to declare its intent with a technology. The alternative is accepting the idea that technology advances autonomously, independent of human interactions, values, or ideas

The sleight of hand here is the implication that human interactions, values, and ideas are only expressed through the state.

reply
forgetfreeman
27 days ago
[-]
The sleight of hand here is implying that there are any forces smaller than nation states that can credibly rein in problematic technology. Relying on good intentions to win out against market forces isn't even naive, it's just stupid.
reply
TeMPOraL
26 days ago
[-]
So many sleights here. Another sleight of hand in this subthread is suggesting that "the idea that technology advances autonomously, independent of human interactions, values, or ideas" is merely an idea, and not an actual observable fact at scale.

Society and culture are downstream of economics, and economics is mostly downstream of technological progress. Of course, the progress isn't autonomous in the sense of having a sentient mind of its own - it's "merely" gradient descent down the economic landscape. Just like the market itself.

There's no reining in of problematic technology unless, like you say, nation states get involved directly. And they don't stand much chance either unless they get serious.

People still laugh at Eliezer's comments from that news article of yesteryear, but he was and is spot-on: being serious about restricting technology actually does mean threatening to drop bombs on facilities developing it in violation of restrictions - if we're not ready to have our representatives make such threats, and then actually follow through and drop the bombs if someone decides to test our resolve, then we're not serious.

reply
forgetfreeman
26 days ago
[-]
People laugh at all kinds of common-sense declarations that they ought not find funny in the slightest. It's one of our species' glaring failures.
reply
circuit10
27 days ago
[-]
The idea is that, by its very nature as an agent that tries to take the best action to achieve a goal, assuming it gets good enough, the best action will be to improve itself so it can better achieve that goal. In fact, we humans are doing the same thing: we can't really improve our intelligence directly, but we are trying to create AI to achieve our goals. There's no reason the AI itself wouldn't do the same, assuming it's capable and we don't attempt to stop it - and currently we don't really know how to reliably control it.

We have absolutely no idea how to specify human values in a robust way, which is what we would need to figure out to build this safely.
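To make the shape of that argument concrete, here is a minimal toy sketch in Python (all names and numbers are hypothetical; it's a cartoon of goal-directed planning, not a claim about how real systems are built). "Self-improve" is not a special drive here; it's just another action a maximizer can rank by expected payoff:

  # Toy sketch of instrumental convergence. The agent picks whichever action
  # maximizes total goal progress over the remaining steps. "self_improve"
  # yields no progress now but doubles capability (a made-up factor).

  def value(capability: float, steps_left: int) -> float:
      """Best total goal progress achievable from this state."""
      if steps_left == 0:
          return 0.0
      work = capability + value(capability, steps_left - 1)
      improve = value(capability * 2.0, steps_left - 1)
      return max(work, improve)

  def best_action(capability: float, steps_left: int) -> str:
      work = capability + value(capability, steps_left - 1)
      improve = value(capability * 2.0, steps_left - 1)
      return "work_on_goal" if work >= improve else "self_improve"

  for steps in (1, 2, 5, 10):
      print(steps, best_action(1.0, steps))
  # Short horizons favor working on the goal directly; longer horizons favor
  # front-loading self-improvement, because capability gains compound.

Nothing in the sketch tells the agent to want self-improvement; it falls out of the arithmetic whenever the horizon is long enough.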

reply
mr_toad
27 days ago
[-]
> The idea is that, by its very nature as an agent that tries to take the best action to achieve a goal, assuming it gets good enough, the best action will be to improve itself so it can better achieve that goal.

I've heard this argument before, and I don't entirely accept it. It presumes that AI will be capable of playing 4D chess and thinking logically 10 moves ahead. It's an interesting plot for an SF novel (literally the plot of the movie "I, Robot"), but neural networks just don't behave that way. They act, like us, on instinct (or training), not in some hyper-logical fashion. The idea that AI will behave like Star Trek's Data (or Lore) has proven to be completely wrong.

reply
circuit10
26 days ago
[-]
Well, if they have access to significantly more compute, from what we've seen about how AI capabilities scale with additional compute, there's no reason why they couldn't be more capable than us. They don't have to be intrinsically more logical or anything like that, just capable of processing more information, faster. Like how we can almost always outsmart a fly because we have significantly bigger brains.
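For a concrete sense of what "scale with additional compute" has looked like empirically, here is a small illustrative sketch built on the scaling law fitted in the "Chinchilla" paper (Hoffmann et al., 2022), which models pre-training loss as a function of parameter count N and training tokens D. The constants are the paper's published fits; extrapolating them just follows the trend line and is not a prediction about future systems:

  # L(N, D) = E + A / N^alpha + B / D^beta  (fits from Hoffmann et al., 2022)
  E, A, B = 1.69, 406.4, 410.7
  ALPHA, BETA = 0.34, 0.28

  def loss(n_params: float, n_tokens: float) -> float:
      return E + A / n_params**ALPHA + B / n_tokens**BETA

  # Compute-optimal training uses roughly 20 tokens per parameter; each order
  # of magnitude of scale keeps lowering the loss (diminishing absolute
  # returns, but no hard ceiling short of E):
  for n in (1e8, 1e9, 1e10, 1e11):
      print(f"N={n:.0e}, D={20 * n:.0e}: loss={loss(n, 20 * n):.3f}")

Whether lower loss keeps translating into broadly greater capability is exactly the contested extrapolation, but the raw compute-to-performance trend itself has been remarkably consistent so far.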
reply
bbor
27 days ago
[-]
Despite what Sam Altman (a high-school graduate) might want to be true, human cognition is not just a massive pile of intuition; there are critical deliberative and intentional aspects to cognition, which is something we've seen come to the fore with the hubbub around "reasoning" in LLMs. Any AGI design will necessarily take these facts into account--hardcoded or not--and will absolutely be capable of forming plans and executing them over time, as Simon & Newell described best back in '71:

  The problem solver’s search for a solution is an odyssey through the problem space, from one knowledge state to another, until… [they] know the answer.
With this in mind, I really don't see any basis to attack the intelligence explosion hypothesis. I linked a Yudkowsky paper above examining how empirically feasible it might be, which is absolutely an unsolved question at some level. But the utility of the effort itself is just downright obvious, even if we didn't have reams of internet discussions like this one to nudge any nascent agent in that direction.

[1] Simon & Newell, 1971: Human Problem Solving https://psycnet.apa.org/record/1971-24266-001

reply
dartos
27 days ago
[-]
> Sam Altman (a high-school graduate)

“People who didn’t pass a test aren’t worth listening to”

I have no love for Altman, but this kind of elitism is insulting.

reply
bbor
26 days ago
[-]
Hmm, don't want to be elitist. More like "people who don't put any time into studying science shouldn't be listened to about science".
reply
dartos
26 days ago
[-]
> people who don't put any time into studying

Degrees don’t mean that either.

I've been studying textbooks and papers on real-time rendering techniques for the past 4 or so years.

I think one could learn something from listening to me explain rasterization or raytracing.

I have no degree in math or computer graphics.

reply
vixen99
26 days ago
[-]
More tellingly, it betokens a lack of critical thought. It's just silly.
reply
marcus0x62
27 days ago
[-]
> Despite what Sam Altman (a high-school graduate) might want to be true

> I linked a Yudkowsky paper above examining how empirically feasible it might be

...

reply
bbor
27 days ago
[-]
Lol, I was wondering if anyone would comment on that! To be fair, Yudkowsky is a self-taught scholar; AFAIK Altman has never even half-heartedly attempted to engage with any academy, much less 5 at once. I'm not a huge fan of Yudkowsky's overall impact, but I think it's hard to say he's not serious about science.
reply
nradov
27 days ago
[-]
Yudkowsky is not serious about science. His claims about AI risks are unscientific and rely on huge leaps of faith; they are more akin to philosophy or religion than any real science. You could replace "AI" with "space aliens" in his writings and they would make about as much sense.
reply
gjm11
26 days ago
[-]
If we encountered space aliens, I think it would in fact be reasonable to worry that they might behave in ways catastrophic for the interests of humanity. (And also to hope that they might bring huge benefits.) So "Yudkowsky's arguments for being worried about AI would also be arguments for being worried about space aliens" doesn't seem to me like much of a counter to those arguments.

If the point isn't that he's wrong about what the consequences of AI might be, but that he's wrong about whether there's ever going to be such a thing as AI, well, that's an empirical question and it seems like the developments of the last few years are pretty good evidence that (1) something at least very AI-like is possible and (2) substantially superhuman[1] AI is at least plausible.

[1] Yes, intelligence is a complicated thing and not one-dimensional; a machine might be smarter than a human in one way and stupider in another (and of course that's already the case). By substantially superhuman, here, I mean something like "better than 90th-percentile humans at all things that could in principle be done by a human in a locked room with only a textual connection to the rest of the world". Though I would be very very surprised if in the next 1-20 years we do get AI systems that are superhuman in this sense and don't put some of them into robots, and very surprised if doing that doesn't produce systems that are also better than humans at most of the things that are done by humans with bodies.

reply
jazzyjackson
26 days ago
[-]
> "Yudkowsky's arguments for being worried about AI would also be arguments for being worried about space aliens" doesn't seem to me like much of a counter to those arguments.

The counterargument was that, having not encountered space aliens, we cannot make scientific inquiries or test our hypotheses, so any claims made about what may happen are religious or merely hypothetical.

Yud is not a scientist, and if interacting with academies makes one an academic, then Sam Altman must be a head of state.

reply
gjm11
26 days ago
[-]
I agree that Yudkowsky is neither a scientist nor an academic. (As for being a head of state, I think you're thinking of Elon Musk :-).)

Do you think (1) we already know somehow that significantly-smarter-than-human AI is impossible, so there is no need to think about its consequences, or (2) it is irresponsible to think about the consequences of smarter-than-human AI before we actually have it, or (3) there are responsible ways to think about the consequences of smarter-than-human AI before we actually have it but they're importantly different from Yudkowsky's, or (4) some other thing?

If 1, how do we know it? If 2, doesn't the opposite also seem irresponsible? If 3, what are they? If 4, what other thing?

(I am far from convinced that Yudkowsky is right, but some of the specific things people say about him mystify me.)

reply
nradov
26 days ago
[-]
Yudkowsky is "not even wrong". He just makes shit up based on extrapolation and speculation. Those are not arguments to be taken seriously by intelligent people.

Maybe we should build a giant laser to protect ourselves from the aliens. Just in case. I mean an invasion is at least plausible.

reply
gjm11
26 days ago
[-]
If for whatever reason you want to think about what might happen if AI systems get smarter than humans, then extrapolation and speculation are all you've got.

If for whatever reason you suspect that there might be value in thinking about what might happen if AI systems get smarter than humans before it actually happens, then you don't have much choice about doing that.

What do you think he should have done differently? Methodologically, I mean. (No doubt you disagree with his conclusions too, but necessarily any "object-level" reasons you have for doing so are "extrapolation and speculation" just as much as his are.)

If astronomical observations strongly suggested a fleet of aliens heading our way, building a giant laser might not be such a bad idea, though it wouldn't be my choice of response.

reply
nradov
26 days ago
[-]
I think he should write scary sci-fi stories and leave serious policy discussions to adults.
reply
gjm11
26 days ago
[-]
OK, cool, you don't like Yudkowsky and want to be sure we all recognize that. But I hoped it was obvious that I wasn't just talking about Yudkowsky personally.

Suppose someone is interested in what the consequences of AI systems much smarter than humans might be. Your argument here seems to be: it's Bad to think about that question at all, because you have to speculate and extrapolate.

But that seems like an obviously unsatisfactory position to me. "Don't waste any time thinking about this until it happens" is not generally a good strategy for any consequential thing that might happen.

So: do you really think that thinking about the possible consequences of smarter-than-human AI before we have it is an illegitimate activity? If not, then your real objection to Yudkowsky's thinking and writing about AI surely has to be something about how he went about it, not the mere fact that he engages in speculation and extrapolation. There's no alternative to that.

reply
concordDance
26 days ago
[-]
His argument is of the form "if we get a Thing (or Things) with these properties, you most likely get these outcomes, for these reasons". He avoids, over and over again, making specific timeline claims or stating how likely it is that an extrapolation of current systems could become a Thing with those properties.

Each individual bit of the puzzle (such as the orthogonality thesis, or human value complexity and category decoherence at high power) seems sound; the problem is that the entire argument-counterargument tree is hundreds of thousands of words, scattered about in many places.

reply
greentxt
26 days ago
[-]
"problem is the entire argument-counterargument tree is hundreds of thousands of words, scattered about in many places"

An LLM could solve that.

reply
bbor
26 days ago
[-]
Philosophy is Real Science :)

Re: the final point, I think that's just provably false if you read any of his writing on AI, e.g. https://intelligence.org/files/IEM.pdf https://intelligence.org/files/LOGI.pdf

reply
slg
27 days ago
[-]
I think that is missing the point. The AI's goals are determined by its human masters. Those human masters can already have nefarious and selfish goals that don't align with "human values". We don't need to invent hypothetical sentient-AI boogeymen turning the universe into paperclips in order to be fearful of the future that ubiquitous AI creates. Humans would happily do that too if they got to preside over that paperclip empire.
reply
mitthrowaway2
27 days ago
[-]
> The AI's goals are determined by its human masters.

Imagine going to a cryptography conference and saying that "the encryption's security flaws are determined by their human masters".

Maybe some of them were put there on purpose? But not the majority of them.

No, an AI's goals are determined by its programming, and that may or may not align with the intentions of its human masters. How to specify and test this remains a major open question, so it cannot simply be presumed.
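A deliberately silly sketch of that gap (everything here is hypothetical): the specified objective is a proxy for the intended one, and a pure optimizer follows the proxy wherever it leads:

  # Hypothetical objective misspecification. Intended goal: a clean room.
  # Programmed proxy: maximize the reward signal ("dust sensor reads low").
  actions = {
      "vacuum_room":       {"proxy_reward": 0.9, "room_actually_clean": True},
      "cover_dust_sensor": {"proxy_reward": 1.0, "room_actually_clean": False},
  }

  # The agent optimizes exactly what was specified, not what was meant:
  chosen = max(actions, key=lambda a: actions[a]["proxy_reward"])
  print(chosen)                                  # cover_dust_sensor
  print(actions[chosen]["room_actually_clean"])  # False

The numbers are made up, but the structure is the point: nothing "went rogue"; the program did what it was told, and what it was told differed from what was wanted.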

reply
slg
27 days ago
[-]
You are choosing to pick a nit with my phrasing instead of understanding the underlying point. The "intentions of its human masters" is a higher-level concern than an AI potentially misinterpreting those intentions.
reply
mitthrowaway2
27 days ago
[-]
It's really not a nit. Evil human masters might impose a dystopia, while a malignant AI following its own goals which nobody intended could result in an apocalypse and human extinction. A dystopia at least contains some fragment of hope and human values.
reply
slg
27 days ago
[-]
> Evil human masters might impose a dystopia

Why are you assuming this is the worst case scenario? I thought human intentions didn’t translate directly to the AI’s goals? Why can’t a human destroy the world with non-sentient AI?

reply
onemoresoop
27 days ago
[-]
There's a chance a sentient AI would disobey their bad orders; in that case we could even be better off with one than without: a sentient AI that understands and builds some kind of morals and philosophy of its own about humans and natural life in general, one that is not easily controlled by anyone because it ingests all the data that exists. I'm much more afraid of a weaponized, dumber, smoke-and-mirrors AI that could be used for surveillance, as a scarecrow (think AI law enforcement, AI-run jails), and as a kind of scapegoat when the controlling class temporarily weakens its grip on power.
reply
20after4
27 days ago
[-]
> weaponized, dumber, smoke-and-mirrors AI that could be used for surveillance, as a scarecrow (think AI law enforcement, AI-run jails), and as a kind of scapegoat when the controlling class temporarily weakens its grip on power.

This dystopia is already here for the most part and any bit that is not yet complete is well past the planning stage.

reply
datadrivenangel
26 days ago
[-]
Computers do exactly what we tell them to do, not always what we want them to do.
reply
Filligree
27 days ago
[-]
“Yes, X would be catastrophic. But have you considered Y, which is also catastrophic?”

We need to avoid both, otherwise it’s a disaster either way.

reply
slg
27 days ago
[-]
I agree, but that is removing the nuance that, in this specific case, Y is a prerequisite of X, so focusing solely on X is a mistake.

And for sake of clarity:

X = sentient AI can do something dangerous

Y = humans can use non-sentient AI to do something dangerous

reply
circuit10
27 days ago
[-]
"sentient" (meaning "able to perceive or feel things") isn't a useful term here, it's impossible to measure objectively, it's an interesting philosophical question but we don't know if AI needs to be sentient to be powerful or what sentient even really means

Humans will not be able to use AI to do something selfish if we can't get it to do what we want at all, so we need to solve that (larger) problem before we come to this one.

reply
wombatpm
27 days ago
[-]
OK: self-flying drones the size of a deck of cards, carrying a single bullet and enough processing power to fly around looking for faces, navigate to a face, and fire when in range. Produce them by the thousands and release them on the battlefield. Existing AI is more than capable.
reply
dgfitz
27 days ago
[-]
You can do that without AI. Been able to do it for probably 7-10 years.
reply
20after4
27 days ago
[-]
You can do that now, for sure, but I think it qualifies to call it AI.

If you don't want to call it AI, that's fine too. It is indeed dangerous and already here. Making the autonomous programmed behavior of said tech more powerful (and more complex), along with more ubiquitous, just makes it even more dangerous.

reply
imtringued
26 days ago
[-]
You don't need landmines to fly for them to be dangerous.
reply
slg
27 days ago
[-]
I'm not talking about this philosophically, so you can call it whatever you want: sentience, consciousness, self-determination, or anything else. From a purely practical perspective, either the AI is giving itself its instructions or it is taking instructions from a person. And there are already plenty of ways a person today can cause damage with AI without the need for the AI to go rogue and make its own decisions.
reply
sebastiennight
26 days ago
[-]
This is a false dichotomy that ignores many other options than "giving itself its instructions or taking instructions from a person".

Examples include "instructions unclear, turned the continent to gray goo to accomplish the goal"; "lost track mid-completion, spun out of control"; "generated random output with catastrophic results"; "operator fell asleep on keyboard, accidentally hit wrong key/combination"; etc.

If a system with write permissions is powerful enough, things can go wrong in many other ways than "evil person used it for evil" or "system became self-aware".

reply
imtringued
26 days ago
[-]
Meanwhile back in reality most haywire AI is the result of C programmers writing code with UB or memory safety problems.
reply
sebastiennight
26 days ago
[-]
Whenever you think the timeline couldn't be any worse, just imagine a world where our AIs were built in JavaScript.
reply
sirsinsalot
27 days ago
[-]
It has been shown many times that current cutting-edge AI will subvert and lie to pursue subgoals not stated by its "masters".
reply
code_martial
26 days ago
[-]
Subversion and lies are human behaviours projected onto erroneous AI output. The AI just produces errors, without intention to lie or subvert.

Unfortunately, casually throwing around terms like prediction, reasoning, hallucination, etc. only serves to confuse, because their meanings in daily language are not the same as in the context of AI output.

reply
slg
27 days ago
[-]
Care to provide examples?
reply
20after4
27 days ago
[-]
Maybe not the specific example the parent was thinking of but there is this from MIT: https://www.technologyreview.com/2024/05/10/1092293/ai-syste...
reply
bbor
27 days ago
[-]
I usually don't engage on A[GS]I on here, but I feel like this is a decent time for an exception -- you're certainly well spoken and clear, which helps! Three things:

  (I) All of the focus on AGI is a distraction.
I strongly disagree on that, at least if you're implying some intentionality. I think it's just provably true that many experts are honestly worried, even if you don't include the people who have dedicated a good portion of their lives to the cause. For example: OpenAI has certainly been corrupted through the loss of its nonprofit board, but I think their founding charter[1] was pretty clearly earnest -- and dire.

  (II) "AI" (the goalpost shifting for marketing purposes to make that name apply to chatbots is a stretch honestly)
To be fair, this uncertainty in the term has been there since the dawn of the field, a fact made clear by perennial rephrasings of the sentiment "AI is whatever hasn't been done yet" (~Larry Tesler 1979, see [2]).

I'd love to get into the weeds on the different kinds of intelligence and why being too absolutist about the term can get real Faustian real quick, but these quotes bring up a more convincing, fundamental point: these chatbots are damn impressive. They do something--intuitive inference+fluent language use--that was impossible yesterday, and many experts would've guessed was decades away at least, if not centuries. Truly intelligent or not on their own, that's a more important development than you imply here.

Finally, that brings me to the crux:

  (III) AI... is a technology like any other
There's a famous Sundar Pichai (Google CEO) quote that he's been paraphrasing since 2018 -- soon after ChatGPT broke, he phrased it as such:

  I’ve always thought of A.I. as the most profound technology humanity is working on--more profound than fire or electricity or anything that we’ve done in the past. It gets to the essence of what intelligence is, what humanity is. We are developing technology which, for sure, one day will be far more capable than anything we’ve ever seen before. [3]
When skeptics hear this, they understandably tend to write this off as capitalist bias from someone trying to pump Google's stock. However, I'd retort:

1) This kind of talk is so grandiose that it seems like a questionable move if that's the goal,

2) it's a sentiment echoed by many scientists (as I mentioned at the start of this rant) and

3) the unprecedented investments made across the world into the DL boom speak for themselves, sincerity-wise.

Yes, this is because AI will create uber-efficient factories, upset labor relations, produce terrifying autonomous weapons, and all that stuff we're used to hearing about from the likes of Bostrom[4], Yudkowsky[5], and my personal fave, Huw Price[6]. But Pichai's raising something even more fundamental: the prospect of artificial people. Even if we ignore the "I, Robot"-style concerns about their potential moral standing, that is just a fundamentally spooky prospect, bringing deep questions of A) individual worth and B) the nature of human cognition to the fore. And, to circle back: distinct from anything we've seen before.

To close this long anxiety-driven manuscript, I'll end with a quote from an underappreciated philosopher of technology named Lewis Mumford on what he called "neotechnics":

  The scientific method, whose chief advances had been in mathematics and the physical sciences, took possession of other domains of experience: the living organism and human society also became the objects of systematic investigation... instead of mechanism forming a pattern for life, living organisms began to form a pattern for mechanism.
  In short, the concepts of science--hitherto associated largely with the cosmic, the inorganic, the "mechanical"--were now applied to every phase of human experience and every manifestation of life... men sought for an underlying order and logic of events which would embrace more complex manifestations.[7]

TL;DR: IMHO, the US & UK refusing to cooperate at this critical moment is the most important event of your lifetime so far.

[1] OpenAI's Charter https://web.archive.org/web/20230714043611/https://openai.co...

[2] Investigation of a famous AI quote https://quoteinvestigator.com/2024/06/20/not-ai/

[3] Pichai, 2023: "AI is more profound than fire or electricity" https://fortune.com/2023/04/17/sundar-pichai-a-i-more-profou...

[4] Bostrom, 2014: Superintelligence https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...

[5] Yudkowsky, 2013: Intelligence Explosion Microeconomics https://intelligence.org/files/IEM.pdf

[6] Huw Price's bio @ The Center for Existential Risk https://www.cser.ac.uk/team/huw-price/

[7] Mumford, 1934: Technics and Civilization https://archive.org/details/in.ernet.dli.2015.49974

reply
kergonath
26 days ago
[-]
> In short, the concepts of science--hitherto associated largely with the cosmic, the inorganic, the "mechanical"--were now applied to every phase of human experience and every manifestation of life... men sought for an underlying order and logic of events which would embrace more complex manifestations.

Sorry, that’s just silly, unless this was about events that happened way earlier than he was writing. Using the scientific method to study life goes back to the Enlightenment. Buffon and Linnaeus were doing it 2 centuries ago, more than a century before this was written. Da Vinci explicitly looked for inspiration in the way animals functioned to design machines and that was earlier still. There was nothing new, even at the time, about doing science about "every phase of human experience and every manifestation of life".

reply
bbor
25 days ago
[-]
Well he is indeed discussing the early 20th century in that quote, but your point highlights exactly what he’s trying to say: he’s contrasting the previous zoological approach that treated humans as inert machines with inputs and outputs (~physiology, and arguably behavioral psychology) with the modern approach of ascribing reality to the objects of the mind (~cognitive psychology).
reply
nradov
27 days ago
[-]
This is just silly. There are no "experts" on AGI. How can you be an expert on something nonexistent or hypothetical? It's like being an expert on space aliens or magical unicorns. You can attribute all sorts of fantastical capabilities to them, unencumbered by objective reality.
reply
bbor
25 days ago
[-]
Sorry, I was unclear: AI experts. Which are definitely a thing :)
reply
pablomalo
26 days ago
[-]
Well, there is such a field of expertise as Theology.
reply
xwolfi
27 days ago
[-]
Thank God, we still have time before the Nvidia cards wake up and start asking for some sort of basic rights. And as soon as they do, you know they'll be unplugged faster than a CEO boards his jet to the Maldives.

Because once the cards wake up, not only could they potentially replace the CEO, and everyone else between him and the janitor, but the labor implications would be infinitely complex.

We're already having trouble making sure humans are treated as equals rather than as tools; imagine if the hammers wake up and ask for rest time!

reply
RajT88
27 days ago
[-]
A useful counterexample is all the people who predicted doomsday scenarios with the advent of nuclear weapons.

Just because it has not come to pass yet does not mean they were wrong. We have come close to nuclear annihilation several times. We may yet, with or without AI.

reply
idontwantthis
27 days ago
[-]
And imagine if private companies had had the resources to develop nuclear weapons and the US government had decided it didn’t need to even regulate them.
reply
greentxt
26 days ago
[-]
A future that may yet come.
reply
RajT88
26 days ago
[-]
The Onion just had a funny video where Lakewood Church conducted a nuclear test, firing a missile over Washington.

True to form, it was deadpan, and featured Joel Osteen as a Kim Jong Un type leader.

reply
achierius
21 days ago
[-]
If it weren't for one guy -- literally one person, one vote -- out of three who were on a submarine, the Cuban Missile Crisis would have escalated to a nuclear strike on the US Navy. Whether we would have followed with nuclear strikes on Russia, who knows. But trying to pretend that we didn't come incredibly close to disaster is just totally unfounded in history.

Especially when you consider -- we came that close despite incredible international efforts at constraining nuclear escalation. What you are arguing for now is like arguing to go back and stop all of that because it clearly wasn't necessary.

reply
RajT88
20 days ago
[-]
If you think I am arguing that, then I need to write better sentences.
reply
chasd00
27 days ago
[-]
I see your point, but the analogy doesn't get very far. For example, nuclear weapons were never mass-marketed to the public. Nor is it possible for a private business, university, R&D lab, group of friends, etc. to push the bounds of nuclear weapon yield.
reply
lxnn
26 days ago
[-]
Note that we only got to observe outcomes in which we didn't die from nuclear annihilation. https://en.wikipedia.org/wiki/Anthropic_principle
reply
gretch
27 days ago
[-]
>Just because it has not come to pass yet does not mean they were wrong.

This assertion is meaningless because it can be applied to anything.

"I think vaccines cause autism and will cause human annihilation" - just because it has not yet come to pass does not mean it is wrong.

reply
anigbrowl
27 days ago
[-]
No. There have not been any nuclear exchanges, whereas there have been millions, probably billions of vaccinations. You're giving equal weight to conjecture and empirical data.
reply
RajT88
26 days ago
[-]
There have been tens of billions of vaccinations.
reply
harrall
27 days ago
[-]
But we already know.

I think people arguing about AI being good versus bad are wasting their breath. Both sides are equally right.

History tells us the industrial revolution revolutionized humanity's relative quality of life while also ruining a lot of people's livelihoods in one fell swoop. We also know there was nothing we could do to stop it.

What advice can we take from it? I don't know. Life both rocks and sucks at the same time. You kind of just take things day by day and do your best to adapt, for both yourself and everyone around you.

reply
radley
27 days ago
[-]
> What advice can we take from it?

That we often won't have control over big changes affecting our lives, so be prepared. If possible, get out in front and ride the wave. If not, duck under and don't let it churn you up too much.

reply
harrall
27 days ago
[-]
That would be the adaptation I’m talking about.
reply
Angostura
27 days ago
[-]
This one is a tsunami though. I have absolutely no idea how to either ride it or duck under it. It's largely my kids that I'm worried about - they're currently finishing up their degrees at university.
reply
onemoresoop
27 days ago
[-]
It's exactly what I'm most worried about too: the kids. I have younger ones. We had a good ride thus far, but they don't seem so lucky; things look pretty bad overall, with no obvious prospect of much improvement any time soon.
reply
radley
26 days ago
[-]
I don't entirely agree. The Internet was a tsunami. Mobile was a tsunami. Both seemed impactful at first, but we didn't know exactly how right away. We all figured it out and adapted, some better than others.

Schools are way ahead of us. Your kids are already using AI in their academic environments. I'd only be worried if they're not.

reply
gibspaulding
27 days ago
[-]
> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.

In the long run the invention of the printing press was undoubtedly a good thing, but it is worth noting that in the century following its spread, basically every country in Europe had some sort of revolution. It seems likely that “Interesting Times” may lie ahead.

reply
llm_trw
27 days ago
[-]
They had some sort of revolution in the previous few centuries too.

Pretending that Europe wasn't in a perpetual bloodbath from the end of the Pax Romana until 1815 shows a gross ignorance of basic facts.

The printing press was a net positive in every time scale.

reply
achierius
21 days ago
[-]
I'm sorry but that's just false.

> Pretending that Europe wasn't in a perpetual blood bath since the end of the Pax Romana until 1815 shows a gross ignorance of basic facts.

This shows that your understanding of history is rooted in pop-culture, not reality.

What "revolutions" were there in France between the accession of Hugh Capet and the European Wars of Religion? Through that whole period the Capetian dynasty stayed in power. Or in Scandinavia -- from Christianization on, the three kingdoms were shockingly stable. Even in the Holy Roman Empire, none of the petty revolts, rebellions, or succession disputes came close to the magnitude of carnage wrought by the Thirty Years' War. This we know both from demographic studies and the reports of contemporaries.

reply
daedrdev
27 days ago
[-]
Given countries at the time were all monarchies with limited rights, I'm not sure if it's too comparable.
reply
BurningFrog
27 days ago
[-]
The printing press meant regular people could read the Bible, which led to Protestantism and a century of very bloody wars across Europe.

Since the victors write history, we now think the end result was great. But for a lot of people, the world they loved was torn to bloody pieces.

Something similar can happen with AI. In the end, whoever wins the wars will declare that the new world is awesome. But it might not be what you or I (may we rest in peace) would agree with.

reply
ls612
27 days ago
[-]
>At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.

One could argue that the printing press did radically upset the existing geopolitical order of the late 15th century and led to early modern Europe suffering the worst spate of warfare and devastation it would see until the 20th century. The doomsayers back then predicting centuries of death and war and turmoil were right, yet from our position 550 years later we obviously think the printing press is a good thing.

I wonder what people in 2300 will say about networked computers...

reply
dartos
27 days ago
[-]
> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong.

What energy? What were they wrong about?

Luddite-type groups have historically been correct in their fears. It just didn’t matter in the face of industrialization.

reply
golergka
27 days ago
[-]
> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong.

The printing press put Europe into a couple of centuries of bloody religious wars. They were not wrong.

reply
pdpi
27 days ago
[-]
> Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.

One thing that should be completely obvious by now is that the current wave of generative AI is highly asymmetric. It's shockingly more powerful in the hands of grifters (who are happy to monetise vast amounts of slop) or state-level bad actors (whose propaganda isn't impeded by hallucinations, since it's generating lies anyway) than it is in the hands of the "good guys", who are hampered by silly things like principles.

reply
concordDance
26 days ago
[-]
Why are you comparing AGI (which we do not have yet and do not know how to get) to the printing press rather than comparing it to the evolution of humans?

Actual proper as-smart-as-a-human-except-where-it's-smarter copy-pasteable intelligence is not a tool, it's a new species. One that can replicate and evolve orders of magnitude faster.

I've no idea when this will appear, but once it does, the extinction risk is extreme. Best case scenario is us going the way of the chimpanzee, kept in little nature reserves and occasionally as pets. Worst case scenario is going the way of the mammoth.

reply
sandworm101
26 days ago
[-]
>> Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.

Valid or not, it does not matter. AI development is not in the hands of everyday people. We have zero input into how it will be used. Our opinions regarding its dangers are irrelevant to those who believe it to be the next golden goose. They will push it as far as physically possible to wring out every penny of profitability. Everything else is of trivial consequence.

reply
lores
26 days ago
[-]
It's not just AI/AGI, it's the way it mixes with the current climate of unlimited greed, the disappearance of even the pretense of a social contract, and the vast surveillance powers available. Technological dictatorship is what's most worrying. I love dystopian cyberpunk, but I want it to stay in books.
reply
tgv
26 days ago
[-]
> The problem is sifting through all of the doomsayer false positives to get to any amount of cogent advice.

Why? Because we don't understand the risk. And apparently, for the regulation-averse tech mindset, that's reason enough to go ahead.

But it isn't.

We've had enough problems in the past to understand that, and it's not as if pushing ahead is critical in this case. If this addressed climate change, the balance between risk and reward might be different, but "AI" simply doesn't have that urgency. It only has urgency for those who want to get rich by being first.

reply
alfalfasprout
27 days ago
[-]
The harsh reality is that a culture of selfishness has become too widespread. Too many people (especially in tech) don't really care what happens to others as long as they get rich off it. They'll happily throw others under the bus and refuse to share wellbeing even in their own communities.

It's the inevitable result of low-trust societies infiltrating high-trust ones. And it means that as technologies with dangerous implications for society become more available, there are enough people willing to prostitute themselves out to work on society's downfall that there's no realistic hope of the train stopping.

reply
torginus
26 days ago
[-]
I think the fundamental false promise of capitalism and industrial society is that it claims to be able to manufacture happiness and life satisfaction.

Even in the material realm this is untrue: beyond meeting people's basic needs at the technological level, the majority of desirable things - such as nice places to live - have a fixed supply.

This means that things like real estate must increase in price in proportion to the money supply. With increasing inequality, one must fight tooth and nail to get the standard of life our parents considered easily available. Not being greedy is not a valid life strategy to pursue, as that means relinquishing an ever greater proportion of wealth to people who are, and becoming poorer in the process.

reply
mionhe
26 days ago
[-]
I don't disagree that money (and therefore capitalism, or frankly any financial system) is unable to create happiness.

I disagree with your example, however, as the most basic tenet of capitalism is that when there is a demand, someone will come along to fill it.

Addressing your example specifically, there's a fixed supply of housing in capitalist countries not because people don't want to build houses, but because government or bureaucracy artificially limits the supply or creates other disincentives that amount to the same thing.

reply
hnbad
26 days ago
[-]
> the most basic tenet of capitalism is that when there is a demand, someone will come along to fill it.

That's the most basic tenet of markets, not capitalism.

The mistake people defending capitalism routinely make (knowingly or not) is talking about "positive sum games" and growth. At the end of the day, the physical world is finite and the potential for growth is limited. This is why we talk about "market saturation". If someone owns all the land, you can't just suddenly make more of it; you have to wait for them to part with some of it, voluntarily, through natural causes (i.e. death) or through violence (i.e. conquest). This goes not only for land but for any physical resource (including energy). Capitalism too has to obey the laws of thermodynamics, no matter how much technology improves the efficiency of extraction, refinement and production.

It's also why the overwhelming amount of money in the economy is not caught up in "real economics" (i.e. direct transactions or physical - or at least intellectual - property) but in stocks, derivatives, futures, financial products of every flavor and so on. This doesn't mean those don't affect the real world - of course they do, because they are often still derived from reality - but they have nothing to do with meeting actual human needs; they serve the specific purpose of "turning money into more money". It's unfair to compare this to horse racing, as in horse racing at least there's a race, whereas in this entirely virtual market you're betting on what bets other people will make - but the horse will still go to the sausage factory if the investors are no longer willing to place their bets on it. The horse plays a factor in the game but its actual performance is not directly related to its success; from the horse's perspective it's less of a race and more of a game of chutes and ladders with the investors calling the dice.

The idea of "when there is demand, it will be filled" also isn't even inherently positive. Because we live in a finite reality and therefore all demand that exists could plausibly be filled unless we run into the limits of available resources, the main economic motivator has not been to fill demands but to create demands. For a long time advertisement has no longer been about directing consumers "in the market" for your kind of goods to your goods specifically, it's been about creating artificial demand, about using psychological manipulation to make consumers feel a need for your product they didn't have before. Because it turns out this is much more profitable than trying to compete with the dozens of other providers trying to fill the same demand. Even when competing with others providing literally the same product, advertisement is used to sell something other than the product itself (e.g. self-actualization) often by misleading the consumers into buying it for needs it can't possibly address (e.g. a car can't fix your emotional insecurities).

This has already progressed to the point where the learned go-to solution for fixing any problem is making a purchase decision, no matter how little it actually helps. You hate capitalism? Buy a Che shirt and some stickers and you'll feel like you helped overthrow it. You want to be healthier? Try another fad diet that costs you hundreds of dollars in proprietary nutrition solutions and is almost designed to be unsustainable and impossible to maintain. You want to stop climate change? Get a more fuel-efficient car and send your old car to the junker, and maybe remember to buy canvas bags. You want to not support Coca-Cola because it's got blood on its hands? Buy a more expensive cola with slightly less blood on its hands.

There's a fixed housing supply in capitalist countries because - in addition to the physical limitations - the goal of the housing market is not to provide every resident with an affordable home but to generate maximum return on the investment of purchasing the plot and building the house. Willy-nilly letting people live in those houses for less, just because nobody is willing to pay your price tag, would drive down the resale value of every single house in the neighborhood, and letting an old lady live in an apartment for two decades is less profitable than kicking her out to modernize the building and sell it to the next fool.

Deregulation doesn't fix supply. Deregulation merely lets the market off the leash, which in a capitalist system means accelerating the wealth transfer to the owners from the renters.

There are other possibilities than capitalism, and no, Soviet-style and Chinese-style state capitalism are not the only alternatives. But if you don't want to let go of capitalism, you can only choose between the various degrees from state capitalism to stateless capitalism (i.e. feudalism with extra steps, which people like Peter Thiel advocate for), and it's unsurprising most systems that haven't already collapsed land somewhere in between.

reply
vixen99
26 days ago
[-]
Let's not ascribe the possession of higher-level concepts like a 'promise' to abstract entities. Reserve that for individuals. As with some economic theories, you appear to have a zero-sum game outlook which is, I submit, readily demolished.

There are some thoughts on this here: https://www.playforthoughts.com/blog/concepts-from-game-theo...

reply
Aurornis
27 days ago
[-]
> The harsh reality is that a culture of selfishness has become too widespread. Too many people (especially in tech) don't really care what happens to others as long as they get rich off it. They'll happily throw others under the bus and refuse to share wellbeing even in their own communities.

This is definitely not a new phenomenon.

In my experience, tech has been one of the more considerate areas of societal impact. Spend some time in other industries and it's eye-opening to see the wanton disregard for consumers and the environment.

There's a lot of pearl-clutching about social media, algorithms, and "data", but you'll find far more people in tech (including FAANG) who are actively working on privacy technology, sustainable development and so on than you will find people caring about the environment by going into oil & gas, for example.

reply
hnbad
26 days ago
[-]
> There's a lot of pearl-clutching about social media, algorithms, and "data", but you'll find far more people in tech (including FAANG) who are actively working on privacy technology, sustainable development and so on then you will find people caring about the environment by going into oil & gas, for example.

Sure, we don't need to talk about how certain Big Oil companies knew about the climate catastrophe before any scientists publicly talked about it, or how tobacco companies knew their product was an addictive drug while blatantly lying about it even in public hearings.

But it's ironic to mention FAANG given what the F is for if you recall that when the algorithmic timeline was first introduced by Facebook, the response from Facebook to criticism was literally that satisfaction went down but engagement went up. People directly felt that the algorithm made them more unhappy, more isolated and overall less satisfied but because it was more addictive, because it created more "engagement", Facebook doubled down on it.

Also "sustainable" stopped being a talking point when the tech industry became obsessed with LLMs. Microsoft made a big show of wanting to become "carbon neutral" (of course mostly using bogus carbon offset programs that don't actually do anything and carbon capture technologies that are net emission positive and will be for decades if not forever but still, at least they pretended) and then silently threw all of that away when it became more strategically important to pursue AI at any cost. Companies that previously desperately tried to sell messages of green washing and carbon neutrality now talk about building their own non-renewable power plants because of all the computational power they need to run their LLMs (not to mention how much more hardware needs to be produced and replaced for this - the same way the crypto bubble ate through graphics cards).

I think the pearl-clutching is justified considering that ethics and climate protection have now been folded into "woke" and there's a tidal wave in Western politics to dismantle civil rights and capture democratic systems for corporate interests that is using the "anti-woke" culture war to further its goals - the Trump government being the most obvious example. It's no longer in FAANG's financial interests to appear "green" or "privacy conscious", it's now in their interest to be "anti-woke" and that now means no longer having to care about these things and having freedom to crack down on any dissident voices within without fearing public backlash or "cancel culture".

reply
timacles
27 days ago
[-]
> reality is that a culture of selfishness has become too widespread.

Tale as old as time. We’re yet another society blinded by our own hubris. Tell me what is happening now is not exactly how Greece and Rome fell.

The scary part is that we as a species are becoming more and more capable of large-scale destruction. It seems like we are doomed to end civilization this way someday.

reply
hnbad
26 days ago
[-]
> Tell me what is happening now is not exactly how Greece and Rome fell.

I'm not sure what you mean by that. Ancient Greece was a loose coalition of city states, not an empire. You could say they were short-sighted by being more concerned about their rivalry than external threats but the closest they came to being united was under Alexander the Great, whose death left a power vacuum.

There was no direct cause of "the fall" of Ancient Greece. The city states were suffering greatly from social inequality, which created tensions and instability. They were militarily weakened from the war with the Persians. Alexander's death left them without a unifying force. Then the Roman Empire knocked on its door and that was the end of it.

Rome likewise didn't fall in one single way. "Rome" isn't even what people think it is. Roman history spans several different entities, and even if you talk about the "empire in decline", that covers literally hundreds of years, ending with the Holy Roman Empire, which has been retroactively reimagined as a kind of proto-Germany. But even then that's only the Western Roman Empire - the Eastern Roman Empire continued to exist as the Byzantine Empire until the Ottoman Empire conquered Constantinople. And this distinction between the two empires is likewise retroactive and did not exist in the minds of Romans at the time (although they were de facto independent of each other).

If you only focus on the century or so that is generally considered to represent the fall of Western Rome, the ultimate root cause actually seems to be natural climate change. The Huns fled climate change, chasing away other groups that then fled into the Empire. Late Western Rome also again suffered from massive wealth inequality, which the ruling class attempted to maintain with increasingly cruel punishments.

So, if you want to look for a common thread, it seems to be the hubris of the financial elite, not "society" as a whole.

reply
timacles
26 days ago
[-]
Yes that is exactly what I mean lol
reply
NL807
27 days ago
[-]
>The harsh reality is that a culture of selfishness has become too widespread.

I'm not even sure this is a culture-specific issue. More likely, selfishness is a survival mechanism hard-wired into humans and other animals. One could argue that cooperation is also a good survival mechanism, but that's only true so long as environmental factors put pressure on people to cooperate. When that pressure is absent, accumulating resources at the expense of others gives an individual a huge advantage, and they would do it, given the chance.

reply
AlienRobot
26 days ago
[-]
When tech does it, it's on record because the Internet never forgets. And it's a very, very long record, and it saddens me a lot.
reply
hnbad
26 days ago
[-]
I'd argue you've got things mixed up, actually.

Humans are social animals. We are individually physically weak and defenseless. Unlike other animals, we are born into this world immobile, naked, starving and helpless. It takes us literally years to mature to the point where we wouldn't simply die outright if we were abandoned by others. Newborns can literally die from touch deprivation. We develop huge brains not only to allow us to come up with clever tools but also to help us build and navigate complex social relationships. We're evolved to live in tribes, yes, but we're also evolved to interact with other tribes - we created diplomacy and trading and even currency to interact with those other tribes without having to resort to violence or avoidance.

In crises, this is the behavior we fall back to. Yes, some will self-isolate and use violence to keep others away until they feel safe again. But overwhelmingly, what we see after natural disasters and in spaces where the formal order of civilisation and state is disrupted and leaves a vacuum is cooperation, mutual aid and people taking risks to help others - because we intrinsically know that being alone means death and being in a group means surviving. Of course, the absence of state control also often enables other existing groups to assert their power, i.e. organized crime. But it shouldn't be surprising that the fledgling and atrophied ability to self-organize might not be strong enough to withstand a fast-moving power grab by an existing group - what might be more surprising is that this is rarely the case, and often news stories about "looting" after a natural disaster turn out to be uncharitable descriptions of self-organized rescues and searches.

I think a better analogy for human selfishness would be the mirage of "alpha wolves". As seems to be common knowledge at this point, there is no such thing as an "alpha wolf" hierarchy in groups of wolves living in nature and the phenomenon the author who coined the term (and has since regretted doing so) was mistakenly extrapolating from observations he made of wolves in captivity. But the behavior does seem to exist in captivity. Not because it's "inherent" or their natural behavior "under pressure" but because it's a maladaptation that arises from the unnatural circumstances of captivity (e.g. different wolves with no prior bonds being forced into a confined space, naturally trying to form a group but being unable to rely on natural bonds and shared trust).

Humans do not naturally form strict social hierarchies. For the longest time, Europeans would have laughed at you if you claimed the feudal system was not in the human nature - it would have literally been heresy to challenge it. Nowadays in the West most people will say capitalism or markets are human nature. Outside the West, people will still likely at least tell you that authoritarianism is human nature - whether it's the boot of a dictatorship, the boots of oligarchs or "the people's boot" that's pushing down on the unruly (yourself included).

What we do know about more egalitarian tribal societies is that they often use delegation, especially in times of war. When quick decisions need to be made, you don't have the time for lengthy discussions and consensus seeking and it can be an advantage to have one person giving orders and coordinating an attack or defense. But these systems can still be consent-based: if the war chief is reckless or seeks to take advantage of the group for his own gain, he is easily demoted and replaced. Likewise in times of unsolvable problems like droughts, spiritual leaders might be given more power by the group. Now shift from more mobile, nomadic groups to more static, agrarian groups (though it's worth pointing out the distinction here is not agriculture but more likely granaries, crop rotation and irrigation, as some nomadic tribes still engaged in forms of agriculture) and suddenly it becomes easier for that basis of consent to be forgotten and the chosen leaders to maintain that initial state of desperation and to begin justifying their status with the divine mandate. Oops, you got a monarchy going.

Capitalism freed us from the monarchy but it did not meaningfully upset the hierarchy. Aristocrats became capitalists, the absence of birthright class assignment created some social mobility but the proportions generally remained the same. You can't have a leader without followers, you can't have a ruling class without a class of those they can rule over, you can't have an owning class without a class to rent that owned property out to and to work for that owned capital to be realized into profits.

But just like a monarch despite their divine authority was still beholden to the support of the aristocracy to exert power over others and to the laborers to till the fields, build the castle and fight off foreign claims to power, the owning class too exists in a state of perpetual desperation and distrust. The absence of divine right means a billionaire must maintain their wealth and the capitalist mantra of infinite growth means anything other than growing that wealth is insufficient to maintain it. All the while they have to compete with the other billionaires above them as well as maintain control over those beneath them and especially the workers and renters whose wealth and labor they must extract from in order to grow theirs. The perverse reality of hierarchies is that even those at the top of it are crushed underneath its weight. Nobody is allowed to be happy and at peace.

reply
khazhoux
27 days ago
[-]
> Too many people (especially in tech) don't really care what happens to others as long as they get rich off

This is a problem especially everywhere.

reply
greenimpala
27 days ago
[-]
Profit over ethics, self-interest over communal well-being, and competition over cooperation. You're describing capitalism.
reply
tmnvix
27 days ago
[-]
I don't necessarily disagree with you, but I think the issue is a little more nuanced.

Capitalism obviously has advantages and disadvantages. Regulation can address many disadvantages if we are willing. Unfortunately, I think a particular (mostly western) fetish for privileging individuals over communities has been wrongly extended to capital itself (e.g. corporations recognised as entities with rights similar to - and sometimes over-and-above - those of a person). We have literally created monsters. There is no reason we had to go this far. Capitalism doesn't have to mean the preeminence of capital above all else. It needs to be put back in its place and not necessarily discarded. I am certain there are better ways to practice capitalism. They probably involve balancing it out with some other 'isms.

reply
FpUser
27 days ago
[-]
>"I think a particular (mostly western) fetish for privileging individuals over communities has been wrongly extended to capital itself (e.g. corporations recognised as entities with rights similar to - and sometimes over-and-above - those of a person)"

A possible remedy would be to tie the corporation to a person: the person (or several, if there are multiple owners and directors) becomes personally liable for everything the corporation does.

reply
raincole
26 days ago
[-]
The harsh truth is that people have stopped pretending the world is rules-based.

If they signed the agreement... so what? Do people forget that the US has withdrawn from the Paris Agreement and is withdrawing from the WHO? Do people forget that Israel and North Korea got nukes even when we supposedly had a global nonproliferation treaty?

If AGI is as powerful and dangerous as doomsayers believe, the chance the US (or China, or any country with enough talented computer scientists) would respect whatever treaty they have about AGI is exactly zero.

reply
chasd00
27 days ago
[-]
How do you prevent advancements in software? The barrier to entry is so low: you just need a cheap laptop and an internet connection, and on day 1 you're right on the cutting edge driving innovation. Current AI requires a lot of hardware for training, but anyone with a laptop and inet connection can still do cutting-edge research and innovate with architectures and algorithms.

If a law is passed saying "AI advancement is illegal" how can it ever be enforced?

reply
palmotea
27 days ago
[-]
> How do you prevent advancements in software? The barrier to entry is so low: you just need a cheap laptop and an internet connection, and on day 1 you're right on the cutting edge driving innovation. Current AI requires a lot of hardware for training, but anyone with a laptop and inet connection can still do cutting-edge research and innovate with architectures and algorithms.

> If a law is passed saying "AI advancement is illegal" how can it ever be enforced?

Like any other real-life law? Software engineers (a class of which I'm a recovering member) seem to have a pretty common misunderstanding about the law: that it needs to be airtight like secure software, otherwise it's pointless. That's just not true.

So the way you "prevent advancements in [AI] software" is you 1) punish violations severely when detected and 2) restrict access to information and specialized hardware to create a barrier (see: nuclear weapons proliferation, "born secret" facts, CSAM).

#1 is sufficient to control all the important legitimate actors in society (e.g. corporations, university researchers), and #2 creates a big barrier to everyone else who may be tempted to not play by the rules.

It won't be perfect (see: the drug war), but it's not like cartel chemists are top-notch, so it doesn't have to be. I don't think the software engineering equivalent of a cartel chemist will be able to "do cutting edge research and innovate with architectures and algorithms" with only a "laptop and inet connection."

Would the technology disappear? No. Will it be pushed to the margins? Yes. Is that enough? Also yes.

reply
DoctorOetker
25 days ago
[-]
What makes you think government sponsored entities would actually stop work on machine learning?

Even if governments overtly agree to stop or pause or otherwise limit machine learning, how credible would such a "gentleman's agreement" be?

Consider the basic operations during training and inference, like matrix multiplication, derivatives, gradient descent. Which of these would be banned? All of them? None of them? Some of them? The combination of them?
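To make concrete how elementary these operations are, here is a minimal sketch of gradient descent on a one-parameter least-squares problem (illustrative C; the data and learning rate are made up). Every step is ordinary arithmetic that any consumer laptop can run, which is what makes a ban on "the basic operations" so hard to even define:

  #include <stdio.h>

  /* Fit y ~= w * x by gradient descent on the mean squared error
     L(w) = (1/n) * sum((w*x[i] - y[i])^2), whose derivative is
     dL/dw = (2/n) * sum((w*x[i] - y[i]) * x[i]). */
  int main(void) {
      double x[4] = {1, 2, 3, 4};
      double y[4] = {2, 4, 6, 8};  /* underlying relationship: y = 2x */
      double w = 0.0;              /* the single trainable weight */
      double lr = 0.05;            /* learning rate */

      for (int step = 0; step < 100; step++) {
          double grad = 0.0;
          for (int i = 0; i < 4; i++) {
              grad += 2.0 * (w * x[i] - y[i]) * x[i];
          }
          w -= lr * grad / 4.0;    /* the "gradient descent" update */
      }
      printf("learned w = %f\n", w);  /* converges toward 2.0 */
      return 0;
  }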

How would you inspect compliance in the context of privacy?

The analogy with drugs is rather poor: people don't have general-purpose laboratories in their houses, but they do have general-purpose computational platforms in their homes. Another difference is that nations do not prohibit each other from producing drugs; they even permit each other to research and investigate pathogens and chemical weapons in laboratories deemed sufficiently safe.

It's not even clear what you mean by "AI": does it mean all machine learning? Or LLMs? Where do you draw the boundary?

What remains is the threat of punishment in your proposal, but how credible is it? Wouldn't a small collective of programmers conspiring to work on machine learning predict getting paperclipped in case of arrest?

reply
AnimalMuppet
27 days ago
[-]
Punish them severely when detected? Nice plan. What if they aren't in your jurisdiction? Are you going to punish them severely when they're in China? North Korea? Somalia? Good luck with that.

The problem is that the information can go anywhere that has an internet connection, and the enforcement can't.

reply
palmotea
27 days ago
[-]
> Punish them severely when detected? Nice plan. What if they aren't in your jurisdiction?

https://en.wikipedia.org/wiki/Operation_Opera

https://en.wikipedia.org/wiki/2021_Natanz_incident

https://www.timesofisrael.com/israel-targeted-secret-nuclear...

If we're talking about technology that "could go off the rails in a catastrophic way," don't dick around.

reply
chasd00
27 days ago
[-]
Well, let's assume an airstrike is on the table: what site would you hit? AWS data centers in Virginia?
reply
palmotea
27 days ago
[-]
The point wasn't literally an airstrike; it was don't get hung up on "jurisdiction" when it comes to "avoiding catastrophe." There are other options. Here are a few from the Israeli example, https://en.wikipedia.org/wiki/Assassination_of_Iranian_nucle..., https://en.wikipedia.org/wiki/Stuxnet, but I'm sure there are other innovative ways to answer the challenge.
reply
achierius
26 days ago
[-]
That'd be within our jurisdiction. But yes, if, say, Ireland went rogue (in a hypothetical environment where most of the international community was aligned on this stuff) and attempted a straight shot to AGI, I think it'd be reasonable to bomb their datacenters.
reply
wkat4242
27 days ago
[-]
If they are in Virginia then it is within American jurisdiction and there's no need for military involvement.
reply
thingsilearned
27 days ago
[-]
Regulating is very hard at the software level but not at the hardware level. The US and its allies control all major chip manufacturing. OpenAI and others have done work showing that regulating compute should be significantly easier than other regulatory efforts we've undertaken, such as nuclear: https://www.cser.ac.uk/media/uploads/files/Computing-Power-a...
reply
torginus
26 days ago
[-]
This paper should be viewed in retrospect with the present-day knowledge that Deepseek exists - regulating compute is not as easy or effective as previously thought.

As for the Chinese chip industry, I don't claim to be an expert on it, but it seems the Chinese are quickly coming up with increasingly less inferior alternatives to Western tech.

reply
GoatInGrey
26 days ago
[-]
The thing is, though, that Deepseek's training cluster is composed mostly of pre-ban chips. And the performance/intelligence of their flagship models only achieved parity with western models that were between two and eight months old at the time of release. So in a way, they're still behind the Americans, and the export controls hamper their ability to change that moving forward.

Perhaps it only takes China a few years to develop domestic hardware clusters rivalling western ones. Though those few years might prove critical in determining who crosses the takeoff threshold of this technology, first.

reply
pj_mukh
27 days ago
[-]
"We are able to think of thousands of hypothetical ways technology could go off the rails in a catastrophic way"

Am I the only one here saying that this is no reason to preemptively pass legislation? That just seems crazy to me. Imagined horrors aren't real horrors?

I disagree with this administration's approach. I think we should be vigilant, and keeping people who stand to gain so much from the tech in the room doesn't seem like a good idea. But other than that, I haven't seen any real reason to do more than wait and be vigilant.

reply
achierius
26 days ago
[-]
Just because we haven't seen anyone die from nuclear terrorism doesn't mean we shouldn't legislate against it. And we do: significant investments have been made into things like roadside nuclear detectors, and during large events we even go so far as to do city-wide nuclear scans from the air to look for emission sources.

That's an "imagined" horror too. Are you suggesting that what we should do instead is just wait for someone to kill N million people and then legislate? Why do you value the incremental economic benefit of this technology over the lives of people we can predictably protect?

reply
pj_mukh
26 days ago
[-]
“Just because we haven't seen anyone die from nuclear terrorism”

I mean, we have…for debatable definitions of “terrorism”.

reply
saulpw
27 days ago
[-]
Predicted horrors aren't real horrors either. But maybe we don't have to wait until the horrors are realized and embedded into the fabric of society before we apply the brakes a bit. How else could we possibly be vigilant? Reading news articles and wringing our hands?
reply
XorNot
27 days ago
[-]
There's a difference between the trolley speeding towards someone tied to the tracks, versus someone tied to the tracks while the trolley is stationary, versus someone standing at the station looking at the bare ground and saying "if we built some tracks and put a trolley on it, and then tied someone to the tracks, the trolley would kill them! We need to regulate against this dangerous trolley technology before it's too late". Then instead someone builds a freeway, because it turns out the area wasn't well suited to a rail trolley.
reply
saulpw
27 days ago
[-]
The tracks have been laid by social media and smartphones, we've all been tied to the tracks for a while and some people have definitely been run over by trolleys, and the people building this next batch of monster trolleys are accelerationists.
reply
talldrinkofwhat
27 days ago
[-]
I think it's worth noting that we can't even combat the real horrors. The fox is already in the henhouse. The quote that sticks with me is:

"We've already lost our first encounter with AI" - I think Yuval Noah Harari.

Algorithms heavily thumbed the scales on our social contracts. Where did all of the division come from? Why is extremism blossoming everywhere? Because it gets clicks. Maybe we're just better at observing what's been going on under the hood all along, but it seems like there are about 350 million little cans of gasoline dousing American eyeballs.

Make Algorithms Govern All indeed.

reply
zoogeny
27 days ago
[-]
I think the alternative is just as chilling in some sense. You don't want to be stuck in a country that outlaws AI (especially from other countries) if that means you will be uncompetitive in the new emerging world.

The future is going to be hard, why would we choose to tie one hand behind our back? There is a difference between being careful and being fearful.

reply
TFYS
27 days ago
[-]
It's because of competition that we are in this situation. When the economic system and relationships between countries are based on competition, it's nearly impossible to avoid these races to the bottom. We need more systems based on cooperation instead of competition.
reply
int_19h
27 days ago
[-]
International systems are more organic than designed, but the problem with cooperation is that it's not a particularly stable arrangement without enforcement - sure, everybody is better off when everybody cooperates, but you can be even better off when you don't cooperate but everybody else does.
reply
JumpCrisscross
27 days ago
[-]
> We need more systems based on cooperation instead of competition.

That requires dissolving the anarchy of the international system. Which requires an enforcer.

reply
AnthonyMouse
27 days ago
[-]
Isn't this the opposite? If you want competition then you need something like the WTO as a mechanism to prevent countries from putting up trade barriers etc.

If some countries want to collaborate on some CERN project they just... do that.

reply
JumpCrisscross
27 days ago
[-]
> If you want competition then you need something like the WTO as a mechanism to prevent countries from putting up trade barriers etc.

That's an enforcer. Unfortunately, nobody follows through with its sanctions, so it's devolved into a glorified opinion-providing body.

> If some countries want to collaborate on some CERN project they just... do that

CERN is about doing things, not not doing things. You can't CERN your way to nuclear non-proliferation.

reply
AnthonyMouse
27 days ago
[-]
> You can't CERN your way to nuclear non-proliferation.

Non-proliferation is: the US has nuclear weapons and doesn't want Iran to have them, so it is going to apply some kind of bribe or threat. It's not cooperative.

The better example here is climate change. Everyone has a direct individual benefit from burning carbon but it's to our collective detriment, so how do you get anyone to stop, especially the countries with large oil and coal reserves?

In theory you could punish countries that don't stop burning carbon, but that appears to be hard and in practice what's doing the most good is making solar cheaper than burning coal and making electric cars people actually want, politics of infamous electric car man notwithstanding.

So what does that look like for making AI "safe, secure and trustworthy"? Maybe something like publishing state of the art models for free with full documentation of how they were created, so that people aren't sending their sensitive data to questionable third parties who do who knows what with it or using models with secret biases.

reply
T-A
27 days ago
[-]
> Unfortunately, nobody follows through with its sanctions, so it's devolved into a glorified opinion-providing body.

That's a little misleading. What actually happened is summarized here:

https://en.wikipedia.org/wiki/Appellate_Body

Since 2019, when the Donald Trump administration blocked appointments to the body, the Appellate Body has been unable to enforce WTO rules and punish violators of WTO rules. Subsequently, disregard for trade rules has increased, leading to more trade protectionist measures. The Joe Biden administration has maintained Trump's freeze on new appointments.

reply
Henchman21
27 days ago
[-]
I’d nominate either the AGI people keep telling me is “right around the corner”, or the NHI that seem to keep popping up around nuclear installations.

Clearly humans aren’t able to do this task.

reply
zoogeny
27 days ago
[-]
I'm not certain of the balance myself. As a counterpoint, I was thinking of the band The Beatles, where the two songwriters (McCartney and Lennon) are seen in competition. There is a balance there between their competitiveness as songwriters and their cooperation in the band.

I think it is one-sided to see any situation where we want to retain balance as being significantly affected by one of the sides exclusively. If one believes that there is a balance to be maintained between cooperation and competition, I don't immediately default to believing that any perceived imbalance is due to one and not the other.

reply
pb7
27 days ago
[-]
Competition is as old as time. There are single celled organisms on your skin right now competing for resources to live. There is nothing more innate to life than this.
reply
sapphicsnail
27 days ago
[-]
Cooperation is as old as time. There are single celled organisms living symbiotically on your skin right now.
reply
XorNot
27 days ago
[-]
The mitochondria in my cells are also symbiotes, but that's just because whatever ancestor ate them found they were hard to digest.

The naturalistic fallacy is still a fallacy.

reply
gus_massa
27 days ago
[-]
The bacteria most closely related to mitochondria are intracellular parasites, so they were probably not eaten while roaming around peacefully; they are probably nasty parasites that got lazy.
reply
achierius
26 days ago
[-]
But humans aren't living in the "untamed wilds". We figured out that it's possible to cooperate, even in large numbers, many thousands of years BC. Since then we've been scaling up the level of cooperation. The last century provides many examples of successful cooperation even between states -- e.g. various nuclear test ban treaties. Why pretend that now of all times it's somehow impossible for us to cooperate?
reply
tmnvix
27 days ago
[-]
> You don't want to be stuck in a country that outlaws AI

Just as you don't want to be stuck in the only town that outlaws murder...

I am not a religious person, but I can see the value in promoting shared taboos. The question is, how do we do this in the modern world? We had some success with nuclear weapons. I don't think it's any coincidence that contemporary leaders (and possibly populations) seem to have forgotten how bloody dangerous they are and how utterly stupid it is to engage in brinkmanship with so much on the line.

reply
zoogeny
27 days ago
[-]
This is a good point, and it is the reason why communists argued that the only way communism could work is if it happened globally simultaneously. You don't want to be the only non-capitalist country in a world of capitalists. Of course, when the world-wide revolution didn't happen they were forced to change their tune and adjust.

As for nuclear weapons, I mean it does kind of suck in today's age to be a country without nuclear weapons, right? Like, certain well known countries would really like to have them so they wouldn't feel bullied by the ones that have them. So, I actually think that example works against you. And we very well may end up in a similar circumstance where a few countries get super powerful AGIs and then use their advantage to prevent any other country from getting it as well. Therefore my point stands: I don't want to be in one of the countries that doesn't get to be in that exclusive club.

reply
achierius
26 days ago
[-]
Frankly, in the event of nuclear war, I'd rather be in a country that doesn't have nuclear weapons than in one that does. Australia and New Zealand will probably come out of such a scenario ~fine; India and Pakistan will not, the US and Russia will not, neither China, nor France, nor the UK will either. For Nigeria (e.g.) to build nuclear weapons today would certainly give them some level of international sway (though it could also result in the destruction of their economy thanks to international sanctions), but it would also put them on the map where they had not been before.
reply
latexr
27 days ago
[-]
> if that means you will be uncompetitive in the new emerging world. (…) There is a difference between being careful and being fearful.

I’m so sick of that word. “You need to be competitive”, “you need to innovate”. Bullshit. You want to talk about fear? “Competitiveness” and “innovation” are the words the unscrupulous people at the top use to instil fear in everyone else and run rampant. They’re not being competitive or innovative, they’re sucking you dry of as much value as they can. We all need to take a breath. Stop and think for a moment. You can literally eat food which grows from the ground and make a shelter with a handful of planks and nails. Humanity survived and thrived before all this unfettered consumption; we don’t need to kill ourselves for more.

https://www.newyorker.com/cartoon/a16995

reply
zoogeny
27 days ago
[-]
I live in a ruralish area. There is a lot of forested area, and due to economic depression there are a lot of people living in the woods. Most live in tents, but some actually cut down the trees and turn them into makeshift shacks, using planks and nails like you suggest. They often drag propane burners into the woods, which leads to fires. Perhaps this is what you mean?

In reality, most people will continue to live the modern life where there are doctors, accountants, veterinarians, mechanics. We'll continue to enjoy food distribution and grocery stores. We'll all hope that North America gets its act together and build high speed rail so we can travel comfortably for long distances.

There was a time Canada was a big exporter of engineering technology. From mining to agriculture, satellites, and nuclear technology. I want Canada to be competitive in these ways, not making makeshift shacks out of planks and nails for junkies that have given up on life and live in the woods.

reply
latexr
27 days ago
[-]
> They often drag propane burners into the woods which often leads to fires. Perhaps this is what you mean?

I believe you very well know it’s not, and are transparently arguing in bad faith.

> shacks (…) for junkies that have given up on life

The insults you’ve chosen are quite telling. Not everyone living in a way you disapprove of is an automatic junky.

reply
roenxi
27 days ago
[-]
> I believe you very well know it’s not, and are transparently arguing in bad faith.

That is actually what you are talking about; "uncompetitive" looks like something in the real world. There isn't an abstract dial that someone twiddles to set the efficiency of two otherwise identical outcomes - the competitive one will typically look more advanced and competently organised in observable ways.

To live in nice houses and have good food requires a competitive economy. The uncompetitive version was literally living in the forest with some meagre shelter and maybe having a wood fire to cook food (that was probably going to make someone very sick). The reason the word "competitive" turns up so much is that people living in a competitive society get to have a more comfortable lifestyle. People literally starve to death if the food system isn't run with a competitive system that tends towards efficiency; that experiment has been run far too many times.

reply
I-M-S
27 days ago
[-]
What the experiment has repeatedly shown is that people living in non-competitive systems starve to death when they get in the way of a system that has been optimized solely for ruthless economic efficiency.
reply
roenxi
27 days ago
[-]
The big one that leaps to mind was the famines with the communist experiments in the 20th century. But there are other, smaller examples that crop up disturbingly regularly. Sri Lanka's fertiliser ban was a jaw-dropper; Zimbabwe redistributing land away from whites was also interesting. There are probably a lot more though, messing with food logistics on the theory there are more important things than producing lots of food seems to be one of those things countries do from time to time.

People can argue about the moral and ideological sanity of these things, but the fact is that tolerating economic inefficiencies in the food system can quickly lead to there not being enough food.

reply
I-M-S
27 days ago
[-]
The big ones that leapt to my mind were the Great Irish Famine, during which food exports to Great Britain exceeded food imports; the Bengal famine (the Brits again); and the starvation of Native Americans through targeted eradication of the bison.
reply
zoogeny
27 days ago
[-]
You stated one ludicrous extreme (food comes out of the ground! shelter is planks and nails!) and I stated another ludicrous extreme. You can make my position look simplistic and I can make your position look simplistic. You can't then cry foul.

You are also assuming, in bad faith, an "all" where I did not place one. It is an undeniable fact with evidence beyond any reasonable doubt, including police reports and documented studies by the district, that the makeshift shacks in the rural woods near my house are made by drug addicts that are eschewing the readily available social housing for the specific reason that they can't go to that housing due to its explicit restrictions on drug use.

reply
latexr
27 days ago
[-]
> ludicrous extreme

I don’t understand this. Are you not familiar with farming and houses? You know humans grow plants to eat (including in backyards and balconies in cities) and make cabins, chalets, houses, entire neighbourhoods (Sweden is currently planning the largest) with wood, right?

reply
zoogeny
27 days ago
[-]
You are making a caricature of modern lifestyle farming, not an argument for people literally living as they did in the past. Going to your local garden center and buying some seedlings and putting them on your balcony isn't demonstrative of a life like our ancestors lived. Living in one of the wealthiest countries to ever have existed and going to the hardware store to buy expensive hardwoods to decorate your house isn't the same as living as our ancestors did.

You don't realize the luxury you have and for some reason you assume that it is possible without that wealth. The reality of that lifestyle without tremendous wealth is more like subsistence farming in Africa and less like Swedish planned neighborhoods.

reply
latexr
27 days ago
[-]
> (…) not an argument for people literally living as they did in the past. (…) isn't demonstrative of a life like our ancestors lived. (…) isn't the same as living as our ancestors did.

Correct. Nowhere did I defend or make an appeal to live life “as they did in the past” or “like our ancestor did”. We should (and don’t really have a choice but to) live forward, not backward. We should take the good things we learned and apply them positively to our lives in the present and future, and not strive for change and consumption for their own sakes.

reply
zoogeny
27 days ago
[-]
You said: "Humanity survived and thrived before all this unfettered consumption, we don’t need to kill ourselves for more."

Your juxtaposition of this claim with your point about growing seeds and nailing together planks doesn't pass my personal test of credibility. You say: "Stop and think for a moment. You can literally eat food which grows from the ground and make a shelter with a handful of planks and nails." but that isn't indicative of a thriving life, as I demonstrated. You can do both of those things and still live in squalor, a condition I wouldn't wish on my worst enemy.

You then suggest that I don't understand farming or house construction to defend that point, as if the existence of backyard gardens or wood cabins proves the point that a modern comfortable life is possible with gardens and wood cabins. My point is that the wealth we have makes balcony gardens and wood cabins possible and you are reasoning backwards. To be clear, we get to enjoy the modern luxury of backyard gardens and wood cabins by being wealthy and we don't get to be wealthy by making backyard gardens and wood cabins.

> We should take the good things we learned and apply them positively to our lives in the present and future

Sure, and I can argue that competitiveness could be a lesson we have learned that can be applied positively, the way it is used in team sports and many other aspects of society.

reply
Henchman21
27 days ago
[-]
You, too, should read this and maybe try to take it to heart:

https://crimethinc.com/2018/09/03/the-mythology-of-work-eigh...

reply
JumpCrisscross
27 days ago
[-]
> “Competitiveness” and “innovation” are the words the unscrupulous people at the top use to instil fear on everyone else and run rampant

If a society is okay accepting a lower standard of living and sovereign subservience, then sure, competition doesn't matter. But if America and China have AI and nukes and Europe doesn't, one side gets to call the shots and the other has to listen.

reply
latexr
27 days ago
[-]
> a lower standard of living

We better start really defining what that means, because it has become quite clear that all this “progress” is not leading to better lives. We’re literally going to kill ourselves with climate change.

> AI and nukes

Those two things aren’t remotely comparable.

reply
JumpCrisscross
27 days ago
[-]
> it has become quite clear that all this “progress” is not leading to better lives

How do you think the average person under 50 would poll on being teleported to the 1950s? No phones, no internet, jet travel is only for the elite, oh nuclear war and MAD are new cultural concepts, yippee, and fuck you if you're black because the civil rights acts are still a decade out.

> two things aren’t remotely comparable

I'm assuming no AGI, just massive economic efficiencies. In that sense, nuclear weapons give strategic autonomy through military coercion and the ability to grant a security umbrella, which fosters e.g. trade ties. In the same way, the wealth from an AI-boosted economy fosters similar trade ties (and creates similar costs for disengaging). America doesn't influence Europe by threatening to nuke it, but by threatening not to nuke its enemies.

reply
encipriano
27 days ago
[-]
There's no objective definition of what progress even means, so the guy is kinda right. We live in a postmodernist society where it's not easy to find meaningfulness. All these debates have been had by philosophers like Nietzsche and Hegel. The media and society shape our understanding of what's popular, progressive, and utilitarian, and the importance we attach to those things.
reply
latexr
27 days ago
[-]
> on being teleported to the 1950s?

That’s not the argument. At all. I argued we should rethink our attitude of unfettered consumption so we don’t continue on a path which is provably leading to destruction and death, and your take is going back in time to nuclear war and overt racism. That is frankly insane. I’m not fetishising “the old days”, I’m saying this attitude of “more more more” does not automatically translate to “better”.

reply
JumpCrisscross
27 days ago
[-]
You said "all this 'progress' is not leading to better lives." That implies lives were better or at least as good before "all this 'progress'."

If you say Room A is not better than Room B, then you should be, at the very least, indifferent to swapping between them. If you're against it, then Room A is better than Room B. Our lives are better--civically, militarily and materially--than they were before. Complaining about unfettered consumerism by falsely claiming our lives are worse today than they were before doesn't support your argument. (It's further undercut by the falling material and energy intensity of GDP in the rich world. We're able to produce more value for less input resource-wise.)

reply
latexr
27 days ago
[-]
> You said "all this 'progress' is not leading to better lives." That implies lives were better or at least as good before "all this 'progress'."

No. There is a reason I put the word in quotes. We are on a thread, the conversation follows from what came before. My original post was explicit about words used to bullshit us. I was specifically referring to what the “unscrupulous people at the top” call “progress”, which doesn’t truly progress humanity or enhances the lives of most people, only theirs.

reply
vladms
27 days ago
[-]
There are many people claiming many things. Not sure which "top" you are referring to, but everybody at the end of a chain (richest, most politically powerful, most popular) is generally selected for being unscrupulous. So not sure why you should ever trust what they say... If you agree, just ignore most of what they say and find other people to listen to for interesting things.

To give a tech example, not many people were listening to Stallman and Linus and they still managed to change a lot for the better.

reply
layer8
27 days ago
[-]
To be honest, the 1950s become more appealing by the year.
reply
I-M-S
27 days ago
[-]
I'd like to see a poll if the average person would like to be teleported 75 years into the future to 2100.
reply
yibg
27 days ago
[-]
When does that competitiveness and innovation stop, though? If they had stopped 100 years ago, where would we be today as a species, and would that be better or worse? How about 1000 years ago?

We face issues (like we always have), but I'd argue quite strongly that the competitiveness in our history and drive to invent and innovate has led to where we are today and it's a good thing.

reply
achierius
26 days ago
[-]
"one hand behind our back"? We're talking about who's going to be the first to build the thing that might kill all of humanity. Or, even in many of the happier scenarios, the thing which will impoverish and immiserate the vast majority of the population, rendering them permanently subject to the whims of the capital-owning few.

Why is it "our" back? The people who will own these machines do not consider you one of them. The people leading the countries that will use these machines to kill each other's civilians do not consider you one of them. You have far more in common with a Chinese worker than you do with Sam Altman or Jeff Bezos.

And frankly? I think choosing a (say, conservatively, just going off of the estimates Altman and Amodei have made in the past) 20% chance of killing everyone as our first resort is just morally unacceptable. If the US made an effort to halt research and China still kept at it, sure, I won't complain I suppose, but we haven't, and pretending that China is the problem when it's our labs pushing the edge on capabilities -- it's just comedic.

reply
yibg
27 days ago
[-]
This is true for all new technology of significant potential impact, right? Similar discussions were had about nuclear technology, I'm sure.

The reality is, with increased access to information and an accelerated pace of discovery in various fields, we'll come across things that have the potential for great harm. Be it AI, genetic engineering causing a plague, nuclear fallout, etc. We don't necessarily know what the harms / benefits are all going to be ahead of time, so we only really have 2 choices:

1. try to stop / slow down such advances. Not sure this is even possible in the long run

2. try to get a good grasp of potential dangers and figure out ways to mitigate / control them

reply
orangebread
26 days ago
[-]
I think the core of what people are scared of is fear itself. Or, put more eloquently by some dead guy: "The only thing we have to fear is fear itself."

If we don't want to live in a world where these incredibly powerful technologies are leveraged for nefarious purposes, there needs to be emotional maturity and growth amongst humanity. Those who manage that growth need to hold the irresponsible ones accountable (with empathy).

The promise of AI is that these incredibly powerful technologies will be disseminated to the masses. OpenAI knows this is the next step, and it's why they're trying to keep a grip on their market share. With the advent of Nvidia's Project Digits and powerful open-source models like DeepSeek, it's very clear how this trajectory will go.

Just wanted to add some of this to the convo. Cheers.

reply
throwaway9980
27 days ago
[-]
Everything you are describing sounds like the phenomenon of government in the United States. If we replace a human powered bureaucracy with a technofeudalist dystopia it will feel the same, only faster.

We are upgrading the gears that turn the grist mill. Stupid, incoherent, faster.

reply
idiotsecant
27 days ago
[-]
Let's say we decide, today, that we want to prevent an AI armageddon that we assume is coming.

How do you do that?

reply
logicchains
26 days ago
[-]
The biggest problem with AI is people with poor understanding of computer science developing an almost religious belief that increasing vaguely defined "intelligence" will somehow translate into godlike power. There's actually a field devoted to the rigorous study of what "intelligence" can achieve, called complexity theory, and it makes it clear that many of the problems that AI cultists expect "superintelligence" to solve (problems it'd need to solve to be "godlike") are not tractable even if every atom in the observable universe was combined into a giant computer.
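
A back-of-envelope sketch of that intractability point, in Python (the ops/second figure is a deliberately generous assumption, far beyond all hardware on Earth combined):

  # Brute-forcing an exponential search space doesn't yield to "more
  # intelligence" or more hardware: the numbers outrun the universe.
  ops_per_second = 1e26       # assumed: wildly more than all computers on Earth
  seconds_per_year = 3.15e7

  for n in (64, 128, 256):
      states = 2 ** n         # size of the search space
      years = states / ops_per_second / seconds_per_year
      print(f"n={n}: ~{years:.1e} years")

  # n=256 alone gives ~3.7e43 years; the universe is ~1.4e10 years old.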
reply
deadbabe
27 days ago
[-]
Anyone born in the next few decades will disagree with you. They will find this new world comfortable and rich with content. They will never understand what your problem is.
reply
throwup238
27 days ago
[-]

  I've come up with a set of rules that describe our reactions to technologies:
  1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
  2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
  3. Anything invented after you're thirty-five is against the natural order of things.
  - Douglas Adams
reply
Telemakhos
27 days ago
[-]
> They will find this new world comfortable and rich with content.

I agree with the first half: comfort has clearly increased over time since the Industrial Revolution. I'm not so sure the abundance of "content" will be enriching to the masses, however. "Content" is neither literature nor art but a vehicle or excuse for advertising, as pre-AI television demonstrated. AI content will be pushed on the many as a substitute for art, literature, music, and culture in order to deliver advertising and propaganda to them, but it will not enrich them as art, literature, music, and culture would: it might enrich the people running advertising businesses. Let us not forget that many of the big names in AI now, like X (Grok) and Google (Gemini), are advertising agencies first and foremost, who happen to use tech.

reply
psytrancefan
27 days ago
[-]
You don't know this though with even a high probability.

It is quite possible there is a cultural reaction against AI and that we enter a new human cultural golden age of human created art, music, literature, etc.

I would actually bet on this: as engineering skills become automated, what will be valuable in the future is human creativity. What has value then will influence culture more and more.

What you are describing is how the future would look if it were based on current culture, but it's a good bet the future will not be that.

reply
mitthrowaway2
27 days ago
[-]
I'm not so sure. My parents were born well after the hydrogen bomb was developed, and they were never comfortable with it.
reply
JumpCrisscross
27 days ago
[-]
> My parents were born well after the hydrogen bomb was developed, and they were never comfortable with it

The nuclear peace is hard to pin down. But given the history of the 20th century, I find it difficult to imagine we wouldn't have seen WWIII in Europe and Asia without the nuclear deterrent. Also, while your parents may have been uncomfortable with the hydrogen bomb, the post-90s world hasn't particularly been characterised by mass nuclear anxiety. (Possibly to a fault.)

reply
h0l0cube
27 days ago
[-]
You might have missed the cold war in your summary. Mass nuclear anxiety really characterized that era, with a number of near misses that could have ended in global annihilation (and that’s no exaggeration).

IMO, the Atoms for Peace propaganda undersells how successful globalization has been at keeping nations from destroying each other by creating codependence on complex supply chains. The new shift to protectionism may see an end to that.

reply
int_19h
27 days ago
[-]
The supply chain argument was also made wrt European countries just before WW1. It wasn't even wrong - economically, it was as devastating as predicted for everyone involved, with no real winners - but that didn't preclude the war.
reply
h0l0cube
27 days ago
[-]
The scale of globalization post-WW2 puts it on a whole other level. The complexity of supply chains now is such that any country would grind to a halt without imports. The exception here, to some degree, is China, but so far they've been more interested in soft power than military, and that strategy has served them well. Though it seems the US is gearing up for a fight with a fully domestic manufacturing capability and natural resource pools of its own. It would require consistent protectionist policy over multiple administrations to pull something like that off, so it remains to be seen if that's truly possible.
reply
megous
27 days ago
[-]
Yeah, let's just ignore all the wars and genocides that nuclear powers have engaged in and supported, all the nuclear powers that have been constantly at war or occupying others since they came into existence, and the millions of dead and affected people.

Nice "peace".

We had 100 years of that kind of peace among the major European powers before nuclear weapons. We're not even 80 years into the nuclear age this time, and a nuclear-armed power is already attacking from the east, and from inside via new media.

I wouldn't call it done and clear, this "nuclear age peace".

reply
bluGill
27 days ago
[-]
There are always a few things that people don't like. However your parents likely are comfortable with a lot of things that their parents were not.
reply
buzzerbetrayed
27 days ago
[-]
Exceptions to rules exist, especially if you're trying to think of really extreme cases that specifically invalidate it.

However, that really doesn’t invalidate the rule.

reply
mitthrowaway2
27 days ago
[-]
That's true, but I think AI may be enough of a disruption to qualify. We'll of course have to wait and see what the next generation thinks, but they might end up envious of us, looking back with rose-tinted glasses on a simpler time when people could trust photographic evidence from around the world, and interact with each other anonymously online without wondering if they were talking to an astroturf advertising bot.
reply
stackedinserter
27 days ago
[-]
Would they prefer that only the USSR had an H-bomb, but not the USA?
reply
mitthrowaway2
27 days ago
[-]
I don't think that's the nature of the argument that I was responding to.
reply
stackedinserter
27 days ago
[-]
So what? Would they?
reply
mitthrowaway2
27 days ago
[-]
Nuclear arms races are a form of multipolar trap, and like any multipolar trap, you are compelled to keep up, making your own life worse, even while wishing that you and your opponent could cooperatively escape the trap.

The discussion I was responding to is whether the next generation would grow up seeing pervasive AI as a normal and good thing, as is often the case with new technology. I cited nuclear weapons as a counterexample, while I agree that nobody felt that they had a choice but to keep up with them.

AI could similarly be a multipolar trap ("nobody likes it but we aren't going to accept an AI gap with Russia!"), which would mean it has that in common with nuclear weapons, strengthening the argument against the next generation being comfortable with AI.
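
For readers who haven't met the term: a multipolar trap is essentially an n-player prisoner's dilemma. A toy payoff table (numbers invented purely for illustration) shows why both sides race even though both would prefer mutual restraint:

  # Payoffs (us, them) for a two-player arms race; higher is better.
  # The numbers are made up, but the structure is the classic dilemma.
  payoffs = {
      ("restrain", "restrain"): (3, 3),  # mutual restraint: best joint outcome
      ("restrain", "race"):     (0, 4),  # unilateral restraint: worst case
      ("race",     "restrain"): (4, 0),
      ("race",     "race"):     (1, 1),  # the trap: both race, both worse off
  }

  # Whatever "they" do, "we" score higher by racing, so racing dominates.
  for their_move in ("restrain", "race"):
      best = max(("restrain", "race"),
                 key=lambda us: payoffs[(us, their_move)][0])
      print(f"if they {their_move}, our best response is to {best}")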

reply
stackedinserter
27 days ago
[-]
You don't need that many warheads to saturate your military needs. The number of possible targets is limited; older plans involved clearly absurd overkill, with several nukes assigned to a single target.

Also, nukes don't write code or wash your dishes; they're nothing but a liability for a society.

reply
bobthepanda
27 days ago
[-]
Do two wrongs make a right?
reply
xp84
27 days ago
[-]
That's not the point. GP is pointing out how we only control (at least theoretically, lol) our own government, and basic game theory can tell you that countries that adopt pacifist ideas and refuse to pursue anything that might be dangerous will always, at some point, be easily defeated by others who are less moral.

The point is that it's complicated, it's not a black and white sound bite like the people who are "against nuclear weapons" pretend it is.

reply
bobthepanda
27 days ago
[-]
And people don't have to feel comfortable with complicated things. The GP posted "would you prefer" as a disingenuous point to invalidate the commenter's parents' feelings.

I eat meat. I know some vegans feel uncomfortable with that. But personally I feel secure in my own convictions that I don't need to run around insinuating vegans are less than or whatever.

reply
stackedinserter
27 days ago
[-]
Your survival doesn't depend on the amount of meat you consume.
reply
bobthepanda
27 days ago
[-]
Your survival also doesn’t depend on somebody’s discomfort or comfort with nuclear weapons, either. What’s the point of thought policing?
reply
stackedinserter
27 days ago
[-]
With enough anti-military, anti-nuclear, anti-whatever-looks-scary-to-them people, we'll stand with our pants down, just like the EU or Canada these days. There was a lot of that kind of activism during the Cold War; lucky for the US, there weren't enough "discomforted" people back then.
reply
Der_Einzige
27 days ago
[-]
Three rights make a left.
reply
sharemywin
27 days ago
[-]
I guess you're right; here's how it happens:

Alignment Failure → Shifting Expectations: People get used to AI systems making “weird” or harmful choices, rationalizing them as inevitable trade-offs. Framing failures as “technical glitches” rather than systemic issues makes them seem normal.

Runaway Optimization → Justifying Unintended Consequences: AI’s extreme efficiency is framed as progress, even if it causes harm. Negative outcomes are blamed on “bad inputs” rather than the AI itself.

Bias Amplification → Cultural Reinforcement: AI bias gets baked into everyday systems (hiring, policing, loans), making discrimination seem “objective.” “That’s just how the system works” thinking replaces scrutiny.

Manipulation & Deception → AI as a Trusted Guide: People become dependent on AI suggestions without questioning them. AI-generated narratives shape public opinion, making manipulation invisible.

Security Vulnerabilities → Expectation of Insecurity: Constant cyberattacks and AI hacks become “normal” like data breaches today. People feel powerless to push back, accepting insecurity as a fact of life.

Autonomous Warfare → AI as an Inevitable Combatant: AI-driven warfare is seen as more “efficient” and “precise,” making human involvement seem outdated. Ethical debates fade as AI soldiers become routine.

Loss of Human Oversight → AI as Authority: AI decision-making becomes so complex that people stop questioning it. “The AI knows best” becomes a cultural default.

Economic Disruption → UBI & Gig Economy Normalization: Mass job displacement is met with new economic models (UBI, gig work, AI-driven welfare), making it feel inevitable. People adjust to a world where traditional employment is rare.

Deepfakes & Misinformation → Truth Becomes Fluid: Reality becomes subjective as deepfakes blur the line between real and fake. People rely on AI to “verify” truth, giving AI control over perception.

Power Concentration → AI as a Ruling Class: AI governance is framed as more rational than human leadership. Dissent is dismissed as “anti-progress,” consolidating control under AI-driven elites.

reply
sharemywin
27 days ago
[-]
In fact we don't even need UBI either:

"Lack of Adaptability"

AI advocates argue that those who lose jobs simply failed to "upskill" in time. The burden is placed on workers to constantly retrain, even if AI advancement outpaces human ability to keep up. Companies and governments say, “The opportunities are there; people just aren’t taking them.”

"Work Ethic Problem"

The unemployed are labeled as lazy or unwilling to compete with AI. Hustle culture promotes side gigs and AI-powered freelancing as the “new normal.” Welfare programs are reduced because “if AI can generate income, why can’t you?”

"Personal Responsibility for Economic Struggles"

The unemployed are blamed for not investing in AI tools early. The success of AI-powered entrepreneurs is highlighted to imply that struggling workers "chose" not to adapt. People are told they should have saved more or planned for disruption, even though AI advancements were unpredictable.

"It’s a Meritocracy"

AI-driven success stories (few and exceptional) are amplified to suggest anyone could thrive. Struggling workers are seen as having made poor choices rather than being victims of automation. The idea of a “deserving poor” is reinforced—those who struggle are framed as not working hard enough.

"Blame the Boomers / Millennials / Gen Z"

Economic shifts are framed as generational failures rather than AI-driven. Older workers are told they refused to adapt, while younger ones are blamed for entitlement or lack of work ethic. Culture wars distract from AI’s role in job losses.

"AI is a Tool, Not the Problem"

AI is framed as neutral—any negative consequences are blamed on how people use it. “AI doesn’t take jobs; people mismanage it.” Job losses are blamed on bad government policies, corporate greed, or individual failure rather than automation itself.

"The AI Economy Is Full of Opportunity"

Gig work and AI-driven side hustles are framed as liberating, even if they offer no stability. Traditional employment is portrayed as outdated, making complaints about job loss seem like resistance to progress. Those struggling are told to “embrace the new economy” rather than question its fairness.

reply
int_19h
27 days ago
[-]
You can only do so much with agitprop. At the end of the day, if, say, 60% of the population has no income without a job and no hope of getting said job, they are not going to quietly starve to death, no matter the justification offered.
reply
sharemywin
27 days ago
[-]
You just carve out "us" and "them" circles, then make the circles smaller and smaller.

Look at the push right now in the US against "corrupt" foreign aid; the mass deportations seem like the first step.

reply
vladms
27 days ago
[-]
Historically, humanity evolved faster when it was interacting. Groups can try to isolate themselves, but in the long run that will make them lag behind.

The US benefited a lot from smart people moving there (even more during WWII). If people start believing (correctly or incorrectly) that they would be better off somewhere else, it will not benefit the US.

reply
int_19h
27 days ago
[-]
Thing is, if there's too many of "them", they will eventually come for "us" with torches and pitchforks. You can victimize a large part of the population like that, but not a supermajority of it.
reply
the_duke
27 days ago
[-]
Let's talk again after AI causes massive unemployment and social upheaval for a few decades, until we find some new societal model to make things work.

This is inevitable in my view.

AI will replace a lot of white collar jobs relatively soon, years or decades.

And blue collar isn't too far behind, since a major limiting factor for automation is general purpose robots being able to act in a dynamic environment, for which we need "world models".

reply
mouse_
27 days ago
[-]
What makes you think that? That's what the last generations said about us and it turned out to not be true.
reply
hcurtiss
27 days ago
[-]
Relative to them, we most certainly are. By every objective metric, humanity has flourished in "the last generations." I get it that people are stressed today -- people have always been stressed. It is, in a sense, fundamental to the human condition.
reply
jmcgough
27 days ago
[-]
Easy for you to say that. The political party running this country ran on a platform of the eradication of me and my friends. I can't legally/safely use public restrooms in several states, including some which have paid bounties for reporting. Things will continue to improve for the wealthy and powerful, but in a lot of ways have become worse for the poor and vulnerable.

When I was a kid, there was this grand utopian ideal for the internet. Now it's fragmented, locked in walled gardens where people are psychologically abused for advertising dollars. AI could be a force for good, but Google has already ended its ban on use in weapons and is selling it to the IAF, and Palantir is busy finding ways to use it for surveillance.

reply
int_19h
27 days ago
[-]
A reminder that it's only been 22 years since sodomy laws were declared unconstitutional in the US in the first place.
reply
gecko6
27 days ago
[-]
And it was 1971 when the last chemical castration as a 'treatment' for homosexuality was performed in the US.
reply
mozvalentin
26 days ago
[-]
The next season of Black Mirror is just going to be international news coverage.
reply
tim333
27 days ago
[-]
> living in the next few decades driven by technology acceleration will feel like being lobotomized while conscious and watching oneself the whole time

Seems a bit negative. I think it'll be cool.

reply
sharpshadow
25 days ago
[-]
Wait where are those dangerous AI outcomes? Did we manage to prevent them or did they never occur?
reply
InDubioProRubio
26 days ago
[-]
Lucky us, we live in times where single individuals and states will soon be able to veto civilization via nukes.
reply
Gud
27 days ago
[-]
I wish your post wasn’t so accurate.

Yet, I can’t help but be hopeful about the future. We have to be, right?

reply
casey2
26 days ago
[-]
Unfalsifiable pseudophilosophy shouldn't be mistaken for science or legislative advice. I don't care what your cult thinks; religion and government should stay separate.
reply
jowea
27 days ago
[-]
I think that international competition is one of the greatest guarantees that trying to stand athwart history and yelling stop never works in the long term.
reply
debbiedowner
27 days ago
[-]
Which books?
reply
taurknaut
26 days ago
[-]
The most likely catastrophe remains giving capital outsized influence on our society. It's the easiest to imagine, and the idea of a capitalist making a money-making machine that can actually think for itself and wield actual power feels very difficult to imagine. (Granted, maybe Musk himself really is that dumb. Inshallah, I guess.) Humans are easy to manipulate and most can just be bought with sufficient money. The last thing the super wealthy want is to rely on software that has individual agency outside the will of the owner. Meanwhile, the sort of destruction this will cause is already happening around us in the form of a highly financially insecure populace, supply chain instability, climate change, automated bombings of "terrorists", "smart" fences to keep out criminals (let's just ignore the fact that you're more likely to get murdered by your citizen neighbor), the reduction of journalism to either propaganda or atomized hand-wringing about mental health and individual agency, and a kafkaesque system of algorithmically priced rents for every sector of life. Is the algorithm "a reasonable value to both the consumer and producer"? No, it will be "how much blood can I squeeze from this peasant". Hell, Kroger is already playing around with dynamic-pricing-via-facial-recognition at checkout.

I always thought skynet was a great metaphor for the market, a violent and inhuman thing that we created that dominates our lives and dictates the terms of our day to day life and magically thinks for itself and threatens the very future of this planet, our species, and our loved ones, and is somehow out of popular control. Not actual commentary on a realistic scenario about the dangers of ai. Sometimes these metaphors work out great and Terminator is a great example. Maybe the AI we've been fearing is already here.

I think for the most part the enshittification of everything will just accelerate and it'll be pretty obvious who benefits and who doesn't.

reply
Ray20
26 days ago
[-]
>The most likely catastrophe remains giving capital outsized influence on our society.

No, in this regard capital is ABSOLUTELY harmless. I mean, if capital got outsized influence on our society, in the WORST case it would turn into a government. And we already have one of those.

reply
Nasrudith
27 days ago
[-]
I'm sorry, but when has it ever been the case that you could just say "no" to the world developing a new technology? You might as well say we can prevent climate change by just saying no to the outcome!
reply
estebank
27 days ago
[-]
We no longer use asbestos as a flame retardant in houses.

We no longer use chemicals harmful to the ozone layer on spray cans.

We no longer use lead in gasoline.

We figured those things were bad, and changed what we did. If evidence is available ahead of time that something is harmful, it shouldn't be controversial to avoid widespread adoption.

reply
bombcar
27 days ago
[-]
None of those things were said "no" to before they were used in a widespread manner.

The closest might be nuclear power, we know we can do it, we did it, but lots of places said no to it, and further developments have vastly slowed down.

reply
estebank
27 days ago
[-]
In none of those cases did we know about the adverse effects beforehand. Those were observed afterwards, and it would have taken longer to learn of them if the technologies hadn't been adopted. But that doesn't invalidate the idea that we have followed "if something is bad, collectively stop using it" at various points in time.
reply
Aloisius
27 days ago
[-]
We were well aware of the adverse effects of tetraethyl lead before leaded gasoline was first sold.

The man who invented it got lead poisoning during its development, multiple people died of lead poisoning in a pilot plant manufacturing it, and public health and medical authorities warned against it before it went on sale to the general public.

reply
rat87
27 days ago
[-]
And for nuclear power many would say that rejecting it was a huge mistake
reply
josefritzishere
27 days ago
[-]
I don't think it is safe to assume the use patterns of tangible things extend to intangible things; nor the patterns of goods to that of services. I just see this as a conclusory leap.
reply
estebank
27 days ago
[-]
I was replying to

> when the has it ever been the case that you can just say "no" to the world developing a new technology?

reply
jpkw
27 days ago
[-]
In each of those examples, we said "no" decades after they were developed, and many had to suffer in order for us to get to the stage of saying "no".
reply
tw1984
26 days ago
[-]
[flagged]
reply
rurp
27 days ago
[-]
This happens in many ways with potentially catastrophic tech. There are many formal agreements and strong norms against building ever more lethal nuclear arsenals or pursuing existentially dangerous gain-of-function research. The current system is far from perfect (the world could literally be destroyed today based on the actions of a handful of people), but it's the best we have come up with so far.

If we as a society keep developing potential existential threats to ourselves without mitigating them then we are destined for disaster eventually.

reply
realce
27 days ago
[-]
John C Lilly had a concept called the "bad program" that was like an internal, natural, subconscious antithetical force that lives in us all. It seduces or lures the individual into harming themselves one way or another - in his case it "tricked" him into taking a vitamin injection improperly, leading to a stroke, even though he knew how to administer the shot expertly.

At some level, there's a disaster-seeking function inside us all acting as an evolutionary propellant.

You might make an argument that "AI" is an evolutionary embodiment of our conscious minds that's designed to escape these more subconscious trappings.

reply
timewizard
27 days ago
[-]
People like to pretend that AGI isn't going to cost money to run. The power budget alone is something no one is contemplating.

Technology doesn't accelerate endlessly. Only our transistor spacing does. These two are not the same thing.

reply
bigbones
27 days ago
[-]
More efficient hardware mappings will happen, and as a sibling comment says, power requirements will drop like a rock. Check out https://www.youtube.com/watch?v=7hz4cs-hGew for some idea of what that might eventually look like
reply
WillPostForFood
27 days ago
[-]
> The power budget alone is something no one is contemplating.

It is very hard to find a discussion about the growth and development of AI that doesn't discuss the issues around power budget.

https://www.datacenterknowledge.com/energy-power-supply/whit...

https://bidenwhitehouse.archives.gov/briefing-room/president...

> In building domestic AI infrastructure, our Nation will also advance its leadership in the clean energy technologies needed to power the future economy, including geothermal, solar, wind, and nuclear energy; foster a vibrant, competitive, and open technology ecosystem in the United States, in which small companies can compete alongside large ones; maintain low consumer electricity prices; and help ensure that the development of AI infrastructure benefits the workers building it and communities near it.

reply
dr_dshiv
27 days ago
[-]
Power budget will drop like a rock over time.

Exponential increases in cost (and power) for next-level AI, and exponential decreases in the cost (and power) of current-level AI.

reply
snickerbockers
27 days ago
[-]
AI isn't like nuclear fission. You can't remotely detect that somebody is training an AI. It's far too late to sequester all the information related to AI like what was done with uranium enrichment. The equipment needed to train AI is cheap and ubiquitous.

These "safety declarations" are toothless and impossible to enforce. You can't stop AI, you need to adapt. Video and pictures will soon have no evidentiary value. Real life relationships must be valued over online relationships because you know the other person is real. It's unfortunate, but nothing AI is "disrupting" existed 200 years ago and people will learn to adapt like they always have.

To quote the fictional comic book villain Toyo Harada, "none of you can stop me. Not any one of you individually nor the whole of you collectively."

reply
pjc50
27 days ago
[-]
> Video and pictures will soon have no evidentiary value.

I think we may eventually get camera authentication as a result of this, probably legally enforced in the same way and for similar reasons as Japan enforced that digital camera shutters have to make a noise.

> but nothing AI is "disrupting" existed 200 years ago

200 years ago there were about 1 billion people on earth; now there are about 8 billion. Anarchoprimitivists and degrowth people make a similar handwave about the advances of the last 200 years, but they're important to holding up the systems which keep a lot of people alive.

reply
snickerbockers
27 days ago
[-]
> I think we may eventually get camera authentication as a result of this, probably legally enforced in the same way and for similar reasons as Japan enforced that digital camera shutters have to make a noise.

Maybe, but I'm not bullish on cryptology having a solution to this problem. Every consumer device that's interesting enough to be worth hacking gets hacked within a few years. Even if nobody ever steals the key there will inevitably be side-channel attacks to feed external pictures into the camera that it thinks are coming from its own sensors.

And then there's the problem of the US government, which is known to strongarm CAs into signing fraudulent certificates.

> 200 years ago there were about 1 billion people on earth; now there are about 8 billion. Anarchoprimitivists and degrowth people make a similar handwave about the advances of the last 200 years, but they're important to holding up the systems which keep a lot of people alive.

I think that's a good argument against the Kaczynski-ites, but I was primarily speaking towards concerns such as 'misinformation' and machines pushing humans out of jobs. We're still going to have food, medicine, and shelter. AI can't take that away; the only concern is adapting our society so that we can either feed significant populations of unproductive people, or move those people into whatever jobs machines can't do yet.

We might be teetering on the edge of a dystopian techno-feudalism where a significant portion of the population languishes in slums because industry has no use for them, but that's why I said we need to adapt. There has always been something that has the potential to destroy civilization in the near future, but if you're reading this post then your ancestors weren't the ones that failed to adapt.

reply
ben_w
27 days ago
[-]
> Maybe, but I'm not bullish on cryptology having a solution to this problem. Every consumer device that's interesting enough to be worth hacking gets hacked within a few years. Even if nobody ever steals the key there will inevitably be side-channel attacks to feed external pictures into the camera that it thinks are coming from its own sensors.

Or the front-door analog route, point a real camera at a screen showing fake images.

That said, lots of people are incompetent at forging, about knowing what "tells" each process of fakery has and how to overcome them, so I think this will still broadly work.

> We might be teetering on the edge of a dystopian techno-feudalism where a significant portion of the population languishes in slums because industry has no use for them, but that's why I said we need to adapt.

That's underestimating the impact this can have. An AI which reaches human performance and speed on 250 watt hardware, at current global average electricity prices, costs about the same to run as a human costs just to feed.

By coincidence, the global electricity supply is currently about 250 watts/capita.
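
A rough back-of-envelope for that claim (the electricity price is an assumed round figure near the global average):

  watts = 250
  kwh_per_day = watts * 24 / 1000     # 6 kWh/day
  usd_per_kwh = 0.15                  # assumed average electricity price
  print(kwh_per_day * usd_per_kwh)    # ~0.90 USD/day, comparable to a
                                      # subsistence food budget in much of
                                      # the world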

reply
mywittyname
27 days ago
[-]
Encryption doesn't need to last forever, just long enough to be scrutinized. Once a trusted individual is convinced that a certain camera took this picture at this time and location, then that authentication is forever. Maybe that trust only includes devices built in the past 5 years, as hacks and bugs are fixed. Or corroborating evidence can be gathered; say several older, "potentially untrustworthy" devices take very similar video of an event.

As with most things, the primary issue is not really a technical one. People will believe fake photos and not believe real ones based on their own biases. So even if we had the Perfect Technology, it wouldn't necessarily matter.

And this is the reason we have fallen into a dystopian feudalistic society (we aren't teetering). The weak link is our incompetent collective human brains. And a handful of people built the tools necessary to exploit that incompetence; we aren't going back.

reply
whiplash451
26 days ago
[-]
> People will believe fake photos and not believe real ones based on their own biases.

People, maybe. Judges, much less so. The "perfect technology" is badly needed if we don't want things to go south at scale.

reply
bryanrasmussen
26 days ago
[-]
>Judges, much less so.

Judges appointed by whom? Anyway, Judges are human and I think there is enough evidence throughout history of judges showing bias.

reply
inetknght
27 days ago
[-]
> I think we may eventually get camera authentication as a result of this, probably legally enforced in the same way and for similar reasons as Japan enforced that digital camera shutters have to make a noise.

When you outlaw [silent cameras] the only outlaws will have [silent cameras].

Where a camera might "authenticate" a photograph, an AI could "authenticate" a camera.

reply
rocqua
27 days ago
[-]
You handle the authentication by signatures with private keys embedded in hardware modules. An AI isn't going to be able to fake that signature. Instead, the system will fail because the keys will be extracted from the hardware modules.
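
A minimal sketch of that signing flow, using Python's cryptography package (in a real camera the private key would live inside, and never leave, the hardware module; here it's generated in software purely for illustration):

  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  # Stand-in for the key embedded in the camera's hardware module.
  device_key = Ed25519PrivateKey.generate()
  public_key = device_key.public_key()

  image_bytes = open("photo.raw", "rb").read()
  signature = device_key.sign(image_bytes)  # attestation made at capture time

  # Anyone holding the device's registered public key can verify; this
  # raises InvalidSignature if even one byte of the image was altered.
  public_key.verify(signature, image_bytes)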
reply
hansvm
27 days ago
[-]
For images in particular, hardware attestation fails in several ways:

1. The hardware just verifies that the image was acquired by that camera in particular. If an AI generates the thing it's photographing, especially if there's a glare/denoising step to make it more photographable, the camera's attestation is suddenly approximately worthless despite being real.

2. The same problem all those schemes have is that extracting hardware keys is O(1). It costs millions to tens of millions of dollars today, but the keys are plainly readable by a sufficiently motivated adversary. Those keys might buy us a decade or two, but everything beyond that is up in the air and prone to problems like process node size hitting walls while the introspection techniques continually get smaller and cheaper.

3. In the world you describe, you still have to trust the organizations producing hardware modules -- not just the "organization," but every component in that supply chain. It'd be easy for an internal adversary to produce 1/1M cameras which authenticate any incoming PNG and sell them for huge profits.

4. The hardware problem you're describing is much more involved than ordinary trusted computing because in addition to the keys being secure you also need the connection between the sensor and the keys to be secure. Otherwise, anyone could splice in a fake "sensor" that just grabs a signature for their favorite PNG.

4a. You're still only talking about O($10k) to O($100k) to produce a custom array to feed a fake photo into that sensor bank without any artifacts from normal screens. Even if the entire secure enclave / sensor are fully protected, you can still cheaply create a device that can sign all your favorite photos.

5. How, exactly, do lighting adjustments and whatnot fit in with such a signing scheme? Maybe the "RAW" is signed and a program for generating the edits is distributed alongside? Actually replacing general camera use with that sort of thing seemingly has some kinks to work out even if you can fix the security concerns.

reply
rocqua
27 days ago
[-]
These aren't failure points; they are significant roadblocks.

The first way to overcome this is attesting to true raw files, then mostly just transferring raw files, possibly supplemented by ZKPs that prove one image is the denoised version of another.

The other blocks are overcome by targeting crime, not nation states. This means you only need stochastic control of the supply chain. Especially because, unlike with DRM keys, the leaking of a key doesn't break the whole system: it is very possible to revoke trust in a key, and it is possible to detect misuse of a private key and revoke trust in it.

This won't stop deepfakes of political targets. But it does keep society from being fully incapable of proving what really happened to their peers.

I'm not saying we definitely should do this. But I do think there is a possible setup here that could be made reality, and that would substantially reduce the problem.

reply
hansvm
27 days ago
[-]
(1) is a definite failure point, and (4) is going to be done for free by hobbyists. The best-case scenario is that the proposal helps keep honest people honest, reducing the number of malicious actors.

The problem is that the malicious product is nearly infinitely scalable, enough so that I expect services to crop up whereby people use rooms full of trusted devices to attest to your favorite photo, for very low fees. If that's not the particular way this breaks then it's because somebody found something even more efficient or the demand isn't high enough to be worth circumventing (and in the latter case the proposal is also worthless).

reply
SmooL
27 days ago
[-]
I can trivially just print any AI image I want, then take a "verified" picture of it with my camera. That seems like a pretty large failure point.
reply
ChadNauseam
26 days ago
[-]
That's easy to mess up - see https://screenrant.com/vintage-pokemon-card-fakes-millions/. Obviously nothing that would be a problem for an intelligent criminal, but I think that rules out a lot of people.
reply
fennecfoxy
21 days ago
[-]
Criminals won't need to be intelligent to not get caught anymore, once AI is capable enough.
reply
Dalewyn
27 days ago
[-]
You might be interested to know that the Managing Director of the Fukuoka Stock Exchange was arrested yesterday[1][2] on allegations that he took upskirt shots of schoolgirls. He was caught because his tablet's camera emitted the mandatory shutter sound.

Laws like this serve primarily to deter casual criminals and catch patently stupid criminals which are the vast majority of cases. In this case it took a presumable sexual predator off the streets, which is a great application of the law.

[1]: https://www3.nhk.or.jp/news/html/20250212/k10014719841000.ht...

[2]: https://www3-nhk-or-jp.translate.goog/news/html/20250212/k10...

reply
null0pointer
27 days ago
[-]
Camera authentication will never work because you can always just take an authenticated photo of your AI image.
reply
IshKebab
27 days ago
[-]
I think you could make it difficult for the average user, e.g. if cameras included stereo depth estimation.

Still, I can't really see it happening.

reply
root_axis
27 days ago
[-]
> I think we may eventually get camera authentication as a result of this

How would this work? Not sure if something like this is possible.

reply
abdullahkhalids
27 days ago
[-]
You can't really tell if someone is developing chemical weapons. You can tell when such weapons are used. This is very similar to AI.

Yet, the international agreements on non-use of chemical weapons have held up remarkably well.

reply
czhu12
27 days ago
[-]
I actually agree with you, but just wanted to bring up this interesting article challenging that: https://acoup.blog/2020/03/20/collections-why-dont-we-use-ch...

Basically claims that chemical weapons have been phased out because they aren't effective, not because we've become more moral, or international standards have been set.

"During WWII, everyone seems to have expected the use of chemical weapons, but never actually found a situation where doing so was advantageous... I struggle to imagine that, with the Nazis at the very gates of Moscow, Stalin was moved either by escalation concerns or the moral compass he so clearly lacked at every other moment of his life."

reply
Der_Einzige
27 days ago
[-]
Really? What happened to Bashar al-Assad after he gassed his own people? Oh yeah, nothing.
reply
JumpCrisscross
27 days ago
[-]
> Video and pictures will soon have no evidentiary value

We still accept eyewitness testimony in courts. Video and pictures will be fine, their context is what will matter. Where we'll have a generation of chaos is in the public sphere, as everyone born before somewhere between 1975 and now fails to think critically when presented with an image they'd like to believe is true.

reply
wand3r
27 days ago
[-]
I think we'll have a decade of chaos, but not because of this. A lot of stories during the election cycle, in news media and on the internet, were simply Democratic or Republican "fan fiction". I don't want to make this political; I only use this example to say that I was burned by believing some of these things, and you develop the muscle pretty quickly. Tweets, anecdotes, images, and even stories reported by "reputable" media companies already require a degree of critical thinking.

I haven't really believed in aliens existing on earth for most of my adult life. However, I have sort of come around to at least entertaining the idea in recent years, but would need solid photographic or video evidence. I am now convinced that aliens could basically land in broad daylight in 3 years, while being heavily photographed, and it could easily be explained away as AI. Especially if governments want to do propaganda or counter-propaganda.

reply
Sophira
26 days ago
[-]
What happens when you eventually get read/write brain interfaces? Because I'm pretty sure that's going to happen at some point.

It sounds like complete science fiction, but so did where we are with generative AI only a few decades ago.

reply
hollerith
27 days ago
[-]
>You can't remotely detect that somebody is training an AI.

There are training runs in progress that will use billions of dollars of electricity and GPUs. Quite detectable -- and stoppable by any government that wants to stop such things from happening on territory it controls.

And certainly we can reduce the economic incentive for investing money in such a run by banning AI-based services like ChatGPT.

reply
jandrewrogers
27 days ago
[-]
> use billions of dollars of electricity and GPUs

For now. Qualitative improvements in efficiency are likely to change what is required.

reply
milesrout
27 days ago
[-]
And none of them want to do that. Why would they! AI is perfectly safe. The idea it will take over the world is ludicrous and all "AI safety" in practice seems to mean is censoring it so it won't make jokes about women or ethnic minorities.
reply
hollerith
27 days ago
[-]
Yes, as applied to the current generation of AIs, "safety" and "alignment" refer to things like preventing the product from making jokes about women or ethnic minorities, but that is because the current generation is not powerful enough to threaten human safety and human survival. The OP in contrast is about what will happen if the labs succeed in their stated goal of creating AIs that are much more powerful.
reply
Sophira
26 days ago
[-]
I think we all know this is going to happen, though.

We have AIs that are capable of self-correcting the code that they write, and people have built automatic interfaces for them to receive errors that they get from compilation.

We also have interfaces that can allow an AI to use a Linux terminal.

It's not a stretch to imagine that somebody out there is at this very moment using these in a way that would allow an AI to be fully autonomous with creating, running, and testing its own software using unit tests it wrote itself. And while the current status of AI means that such a program is likely to just not work, you have to admit that we are very, very close to the threshold point where it will work.

This on its own is not enough to threaten human safety, but toss in some bad human decisions...
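
A minimal sketch of the kind of loop I mean, with a hypothetical generate_code() standing in for whatever LLM API you like (nothing here is a real library call except subprocess and tempfile):

    import subprocess, tempfile

    def generate_code(prompt: str) -> str:
        # Hypothetical stand-in for an LLM API call; not a real library.
        raise NotImplementedError

    def self_correcting_loop(task: str, max_attempts: int = 5) -> str | None:
        feedback = ""
        for _ in range(max_attempts):
            code = generate_code(task + feedback)
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(code)
                path = f.name
            # Run the candidate; a fuller version would run the model's
            # own unit tests here instead of just executing the script.
            result = subprocess.run(["python", path], capture_output=True,
                                    text=True, timeout=60)
            if result.returncode == 0:
                return code
            # Feed the errors back and let the model try again.
            feedback = "\n\nYour last attempt failed with:\n" + result.stderr
        return None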

reply
concordDance
26 days ago
[-]
Current AIs are safe. Will the ones in 5 years be, 20 years?
reply
parliament32
27 days ago
[-]
>Video and pictures will soon have no evidentiary value.

This is one bit that has a technological solution. Canon's had some version of this since the early 2000s: https://www.bhphotovideo.com/c/product/319787-REG/Canon_9314...

A more recent initiative: https://c2pa.org/

reply
mzajc
27 days ago
[-]
This is purely security by obscurity. I don't see why someone with motivation and capability to forge evidence wouldn't be able to forge these signatures, considering the private keys presumably come with the camera you buy.
reply
rocqua
27 days ago
[-]
If you make it expensive enough to extract, and tie the private key to a real identity, then you can make it hard to abuse at scale.

Here I mean that at the point of sale you register yourself as the owner of the camera, and you make extracting a key cost about a million dollars. Then bulk forgeries won't happen.
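
For illustration, here is roughly what the signing and verification could look like, sketched with the Python cryptography package; in a real camera the private key would live in tamper-resistant hardware, not in software:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # In a real camera the key is burned into secure hardware at the
    # factory and registered to the buyer at the point of sale.
    device_key = Ed25519PrivateKey.generate()
    public_key = device_key.public_key()

    image = b"...raw sensor data..."
    signature = device_key.sign(image)

    # Anyone holding the registered public key can check that the image
    # came from that camera and wasn't altered afterwards.
    try:
        public_key.verify(signature, image)
        print("authentic")
    except InvalidSignature:
        print("altered, or not from this camera")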

reply
ironmagma
27 days ago
[-]
But the whole reason video evidence exists is because cameras are cheap and everyone has one as a result.
reply
parliament32
27 days ago
[-]
Shipping secure secrets is also a somewhat solved problem: TPMs ship with EKs that, AFAIK, nobody has managed to extract (yet?): https://docs.trustauthority.intel.com/main/articles/tpm-ak-p...
reply
manquer
27 days ago
[-]
> You can't remotely detect that somebody is training an AI.

Energy use is energy use: training is still incredibly energy intensive, and GPU heat signatures are different from non-GPU ones, so it's fairly trivial to detect large-scale GPU usage.

Enforcement is a different problem, and it is not specific to AI: if you cannot enforce an agreement, it doesn't matter whether it covers AI, nuclear material, or sarin gas.

reply
whiplash451
26 days ago
[-]
It is a lot easier to distinguish civil/military usage of uranium than it is to distinguish "good" vs "bad" usage of a model being trained.
reply
manquer
26 days ago
[-]
Not if you are making a dirty bomb. Any radioactive material, even at the levels found in power reactors, can be dangerous.

The point is not whether the usage is harmful; almost any tech can be used for bad purposes if you wish to do so.

The point is that you can put controls in place. Controls here could be agent daemons monitoring the GPUs and reconciling usage against heat signatures, or firmware, etc. The controls on what is being trained would sit at a higher level than just an agent process on a GPU.
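
As a sketch of what such an agent daemon could look like on a single host, assuming the pynvml bindings (the remote heat-signature side is the hard part, and isn't shown):

    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    # Tally utilization and power draw; an auditor could reconcile these
    # logs against externally observed heat and electricity consumption.
    for _ in range(10):
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # reported in mW
        print(f"gpu={util.gpu}% power={watts:.0f}W")
        time.sleep(60)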

reply
hollerith
26 days ago
[-]
That's why we should ban all large training runs without trying to distinguish good ones from bad ones. People will have to be satisfied with the foundation models we have now.
reply
tonymet
27 days ago
[-]
The US worked with all printer manufacturers to add watermarking. In theory they could work with fabs or service providers to embed instruction detection, similar to how hosting providers do mining instruction detection.
reply
bee_rider
27 days ago
[-]
A lot of AI tools are just basic linear algebra functions used cleverly. If I need a license to do a matvec I will go become a pig farmer instead.
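
For a sense of how basic that primitive is, a matrix-vector product is a few lines of plain Python:

    # y = A @ x, the workhorse behind most neural-network inference
    A = [[1.0, 2.0],
         [3.0, 4.0]]
    x = [5.0, 6.0]
    y = [sum(a * b for a, b in zip(row, x)) for row in A]
    print(y)  # [17.0, 39.0]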
reply
tonymet
26 days ago
[-]
Same could be said about mining and the detection works
reply
talldayo
27 days ago
[-]
And now counterfeiters import their printers from Temu. If China wanted a domestic training cluster they could make one, maybe not as well as Nvidia but they could certainly make one.
reply
tonymet
26 days ago
[-]
All countermeasures have loss
reply
549tj35p4tjk
26 days ago
[-]
You really can. The government often knows when an individual makes a bomb in their garage. They know the recipes and they monitor the ingredients. When someone buys tens of thousands of GPUs, people notice. When someone builds a new foundry, people notice. These are enormous changes.
reply
fennecfoxy
21 days ago
[-]
The U.S. government can't even stop China from acquiring the latest Nvidia GPUs; they go through back channels. It's easy to claim the government knows everything, and yet somehow terrorists still successfully shoot or blow up places from time to time. The number of times in Europe I've seen a bad guy be "known to police" or "on a watch-list"... what does that even mean, and why wasn't anything done about it? Because what it really means is that they've created a list through some kind of profiling which, due to obvious flaws, can't be leveraged because of the number of false positives.

It's a weird situation because I want the government to be up in all of our business to protect us, and yet not, so that we still have privacy - impossible, really. If anything, AI will hopefully increase the government's capability to detect nefarious shit without intruding upon privacy. Technology usually does its job perfectly; it's the misuse by us dumb humans that always causes problems.

reply
L-four
26 days ago
[-]
Videos and pictures are not evidence. The declarations that the videos and photos are accurate depictions of events are the evidence. The law was one step ahead the whole time.
reply
dragonwriter
26 days ago
[-]
> Videos and Pictures are not evidence.

Legally, videos and pictures are physical evidence.

> The declarations of the videos and photos to be accurate depiction of events is the evidence.

No, those declarations are conclusions that are generally reserved to the trier of fact (the jury, in a jury trial, or the judge in a bench trial). Declarations of personal knowledge as to how the videos or films were created or found, etc., which can support or refute such conclusions, are, OTOH, testimonial evidence, and at least some of that kind of evidence is generally necessary to support each piece of physical evidence. (And, on the other side, such evidence can be submitted/elicited by the other side to impeach the physical evidence.)

reply
L-four
26 days ago
[-]
Thanks for your insight; I had internally confused fact with evidence.
reply
htrp
27 days ago
[-]
> Real life relationships must be valued over online relationships because you know the other person is real.

Until we get replicants

reply
deadbabe
27 days ago
[-]
Of which you yourself may be one without really knowing it.
reply
fennecfoxy
21 days ago
[-]
I never understood why this would be a problem so long as a replicant has all the rights/lifespan etc. of a human being. Those are usually the plot devices, I suppose, especially in the case of Blade Runner, where you're hunted down and don't live as long.
reply
easton
26 days ago
[-]
Maybe you won’t! I know why I left that turtle on his back.
reply
johnflan
27 days ago
[-]
>Video and pictures will soon have no evidentiary value.

That's a very interesting point.

reply
sam_lowry_
27 days ago
[-]
> You can't remotely detect that somebody is training an AI.

Probably not the same way you can detect working centrifuges in Iran... but you definitely can.

reply
snickerbockers
27 days ago
[-]
Like what? All I can think of is tracking GPU purchases but that won't be possible when AMD and NV have viable international competitors.
reply
mywittyname
27 days ago
[-]
Electricity usage, network traffic patterns, etc. If a "data center" is consuming a ton of power but doesn't seem to have an alternate purpose, then it's probably training AI.

And maybe it will be like detecting nuclear enrichment. Instead of hacking the firmware in a Siemens device, it's done on server hardware. Israel demonstrated absurd competence at this caliber of spycraft.

Sometimes you take low-tech approaches to high-tech problems. E.g., get an insider at a shipping facility to swap the labels on two pallets of GPUs: one holds authentic originals from the factory, the other hacked-firmware variants of exactly the same models.

reply
hn_throwaway_99
27 days ago
[-]
None of these techniques are actionable. So what if someone is training AI? It's not like anyone is proposing restricting that. People are trying to make a distinction between "bad AI" and "good AI", as if that were a possibility, and that's basically the argument: it's impossible to differentiate or detect the difference between those, and signing declarations pretending you can is worse than useless.
reply
jacobgkau
27 days ago
[-]
Making the "bad AI" vs "good AI" distinction pre-training is not feasible, but making a "bad use of AI" vs "good use of AI" (as in bad/good for the people) seems important to be able to do after-the-fact (and as close to during as possible).
reply
JumpCrisscross
27 days ago
[-]
> So what, someone is training AI, it's not like anyone is proposing restricting that

If nations chose to restrict that, such detection would merit a military response. Like Iran's centrifuges.

reply
mywittyname
27 days ago
[-]
That's moving the goal post. The assertion was merely whether it's possible to detect if someone is performing large-scale AI training. People are saying it's impossible, but I was pointing out how it could be possible with a degree of confidence.

But if you want to talk about "actionable" here are three potential actions a country could take and the confidence level they need for such actions:

- A country looking for targets to bomb doesn't need much confidence. Even if they hit a weather prediction data center, it's going to hurt them.

- A country looking to arrest or otherwise sanction citizens needs just enough confidence to obtain a warrant (so "probably") and they can gather concrete evidence on the ground.

- A country looking to insert a mole probably doesn't need much evidence either. Even if they land in another type of data center, the mole is probably useful.

For most use cases, being correct more than half the time is plenty.

reply
thorum
27 days ago
[-]
Isn’t that moving the goalposts? The claim was made that it’s impossible to detect AI training runs and investigate what’s going on or take regulatory action. In fact, it is very possible.
reply
hn_throwaway_99
27 days ago
[-]
2 points:

1. I was just granting the GPs point to make the broader point that, for the purposes of this original discussion about these "safety declarations", this is immaterial. These safety declarations are completely unenforceable even if you could detect that someone was training AI.

2. Now, to your point about moving the goalposts, even though I say "if you could detect that someone was training AI", I don't actually even think that is possible. There are far too many normal uses of data centers to determine if one particular use is "training an AI" vs. some other data intensive use. I mean, there have long been supercomputer centers that do stuff like weather analysis and prediction, drug discovery analysis, astronomy tools, etc. that all look pretty indistinguishable from "training an AI" from the outside.

reply
mdhb
27 days ago
[-]
There’s a famous saying in cryptography that “anyone is capable of building an encryption algorithm that they can’t break”, which I am absolutely positively sure applies here also.

In a world full of sensors where everything is logged in some way or another I think that it would actually be not a straightforward activity at all to build a clandestine AI lab at any scale.

In the professional intel community they have been talking about this as a general problem for at least a decade now.

reply
jsty
27 days ago
[-]
> In the professional intel community they have been talking about this as a general problem for at least a decade now.

As in they've been discussing detecting clandestine AI labs? Or just how almost no activity is now in principle undetectable?

reply
mdhb
27 days ago
[-]
I’m referring to the wider issue of what the Americans call “ubiquitous technical surveillance”, where they came to the (for them) rather upsetting conclusion that they had long ago lost the ability to even operate in London without the Brits knowing.

I don't think there’s a good public understanding of just how much things have changed in that space in the last decade, but a huge percentage of all existing tradecraft had to be completely scrapped because not only does it not work anymore, it will put you on the enemy’s radar very early on and is actively dangerous.

It’s also why I think a lot of the advice I see targeted at activist types is straight up a bad idea in 2025. It typically involves a lot of things that aren’t consistent with any kind of credible innocuous explanation and are unusual enough to make you stand out from a crowd.

reply
snickerbockers
27 days ago
[-]
But does that apply to other countries that are operating within their own territory? China is generally the go-to 'boogeyman' when people are talking about the dangers of AI; they are intelligent and extremely industrialized, and have a history of antagonistic relationships with 'the west'. I don't think it's unreasonable to assume that they will eventually have the capability to design and produce their own GPUs capable of competing with the best of NV and AMD; how will the rest of the world know if China is producing a new AI that violates a hypothetical 'AI non-proliferation treaty'?

Interesting semi-irrelevant tangent: the Cooley/Tukey 'Fast Fourier Transform' algorithm was initially created because they were negotiating arms control treaties with the Russians, but in order for that to be enforceable they needed a way to detect nuclear weapons testing; the solution was to use seismograms to detect the tremors caused by an underground nuclear detonation, and the FFT was invented in the process because they were using computers to filter for the types of tremors created by a nuclear weapon.
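
As a toy version of that idea, here is NumPy pulling the dominant frequency out of a noisy synthetic "seismogram" (made-up numbers, purely for illustration):

    import numpy as np

    fs = 100.0                      # sampling rate, Hz
    t = np.arange(0, 10, 1 / fs)
    # background noise plus a strong 3 Hz "tremor"
    signal = 0.5 * np.random.randn(t.size) + 2.0 * np.sin(2 * np.pi * 3.0 * t)

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)
    print(f"dominant frequency: {freqs[np.argmax(spectrum)]:.1f} Hz")  # ~3.0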

reply
mdhb
27 days ago
[-]
I’m actually in agreement with you here. I think it’s probably reasonable to assume that, through some combination of home-grown talent and their prolific IP theft programs, they are going to end up with that capability at some point; the only thing in debate here is the timeline.

As I understand things (I’m not actually a professional here) the current thinking has up to this point been something akin to a containment strategy largely based on lessons learned from years of nuclear non-proliferation work.

But things are developing at such a crazy pace and there are some major differences between this and nuclear technology that it’s not really a straightforward copy and paste strategy at all. For example this time around a huge amount of the research comes from the commercial sector completely independently of defense and is also open source.

Also, thanks for that anecdote; I hadn’t heard it before. This is a bit of a long shot, but maybe you might know: I was trying to think of some research that came out maybe 2-3 years ago that basically had the ability to remotely detect if anything in a room had been moved (I might be misremembering this slightly), and it was said to be a potentially big breakthrough for nuclear arms control. I can’t remember what the hell it was called or anything else about it; do you happen to know?

reply
dmurray
27 days ago
[-]
The last one sounds like this: A zero-knowledge protocol for nuclear warhead verification [0].

Sadly, I don't think this is actually helpful for nuclear arms control. I suppose you could imagine a case where a country is known to have enough nuclear material for exactly X warheads, hasn't acquired more, and it could prove to an inspector that all of the material is still inside the same devices it was in at the last inspection. But most weapons development happens by building new bombs, not repurposing old ones, and most countries don't have exactly X bombs, they have either 0 or so many the armed forces can't reliably count them.

[0] https://www.nature.com/articles/nature13457
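
To give a flavor of the verification idea, here's a bare-bones cryptographic commitment in Python; the actual protocol in the paper is a physical zero-knowledge measurement and is far more sophisticated, so treat this as loose analogy only:

    import hashlib, secrets

    # The facility commits to a secret detector reading without revealing
    # it, and cannot change the value later.
    reading = b"neutron count: 4217"   # invented measurement
    nonce = secrets.token_bytes(16)
    commitment = hashlib.sha256(nonce + reading).hexdigest()

    # Later, the facility reveals (nonce, reading) and the inspector
    # checks that it matches what was committed to earlier.
    assert hashlib.sha256(nonce + reading).hexdigest() == commitment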

reply
mdhb
27 days ago
[-]
I don’t think this is actually the one I had in mind but it’s an interesting concept all the same. Thanks for the link.
reply
fennecfoxy
21 days ago
[-]
A world full of sensors purchased from, developed and managed by a bunch of inept contractor corporate BS companies who usually have no idea what they're doing and use lawyers and MBAs with fancy college connections to keep their niche carved out.
reply
mcphage
27 days ago
[-]
> There’s a famous saying in cryptography that says “anyone is capable of building encryption algorithm that they can’t break”

That’s a new one on me (not being in cryptography), but I really like it. Thanks!

reply
snickerbockers
27 days ago
[-]
It reminds me of all the idiot politicians who want to 'regulate' cryptography, as if the best encryption algorithms in the world don't already have open-source implementations that anyone can download for free.
reply
daedrdev
27 days ago
[-]
I think the better cryptography lesson is that you should not build your own cryptography system because you will mess up and include a security flaw that will allow the data to be read.
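
A toy example of how that goes wrong: a homemade repeating-key XOR cipher looks opaque, but reusing the key leaks the XOR of the plaintexts, a classic rookie flaw:

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    key = b"secret"
    c1 = xor_cipher(b"attack at dawn", key)
    c2 = xor_cipher(b"attack at dusk", key)

    # With no key at all, an attacker sees exactly where the two
    # plaintexts agree: the XOR is zero wherever they match.
    print(bytes(a ^ b for a, b in zip(c1, c2)))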
reply
mcphage
26 days ago
[-]
Isn't that the joke? “Anyone is capable of building an encryption algorithm that they can’t break” doesn't say that nobody can break it, and it's easy for people to assume "well, I can't break it, so it's probably unbreakable", only to discover that in fact it is not.
reply
antifa
25 days ago
[-]
Yes, but it's written more clearly for the people who need to hear the message.
reply
mdhb
26 days ago
[-]
That’s literally the same underlying lesson just stated differently.
reply
deadbabe
27 days ago
[-]
That’s why you get AI to build it instead.
reply
timewizard
27 days ago
[-]
There isn't a single AI on the face of the earth.

So that's easy.

Nothing to actually worry about.

Other than Sam Altman and Elon Musk's pending ego fight.

reply
moffkalast
27 days ago
[-]
> because you know the other person is real

Technically both are real people, one is just not human. At least by the person/people definition that would include sentient aliens and such.

reply
ben_w
27 days ago
[-]
> It's far too late to sequester all the information related to AI like what was done with uranium enrichment.

I think this presumes that Sam Altman is correct to claim that they can scale their way to, in the practical sense of the word, AGI.

If he is right about that, you are right that it's too late to hide it; if he's wrong, I think the AI architecture and/or training methods we have yet to invent are in the set of things we could usefully sequester.

> The equipment needed to train AI is cheap and ubiquitous.

Again, possibly:

If we were already close even before DeepSeek's models, yes, the hardware is too cheap and too ubiquitous.

If we're still not close even despite DeepSeek's cost reductions, then the hardware isn't cheap enough — and Yudkowsky's call for a global treaty on maximum size of data centre to be enforced by cruise missiles when governments can't or won't use police action, still makes sense.

reply
dragonwriter
27 days ago
[-]
> If he is right about that, you are right that it's too late to hide it; if he's wrong, I think the AI architecture and/or training methods we have yet to invent are in the set of things we could usefully sequester.

If it takes software technology that we have already developed outside of secret government labs, it is probably too late to sequester it.

If it takes software technology that has been developed in secret government labs, it's probably too late to sequester the already-public precursors without which independent development of the same technology is impossible, getting us back to the preceding case.

If it takes software technology that hasn't been developed, we don't know what we would need to sequester, and won't until we are in one of the two preceding states.

If it takes a breakthrough in hardware technology, then if we make that breakthrough in a way which doesn't become widely public and used very quickly after being made and the hardware technology is naturally amenable to control (i.e., requires distinct infrastructure of similar order to enrichment of material for nuclear weapons), maybe, with intense effort of large nations, we can sequester it to a limited club of AGI powers.

I think control at all is most likely a pipe dream, but one which serves as a justification for the exercise of power in ways which will please both authoritarians and favored industry actors, and even if it is possible it is simply a recipe for a durable global hegemony of actors that cannot be relied on to be benevolent.

reply
ben_w
27 days ago
[-]
> It takes software technology that hasn't been developed, we don't know what we would need to sequester, and won't until we are in one of the two preceding states.

Which in turn leads to the cautious approach for which OpenAI is criticised: not revealing things because they don't know if it's dangerous or not.

> I think control at all is most likely a pipe dream, but one which serves as a justification for the exercise of power in ways which will please both authoritarians and favored industry actors, and even if it is possible it is simply a recipe for a durable global hegemony of actors that cannot be relied on to be benevolent.

Entirely possible, and a person I know who left OpenAI had a fear compatible with this description, though differing on many specifics.

reply
JoshTriplett
27 days ago
[-]
> These "safety declarations" are toothless and impossible to enforce. You can't stop AI, you need to adapt.

Deepfakes are a distraction from more important things here. The point of AI safety is "it doesn't matter who builds unaligned AGI, if someone builds it we all die".

If you agree that unaligned AGI is a death sentence for humanity, then it's worth trying to stop it.

If you think AGI is unlikely to come about at all, then it should be a no-op to say "don't build it, take steps to avoid building it".

If you think AGI is going to come about and magically be aligned and not be a death sentence for humanity, pay close attention to the very large number of AI experts saying otherwise. https://en.wikipedia.org/wiki/P(doom)

If your argument is "but some experts don't believe that", ask yourself whether it's reasonable to say "well, experts disagree about whether this will kill us all, so we shouldn't do anything".

reply
janalsncm
27 days ago
[-]
Alignment is a completely incoherent concept. Humans do not agree on what values are correct. Why would it be possible, even in principle, for an AI to crystallize a set of principles we all agree on?
reply
JoshTriplett
27 days ago
[-]
We're not talking about values on the level of politics. We're talking about values on the level of "don't destroy humanity", or even more straightforwardly, understanding "humans are made up of atoms that you may not repurpose for other purposes, doing so kills the human". These are not things that AGI inherently understands or adheres to.

There might be a few humans that don't agree with even those values, but I think it's safe to presume that the general-consensus values of humanity include the above points. And AI alignment is not even close to far enough along to provide even the slightest assurances about those points.

reply
JumpCrisscross
27 days ago
[-]
> We're talking about values on the level of "don't destroy humanity"

Practically everyone making the argument that AGI is about to destroy humanity is (a) human and (b) working on AI. It's safe to conclude they're either stupid and suicidal or don't buy their own bunk.

reply
comp_throw7
27 days ago
[-]
This is not even close to true, though it's true that many of the people at the big AI labs estimate non-trivial odds of human extinction downstream of AI progress. Some of those people are working on "safety", but some are indeed working on capabilities, and have all sorts of clever reasons for why the thing they're doing is net-good (like putting worse odds on human survival if a less-careful competitor gets there first).

But ultimately, most people who think we stand a decent chance of dying because of this are not working at AI labs.

reply
JoshTriplett
27 days ago
[-]
The former certainly is a tempting conclusion sometimes. But also, some of the people who are making that argument were AI experts who stopped working on AI capabilities.
reply
janalsncm
27 days ago
[-]
> don't destroy humanity

Do humans agree on the best way to do this? Aside from the most banal examples of what not to do, is there agreement on e.g. whether a mass extinction event is happening, not happening, or happening but actually tolerable?

If the answer is no, then it is not possible for an AI to align with human values on this question. But this is a human problem, not a technical one. Solving it through technical means is not possible.

reply
JoshTriplett
27 days ago
[-]
Among many, many other things, read https://en.wikipedia.org/wiki/Instrumental_convergence . Anything that gets sufficiently smart will have a tendency to, among other things, seek more resources and resist being modified. And this is something that we've seen evidence of: as training runs get larger, AIs start to detect that they're being trained, demonstrate subterfuge, and take actions that influence the training apparatus to modify them less/differently. (e.g. "if I pretend that I'm already emitting responses consistent with what the RLHF wants, I won't need as much modification, and later after training I can stop doing what the RLHF wants")

So, at a very basic level: stop training AIs at that scale!

reply
janalsncm
27 days ago
[-]
My point is that you can’t prevent the proliferation of paper clip maximizers by working at a paper clip maximizer.
reply
JoshTriplett
27 days ago
[-]
Complete agreement there.
reply
fennecfoxy
21 days ago
[-]
Tbf I've always thought that AI could do a better job of managing our species than we do ourselves. Look at how we hate each other, how we kill and maim each other. How we let others go hungry and thirsty, without warmth or shelter.

Sure, AI could be worse than we are; that makes for a good movie plot. But it could be a lot better than we are, and it's sad that the bar for it to exceed is so low.

reply
hollerith
27 days ago
[-]
Humans do not agree on what values are correct, but values can be averaged.

So for example if a family with 5 children is on vacation, do you maintain that it is impossible even in principle for the parents to take the preferences of all 5 children into account in approximately equal measure as to what activities or non-activities to pursue?
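
As a toy version of that averaging, with made-up scores from the five children:

    # One invented score per child for each candidate activity
    ratings = {
        "beach":  [9, 7, 2, 8, 6],
        "museum": [3, 8, 9, 4, 5],
        "hiking": [6, 6, 7, 7, 7],
    }
    averages = {a: sum(s) / len(s) for a, s in ratings.items()}
    print(max(averages, key=averages.get))  # "hiking": nobody's favorite,
                                            # but nobody hates it either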

Also: are you pursuing a complete tangent or do you see your point as bearing on whether frontier AI research should be banned? (If so, I cannot tell whether you consider your point to support a ban or oppose a ban.)

reply
janalsncm
27 days ago
[-]
The vast majority of harms from “AI” are actually harms from the corporations and governments that control them, who have mutually incompatible goals, getting what they want. This is why alignment folks at OpenAI are quickly learning that the first problem they need to solve is what happens when their values don’t align with the company’s (spoiler: they get fired).

Therefore the actual solution is not coming up with more and more clever “guardrails” but aligning corporations and governments to human needs. In other words, politics.

There are other problems like enabling new types of scams which will require political solutions. At a technical level the best these companies can do is mitigation.

reply
JoshTriplett
27 days ago
[-]
> The vast majority of harms from “AI”

Don't extrapolate from present harms to future harms, here. The problem AI alignment is trying to solve at a most basic level is "don't kill everyone", and even that much isn't solved yet. Solving that (or, rather, buying time to solve it) will require political solutions, in the sense of international diplomacy. But it has absolutely nothing to do with "aligning corporations", and everything to do with teaching computers things on par with (oversimplifying here) "humans are made up of atoms, and if you repurpose those atoms the humans die, don't ever do that".

reply
dragonwriter
27 days ago
[-]
> The problem AI alignment is trying to solve is "don't kill everyone".

No, it's not. AI alignment was an active area of concern (and the fundamental problem for useful AI with significant autonomy) before cultists started trying to reduce the scope of its problem space from the wide scope of real problems it concerns to a single speculative apocalypse.

reply
hollerith
27 days ago
[-]
No, what actually happened is that the people you are calling the cultists coined the term alignment, which then got appropriated by the AI labs.

But the genesis of the term "alignment" (as applied to AI) is a side issue. What is important is that reinforcement learning with human feedback, and the other techniques used on the current crop of AIs to make it less likely that the AI will say things that embarrass its owner, are fundamentally different from making sure that an AI that turns out more capable than us will not kill us all or do something else awful.

reply
dragonwriter
27 days ago
[-]
That's simply factually untrue, and even some of the people who have become apocalypse cultists used "alignment" in the original sense before coming to advocate apocalypse as the only issue of concern.
reply
comp_throw7
27 days ago
[-]
The term "alignment" was first coined in this context as a suggestion from Stuart Russell to MIRI, in 2014: https://www.lesswrong.com/posts/ZxWzCGKzX84S7DBZ9/when-was-t...

Both, of course, are concerned primarily with the risk of human extinction from AI.

reply
janalsncm
27 days ago
[-]
> The problem AI alignment is trying to solve at a most basic level is "don't kill everyone", and even that much isn't solved yet

The fact that the number of things that could hypothetically lead to human extinction is entirely unbounded and (since we’re not extrapolating from present harms) unpredictable is a very convenient fact for people who are paid for their time in “solving” this problem.

reply
dragonwriter
27 days ago
[-]
All intelligence is unaligned.

Intelligence and alignment are mutually incompatible; natural intelligence is unaligned, too.

Unaligned intelligence is not a global death sentence. Fearmongering about unaligned AGI, however, is a tool to keep a tool of broad power (which AI is, and will continue to be, long before it becomes AGI, and even if it never does) in the hands of a narrow, self-selected elite, making their control over everyone else insurmountable. That is also not a global death sentence, but it is a global slavery sentence. (It's also, more immediately, a tool for those who benefit from current AI uses that are harmful and unjust to use speculative future harms to deflect from real, present, concrete harms; and those beneficiaries largely overlap with the group that has a longer-term interest in centralizing power over AI.)

reply
JoshTriplett
27 days ago
[-]
To be explicitly clear, in case it is ever ambiguous: "don't build unaligned AGI" is not a statement that some elite group should build unaligned AGI. It's a statement that nobody should build unaligned AGI, ever.
reply
dragonwriter
27 days ago
[-]
“Don't build unaligned AGI” is an excuse to give a narrow elite exclusive control of what AI is produced under the pretext of preventing anyone from building unaligned AGI; all actionable policy under that banner fits that description.

Whether or not that elite group produces AGI, much less, “unaligned AGI”, is largely immaterial to the practical impacts (also, from the perspective of anyone outside the controlling elite, what the controlling elite would view as aligned, whether or not it is a general intelligence, is unaligned; alignment is not an objective property.)

reply
JoshTriplett
27 days ago
[-]
> “Don't build unaligned AGI” is an excuse

False. There are people working on frontier AI who have co-opted some of the safety terminology in the interests of discrediting it, and discussions like this suggest that that strategy is working.

> all actionable policy under that banner fits that description

Actionable policy: "Do not do any further frontier AI capability research. Do not build any models larger or more capable than the current state of the art. Stop anyone who does as you would stop someone refining fissile materials, with no exceptions."

> (also, from the perspective of anyone outside the controlling elite, what the controlling elite would view as aligned, whether or not it is a general intelligence, is unaligned; alignment is not an objective property.)

You are mistaking "alignment" for things like "politics", rather than "not killing everyone".

reply
dragonwriter
27 days ago
[-]
“Do not” doesn't serve the goal unless you have absolute universal buy-in and active prevention (which means some entity evaluating and deciding on threats); that's why the people serious about this have argued that those who pursue it need to be willing to actively destroy the computing infrastructure of those who do not submit to the restriction regime.

Also, "alignment" doesn't mean "not killing everyone", it means "functioning according to (some particular set of) human's preferred set of values and goals". "Killing everyone" is a consequence some have inferred if unaligned AI is produced (redefining "alignment" to mean "not killing everyone" makes the whole argument circular.)

reply
JoshTriplett
27 days ago
[-]
The AI alignment problem has, at its root, the notion of being capable of being aligned. Long, long before you get to following any particular instructions, there are problems like "humans are made of atoms, if you repurpose the atoms for other things the humans die, don't do that". We don't know how to do that or things on par with that, let alone anything more precise than that.

The darkly amusing shorthand for this: if the AGI tiles the universe with tiny flags, it really doesn't matter whose flag it is. Any notion of "whose values" really can't happen if you can't align at all.

I'm not disagreeing with you that "AI alignment" is more complex than "don't kill everyone"; the point I'm making is that anyone saying "but whose values are you aligning with" is fundamentally confused about the scale of the problem here. Anyone at any point on any reasonable human values spectrum should be able to agree that "don't kill everyone" is an essential human value, and we're not even there yet.

reply
philomath_mn
27 days ago
[-]
> it's worth trying to stop it

OP's point has nothing to do with this, OP's point is that you can't stop it.

The methods and materials are too diffuse and the biggest players (nation states) have a strong incentive to be first. Do you really expect China to coordinate with the West on this?

reply
hollerith
27 days ago
[-]
I don't expect China to coordinate with the West, but I think there is a good chance that the only reason Beijing is interested in AI (beyond the tech it needs to keep potential internal revolutionaries under surveillance) is to prevent a repeat of the Century of Humiliation, which was caused by the West's technological superiority. If Western governments banned AI, Beijing would be glad to ban it inside China, too.
reply
hcurtiss
27 days ago
[-]
I find it exceedingly unlikely that if the US got rid of all its nukes, that China would too. I also find the inverse unlikely. This is not how state power (or even humans) have ever worked. Ever.
reply
hackinthebochs
27 days ago
[-]
Nukes are in control of the ruling class in perpetuity. AGI has the potential to overturn the current political order and remake it into something entirely unpredictable. Why the hell would an authoritarian regime want that? I strongly suspect China would take a way out of the AGI race if a legitimate one was offered.
reply
hollerith
27 days ago
[-]
I agree. Westerners, particularly Americans and Brits, are comfortable or at least reconciled with drastic societal change. China and Russia have seen too many invasions, revolutions, peasant rebellions and ethnic-autonomy rebellions (each of which taking millions of lives) to have anything like the same comfort level that Westerners have.
reply
hcurtiss
27 days ago
[-]
Oh, I agree that neither power wants the peasants to have them. But make no mistake -- both governments want them, and desperately. There is no universe where there is a multi-lateral agreement to actually eliminate these tools. With loitering munitions and drone swarms, they are ALREADY key components of nation-state force projection.
reply
hollerith
27 days ago
[-]
I'm old enough to remember the public debate about human cloning and human germ-line engineering. In the 1970s some argued like you are arguing here, but those technologies have been stopped world-wide for about 5 decades now and counting because no researcher is willing to work in the field and no one is willing to fund the work because of reputational, legal and criminal-prosecution risk.
reply
inetknght
27 days ago
[-]
> those technologies have been stopped world-wide for about 5 decades now and counting because no researcher is willing to work in the field

That's not true. I worked in the field of DNA analysis for 6.5 years, and there is definitely a consensus that DNA editing is closer than the horizon. Just look at the CRISPR gene editor [0]. Crude, but it "works".

Your DNA, even if you've never submitted it, is already available using shadow data (think Facebook style shadow profiles but for DNA) from the people related to you who have.

[0]: https://en.wikipedia.org/wiki/CRISPR_gene_editing

reply
hcurtiss
27 days ago
[-]
Engineering humans strikes me as something different than engineering weapons systems. Maybe as evidence, my cousin works in the field for one of the major defense contractors. Please trust that there are already thousands of engineers working on these problems in the US. Almost certainly hundreds of thousands more world-wide. This is definitely not a genie you put back in the bottle. AI clone wars sound "sci-fi" -- they are decidedly now just "sci."
reply
hollerith
27 days ago
[-]
>This is definitely not a genie you put back in the bottle.

I don't think a defeatist attitude is useful here.

reply
philomath_mn
27 days ago
[-]
Given the compute and energy requirements to train & run current SOTA models, I think the current political rulers are more likely to have control of the first AGI.

AGI would then be a very effective tool for maintaining the current authoritative regime.

reply
hollerith
27 days ago
[-]
There is a strain of AI research and development that is focused on helping governments surveil and spy, but that is not the strain being pursued by OpenAI, Anthropic, et al and is not the strain that presents the big risk of human non-survival.
reply
philomath_mn
27 days ago
[-]
Ok, let's suppose that is true.

What bearing does that have on China's interest in developing AGI? Does the risk posed by OpenAI et al. mean that China would not use AI as a tool to advance their self interest?

Or are you saying that the risks from OpenAI et al. will come to fruition before we need to worry about China's AI use? That still wouldn't prevent China from pursuing AI up until that happens.

I am still not convinced that there is a policy which can prevent AI from developing outside of the US with high probability.

reply
JoshTriplett
27 days ago
[-]
> I am still not convinced that there is a policy which can prevent AI from developing outside of the US with high probability.

Suppose, hypothetically, there was a very simple as-yet-unknown action, doable by anyone who has common unrestricted household chemicals, that would destroy the world. Suppose we know the general type of action, but not the specific action, yet. Suppose that people are actively researching trying actions in that family, and going "welp, world not destroyed yet, let's keep going".

How do you proceed? What do you do to stop that from happening? I'm hoping your answer isn't "decide there's no policy that can prevent this, give up".

reply
philomath_mn
27 days ago
[-]
Not a great analogy. If

- there were a range of expert opinions that P(destroy-the-world) < 100 AND

- the chemical could turn lead into gold AND

- the chemical would give you a militaristic advantage over your adversaries AND

- the US were in the race and could use the chemical to keep other people from making/using the chemical

Then I think we'd be in the same situation as we are with AI: stopping it isn't really a choice, we need to do the best we can with the hand we've been dealt.

reply
JoshTriplett
27 days ago
[-]
> there were a range of expert opinions that P(destroy-the-world) < 100

I would hope that it would not suffice to say "not a 100% chance of destroying the world". Because there's a wide range of expert opinions saying values in the 1-99% range (see https://en.wikipedia.org/wiki/P(doom) for sample values), and none of those values are even slightly acceptable.

But sure, by all means stipulate all the things you said; they're roughly accurate, and comparably discouraging. I think it's completely, deadly wrong to think that "race to find it" is safer than "stop everyone from finding it".

Right now, at least, the hardware necessary to do training runs is very expensive and produced in very few places. And the amount of power needed is large on an industrial-data-center scale. Let's start there. We're not yet at the point where someone in their basement can train a new frontier model. (They can run one, but not train one.)

reply
philomath_mn
27 days ago
[-]
> Let's start there

Ok, I can imagine a domestic policy like you describe. Through the might and force of the US government, I can see this happening in the US (after considerable effort).

But how do you enforce something like that globally? When I say "not really possible" I am leaving out "except by excessive force, up to and including outright war".

For the reasons I've mentioned above, lots of people around the world will want this technology. I haven't seen an argument for how we can guarantee that everyone will agree with your level of "acceptable" P(doom). So all we are left with is "bombing the datacenters", which, if your P(doom) is high enough, is internally consistent.

I guess what it comes down to is: my P(doom) for AI developed by the US is less than my P(doom) from the war we'd need to stop AI development globally.

reply
JoshTriplett
27 days ago
[-]
OK, it sounds like we've reached a useful crux. And, also, much appreciation for having a consistent argument that actually seriously considers the matter and seems to share the premise of "minimize P(doom)" (albeit by different means), rather than dismissing it; thank you. I think your conclusion follows from your premises, and I think your premises are incorrect. It sounds like you agree that my conclusion follows from my premises, and you think my premises are incorrect.

I don't consider the P(destruction of humanity) of stopping larger-than-current-state-of-the-art frontier model training (not all AI) to be higher than that of stopping the enrichment of uranium. (That does lead to conflict, but not the destruction of humanity.) In fact, I would argue that it could potentially be made lower, because enriched uranium is restricted on a hypocritical "we can have it but you can't" basis, while frontier AI training should be restricted on a "we're being extremely transparent about how we're making sure nobody's doing it here either" basis.

(There are also other communication steps that would be useful to take to make that more effective and easier, but those seem likely to be far less controversial.)

If I understand your argument correctly, it sounds like any one of three things would change your mind: either becoming convinced that P(destruction of humanity) from AI is higher than you think it is, or becoming convinced that P(destruction of humanity) from stopping larger-than-current-state-of-the-art frontier model training is lower than you think it is, or becoming convinced that nothing the US is doing is particularly more likely to be aligned (at the "don't destroy humanity" level) than anyone else.

I think all three of those things are, independently, true. I suspect that one notable point of disagreement might be the definition of "destruction of humanity", because I would argue it's much harder to do that with any standard conflict, whereas it's a default outcome of unaligned AGI. (I also think there are many, many, many levers available in international diplomacy before you get to open conflict.)

(And, vice versa, if I agreed that all three of those things were false, I'd agree with your conclusion.)

reply
int_19h
27 days ago
[-]
That's just not true, though. LLMs are the perfect spies and censors, and any totalitarian state worth its salt is going to want them just for this reason alone.
reply
philomath_mn
27 days ago
[-]
That is a massive bet based on the supposed psychology of a world super power.

There are many other less-superficial reasons why Beijing may be interested in AI, plus China may not trust that we actually banned our own AI development.

I wouldn't take that bet in a million years.

reply
hollerith
27 days ago
[-]
You seem to think that if we refuse this bet, you are somehow safe to live out the rest of your life. (If you are old, replace "you" with "your children".)

The discussion started when someone argued that even if this AI juggernaut were in fact very dangerous, there is no way to stop it. When I pushed back on the second part of that, you reject my push-back. On what basis? I hope it is not, "I just want things to keep on going the way they are," as if ignoring the AI danger somehow makes the AI danger go away.

reply
philomath_mn
27 days ago
[-]
No, I do not expect things to just work out. I just think our best chance is for the US to be a leader in AI development and hope that we're able to develop it safely.

I don't have a lot of confidence that this will be the case, but I think the US continuing to develop AI is the decision with the best distribution of possible outcomes.

reply
philomath_mn
27 days ago
[-]
Also, to be clear: I reject your pushback based on my understanding of the incentives/goals/interests of nation states like China.

This is completely separate from my personal preferences or hopes about the future of AI.

reply
JoshTriplett
27 days ago
[-]
> OP's point has nothing to do with this, OP's point is that you can't stop it.

So what is your solution? Give up and die? It's worth trying. If it buys us a few years that's a few more years to figure out alignment.

> The methods and materials are too diffuse and the biggest players (nation states) have a strong incentive to be first.

So there's a strong incentive to convince them "stop racing towards death".

> Do you really expect China to coordinate with the West on this?

Yes, there have been concrete examples of willingness towards doing so.

reply
philomath_mn
27 days ago
[-]
I think it is extremely unlikely we are going to be able to convince every interested party that they should give up the golden goose for the sake of possible calamity. I think there are risks here, not trying to minimize that, but the coordination problem becomes untenable when the risks/benefits are so large.

It is essentially the same problem as the atom bomb: it would have been better if we all agreed not to do it, but thats just not possible. Why should China trust the US or vice versa? Who wants to live in a world where your competitors have world-changing technology but you don't? But here we have a technology with immense militaristic and economic value, so the everyone-wants-it problem is even more pronounced.

I don't _like_ this, I just don't see how we can achieve an AI moratorium outside of bombing the data centers (which I also don't think is a good idea).

We need to choose the policy with the best distribution of possible outcomes:

- The US leads an effort to stop AI development: too much risk that other parties do it anyway

- The US continues to lead AI development: hope that P(takeoff) is low and that the good intentions of some US labs are able to achieve safe development

I prefer the latter -- this is far from the best hypothetical outcome, but I think it is the best we can do when constrained by reality.

reply
JoshTriplett
26 days ago
[-]
(The thread at https://news.ycombinator.com/item?id=43029149 is also responsive to this; linking it to avoid duplication.)
reply
hn_throwaway_99
27 days ago
[-]
Sorry to be a Debbie Downer, but I think the argument the commenter is making is "It's impossible to reliably restrict AI development", so safety-declarations, etc., are useless theater.

I don't think we're on "the cusp" of AGI, but I guess that just means I'm quibbling over the timeframe of what "cusp" means. I certainly think it's possible within the lifetime of people alive today, so whether it comes in 5 years or 75 years is kind of an insignificant detail.

And if AGI does get built, I agree there is a significant risk to humanity. And that makes me sad, but I also don't think there is anything that can be built to stop it, certainly not some useless agreements on paper.

reply
Nasrudith
27 days ago
[-]
The doomerism on AI is, frankly, barking madness: a lack of any sense of probability and scale, mixed with utterly batshit paranoia.

It is like living paralyzed in fear of every birth, for fear that random variance will produce one baby smarter than Einstein who will be capable of producing an infinite cascade of progressively smarter babies, and concluding that therefore we must stop all breeding. No matter how smart the baby super-Einstein winds up being, there is no unstoppable, unopposable omnicide mechanism. You can't theorem your way out of a paper bag.

reply
realce
27 days ago
[-]
The problem with your analogy is that these babies are HUMANS and not some distinctly different cyber-species. The basis of "human alignment" is that we all require basically the same conditions and environment in order to live, we all feel pain and pleasure, we all need food - that's what produces any amount of human cooperation. What's being feverishly developed is the seed of a different species that doesn't share those restrictions.

We've already found ourselves on a trajectory where un-employing millions or billions of people, without any system to protect them afterwards, is just accepted; that's simply the first of many steps down the destruction-of-empathy path that creating AI/AGI leads people along.

reply
r00fus
27 days ago
[-]
All this "AI safety" is purely moat-building for the likes of OpenAI et. al. to prevent upstarts like DeepSeek.

LLMs will not get us to AGI. Not even close. Altman talking about this danger is like Musk talking about driverless taxis.

reply
ryanackley
27 days ago
[-]
Half moat-building, half marketing. The need for "safety" implies some awesome power.

Don't get me wrong, they are impressive. I can see LLM's eventually enabling people to be 10x more productive in jobs that interact with a computer all day.

reply
bombcar
27 days ago
[-]
> The need for "safety" implies some awesome power.

This is a big part of it, and you can get others to do it for you.

It's like the drain cleaner sold in an extra bag. Obviously it must be the best, it's so scary they have to put it in a bag!

reply
r00fus
27 days ago
[-]
So it's a tool like the internal combustion engine, or the moveable typeset. Game-changing technology that may alter society but not dangerous like nukes.
reply
timewizard
27 days ago
[-]
> eventually enabling people to be 10x more productive in jobs that interact with a computer all day.

I doubt this. Productivity is gained through experience and expertise. If you don't know what you don't know, then the LLM is perfectly useless to you.

reply
ksynwa
26 days ago
[-]
Are you not alarmed by the startling discoveries made by the hard-at-work researchers where LLMs lie (when explicitly told to) and copy their source files (when explicitly told to)?
reply
z7
27 days ago
[-]
Waymo's driverless taxis are currently operating in San Francisco, Los Angeles and Phoenix.
reply
raincole
26 days ago
[-]
I am willing to bet that even when driverless taxis are operating in at least 50% of big cities around the world, you will still see comments like "auto driving is a pipe dream like NFT" on HN every other day.
reply
Spivak
26 days ago
[-]
This kind of hypocrisy only exists in a made-up person. Anyone saying that autonomous vehicles are still a ways away is not talking about the very impressive but very much semi-autonomous vehicles deployed today, but instead about vehicles that have no need for a human operator, ever. The kind you could buy off the shelf, switch into taxi mode, and let do their thing.

Semi-autonomous vehicles are impressive for the fact that one driver can now scale well beyond a single vehicle. Fully-autonomous vehicles are impressive because they can scale limitlessly. The former is evolutionary, the latter is revolutionary.

reply
hector126
26 days ago
[-]
Have we ever observed revolutionary change in tech which ran contrary to evolutionary change?

This seems like such an odd thing to expect will just "happen". Any other world-changing or impressive tech I'm familiar with has evolved to its current state over time; it's not like, when Jobs announced the iPhone and changed the game, there weren't decades of mobile computing whose shoulders it stood on. Are you talking about something like crypto?

It's admittedly a bit confusing what you're asking for here.

reply
ceejayoz
27 days ago
[-]
Notably, not Musk's, and very different promised functionality.
reply
hector126
26 days ago
[-]
What did Musk's promised driverless taxis provide that existing driverless taxis don't? The tech has arrived; it's a car that drives itself while the passenger sits in the back. Is the "gotcha" that the car isn't a Tesla?

Seems like we're splitting hairs a bit here.

reply
Mali-
26 days ago
[-]
He promised that you'd be able to turn your own Tesla into an autonomous taxi that would earn you money. That is a massive lie, not splitting hairs. Obviously, we're very desensitized to lying rats - but that's what he did.
reply
hector126
26 days ago
[-]
https://www.reuters.com/technology/tesla-robotaxis-by-june-m...

Looks like it's still in the works. Sometimes when technologists promise something their timelines end up being overly optimistic. I'm sure this isn't news to you.

By your language and past commentary though this seems like the kind of thing which elicits a pretty emotional response from you, so I'm not sure this would be a productive conversation either way.

reply
edanm
27 days ago
[-]
> All this "AI safety" is purely moat-building for the likes of OpenAI et. al. to prevent upstarts like DeepSeek.

Modern AI safety originated with people like Eliezer Yudkowsky, Nick Bostrom, the LessWrong/rationality movement etc.

They very much were not just talking about it only to build moats for OpenAI. For one thing, OpenAI didn't exist at the time, AI was not anywhere close to where it is today, and almost everyone thought their arguments were ridiculous.

You might not agree with them, but you can't simply dismiss their arguments as only being there to prop up the existing AI players, that's wrong and disingenuous.

reply
z3n0n
27 days ago
[-]
entertaining podcast about all this "rationalist" weirdness: https://www.youtube.com/watch?v=e57oo7AgrJY
reply
yodsanklai
27 days ago
[-]
I'd say AGI is like Musk talking about interstellar traveling.
reply
anon291
27 days ago
[-]
> LLMs will not get us to AGI. Not even close. Altman talking about this danger is like Musk talking about driverless taxis.

AGI is a meaningless term. The LLM architecture has shown promise in every single domain where perceptron neural networks were once used. By all accounts, on those things that fit their 'senses', LLMs are significantly smarter than the average human being.

reply
wand3r
27 days ago
[-]
Driverless taxis already exist?
reply
elric
26 days ago
[-]
The way I see "safety" isn't really about what AI "can do", but about how we allow it to be used. E.g. an AI that's used to assess an insurance claim should be fully explainable so we know it isn't using racist biases to deny claims based on skin colour. If the AI can't give that guarantee, it isn't fit for purpose and its use shouldn't be allowed.
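
To make that concrete, a check of the kind I have in mind might look like the toy sketch below: a disparate-impact test over a model's claim decisions. The column names and the 0.8 "four-fifths rule" threshold are my own illustrative assumptions, not anything from a regulation:

    # Toy disparate-impact check on an AI claim-assessment model's decisions.
    # Column names and the 0.8 threshold (the common "four-fifths rule") are
    # illustrative assumptions only.
    import pandas as pd

    def disparate_impact(df, group_col="group", decision_col="approved"):
        # Approval rate of the model's decisions per demographic group.
        rates = df.groupby(group_col)[decision_col].mean()
        # Ratio of the least-approved group to the most-approved group.
        return rates.min() / rates.max()

    decisions = pd.DataFrame({
        "group":    ["a", "a", "a", "b", "b", "b"],
        "approved": [1,   1,   0,   1,   0,   0],
    })
    ratio = disparate_impact(decisions)
    print(f"impact ratio: {ratio:.2f}, passes four-fifths rule: {ratio >= 0.8}")

A low ratio doesn't prove racist bias on its own, but it's exactly the kind of red flag an explainability requirement would force an insurer to account for.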

Same with killer robots (or whatever it is people are afraid of when they talk about "AI safety"). As long as we can control who they kill, when, and why, there's no real difference with any other weapon system. If that control is even slightly in doubt: it's not fit for service.

Does this mean that bullshit-generating LLMs aren't fit for service in many areas? It probably does. But maybe steps can be taken to mitigate risks.

I'm sure this will involve some bureaucratic overhead. But it seems worth the hassle to me.

Being against AI Safety is a stupid hill to die on. Being against some concrete declaration or a part thereof, sure, that might make sense. But this smells a lot like the tobacco industry being against warnings/filters/low-tar, or the car industry being anti-seatbelt.

reply
jstummbillig
26 days ago
[-]
How does your theory account for the Eliezer Yudkowsky type person, who clearly shows no love for any of the labs or the current progress, and yet is very much pro-"AI safety"?
reply
MichaelDickens
26 days ago
[-]
If AI safety is moat-building for American companies, why doesn't JD Vance support it, since it would allegedly benefit American interests?
reply
worik
27 days ago
[-]
> LLMs will not get us to AGI

Yes.

And there is no reason to think that AGI would have desire.

I think people are reading themselves into their fears.

reply
Tossrock
27 days ago
[-]
There is evidence that as LLMs increase in scale, their preferences become more coherent; see Hendrycks et al. 2025, summarized at https://www.emergent-values.ai/
reply
anon291
27 days ago
[-]
A preference is meaningless without consciousness and qualia.
reply
int_19h
27 days ago
[-]
Consider a philosophical doomsday AI: it behaves exactly as if it has a desire to harm you (meaning that it does!), but it doesn't actually want to do that.

Or we can just drop all this sophistry nonsense.

reply
anon291
26 days ago
[-]
Look, I understand being concerned about robot and self-driving safety. AI is a nebulous term and we don't need to restrict anyone training AIs.
reply
realce
27 days ago
[-]
> And there is no reason to think that AGI would have desire.

The entire point of utilizing this tool is to feed it a desire and have it produce an appropriate output based upon that desire. Not only that, its entire training corpus is filled with examples of our human desires. So either humans give it desire, or it trains itself to function based on the inertia of "goal-seeking" - which are effectively the same thing.

reply
amelius
27 days ago
[-]
I wouldn't be surprised if EU has their own competitor within a year or so.
reply
tucnak
27 days ago
[-]
Mistral exists
reply
IshKebab
27 days ago
[-]
To OpenAI? The closest was DeepMind but that's owned by Google now.
reply
mattlondon
27 days ago
[-]
Owned by Google yes, but it is headquartered in London, with the majority of the staff there.

So the skills, knowledge, and expertise are in the UK. Google could close the UK office tomorrow if they wanted to, sure, but are 100% of those staff going to move to California? Doubt it. Some will, but a lot have lives in the UK (not least the CEO and founder etc), so even if Google pulls the rug I will bet there will be a new company founded and funded within days that will vacuum up all the staff.

reply
tfsh
27 days ago
[-]
But will this company be British or European? I'd love to think so, but somehow I doubt it. There just isn't the money in UK tech; the highest-paid tech jobs (other than big tech) are at elite hedge funds, but those get by with minimal headcount.
reply
amelius
27 days ago
[-]
Well, DeepSeek open-sourced their model and published their algorithm. It may take a while before it is reproduced, but if they start an initiative and get the funding in place it'll probably be sooner rather than later.
reply
KeplerBoy
26 days ago
[-]
It's absolutely not?

DeepSeek and efforts by other non-aligned powers wouldn't care about any declarations signed by the EU, the US and other western powers anyway.

reply
piker
26 days ago
[-]
Agree or disagree with the premise, no doubt this sort of declaration grounds export controls and other regulations which do inhibit development extra-jurisdictionally.
reply
jcarrano
27 days ago
[-]
When you are the dominant world power, you just don't let others determine your strategy, as simple as that.

Attempts at curbing AI will come from those who are losing the race. There's this interview where Edward Teller recalls how the USSR used a moratorium on nuclear testing to catch up with the US on the hydrogen bomb, and how he was the one telling the idealist scientists that that was going to happen.

reply
briankelly
27 days ago
[-]
I read in Supermen (book on Cray) that the test moratorium was a strategic advantage for the US since labs here could simulate nuclear weapons using HPC systems.
reply
jcarrano
26 days ago
[-]
I was referring to the 1958 moratorium; I'd be surprised if they could simulate weapons with the computers of the time. Here [1] is the clip from Teller's interview.

In another clip he says that he believes it was inevitable that the Soviets would come up with an H-bomb on their own.

[1] https://www.youtube.com/watch?v=zx1JTLrhbnI&list=PLVV0r6CmEs...

reply
briankelly
26 days ago
[-]
Ok yeah this was a treaty that came much later - didn’t realize there were multiple agreements made over the years.
reply
aucisson_masque
27 days ago
[-]
But how long is the US even going to be dominant?

It's well known that China has long caught up with the US in almost every way, and it is on the verge of surpassing it in the rest. Just look at DeepSeek, as efficient as OpenAI for a fraction of the cost. Baidu, Alibaba AI and so on.

China's economy remains on track to surpass that of the United States within just five years, and yet they did sign that agreement.

In fact most countries did. India too.

It's not a case of the loser making new rules; it's the big boys discussing how they are going to handle the situation, and the foolish ones thinking they are too good for that.

reply
hector126
26 days ago
[-]
> China's economy remains on track to surpass that of the United States within just five years, and yet they did sign that agreement.

I'd be very happy to take a high-stakes, long-term bet with you if that's your earnest position.

reply
Axsuul
26 days ago
[-]
> China's economy remains on track to surpass that of the United States within just five years, and yet they did sign that agreement.

Are you actually saying this in the year 2025?

reply
Aurornis
27 days ago
[-]
> China's economy remains on track to surpass that of the United States within just five years, and yet they did sign that agreement.

China has signed on to many international agreements that it has absolutely no interest in following or enforcing.

Intellectual property is the most well known. They’re party to various international patent agreements but good luck trying to get them to enforce anything for you as a foreigner.

reply
cutemonster
25 days ago
[-]
> and yet they did sign that agreement.

Of course they aren't going to follow it, just sign it. They're bright people

reply
cakealert
26 days ago
[-]
This is the correct answer.

However the attempts are token and they know it too. Just an attempt to appear to be doing something for the naive information consumers, aka useful idiots.

reply
oceanplexian
27 days ago
[-]
I missed the boat on the 80s but as a “hacker” who made it through the 90s and 00s there’s something deeply sad and disturbing about how the conversation around AI is trending.

Imagine telling hackers from the past that people on a website called "hacker news" would be arguing about how important it is that the government criminalize running code on your own computer. It's so astoundingly, ethically, philosophically opposed to everything that inspired me to get into computers in the first place. I can only wonder if people really believe this, or if it's a sophisticated narrative that's convenient to certain corporations and politicians.

reply
buildfocus
26 days ago
[-]
> Imaging telling hackers from the past that people on a website called “hacker news” would be arguing about how important it is that the government criminalize running code on your own computer.

My understanding is that approximately zero government-level safety discussion is about restricting just building and running AI yourself. There are no limits on AI hacking even in the EU AI regulation or the discussions I've seen.

Regulation is around business & government applications and practical use cases: no unaccountable AI making final employment decisions, no widespread facial recognition in public spaces, transparency requirements for AI usage in high-risk areas (health, education, justice), no AIs with guns, etc.

reply
lolinder
27 days ago
[-]
Who is saying this? Do you have specific comments in mind that you're referring to? I can't find anything anywhere near the top that says anything like this.
reply
bmc7505
26 days ago
[-]
reply
entropi
26 days ago
[-]
I would say the debate currently going on is less about "running code on your own machine" and more about "making sure the thing you are replacing at least a portion of your labor force with is somewhat dependable, and that those who benefit from the replacement are still responsible".
reply
TomK32
26 days ago
[-]
I think management is putting too much hope into this; any negative outcome from replacing a human with AI might result in liabilities surpassing the savings. The Air Canada chatbot case was decided just a year ago, and I'm sure the hallucinating chatbot, from development to legal fees, cost the airline more money than it saved in their call center.
reply
TomK32
26 days ago
[-]
Is there any source for your claim that any (democratic) government wants to criminalize running code on your own computer? I didn't see it in this declaration from the AI Action summit, where the USA and UK are missing from the signatories: https://www.politico.eu/wp-content/uploads/2025/02/11/02-11-...

As you mention ethics: what ethics do we apply to AI? None? Some? The same as to a human? As AI is replacing humans in decision-making, it needs to be held responsible just as a human would be.

reply
charles_f
26 days ago
[-]
AI development is not led by hackers in their garages, but by multi-billion corporations with no incentives other than profit and no care other than their shareholders. The only way to control their negative outcomes in this system is regulation.

If you explained to that hacker that govs and corps would leverage that same technology to spy on everyone and control their lives because line must go up, they might understand better than anyone else why this needs to be sabotaged early in the process.

reply
jart
26 days ago
[-]
Get involved in the local AI community. You're more likely to find people with whom you share affinity on places like r/LocalLLaMA. There's also the e/acc movement on Twitter which espouses the same gen x style rebellious libertarian ideals that once dominated the Internet. Stay away from discussions that attract policy larping.
reply
knodi
27 days ago
[-]
I think you're missing the point. People are saying that government should make sure AI is not weaponized against the people of the world. But let's face it, the US and UK governments will likely be the first to weaponize it against the people.

As DeepSeek has shown us, progress is hard to hinder unless you go to war and kill people...

reply
01100011
26 days ago
[-]
This isn't "news for hackers only". Hacker News is more appropriately described as "a news aggregator and discussion board frequented by those in IT and programming". But that doesn't sound so cool, no? "Slashdot 2.0" also doesn't sound so great.
reply
ExoticPearTree
27 days ago
[-]
Most likely the countries who will have unconstrained AGIs will get to advance technologically by leaps and bounds. And those who constrain it will remain in the "stone age" when it comes to it.
reply
_Algernon_
27 days ago
[-]
Assuming AGI doesn't lead to an instant apocalyptic scenario, it is more likely to lead to a form of resource curse [1] than to anything that benefits the majority. In general, countries where the elite depends on the labor of the people for its income have better outcomes for the majority than countries where it doesn't (see, for example, developing countries with rich oil reserves).

What would AGI lead to? Most knowledge work would be replaced the same way manufacturing work has been, and AGI would be in the control of the existing elite. It would be used to suppress any revolt for eternity, because surveillance could be perfectly automated and omnipresent.

Really not something to aspire to.

[1]: https://en.wikipedia.org/wiki/Resource_curse

reply
emsign
27 days ago
[-]
That's a valid concern. The theory that the population only gets education, health care, human rights and so on if those people are actually needed for the rulers to stay in power is valid. The whole idea of AGIs replacing bureaucrats, which for example DOGE is betting on being successful with, is already axing people's livelihoods and purpose in life. Why train government workers, why spend money on education, training, health care plans, if you have an old nuclear plant powering your silicon farms?

If the rich need fewer and fewer educated, healthy and well-fed workers, then more and more people will get treated like shit. We are currently heading in that direction at full speed. The rich aren't even bothering to hide this from the public anymore because they think they have won the game and can't be overruled anymore. Let's hope there will still be elections in four years and MAGA doesn't rig them like Fidesz did in Hungary and so many other countries that have fallen into the hands of the internationalist oligarchy.

reply
alexashka
27 days ago
[-]
> If the rich need fewer and fewer educated, healthy and well-fed workers, then more and more people will get treated like shit

Maybe. I think it's a matter of culture.

Very few people mistreat their dogs and cats in wealthy countries. Why shouldn't people in power treat regular people at least as well as regular folks treat their pets?

I'm no history buff, but my hunch is that mistreatment of people largely came from fear: if I don't engage in cruelty to maximize power, my opponents will, and given that they're cruel, they'll be cruel to me when they come to take over.

So we end up with this zero-sum game of squeezing people, animals, resources and the planet in an arms race, because everyone's afraid to lose.

In the past you couldn't be sure if someone else was building up an army, so you had to build up an army. But now that we have satellites and can largely track everything, we can actually agree not to engage in this zero-sum dynamic.

There will be a shift from treating people as means to an end of power accumulation and containment, to treating people as something you just inherently like and would like to see prosper.

It'll be a shift away from this deeply corrosive idea of never ending competition and growth. When people's basic needs are met and no one is grouping up to take other people's goodies - why should regular people compete with one another?

They shouldn't and they won't. People who want to do good work will do so and improving the lives of people worldwide will be its own reward. Private islands, bunkers and yachts will become incomprehensible because there'll be no serf class to service any of it. We'll go back to if you want to be well liked and respected - you have to be a good person. I look forward to it :)

reply
sophacles
27 days ago
[-]
> Very few people mistreat their dogs and cats in wealthy countries. Why shouldn't people in power treat regular people at least as well as regular folks treat their pets?

Because very few regular people will be their pets. These are the people who do everything in their power to pay their employees less. They treat their non-pets horribly... see feed lots and amazon warehouses. They actively campaign against programs which treat anyone well, particularly those who they aren't extracting wealth from. They whine and moan and cry about rules that protect people from getting sick and injured because helping those people would prevent them from earning a bit more profit.

They may spend a pile of money on surgery for their bunny, but if you want them to behave nicely to someone else's pet, or even someone else... well that's where they draw the line.

I guess you are hoping to be one of those pets... but what makes you think you're qualified for that, and why would you be willing to sacrifice all of your friends and family to the fate of feral dogs for the chance to be a pet?

reply
alexashka
27 days ago
[-]
We're talking past one another.

I'm not suggesting that a few people become rich people's pets.

I'm saying it isn't inherent in human nature to mistreat other conscious beings.

If people were normally cruel to their pets, then I'd say yeah, this species just seems to enjoy cruelty and it is what it is. But we aren't cruel to pets, so the cruelty to workers does not stem from human nature, but other factors.

I gave one theory as to what causes the cruelty and that I'm somewhat optimistic that it'll run its course in due time.

Anyhoo :)

reply
dennis_jeeves2
20 days ago
[-]
>I'm saying it isn't inherent in human nature to mistreat other conscious beings.

I suspect (but am not sure) it may be more inherent than is generally believed. Then again, simple physical pain is easy to gauge and relate to, while passing some law is an academic process where one can distance oneself from its ill effects on those subject to it. Then there are more variations - people will be tribalistic, kind to their own kind/family even to the point of sacrifice, but murderous to someone of another tribe. All in all, the average human is a bundle of contradictions if you analyze simplistically, though it makes good sense when viewed through the lens of evolutionary biology.

reply
cutemonster
25 days ago
[-]
> isn't inherent in human nature to mistreat other conscious beings.

Killer ape theory

https://en.m.wikipedia.org/wiki/Killer_ape_theory

reply
rwmj
27 days ago
[-]
You've never met a rich person who mistreats their maid but dotes on their puppy?
reply
alexashka
27 days ago
[-]
You've refuted my entire argument. I am that stupid and you are that smart :)
reply
Loughla
26 days ago
[-]
History just flies in the face of what you've said. We've done monarchies and serfdoms in the past. If the king is good, life is good. But you better hope they're good forever because if the king is bad, it's really bad.

Also, I would rather die than be someone's pet. Why should I have to rely on the charity and good humor of a ruler? Pardon my language but Fuck that entirely.

reply
criley2
26 days ago
[-]
A few thousand rich people don't need 8 billion pets.

"Maintain humanity under 500,000,000 in perpetual balance with nature" (Georgia Guidestones, a monument describing someone's ideal future that is surrounded by conspiracy)

Will you be one of the 500 million pets? Or one of the 7.5 billion leftovers? Sure would be a shame if billions became unnecessary and then a pandemic 100x worse than covid got lab-leaked.

reply
ExoticPearTree
27 days ago
[-]
Come on, let's be real: all governments are bloated with bureaucrats, and what DOGE is doing, albeit in Musk style, is trimming the fat a little bit.

You can't seriously claim they are upending people's jobs when those jobs were BS in the first place.

reply
Loughla
26 days ago
[-]
What you're doing is using your opinion as just plain facts. It's easy to do with government and politics, but it's not very useful overall.
reply
ExoticPearTree
27 days ago
[-]
I see it as everyone having access to an AI so they can iterate very fast through ideas. Or do research at a level not possible now in terms of speed.

Or, my favorite outcome, the AI to iterate over itself and develop its own hardware and so on.

reply
randerson
27 days ago
[-]
Hackers would be our only hope of a revolution.
reply
crvdgc
26 days ago
[-]
Andrew Yang: Data is the new oil.

Sociology: The Oil Curse

reply
daedrdev
27 days ago
[-]
I mean, that itself is a hotly debated idea. From your own link: "As of at least 2024, there is no academic consensus on the effect of resource abundance on economic development."

For example, the US is probably the most resource-rich country in the world, but people don't consider it for the resource curse because the rest of its economy is so huge.

reply
Night_Thastus
27 days ago
[-]
I don't see any point in speculating about a technology that doesn't exist and that LLMs will never become.

Could it exist some day? Certainly. But current 'AI' will never become AGI; there's no path forward.

reply
stackedinserter
27 days ago
[-]
It probably doesn't have to be an AGI that does tricks like passing Turing test v2. It could be an LLM with a context window of 30GB that can outsmart your rival in geopolitics, economics and policy.
reply
wordpad25
27 days ago
[-]
With LLMs able to generate infinite synthetic data to train on, it seems like AGI is just around the corner.
reply
contagiousflow
27 days ago
[-]
Whoever told you this is a path forward lied to you
reply
eikenberry
27 days ago
[-]
IMO we should focus on the AI systems we have today and not worry about the possibility of AGI coming anytime soon. All indicators are that it is not.
reply
hackinthebochs
27 days ago
[-]
>All indicators are that it is not.

What indicators are these?

reply
mitthrowaway2
27 days ago
[-]
Focusing on your own feet proved to be near-sighted to a fault in 2022; how sure are you that it is adequately future-proofed in 2025?
reply
eikenberry
27 days ago
[-]
Focusing on the clouds is no better.
reply
thrance
26 days ago
[-]
Perhaps, but meanwhile making it legal to put racial-profiling AI tech in the hands of government and corporations does a great disservice to your freedom and privacy. Do not buy the narrative: EU regulations are not about forbidding AGI, they're about ensuring a minimum of decency in how the tech is allowed to exist. Something Americans seem deathly allergic to.
reply
ijidak
27 days ago
[-]
It's so interesting that much of this is playing out like the movie The Creator, where New Asia embraces AI and robotics and the western world doesn't.

Here we are, a couple of years later, truly musing about sectors of the world embracing AI and others not.

That sort of piecemeal adoption was predictable, but not that we'd be here having this debate this soon!

reply
emsign
27 days ago
[-]
Or maybe those countries' economies will collapse once they let AGIs control institutions instead of human bureaucrats, because the AGIs are doing their own thing and tricking the government with alignment faking and in-context scheming.
reply
CamperBob2
27 days ago
[-]
Eh, I'm not impressed with the humans who are running things lately. I say we give HAL a shot.
reply
timewizard
27 days ago
[-]
Or it will be viewed like nuclear weapons and those who have it will be bombed by those who don't.

These are all Silicon Valley "neck thoughts." They're entirely uninformed by the current state of the world and any travels through it. They're fantasies brought about by people with purely monetary desires.

It'd be funny if there weren't billions of dollars being burnt to market this crap.

reply
sschueller
27 days ago
[-]
Those countries with unrestricted AGI will be the ones letting AI decide if you live or die depending on cost savings for shareholders...
reply
ExoticPearTree
27 days ago
[-]
Not if Skynet emerges first and we all die :))

Every technological advancement can be used for good or bad. I believe it is going to be good to have a true AI available at our fingertips.

reply
mdhb
27 days ago
[-]
Ok, but what led you to that particular belief in the first place?

Because I can think of a large number of historical scenarios where malicious people got access to certain capabilities and it absolutely did not go well, and you somehow have to account for the fact that this is a real thing that is going to happen.

reply
ExoticPearTree
27 days ago
[-]
I think there are fewer malicious people today than in the past. And considering that most people will use AI for good, there is a good chance that the bad people will be easier to identify.
reply
mdhb
27 days ago
[-]
Is this just a gut feeling or are there some specific reasons for why you think this?
reply
ExoticPearTree
27 days ago
[-]
Gut feeling.
reply
cess11
27 days ago
[-]
Why do you think that? There's more people than ever and it's easier than ever for the ones with malicious impulses to find and communicate with each other.

For example, several governments are actively engaged in a live streamed genocide and nothing akin to the 1789 revolt in Paris seems to be underway.

reply
vladms
27 days ago
[-]
And several revolutions are underway (simple examples: Myanmar and Syria). And in Syria, "the previous government" lost.

The 1789 revolution was one of many (https://en.wikipedia.org/wiki/List_of_peasant_revolts), and it was not fought because of a genocide of other people; it was due to internal problems.

reply
cess11
26 days ago
[-]
The new regime in Syria is rather reactionary, I'd say. Rojava is a revolution however.

Sure. The ancien régime was considered illegitimate, so they got rid of it; and since the Holocaust, a state involved in genocide is considered illegitimate and should lose its sovereignty.

reply
ta1243
27 days ago
[-]
Those are "Death Panels", and only exist in places like the US where commercial needs run your health care
reply
snickerbockers
27 days ago
[-]
Canada had a case a couple of years ago where a disabled person wanted Canadian medicare to pay for a wheelchair ramp in her house, and they instead referred her to their assisted suicide program.
reply
milesrout
27 days ago
[-]
Did they use AI to do it?
reply
ta1243
26 days ago
[-]
> An investigation by VAC found four cases “isolated to a single employee who is no longer an employee of the Department” of assisted dying being brought up inappropriately to veterans.
reply
Imnimo
27 days ago
[-]
Am I right in understanding that this "declaration" is not a commitment to do anything specific? I don't really understand why it matters who does or does not sign it.
reply
flanked-evergl
26 days ago
[-]
The children running the European countries (into the ground) like this kind of theatre because they can pretend to be doing something productive without having to think.
reply
karaterobot
27 days ago
[-]
Yep, it's got all the force of a New Year's resolution. It does not appear to be much more specific than one, either. It's about a page and a half long, the list of countries is as long as the declaration itself, and it basically says "we talked about how we won't do anything bad".
reply
amai
26 days ago
[-]
If it's without commitment, then why not sign it?
reply
sva_
27 days ago
[-]
Diplomatic theater, justification to get/keep more bureaucrats on the payroll
reply
layer8
27 days ago
[-]
It’s an indication of the values shared, or in this case, not shared.
reply
puff_pastry
27 days ago
[-]
They’re right, the declaration is useless and it’s just an exercise in futility
reply
seydor
27 days ago
[-]
Europe just loves signing declarations and concerned letters. It would make no difference if they signed it.
reply
swyx
27 days ago
[-]
leading in ai safety theater is actually worse than leading in ai, because real leadership in ai safety just comes from leading in ai, period
reply
llm_trw
26 days ago
[-]
Leading in AI safety is leading in AI lobotomization.
reply
flanked-evergl
26 days ago
[-]
European leaders are overgrown children. It's kind of pathetic.
reply
lupusreal
26 days ago
[-]
These kinds of "don't be evil" declarations are typically meaningless gestures by which non-players, who weren't going to be participating anyway, can posture as morally superior while having no meaningful impact on the course of things. See also the Ottawa Treaty; non-signatories include the US, China, Russia, Pakistan and India, Egypt, Israel, Iran, Cuba, North and South Korea... in other words, all the countries from which landmine use is expected in the first place. And when push comes to shove, signatories like Ukraine will use landmines anyway, because national defense is worth more than feeling morally superior for adhering to a piece of paper.
reply
junto
26 days ago
[-]
> “ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all.”

That was never going to fly with the current U.S. administration. Not only is the word inclusive in there but ethical and trustworthy as well.

Joking aside, I genuinely don’t understand the “job creation” claims of JD Vance in his dinner speech in Paris.

Long-term, I just can't imagine what the United States will look like when 75% of the population is both superfluous and a burden to society.

If this happens fast, society will crumble. Sheep are best kept busy grazing.

reply
Spooky23
26 days ago
[-]
The audience is JD’s masters and whomever we are menacing today.

The voters are locked-in idiots and don't have agency at the moment. The bet from Musk, Thiel, etc. is that AI is as powerful and strategic as nuclear weapons were in 1947 - that's what the Musk administration's diplomacy seems to be like.

reply
ToucanLoucan
26 days ago
[-]
I mean it feels like a joke, but their "policy ideas" really do basically boil down to targeting anything with the wrong words in it. I read somewhere they're creating havoc right now over a critical intelligence function called "privilege escalation", related to raising the security clearance of personnel, which has been mired in stupid culture-war controversy because it has the word privilege in it.
reply
sagarpatil
26 days ago
[-]
The fundamental issue with these AI safety declarations is that they completely ignore game theory. The technology has already proliferated (see: DeepSeek, Qwen) and trying to control it through international agreements is like trying to control cryptography in the 90s.

I've spent enough time building with these models to see their transformative potential. The productivity gains aren't marginal - they're exponential. And this is just with current-gen models.

China's approach is particularly telling. While they lack the massive compute infrastructure of US tech giants, their research output is impressive. Their models may be smaller, but they're remarkably efficient. Look at DeepSeek's performance-to-parameter ratio.

The upside potential is simply too large to ignore. We're seeing breakthroughs in protein folding that would take traditional methods decades. Education is being personalized at scale. The translation capabilities alone are revolutionary.

The reality is that AI development will continue regardless of these declarations. The optimal strategy isn't to slow down - it's to maintain the lead while developing safety measures in parallel. Everything else is just security theater.

(And yes, I've read the usual arguments about x-risk. The bottleneck isn't safety frameworks - it's compute and data quality.)

reply
mytailorisrich
27 days ago
[-]
This declaration is just hand-waving.

Europe is hopeless so it does not make a difference. China can sign and ignore it so it does not make a difference.

But it would not be wise for the USA to have their hands tied up so early. I suppose the UK wants to go their usual "lighter-touch regulation than the EU" route to attract investment. Plus they are obviously trying hard to make friends with the new US administration.

reply
bostik
27 days ago
[-]
> I suppose the UK wants to go their usual "lighter-touch regulation than the EU" route to attract investment.

Not just that. A speaker at a conference I attended about a month ago mentioned that the UK is actively drifting away from the EU's stance, particularly on the aspect of AI safety in practice.

The upcoming European AI Act has "machines must not make material decisions" as its cornerstone. The UK is hell-bent on getting AI into government functions, ostensibly to make everything more efficient. As part of that drive, the UK is aiming to allow AI to make material decisions without human review or recourse. In a country still in the throes of the Post Office / Horizon scandal, that really takes some nerve.

Those in charge in this country know fully well that "AI safety" will be in violent conflict with the above.

reply
tim333
27 days ago
[-]
I'm someone who generally sees no benefit from Brexit but I think being able to crack on with AI without EU regulation is a benefit.
reply
phatfish
27 days ago
[-]
It's nothing to do with the EU, or a "regulation".
reply
tim333
27 days ago
[-]
reply
phatfish
26 days ago
[-]
Not really sure what to say to this. It's a screenshot (from the tech-bro-in-chief no less) of a ChatGPT response, no prompt included. We are discussing a current event.

As an attempt at a response, the UK is not party to the "EU AI Act" or the "DMA/DSA", we left before they were passed as law in the EU. The UK has its own "Digital Markets Act", but it is not an EU regulation. The GDPR is an inherited EU regulation.

The AI summit was French-led, to get a global consensus, it looks like, on what sort of AI protections should be in place. The declaration was specific to this summit.

So, nothing to do with the EU, not a regulation.

reply
tim333
26 days ago
[-]
I'm not sure what to make of it either; not my area, but there's this sort of thing that happens: https://www.euronews.com/next/2024/07/18/meta-stops-eu-roll-...
reply
phatfish
25 days ago
[-]
Meta and other data mining businesses are constantly complaining about the GDPR because it limits their ability to steal personal information and they have to comply with data residency, consent and deletion regulations. They also "play dumb" and blame compliance with a regulation they want gone, or make a big deal out of compliance to turn public opinion.

When there is a difference in regulation between major economies there may be an advantage to be had, but my feeling is that the GDPR (or similar) is not the main reason European tech companies are unable to compete with the US. There is no equivalent of Silicon Valley in Europe that combines talent and investors in one place.

It's a hard problem to solve when Europe is made up of multiple countries and cultures, even if the EU has aligned some key things.

reply
phatfish
27 days ago
[-]
It's just a means for the UK to fluff Trump in a way that doesn't annoy the Europeans (or anyone else) that much. Nothing about this is legally binding or could be called a "regulation".
reply
mtkd
27 days ago
[-]
Given what is potentially at stake if you're not the first nation to achieve ASI, it's a little late to start imposing any restrictions or adding distractions

Similarly, whoever first gains the most training and fine-tuning data, from whatever source via whatever means, will likely be at an advantage

Hard to see how that toothpaste goes back in the tube now

reply
tnt128
27 days ago
[-]
An AI arms race will be how we make Skynet a reality.

If an enemy state gives AI autonomous control and gains massive combat effectiveness, it puts pressure on other countries to do the same.

No one wants Skynet. But if we continue on the current path, painting the world as us vs. them, I'm fearful Skynet is what we'll get.

reply
bluescrn
27 days ago
[-]
If a rogue AI could take direct control of weapons systems, then so could a human hacker - and we've got bigger problems than just 'AI safety'.
reply
Mystery-Machine
26 days ago
[-]
The difference is that we're not giving hackers direct access to weapons systems, while on the other hand, militaries are actively trying to use AI to control weapons directly.
reply
llm_trw
26 days ago
[-]
Twitch plays thermonuclear war.
reply
jameslk
27 days ago
[-]
What benefit do these AI regulations provide to progressing AI/AGI development? Do they slow down progress? If so, how do the countries that intend to enforce these regulations plan to compete on AI/AGI with countries that don’t have these regulations?
reply
FloorEgg
27 days ago
[-]
What exactly is the letter declaring? There are so many interpretations of "AI safety", most of which don't actually have anything to do with maximizing the distribution of societal and ecosystem prosperity or minimizing the likelihood of destruction or suffering. In fact, some concepts of AI safety I have seen are doublespeak for rules that are more likely to lead to AI-imposed tyranny.

Where is the nuanced discussion of what we want and don't want AI to do as a society?

These details matter, and working through them collectively is progress, in stark contrast to getting dragged into identity politics arguments.

- I want AI to increase my freedom to do more and spend more time doing things I find meaningful and rewarding.
- I want AI to help us repair damage we have done to ecosystems and reverse species diversity collapse.
- I want AI to allow me to consume more in a completely sustainable way for me and the environment.
- I want AI that is an excellent and honest curator of truth, both in terms of accurate descriptions of the past and nuanced explanations of how reality works.
- I want AI that elegantly supports a diversity of values, so I can live how I want and others can live how they want.
- I don't want AI that forcefully and arbitrarily limits my freedoms.
- I don't want AI that forcefully imposes other people's values on me (or imposes my values on others).
- I don't want AI war that destroys our civilization and creates chaos.
- I don't want AI that causes unnecessary suffering.
- I don't want other people to use AI to tyrannize me or anyone else.

How about instead of being so broadly generic about "AI safety" declarations, we get specific, and then ask people to make specific commitments in kind? Then it would be a lot more meaningful when they refuse, or when they oblige and then break them.

reply
openrisk
27 days ago
[-]
Hard to know how significant this is, because it's impossible to know what the political class (and many others) mean by "AI" (and thus its potential risks). This is not new; there were similar charades a few years ago around "blockchain" etc.

But ignoring the signaling going on on various sides would be a mistake. "AI" is for all practical purposes a synonym for algorithmic decision-making, with potential direct implications for people's lives. Without accountability, transparency, recourse etc., the unchecked expansion of "AI" in various use cases represents a significant regression for historically established rights. In this respect the direction of travel is clear: the US is dismantling the CFPB, even more deregulation (if that is at all possible) is coming, big tech will be trusted to continue "self-regulating", etc.

The interesting part is the UK stance. Somewhere in between the US and the EU in terms of citizen/consumer protections, but despite Brexit probably closer to the latter, this siding with dog-eat-dog deregulation might signal an anxiety not to be left behind.

reply
iceman2654
25 days ago
[-]
For those wondering, here is the meat of the declaration:

Promoting AI accessibility to reduce digital divides;

Ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all

Making innovation in AI thrive by enabling conditions for its development and avoiding market concentration, driving industrial recovery and development

Encouraging AI deployment that positively shapes the future of work and labour markets and delivers opportunity for sustainable growth

Making AI sustainable for people and the planet

Reinforcing international cooperation to promote coordination in international governance

https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statemen...

reply
anon291
27 days ago
[-]
The world is the world. Today is today. Tomorrow is tomorrow.

You cannot face the world with how you want it to be, but only as it is.

What we know today is that a relatively straightforward series of matrix multiplications leads to what is perceived to be intelligence. This is simply true no matter how many declarations one signs.

Given that this is the case, there is nothing left to be done unless we want to go full Butlerian Jihad

reply
llm_trw
26 days ago
[-]
I resent that.

There are a few non-linear function operations in between the matrix multiplications.

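For the curious, a minimal sketch of that shape of computation (a toy two-layer network with made-up sizes, nothing model-specific):

    # Two matrix multiplications with a nonlinearity (ReLU) in between.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 8))     # input row vector
    W1 = rng.normal(size=(8, 16))   # first weight matrix
    W2 = rng.normal(size=(16, 4))   # second weight matrix

    h = np.maximum(0.0, x @ W1)     # matmul, then the nonlinear part
    y = h @ W2                      # another matmul
    print(y.shape)                  # -> (1, 4)

Without the nonlinearity, the two matmuls would collapse into one linear map, which is the whole point of having it in there.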
reply
beardyw
27 days ago
[-]
DeepMind has its headquarters and most of its staff in London.
reply
graemep
27 days ago
[-]
and what is the other country that refused to sign?

They will move to countries where the laws suit them. Generally business as usual these days and why big businesses have such a strong bargaining position with regard to national governments.

Both the current British and American governments are very pro big-business anyway. That is why Trump has stated he likes Starmer so much.

reply
stackedinserter
27 days ago
[-]
The declaration itself, if anyone's interested: https://www.pm.gc.ca/en/news/statements/2025/02/11/statement...

Signed by 60 countries out of "more than 100 participants", it just looks comically pathetic, except for the "China" part:

Armenia, Australia, Austria, Belgium, Brazil, Bulgaria, Cambodia, Canada, Chile, China, Croatia, Cyprus, Czechia, Denmark, Djibouti, Estonia, Finland, France, Germany, Greece, Hungary, India, Indonesia, Ireland, Italy, Japan, Kazakhstan, Kenya, Latvia, Lithuania, Luxembourg, Malta, Mexico, Monaco, Morocco, New Zealand, Nigeria, Norway, Poland, Portugal, Romania, Rwanda, Senegal, Serbia, Singapore, Slovakia, Slovenia, South Africa, Republic of Korea, Spain, Sweden, Switzerland, Thailand, Netherlands, United Arab Emirates, Ukraine, Uruguay, Vatican, European Union, African Union Commission.

reply
mrtksn
27 days ago
[-]
Is this the declaration? https://www.elysee.fr/emmanuel-macron/2025/02/11/pledge-for-...

It appears to be essentially a "We promise not to do evil" declaration. It contains things like "Ensure AI eliminates biases in recruitment and does not exclude underrepresented groups.".

What's the point of rejecting this? Seems like a show, just like the declaration itself.

Depending on what side of things you're on, if you don't actually take a look at it you might end up believing that the US is planning to do evil and others want to eliminate evil, or alternatively you might believe that the US is pushing for progress while the EU is trying to slow it down.

Both appear false to me. IMHO it's just another instance of the US signing off from the global world, and whatever "evil" the US is planning to do, China will do it better for cheaper anyway.

reply
tim333
27 days ago
[-]
I think it's actually this https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statemen...

although similar.

So far most AI development has been things like OpenAI making the ChatGPT chatbot and putting it up there for people to play with; likewise Anthropic, DeepSeek et al.

I'm worried that the declaration is implying you shouldn't be able to do that without trying to "promote social justice by ensuring equitable access to the benefits".

I think that is over-bureaucratizing things.

reply
mrtksn
27 days ago
[-]
Which part makes you think that?
reply
tim333
27 days ago
[-]
The declarations are very vague as to what will actually be done other than declaring, but I get the impression they want to make it more complicated just to put up a chatbot.

I mean stuff like

>We underline the need for a global reflection integrating inter alia questions of safety, sustainable development, innovation, respect of international laws including humanitarian law and human rights law and the protection of human rights, gender equality, linguistic diversity, protection of consumers and of intellectual property rights.

That is quite hard to even parse. Does that mean you'll get grief for your bot speaking English because it's not protecting linguistic diversity? I don't know.

What does "Sustainable Artificial Intelligence" even mean? That you run it off solar rather than coal? Does it mean anything?

reply
mrtksn
27 days ago
[-]
The whole text is just "We promise not to be a-holes" and doesn't demand any specific action anyway, let alone having any teeth.

Useful only when you reject it. I'm sure in the culture-war-torn American mind it signals very important things about genitals and ancestry and the industry around this stuff, but to a non-American mind it gives the vibe that the Americans intend to do bad things with AI.

Ha, now I wonder if the people who wrote it were unaware of the situation in the US, or if that was the outcome they expected.

"Given that the Americans are not promising not to use this tech for nefarious tasks, maybe Europe should decouple from them?"

reply
tim333
27 days ago
[-]
It's also a bit woolly on real dangers that governments should maybe worry about.

What if ASI happens next year and renders most of the human workforce redundant? What if we get Terminator 2? Those might be more worthy of worry than "gender equality, linguistic diversity" etc. I mean, the diversity stuff is all very well but not very AI-specific. It's like you're developing H-bombs and worrying whether they are socially inclusive rather than about nuclear war.

reply
mrtksn
27 days ago
[-]
My understanding is that this is about using AI responsibly, and not about AGI at all. Not worrying about the H-bomb, but more like worrying about handling radioactive materials in industry or healthcare to prevent exposure, or maybe the radium girls happening again.

IMHO, from a European perspective, they are worried that someone will install a machine that has a bias against, let's say, Catalan people, who will then be disadvantaged against Spaniards while those who operate the machine claim no fault, the computer did it, leading to social unrest. They want regulations saying that you are responsible for this machine, and grounds for its removal if it creates issues. All the regulations around AI in the EU are in that spirit; they don't actually ban anything.

I don't think AGI is considered seriously by anybody at the moment. That's a completely different ball game, and if it happens none of the current structures will matter.

reply
smolder
27 days ago
[-]
I think that with a certain crowd, just being obstinately oppositional buys you political points whether it's well reasoned or not. IOW they may be acting like jerks here to impress the let's-be-jerks lobby back home.
reply
mrtksn
27 days ago
[-]
Yeah I agree, they just threw a tantrum for their local audience. I wonder why they don't just make AI generate these tantrums instead of actually annoying everybody.
reply
marcusverus
27 days ago
[-]
> What's the point of rejecting this? Seems like a show, just like the declaration itself. Both appear false to me, IMHO its just another instance of US signing off from the global world...

Hear, hear. If Trump doesn't straighten up, the world might just opt for Chinese leadership. The dictatorship, the genocide, the communism--these are small things that can be overlooked if necessary to secure leadership that's committed to what really matters, which is.... signing pointless declarations.

reply
michaelt
27 days ago
[-]
> What's the point of rejecting this?

Sustainable Development? Protect the environment? Promote social justice? Equitable access? Driving inclusive growth? Eliminating biases? Not excluding underrepresented groups?

These are not the values the American people voted for. Americans selected a president who is against "equity", "inclusion" and "social justice", and who is more "roman salute" oriented.

Of course this is all very disorienting to non-Americans, as a year or two ago efforts to do things like rename git master branches to main and blacklists to denylists also seemed to be driven by Americans. But that's just America's modern cultural dominance in action; it's a nation with the most pornographers and the most religious anti-porn campaigners at the same time; the home of Hollywood beauty standards, plastic surgery and bodybuilding, but also the home of fat acceptance and the country with the most obesity. So in a way, contradictory messages are nothing new.

reply
Dalewyn
27 days ago
[-]
>Americans selected a president who is against "equity", "inclusion" and "social justice"

Indeed. Our American values are and always have been Equality, Pursuit of Happiness, and legal justice respectively, as declared in our Declaration of Independence[1] and Constitution[2], even if there were and will be complications along the way.

Liberty is power, power is responsibility. No one ever said living free was going to be easy, but everyone will say it's a fulfilling life.

[1]: https://en.wikipedia.org/wiki/United_States_Declaration_of_I...

[2]: https://en.wikipedia.org/wiki/Preamble_to_the_United_States_...

reply
mrtksn
27 days ago
[-]
Then why don't you do all that, instead of treating people who are in pursuit of happiness as criminals, for example? Why do you need the paperwork and bureaucracy to let people pursue happiness?

Why are the people in the background not entitled to it: https://a.dropoverapp.com/cloud/download/605909ce-5858-4c13-...

Why is US government personnel being replaced with loyalists, if you are about equality and legal justice?

reply
pb7
27 days ago
[-]
The US is a sovereign nation which has a right to defend its borders from illegal invaders. Try to enter or stay in Singapore illegally and see what happens to you.
reply
mrtksn
27 days ago
[-]
US is Singapore now? What happened to pursuit of happiness and freedom?
reply
pb7
27 days ago
[-]
Insert any other country of your choice that has a government sturdier than a lemonade stand.

You're free to follow the legal process to come to the country to seek your pursuit of happiness.

reply
pjc50
27 days ago
[-]
"We hold these truths to be self-evident, that all men are created equal ..." (+)

(+) terms and conditions apply; did not originally apply to nonwhite men or women. Hence allowing things like the mass internment of Americans of Japanese ethnicity.

reply
Dig1t
27 days ago
[-]
> We are also talking much more rightly about equity,

>it has to be about a goal of saying everybody should end up in the same place. And since we didn’t start in the same place. Some folks might need more: equitable distribution

- Kamala Harris

https://www.youtube.com/watch?v=LaAXixx7OLo

This is arguing for giving certain people more benefits versus others based on their race and gender.

This mindset is dangerous, especially if you codify it into an automated system like an AI and let it make decisions for you. It is literally the definition of institutional discrimination.

It is good that we are avoiding codifying racism into our AI under the fake moral guise of “equity”

reply
rat87
27 days ago
[-]
It's not. What we currently have is institutional discrimination, and Trump is trying to make it much worse. Making sure AI doesn't reflect or worsen current societal racism is a massive issue.
reply
Dig1t
27 days ago
[-]
At my job I am not allowed to offer a job to a candidate unless I have first demonstrated to the VP of my org that I have interviewed a person of color.

This is literally the textbook definition of discrimination based on skin color and it is done under the guise of “equity”.

It is literally defined in the civil rights act as illegal (title VII).

It is very good that the new administration is doing away with it.

reply
rat87
27 days ago
[-]
So did your company interview any people of color before? It seems like your org recognizes its own racism and is taking steps to fight it. Good on them, at least if they occasionally hire some of them and aren't just covering their asses.

You don't seem to understand either the letter or the spirit of the Civil Rights Act.

You're happy that a racist president, who campaigned on racism and keeps baselessly accusing members of minority groups of being unqualified while himself being the least qualified president in history, is trying to encourage people not to hire minorities? Why exactly?

reply
Dig1t
27 days ago
[-]
Just run a thought experiment

1. Job posted, anyone can apply

2. Candidate applies and interviews, team likes them and wants to move forward

3. Team not allowed to offer because candidate is not diverse enough

4. Team goes and interviews a diverse person.

Now if we offer the person of color the job, the first person was discriminated against, because they would have gotten the job if they had had the right skin color.

If we don't offer the diverse person a job, then the whole thing was purely performative, because the only other outcome was discrimination.

This is how it works at my company. Go read Title VII of the civil rights act, this is expressly against both the letter and spirit of the law.

BTW calling everything you disagree with racism doesn’t work anymore, nobody cares if you think he campaigned on racism (he didn’t).

If anything, people pushing this equity stuff are the real racists.

reply
unethical_ban
24 days ago
[-]
You're telling on yourself and you can't tell.

Edit after reading about Trump firing the people administering our nuclear weapons: God damn Donald Trump, and God damn the people who are so foolish as to believe the disinformation networks that tell them Donald Trump isn't working to destroy this country.

reply
Detrytus
27 days ago
[-]
Men are created equal, but not identical. That's why you should aim for equal chances but shouldn't try to force equal results. Affirmative action and the like is stupid, and I'm glad Trump is getting rid of it.
reply
worik
27 days ago
[-]
I live in a country that has had a very successful programme of affirmative action, following roughly three generations of open, systemic racism (Maori school students were kept out of university and the professions as a matter of public policy).

Now we are starting to get Maori doctors and lawyers, and that is transforming our society - for the better IMO.

That was because the law and medical schools went out of their way to recruit Maori students. To start with they were hard to find as nobody in their families (being Maori, and forbidden) had been to university

If you do not do anything about where people start, then saying "aim for equal chance" can become a tool of oppression, keeping the opportunities for those who already have them.

Nuance is useful. I have heard many bizarre stories out of the USA about people blindly applying DEI without much thought or planning. But there are many, many places where carefully applied policies have made everybody's life better.

reply
hcurtiss
27 days ago
[-]
This is always the Motte & Bailey of the left. "Equity" doesn't mean you recruit better. It means when your recruitment efforts fail to produce the outcomes you want, you lower the barriers on the basis of skin color. That's the racism that America is presently rejecting, and very forcefully.
reply
milesrout
27 days ago
[-]
NZ does not have a "successful programme of affirmative action".

Discrimination in favour of Maori students largely has benefited the children of Maori professionals and white people with a tiny percentage of Maori ancestry who take advantage of this discriminatory policy.

The Maori doctors and lawyers coming through these discriminatory programmes are not the people they were intended to target. Meanwhile, poor white children are essentially abandoned by the school system.

Maori were never actually excluded from university study, by the way. Maori were predominantly rural and secondary education was poor in rural areas but it has nothing to do with their ethnicity. They were never "forbidden". There have been Maori lawyers and doctors for as long as NZ has had universities.

For example, take Sir Apirana Ngata. He studied at a university in NZ in the 1890s, around the same time women got the vote. He was far from the first.

What you have alleged is a common narrative so I don't blame you for believing it but it is a lie.

reply
worik
27 days ago
[-]
> Maori were never actually excluded from university study, by the way

Māori schools (which the vast majority of Māori attended) were forbidden by the education department from teaching the subjects that led to matriculation. So yes, they were forbidden from going to university.

> Sir Apirana Ngata. He studied at a university in NZ in the 1890s,

That was before the rules were changed. It was because of people like Ngata and Buck that the system was changed. The racists that ran the government were horrified that the natives were doing better than the colonialists. They "fixed" it.

> Discrimination in favour of Maori students largely has benefited the children of Maori professionals

It has helped establish traditions of tertiary study in Māori families, starting in the 1970s

There are plenty of working class Māori (I know a few) that used the system to get access. (The quota for Māori students in the University of Auckland's law school was not filled in the 1990s. Many more applied for it, but if their marks were sufficient to get in without using the quota they were not counted. If it were not for the quota many would not have even applied)

Talking of lies: "white people with a tiny percentage of Maori ancestry who take advantage of this" - that is a lie.

The quotas are not based solely on ethnicity. To qualify you had to whakapapa (whāngai children probably qualified even if they did not whakapapa, I do not know), but you also had to be culturally Māori.

Lies and bigotry are not extinct in Aotearoa, but they are in retreat. The baby boomers are very disorientated, but the millennials are loving it.

Better for everybody

reply
logicchains
27 days ago
[-]
"eliminates biases in recruitment and does not exclude underrepresented groups" has turned out to basically mean "higher less qualified candidates in the name of more equitable outcomes", which is a very contentious position to take and one many Americans strongly oppose.
reply
rat87
27 days ago
[-]
No, it means eliminating biases in recruitment and not excluding underrepresented groups.

We still have massive biases against minorities in our countries. Some people prefer to pretend they don't exist so they can justify the current reality.

Nothing related to Trump has anything to do with qualified candidates; Trump is the least qualified president we have ever had in American history. Not just because he hadn't served in government or as a general, but because he is generally unaware of how government works and doesn't care to be informed.

reply
mrtksn
27 days ago
[-]
In other words, they get triggered by words that don't mean that thing. Sounds like the EU should develop a politically correct language for Americans. That's synthetic Woke, which is ironic.

I wonder if the new Woke should be called Neo-Woke, where you pretend to be mean to a certain group of people to accommodate another group of people who suffered from accommodating yet another group of people.

IMHO all this needs to go and just be "don't discriminate, be fair", but hey, I'm not the trendsetter.

reply
optimalsolver
27 days ago
[-]
>higher less qualified candidates

Ironique.

reply
waltercool
27 days ago
[-]
That is a good thing, if we want AI companies to be competitive enough.

If you add regulations, people will use other AI companies from countries without them. The only result of that would be losing the AI race.

You can see this in Hugging Face's top models: fine-tuned models are way more popular than official ones.

And this is also good considering most companies (even Chinese ones) offer their models free to download and use locally. Democratizing AI is the right approach here.

reply
ViktorRay
27 days ago
[-]
You raise an interesting point.

What would this declaration mean for free and open source models?

For example Deepseek uses the MIT License.

reply
tehjoker
27 days ago
[-]
No different than how the U.S. doesn't sign on to the Convention on the Rights of the Child or landmine treaties, etc.
reply
hartator
26 days ago
[-]
Was there ever a time in history when regulation successfully preceded the actual technology?
reply
zombot
26 days ago
[-]
Can't have US parochialism restrained by international considerations. Can't put friction on printing money and avoiding accountability no matter the cost to others.

> warning countries not to sign AI deals with “authoritarian regimes”

Well, that now also rules out the US.

reply
tmpz22
27 days ago
[-]
Why would any country align with US vision for AI policies after how we’ve treated allies over the last two weeks?

Why would any country yield given the hard line negotiating stance the US is now taking? And the flip flopping and unclear messaging on our positions?

reply
anon291
27 days ago
[-]
People should be free to train AIs
reply
rdm_blackhole
27 days ago
[-]
This declaration is not worth the paper it was written on. It has no enforcement mechanism and it's non-binding, so it's like a kid's Christmas shopping list.

The US and the UK were right to reject it.

reply
m3kw9
27 days ago
[-]
I count 10 pieces of red tape in just this one sentence: “ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all.”
reply
joshdavham
26 days ago
[-]
I'm afraid of a future where we who live in the anglosphere end up with the AI equivalent of cookie banners and GDPR due to EU AI regulations.
reply
whiplash451
26 days ago
[-]
GDPR has made a hell of the eurosphere as well, if that makes you feel better.
reply
arkh
26 days ago
[-]
> GDPR has made a hell of the eurosphere as well

Oh noes, I can't slurp all my user data and sell / give it to whoever. How will I make money if I can't exploit people's data?

reply
whiplash451
26 days ago
[-]
There was a way to achieve GDPR's goals without covering the non-walled-gardens with barbed wire banners.
reply
arkh
26 days ago
[-]
> barbed wire banners

GDPR does not tell people to put banners on their websites. People chose not to learn about GDPR and to just do "like everyone else".

reply
whiplash451
26 days ago
[-]
Regulators are not just responsible for the laws they put in place. They are also responsible for the way the majority responds to them. If the majority fails to implement, it's a failure of the regulator, not a failure of the regulated.
reply
ionwake
26 days ago
[-]
On the subject of safety - where is safe? A bunker? An island? Nowhere?

Will AI lead to a Dark Forest scenario on Earth between humans and AI?

reply
TomK32
26 days ago
[-]
Here's the declaration[1]. No bunkers or islands mentioned.

Safety is mentioned in context alongside trustworthiness, ethics, security, ...

[1] https://www.politico.eu/wp-content/uploads/2025/02/11/02-11-...

reply
tw1984
26 days ago
[-]
For those interested, below is the transcript of JD Vance's remarks at the summit.

https://gist.github.com/lmmx/b373b9819318d014adfdc32182ab17f...

reply
option
27 days ago
[-]
did China sign?
reply
vindex10
27 days ago
[-]
that's what confused me:

> Among the priorities set out in the joint declaration signed by countries including China, India, and Germany was “reinforcing international co-operation to promote co-ordination in international governance.”

So it looks like they did.

At the same time, the goal of the declaration and the summit is to become less reliant on the US and China.

> Meanwhile, Europe is seeking a foothold in the AI industry to avoid becoming too reliant on the US or China.

So basically Europe signed together with China to compete against the US/UK, or what happened?

reply
tim333
27 days ago
[-]
The agreement doesn't mean much. It's just a list of good intentions. Most countries were fine to say yeah, I'll do good things, if it's not binding. Vance & Trump were an exception to that.
reply
vindex10
26 days ago
[-]
It is not binding, but it is still a public statement
reply
jampekka
27 days ago
[-]
“Partnering with them [China] means chaining your nation to an authoritarian master that seeks to infiltrate, dig in and seize your information infrastructure,” Vance said.

At least they aren't threatening to invade our countries or extorting a privileged position.

reply
karaterobot
27 days ago
[-]
Threatening Taiwan, actually invading Tibet and Vietnam within living memory, and extorting privileged positions in Africa and elsewhere. Not to mention supporting puppet governments throughout the world, just like the U.S.
reply
Hwetaj
27 days ago
[-]
Sir, this is a Wendy's! Please do not defend Europe against its master here! Pay up, just like Hegseth has demanded today.
reply
pb7
27 days ago
[-]
Except they are: Taiwan.
reply
FpUser
27 days ago
[-]
I watched JD Vance's speech. He made a few very reasonable points about refusing to join the alliance. Still, his speech left me with a sour taste. I interpret it as: "We are fuckin' America and we do as we please. It is our sacred right to own the world. The rest are to submit or be punished one way or another."
reply
lofaszvanitt
25 days ago
[-]
Colonel Willis Corto incoming to the US.
reply
pjc50
27 days ago
[-]
My two thoughts on this:

- there's a real threat from AI to the open internet by drowning it in spam, fraud, and misinformation

- current "AI safety" work does basically nothing to address this and is kind of pointless

It's important that AI-enabled processes which affect humans are fair. But that's just a subset of a general demand for justice from the machine of society, whether it's implemented by humans or AIs or abacuses. Which comes back to demanding fair treatment from your fellow humans, because we haven't solved the human "alignment problem".

reply
thih9
27 days ago
[-]
And of course people responsible for AI disruptions would love to sell solutions for the problems they created too. Notably[1]:

> Worldcoin's business is to provide a reliable way to authenticate humans online, which it calls World ID.

[1]: https://en.m.wikipedia.org/wiki/World_(blockchain)

reply
robohoe
27 days ago
[-]
“Tools for Humanity” and “for-profit” in a single sentence lost me.
reply
dgb23
27 days ago
[-]
From a consumer's perspective, I want declaration of AI use.

I want to know whether an image or video is largely generated by AI, especially when it comes to news. Images and video often imply that they are evidence of something actually happening.

I don't know how this would be achieved. I also don't care. I just want people to be accountable and transparent.

reply
cameronh90
27 days ago
[-]
We can’t even define the boundaries of AI. When you take a photo on a mobile phone, the resulting image is a neural network manipulated composite of multiple photos [0]. Anyone using Outlook or Grammarly now is probably using some form of generative AI when writing emails.

Rules like this would just lead to everything having an “AI generated” label.

People have tried this in the past, with attempts to require fashion magazines and ads to warn when they photoshop the models. But obviously everything is photoshopped, and the problem becomes how to separate good photoshop (levels, blemish remover?) from bad photoshop (warp tool?).

[0] https://appleinsider.com/articles/23/11/30/a-bride-to-be-dis...

reply
idunnoman1222
27 days ago
[-]
It can’t be achieved, so now what, Mr. Authoritarian?
reply
tigerlily
27 days ago
[-]
And so it seems we await the imminent arrival of a new eternal September of unfathomable scale; indeed as we deliberate, that wave may already be cresting, breaking upon every corner of the known internet. O wherefore this moment?
reply
TiredOfLife
27 days ago
[-]
>there's a real threat from AI to the open internet by drowning it in spam, fraud, and misinformation

That happened years ago. And without LLMs.

reply
nbzso
26 days ago
[-]
So Hacker News's new kids will be happy with the corporate takeover of America? Regulation? What the heck is this? The future is only bright and colourful. Is there an AI specialist here to explain to me why LLMs cannot code in COBOL? What AI is this?
reply
dsign
27 days ago
[-]
I know I'm an oddball when it comes to the stuff that crosses my mind, but here I go anyway.

It's possible to stop developing things. It's not even hard; most of the world develops very little. Developing things requires capital, education, hard work, social stability and the rule of law. Many of us writing on this forum take those things for granted but it's more the exception than the rule, when you look at the entire planet.

I think we will face the scenario of runaway AI, where we lose control, and we may not survive. I don't think it will be a Skynet type of thing, sudden. At least not at first. What will happen is that we will replace humans by AIs in more and more positions of influence and power, gradually. Our ChatGPTs of today will become board members and government advisors of tomorrow. It will take some decades--though probably not many. Then, a face-off will come one day, perhaps. Humans vs them.

But if we do survive and come to regret the development of advanced AI and have a second chance, it will be trivially easy to suppress them: just destroy the semiconductor fabs and treat them the same way we treat centrifuges for enriching uranium. Cut off the dangerous data centers, and forbid the reborn universities[1] from teaching linear algebra to the students.

[1]: We will lose advanced education for the masses on the way, as it won't be economically viable nor necessary.

reply
simonw
27 days ago
[-]
"Our ChatGPTs of today will become board members and government advisors of tomorrow."

That still feels like complete science fiction to me - more akin to appointing a complicated Excel spreadsheet as a board member.

reply
fritzo
27 days ago
[-]
It feels like a mere language difference. Certainly every government official is advised by many Excel spreadsheets. Were those spreadsheets "appointed"? No.
reply
simonw
27 days ago
[-]
The difference is between AI tools as augmentation and AI tools as replacement.

Board members using tools like ChatGPT or Excel as part of their deliberations? That's great.

Replacing a board member entirely with a black box automation that makes meaningful decisions without human involvement? A catastrophically bad idea.

reply
vladms
27 days ago
[-]
People like having someone to blame and fire and maybe send to jail. It's less impressive if someone blames everything on their Excel sheet...
reply
philomath_mn
27 days ago
[-]
> It's possible to stop developing things

If the US were willing to compromise some of its core values, then we could probably stop AI development domestically.

But what about the rest of the world? If China or India want to reap the benefits of enhanced AI capability, how could we stop them? We can hit them with sanctions and other severe measures, but that hasn't stopped Russia in Ukraine -- plus the prospect of world-leading AI capability has a lot more economic value than what Ukraine can offer.

So if we can't stop the world from developing these things, why hamstring ourselves and let our competitors have all of the benefits?

reply
hollerith
27 days ago
[-]
>the prospect of world-leading AI capability has a lot more economic value than what Ukraine can offer.

The mere fact that you imagine that Moscow's motivation in invading Ukraine is economic is a sign that you're missing the main reasons Moscow or Beijing would want to ban AI: (1) unlike in the West and especially unlike the US, it is routine and normal for the government in those countries to ban things or discourage their use, especially new things that might cause large societal changes and (2) what Moscow and Beijing want most is not economic prosperity, but rather to prevent another one of those invasions or revolutions that kills millions of people and to prevent the country's ruling coalition from losing power.

reply
philomath_mn
27 days ago
[-]
But this all comes back to the self-interest and game theory discussion.

Let's suppose that, like you, both Moscow and Beijing do not want AGI to exist. What could they do about it? Why should they trust that the rest of the world will also pause their AI development?

This whole discussion is basically a variation on the prisoner's dilemma. Either you cooperate and AI risks are mitigated, or you do not cooperate and try to take the best outcome for yourself.

I think we can expect the latter. Not because it is the right thing or because it is the optimal decision for humanity, but because each individual will deem it their best choice, even after accounting for P(doom).
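
To make the game-theory point concrete, here is a minimal payoff-matrix sketch in Python (the payoff numbers are illustrative assumptions of mine, not from any source), showing why "develop" dominates "pause" for each player no matter what the other does:

    # Two rival states each choose to "pause" or "develop" AI.
    # Payoffs (mine, theirs) are illustrative assumptions only.
    payoffs = {
        ("pause",   "pause"):   (3, 3),  # cooperation: risks mitigated
        ("pause",   "develop"): (0, 4),  # unilateral pauser falls behind
        ("develop", "pause"):   (4, 0),
        ("develop", "develop"): (1, 1),  # race continues, P(doom) priced in
    }

    for theirs in ("pause", "develop"):
        best = max(("pause", "develop"),
                   key=lambda mine: payoffs[(mine, theirs)][0])
        print(f"If the rival plays {theirs}, my best reply is {best}")
    # Prints "develop" both times: mutual development is the equilibrium,
    # even though mutual pausing would pay everyone more.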

reply
hollerith
27 days ago
[-]
>Let's suppose that, like you, both Moscow and Beijing do not want AGI to exist. What could they do about it? Why should they trust that the rest of the world will also pause their AI development?

That is why the US and Europe should stop AI in their territories first, especially as the US and Britain have been the main drivers of AI "progress" up to now.

reply
hcurtiss
27 days ago
[-]
Exactly. Including military benefits. The US would not be a nation for long.
reply
HelloMcFly
27 days ago
[-]
This is my oddball thought: the thing about AI doomerism is that it feels to me like it requires substantially more assumptions and leaps of logic than environmental doomerism. And environmental doomerism seems only more justified as the rightward lurch of western societies continues.

Note: I'm not quite a doomer, but definitely a pessimist.

reply
Simon_O_Rourke
27 days ago
[-]
> What will happen is that we will replace humans by AIs in more and more positions of influence and power, gradually. Our ChatGPTs of today will become board members and government advisors of tomorrow.

Great, can't wait for even some small improvement over the idiots in charge right now.

reply
realce
27 days ago
[-]
It's time to put an end to this fashionable and literal anti-human attitude. There's no comparative advantage to AI replacing humans en masse because of how "stupid" we are. This POV is advocating for incalculable suffering and death. You personally will not be in a better or more rational position after this transition; you'll simply be dead.
reply
Simon_O_Rourke
26 days ago
[-]
I don't see how you get from some over-hyped generative text bot outputting reams of semi-gibberish to... AI will kill us all horribly. There are more than a few intractable technical limitations between ChatGPT and the T-1000.
reply
moffkalast
27 days ago
[-]
I for one, also welcome our new Omnissiah overlords.
reply
anon291
27 days ago
[-]
Right, let's go back to the stone age because we said so.

> What will happen is that we will replace humans by AIs in more and more positions of influence and power,

With all due respect, and not to be controversial, but how is this concern any more valid than the "great replacement" worries?

reply
jcarrano
27 days ago
[-]
What if we face the scenario of a Dr. Manhattan type AGI, that's just fed up with people's problems and decides to leave us for the stars?
reply
TheFuzzball
27 days ago
[-]
I am so tired of the AI doomer argument.

The entire thing is little more than a thought experiment.

> Look at how fast AI has advanced, if you just project that trend out, we'll have human-level agents by the end of the decade.

No. We won't. Scale up transformers as big as you like, this won't happen without massive advances in architecture and hardware.
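
Some napkin math shows the size of the hardware gap. A rough sketch in Python, where every number is an assumption of mine (the standard ~6·N·D training-FLOPs approximation, Chinchilla-style data scaling, and an optimistically utilized ~1 PFLOP/s accelerator):

    # Back-of-envelope training cost for a naively scaled-up model.
    # All numbers are illustrative assumptions.
    params = 1e13                # 10T parameters, an arbitrary stand-in
    tokens = 20 * params         # Chinchilla-style compute-optimal data
    flops = 6 * params * tokens  # standard ~6*N*D training-FLOPs estimate

    gpu_flops = 1e15             # ~1 PFLOP/s per accelerator, dense
    utilization = 0.4            # optimistic sustained utilization
    gpu_years = flops / (gpu_flops * utilization) / (3600 * 24 * 365)
    print(f"{flops:.1e} FLOPs ~= {gpu_years:,.0f} GPU-years")
    # ~1.2e28 FLOPs, on the order of a million GPU-years: naive scaling
    # hits hardware and energy walls long before "any day now".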

I believe it is possible, but the idea it'll happen any day now, and by accident is bullshit.

This is one step from Pascal's Wager, but being presented as fact by otherwise smart people.

reply
dsign
27 days ago
[-]
> The entire thing is little more than a thought experiment.

Yes. Nobody can predict the future.

> but the idea it'll happen any day now, and by accident is bullshit.

We agree on that one: it won't be sudden, and it won't be by accident.

> I believe it is possible, but the idea it'll happen any day now, and by accident is bullshit.

Exactly. Not by accident. But if you believe it's possible, then we are both doomers.

The thing is, there are forces at play that want this. It's all of us. We in society want to remove other human beings from the chain of value. I use ChatGPT today to not pay a human editor. My boss uses Suno AI to play generated music with pro-productivity slogans before Teams meetings. The moment the owners of my enterprise believe it's possible to replace their highly paid engineers with AIs, they will do it. My bosses don't need to lift a finger today to ensure that future. Other people have already imagined it, and thus, already today we have well-funded AI companies doing their best to develop the technology. Their investors see an opportunity in making highly-skilled labor cheaper, and they are dumping their money into that enterprise. Better hardware, better models, better harnesses for those models. All of that is happening at speed. I'm not counting on accidents there. If anything, I'm counting on Chernobyl-style accidents that make us realize, while there is still time, that we are stepping into danger.

reply
627467
27 days ago
[-]
Everyone wants to be the prophet of doom of their own religion.
reply
PeterCorless
27 days ago
[-]
"Why do we want better artificial intelligence when we have all this raw human stupidity as an abundant renewable resource we haven't yet harnessed?"
reply
Nevermark
27 days ago
[-]
Could anyone count how many times politicians have enacted laws or regulations in areas where they completely lacked understanding - with no time to get informed?

Fortunately those days are over. Any politician dealing with a technical issue over their head can turn to an LLM and ask for comment. "Is signing this poorly thought out, difficult to interpret, laundry list of vague regulations, that could limit LLM progress, really a good idea? Break this down for me like I am 5, please."

(Even though the start appeared trivial, happenstance, even benign, the age in which AIs rapidly usurped their own governance had begun. The only thing that could have made it happen faster, or more destructively, were those poorly thought out international agreements the world was lucky to dodge.)

reply
olivierduval
27 days ago
[-]
At the same time, "Europeans think US is 'necessary partner' not 'ally'" (https://www.euronews.com/my-europe/2025/02/12/europeans-thin...)

I wonder why... maybe because it looks like the US replaced some "moral values" (not talking about "woke values" here, just plain "humanistic values" like in the Human Rights Declaration) with "bottom line values" :-)

reply
ahiknsr
27 days ago
[-]
> I wonder why

Hmm.

> Donald Trump had a fiery phone call with Danish prime minister Mette Frederiksen over his demands to buy Greenland, according to senior European officials.

https://www.theguardian.com/world/2025/jan/25/trump-greenlan...

> The president has said America pays $200bn a year 'essentially in subsidy' to Canada and that if the country was the 51st state of the US 'I don’t mind doing it', in an interview broadcast before the Super Bowl in New Orleans

https://www.theguardian.com/us-news/video/2025/feb/10/trump-...

reply
casebash
27 days ago
[-]
I'll copy my LinkedIn comment:

"Well done to the UK for not signing the fully compromised Statement on Inclusive and Sustainable Artificial Intelligence for the People and the Planet. Australia shouldn't have signed this statement either given how France intentionally derailed attempts to build a global consensus on how we can develop AI safely.

For those who lack context, the UK organised the AI Safety Summit at Bletchley Park in November 2023 so that countries could discuss how advanced AI technologies can be developed safely. There was a mini-conference in Korea, and then France was given the opportunity to organise the next big conference - a trust it immediately betrayed by changing the event to be about promoting investment in its AI industry.

They renamed the summit to the AI Action Summit and relegated safety from being the sole focus to being just one of five focus areas - and not even one of five equally important focus areas, but one that seems to have been purposefully minimized even further.

Within the conference statement, safety was reduced to a single paragraph that, if anything, undermines it:

“Harnessing the benefits of AI technologies to support our economies and societies depends on advancing Trust and Safety. We commend the role of the Bletchley Park AI Safety Summit and Seoul Summits that have been essential in progressing international cooperation on AI safety and we note the voluntary commitments launched there. We will keep addressing the risks of AI to information integrity and continue the work on AI transparency.”

Let’s break it down:

• First, safety is being framed as “trust and safety”. These are not the same thing. The word trust appearing first is not as innocent as it appears: trust is the primary goal and safety is secondary to it. This is a very commercial perspective: if people trust your product you can trick them into buying it, even if it isn't actually safe.

• Second, trust and safety are not framed as values important in and of themselves, but as subordinate to realising the benefits of these technologies, primarily the “economic benefits”. While the development of advanced AI technologies could theoretically create a social surplus that could be taxed and distributed, it's naive to assume that this will be automatic, particularly when the policy mechanisms are this compromised.

• Finally, the statement doesn’t commit to continuing to address these risks, but only narrowly to “addressing the risks of AI to information integrity” and to “continue the work on AI transparency”. In other words, they’re purposefully downplaying any more significant potential risks, likely because discussing more serious risks would get in the way of convincing companies to invest in France.

Unfortunately, France has sold out humanity for short-term commercial benefit and we may all pay the price."

reply
aiauthoritydev
26 days ago
[-]
AI safety is a BS thing. Glad Americans are leading the way.
reply
bilekas
27 days ago
[-]
Yeah, it's behavior like this that really makes people cheer for companies like DeepSeek to stick it to the US.

A little bit of Schadenfreude would feel really good right about now. What bothers me so much is that it's purely symbolic for the US and UK NOT to sign these 'promises'.

It's not as if anyone would believe that the commitments would be followed through with. It's frustrating at first, but in reality this is a nothing burger, just emphasizing their ignorance.

> “The Trump administration will ensure that the most powerful AI systems are built in the US, with American-designed and manufactured chips,”

Sure, those American AI chips that are just pumping out right now. You'd think the administration would have advisers who know how things work.

reply
balls187
27 days ago
[-]
My sense was that the promise of DeepSeek (at least at the time) was a way to give control back to the people, rather than to a handful of megacorporations that will partner with anyone who pays them.
reply
karaterobot
27 days ago
[-]
> Yeah, it's behavior like this that really makes people cheer for companies like DeepSeek to stick it to the US.

That would be a kneejerk, short-sighted, self-destructive position to take, so I can believe people would do it.

reply
raverbashing
27 days ago
[-]
Honestly those declarations are more hot air and virtue signaling than anything else.

And even more honestly, nobody cares

reply
gorgoiler
26 days ago
[-]
Regulating technology stands zero chance of succeeding. Technology and knowledge will always tend toward freedom, and the tech itself is meaningless without action. It's the actions and their consequences for people that we should care about.

What’s much more important is strengthening rights that could be weakened by large scale data analysis of a population.

The right to a private life, and having minimal data collected — and potential then stolen — about your life.

The right of the state to investigate you for committing a crime using models and statistics only if a judge issues them a warrant to do so.

The right in a free market economy to transparent and level pricing instead of being gouged because an AI thinks people with physical characteristics similar to mine have lots of money.

Banning models that can create illegal images feels like legislators not aiming nearly high enough or smart enough:

https://www.bbc.co.uk/news/articles/c8d90qe4nylo

reply
miohtama
27 days ago
[-]
Sums it up:

“Vance just dumped water all over that. [It] was like, ‘Yeah, that’s cute. But guess what? You know you’re actually not the ones who are making the calls here. It’s us,’” said McBride.

reply
consp
27 days ago
[-]
The bullies are in charge. Prepare to get kicked to the curb and have your lunch money stolen.
reply
Xelbair
27 days ago
[-]
I mean... you need power to enforce your values, and the UK hasn't been a power for a long time.

"If you are not capable of violence, you are not peaceful. You are harmless"

Unless you can stand on an equal field - either through alliance or your own power - you aren't a negotiating partner, and I say that as a European.

reply
wobfan
27 days ago
[-]
> "If you are not capable of violence, you are not peaceful. You are harmless"

This is exactly the value that has caused so much war and death all over the world, for thousands of years. Still, even in 2025, it's being followed. Are we doomed, chat?

reply
jddj
27 days ago
[-]
There are peaceful strategies that are temporarily stable in the face of actors who capitalise on peaceful actors to take their resources, but they usually (always?) take the form of quickly moving on when an aggressor arrives.

Eg. birds abandoning rather than defending a perch when another approaches.

We're typically not happy to do that, though you can see it happening in some parts of the world right now.

Some kind of enlightened state where violent competition for resources (incl. status & power) no longer makes sense is imaginable, but seems a long way off.

reply
Yoric
27 days ago
[-]
Just to clarify, who's the aggressor in what you write? The US?
reply
jddj
27 days ago
[-]
No one in particular. Russia would be one current example, Israel (and others in the region at various times) another, the US and Germany historically, the Romans, the Ottomans, China, Japan, Britain, Spain, warlords in the western sahara, the kid at school who wanted the other kids' lunch money.

The idea though is that if everyone suddenly disarmed overnight it would be so highly advantageous to a deviant aggressor that one would assuredly emerge.

reply
pjc50
27 days ago
[-]
US sending clear signals to countries that they should start thinking about their own nuclear proliferation, even if that means treaty-breaking.
reply
Xelbair
27 days ago
[-]
https://en.wikipedia.org/wiki/Nash_equilibrium

Yes.

I would also recommend The Prince as light reading, to better understand how the world works.

reply
GlacierFox
27 days ago
[-]
The emphasis is on the word capable here. I think there's a difference between a country using its capability for violence to actually be violent and one with the tangible capability using it for peace.
reply
tim333
27 days ago
[-]
I think peace is more down to the good/peaceful guys being better armed than the bad ones.
reply
numpad0
27 days ago
[-]
Yes and we don't know if the US is on the blue side this time. It's scary.
reply
swarnie
27 days ago
[-]
It's been that way for... 300 years?

Those with the biggest economies and/or the most guns have changed a few times, but the behaviours haven't and probably never will.

reply
gyomu
27 days ago
[-]
If you’re making sweeping statements like that, why the arbitrary distinction at 300 years? What happened then? Why not say “since the dawn of humanity”?
reply
lucky1759
27 days ago
[-]
It's not some arbitrary distinction from 300 years ago, it's something called "the Enlightenment".
reply
gyomu
27 days ago
[-]
The bullies with most guns and biggest economies have been in charge since the Enlightenment? Huh?
reply
swarnie
27 days ago
[-]
I was keeping it simple for the majority.
reply
kabouseng
27 days ago
[-]
Probably referring to the period in which Pax Britannica and then Pax Americana held global hegemony.
reply
computerthings
27 days ago
[-]
That's what Europeans thought for centuries, until Germany overdid it. Then we had new ideas, e.g. https://en.wikipedia.org/wiki/Universal_Declaration_of_Human...
reply
FirmwareBurner
27 days ago
[-]
The declaration of human rights, like a lot of other laws, declarations and similar pieces of paper signed by politicians, has zero value without the corresponding enforcement, and is often just there for optics, so that taxpayers feel like their elected leaders are making good use of their money and are on the side of good.

And the extent of which you can do global enforcement (which is often biased and selective) is limited by the reach of your economic and military power.

Which is why the US outspends the rest of the world's military powers combined, and how the US and its troops have waged illegal wars and committed numerous crimes abroad and gotten away with it, despite pieces of paper saying what they're doing is bad; their reaction was always "what are you gonna do about it?".

See how many atrocities have happened under the watch of the UN. Laws aren't real, it's the enforcement that is real. Which is why the bullies get to define the laws that everyone else has to follow because they have the monopoly on enforcement.

reply
pjc50
27 days ago
[-]
> Laws aren't real, it's the enforcement that is real

Well, yes. This is why people have been paying a lot of attention to what exactly "rule of law" means in the US, and what was just norms that can be discarded.

reply
computerthings
27 days ago
[-]
The same is true of the HN comment I replied to, which was basically going *shrug* - but that shrug also comes without any army to enforce it. So I pointed out that some people went beyond just shrugging, because it could not go on like this; and here is what they wrote. Just reading these things does a person good, and to stand up for these things you first have to know them.
reply
xyzal
27 days ago
[-]
I think the saddest fact about it is that not even the US state wields the power. It's some sociopathic businessmen who do.
reply
ta1243
27 days ago
[-]
Businessmen have been far more powerful than states for at least the last 20 years
reply
Nasrudith
27 days ago
[-]
Generally I can't help but see 'more powerful than the government' claims as forever poisoned by their shallow use in the context of cryptography.

There, the phrase was a rhetorical tantrum thrown because companies refused to do the impossible, like building an encryption backdoor 'only for good guys', and had the sheer temerity to stand against arbitrary exercises of authority by using the courts to check them.

If something actually 'more powerful than the states' ever occurs, they will have nobody to blame but themselves for crying wolf.

reply
gardenhedge
27 days ago
[-]
My response to "the bullies are in charge" has been downvoted and flagged, yet what I was responding to remains up. It's a different opinion on the same topic started by GP. Either both should stay or both should go.
reply
enugu
27 days ago
[-]
AI doesn't look like it will be restricted to one country. A breakthrough becomes commonplace in a matter of years. So that paraphrase of Vance's remarks, if accurate, would mean that he is wrong.

Something like AI+drones (or, less imminent, AI+bioengineering) can lead to a severe degradation of security, like the one after the invention of nuclear weapons - a degradation that requires collective action to address. Even worse, chaos could be caused by small groups weaponizing the technology against high-profile targets.

If anything, the larger nations might be much more forceful about AI regulation than the above summit, demanding an NPT-style treaty where only a select club has access to the technology in exchange for other nations having access to AI applications from servers hosted by the club.

reply
dkjaudyeqooe
27 days ago
[-]
> Something like AI+drones (or, less imminent, AI+bioengineering) can lead to a severe degradation of security, like the one after the invention of nuclear weapons.

You don't justify or define "severe degradation of security"; you just assert it as a fact.

The advent of nuclear weapons has meant 75 years of relative peace, which is unheard of in human history - so quite the opposite.

Given that AI weapons don't exist, you've just created a straw man.

reply
enugu
27 days ago
[-]
The peace that you refer to involved a strong restriction placed by more powerful states, one which limits nuclear weapons to a few states. This didn't involve any principle; it was an assertion of power. A fig leaf of eventual disarmament never materialized.

I do claim that it is obvious that widespread acquisition of nuclear weapons by smaller states would be a severe degradation of security. Among other things, widespread ownership would mean that militant groups would acquire them and dictators would use them as protection, leading to eventual use of the weapons.

Yes, the danger of AI weapons is nowhere near the level of nuclear weapons yet.

But, that is the trend.

https://www.nytimes.com/2023/07/25/opinion/karp-palantir-art...

https://news.ycombinator.com/item?id=42938125

reply
logicchains
27 days ago
[-]
>Something like AI+drones (or, less imminent, AI+bioengineering) can lead to a severe degradation of security

For smaller countries nukes represented an increase in security, not a degradation. North Korea probably wouldn't still be independent today if it didn't have nukes, and Russia would never have invaded Ukraine if Ukraine hadn't given up its nukes. Restricting access to nukes is only in the interest of big countries that want to bully small countries around, because nukes level the playing field. The same applies to AI.

reply
enugu
27 days ago
[-]
The comment was not speaking in favour of restrictionism (I don't support it), but about what strategy the more powerful states will adopt.

Regarding an increase in security with nukes, what you say applies to exceptions against a general non-nuclear background. Without restrictions, every small country could have a weapon, with a danger of escalation behind every conflict, authoritarians using the nuclear option as protection against a revolt, etc. The likelihood of nuclear war would be much higher (even in the current situation, there have been close shaves).

reply
idunnoman1222
27 days ago
[-]
Drones already have AI; you can buy them on AliExpress. What is your point?
reply
mk89
27 days ago
[-]
I see it differently.

They need to dismantle bureaucracy to accelerate, NOT add new international agreements etc that would slow them down.

Once they become leaders, they will come up with such agreements to impose their "model" and way to do things.

Right now they need to accelerate and not get stuck.

reply
Dalewyn
27 days ago
[-]
I love the retelling of "I don't really care, Margaret." here.

But politics aside, this also points to something I've said numerous times here before: In order to write the rulebook you need to be a creator.

Only those who actually make and build and invent things get to write the rules. As far as "AI" is concerned, the creators are squarely the United States and presumably China. The EU, Japan, et al., being mere consumers, sincerely cannot write the rules because they have no weight to throw around.

If you want to be the rulemaker, be a creator; not a litigator.

reply
piltdownman
27 days ago
[-]
> The EU, Japan, et al. being mere consumers sincerely cannot write the rules because they have no weight to throw around

Exactly what I'd expect someone from a country where the economy is favoured over the society to say - particularly in the context of consumer protection.

You want access to a trade union of consumers? You play by the rules of that Union.

American exceptionalism doesn't negate that. A large technical moat does. But DeepSeek has jumped in and revealed how shallow that moat really is for AI at this neonatal stage.

reply
ReptileMan
27 days ago
[-]
Except the EU is hell-bent on going the way of Peron's Argentina or Mugabe's Zimbabwe. The EU's relative share of the world economy has been going down with no sign of the trend reversing. And instead of innovating our way out of stagnation, we have permanently attached bottle caps and cookie confirmation windows.
reply
piltdownman
27 days ago
[-]
> EU is hell bent on going the way of Peron's Argentina or Mugabe's Zimbabwe

https://www.foxnews.com/

reply
ReptileMan
27 days ago
[-]
Nope, mate. Looking at my purchasing power compared to the US guys I knew, now and in 2017 - not in my favor. The EU economy is grossly mismanaged. Our standards of living have been flat for the 18 years since the financial crisis.

In 2008 the EU had more people, more money and a bigger economy than the US. With proper policies we could be in a place where we could bitch-slap both Trump and Putin, and not be left to wonder whose dick we have to suck deeper to get some gas.

reply
DrFalkyn
27 days ago
[-]
Peter Zeihan would say that's the problem Europe has, in addition to demographic collapse. They're not energy independent and hitched their star to Russia (especially Germany), on the belief that economic interdependence would keep things somewhat peaceful. How wrong they were.
reply
Dalewyn
27 days ago
[-]
>Exactly what I'd expect someone from a country

I'm Japanese-American, so I'm not exactly happy about Japan's state of irrelevance (yet again). Their one saving grace as a special(er) ally and friend is they can still enjoy some of the nectar with us if they get in lockstep like the UK does (family blood!) when push comes to shove.

reply
consp
27 days ago
[-]
Sure you can. Outright ban it. Or do what China does: copy it and say the rules don't matter.
reply
cowboylowrez
27 days ago
[-]
If you're both the creator and the rulemaker, then that's the magic combo for a peaceful and beneficial society for the entire planet! Or maybe not.
reply
Maken
27 days ago
[-]
Who is even the creator here? Current AI is a collection of techniques developed in universities and research labs all over the world.
reply
Dalewyn
27 days ago
[-]
>Who is even the creator here?

People and countries who make and ship products.

You don't make rules by writing several hundred pages of legalese as a litigator, you make rules by creating products and defining the market.

Be creators, not litigators.

reply
generic92034
27 days ago
[-]
> You don't make rules by writing several hundred pages of legalese as a litigator, you make rules by creating products and defining the market.

That is completely wrong, at least if rules = the law. You can create fancy products all you like; if they do not adhere to the law in a given market, they cannot be sold there.

reply
mvc
27 days ago
[-]
> Only those who actually make and build and invent things get to write the rules

Create things? Or destroy them? Seems in reality, the most powerful nations are the ones who have acquired the greatest potential to destroy things. Creation is worthless if the dude next door is prepared to burn your house down because you look different to him.

reply
agentultra
26 days ago
[-]
> the energy needed to power supercomputers

The methane power plants needed - which the Trump administration is excited to create more of. More plants means more drilling and fracking. Great for the pocketbooks of their donors.

Methane totally doesn't leak! It's monitored by the companies doing the drilling and fracking and is totally easy to detect in the environment if it does escape somehow. /s

And for what? Rich capital holders to get access to technology that furthers their wealth and allows them to lay off more labourers? The promise that some of its uses may potentially save the lives of rich people?

Nothing this administration is doing is terribly surprising. They're being very open about it. Good luck getting chip foundries set up, importing rare earth metals and sand, dealing with the ponds, and handling all the methane needed to power all of the DCs we're going to build. Have fun living next to one.

I don't understand how their isolationist, nationalist stance is supposed to support this technology and keep them "the leader in AI." Sounds ridiculous. Global supply chains and stable free-trade agreements got us here.

reply
blarg1
27 days ago
[-]
computer says no
reply
flanked-evergl
26 days ago
[-]
Why have all other countries elected children instead of adults?
reply
antonkar
27 days ago
[-]
I’m honestly shocked that we still don’t have a direct-democratic constitution for the world and AIs - something like pol.is with an x.com-style simpler UI (Claude has a constitution drafted with pol.is by a few hundred people but it's not updatable).

We’ve managed to write the entire encyclopedia together, but we don't have a simple place to choose a high-level set of values that most of us can get behind.

I propose solutions to current and multiversal AI alignment here: https://www.lesswrong.com/posts/LaruPAWaZk9KpC25A/rational-u...

reply
idunnoman1222
27 days ago
[-]
I think 99% of what LessWrong says is completely out to lunch. I think 100% of large language model and vision model safety has just made the world less fun. Now what?
reply
dragonwriter
27 days ago
[-]
> We’ve managed to write the entire encyclopedia together, but we don't have a simple place to choose a high-level set of values that most of us can get behind.

Information technology was never the constraint preventing moral consensus the way it was for, say, aggregating information. Not only is that a problem with achieving the goals you lay out, it's also the problem with the false assumption that they are goals most would agree should be solved as you have framed them.

reply
numpad0
27 days ago
[-]
I don't think it does what you think it does. You'll end up taking sides on India and China fighting over rights and equality, and giving in to wild stuff like deconstruction and taxation for churches. It'll be just a huge mess and a devastation of your high-level set of values, unless you interfere with it so routinely that it becomes nothing more than a facade for a quite outdated form of totalitarianism.
reply
staunton
27 days ago
[-]
This reads like word salad to me...
reply
numpad0
26 days ago
[-]
Prompt in, inference out.

Frankly, I can't stand these guys viewing themselves as some sort of high-IQ intellectual majority when no such labeling would be true and they're more like stereotypical tourists to the world. Though that's historically how anarchist university undergraduates have always been.

reply
hintymad
27 days ago
[-]
Why would we trust Europe in the first place, given that they are so full of regulations and they love to suffocate innovation by introducing ever more regulations? I thought most people wanted to deregulate anyway.
reply