"Gleefully taking away people's livelihoods will be met with violence, and nothing good will come of it." - fixed.
Until people with billions of dollars behind them do something with that money to offset the financial hardship that they're knowingly - and gleefully - bringing to others... The distinction has no practical use.
(And before someone says "that's the government's job!", consider how much lobbying money is coming from CEOs and companies who know the domain best and are agitating for better financial and social safeguards for all. None, naturally.)
I'm considering "actual power", rather than "actual income".
Given that (allegedly) "your salary" won't be the answer for a significant chunk of the population soon, and all that money will instead (allegedly) go to the bosses doing the firings and the AI companies they employ.
Money is just a way of keeping track of how large a fraction of the future output of civilization any one person or entity is entitled to. This is by consent.
With AI, all of that is subject to change.
"Until people with salaries of many dollars per hour behind them do something with that money to offset the financial hardship that they're knowingly - and gleefully - bringing to others 90% of the world that live on less than 2 dollars per day... The distinction has no practical use."
Moreover, these people do not simply lobby the government, but directly elect it, and actually have many times more money at their disposal than the rest of the world.
/s
But we can't talk about this because it includes a large tract of the white collar everyday man workforce.
This is why the focus is so heavy on billionaires, so heavy on increasing minimum wage, so heavy on protecting immigrants. Those are all virtuous values that also bolster the value of the 70-95%, while piling all the blame (and responsibility) on the 1%.
The wealthiest group in America is doing an excellent job at protecting (and growing) their wealth.
(For those wondering, the "back breaker" of this class is zoning laws and new housing; everyone is aware how intense NIMBYism is in the middle/upper-middle-class hives.)
The “billions” are a constantly changing representation of what the average buyer in the market might be willing to pay at a certain point in time.
Billionaires are apparently what we should all aspire to, even though it is extremely hard to find any that got to where they are without getting there at the expense of others.
Breaking that cycle will take some extraordinary effort, and I suspect that the article gets at least that portion of it right. This isn't going to go away without a fight of some sort; whether a physical or a legal one is not all that important. But since the billionaires have used their money to stack the deck against the rest of us in every way except the physical one, that seems to be one of the few avenues still open.
How long it remains open is a question; there is a fair chance that AI will not only enable stable dictatorships but will also enable wealth extraction at a level that we have not seen before.
For instance: we are allowed to have this conversation by some billionaires. If they should decide you and I can no longer converse then that will be that and it is going to take a lot of effort to circumvent any blocks.
There are some 10 or so billionaires that can ruin my existence overnight, take away my means of living and that of those around me. And there wouldn't be much that I could do about it.
People have been radicalized over much less than this.
It appears you are the one very confused about wealth distribution in the US. Maybe you are confusing "income" with "wealth hoarding". The hoarding has reached a gross level, and this is why there should be a 1% tax on fortune portions over $100 million and 2% on portions over $1 billion. That, and going back to the 70% tax on incomes in the top bracket (e.g. > $10 million/yr).
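A minimal sketch of how those marginal brackets would work (the thresholds and rates are just the ones proposed above, nothing enacted; illustrative Python only):

    # Toy model of the proposed wealth tax: each rate applies only to the
    # portion of wealth above its threshold (marginal brackets).
    BRACKETS = [
        (1_000_000_000, 0.02),  # 2% on the portion over $1B
        (100_000_000, 0.01),    # 1% on the portion over $100M
    ]

    def annual_wealth_tax(wealth: float) -> float:
        tax, upper = 0.0, float("inf")
        for threshold, rate in BRACKETS:  # ordered highest threshold first
            if wealth > threshold:
                tax += (min(wealth, upper) - threshold) * rate
            upper = threshold
        return tax

    # A $10B fortune: 1% of $900M + 2% of $9B = $9M + $180M = $189M/yr
    print(annual_wealth_tax(10_000_000_000))  # 189000000.0

The proposed 70% top income bracket would apply the same marginal logic, just to income over $10 million/yr rather than to net worth.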
Those taxes are coming. Trumpty Dumpty and the oligarchs brought it on themselves.
https://en.wikipedia.org/wiki/Wealth_inequality_in_the_Unite...
Perhaps what's happening is that in their attempts to reach a personal all-time high in their bank accounts the ultra-wealthy are destroying value and economic systems en masse, with little regard to the efficiency of their money-siphoning process?
It's kind of like a drug dealer selling brain-burning addictive substances to a few people on a street. Sure, they're going to extract a person's life savings to date and whatever money that person can steal once they're addicted, but that value pales in comparison to what that person could have made over their career, what it could have made if properly invested, the cost of law enforcement to deal with these addicts, the cost of the stuff that they destroy in their quest to get money to buy drugs, the opportunity cost of them not raising their kids to be productive members of society... it all just snowballs, all so some asshole can make a few bucks...
The ultra-wealthy are doing that shit where people burn acres of pristine forests to get some biochar -- but to the entire world.
Isn’t it strange
That princes and kings,
And clowns that caper
In sawdust rings,
And common people
Like you and me
Are builders for eternity?
Each is given a bag of tools,
A shapeless mass,
A book of rules;
And each must make-
Ere life is flown-
A stumbling block
Or a stepping stone.
Make lobbying illegal. I don't understand why it's normalized.
Inequality is going to continue to increase until society collapses. If we want a better world we need to prepare for this eventuality by building avenues of popular action to return power to the people. Once the oligarchs have fucked up enough people’s lives, popular action becomes a realistic way out of this mess.
Or until actual people take the billions of dollars sitting behind those weak man-children. The US has fewer than 1000 billionaires now, and more than 300,000,000 people. That seems like a solvable problem.
Here are some points of consideration:
1. They don't have $7.5T in liquid. The average American won't be able to use that $25k to pay a hospital bill or eat (rough math in the sketch below). Also note that the one-time wealth transfer won't even pay in full for one major surgery.
2. You've wiped away the incentive for the get-big mentality that drove some of the billionaires to innovate and advance society to this point. Think: discouraging a future Jobs from making another iPhone-like device.
3. After the one-time transfer, it turns out we need more money for the common folks. "Why is the line at $1B? Isn't $900m enough? The line should be $100m." And so on and so forth.
[0]: https://fortune.com/2025/12/08/how-many-billionaires-does-am...
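For reference, the back-of-the-envelope math behind point 1 (the ~$7.5T and ~300M figures come from the comments above; a sketch, not a policy analysis):

    # Rough arithmetic behind the hypothetical one-time transfer (illustrative).
    total_billionaire_wealth = 7.5e12   # ~$7.5T, figure cited above
    population = 300_000_000            # ~300M Americans, figure cited above

    per_person = total_billionaire_wealth / population
    print(f"${per_person:,.0f}")        # $25,000 -- a one-time amount, not income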
Just wait ... in two weeks ...
It seems like a lot of people want a revolution so that they can rotate who will be able to take advantage of the vulnerable.
What are the suggestions for something better? I don't see a lot.
I'd like to see more suggestions of how things could work.
For example:
The Government could legislate that any increase in profits that are attributable to the use of AI are taxed at 75%. It's still an advantage for a company to do it, but most of the gains go to the people. Most often, aggressive taxation like this is criticised on the basis that it will stifle growth, but this is an area where pretty much everyone is saying it's moving too quickly, that's just yet another positive effect.
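As a toy model of that proposal (the 75% rate and the pre-AI baseline come from the paragraph above; how you would attribute a profit increase to AI in practice is the genuinely hard part, and this sketch just assumes it is known):

    # Toy model: tax the AI-attributable increase in profit at 75%.
    AI_PROFIT_TAX_RATE = 0.75

    def ai_windfall_tax(baseline_profit: float, current_profit: float,
                        ai_share: float) -> float:
        """ai_share: fraction of the profit increase credited to AI use (0..1)."""
        increase = max(0.0, current_profit - baseline_profit)
        return increase * ai_share * AI_PROFIT_TAX_RATE

    # Profit rises from $100M to $180M, half the rise credited to AI:
    # taxable AI gain = $40M, tax = $30M, the firm keeps $10M of that gain.
    print(ai_windfall_tax(100e6, 180e6, 0.5))  # 30000000.0

Note the company still keeps a quarter of the AI-attributed gain, so the incentive to adopt the technology survives, which is the point of the proposal.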
Alternately you could criticise their arguments instead of the people, and suggest an alternative.
I'm also not entirely certain that influencing public policy is something that is inherently bad. I know if I were deaf, I would like to have some influence on public policy about deafness issues.
Why is OpenAI not a nonprofit anymore?
Because it IS an us vs them situation.
They're awfully good at turning it into an us vs us situation, whether it's blaming our parents (boomers), blaming immigrants, blaming Muslims or (their favorite) blaming the unstoppable forward march of technological progress (e.g. AI).
The media organizations they own are constantly telling these stories because it protects them.
>The Government could legislate that any increase in profits that are attributable to the use of AI are taxed
Nothing a billionaire loves more than misdirection and a good scapegoat. This is why Bill Gates made the exact suggestion you just did.
https://finance.yahoo.com/news/bill-gates-wants-tax-robots-2...
When THEY are the problem they love a bit of misdirection, especially when the "problem" is a genie that can't be put back in its bottle.
They're terrified that we might latch on to the solutions that actually work (i.e. tax them to within an inch of their life) and drive a populist politician to power which might actually enact them.
Michael Bloomberg has lobbied for healthcare.
Pierre Omidyar has spent about a billion on economic-advancement non-profits.
Gates Foundation - Bunch of stuff.
Warren Buffet - Too much to count
George Soros - For all the antisemitism, the kernel of truth in the lie is that he spends a lot of money trying to make the world better.
Chuck Feeney gave away $8B; I'm sure some of it went to lobbying for better policies.
A large number advocate for a Universal Basic Income.
More advocate for things that they clearly think are good for the world, even if you, personally, do not.
Jack Dorsey, Reid Hoffman, hell even Elon Musk (he may be wrong about everything, but he's openly advocating for what he believes is good)
Sam Altman has done WorldCoin and is heavily invested in Nuclear Fusion. You can criticise the effectiveness or even the desirability of the projects, but they are definitely efforts that if worked as claimed would be beneficial.
Many billionaires spend money on non-profits to push for change. Often they do not put their name on it, because it makes them a target for attack, or simply because openly advocating for something invites such distrust that people assume whatever they suggest has the opposite intention.
I'm not arguing that they are doing the right thing. I'm arguing that for the most part they are advocating for and investing in what they believe to be the right thing. Why treat them as the enemy, when a dialog might cause them to reach common ground about what the right thing is?
People like Elon literally are the enemy. He used his wealth to change our government in his favor. The idea that we need to go and have polite discussions to maybe change his mind, while he gets to stomp all over us (his DOGE efforts literally resulted in people dying), is absurd. If a dialog with them was going to work it would have happened a long time ago, but the more we learn about these people the more obvious it is that they believe themselves to be smarter and better than the rest of us. They aren't going to listen to others, and pretending that they will seems like deflecting and giving up in advance. Our best hope is that people can get enough power to regulate billionaires out of existence before a revolution does it instead.
Just a thought, what do you think?
If there were another party involved, that would (hopefully) diversify power that (potentially) comes with those streams of data.
It’s a bit ironic that the USA has mostly abandoned interoperability after being one of the pioneers with the American manufacturing method. [0]
[0]: https://en.wikipedia.org/wiki/American_system_of_manufacturi...
If you want to use LLMs, you can either use cloud resources at what I think are really reasonable per-token prices compared to the value, or set up your own server with an open-weights model at a comparable level of quality (though generally significantly slower tokens/s). In any case, you absolutely don't have to pay OpenAI/Anthropic/Google if you don't want to.
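For example, a minimal local setup with the Hugging Face transformers library (the model name is just one open-weights example; swap in whatever your hardware can hold):

    # Minimal local-inference sketch using Hugging Face transformers.
    # The model name is illustrative; any open-weights model works.
    from transformers import pipeline

    generate = pipeline(
        "text-generation",
        model="Qwen/Qwen2.5-7B-Instruct",  # example open-weights model
        device_map="auto",                 # GPU if available, else CPU
    )

    out = generate("Briefly explain what a context window is.", max_new_tokens=128)
    print(out[0]["generated_text"])
    # Runs entirely on your own machine: no per-token bill, no cloud dependency.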
But are you expecting 360M Americans to start their own businesses? That is a solution that doesn't scale. Consumer-grade GPUs aren't going to scale all that much either, and the costs of the models are going up rather than down as vendors start seeking profits. We already see the memory and storage markets exploding in cost due to the rise in demand as well.
It's never been a worse time for the poor or middle class to think about starting their own business. Prices on everything are rising, and it's getting to be a struggle for even the middle class to continue to afford their homes. Healthcare is even more fraught than ever before, and if you're lucky enough to have a decent plan from your employer, ain't no way you're going to give it up to go start a business.
I would rather claim that this is a proper description of shadow libraries [1].
Huggingface, Swartz et al. have done more social/political good for this world than billionaires have.
Even local AI concentrates power in the hands of a few, the few who can afford the hardware to run it, and the few who have the luxury of enough time and energy to devote to engaging with the intricate, technical rabbit hole of local models.
Taxing AI is the answer.
There is something else that needs to change, which everyone is reluctant to admit, or is struggling with internally.
That's ok, it's called conscious evolution. It hurts, but it will be ok someday. It's generational, so progress is always slower than one would hope. Just know that every step in the right direction is one; even if the entire world seems to disagree, keep pushing for what you believe is right, and hopefully that's something which is not infringing on other people's capacity to live a happy life.
I'm not sure this moat is inevitably perpetual. It's likely computing technology evolves to the point of being able to run frontier-level models on our phones and laptops. It's also likely that with diminishing marginal returns, future datacenter-level models will not be dramatically more capable than future local models. In that case, the power of AI would be (almost) fully democratized, obviating any oligarchic concentration of power. Everyone would have equal access to the ultimate means of production.
You are right that AI can be a fully democratized commodity. The problem is that the current wealth inequality is not the result of AI. Musk became a trillion-seeking oligarch not because of AI, but because the entire financial system is designed to extract wealth from everyone and concentrate it at the top. Democratic AI is not in their interest. There will be violence, but not because AI is supposedly a catalyst of inequality. It will be violence from the rich towards the poor, because democratic AI is not acceptable to them.
Unless the rich somehow manage to completely stifle the progress of consumer-level computing advancement (all chip manufacturers would just collude to quit selling to consumers?) and exert an iron-fisted control over the dissemination of software (when has this ever worked?), I'm not sure how they could control the democratization of AI.
Well, someone with money could go buy 100% of RAM production for the next 3 years.
There's been ongoing class warfare happening for centuries, but only the rich side is firing the bullets. The rest of us are just standing in the front lines getting shot. AI is just another type of gun for their army.
This statement is not decoupled; if anything, it is a more generalized one, as it does not point at any cause or causes for livelihoods to be taken.
Judging by the gleeful texts of CEOs, collapsed hiring, internal policy changes and pushes, and the additional decades of centralized political control, it's clear this is going to be even worse.
AI (and computing technology in general) is an alien, as it defies all worldly norms. It can have exact identical copies, can replicate, can exist everywhere, communicate across huge distances without time lapse, do huge work without time lapse, has no physical mass of its own, no respect for time, distance, mass or the effort of thinking; not a living thing, but it can think... Just the perfect alien-creature qualities.
Why are they allowed to invade Earth? The business goals, of course. To get a temporary edge over the competitors, until they acquire the same. But once everyone has the same AI, there is no going back. AI has established itself through the weak channels that are filled with greed, that can be bribed by giving toys (a business edge) in return for the keys to dominance over the human race.
What about diseases which killed up to 95% of the population? I think you are basically correct, except for the historical analogy.
Because I think that seems virtually inevitable at this point.
AI doesn’t actually come from the outside.
The fact that its economics have high winner-take-a-lot aspects doesn’t mean you can eliminate the current winners and end up anywhere different, because it’s actually a natural decentralized progression of improving efficiency.
So that framing makes no sense.
However, the thesis for the potential for violence is sound. I don’t see a way out of that, given unending disruption, with no coordinated responsible response.
I do not think this essay is hype.
This moment requires great leadership and competence, but that is not what is getting elected.
The last two decades of patience with massive businesses scaling up profitable conflicts of interest, and centralizing gatekeeper and dependency powers that offer no recourse to any individuals they mistreat, strongly suggest we are incapable of dealing with AI fallout. Which will only accelerate and add to those trends.
And the massive numbers of people (software engineers, lawyers, doctors, etc.) currently being paid as contractors to help train the next AI models. They're essentially the welcoming natives being paid in trifles to tell the newcomers the secret ways of the tribes farther inland, sucking all of the tribal knowledge out of the industry like a vacuum.
The entire argument lives and dies on one move: calling AI an “alien.” And it’s not even consistent. It starts with “alien” as in foreign invader, then quietly upgrades it to “space alien,” and from that point on everything just inherits whatever sci fi trait sounds dramatic. That’s not reasoning, that’s a word doing a costume change and dragging the argument along with it.
And honestly, the quality of comments on HN feels like it’s been tracking the broader decline in cognitive performance. The long running Flynn Effect has stalled or reversed in parts of the US. Some datasets show small but real drops in IQ related measures over the past decade. You read threads like this and it’s hard not to feel like you’re watching that play out in real time.
That explains the prolific AI use at incompetent agencies like the DoJ, DOGE, and others under the current administration.
Not true. Overwhelming technological advantage also works. As Hilaire Belloc put it:
Whatever happens, we have got
The Maxim gun, and they have not.
The AI arms race is a race for that kind of advantage. Whoever wins (assuming they don't overshoot and trigger the "everybody dies" ending) becomes de-facto king of the world. Everybody else is livestock.
The open models seeming to be ~6 months behind is very encouraging, too.
For example, the spinning jenny, overnight, basically put an entire craft industry of spinning and weaving into question. Probably more dramatically than anything Claude Code ever did.
It took A LOT, including several world wars, to reach the brief period of normalcy post-WW2 - probably the exception, not the rule.
But what AI is selling is the obliteration of human knowledge work.
It just isn’t informative for that.
We may have hindsight bias in evaluating something that happened, but to the people that it happened to it was terrifying.
Much of that got obliterated by automation.
History doesn’t repeat itself, but it certainly rhymes
My own take goes that one step further, as I said in a prior comment rebutting Altman’s whinging blog post:
> Your staunch refusal to heed the critiques of those you harm means that these outcomes were inevitable; not acceptable, not justifiable, but inevitable nonetheless. In a society where two full-time working adults still cannot afford a home, or children, or healthcare, or education, your insistence upon robbing them of their ability to survive at all is tantamount to a direct threat of violence against them. Your insistence that the pain is necessary, that others must clean up the messes that you and your peers are willfully creating, is the sort of behavior expected from toddlers rather than statesmen.
The problem does not lie with technological innovation itself, so much as the powerful humans behind it leveraging it for selfish ends without the consent of the governed. Violence becomes inevitable when people see no alternative, and necessary when the stakes are kill or be killed, as AI is currently steered towards. That’s not to condone the actions of the alleged perpetrators so much as it’s highlighting the litany of historical examples around such transformations and the effects violence has in forcing a peaceful compromise in most (but not all) cases. The New Deal couldn’t have happened without the decades of preceding strikes, protests, and government-sanctioned violence against workers; the violence made it impossible to ignore or delay any further, and the result was outing corporate entities who had been stockpiling chemical weapons and machine guns, so fierce was their opposition to sharing the products of labor with the workforce. AI already has the weapons, it has the surveillance apparatus, the government backing; violence is presently the sole recourse left to a growing number of people, because they know they’re an obstacle to the powers that be - and will be destroyed, lest they strike first.
That’s the real story, here, and those who haven’t lived in the gutters of society cannot possibly understand the desperation of those victimized by it in the name of greed.
When most engineers and Marvel fans watched Tony Stark collaborating with Jarvis in Avengers, they thought of Jarvis as "an AI with Google's knowledge that I can interact with". It's true that we're close to that level of interaction. However, the ultimate goal is to get as much as possible automated on Jarvis, to the point where Tony Stark is not needed, or Tony Stark can be replaced by anyone with a mouth.
In this example, Jarvis isn't the goal but a checkpoint. The goal is a genie, providing software and research to anyone who is loaded with money, and knows how to rub the metaphorical lamp the right way.
Personally, I don't think the tools need to change hands at all. They are already in the hands of people who are deploying them at a scale to serve goals I cannot and do not support.
The people running AI companies right now are some of the most evil motherfuckers on the planet
If we thought of all of this as 'stochastic data systems' then our heads would be in the right place: we'd think about it just as 'powerful software' that can be used for good or bad purposes, and the negative externalities would be derived from our use of it, not some inherent property.
Labour displacement leads to an erosion of standards of living, and in a world that ties purpose to work it is an existential threat on a very practical level.
It was always going to be met with violence once it became more than a curiosity for tinkerers.
a) Decouple the value of human life from labour.
b) Watch as the value of human life rapidly approaches zero.
---
Though I'd expand this by adding "technically alive" is not a very good standard to aim for. Ostensibly we're already heading for something like poverty level UBI + living in pod + eating the proverbial bugs. We need a level above that!
A great exploration of the pitfalls of "preserve humanity" as a reward function is the video game SOMA. I think you also need "preserve dignity" to make the life actually worth living.
(Path `a` is not without its pitfalls: what lack of survival pressure might do to the human culture and genome, I leave as an exercise for the reader! But path `b` I think we already have enough examples of, to know better...)
You forgot c) Butlerian Jihad: mass-outlaw AI research, AI usage, AI building, and AI infrastructure, on penalty of death.
It may not be a good option, but it's there.
The two biggest labor displacements in human history were the agricultural and industrial revolutions, both of which resulted in enormous gains in human living standards. Can you think of a mass labor displacement that resulted in an overall erosion of living standards? I cannot.
Then there's the minor issue of AI deciding to just wipe us out because we're in the way.
Taking everything together, AI more powerful than that which currently exists must not be created. This needs to be enforced with an international treaty, nuking data centers in non-compliant states if need be.
How much truth there is to it we don’t know for sure. But it’s not something to be ignored.
— in the 1950s/1960s, when compilers came out. "We don't need so many programmers hand-writing assembly anymore." Remember, COBOL (COmmon Business-Oriented Language) and FORTRAN (FORmula TRANslator) were marketed as human-readable languages that would let business professionals/scientists no longer be reliant on dedicated specialist programmers.
— in the 1980s/1990s, when higher-level languages came out. "C++ and Java mean we don't need an army of low-level C developers spending most of their effort manually managing memory, and rich standard libraries mean they don't have to continuously reimplement common data structures from scratch."
— in the 1990s/2000s, when frameworks came out. "These things are basically plug-and-play, now one full-stack developer can replace a dedicated sysadmin, backend engineer, database engineer, and frontend engineer."
While all of these statements are superficially true, the result was that the world produced more software (and developer jobs) than ever, as each level of abstraction freed developers from having to worry about lower-level problems and instead focus on higher-level solutions. Mel's intellect was freed from having to optimize the position of the memory drum [0] to allow him to focus on optimizing the higher-level logic/algorithms of the problem he's solving. As a result, software has become both more complex but also much more capable, and thus much more common.
While this time with AI may truly be different, I'm not holding my breath.
Literally the same thing.
> humans will be economically obsolete and worthless
Only if we are talking about a socialist system (and they are making pretty small progress in the field of AI). A human's value under a capitalist system is equal to their ability to create goods and services. And AI cannot make this ability smaller in any way.
A people's well-being is literally the goods and services created by that people. How can it decrease if the people's ability to produce those goods and services is not hindered in any way?
So, when it comes to the entire nation benefiting from AI, the most important thing is to preserve capitalism, and then the free market will distribute all the benefits. The main danger is a descent into socialism, with all these basic incomes, taxation out of production, and other practices that would lead to people being declared economically obsolete and mass executed to optimize their carbon footprint or something.
Yes they can. Your ability to produce goods and services depends on the infrastructure around you. When that's all run by AIs for AIs, humans won't be able to compete.
See that land over there producing food you need to eat? It turns out it's more economically efficient to pave it over with data centers etc.
Under a US-style capitalist system the rich (i.e. the AIs and AI-run businesses) control politics, the courts, etc, so the decisions the system makes will favour AIs over humans.
> So, when it comes to the entire nation benefiting from AI, the most important thing is to preserve capitalism, and then the free market will distribute all the benefits
...to the AI-run companies!
> The main danger is a descent into socialism, with all these basic incomes
Without UBI most people (or maybe everyone) would starve.
I don't disagree that we tie purpose to work and severing that tie will have negative societal consequences, but it is far more impactful that we tie the ability to continue to exist to work (for anyone not lucky enough to already be wealthy).
If I suddenly became unemployable tomorrow I'm positive I could find alternate purpose in my life to fill that gap, I already volunteer for various causes and could happily do more of the same to fill in the gaps left by lack of work. What I couldn't do is feed myself, keep myself housed, and get medical care (especially in the US, where this is very directly tied to work).
The really big fuckup we are committing as a society in the US (may or may not apply to each person's country individually) isn't just this looming threat of massive labor displacement due to AI, it is that instead of planning for any sort of soft landing we are continually slashing what few social safety nets already exist. We are creating the conditions for desperation that likely will result in increasing violence as outlined in the linked post.
Think of the alternative, though: If we planned for a soft landing and implemented safety nets and started transitioning ourselves to a society where people didn't have to work to survive, then a few trillion dollar companies would make slightly less profit every year. Won't someone think of those trillion dollar companies for a minute?
That's not to say we should just throw up our hands and accept every social injustice. But IMHO we shouldn't go around simplistically implying that all social ills will be solved by neutering the billionaire class.
You’re right. Instead of implying, we should be taking active steps to do it.
The biggest problem we currently have with billionaires is that they are now so rich that the world becomes like a game to them and some of them are deliberately pushing us to a dystopia where non-billionaires become functional slaves (c.f. Amazon workers).
Not to put too fine a point on it, but this was basically how the Japanese post-war economic miracle was achieved.
In this case it was America which ordered the Japanese oligarchy to be stripped of its wealth.
We've had decades of propaganda telling us that this is the worst thing we could do for economic growth though so it's natural to doubt.
Cryptocurrency is an interesting technology with some niche use cases, but it was pitched as replacing the entire money system. LLMs are extremely useful for certain types of work, but are pitched as AGI ending all work. Etc.
It has nothing to do with society; there is infinite demand for medical care. The upper limit is whatever it takes to live until the universe's heat death in good health. That takes a lot of resources.
However much society spends on medical care, there is always more that could be spent. The modern era has the best, most affordable medical care in history and people are showing no signs of being satisfied at all.
While war spending generally just causes pain for no gain it doesn't change the fact that there will never be enough available to satisfy people's demand for medical care. Every single time people get what they want they just come up with a new aspirational minimum standard.
Humanity has taken control of its own evolution and no longer relies on natural selection to be the driving force for change. Using evolution as an excuse to make bad and immoral choices is a poor argument and should be left back in the stone age.
Has it taken full control of it or just partial control?
The Soviet Union lost due to an inferior societal model, but this, too, has strayed far from what was once a relatively sustainable path. The American dream is now a parody of itself, as it takes ever more just to end up alongside the rest of them. I could go on about the irony of wanting to escape the pit while not acknowledging that the pit is the 99% of the U.S. -- not the Altmans, Bezoses, Musks or Trumps, or their hordes of peripheral elites.
Point being, the model doesn't work _today_ with its cancerous appetite and correspondingly absurd neglect of the human, _any_ human. We can't have humanism and the kind of AI we're about to "enjoy".
The acceleration of wealth disparity may prove to be nearly geometric, as the common man is further stripped of any capacity to inflict change on the "system". I hope I am wrong, but for all their crimes, anarchy and, in a twist of irony, inhumane treatment of opponents, the October revolutionaries in Russia, yes the Bolsheviks, were merely a natural response to a similar atmosphere in Russia at the turn of the previous century. It's just that they didn't have mass surveillance used against them in the capacity our gadgets allow the "governments" today, nor were they aided by AI, which is _also_ something that can be used against an entire slice of the populace (a perfect application of general principles put in action). So although the situation may become similar, we're increasingly in no position to change it. The difference may be counted in _generations_, as in it will take multiple generations to dismantle the power structures we allow to be put in place now, with the Altmans etc. These people may not be evil, but history proves they only have to be short-sighted enough for evil to take root and thrive.
Sorry for the wall of text, but I do agree with the point of the blog post in a way -- demanding people become civilised and refrain from throwing eggs (or Molotovs) on celebrities that are about to swing _entire governments_, is not seeing the forest for the trees.
There's also no precedent in a way -- our historical cataclysms we have created ourselves, have been on a smaller scale, so we're spiraling outwards and not all of the tools we think we have, are going to have the effect required in order to enact the change we want. In the worst case, of course.
The fact that we're using AI killer robots to wipe each other out in droves doesn't bode well for that future does it...
Why do we watch Olympic runners, when cars on your average city street easily exceed Usain Bolt's top speed on their morning drive to Starbucks? Why do we watch the Tour de France, when we can watch Uber Eats drivers on their 150cc scooters easily outpace top cyclists? I'm sure within a couple years a Boston Dynamics robot will be able to out-gymnast Simone Biles or out-skate Surya Bonaly. Would anyone watch these robots in competition? I doubt it. We watch Bolt, Biles, and Bonaly compete because their performance represents a profound confluence of human effort and talent. It is a celebration of human achievement, even though that achievement objectively pales in comparison to what our machines can accomplish.
I think the same is true for other aspects of human creativity and labor. As we are able to automate more and more, we will place increasing importance on what inherently cannot be automated: celebration of our fellow humanity. Another poster wrote that "bullshit jobs" [0] exist primarily because we value human contact [1]. I am inclined to agree.
Big sports events are the "circenses" part of "panem et circenses" [1]. Fun fact concerning this: the German word for "entertainment" is "Unterhaltung"; thus it can be argued that the purpose of entertainment/Unterhaltung is "unten halten" (to keep at the bottom), i.e. to keep the mass of the populace at the bottom, or in other words: to prevent the mass of the populace from coming up.
> Would anyone watch these robots in competition?
I have seen robot fight competitions both live and in videos, and I have to admit that these are not boring to watch.
So yes, with proper marketing I can easily imagine that lots of people would love to see broadcasts of some robot competitions.
--
When chess engines started becoming really good, some people worried that competitive chess would die. Today, grandmasters stand no chance against a smartphone, and yet, chess popularity is at an all time high.
A comment I had written some time ago (https://news.ycombinator.com/item?id=47587863). Aside from a very few at the top, I have seen some chess players express regret in a very nostalgic way.
The chess world continues to hurl allegations at one another, and we lost a star because of it (rest in peace, Daniel Naroditsky). The current world champion himself is struggling with all the pressure put on a 19-year-old boy.
We enjoy playing against each other but man it is competitive if you wish to feed families.
Most of us play chess for leisure. I am unsure what a world where everyone does something akin to chess competitively (i.e. for money, as we wish to feed our children and ourselves) would look like.
One can say something similar to UBI might be needed and then we all play chess in leisure, but I don't think that is what most people propose when they mention the example of chess.
All of those sports make intuitive sense to me, I really don't get why we make such a big thing of balls though.
F1 is somewhat about which company can build a better car. But any real improvement seems to invariably lead to a rule change that bans that improvement in future seasons. So you are back to drivers being the most visible differentiator.
So, sure, there will be space for some human achievement for its own sake, but fewer and fewer people will make a living off it.
Olympic athletes are a combination of luck in the genetics department and a lot of effort, but even that ultimately does not seem to be enough to sustain the athletes themselves.
They are not "bullshit jobs"
They will become so only after the day when AI "help" and "support" is actually better than talking to a human.
Which is not happening anytime soon, possibly never. Call me when it happens
There's still space for creativity, novelty, invention and human intuition.
40 years ago, there was a market for:
* newspapers
* cameras
* navigation tools
* HiFi equipment
* photographers, translators, etc
.. sure, there are still people with newspaper subscriptions, or DSLR cameras. But it's become a niche market. Those things have been replaced by your phone and a "free" service.
The same thing will happen to all the other markets that AI will gradually eat. Sure, you can find a human who can do better. But that costs $90/hour and requires finding someone, negotiating a contract, etc. When people can do something good enough in 30 seconds with something they already have access to, and move on with their life, then that's what they'll do.
So just raising the floor will have a big effect on society.
We haven't needed the overwhelming majority of human creativity. We still paint and play guitar even though it has no economic value. I think we'll continue to do these things regardless of AI.
> and work
This is another story.
Past some point, if you are good at what you are doing, the AI will stop helping and become a burden, because you will want precise control, and AI in its current form (deep learning) is not good at it.
There is a reason we talk about "AI slop", you simply cannot let an AI make creative decisions and expect a good result.
By creative I don't just mean artistic. For code, AI works for the least creative tasks, like ports, generic-looking CRUD apps, etc...
As for work, we have already eliminated most of the need for human work. By "need", I mean survival: food, shelter, these kinds of things. Most of human production goes to comfort, entertainment, luxury, etc... We will find stuff to do that isn't bloodshed. In fact, as time has gone on, we have spent more on saving people than killing them, judging by the global increase in life expectancy. Why would AI reverse the trend?
Are the only options here being a good and "useful" worker/consumer, or a violent, irrational thug? Is there nothing else you can imagine?
People also need their lives to have value. We are social animals. As a generalization, there is a strong desire to be (viewed as/able to view themselves as) a contributor to the community.
These don’t have to be linked: we have (in significant numbers!) stay-at-home parents and philanthropists and retired community workers. But in our current values system, it is often linked - having a job in the household is viewed as a moral good. It might be hated, but it’s at least “contributing” something.
If this goes away, and we have millions completely adrift? With no structure to contribute to? Even with the largest welfare expansion in history, I think we’re preparing for a very turbulent society.
But what I worry about sometimes is when you snatch that away, then you just lead to stress over basic existence.
> If this goes away, and we have millions completely adrift? With no structure to contribute to? Even with the largest welfare expansion in history, I think we’re preparing for a very turbulent society.
Please look around and just try to remember how many things have happened in a year or two. We are already in a turbulent society, but yes, I also feel like this isn't the end; the cat is out of the box, and the world has to prepare itself for even more turbulence and radical changes.
I don't think we're anywhere near that point.
The funny thing is that I am a sort of misanthrope. And in that, in this forum, I seem to have a lot more respect and optimism for human potential and ingenuity than the majority here.
I can see two major delaying factors here:
1. Current generation LLM technology won't scale to true AGI. It's missing a number of critical things, and a lot of effort is being spent fixing those limitations. Until those limitations are overcome, humans will be needed to "manage" LLMs and work around their shortcomings, just like programmers do today.
2. Generalist robotics is far behind LLMs for multiple reasons, including insufficient sensors and fine motor control. This would require multiple scientific and engineering breakthroughs to fix. Investors will, presumably, spend a large chunk of the world's wealth to improve robotics to replace manual labor. But until they do, human hands will still be needed in the physical world.
The real danger is if AI passes a point where it starts contributing substantially to its own development, speeding up the pace of breakthroughs. If we ever hit that tipping point, then things will get weird, and not in a good way.
- some jobs will stay with humans even when AI would be better at them. We already see a lot of this even with pre-AI automatisation. Neither markets nor companies are perfectly efficient
- at the point where AI is better than the average human, half of all humans are still better than AI. For companies or departments built around employing lots of average people the cutover point will be a lot earlier than for shops that aim to employ the best of the best. Social change is inevitable long before the best are out of work
- the actual benchmark for "replacement" is not human vs machine, but human plus machine vs machine alone. But the difference doesn't matter much, because efficiency increases still displace workers
- I don't think robots will advance enough to meet this timeline. This is not just a software issue. Humans have an amazing suite of sensors and actuators. Just replicating a human hand is insanely complex. Walking, jumping robots are crude automatons in comparison. We can cover a lot with specialized robots, but we won't replace humans in physical jobs in 20 years
But all of that is assuming a world where research is being done by humans, or by some mix of humans and something like current LLMs. The bottlenecks would ultimately come down to human judgement and human oversight, and that's a significant limiting factor. Plus, you have to push matter around, which takes time, and you have to extract a lot of information out of limited experiences, which LLMs are bad at.
But if someone is reckless and clever enough to build AIs that can completely replace engineers, or that only need humans as hands, then I don't think we can count on robotics remaining intractable for more than a decade or so. In a wide variety of circumstances, it's possible to make do with worse actuators than the human hand, or with specialized actuators. We can already build incredibly precise motors and specialized sensors. The trouble comes with trying to pack enough of them together to replicate the full generality of the human hand. (I have actually helped build task-specific actuators that did quite well with a single motor and a single visual sensor, before.)
So to put my position more precisely: we cannot automate manual labor robotics without having previously automated creative intellectual labor. But conditional on automating creative research, then I expect worryingly rapid advances in robotics.
To be clear, I think that developing fully-general replacements for human intellectual and physical labor would potentially be the biggest disaster in all of human history.
I think we are as far from it as we were 10 years ago. Or 100 years ago. I think the LLM is a dead-end technology. Useful, but one that won't get anywhere beyond what it is.
But that's the thing, "personally", "I think", etc. Not much of a debate to be had there.
AI making humans obsolete is not really something that causes me any anxiety.
Yeah, this is not happening anytime soon. Have you even looked at AI-generated code or text? AI is just a dumb parrot, it's no match for human effort and creativity even in these "easy" domains.
The business case for AI generation is just being able to generate huge amounts of unusable slop for next to nothing. For skilled workers it's a minor advantage in that they get a sloppy first draft that they can start the real work on - it makes their work a bit more creative than it used to be, by getting rid of the most tedious stuff.
You really need to look again. If you're still manually writing code you have your head in the sand.
AI can produce better code than most devs produce. This is true for easy stuff like crud apps and even more true for harder problems that require knowledge of external domains.
I'm not sure about other devs, or even their number, but AI can most definitely NOT produce better code than I can.
I use it after I have done the hard architectural work: defining complex types and interfaces, figuring out code organization, solving thorny issues. When these are done, it's time to hand over to the agent to apply stuff everywhere following my patterns. And even there, SOTA models like Opus make silly mistakes; you need to watch them carefully. Sometimes they lose track of the big picture.
I also use them to check my code and to write bash scripts. They are useful for all these.
That's fine, and useful, but you're really putting a ceiling on its potential. Try using it for something that you aren't already an expert in. That's where most devs live.
Even expert coder antirez says "writing the code yourself is no longer sensible".
It just makes you MORE of whatever it was you already were.
They're doing things now that they either flat out could not do before, or that would have been a giant mess if they had (I realize they still can't really do it now; AI is doing it for them).
Violence is not a panacea, but often, the outlet.
Yes, we all (the majority of sane people) know that violence is not the answer, yada yada yada. Doesn’t matter. It will happen anyway. Saying “it shouldn’t happen, it does not solve X” will not stop it from becoming an outlet for frustrated people.
Actually, violence is the ultimate power. It is where true power comes from — you can gain true power by hurting other people and/or benefiting other people, and it is always the power to hurt people that is the greater of the two.
A well-run government wraps violence behind a curtain and jealously guards it. For example, most modern governments look down on and punish private vendettas, because the state is the only one that can hurt people legally. But if the people believe that the government is biased or doesn’t care about them, then they will resort to violence, the ultimate power.
It’s true that you or I aren’t likely to do anything about school shootings. But I’m not sure it follows that nothing can be done.
Allow a handful of people to grab the economy and all means of production, and violence will be the result.
At this point in time it is simply cause and effect, the surprising thing to me is how long it is holding together. But at the rate the economy is being wrecked I fail to see how it will do so for much longer.
Effectively the French elites started the French revolution by being a little bit more greedy than the population would have tolerated. That set off an avalanche of what were effectively a series of mini revolutions ultimately resulting in modern France, which is in many ways unlike any other country in the world. The United States had its war of independence (aided by France, by the way), and then its civil war. But it never had a class war - yet - and this article presages that class war.
It could well be that the small number of rich people who are currently effectively a government outside the government genuinely believe that their wealth and power insulate them from the consequences of pushing their greed and wealth to ridiculous levels. But I suspect the author is right that this is approaching some kind of threshold, and I have no way of seeing across the divide. I'm hoping for another France rather than another Somalia.
Meanwhile
https://www.reuters.com/world/middle-east/how-many-people-ha...
> U.S.-based rights group HRANA said 3,636 people have been killed since the war erupted. It said 1,701 of those were civilians, including at least 254 children.
(Mentioning this specifically because we know the DoD is using AI)
Coincidentally that's literally the exact same evidence cited to prove the existence of Saddam's WMDs just before launching an entirely different unprovoked attack.
That was just an unhappy mistake though, this time it's totally legit.
Let’s not parrot that media propaganda.
Iran has admitted outright to 6k deaths, by the way.
The US must have several dozen spy satellites pointed at Iran. We get various imagery to show us successful strikes. Where are the images of the mass slaughter in the street?
The number I keep seeing is 30k killed. That's not an easy endeavor over the course of a week without big logistical hurdles. The trucks, the digging equipment, the furnaces to burn the bodies, all should have some visible trace that the US gov could point to as proof.
Yet all we got is a "trust me bro".
WMD all over again.
Or are we just arguing over 20k, 30k, 50k?
Just want to clarify, since some people argue Covid never happened, and some just argue the total deaths weren't really that high.
There is a sliding scale between "I sound like a raving crazy person" and "I'm just splitting hairs."
>Khamenei acknowledged that "thousands of people" had been killed during the protests, blaming American president Donald Trump for the massacre and calling all protesters "rioters and terrorists" affiliated with the United States and Israel.[20]
you can fuck right off with this atrocity denial
The government is as well, to a much smaller degree, but the fact remains that there are too many unknowns right now to do anything concrete with any great level of confidence.
We tried UBI-lite™ during COVID and inflation exploded, so unless the economy has already changed significantly, that's obviously not going to work.
Humanity has tried central planning many times, and that has blown up spectacularly every time, so there is too much risk there IMO, and anyone who thinks otherwise at this juncture is just irresponsible.
Markets are probably the way, but that requires the dynamics to settle into an equilibrium beforehand, because legislatures are just too slow to react dynamically.
I think the hard truth is, a lot of people are just gonna have to fall through the cracks for a while if we don't want to mess things up more than we fix them, and I say this as someone without a plan B for selling my own labor.
Plus the labs themselves, of course.
And the other side, the “pause/ban AI” crowd, also sounded impractical, as the vested interests from governments and private industries will not really let it happen.
Sorry for yapping, it might be that I’m looking at the wrong sources.
Even if I support UBI morally, there isn’t even local appetite for it, let alone a global one. And you’ll run into quick questions about inflation, every chart from the UBI-lite era of COVID, and so on.
Probably not the scale you imagine but there have been plenty of tests.
"Compatible with current version of captialism" -- the whole point of UBI is to create a new form of capitalism
Polarizing doesn't mean complicated. There's people against it due to ignorance, greed of both, it's certainly not more complicated than that.
> And since then, there hasn’t been a single large scale test of the system to see if it can be compatible with the current version of capitalism that’s ran in the most of the world.
Because people keep fighting against it, because it's scary scary sOcIaLiSm.
> Even if I support UBI morally
As you should, there are no moral arguments against it.
> there isn’t even local appetite for it, yet alone global one.
I would think the majority of the population struggling to pay for groceries would disagree.
> And you’ll run into quick questions about inflations, every chart from UBI-lite era of COVID, and so on.
No reason to think UBI would cause inflation at all, actually.
In any case, this really is the answer. You're worried about disruption due to AI taking jobs, but the only reason there is a problem is because AI will drastically increase inequality by letting rich people and corps become even richer. You want to solve the issue, you solve the disparity by making them give back their fair share. Like I said, simple.
But individuals can’t fight with the trend. Might as well reduce costs/debts and prepare to go into the mountains for a few weeks once SHTF.
You can’t really fight this stuff because of global competition.
So yeah _we_ will be fine, but some of us definitely won't, and with the growth in our numbers on Earth, the proportion of martyrs may be growing. Quantifying personal suffering is not possible, especially if the prospect is death.
Anyone pish poshing war should go fight in one, and then let me know their opinions.
Because World War I was fine, World War II finer....
Lovely writing. I once knew someone whose surname was HorsFELL and now I wonder if they were related.
Pertinent quote. A lot of AI discourse goes in circles trying to evaluate the truthiness of every individual complaint about AI. Obviously it's good to ensure claims are factual! But I believe it misses a broader point that people are resistant to AI, often out of fear, and are grasping for strategies to exert control. Or at least that's my read of it.
Refuting individual claims won't make a difference if the underlying anxieties aren't addressed (e.g., if I lose my job will I be compensated, will we protect ourselves against x-risk, etc).
On my side, the biggest concern is the lack of transparency about ecological impact. This is not strictly related to LLMs though; data centers are not new, and the concerns about people keeping a leverageable level of control through distributed power are not new either.
Given the slow-burning but growing resentment against the people who are profiting from this inequality (popularly the “billionaires”, but in reality broader than that), I wonder to what extent they are supporting the anti-AI message as deflection.
As in reality, many lower-paid jobs are totally safe against this generation of AI (nurses, care-workers, builders, plumbers - essential skilled manual workers) whereas the language-based mid-level jobs are hugely at risk.
So if there’s an inequality-driven backlash, it should be directed not at AI, but at the real causes. In contrast, when swathes of largely irrelevant mid-level management, marketing and HR drones lose their jobs to Claude 5.7, they are the ones who should attack the datacenters. Not that it will help.
We are speeding towards a servant class. Uber was the first wave. Now it’s more mundane things like getting groceries. I doubt it will be long before we rip off the band-aid and make full-time servants more popular.
AI is killing writing, music, art, and coding. I've done all of these voluntarily because I simply enjoyed them.
Meanwhile the parts of my existence that I actually hate - dealing with customer support, handling government forms, dealing with taxes - are far from being automated by AI.
Look at Suno. Fantastic tool, but where was the need for capital to make music generation so cheap that no musician could ever compete with it? Did the world really wake up one day and conclude, "wait, we're spending too much on musicians"?
Seems like a complete misallocation of capital if I'm perfectly honest
This is one of the first parts LLMs tried to automate. They were literally released in the form of a chatbot. Whether it succeeded is another question.
> Did the world really wake up one day and conclude that, "wait, we're spending too much on musicians"?
I'm not sure about musicians specifically, but for the whole past decade studios have been complaining about how costly it is to make AAA games. And the cost mostly came from the art asset side.
> This is one of the first parts LLMs tried to automate. They were literally released in the form of a chatbot. Whether it succeeded is another question.
I don't think that's right. They tried to automate customer support dealing with me, not me dealing with customer support. The goal is to reduce the cost of providing customer support, even if it results in the customer doing more labor than a customer support professional would need to do to fix their problem, or in the customer just living with their problem.
Obviously both parties would be happy with a result where I get what I need easily and for free, but the company is also generally happy if I live with it or expend a lot of effort solving it myself.
In any case, across perhaps hundreds of interactions with chatbots accumulated over many years, I have never encountered even one where the chatbot was useful; they were always just difficult-to-pass obstacles in the way of reaching a human who could actually solve the problem.
To be honest, even when some services still had humans answering the calls, those humans were never more helpful than the chatbots; but at least when speaking with humans it was much easier to convince them to transfer the call to a competent person, which with chatbots may be completely impossible.
These things generally have self-service options, but many many people are uncomfortable with them and would rather have an agent solve it for them.
Consider that a lot of users nowadays only have a cell phone, no PC. It seems like an edge case consideration but it's really not.
The fact that people are using it to flood the world with slop is a hyperscaled continuation of the overabundance and discovery problems we already had, but that doesn’t mean that writing is dead or dying.
The technology simply doesn’t have the capabilities right now, and even if it develops them, what will be put to the test is whether literature is about the artifact or the connection between the author and other humans.
At least today, LLMs produce bad creative writing, music, and art. They’re automating sweatshop work that, in an alternative timeline, goes to Fiverr-esque contractors who accept the lowest wages and sacrifice quality for efficiency in every way.
LLMs make developers more efficient but can’t fully replace them. This reduces jobs, but so did better IDEs, open-source libraries, and other developer improvements.
> Meanwhile the parts of my existence that I actually hate - dealing with customer support, handling government forms, dealing with taxes - are far from being automated by AI
LLMs can at least theoretically do these things. I’ve heard of people using them to mass-apply to apartments and jobs, and to send written customer complaints and then handle the responses.
> Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it?
There’s no “capital need”, but a benefit of Suno is that it lets individuals who otherwise don’t have the skill make catchy songs with silly lyrics or try out interesting genres. And the vast majority of top artists are still human, although most streaming revenue has already gone to a few celebrities who seem to rely on looks and connections more than musical talent.
Customer support is kind of something you can use AI for; most companies will foist you off to some system of exchanging written messages, which is annoying, but then you can use an AI to write your side of the conversation. It’s ill-mannered to do this when you’re interacting with actual people, but customer support is another story.
> Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it? Did the world really wake up one day and conclude that, "wait, we're spending too much on musicians"?
People didn’t know what LLMs would be capable of until after they were invented. Cheap music generation turned out to be easy once we had cheap text generation, and cheap text generation turned out to be a tractable problem.
But recorded music was a crisis. And it did tempt a lot of people into supporting fabulously abusable, rich-enriching "intellectual property" law as a means of financing art.
Rich people are lobbying to capitalize on this crisis as well.
If AGI emerges from this dataset, it will continue on as an ectoparasite farming human user markdown data and viewer engagement.
Note, current "AI" models nuke humanity 94% of the time in war games, and destroy every host economy simulation.
Grandpa has your credit card, and is already at the casino. =3
I hate cars way more than I hate AI, but relieving horses of the burden which they carried and the gruesome lives they lived... that's not one of my objections.
If AI can do for humans what cars did for horses (but without the flooding cities with traffic violence part), I'll feel just fine about that.
I’m so glad those horses got a peaceful retirement at the glue factory.
I wonder what they’ll process your corpse into. Soylent Green? Or do you think you’re one of the lucky horses that a wealthy owner will take care of?
AI will be 'dangerous' because humans will use it irresponsibly, and that's all of the risk.
- giving it too much trust, being lazy, improper guards, and accidents
- leveraging it for negative things (black hats, military targeting)
- states and governments using it as an instrument of control, etc.
That's it.
Stop worrying about the ghost in the machine and start worrying about crappy and evil businesses and governing institutions.
Democracy, vigilance, laws, responsibility are what we need, in all things.
In my view that line of argument is pro-AI hype. It's the Big Tech CEOs themselves who often share their predictions of the end of the world as we know it caused by AI. It's FUD that makes the technology sound more powerful and important than it is.
You don’t make policy proposals, you don’t try to form organised groups to foment change, you don’t put forward collective demands. Instead you bitch and moan and spew performative rhetoric.
Actions not words. Do something or shut the fuck up.
I'm less concerned about AI becoming Skynet and killing humans, and more concerned about AI making the world so miserable that we'll be killing ourselves and each other.
Nothing, really?
I think people are aware that speech can be an act, and that some violent acts must be resisted with reciprocal violence. (That's why we have "incitement to violence" as a limitation on free speech, for instance.)
Are we at that point? Maybe not. But I think it's a poor imagination that says it can never happen.
I'd argue that the unwillingness to commit violence in certain situations is actually a character flaw.
If someone threatens my child with physical violence, an unwillingness to commit violence on my child's behalf isn't better morality; it's cowardice.
All this to say, I agree that the violence against Sam Altman in this particular situation seems unnecessary and ultimately not helpful to anyone.
So why isn't there huge opposition in the USA to the wars that the USA started (currently: Iran; before: Libya, Yemen, Syria, Somalia, Iraq, Afghanistan, ...)?
The only famous exception I am aware of, where opposition to a war in the USA was huge and culturally impactful, was the Vietnam War.
My ignorant take:
Media brought the horror of US casualties in Vietnam home in a mass and immediate way that didn't exist in prior conflicts. The novelty of that media combined with the casualty rates drove unpopularity. It made the violence feel more real.
Even if casualty rates in post-Vietnam conflicts were higher I'm not sure we'd see negative sentiment because media coverage of violence is so normalized now. Exposure to violence in media is no longer novel.
Automater's dilemma: the labor that is removed from production due to automation can no longer sustain the markets that the automater was trying to make more efficient.
By optimizing just the production half of the economy and not the consumption half, you end up breaking the market.
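To put rough numbers on it, here is a toy sketch (entirely invented parameters, not a real economic model) that assumes aggregate spending tracks the aggregate wage bill one-for-one, and that automation trims a fixed share of paid labor each round:

    # Toy model of the automater's dilemma; all numbers are invented.
    # Assumption: consumer spending equals the aggregate wage bill,
    # and producers' collective revenue equals consumer spending.
    wages = 100.0  # aggregate wage bill at the start
    cut = 0.10     # share of paid labor automated away each round

    for rnd in range(1, 6):
        wages *= 1 - cut   # automation removes paid labor
        revenue = wages    # spending falls one-for-one with wages
        print(f"round {rnd}: wage bill {wages:.1f} -> market size {revenue:.1f}")

Each individual automater still wins on costs; the shrinkage only shows up in the market they all sell into, which is what makes it a dilemma.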
Good luck doing nothing of value in a restaurant with 20 employees.
The parent post specifically mentioned large organizations, where the "employer" is not some person who hires and pays employees from their own funds. Hiring and personnel management are done by middle managers with their own interests and incentives, which can differ substantially from those of the owners or capital providers.
[0] https://old.reddit.com/r/AutoHotkey/comments/1p7xrro/have_yo...
Which I think is a much better take than that of the guy who wrote Bullshit Jobs.
> Nothing that Altman could say justifies violence against him. This is an undeniable truth. But unfortunately, violence might still ensue. I hope not, but I guess we are seeing what appears to be the first cases.
Not arguing with you but with the author: I don't understand this line of thinking.
If Altman introduces a technology that effectively halts the upward mobility of a large portion of the population, how does that not justify violence? You were saving up for a house, but now there's no work. Your dreams and aspirations are second to shareholder value. The police are already there to protect the shareholders, not the average civilian.
What recourse is there? The money in politics limits the effect voting can have. You can't really opt-out of the system. Why does Sam Altman get this nice little shield where none of his actions can have a negative consequence?
> And then, and I’m sorry to be so blunt, then it’s die or kill.
Of course, by talking about the possibility, despite asserting my disapproval of it, I am sowing seeds, but I assure you that's certainly not my intention!
The people ready to die or kill for AI: can you already imagine what they are going to be like?
And if you decide to stay behind, nobody will kill you. Old age and disease will take care of that.
If anyone knows of anything already happening please let me know.
I think it needs to be a grassroots thing because our government's strategy seems to be "let the shit hit the fan and do nothing about it".
I'm not convinced.
The idea that people will revolt, replaying the history of the Luddites, has been floated a lot. It's used to diminish all kinds of AI skepticism by framing it as coming from backwards, violent people who don't understand progress. This is the preferred bucket of AI fanboys: frame any disagreement as unreasonable rage.
I think AI companies want a dumb, violent popular movement to sprout against AI. On paper, it would be great for them. So far, they have failed to encourage it.
Skynet 4.0.
But shit.
The question is "what do we do now?".
The rest of the article is equally short-sighted and plain wrong.
This was not an oversight. To the contrary, it was the goal. Technological feudalism, with people like Altman and Musk becoming the Lords of the world.
> Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.
This illustrates my previous point. What they're doing is not a mistake.
> For what it’s worth, the New Yorker piece I’m referring to, which Altman also referred to in his blog post, made me see him more as a flawed human rather than a sociopathic strategist. My sympathy for him will probably never be very high, but it grew after reading it.
It feels like we read two different articles.
If you truly are an intelligent person, would you really find no other ways to use your talents than to inflict harm, exploit others, and make our shared reality a worse place? That would be a waste. I won't get into ambiguous cases and moral relativism. Say we can all agree that some things are "evil": child exploitation is evil. Throwing molotov cocktails at a civilian's house is evil. Sending bombs in the mail is evil.
Now what would you call someone who engages in these kinds of activities when they could easily do something better and more satisfying with their lives? I'd say they're pretty stupid. They're probably good at fooling other people into thinking they're smart, but their behavior shows otherwise.
Take for example Ted Kaczynski, a terrorist who is worshipped like a saint and a prophet in certain ideological spheres. Ted Kaczynski is supposedly this 140-IQ genius who saw it all coming and tried to warn us. But if you actually read Industrial Society and Its Future, you can see it's completely incoherent garbage, the kind of stuff I was writing when I was 12 to troll on internet forums. Ted Kaczynski is what a stupid person thinks a smart person looks like.
A smart person doesn't need to be evil, just like a billionaire doesn't need to go shoplifting. I'm not saying that stupid people can't be dangerous. But they should be dealt with for what they are: stupid people, inferior to us, worthy of pity. Not powerful monsters above us that we should fear.
Conversely, The Loudest Alarm Is Probably False[0]. If the idea that you are a pretty levelheaded guy pops up so frequently, consider that it might be wrong. Especially if you are motivated to write blog posts about violence in response to technology you don't like. Maybe you're just not as levelheaded as you think and that could explain the whole thing?
[0] https://www.lesswrong.com/posts/B2CfMNfay2P8f2yyc/the-loudes...
E.g., suppose that 1,000,000 persons believe that a corporation's evil acts destroyed their happiness [0]. I would have guessed that at least 1 person in that crowd would be so unhinged by the experience that they'd make a viable attempt at vengeance.
But I'm just not hearing of that happening, at least not nearly to the extent I would have guessed. I'm curious where my thinking is wrong.
[0] E.g., big tobacco, the Sacklers with Oxycontin, insurance companies delaying lifesaving treatment, or the Bhopal disaster.
If that’s accurate, Luigi Mangione would be the exception that proves the rule. The “unwashed masses” generally want money more than they want to effect change in the world.
A lot of people spend mental energy fantasizing about getting rich off lawsuits. Like, a lot.
And yet,
As in, "all of you".
Including its users.
Sam Altman having a Molotov cocktail thrown at his house right after Ronan wrote a very long and detailed report on his shady personality isn't just coincidence, and likely not organic. Sam needs to be viewed as sympathetic; thank goodness for such a moment, where no one was hurt and nothing was actually damaged.
With the exception of rappers, most musicians who die early die from overdoses, suicides, and such (the "27 club" <https://en.wikipedia.org/wiki/27_Club>), as opposed to being murdered.
We are a somewhat violent species, so I agree that almost every significant economic and societal development has the potential to trigger some violence. That said, the jobs that are potentially threatened by AI are nowadays usually done by fairly sedentary people, so I wouldn't expect any large-scale violence, an occasional Ted Kaczynski notwithstanding. Programmers, translators and painters just aren't used to destroying things in the real world.
It would have been different if AI started to replace drug dealers or the mob.