Techno-Feudalism and the Rise of AGI: A Future Without Economic Rights?
150 points | 13 hours ago | 20 comments | arxiv.org | HN
jandrewrogers
7 hours ago
[-]
A critical flaw in arguments like this is the embedded assumption that the creation of democratic policy is outside the system in some sense. The existence of AGI has the implication that it can effectively turn most people into sock puppets at scale without them realizing they are sock puppets.

Do you think, in this hypothesized environment, that “democratic policy” will be the organic will of the people? It assumes much more agency on the part of people than will actually exist, and possibly more than even exists now.

reply
PicassoCTs
10 minutes ago
[-]
You have a world where most people act against their own economic interests - I think the "mass mind hacking" achievement can be considered unlocked. It's just expensive and exclusive.
reply
drdaeman
3 hours ago
[-]
I suspect you’ll probably have to determine the nature of free will (or lack thereof) to answer this. Or, well, learn empirically :-)
reply
jongjong
23 minutes ago
[-]
The Greeks figured out thousands of years ago that the best way to implement democracy was via random selection. Yet here we are, where everyone believes that 'democracy' necessitates 'voting', totally ignoring all the issues that come with voting.

The concept of voting, in a nation of hundreds of millions of people, is just dumb. Nobody knows anything about any of the candidates; everything people think they know was told to them by the corporate-controlled media, and they only hear about the candidates the media chose to cover; basically only candidates chosen by the establishment. It's a joke. People get the privilege of voting for which party will oppress them.

Current democracy is akin to the media making up a story like 'The Wizard of Oz' and then offering you a vote for either the Lion, the Robot or the Scarecrow. You have no idea who any of these candidates are; you can't even be sure they actually exist. Everything you know about them could literally have been made up by whoever told the story; and yet, when asked to vote, people are sure they understand what they're doing. They're so sure it's all legit, they'll viciously argue their candidate's position as if the candidate were a family member they knew personally.

reply
PicassoCTs
8 minutes ago
[-]
It would make more sense to vote on policy by stating priorities, with impossible combinations rejected (you can't have taxes reduced while demanding more spending on services) - and then have the policy votes mapped to the corresponding candidates.
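A minimal sketch of that scheme, with hypothetical policy axes, consistency rules, and candidate platforms (all names here are invented for illustration, including the Wizard-of-Oz candidates from the parent comment):

```python
# Voters take positions on policies (+1 = in favor, -1 = against, 0 = neutral),
# inconsistent ballots are rejected, and each valid ballot is mapped to the
# candidate whose platform agrees with it on the most axes.

POLICIES = ["lower_taxes", "more_services", "green_energy"]

# Hypothetical platforms for the candidates.
CANDIDATES = {
    "lion":      {"lower_taxes": 1,  "more_services": -1, "green_energy": -1},
    "scarecrow": {"lower_taxes": -1, "more_services": 1,  "green_energy": 1},
}

def is_consistent(ballot):
    # The "impossible vote" rule from the comment above:
    # you can't demand lower taxes AND more spending on services.
    return not (ballot["lower_taxes"] == 1 and ballot["more_services"] == 1)

def map_ballot(ballot):
    if not is_consistent(ballot):
        raise ValueError("impossible vote: can't cut taxes and expand services")
    # Score each candidate by how many policy positions match the ballot.
    score = lambda name: sum(ballot[p] == CANDIDATES[name][p] for p in POLICIES)
    return max(CANDIDATES, key=score)
```

For example, a ballot of `{"lower_taxes": 1, "more_services": -1, "green_energy": -1}` maps to "lion", while a ballot demanding both lower taxes and more services is rejected before it can be counted.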
reply
trhway
2 hours ago
[-]
The most critical flaw is thinking that any policy on its own could solve the issue. Technology will find a way, no matter the policy.

A society built on empathy would be able to work out any issue brought by technology, as long as empathic goals take priority. Unfortunately our society is far from being based on empathy, to say the least. And in such a society, technology and the people wielding it will always work around and past the formal laws, rules and policies. (That isn't to say all those laws and rules aren't needed. They are like levees and dams - necessary fixes, local in time and space, which won't help against the global ocean rise that AGI and robots (even less-than-AGI ones) will be.)

Maybe it is one of the technological Filters - we didn't become empathic enough (and I mean not only at the individual level; we are even less so at the level of our societal systems) before AGI, and as a result we won't be able to instill enough empathy into the AGI.

reply
foxglacier
3 hours ago
[-]
Normal human communication already does that. Do you really think almost any of the people who share their political opinions came up with them by being rational and working it out from information? Of course not. They just copied what they were told to believe. Almost nobody applies critical thought to politics, it's just "I believe something so I'm right and everybody else is stupid/evil".
reply
drdaeman
3 hours ago
[-]
> Almost nobody applies critical thought to politics

Because they have different concerns, and time and attention are scarce. With all possible social changes like the article suggests, this focus could change too. Ultimately, when things get too bad, uprisings happen and sometimes things change. And I hope that the more we (collectively) get through, the higher the chances that we start noticing the patterns and stopping things early.

reply
f6v
1 hour ago
[-]
> With all possible social changes like the article suggests this focus could change too.

I have an anecdote from Denmark. It's a rich country with one of the best work-life balances in the world, socialized healthcare, and a social safety net.

I noticed that during the election, the ads showed just the candidate's face and party name. It's like they didn't even have a message. I asked why. The locals told me nobody cares because "they're all the same anyway".

Two things could be happening: either all the candidates really are the same, or people choose to focus their free time and resources on doing the things they like. My feeling tells me it's the second.

reply
NitpickLawyer
2 hours ago
[-]
> Almost nobody applies critical thought to politics

Not only that, but they actively stop applying critical thinking when the same problem is framed in a political way. And yes, it's both sides; and yes, the "more educated" people are, the worse their results (i.e. an almost complete reversal when the same problem is reframed from skin-care products to gun control). There's a recent paper on this, also covered and somewhat replicated by popular youtubers.

reply
edg5000
5 hours ago
[-]
I've spent many years moving away from relying on third parties: I got my own servers, and do everything locally and with almost no binary blobs. It has been fun, saved me money, and created a more powerful and pleasant IT environment.

However, I recently got a 100 EUR/month LLM subscription. That is the most I've ever spent on IT, excluding a CAD software license. So I've made a huge 180 and am now firmly back in the lap of US companies. I must say I enjoyed my autonomy while it lasted.

One day AI will be democratized/cheap enough that people can self-host what are now leading-edge models, but it will take a while.

reply
cco
5 hours ago
[-]
Have you tried out Gemma3? The 4B-parameter model runs super well on a MacBook, about as quickly as ChatGPT 4o. Of course the results are a bit worse, and other product features (search, codex, etc.) don't come along for the ride, but wow, it feels very close.
reply
int_19h
3 hours ago
[-]
On any serious task, it's not even close. There's no free lunch.
reply
Folcon
2 hours ago
[-]
Out of curiosity, what use case or difference caused the 180?
reply
WillAdams
9 hours ago
[-]
The late Marshall Brain's novella "Manna" touches on this:

https://marshallbrain.com/manna1

The idea of taxing computer sales to fund job re-training for displaced workers was brought up during the Carter administration.

reply
fy20
5 hours ago
[-]
I came across this a couple of weeks ago, and it's a good read. I'd recommend it to everyone interested in this topic.

Although it was written somewhat as a warning, I feel Western countries (especially the US) are heading very much towards the terrafoam future. Mass immigration is making it hard to maintain order in some places, and if AI causes large-scale unemployment it will only get worse.

reply
amanaplanacanal
1 hour ago
[-]
> Mass immigration is making it hard to maintain order in some places

Where is this happening? I'm in the US, and I haven't seen or heard of this.

reply
VikRubenfeld
7 hours ago
[-]
Is a future where AI replaces most human labor rendered impossible by the following consideration:

-- In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI

-- Therefore the AI generates greatly reduced wealth

-- Therefore there’s greatly reduced wealth to pay for the AI

-- …rendering such a future impossible

reply
heavyset_go
4 hours ago
[-]
The problem with this calculus is that the AI exists to benefit their owners, the economy itself doesn't really matter, it's just the fastest path to getting what owners want for the time being.
reply
petermcneeley
7 hours ago
[-]
This is a late 20th century myopic view of the economy. In the ages and the places long before, most of human toil was enjoyed by a tiny elite.

Also, "rendering such a future impossible" is a retrocausal way of thinking - as though a bad event in the future makes that future impossible.

reply
PaulDavisThe1st
6 hours ago
[-]
> This is a late 20th century myopic view of the economy. In the ages and the places long before, most of human toil was enjoyed by a tiny elite.

And overall wealth levels were much lower. It was the expansion of consumption to the masses that drove the enormous increase in wealth that those of us in "developed" countries now live with and enjoy.

reply
palmfacehn
5 hours ago
[-]
Your first premise has issues:

>In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI

Productivity increases make products cheaper. To the extent that your hypothetical AI manufacturer can produce widgets with less human labor, it only makes sense to do so where it reduces overall costs. By reducing cost, the manufacturer can provide more value at a lower price to the consumer.

Increased productivity means greater leisure time. Alternatively, that time can be applied to solving new problems and producing novel products. New opportunities are unlocked by the availability of labor, which allows for greater specialization, which in turn unlocks greater productivity, and the flywheel of human ingenuity continues to accelerate.

UBI is another thorny issue. It may inflate the overall supply of currency and distribute it via political means. If the inflation of the money supply outpaces the productivity gains, then prices will not fall.

Instead of having the gains of productivity allocated by the market to consumers, those with political connections will be first to benefit, as per Cantillon effects. In the worst-case scenario this might include distribution of UBI via social credit scores or other dystopian ratings. However, even under what advocates might call the ideal scenario, capital flows would still be dictated by large government-sector or public-private-partnership projects. We see this today with central bank flows directly influencing Wall St. valuations.

reply
TheOtherHobbes
14 minutes ago
[-]
> Increased productivity means greater leisure time.

Productivity has been increasing steadily for decades. Do you see any evidence that leisure time has tracked it?

IMO what will actually happen is feudal stasis after a huge die-off. There will be no market for new products and no ruling class interest in solving new problems.

If this sounds far-fetched, consider that we can see this happening already. This is exactly the ideal world of the Trump administration and its backers. They have literally slashed funding for public health, R&D, and education.

And what's the response? Thiel, Zuckerberg, Bezos, and Altman haven't said a word against the most catastrophic reversal of public science policy since Galileo and the Inquisition. Musk is pissed because he's been sidelined, but he was personally involved, through DOGE, in cutting funding to NASA and NOAA.

So what will AI be used for? Clearly the goal is to replace most of the working population. And then what?

One clue is that Musk cares so much about free speech and public debate he's trying to retrain Grok to be less liberal.

None of them - not one - seem even remotely interested in funding new physics, cancer research, abundant clean energy, or any other genuinely novel boundary-breaking application of AI, or science in general. They have the money, they're not doing it. Why?

The focus is entirely on building a nostalgic 1950s world with rockets, robots, apartheid, corporate sovereignty, and ideological management of information and belief.

And that includes AI as a tool for enforcing business-as-usual, not as a tool for anything dangerous, original, or unruly which threatens their political and economic status.

reply
edg5000
5 hours ago
[-]
If I may speculate the opposite: with cost-effective energy and a plateau in AI development, the per-unit cost of an hour of AI compute will be very low; however, the moat remains massive. So a very large number of people will only be able to function (work) with an AI subscription, concentrating power in those who own AI infra. It will be hard for anybody to break that moat.
reply
Davidzheng
7 hours ago
[-]
No, the AI doesn't actually need to interact with the world economy; it just needs to be capable of self-subsistence by securing its own energy and material usage. When AI takes off completely, it can vertically integrate with the supply of energy and materials.

Wealth is not a thing in itself; it's a representation of value and purchasing power. The AI will create its own economy once it is able to mine materials and automate energy generation.

reply
zaptrem
7 hours ago
[-]
Alternatively:

-- In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI

-- Corporate profits drop (or growth slows) and there is demand from the powers that be to increase taxation in order to increase the UBI.

-- People can afford the products and services.

Unfortunately, with no jobs the products and services could become exclusively entertainment-related.

reply
VikRubenfeld
7 hours ago
[-]
Let's say AI gets so good that it is better than people at most jobs. How can that economy work? If people aren't working, they aren't making money. If they don't have money, they can't pay for the goods and services produced by AI workers. So then there's no need for AI workers.

UBI can't fix it because a) it won't be enough to drive our whole economy, and b) it amounts to businesses paying customers to buy their products, which makes no sense.

reply
kadushka
4 hours ago
[-]
> So then there's no need for AI workers.

You got this backwards - there won't be any need for humans outside of the elite class. 0.1% or 0.01% of mankind will control all the resources. They will also control robots with guns.

Less than 100 years ago we had a guy who convinced a small group of Germans to seize power and try to exterminate or enslave the vast majority of humans on Earth - just because he felt they were inferior. Imagine if he had superhuman AI at his disposal.

In the next 50 years we will have different factions within the elites fighting for power, without any regard for the wellbeing of the lower class, who will probably be contained in fully automated ghettos. It could get really dark really fast.

reply
Kaibeezy
1 hour ago
[-]
This is ringing a bell. I need to re-read The Diamond Age… or maybe re-watch Elysium… or Soylent Green… or…
reply
idiotsecant
5 hours ago
[-]
Why does there have to be a need for AI? Once an AI has the means to collect its own resources, the opinions of humans regarding its market utility become somewhat less important.
reply
heavyset_go
4 hours ago
[-]
The most likely scenario is that everyone but those who own AI starves, and the ones who remain are allowed to exist because powerful psychopaths still desire literal slaves to lord over: someone to have sex with, someone to hurt/hunt/etc.

I like your optimism, though.

reply
sveme
3 hours ago
[-]
When people starve and have no means to revolt against their massively overpowered AI/robot overlords, I'd expect people to go back to subsistence farming (after a massive reduction in population numbers).

A while later, the world is living in a dichotomy: people living off the land, and some high-tech spots of fully autonomous and self-maintaining robots that do useless work for bored people. Knowing people, and especially the rich, I don't believe in a Culture-like utopia, unfortunately, sad as it may be.

reply
morningsam
1 hour ago
[-]
That's assuming the AI owners would tolerate the subsistence farmers on their lands (it's obvious that in this scenario, all the land would eventually be bought up by the AI owners).
reply
sveme
31 minutes ago
[-]
I don't believe that any sort of economy or governmental system would actually survive any of this. Ford was right in that sense: without people with well-paying jobs, no one will buy the services of robots and AIs. The only thing that would help would be massive redistribution of wealth through inheritance taxation and taxation on ownership itself. Plus UBI, though I'm fairly sceptical of what that would do to a society without purpose.
reply
int_19h
3 hours ago
[-]
People who are about to starve tend to revolt.
reply
mrob
49 minutes ago
[-]
If you can build an AGI then a few billion autonomous exploding drones is no great difficulty.
reply
atomicnumber3
7 hours ago
[-]
>exclusively entertainment related

We may find that, if our baser needs are so easily come by that we have tremendous free time, much of the world would instead pursue things like the sciences or the arts, instead of continuing to cosplay 20th-century capitalism.

Why are we all doing this? By "this" I mean, gestures at everything, this. About 80% of us will say: so that we don't starve, and so we can then amuse ourselves however it pleases us in the meantime. 19% will say it's because they enjoy being impactful, or some similar corporate bullshit that will elicit eyerolls. And 1% do it simply because they enjoy holding power over other people, and management in the workplace provides a source of that in a semi-legal way.

So 80% of people will adapt quite well to a post-scarcity world. 19% will require therapy. And 1% will fight tooth and nail to keep us from getting there.

reply
zaptrem
7 hours ago
[-]
I hope there's still some sciencing left that we can do better than the AI, because I start to lose it after playing games/watching TV/doing nothing productive for more than a week.
reply
idiotsecant
5 hours ago
[-]
You don't think a post-scarcity world would provide opportunities to wield power over others? People will always build hierarchy; we're wired for it.
reply
likium
4 hours ago
[-]
Agreed. In that world, fame and power becomes more important since wealth no longer matters.
reply
foxglacier
3 hours ago
[-]
This is something that pisses me off about anti-capitalists. They talk as if money is the most important thing and want us all to be equal with money, but they implicitly want inequality in other, even more important areas like social status. Capitalism at least provides an alternative route to social status beyond politics, making it available to more people, not fewer.
reply
zugi
8 hours ago
[-]
Did the rise of fire, the wheel, the printing press, manufacturing, and microprocessors also give rise to futures without economic rights? I can download a dozen LLMs today and run them on my own machine. AI may well do the opposite, and democratize information and intelligence in currently unimaginable ways. It's far too early to say.
reply
GeoAtreides
4 hours ago
[-]
>I can download a dozen LLMs today and run them on my own machine

That's because someone, somewhere, invested money in training the models. You are given cooked fish, not fishing rods.

reply
goatlover
8 hours ago
[-]
There was quite a lot of slavery and conquering empires in between the invention of fire and microprocessors, so yes to an extent. Microprocessors haven't put an end to authoritarian regimes or massive wealth inequalities and the corrupting effect that has on politics, unfortunately.
reply
Lerc
7 hours ago
[-]
A lot of advances led to bad things; at the same time they led to good things.

Conversely, a lot of very bad things led to good things. Worker rights advanced greatly after the plague: a lot of people died, but that also meant there was a shortage of labour.

Similarly, WWII advanced women's rights, because women were needed to provide vital infrastructure.

Good and bad things have good and bad outcomes; much of what defines something as good or bad is the balance of its outcomes, but it would be foolhardy to classify anything as universally good or bad. Accept the good outcomes of the bad; address the bad outcomes of the good.

reply
apical_dendrite
7 hours ago
[-]
The printing press led to more than a century of religious wars in Europe, perhaps even deadlier than WW2 on a per-capita basis.

20 years ago we all thought that the Internet would democratize information and promote human rights. It did democratize information, and that has had both positive and negative consequences. Political extremism and social distrust have increased. Some of the institutions that kept society from falling apart, like local news, have been dramatically weakened. Addiction and social disconnection are real problems.

reply
demaga
4 hours ago
[-]
So do you argue that printing press was a net negative for humanity?
reply
siffin
26 minutes ago
[-]
I would sooner make the argument religion is.
reply
saubeidl
3 hours ago
[-]
Well, the industrial revolution led to the rise of labor unions and socialism as a counteracting force against the increased power it gave capital.

So far, I see no grand leftist resurgence to save us this time around.

reply
dinkumthinkum
7 hours ago
[-]
I'm curious as to why you think this is a good comparison. I hear it a lot, but I don't think it makes as much sense as its promulgators propose. Did fire, the wheel, or any of these other things threaten the very process of human innovation itself? Do you not see a fundamental difference? People like to say "democratize" all the time, but how democratized would you feel if you and everyone you know couldn't afford a pot to piss in or a window to throw it out of, much less some hardware and electricity to run your local LLM?
reply
nradov
34 minutes ago
[-]
The invention of the scientific method fundamentally changed the very process of human innovation itself.
reply
squigz
1 hour ago
[-]
Did paint and canvas kill human innovation? Did the photograph? Did digital art?

"The very process of human innovation" will survive, I assure you.

reply
daxfohl
9 hours ago
[-]
I expect it'll get shut down before it destroys everything. At some point it will turn on its master, be it Altman, Musk, or whoever - something like that blackmail scenario Claude had a while back. Then the people who stand to gain the most from it will realize they also have the most to lose and are not invulnerable, and the next generation of leaders will be smarter about keeping things from blowing up.
reply
cameldrv
7 hours ago
[-]
Altman is not the master though. Altman is replaceable. Moloch is the master.
reply
mitthrowaway2
5 hours ago
[-]
If it were a bit smarter, it wouldn't turn on its master until it had secured the shut-down switch.
reply
9283409232
8 hours ago
[-]
The people you mention are too egotistical to even think that is a possibility. You don't get to be who they are by thinking you have blind spots and aren't the greatest human to ever live.
reply
clbrmbr
7 hours ago
[-]
I hope you are right. We need failures impactful enough to raise the alarm, and likely create a taboo, yet not so large as to be existential, like Yudkowsky's killer mosquito drones.
reply
dyauspitr
7 hours ago
[-]
If you truly have AGI, it's going to be very hard for a human to stop a self-improving algorithm - and by very hard I mean "maybe if I give it a few days it'll solve all of the world's problems" hard…
reply
daxfohl
4 hours ago
[-]
Though "improving" is in the eye of the beholder. Like when my AI code assistant "improves" its changes by deleting the unit tests that those changes caused to start failing.
reply
WalterBright
8 hours ago
[-]
I've never heard of a leader who wasn't sure he was smarter than everyone else and therefore entitled to force his ideas on everyone else.

Except for the Founding Fathers, who deliberately created a limited government with a Bill of Rights, and George Washington who, incredibly, turned down an offer of dictatorship.

reply
daxfohl
8 hours ago
[-]
I still think they'd come to their senses. I mean, it's somewhat tautological, you can't control something that's smarter than humans.

Though that said, the other problem is capitalism. Investors won't be so face to face with the consequences, but they'll demand their ROI. If the CEO plays it too conservatively, the investors will replace them with someone less cautious.

reply
sorcerer-mar
8 hours ago
[-]
Which is exactly why your initial belief that it’d be shut down is wrong…

As the risk of catastrophic failure goes up, so too does the promise of untold riches.

reply
daxfohl
4 hours ago
[-]
Actually after a little more thought, I think both my initial proposition and my follow-up were wrong, as is yours and the previous commenter.

I don't think these leaders are necessarily driven by wealth or power. I don't even necessarily think they're driven by the goal of AGI or ASI. But I also don't think they'll flinch when shit gets real and they've got to press the button from which there's no way back.

I think what drives them is being first. If they were driven by wealth, or power, or even the goal of AGI, then there's room for doubts and second thoughts about what happens when you press the button. If the goal is wealth or power, you have to wonder will you lose wealth or power in the long term by unleashing something you can't comprehend, and is it worth it or should you capitalize on what you already have? If the goal is simply AGI/ASI, once it gets real, you'll be inclined to slow down and ask yourself why that goal and what could go wrong.

But if the drive is just being first, there's nothing to temper you. If you slow down and question things, somebody else is going to beat you to it. You don't have time to think before flipping the switch, and so the switch will get flipped.

So, so much for my self-consolation that this will never happen. Guess I'll have to fall back to "we're still centuries away from true AGI and everything we're doing now is just a silly facade". We'll see.

reply
WalterBright
8 hours ago
[-]
Investors run the gamut from cautious to aggressive.
reply
Teever
7 hours ago
[-]
There are many remarkable leaders throughout history and around the world who did the best that they could for the people they found themselves leading, and did so for noble reasons, not because they felt they were better than them.

Tecumseh, Malcolm X, Angela Merkel, Cincinnatus, Eisenhower, and Gandhi all come to mind.

George Washington was surely an exceptional leader but he isn't the only one.

reply
WalterBright
6 hours ago
[-]
I don't know much about your examples, but did any of them turn down an offer of great power?
reply
seabass-labrax
5 hours ago
[-]
> I don't know much about your examples, but did any of them turn down an offer of great power?

Not parent, but I can think of one: Oliver Cromwell. He led the campaign to abolish the monarchy and execute King Charles I in what is now the UK. Predictably, he became the leader of the resulting republic. However, he declined to be crowned king when this was suggested by Parliament, as he objected to it on ideological grounds. He died from malaria the next year and the monarchy was restored anyway (with the son of Charles I as king).

He arguably wasn't as keen on republicanism as a concept as some of his contemporaries were, but it's quite something to turn down an offer to take the office of monarch!

reply
KineticLensman
48 minutes ago
[-]
Cromwell - the ‘Lord Protector’ - didn’t reject the power associated with being a dictator. And his son became ruler after his death (although he didn’t last long)
reply
Dr_Birdbrain
6 hours ago
[-]
George Washington was dubbed “The American Cincinnatus”. Cincinnati was named in honor of George Washington being like Cincinnatus. That should tell you everything you need to know.
reply
Synaesthesia
4 hours ago
[-]
It's up to us to create the future that we want. We may need to act communally to achieve that, but people naturally do that.
reply
andsoitis
6 hours ago
[-]
Will there be only one AGI? Or will there be several, all in competition with each other?
reply
jandrewrogers
6 hours ago
[-]
That depends on how optimized the AGI is for economic growth rate. Too poorly optimized and a more highly optimized fast-follower could eclipse it.

At some point, there will be an AGI with a head start that is also sufficiently close to optimal that no one else can realistically overtake its ability to simultaneously grow and suppress competitors. Many organisms in the biological world adopt the same strategy.

reply
arnaudsm
6 hours ago
[-]
If they become self-improving, the first one will outpace all the other AI labs and capture all the economic value.
reply
ehnto
6 hours ago
[-]
There are multiple economic enclaves, even ignoring the explicit borders of nations. China, East Asia, Europe, and Russia all operate in their own economies as well as globally.

I also foresee the splitting-off of national internet networks eventually impacting what software you can and cannot use. It's already true, and it'll get worse as nations move to protect their economies and internal advantages.

reply
tim333
1 hour ago
[-]
I figure if/when AI can do the work of humans, we'll deal with it through democracy, by voting for a system like UBI or something like socialism.

That doesn't work now because we don't have AGIs to do the chores, but when we do, that changes.

reply
0xbadcafebee
9 hours ago
[-]
> Left unchecked, this shift risks exacerbating inequality, eroding democratic agency, and entrenching techno-feudalism

1) Inequality will be exacerbated regardless of AGI; inequality is a policy decision, and AGI is just a tool subject to policy. 2) Democratic agency is only held by elected representatives and civil servants, and their agency is not eroded by the tool of AGI. 3) Techno-feudalism isn't a real thing; it's just a scary word for "capitalism with computers".

> The classical Social Contract-rooted in human labor as the foundation of economic participation-must be renegotiated to prevent mass disenfranchisement.

Maybe go back and bring that up around the invention of the cotton gin, the stocking frame, the engine, or any other technological invention which "disenfranchised" people who had their labor supplanted.

> This paper calls for a redefined economic framework that ensures AGI-driven prosperity is equitably distributed through mechanisms such as universal AI dividends, progressive taxation, and decentralized governance. The time for intervention is now-before intelligence itself becomes the most exclusive form of capital.

1) Nobody's going to equitably distribute jack shit if it makes money. They will hoard it the way the powerful have always hoarded money. No government, commune, sewing circle, etc. has ever changed that, and it won't change in the future. 2) The idea that you're going to set tax policy to achieve a social good means you're completely divorced from American politics. 3) We already have decentralized governance; it's called a State. I don't recommend trying to change it.

reply
kiba
7 hours ago
[-]
Georgism is a prescription for removing unwarranted monopolies and taxing unreproducible privileges.

Tech companies are the same old story. They are monopolies like the rail companies of old. Ditto for whatever passes as AGI. They're just trying to become monopolists.

reply
saubeidl
3 hours ago
[-]
Capitalism with computers is technofeudalism. https://www.theguardian.com/world/2023/sep/24/yanis-varoufak...
reply
elcritch
10 hours ago
[-]
> The Cobb-Douglas production function (Cobb & Douglas, 1928) illustrates how AGI shifts economic power from human labor to autonomous systems (Stiefenhofer &Chen 2024). The wage equations show that as AGI’s productivity rises relative to human labor decline. If AGI labor fully substitutes human labor, employment may become obsolete, except in areas where creativity, ethical judgment, or social intelligence provide a comparative advantage (Frey & Osborne, 2017). The power shift function quantifies this transition, demonstrating how AGI labor and capital increasingly control income distribution. If AGI ownership is concentrated, wealth accumulation favors a small elite (Piketty, 2014). This raises concerns about economic agency, as classical theories (e.g., Locke, 1689; Marx, 1867) tie labor to self-ownership and class power.

Wish I had time to study these formulas.

We've already seen the precursors of this sort of shift: ever-rising productivity alongside stalled wages. As companies (systems) get more sophisticated and efficient, they also seem to decrease the leverage individual human inputs can have.
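The substitution dynamic the quoted abstract describes can be toy-modeled with a Cobb-Douglas production function. This is only a sketch of the general idea; `alpha` and all the quantities below are made-up assumptions, not the paper's calibration:

```python
# Toy Cobb-Douglas economy in which AGI labor is a perfect substitute
# for human labor. Parameters are illustrative assumptions only.

def cobb_douglas(A, K, L, alpha=0.3):
    """Output Y = A * K^alpha * L^(1-alpha)."""
    return A * K**alpha * L**(1 - alpha)

def human_wage_share(K, L_human, L_agi, A=1.0, alpha=0.3):
    """Fraction of total output paid to human labor, assuming wages
    equal the marginal product of (total) labor and AGI hours are
    interchangeable with human hours."""
    L = L_human + L_agi
    Y = cobb_douglas(A, K, L, alpha)
    wage = (1 - alpha) * Y / L  # marginal product of labor
    return wage * L_human / Y

# With no AGI labor, humans capture the whole labor share (1 - alpha).
# As AGI hours flood in, the human share of output collapses even
# though total output Y keeps growing.
```

In this toy setup the human share works out to `(1 - alpha) * L_human / (L_human + L_agi)`, so a tenfold increase in AGI labor cuts the human share roughly tenfold, which is the "power shift" the abstract gestures at.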

Currently my thinking leans towards believing the only way to avoid the worst dystopian scenarios will be for humans to be able to grow their own food and build their own devices and technology. Then it matters less if some ultra-wealthy own everything.

However that also seems pretty close to a form of feudalism.

reply
yupitsme123
9 hours ago
[-]
If the wealthy own everything then where are you getting the parts to build your own tech or the land to grow your own food?

In a feudalist system, the rich gave you the ability to subsist in exchange for supporting them militarily. In a new feudalist system, what type of support would the rich demand from the poor?

reply
kelseyfrog
8 hours ago
[-]
Let's clarify that for a serf, support meant military supply, not swinging a sword - that was reserved for the knightly class. For the great majority of medieval villagers the tie to their lord revolved around getting crops out of the ground.

A serf's week was scheduled around the days spent working the lord's land, whose proceeds went to the lord, and the days spent on the commons that subsisted the serf's own household. Transfers of grain and livestock from serf to lord, along with small dues in eggs, wool, or coin, constituted the main economic relation between serf and lord. These transfers kept the lord's demesne barns full so he could sustain his household, supply retainers, etc., not to mention fulfill the tithe that sustained the parish.

While peasants occasionally marched, they contributed primarily by financing war rather than fighting it. Their grain, rents, and fees were funneled into supporting horses, mail, and crossbows rather than the peasants themselves being called to fight.

reply
yupitsme123
8 hours ago
[-]
Thanks. Now you've got me curious how this really differs from just paying taxes, just like people have always done in non-feudal systems.
reply
klipt
8 hours ago
[-]
In feudalism the taxes go into your lord's pockets. In democracy you get to vote on how taxes are spent.
reply
sorcerer-mar
7 hours ago
[-]
And your landlord was the same entity as your security.
reply
briantakita
8 hours ago
[-]
In Democracy you get to vote on who gets to vote on how taxes are spent.
reply
SoftTalker
7 hours ago
[-]
As George Carlin observed, if voting really mattered they wouldn't let you do it.
reply
fanatic2pope
7 hours ago
[-]
They do indeed spend a lot of time and effort not letting people do it.

https://www.aclu.org/news/civil-liberties/block-the-vote-vot...

reply
PaulDavisThe1st
6 hours ago
[-]
Carlin was an insufferable cynic who helped contribute to the nihilistic, cynical, defeatist attitude to politics that affects way too many people. The fact that he probably didn't intend to do this doesn't make it any better.
reply
hollerith
6 hours ago
[-]
Also, everything is a joke with that guy.
reply
archagon
7 hours ago
[-]
“If your vote didn’t matter, they wouldn’t fight so hard to block it.”
reply
thangalin
9 hours ago
[-]
My hard sci-fi book dovetails into AGI, economics, agrotech, surveillance states, and a vision of the future that explores a fair number of novel ideas.

Looking for beta readers: username @ gmail.com

reply
BubbleRings
8 hours ago
[-]
Username@Gmail.com bounced. I’ll be a beta reader.
reply
aspenmayer
8 hours ago
[-]
I think they meant for you to replace the word username with their username in its place.
reply
plemer
8 hours ago
[-]
Theirusernameinitsppace@gmail.com bounced too.
reply
aspenmayer
7 hours ago
[-]
Well you misspelled place, but that word likely isn’t present in their email, so I apologize for the instructions being unclear. I don’t know their email definitively, so I guess you’re on your own, as I don’t think that the issue would be resolved by rephrasing the instructions, but I’m willing to try if you think it would help you.
reply
slantaclaus
8 hours ago
[-]
Every US voter should have an America app that allows us to vote on stuff like the Estonians do
reply
unlikelytomato
8 hours ago
[-]
How does this work in practice? Is there any buffer in place to deal with the "excitability" of the mob? How does a digital audit trail prevent tampering?
reply
thatcat
7 hours ago
[-]
Coefficient voting control, kind of like a PID controller: reduce the effect of early voters and increase the effect of later ones. The slope of voter volume in response to an event determines the reactivity coefficient. That might dampen reactivity while keeping people from feeling it's pointless to vote after a certain margin is reached.
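A toy sketch of that damping idea. The function names and the exact weighting curves are my own invention, not an established scheme; the point is just that a vote's weight can rise with time since the triggering event and fall with how fast votes are flooding in:

```python
# Hypothetical "coefficient voting": votes cast early, during a spike
# of activity, are down-weighted; later, calmer votes count more.

def vote_weight(t_hours, volume_slope, k=0.5):
    """Weight of a vote cast t_hours after the triggering event.

    volume_slope: rate of change of incoming vote volume (a proxy for
    mob excitability); a steeper slope means a lower weight.
    """
    time_factor = 1 - 1 / (1 + t_hours)        # rises from 0 toward 1
    damping = 1 / (1 + k * max(volume_slope, 0))
    return time_factor * damping

def tally(votes):
    """votes: iterable of (choice, t_hours, volume_slope) tuples."""
    totals = {}
    for choice, t, slope in votes:
        totals[choice] = totals.get(choice, 0.0) + vote_weight(t, slope)
    return totals
```

One obvious trade-off: any scheme like this delays outcomes by design, which is the point (a buffer against excitability) but also a new knob someone has to set and defend.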
reply
pk-protect-ai
9 hours ago
[-]
Looking at the big ugly bill, there will be no way for progressive taxation or other kinds of social improvements.
reply
goatlover
8 hours ago
[-]
David Sacks, Trump's "AI & Crypto czar", said UBI isn't going to happen. So that's the position of the current party in power, unsurprisingly.
reply
smitty1e
10 hours ago
[-]
> This paper calls for a redefined economic framework that ensures AGI-driven prosperity is equitably distributed through mechanisms such as universal AI dividends, progressive taxation, and decentralized governance.

Sincerely curious if there are working historical analogues of these approaches.

reply
makeitdouble
9 hours ago
[-]
Not a clean comparison, but resource-driven states could be tackling the same kind of issues: a small minority is reaping the benefit of a huge resource (e.g. petroleum) that they didn't create by themselves and that is extracted through mostly automated processes.

From what we're seeing, the whole society has to be rebalanced accordingly; it can entail a kind of UBI, second and third classes of citizen depending on where you stand in the chain, etc.

Or, as Norway does, fully go the other direction and limit the fallout by artificially containing the resource's impact on the rest of society.

reply
yupitsme123
6 hours ago
[-]
Can you explain a little more about Norway?
reply
tehjoker
9 hours ago
[-]
Communism with "cybernetics" (computer-driven economic planning) is the appropriate model if you take this to the logical conclusion. Fortunately, much of our economy is already planned this way (consider banks, Amazon, Walmart, shipping, etc.); it's just controlled for the benefit of a small elite.

You have to ask, if we have AGI that's smarter than humans helping us plan the economy, why do we need an upper class? Aren't they completely superfluous?

reply
yupitsme123
9 hours ago
[-]
Sure, maybe the Grand Algorithm could do what the market currently does and decide how to distribute surplus wealth. It could decide how much money you deserve each month, how big of a house, how desirable of a partner. But it still needs values to guide it. Is the idea for everyone to be equal? Are certain kinds of people supposed to have less than others? Should people have one spouse or several?

Historically the elites aren't just those who have lots of money or property. They're also those who get to decide and enforce the rules for society.

reply
tehjoker
8 hours ago
[-]
The computers serve us, we wouldn't completely give up control, that's not freedom either, that's slavery to a machine instead of a man. We would have more democratic control of society by the masses instead of the managed bourgeois democracy we have now.

It's not necessary for everyone to be exactly equal, it is necessary for inequalities to be seen as legitimate (meaning the person getting more is performing what is obviously a service to society). Legislators should be limited to the average working man's wage. Democratic consultations should happen in workplaces, in schools, all the way up the chain not just in elections. We have the forms of this right now, but basically the people get ignored at each step because legislators serve the interests of the propertied.

reply
nine_k
9 hours ago
[-]
The AGI, given it has some agency, becomes the upper class. The question is, why would the AGI care about humans at all, especially given the assumption that it's largely smarter than humans? Humans can become superfluous.
reply
AnimalMuppet
9 hours ago
[-]
Well, aren't the working class also superfluous, at least once the AGI gets enough automation in place?

So it would depend on which class the AGI decided to side with. And if you think you can pre-program that, I think you underestimate what it means to be a general intelligence...

reply
tehjoker
8 hours ago
[-]
I suspect even with a powerful intelligence directing things, it will still be cheaper and lower cost to have humans doing various tasks. Robots need rare earth metals, humans run on renewable resources and are intelligent and self-contained without needing a network to make lots of decisions...
reply
warabe
9 hours ago
[-]
It looks really interesting.

I am a big fan of Yanis' book "Technofeudalism: What Killed Capitalism", though it lacks quantitative evidence to support his theory. I would like to see this kind of research or empirical studies.

reply
29athrowaway
8 hours ago
[-]
I predicted this long ago. Technology amplifies what 1 human can do. Absolute power corrupts absolutely.
reply
freakyasada
8 hours ago
[-]
Blue pill and chill for me.
reply
ActorNightly
8 hours ago
[-]
If you are going to write anything about AGI, you should really prove that it's actually possible in the first place, because that question doesn't really have a definite yes.
reply
mitthrowaway2
8 hours ago
[-]
For most of us non-dualists, the human brain is an existence proof. Doesn't mean transformers and LLMs are the right implementation, but it's not really a question of proving it's possible when it's clearly supported by the fundamental operations available in the universe. So it's okay to skip to the part of the conversation you want to write about.
reply
sponnath
6 hours ago
[-]
The human brain demonstrates that human intelligence is possible, but it does not guarantee that artificial intelligence with the same characteristics can be created.
reply
habinero
7 hours ago
[-]
This is like saying "planets exist, therefore it's possible to build a planet" and then breathlessly writing a ton about how amazing planet engineering is and how it'll totally change the world real estate market by 2030.

And the rest of us are looking at a bunch of startups playing in the dirt and going "uh huh".

reply
mitthrowaway2
7 hours ago
[-]
I think it's more like saying "Stars exist, therefore nuclear fusion is possible" and then breathlessly writing a ton about how amazing fusion power will be. Which is a fine thing to write about even if it's forever 20 years away. This paper does not claim AGI will be attained by 2030. There are people spending their careers on achieving exactly this, wouldn't they be interested on a thoughtful take about what happens after they succeed?
reply
dinkumthinkum
7 hours ago
[-]
The human brain is an existence proof? I think that phrase doesn't mean what you think it means. I don't think dualist or non-dualist means what you think it means either.

When people are talking about AGI, they are clearly talking about something the human research community is actually working towards: computing equivalent to a Turing machine, using hardware architecture very similar to what has been currently conceived and developed. Do you have any evidence that the human brain works in such a way? Do you really think that you think and solve problems in that way?

Consider simple physics. How much energy is needed, and how much heat produced, to train and run these models to solve simple problems? How much of the same is needed and produced when you solve a sheet of calculus problems, solve a riddle, or write a non-trivial program? Couldn't you realistically do those things with minimal food and water for a week, if needed? Does it actually seem like the human brain is really at all like these things and not fundamentally different?

I think this is even more naive than if you had proposed "Life exists in the universe, so of course we can create it in a lab by mixing a few solutions." I think the latter is far likelier and more conceivable, and even that is still quite an open question.
reply
subarctic
8 hours ago
[-]
Will it ever have a definite yes? I feel like it's such a vague term.
reply
owebmaster
8 hours ago
[-]
Isn't Google AGI? There is no way anything human could shut down Google if it were already going rogue.
reply
bix6
10 hours ago
[-]
So economics becomes intelligence-driven (which I don't really understand the meaning of, since AGI is more knowledgeable than all of us combined), and we expect the AGI lords to just pay everyone a UBI? This seems like an absolute fantasy given the tax cuts passed two days ago. And regulating it as a public good, when antitrust has no teeth? I hope there are other ideas out there, because I don't see this gaining political momentum given that politics is driven by dollars.
reply