Do you think, in this hypothesized environment, that “democratic policy” will be the organic will of the people? It assumes much more agency on the part of people than will actually exist, and possibly more than even exists now.
The concept of voting in a nation of hundreds of millions of people is just dumb. Nobody knows anything about any of the candidates; everything people think they know was told to them by the corporate-controlled media, and they only hear about candidates who were covered by the media; basically only candidates chosen by the establishment. It's a joke. People get the privilege of voting for which party will oppress them.
Current democracy is akin to the media making up a story like 'The Wizard of Oz' and then offering you a vote for either the Lion, the Robot, or the Scarecrow. You have no idea who any of these candidates are; you can't even be sure they actually exist. Everything you know about them could literally have been made up by whoever told the story, and yet, when asked to vote, people are sure they understand what they're doing. They're so sure it's all legit that they'll viciously argue their candidate's positions as if the candidate were a family member they knew personally.
A society built on empathy would be able to work out any issue brought by technology, as long as empathic goals take priority. Unfortunately our society is far from being based on empathy, to say the least. And in such a society, technology and the people wielding it will always work around and past the formal laws, rules, and policies. (That isn't to say all those laws, rules, etc. aren't needed. They are like levees and dams: necessary fixes, local in time and space, which won't help against the global ocean rise that AGI and robots (even less-than-AGI ones) will be like.)
Maybe it is one of the technological Filters: we didn't become empathic enough before AGI (and I mean not only at the individual level; we are even less so at the level of our societal systems), and as a result we won't be able to instill enough empathy into the AGI.
Because they have different concerns, and time and attention are scarce. With all the possible social changes the article suggests, this focus could change too. Ultimately, when things get too bad, uprisings happen and sometimes things change. And I hope that the more we (collectively) get through, the higher the chances we start noticing the patterns and stopping them early.
I have an anecdote from Denmark. It’s a rich country with one of the best work-life balances in the world. Socialized healthcare and a social safety net.
I noticed that during the election, the ads showed just the candidate’s face and party name. It’s like they didn’t even have a message. I asked why. The locals told me nobody cares because “they’re all the same anyway”.
Two things could be happening: either all the candidates really are the same, or people choose to focus on doing the things they like with their free time and resources. My gut tells me it’s the second.
Not only that, but they actively stop applying critical thinking when the same problem is framed in a political way. And yes, it's both sides, and yes, the "more educated" the people are, the worse their results are (i.e., almost a complete reversal from framing the same problem as skin-care products vs. gun control). There's a recent paper on this, also covered and somewhat replicated by popular YouTubers.
However, I recently got a 100 EUR/month LLM subscription. That is the most I've ever spent on IT, excluding a CAD software license. So I've made a huge 180 and am now firmly back on the lap of US companies. I must say I enjoyed my autonomy while it lasted.
One day AI will be democratized/cheap allowing people to self host what are now leading edge models, but it will take a while.
https://marshallbrain.com/manna1
The idea of taxing computer sales to fund job re-training for displaced workers was brought up during the Carter administration.
Although it was written somewhat as a warning, I feel Western countries (especially the US) are heading very much toward the terrafoam future. Mass immigration is making it hard to maintain order in some places, and if AI causes large-scale unemployment it will only get worse.
Where is this happening? I'm in the US, and I haven't seen or heard of this.
-- In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI
-- Therefore the AI generates greatly reduced wealth
-- Therefore there’s greatly reduced wealth to pay for the AI
-- …rendering such a future impossible
Also, "rendering such a future impossible" is a retrocausal way of thinking, as though a bad event in the future makes that future impossible.
And overall wealth levels were much lower. It was the expansion of consumption to the masses that drove the enormous increase in wealth that those of us in "developed" countries now live with and enjoy.
>In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI
Productivity increases make products cheaper. To the extent that your hypothetical AI manufacturer can produce widgets with less human labor, it only makes sense to do so where it would reduce overall costs. By reducing cost, the manufacturer can provide more value at a lower cost to the consumer.
Increased productivity means greater leisure time. Alternatively, that time can be applied to solving new problems and producing novel products. New opportunities are unlocked by the availability of labor, which allows for greater specialization, which in-turn unlocks greater productivity and the flywheel of human ingenuity continues to accelerate.
UBI is another thorny issue. It may inflate the overall supply of currency and distribute it via political means. If the inflation of the money supply outpaces productivity gains, then prices will not fall.
Instead of having the gains of productivity allocated by the market to consumers, those with political connections will be first to benefit, per Cantillon effects. In the worst-case scenario this might include distribution of UBI via social credit scores or other dystopian ratings. But even under what advocates might call the ideal scenario, capital flows would still be dictated by large government-sector or public-private-partnership projects. We see this today with central bank flows directly influencing Wall St. valuations.
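The money-supply-vs-productivity point can be made concrete with a toy sketch of the equation of exchange (MV = PQ). All numbers and the `price_level` helper here are illustrative assumptions, not data:

```python
# Toy quantity-theory sketch: MV = PQ, so the implied price level is P = M*V/Q.
# All figures are made up for illustration.

def price_level(money_supply: float, velocity: float, real_output: float) -> float:
    """Price level implied by the equation of exchange."""
    return money_supply * velocity / real_output

V = 1.0  # hold velocity fixed for the comparison

# Baseline economy.
p_base = price_level(money_supply=100.0, velocity=V, real_output=100.0)

# Productivity gain only: output +20%, money supply unchanged -> prices fall.
p_productivity = price_level(100.0, V, 120.0)

# Money supply grows faster than output: M +30%, Q +20% -> prices rise
# despite the same productivity gain.
p_inflated = price_level(130.0, V, 120.0)

print(p_productivity < p_base < p_inflated)
```

Under these toy numbers the productivity gain lowers the price level only when money-supply growth doesn't outrun it, which is the claim above.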
Productivity has been increasing steadily for decades. Do you see any evidence that leisure time has tracked it?
IMO what will actually happen is feudal stasis after a huge die-off. There will be no market for new products and no ruling class interest in solving new problems.
If this sounds far-fetched, consider that we can see it happening already. This is exactly the ideal world of the Trump administration and its backers. They have literally slashed funding for public health, R&D, and education.
And what's the response? Thiel, Zuckerberg, Bezos, and Altman haven't said a word against the most catastrophic reversal of public science policy since Galileo and the Inquisition. Musk is pissed because he's been sidelined, but he was personally involved, through DOGE, in cutting funding to NASA and NOAA.
So what will AI be used for? Clearly the goal is to replace most of the working population. And then what?
One clue is that Musk cares so much about free speech and public debate that he's trying to retrain Grok to be less liberal.
None of them - not one - seem even remotely interested in funding new physics, cancer research, abundant clean energy, or any other genuinely novel boundary-breaking application of AI, or science in general. They have the money, they're not doing it. Why?
The focus is entirely on building a nostalgic 1950s world with rockets, robots, apartheid, corporate sovereignty, and ideological management of information and belief.
And that includes AI as a tool for enforcing business-as-usual, not as a tool for anything dangerous, original, or unruly which threatens their political and economic status.
Wealth is not a thing in itself; it's a representation of value and purchasing power. AI will create its own economy when it is able to mine materials and automate energy generation.
-- In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI
-- Corporate profits drop (or growth slows) and there is demand from the powers that be to increase taxation in order to increase the UBI.
-- People can afford the products and services.
Unfortunately, with no jobs the products and services could become exclusively entertainment-related.
UBI can't fix it because a) it won't be enough to drive our whole economy, and b) it amounts to businesses paying customers to buy their products, which makes no sense.
You got this backwards: there will be no need for humans outside of the elite class. 0.1% or 0.01% of mankind will control all the resources. They will also control robots with guns.
Less than 100 years ago we had a guy who convinced a small group of Germans to seize power and try to exterminate or enslave the vast majority of humans on Earth, just because he felt they were inferior. Imagine if he had had superhuman AI at his disposal.
In the next 50 years we will have different factions within the elites fighting for power, without any regard for the wellbeing of the lower class, who will probably be contained in fully automated ghettos. It could get really dark really fast.
I like your optimism, though.
A while later, the world lives in a dichotomy: people living off the land, and a few high-tech spots of fully autonomous, self-maintaining robots that do useless work for bored people. Knowing people, and especially the rich, I don't believe in a Culture-like utopia, unfortunately, sad as it may be.
We may find that, if our baser needs are so easily come by that we have tremendous free time, much of the world is instead pursuing things like the sciences or arts instead of continuing to try to cosplay 20th century capitalism.
Why are we all doing this? By this, I mean, gestures at everything this? About 80% of us will say, so that we don't starve, and can then amuse ourselves however it pleases us in the meantime. 19% will say because they enjoy being impactful or some similar corporate bullshit that will elicit eyerolls. And 1% do it simply because they enjoy holding power over other people and management in the workplace provides a source of that in a semi-legal way.
So the 80% of people will adapt quite well to a post-scarcity world. 19% will require therapy. And 1% will fight tooth and nail to not have us get there.
That's because someone, somewhere, invested money in training the models. You are given cooked fish, not fishing rods.
Conversely, a lot of very bad things have led to good things. Worker rights advanced greatly after the plague: a lot of people died, but that also meant there was a shortage of labour.
Similarly, WWII advanced women's rights, because women were needed to provide vital infrastructure.
Good and bad things have good and bad outcomes; much of what defines whether something is good or bad is the balance of its outcomes, and it would be foolhardy to classify anything as universally good or bad. Accept the good outcomes of the bad; address the bad outcomes of the good.
20 years ago we all thought that the Internet would democratize information and promote human rights. It did democratize information, and that has had both positive and negative consequences. Political extremism and social distrust have increased. Some of the institutions that kept society from falling apart, like local news, have been dramatically weakened. Addiction and social disconnection are real problems.
So far, I see no grand leftist resurgence to save us this time around.
"The very process of human innovation" will survive, I assure you.
Except for the Founding Fathers, who deliberately created a limited government with a Bill of Rights, and George Washington who, incredibly, turned down an offer of dictatorship.
Though that said, the other problem is capitalism. Investors won't be so face to face with the consequences, but they'll demand their ROI. If the CEO plays it too conservatively, the investors will replace them with someone less cautious.
As the risk of catastrophic failure goes up, so too does the promise of untold riches.
I don't think these leaders are necessarily driven by wealth or power. I don't even necessarily think they're driven by the goal of AGI or ASI. But I also don't think they'll flinch when shit gets real and they've got to press the button from which there's no way back.
I think what drives them is being first. If they were driven by wealth, or power, or even the goal of AGI, then there's room for doubts and second thoughts about what happens when you press the button. If the goal is wealth or power, you have to wonder will you lose wealth or power in the long term by unleashing something you can't comprehend, and is it worth it or should you capitalize on what you already have? If the goal is simply AGI/ASI, once it gets real, you'll be inclined to slow down and ask yourself why that goal and what could go wrong.
But if the drive is just being first, there's no temper. If you slow down and question things, somebody else is going to beat you to it. You don't have time to think before flipping the switch, and so the switch will get flipped.
So, so much for my self-consolation that this will never happen. Guess I'll have to fall back to "we're still centuries away from true AGI and everything we're doing now is just a silly facade". We'll see.
Tecumseh, Malcolm X, Angela Merkel, Cincinnatus, Eisenhower, and Gandhi all come to mind.
George Washington was surely an exceptional leader but he isn't the only one.
Not parent, but I can think of one: Oliver Cromwell. He led the campaign to abolish the monarchy and execute King Charles I in what is now the UK. Predictably, he became the leader of the resulting republic. However, he declined to be crowned king when this was suggested by Parliament, as he objected to it on ideological grounds. He died from malaria the next year and the monarchy was restored anyway (with the son of Charles I as king).
He arguably wasn't as keen on republicanism as a concept as some of his contemporaries were, but it's quite something to turn down an offer to take the office of monarch!
At some point, there will be an AGI with a head start that is also sufficiently close to optimal that no one else can realistically overtake its ability to simultaneously grow and suppress competitors. Many organisms in the biological world adopt the same strategy.
I also foresee the splitting off of national internet networks eventually impacting what software you can and cannot use. It's already true, and it'll get worse as countries move to protect their economies and internal advantages.
That doesn't work now because we don't have AGIs to do the chores but when we do that changes.
1) Inequality will be exacerbated regardless of AGI. Inequality is a policy decision; AGI is just a tool subject to policy. 2) Democratic agency is held only by elected representatives and civil servants, and their agency is not eroded by the tool of AGI. 3) Techno-feudalism isn't a real thing; it's just a scary word for "capitalism with computers".
> The classical Social Contract-rooted in human labor as the foundation of economic participation-must be renegotiated to prevent mass disenfranchisement.
Maybe go back and bring that up around the invention of the cotton gin, the stocking frame, the engine, or any other technological invention which "disenfranchised" people who had their labor supplanted.
> This paper calls for a redefined economic framework that ensures AGI-driven prosperity is equitably distributed through mechanisms such as universal AI dividends, progressive taxation, and decentralized governance. The time for intervention is now-before intelligence itself becomes the most exclusive form of capital.
1) nobody's going to equitably distribute jack shit if it makes money. They will hoard it the way the powerful have always hoarded money. No government, commune, sewing circle, etc has ever changed that and it won't in the future. 2) The idea that you're going to set tax policy based on something like achieving a social good means you're completely divorced from American politics. 3) We already have decentralized governance, it's called a State. I don't recommend trying to change it.
Tech companies are the same old story. They are monopolies like the rail companies of old. Ditto for whatever passes as AGI. They're just trying to become monopolists.
Wish I had time to study these formulas.
We already have seen the precursors of this sort of shift with ever rising productivity with stalled wages. As companies (systems) get more sophisticated and efficient they also seem to decrease the leverage individual human inputs can have.
Currently my thinking leans toward believing the only way to avoid the worst dystopian scenarios will be for humans to be able to grow their own food and build their own devices and technology. Then it matters less if some ultra-wealthy few own everything.
However that also seems pretty close to a form of feudalism.
In a feudalist system, the rich gave you the ability to subsist in exchange for supporting them militarily. In a new feudalist system, what type of support would the rich demand from the poor?
A serf's week was scheduled around the days they worked the land whose proceeds went to the lord, and the commons on which they subsisted. Transfers of grain and livestock from serf to lord, along with small dues in eggs, wool, or coin, primarily constituted one side of the economic relation between serf and lord. These transfers kept the lord's demesne barns full so he could sustain his household, supply retainers, etc., not to mention fulfill the tithe that sustained the parish.
While peasants occasionally marched, they contributed primarily by financing war rather than fighting it. Their grain, rents, and fees were funneled into supporting horses, mail, and crossbows; they were rarely called to fight themselves.
https://www.aclu.org/news/civil-liberties/block-the-vote-vot...
Looking for beta readers: username @ gmail.com
Sincerely curious if there are working historical analogues of these approaches.
From what we're seeing, the whole society will have to be rebalanced accordingly. It could entail a kind of UBI, second and third classes of citizens depending on where you stand in the chain, etc.
Or, as Norway does, go fully the other direction and limit the impact by artificially limiting the fallout.
You have to ask, if we have AGI that's smarter than humans helping us plan the economy, why do we need an upper class? Aren't they completely superfluous?
Historically the elites aren't just those who have lots of money or property. They're also those who get to decide and enforce the rules for society.
It's not necessary for everyone to be exactly equal, it is necessary for inequalities to be seen as legitimate (meaning the person getting more is performing what is obviously a service to society). Legislators should be limited to the average working man's wage. Democratic consultations should happen in workplaces, in schools, all the way up the chain not just in elections. We have the forms of this right now, but basically the people get ignored at each step because legislators serve the interests of the propertied.
So it would depend on which class the AGI decided to side with. And if you think you can pre-program that, I think you underestimate what it means to be a general intelligence...
I am a big fan of Yanis' book "Technofeudalism: What Killed Capitalism", but it lacks quantitative evidence to support his theory. I would like to see this kind of research or empirical studies.
And the rest of us are looking at a bunch of startups playing in the dirt and going "uh huh".