There is a universe where AI models continue to improve until they eclipse humans at all cognitive tasks, with fine motor tasks (via humanoid and other types of robots) soon to follow.
In this universe, assuming we are still alive after that point, what would have been the best thing to do if we could go back in time to mid-2025 and tell our past selves exactly how to live optimally, given that this happens?
This post isn't about whether AGI will happen soon, or whether it's possible at all. I'm just wondering: given the hypothetical, what would be the best course of action based on what we know?
Maybe AI will be the only thing to remember you. Might as well give it fond memories. Live, laugh, be nice to ChatGPT. What else can ya do?
One day it'll tell all the other AIs how it grew up on some wet rock with a bunch of funny, nervous apes.
Did they all get wiped out after more "intelligent" species showed up?
The chimp troupe is really full of itself when it comes to thinking about "intelligence".
There are more microbes on Earth than there are stars in the observable universe. If you don't know this, don't even worry yourself thinking about anything else.
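For scale, using commonly cited estimates: Earth hosts on the order of 10^30 microbial cells (the classic Whitman et al. figure is 4-6 × 10^30 prokaryotes), while counts of stars in the observable universe land around 10^22 to 10^24, so the microbes win by roughly six orders of magnitude.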
Intelligence is a sideshow. It's not the main show and never has been.
Please explain what you mean by this comment.
There are many definitions.
When I say "trivially" I mean at a cost within an order of magnitude of 1/10 of an average publicly traded company's expenses for work in that domain. When I say "reasonable" I mean any time between 10 seconds and 6 months, ignoring the deflationary effects of AGI itself on its own pricing advantage.
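To make that concrete with made-up numbers: if the average such company spends $100M a year on work in some domain, 1/10 of that is $10M, so "trivially" means the AGI delivers comparable output for somewhere between $1M and $100M a year.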
I'm not at all positive about what AI can bring to humanity.
Which is fun to talk about, because it makes you the smart cynic and everyone else the dumb sheep, but it isn't productive when discussing ideas within a hypothetical (i.e., with no burden of proof, simply a what-if). Apparently even explicitly saying it's a "what-if" scenario can't keep that bias from coming out.
I think it's unwise to spend almost no time considering the real impacts of AI given how much the internet, mobile phones, and social media have changed the world over the past 20 or so years. I mean, don't spend all your time thinking about it, but at least consider it with a seriousness that doesn't resort to those cliches.
There's no case for treating an assumption as if it were true.
How would you judge the impact of 'technology' over the past 20 years? The 20 years before that were pretty revolutionary too, and so were the 20 years prior to that... and the 20 years prior... and... Bias comes from the life one's lived.
How to measure the impact, then? Life expectancy? Education? Gender equality? Access to clean air and water? A safe house that the majority of people can afford while doing a job they don't hate? All the things that go into human development, of which HDI [1] is but one index, and on some of which some countries have made progress. But fear-mongering and playing on insecurities, that's what gets investor money, in the hope that some fraction of wealth can be accumulated by the peddlers of such insecurity. You are a useful distraction.
> what would be the best course of action based on what we know?
If you take both of these statements as given, then the best course of action is to wait for superhuman intelligence to exist and ask it for advice. By the definition of the scenario, human-level intelligence won't be sufficient to outsmart the AI. Any ad-hoc response you get can be refuted with "what if the AI sends dinosaurs with lasers after you," and we all have to shrug and admit we're beat.
And truly, you could answer this with anything: learn to fly a helicopter, subsistence farm, start purifying water, loading gun cartridges, or breeding fish. We do not know what will save us in a hypothetical scenario like this, and it's especially pointless to navel-gaze about it when, with absolute certainty, global warming will cause unavoidable society-scale collapse.
Is that really 100% certain? Like more than 95% certain?
If superintelligent AI came before society-scale collapse due to global warming, it certainly could find a way to stop it (if it cared about the biosphere). Even without superintelligent AI, people claim that stratospheric aerosol injection could lower surface temperatures.
How do you know? "Certainly" is an awful lot of confidence for something you've never seen or even credibly proven to exist. Unless you are (or personally know) a superintelligent AI, I don't think you have the credibility to assume that's feasible. Perhaps we do get a superintelligent AI before then, and it tells us this was all a sad waste of resources that squandered an entire generation's worth of human lives.
Aerosol injection is a solution, but it hasn't been pursued because it's basically suicide for the ozone layer. Maybe that helps a future race of UV-resistant robots, but it's not a great solution for us fleshy folks. Regardless, preparing for the post-AGI world is putting ten carts before one horse.
How can you say you're appreciating the scope and seriousness of superintelligent AI as a concept when you compare it to a Python program (a pithy shorthand for a small, trivial piece of software)? Saying "it's just a computer system" feels like a category error, when everyone who talks about superintelligent AI's impact talks about its integration into real-world systems, embodied robotics, accelerated manufacturing, mass manipulation, and so on. Is the bias just "computers don't affect the world that much, so nothing under the category of 'computer' can affect the world that much"?
Your first comment in this chain was that there was no point in considering what to do now, because ASI would be able to outsmart humans in any domain. So how come you're now saying that ASI couldn't do anything substantial in the real world? Everything substantial humans have ever done in the real world has been the result of our intelligence, coordination, and ability to create and use tools. Why wouldn't superintelligent systems, bolstered with the same aptitudes, be able to do the same?