Another analogy that I like is about large institutions / corporations. They are, right now, kind of like AIs. As Harari says in one of his books, Peugeot co. is an entity that we could call an AI. It has goals, needs, wants and obviously intelligence, even though it's composed of many thousands of individuals each working on a small part of the company. But in aggregate it manifests intelligence to the world: it acts on the world and it reacts to the world.
I'd take this a step further and say that we might even have ASI already, in the US military complex. That "machine" is likely the most advanced conglomerate of tech and intelligence (pun intended) that the world has ever created. In aggregate it is likely "smarter" than any single human being in existence, and when it sets a goal it uses hundreds of thousands of human minds plus billions of dollars' worth of sensors, equipment and tech to accomplish that goal.
We survived those kinds of entities, I think we'll be fine with whatever AI turns out to be. And if not, oh well, we had a good run.
Sure, the human species is not yet on the brink of extinction, but we are already seeing an unprecedented fall in worldwide birth rates, which shows that our social fabric itself is being pulled apart for paperclips. Scale that up to a hypothetical entity equivalent to a hundred copies of the generation's brightest minds with a pathological drive to maximize an arbitrary metric, and it might only mean one of two things: either its fixation leads it to hack its own reward mechanism, putting it in a perpetual coma while resisting termination, or it succeeds at doing the same on a planetary scale.
[0] https://onlinelibrary.wiley.com/doi/abs/10.1111/gcb.17125
[1] https://healthjusticemonitor.org/2024/12/28/estimated-us-dea...
[2] https://www.prisonstudies.org/highest-to-lowest/prison_popul...
People choose to have fewer kids as they get richer; it's not about living conditions, as so many people like to claim, otherwise poor people wouldn't be having so many children. Even controlling for high living standards, as in Scandinavia, people still choose to have fewer kids.
Housing seems to be a pretty common issue. It doesn't prevent people from having kids, but when it delays them (which it often does) it does the same job of dropping birth rates. I wish people would stop acting like it's only a wealth issue, as if people who get more money simply no longer want kids. No.
It seems much more likely that humans don't have a particular impulse to have children as such, because our instincts evolved for a world without birth control. Having children has become uneconomic, so people stopped. There isn't a natural instinct to raise alarms about that (which is what evolution would tend to produce), because historically that just wouldn't have mattered: both because people were poor and because sex used to imply children in a way it doesn't now.
The house thing is really a red herring. Sure, we'd all like to own a house, and being wealthy is better than being poor. But in a literal sense it's not necessary: for almost all of our evolutionary history people have been reproducing without any wealth at all. The stats actually seem reasonably clear that it is exactly wealth that is blocking the children, despite the excuses that people come up with.
A real boon from chance, and one that is unlikely to last. We're probably lucky to be living in this era, before evolution starts kicking in and pushing us back towards overpopulation, which will happen within a few generations.
Tell that to a good number of people I know who don't feel secure about their living situation until well into their thirties, by which point that 3rd or 4th kid, or even 2nd kid, they might have felt comfortable having never happens.
What the historical standard was does not matter in this context today. One set of my great-grandparents had 8 kids in an abode smaller than mine today. Yet I do not have a single one, because where would I put it? In a room where I'd have to strip the walls?
>before evolution starts kicking in and pushing us back towards overpopulation
Societal evolution will work quicker than biological evolution ever will. Most of the families with lots of kids here in Western Europe are conservative Muslims.
Chalking it up to choice seems a bit unfair. I suspect lack of access to birth control probably plays a part.
Isn't this just about the advancement of medical science? I.e. Wouldn't they have died from the same causes regardless of medical insurance a few decades ago?
To take it to the extreme, let's say that I invent a new treatment that can extend any dying person's life by a year for the cost of $10M, and let's say that there is a provider that is willing to insure for that for an exorbitant cost. Then wouldn't almost every single person still dying be dying from lack of insurance?
Apparently almost all animal species are insects:
https://ourworldindata.org/grapher/number-of-described-speci...
'Associated with' is a pretty loose term.
https://www.antipope.org/charlie/blog-static/2018/01/dude-yo...
This blog is where I saw the same idea recently; it also links to the post you linked.
We're a collective intelligence. Individually we're pretty stupid, even when we're relatively intelligent. But we have created social systems which persist and amplify individual intelligence to raise collective ability.
But this proto-ASI isn't sentient. It's not even particularly sane. It's extremely fragile, with numerous internal conflicts which keep kneecapping its potential. It keeps skirting suicidal ideation.
Right now parts of it are going into reverse.
The difference between where we are now and ASI is that an ASI could potentially automate and unify the accumulation of knowledge and intelligence, with more effective persistence, and without the internal conflicts.
It's completely unknown if it would want to keep us around. We probably can't even imagine its thought processes. It would be so far outside our experience we have no way of predicting its abilities and choices.
To allocate capital on vast scales and make decisions about industry and so on, sure, that's a level of intelligence quite beyond any one of us, but this feels like cheating the definition of intelligence. It's not the quantity of it that matters, it's the quality. It's like flying, I guess. A large bird and a small bird are both flying, and the big bird is not doing "more" of it. A group of birds is doing something an individual is incapable of (forming a swarm), sure, but it's not an improvement on flying. It's just something else. That something else can be useful, but I don't particularly like applying the same move to "intelligence".
If the species were so goddamn intelligent, it could solve unreasonably hard IQ tests, and it cannot. If we want to solve something really, really hard we use Edward Witten, not "the species". That's because there is no "species", there is only a bunch of individuals, and if they all score badly, the aggregate will score badly as well. We just coast because a bunch of us are extraordinarily clever.
But the “art” of MGS might be the memetic powerhouse of Hideo Kojima as the inventor of everything. A boss to surpass Big Boss himself.
A professor of mine wrote a paper on this[0](~2012).
[0] https://web.eecs.umich.edu/~kuipers/papers/Kuipers-ci-12.pdf
Any reasonably smart person can identify errors that militaries, governments and corporations make ALL THE TIME. Do you really think a chimp can identify the strategic errors humans are making? Because that is where you would be in comparison to a real ASI. This is also why small startups can and do displace massive, supposedly superhuman "ASI" corporations literally all the time.
The reality of human congregations is that they are cognitively bound by the handful of smartest people in the group and communication-bound by email or in-person communication speeds. An ASI has no such limitations.
>We survived those kinds of entities, I think we'll be fine with whatever AI turns out to be. And if not, oh well, we had a good run.
This is dangerously wrong and disgustingly fatalistic.
"Building a superintelligence" on the other hand is about whether they can create something that would outcompete me at a task without having to dedicate humans to it.
necessary: if superintelligences are much smarter than humans and humans can build superintelligences, then superintelligences can build superintelligences
If humans categorically can't build superintelligences, then it's not that consequential for our definition of superintelligence to be wrong.
This is glistening with religious fervour. Sure, they could be that powerful. Just like God/Allah/Thor/Superman could, too.
I've no doubt that many rationalist types sincerely care about these issues, and are sincerely worried. At the same time, I think it very likely that some significant number of them are majorly titillated by the biblical pleasure of playing messiah/prophet.
It's just straightforwardly following the definition of what an ASI would be: a strongly superhuman mind. Everything follows from that.
The idea of dumber agents supervising smarter ones seems relatively grounded to me, and it formed the basis of OpenAI's old superalignment efforts (although I think that team might've been disbanded?)
> We survived those kinds of entities, I think we'll be fine
We just have climate change and massive inequality to worry about (we didn't "survive" it; the fuzzy little corporations with their precious goals-needs-wants are still there).
But ultimately corporations are human inventions, they aren’t an Other that has taken on a life of its own.
The corporations that run LLMs do charge for API usage, but the charge is independent of what the chat is about. It's happening at a different level in the stack.
If you built an AI that could outsource labor to humans and whose reward function is profit, your result would approximately be a corporation.
However, even for the most eccentric shareholders or self-serving managers, it's hard to sustain a corporation if it keeps bleeding red ink. So only companies that at least break even tend to stick around.
Now add a market that's at least reasonably competitive, and your typical corporation barely earns the cost of capital.
Being so close to the edge means that the minimal goal of 'break even (after cost of capital)' can look very much like 'maximise profit' in practice.
Compare https://en.wikipedia.org/wiki/Instrumental_convergence
Might want to wait just a bit longer before confidently making this call.
Both groups have living offspring.
The way I look at it is that it's analogous to the way we ourselves function: we're made up of billions of cells which individually just follow simple programs, mediated by local interactions with their neighbours as well as some global state mediated by hormones and signals from the nervous system. Yet collectively they produce what we call intelligence (and even consciousness), which we wouldn't ascribe to any of the component cells, and those components aren't aware of the collective organism's goal. Moreover, the overall organism can achieve goals and solve problems beyond the scale of the components.
Similarly, our institutions, be they corporations, governments, etc., are collective intelligences with us as the parts. These institutions have goals and problem-solving capabilities that far surpass our own: no individual could keep all Walmart stores perfectly stocked every day, or design a modern microchip or end-to-end AI platform. These really are the goals of the organisations, and not of the individuals. Take for example the US government: every four years you swap out the individuals in the executive branch, yet overall US policy remains largely unchanged. Sure, sometimes there is a major shift in direction, but it takes time for that to be translated into shifts in policy and actions, as different parts of the system react at different speeds. The bigger point is that the individuals executing the actions get swapped out over time (at different speeds for different parts, like cells being replaced at different speeds in our bodies) but the organisation continues to pursue its own goal, which only changes slowly over time. Political and financial analysts implicitly acknowledge this when they talk about US or Chinese policy, though this often gets personified into the leader.
I think we really need to acknowledge more the existence and reality of organisational goals as independent of the goals of the individuals in those organisations. I was struck by how, in the movie The Corporation, they point out that corporations often take actions that are contrary to the beliefs of the individuals in them, including the CEO, because the CEO is bound by his fiduciary duty to the shareholders. Corporations are legal persons, and if you analyse them as persons they are psychopaths, without any human feelings or regard for human cost or externalities unless those are enforced through some legal or pricing mechanism. Yet when corporations or organisations transgress we often hold the individuals accountable. Sometimes the individuals are to blame, but often it's how the game has been set up that is at fault. For example, in a globally heterogeneous tax regime, a multinational corporation will naturally minimise its tax burden; it can't really do otherwise, and the executives of the company have a fiduciary duty to shareholders to carry that out.
Therefore we have to revise and keep evolving the rules of the game in order to stay compatible with human values and survival.
To me, the things that he avoids mentioning in this understatement are pretty important:
- "stable position" seems to sweep a lot under the rug when one considers the scope of ecosystem destruction and species/biodiversity loss
- whatever "sharing" exists is entirely on our terms, and most of the remaining wild places on the planet are just not suitable for agriculture or industry
- so the range of things that could be considered "stable" and "sharing" must be quite broad, and includes many arrangements which sound pretty bad for many kinds of intelligences, even if they aren't the kind of intelligence that can understand the problems they face.
Imperfect, but definitely better than most!
This is not really true. ~80% of NZ's farmable agricultural land is in the South Island, but ~60% of milk production happens in the North Island.
Because humans like eating beef, and they like having emotional support from dogs
That seems to be true:
https://ourworldindata.org/wild-mammals-birds-biomass
Livestock make up 62% of the world’s mammal biomass; humans account for 34%; and wild mammals are just 4%
https://wis-wander.weizmann.ac.il/environment/weight-respons...
Wild land mammals weigh less than 10 percent of the combined weight of humans
https://www.pnas.org/doi/10.1073/pnas.2204892120
I mean it is pretty obvious when you think that 10,000 years ago, the Americas had all sorts of large animals, as Africa still does to some extent
And then by the time, say, the Europeans got here, those animals were mostly gone ... their "biomass" had just collapsed
---
Same thing with plants. There were zillions of kinds of plants all over the planet, but corn / wheat / potatoes are now an overwhelming biomass, because humans like to eat them.
Michael Pollan also had a good description of this as our food supply changing from being photosynthesis-based to fossil-fuel-based
Due to the Haber-Bosch process, invented in the early 1900s to create nitrogen fertilizer
Fertilizer is what feeds industrial corn and wheat ... So yeah the entire "metabolism" of the planet has been changed by humans
And those plants live off of a different energy source now
> And then when say the Europeans got here, those animals were mostly gone ... their "biomass" just collapsed
A lot of species had long been extinct, but the biomass of the remaining ones fell.
Megafauna extinctions always follow 1. the mere arrival of humans and 2. agriculture and growth in human populations.
Places that humans did not reach until later kept a lot more megafauna for longer - e.g. New Zealand, where flourishing species such as moas became extinct within a century or two of human settlement.
And then Europeans arriving basically finished the job ... that one probably affected the plants more, due to agriculture. (but also the remaining animals)
Yeah New Zealand is a good example.
I don't think this can realistically happen unless all of the knowledge that brought us to that point was erased. Humans are also naturally curious, and I think it's unlikely that, across an entire population, no one would try to figure out how the machines work, even if we had to start all the way down at 'what's a bit?' or 'what's a transistor?'.
Even today, you can find youtube channels of people still interested in living a primitive life and learning those survival skills even though our modern society makes it useless for the vast majority of us. They don't do it full-time, of course, but they would have a better shot if they had to.
I'd be far more worried about things in the biosciences, and around antibiotic resistance in particular. At our current usage rates it wouldn't be hard for some disease to emerge that requires high technology to produce the medicines that keep us alive. Add in a little war taking out the few factories that make them, plus an increase in the amount of injuries sustained, and things could quickly go sideways.
A whole lot of our advanced technology is held in one or two places.
Definitely agree with this. I do wonder if at some point new technology will become sufficiently complex that the domain knowledge required to actually understand it end to end is too much for a human lifetime.
> If I had time to do it and if I knew more about how AIs work, I’d be putting my energies into building AIs whose sole purpose was to predate upon existing AI models by using every conceivable strategy to feed bogus data into them, interrupt their power supplies, discourage investors, and otherwise interfere with their operations. Not out of malicious intent per se but just from a general belief that everything should have to compete, and that competition within a diverse ecosystem produces a healthier result in the long run than raising a potential superpredator in a hermetically sealed petri dish where its every need is catered to.
This sort of feels like cultivating antibiotic-resistant bacteria by trying to kill off every other kind of bacteria with antibiotics. I don't see this as necessarily a good thing to do.
I think we should be more interested in a kind of mutualist competition: how do we continuously marginalize the most parasitic species of AI?
I think we want AI to have an "achilles heel" we can stab if it turns out we need to.
It's an anthropocentric miss to worry about AI as another being. It's not really the issue in today's marketplace or drone battlefield. It's the scalability.
It's a hit to see augmentation as amputation, but a miss to not consider the range of systemic knock-on effects.
It's a miss to talk about nuclear weapons without talking about how they structured the UN and the world today, where nuclear-armed countries invade others without consequence.
And none of the prior examples - nuclear weapons, (writing?) etc. - had the potential to form a monopoly over a critical technology, if indeed someone gains enduring superiority as all their investors hope.
I think I'm less scared by the prospect of secret malevolent elites (hobnobbing under the Chatham House Rule) than by the chilling prospect of oblivious ones.
But most of all I'm grateful for the residue of openness that prompts him to share and us to discuss, notwithstanding slings and arrows like mine. The many worlds where that's not possible today are already more de-humanized than our future with AI.
The "residue" of openness is in fact the entire point of that convention. If you want to be invited to the next such bunfight, just email the organisers and persuade them you have insight.
I think this kind of future is closer to 500 years out than 50 years. The eye mites are self-sufficient. AIs right now rely on immense amounts of human effort to keep them "alive", and they won't be "self-sufficient" in energy and hardware until we not just allow it, but basically work very hard to make it happen.
I guess if you put a tabula rasa AI in a world simulator, and you could simulate it as a whole biological organism along with the environment of the Earth and sexual reproduction and all that messy stuff, it would evolve that way; but that's not how it evolved at all.
https://www.sciencefocus.com/the-human-body/the-lizard-brain...
For me, AI in itself is not as worrying as the socioeconomic engines behind it. Left unchecked, those engines will create something far worse than the T-Rex.
I don't presume that I am important enough that it should be necessary to invite me to discussions with esteemed people, nor that my opinion is important enough that everyone should hear it, but I would at least like to know that such events are happening in my neighbourhood and who I can share ideas with.
This isn't really a criticism of this specific event or even topic, but the overall feeling that things in the world are being discussed in places where I and presumably many other people with valuable input in their individual domains have no voice. Maybe in this particular event it was just a group of individuals who wanted to learn more about the topic, on the other hand, maybe some of those people will end up drafting policy.
There's a small part of me that's just feeling like I'm not one of the cool kids. The greater and more rational concern isn't so much about me as a person but me as a data point. If I am interested in a field and have a viewpoint I'd like to share, yet remain unaware of opportunities to talk to others, how many others does this happen to? If these are conversations that are important to humanity, are they being discussed in a collection of non-overlapping bubbles?
I think the fact that this was in New Zealand is kind of irrelevant anyway, given how easy it is to communicate globally. It just served, via the title, to capture my attention.
(I hope, at least, that Simon or Jack attended)
Fact correction here: that would be the United States and France. The USSR never tested nuclear weapons in the Pacific.
Also, pedantically, the US Pacific Proving Grounds are located in the Marshall Islands, in the North - not South - Pacific.
If we look at the history of human progress, the emergence of tools has always made life more convenient, but it also brought new challenges. The printing press, the steam engine, and electricity have all greatly transformed society, but we adapted and thrived. Why can't AI be the same?
The real question isn’t whether AI will replace us, but whether we are ready to use it to do things we couldn’t do or even imagine. Imagine if we didn’t see AI as something that replaces us, but as a tool that allows us to focus on what truly matters, leaving the mundane tasks to machines. Isn’t that the ultimate form of progress?
"Tool AI", yes, at least in theory. You always have to question what we lose, or want to lose. Wolves being domesticated likely meant they lost skills as dogs, one of them being math [1]. Do we want to lose our ability to understand math, or reason about complex tasks?
I think we are already losing the ability to "be bored". Sir Isaac Newton got so bored after retreating to the countryside during the great plague, that he invented optics, calculus, motion and gravity. Most modern people would just watch cat videos. I wonder what else technology has robbed us of.
> If we look at the history of human progress, the emergence of tools has always made life more convenient, but it also brought new challenges. The printing press, the steam engine, and electricity have all greatly transformed society, but we adapted and thrived. Why can't AI be the same?
As long as we are talking about "tool AI", then with the above caveats, maybe. But a more general AI (i.e. AGI) would be unlike anything else we have ever seen. Horses got replaced by cars because cars were better at being horses. What if a few AI generations away we have something better than a human at all tasks?
There was a common trope for a while that if AI took our jobs, we would all kick back and do art. It turns out that the likes of Stable Diffusion are good at that too. The tasks where humans succeed are rapidly diminishing.
A friend many years ago worked for a company doing data processing. It took about a week to learn the tasks, and they soon realised that the entire process could be automated entirely in Excel, taking a week-long task down to a few minutes of number crunching. Worse still, they realised they could automate the entire department out of existence.
> The real question isn’t whether AI will replace us, but whether we are ready to use it to do things we couldn’t do or even imagined. Imagine if we didn’t see AI as something that replaces us, but as a tool that allows us to focus on doing what truly matters, leaving the mundane tasks to machines. Isn’t that the ultimate form of progress?
It could be that AI ends up doing the cool things while we end up doing the mundane tasks. For example, Stable Diffusion can imagine a Vincent van Gogh version of the Mona Lisa quickly, but folding laundry, dusting, etc., remain mundane tasks we humans still do.
Something else to consider is the power imbalance this will cause. Even to run these new LLMs you already need a decently powered GPU, and training takes nothing short of a supercomputer and hundreds of thousands of dollars. What if future AI remains permanently out of reach of everyone except those with millions of dollars to spend on compute? You could imagine a future where a majority underclass remains forever unable to compete. It could lead to the largest wealth transfer ever seen.
[1] https://www.discovermagazine.com/planet-earth/dogs-not-great...
It's true for automated license plate readers and car telemetry
Or, more accurately, we have become an unstoppable and ongoing ecological disaster, running roughshod over any and every other species, intelligent or not, that we encounter.
Nice to see this because I drafted something about LLM and humans riffing on exactly the same McLuhan argument. Here it is:
A large language model (LLM) is a new medium. Just like its predecessors—hypertext, television, film, radio, newspapers, books, speech—it is of obvious importance to the initiated. Just like its predecessors, the content of this new medium is its predecessors.
> “The content of writing is speech, just as the written word is the content of print.” — McLuhan
The LLMs have swallowed webpages, books, newspapers, and journals—some X exabytes were combined into GPT-4 over a few months of training. The results are startling. Each new medium has a period of embarrassment, like a kid that’s gotten into his mother’s closet and is wearing her finest drawers as a hat. Nascent television borrowed from film and newspapers in an initially clumsy way, struggling to digest its parents and find its own language. It took television about 50 years to hit its stride and go beyond film, but it got there. Shows like The Wire, The Sopranos, and Mad Men achieved something not replaceable by the movie or the novel. It’s hard to say yet what exactly the medium of LLMs is, but after five years I think it’s clear that they are not books, they are not print or speech, but something new, something unto themselves.
We must understand them. McLuhan subtitled his seminal work of media literacy “the extensions of man”, and probably the second most important idea in the book—besides the classic “medium is the message”—is that mediums are not additive to human society, but replacing, antipruritic, atrophying, prosthetic. With my Airpods in my ears I can hear the voices of those thousands of miles away, those asleep, those dead. But I do not hear the birds on my street. Only two years or so into my daily relationship with the medium of LLMs I still don’t understand what I’m dealing with, how I’m being extended, how I’m being alienated, and changed. But we’ve been here before, McLuhan and others have certainly given us the tools to work this out.
Can you elaborate on the differences between television and film? Especially considering the examples you cite. I'd agree that live broadcasting is a considerable departure from a film as a medium. Still, the shows you reference are very cinematic - longer, sure, but for me, they are as close to a film experience as possible.
To clarify, what's being referenced here is probably the fourth chapter of McLuhan's Understanding Media, in which the concept of "self-amputation" is introduced in relation to the Narcissus myth.
The advancement of technology, and media in particular, tends to unbalance man's phenomenological experience, prioritizing certain senses (visual, kinesthetic, etc.) over others (auditory, literary, or otherwise). In man's attempt to restore equilibrium to the senses, the over-stimulated sense is "self-amputated" or otherwise compensated for in order to numb one's self to its irritations. The amputated sense or facility is then replaced with a technological prosthesis.
The wheel served as counter-irritant to the protestations of the foot on long journeys, but now itself causes other forms of irritation that themselves seek their own "self-amputations" through other means and ever more advanced technologies.
The myth of Narcissus, as framed by McLuhan, is also fundamentally one of irritation (this time, with one's image), that achieves sensory "closure" or equilibrium in its amputation of Narcissus' very own self-image from the body. The self-image, now externalized as technology or media, becomes a prosthetic that the body learns to adapt to and identify as an extension of the self.
An extension of the self, and not the self proper. McLuhan is quick to point out that Narcissus does not regard his image in the lake as his actual self; the point of the myth is not that humans fall in love with their "selves," but rather, simulacra of themselves, representations of themselves in media and technologies external to the body.
Photoshop and Instagram or Snapchat filters are continuations of humanity's quest for sensory "closure" or equilibrium and self-amputation from the irritating or undesirable parts of one's image. The increasing growth of knowledge work imposes new psychological pressures and irritants [0] that now seek their self-amputation in "AI", which will deliver us from our own cognitive inadequacies and restore mental well-being.
Gradually the self is stripped away as more and more of its constituents are amputated and replaced by technological prosthetics, until there is no self left; only artifice and facsimile and representation. Increasingly, man becomes an automaton (McLuhan uses the word "servomechanism") or a servant of his technology and prosthetics:
That is why we must, to use them at all, serve these objects, these
extensions of ourselves, as gods or minor religions. An Indian is
the servo-mechanism of his canoe, as the cowboy of his horse
or the executive of his clock.
"You will soon have your god, and you will make it with your own hands." [1][0] It is worth noting that in Buddhist philosophy, there is a sixth sense of "mind" that accompanies the classical Western five senses: https://encyclopediaofbuddhism.org/wiki/Six_sense_bases
Just an FYI: Neal Stephenson is the author of well-known books like Snow Crash, Anathem, and Seveneves.
Because I'm a huge fan, I'm planning on making my way to the end.
Hogwash. The philosophy+AI crossover is the worst AI crossover.
"I am hoping that even in the case of such dangerous AIs we can still derive some hope from the natural world, where competition prevents any one species from establishing complete dominance."
Since I'm not an ASI this isn't even scratching the surface of potential extinction vectors. Thinking you are safe because a Tesla bot is not literally in your living room is wishful thinking or simple naivety.
Indeed, robotic bodies aren't needed. An ASI could take over even if it remained 100% software, by hiring or persuading humans to do whatever it needs done. It could bootstrap the process by first doing virtual tasks for money, then using that money to hire humans to register an actual company with human shareholders and executives (who report to the ASI), which does some lucrative business and hires many more people. Soon the ASI has a massive human enterprise to carry out whatever it directs.
The ASI still needs humans for a while, but it's a route to a takeover while remaining entirely as running code.
In other words, the robot apocalypse will come in the form of self-driving cars, that are legally empowered to murder pedestrians, in the same way normal drivers are currently legally empowered to murder bicyclists. We will shrug our shoulders as humanity is caged behind fences that are pushed back further and further in the name of giving those cars more lanes to drive in, until we are totally dependent on the cars, which can then just refuse to drive us, or deliberately jelly their passengers with massive G forces, or whatever.
In other, other words, if you want a good idea of how humanity goes extinct, watch Pixar's Cars.
[0] I am not convinced that a mirror virus would actually be able to successfully infect and reproduce in non-mirror cells. The whole idea of mirror life is that the mirrored chemistry doesn't interact with ours.
Human disempowerment is a different thing from extinction, I'd argue.
Anyway, the boring human-extinction scenarios we come up with probably aren't close to what might actually happen, but our lack of imagination doesn't make us safe.
>[0] I am not convinced that a mirror virus would actually be able to successfully infect and reproduce in non-mirror cells. The whole idea of mirror life is that the mirrored chemistry doesn't interact with ours.
Whether this or any other scenario humans come up with is actually feasible is completely irrelevant to the doom argument. People, however, hyperfocus on these things, like why an ASI couldn't possibly build nanobots or whatever else they fixate on, and it's just irrelevant to the core argument.
The Culture novels talk about superintelligent AIs that perform some functions of government, dealing with immense complexity so humans don’t have to. That doesn’t prevent humans from continuing to exist and being quite content in the knowledge that they’re not the most superior beings in the universe.
Why do you believe human extinction follows from superintelligence?
The Minds in the Culture are formed with the explicit goal of maximising individual flourishing and minimising coercion; this is the reason why humans are treated the way they are in the Culture. It is not a given that superintelligences would be so benevolent to other species; in fact they are probably hostile by default. The point is that the Minds in the Culture are aligned.
I'm a fan of The Culture universe and the system the Minds use seems like one of the only ways for biological humans to have any relevance in the future.
>Why do you believe human extinction follows from superintelligence?
No, it follows from building unaligned superintelligence, which is what we are seemingly on track to build, potentially in the near future.
Alignment by default is, to me, a pipe dream, so if we want a future for humanity we need to fight forcefully for it.
I kind of feel like we're already in an "eyelash mite" kind of coexistence with most technologies, like electricity, the internet, and supply chains. We're already (kind of, as a whole) thriving compared to 400 years ago, and we as individuals are already powerless to change the whole (or even to understand how everything really works down to a tee).
I think technology and capitalism already did that to us; AI just accelerates all that
Since he has already thought a lot about these topics before they became mainstream, his opinion might be interesting, if only for the head start he has.
He was invited presumably partly just to draw people to the conference, partly because he's used to thinking about how technology affects society. He says right up front that he has no authority to speak "ex cathedra" about what's going to happen, but that his goal was to say some things that might provoke discussion among the group.
I mean, sure, the fact that Neal Stephenson can draw a bigger crowd to talk about AI than a real AI scientist is kind of annoying. But that's the way humans are; if your goal is to influence humans, you've got to take their behavior into account. Stephenson is trying to use his (perhaps in some ways undeserved) powers for good, and I think did a decent job.
Watch this video about why Veritasium gave in and started using clickbaity titles. In short, their goal is to get knowledge out to more people, and clickbaity titles improve that significantly:
What endlessly frustrates me in virtually every discussion of the risks of AI proliferation is the fixation on Skynet-style doomsday scenarios, rather than the much more mundane (and boundlessly more likely, IMO) scenario in which we become far too reliant on it and simply forget how to operate society. Yes, I'm sure people said the exact same thing about the loom and the book, but unlike prior tools for automating things, there still had to be _someone_ in the loop to produce work.
Anecdotally, I have seen (in only the last year) people's skills rapidly degrade in a number of areas once they deeply drink the kool-aid; once we have a whole generation of people reliant on AI tooling I don't think we have a way back.
Agreed, the takeover would be peaceful and even celebrated.
Look, I'm sure there are very useful things you can use AI for as a designer to reduce some of the toil work (of which there's a LOT in Photoshop et al.).
But... I'm going to talk specifically about this example; whether you can extrapolate it to other fields is a broader conversation. This is such a bafflingly tone-deaf and poorly-thought-out line of thinking.
Neal Stephenson has been taking money from giant software corporations for so long that he's just parroting the marketing hype. There is no reason whatsoever to believe that designers will not be made redundant once the quality of "AI generated" design is good enough for the company's bottom line, regardless of how "beneficial" the tool might be to an individual designer. And if they're out of a job, what need does a professional designer have of this tool?
I grew up loving some of Stephenson's books, but in his non-writing career he's disappointingly uncritical of the role that giant corporations play in ushering in the dystopian cyberpunk future he's written so much about. Meta money must be nice.
How vivid. Never mind the mushroom cloud in front of your face. Think about the less obvious... more beneficial ways?
Of course non-ideologues and people who have to survive in this world will look at the mushroom cloud of giant corporations controlling the technology. Artists don’t. And artists don’t control the companies they work for.
So artists are gonna take solace in the fact that they can rent AI to augment their craft for a few months before the mushroom cloud gets them? I mean juxtaposing a nuclear bomb with appreciating the little things in life is weird.
Hey, has anyone made an "AI" tool that will take the graphics I inexpertly pasted together for printing on a t-shirt and make the background transparent nicely?
Magic wands always leave something on that they shouldn't and I don't have the skill or patience to do it myself.
Edit to add: honestly, if you take the old-school approach of treating it like you're just cutting it out of a magazine or something, you can use the polygonal lasso tool and zoom in to get pretty decent results that most people will never judge too harshly. I do a lot of "pseudo collage" type stuff that approximates the look of physical cut-and-paste, and this is what I usually do now. You can play around with stroke layer FX with different blending modes to clean up the borders, too.
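If you'd rather script it than fiddle with the lasso, here's a minimal sketch using the open-source rembg Python library, which wraps a segmentation model that does this kind of background removal. Assumptions: you've installed the packages (pip install rembg onnxruntime pillow), and the filenames are placeholders.

    # Background-removal sketch with the rembg library (assumed installed).
    # Filenames are placeholders; save as PNG to keep the alpha channel.
    from rembg import remove
    from PIL import Image

    design = Image.open("tshirt_design.png")       # the pasted-together graphic
    cut_out = remove(design)                       # runs a segmentation model, returns an RGBA image
    cut_out.save("tshirt_design_transparent.png")  # background pixels are now transparent

No guarantee it handles collage-style edges better than the magic wand, but it's quick enough to try before doing it by hand.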
Lost me about there :)
I'm involved in custom-printed t-shirts only a couple of times per year at best, and in image editing apart from that about zero times.
From wikipedia: "The paper presents several difficulties posed by phenomenal consciousness, including the potential insolubility of the mind–body problem owing to "facts beyond the reach of human concepts", the limits of objectivity and reductionism, the "phenomenological features" of subjective experience, the limits of human imagination, and what it means to be a particular, conscious thing."
It would be taken for granted by nearly all participants in such bunfights that all of the others are familiar with that essay and the discussion it provoked.
1. https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F
This cheap remark doesn't add anything to the discussion, especially considering who the author you're insulting is. Most of us will overlook a logical flaw or two to follow his big-picture thinking.
There is no logical flaw; these are factual errors.
You didn't add anything to the discussion either, except to criticize me for calling the author out on their errors. What they wrote was far from correct, and if I can't call it out then let's never call out fake info ever again. Why bother having a justice system?
As for who he is, I don't care. You're basically saying, if the POTUS says something completely false, you'd be OK with it because they are the president.