Most of the drama in the books comes to pass when the ship-dominated Culture interacts with a "backwards and benighted," but still vital and expansionist, species.
It's just not a human future. It's a contrived future where humans are ruled by benign Gods. I suppose that for some people this would be a kind of heaven. For others, though...
In a way it's a sort of anti-Romanticism, I guess.
For all of the apocalyptic AI sci-fi that's out there, Banks' work stands out as a positive outcome for humanity (if you accept that AI acceleration is inevitable).
But I also think Banks is sympathetic to your viewpoint. For example, Horza, the protagonist in the first novel, Consider Phlebas, is notably anti-Culture. Horza sees the Culture as hedonists who are unable to take anything seriously, whose actions are ultimately meaningless without spiritual motivation. I think these were the questions that Banks was trying to raise.
One could imagine Banks describing Minds whose consciousness was originally derived from a human's, but extended beyond recognition with processing capabilities far in excess of what our biological brains can do. I guess as a story it's more believable that an AI could be what we'd call moral and good if it's explicitly non-human. Giving any human the kind of power and authority that a Mind has sounds like a recipe for disaster.
Banks did consider this. The Gzilt were a quite powerful race who had no AI. Instead they emulated groups of biological intelligences on faster hardware, in a sort of group mind type machine.
Personally I think the transhumanist evolution is a much more likely positive outcome than “humans stick around and befriend AIs”, of all the potential positive AGI scenarios.
Some sort of Renunciation (Butlerian Jihad, and/or totalitarian ban on genetic engineering) is the other big one, but it seems you’d need a near miss like Skynet or Dune’s timelines to get everybody to sign up to such a drastic Renunciation, and that is probably quite apocalyptic, so maybe doesn’t count as a “positive outcome”.
Take Greg Egan's "Glory". I don't think we're told that the Amalgam citizens in the story are in some sense human descendants, but it seems reasonable to presume so. Our motives aren't quite like theirs; I don't think any living human would make those choices, but I have feelings about them anyway.
I assume post-humans will be smarter and unlock new forms of cognition. For example BCI to connect directly to the Internet or other brains seems plausible. So in the same way that a blind person cannot relate to a sighted person on visual art, or an IQ 75 person is unlikely to be able to relate to an IQ 150 person on the elegance of some complex mathematical theorem, I assume there will be equivalent barriers.
But I think the first point around motivation hacking is the crux for me. I would assume post-humans will fundamentally change their desires (indeed I believe that conditional on there being far more technologically advanced post-humans, they almost certainly _must_ have removed much of the ape-mind, lest it force them into conflict with existential stakes.)
It can right now. This isn't the problem. The problem is the power budget and efficiency curve. "Self-contained, power-efficient AI with a long-lasting power source" is actually several very difficult and entropy-averse problems all rolled into one.
It's almost as if all the evolutionary challenges that make humans what we are will also have to be solved for this future to be remotely realizable. In which case, it's just a new form of species competition, between one species with sexual dimorphism and differentiation and one without. I know what I'd bet on.
Your comment reminds me of Nick Land's accelerationism theory, summarized here as follows:
> "The most essential point of Land’s philosophy is the identity of capitalism and artificial intelligence: they are one and the same thing apprehended from different temporal vantage points. What we understand as a market based economy is the chaotic adolescence of a future AI superintelligence," writes the author of the analysis. "According to Land, the true protagonist of history is not humanity but the capitalist system of which humans are just components. Cutting humans out of the techno-economic loop entirely will result in massive productivity gains for the system itself." [1]
Personally, I question whether the future holds any particular difference for the qualitative human experience. It seems to me that once a certain degree of material comfort is attained, coupled with basic freedoms of expression/religion/association/etc., then life is just what life is. Having great power or great wealth or great influence or great artistry is really just the same-old, same-old, over and over again. Capitalism already runs my life, is capitalism run by AIs any different?
1: https://latecomermag.com/article/a-brief-history-of-accelera...
If you want a vision of the future (multiple futures, at that) which differs from the liberal, humanist conception of man's destiny, Baxter's Xeelee sequence is a great contemporary. Baxter's ability to write a compelling human being is (in my opinion) very poor, but when it comes to hypothesizing about the future, he's a far more interesting author. Without spoilers, it's a series that's often outright disturbing. And it certainly is a very strong indictment of the self-centered, narcissistic idea that the post-enlightenment ideology of liberalism is anything but yet another stepping stone in an eternal evolution of human beings. The exceptionally alien circumstances that are detailed undermine the idea of a qualitative human experience entirely.
I think the contemporary focus on economics is itself a facet of modernism that will eventually disappear. Anything remotely involving the domain rarely shows up in Baxter's work. It's really hard to give a shit about it given the monumental scale and metaphysical nature of his writing.
I’m curious to check it out. But in terms of what I’m trying to say, I’m not making a point about economics, I’m making a point about the human experience. I haven’t read these books, but most sci-fi novels on a grand scale involve very large physical structures, for example. A sphere built around a star to collect all its energy, say. But not mentioned is that there’s Joe, making a sandwich, gazing out at the surface of the sphere, wondering what his entertainment options for the weekend might be.
In other words, I’m not persuaded that we are heading for transcendence. Stories from 3,000 years ago still resonate for us because life is just life. For the same reason, life extension doesn’t really seem that appealing either. 45 years in, I’m thinking that another 45 years is about all I could take.
Easily the best "hard" Sci-Fi I've read. Baxter's imagination and grasp of the domains he writes about are phenomenal.
The correct question, then, is: what ought to be the best outcome for humans? And a benevolent coexistence where the Culture actually gives humans lots of space and autonomy (contrary to the misinformed view that the Culture takes away human autonomy) is indeed the optimal solution. It is in fact in this setting that humans nevertheless retain their individual humanity instead of taking some transhumanist next step.
Some seem to conform to your analysis here, but many seem deeply compassionate toward the human condition. I always felt like part of what Banks was saying was that, no matter the level of intelligence, humanity and morality had some deep truths that were hard to totally transcend. And that a human perspective could be useful and maybe even insightful even in the face of vast unimaginable intelligence. Or maybe that wisdom was accessible to lower life forms than the Minds.
I think something like this is likely in the reality where we have ASI, just because biological brains are so different. Even if AI is vastly beyond humans in raw intelligence, humans will probably still be able to come up with novel insights due to the fundamental architectural differences.
Of course when we start reverse engineering biological brains then this gets fuzzier.
Why is that assumption implicit? I can imagine a world in which humans and superhuman intelligences work together to achieve great beauty and creativity. The necessity for dominance and superiority is a present day human trait, not one that will necessarily be embedded in whatever comes around as the next order of magnitude. Who is to say that they won't be playful partners in the dance of creation?
Over/under on the first uplifted-cat-written novel: 500 years.
Ten thousand years is a long time.
The best I've read recently were Adrian Tchaikovsky's books, all quite excellent and fairly centered around uplift.
What if future AIs are not omnipotent, but bounded by limitations currently unknown to us? Just like us, but differently limited. Maybe they appreciate our relative limitlessness just as we appreciate theirs.
I'm talking about fundamentally novel superhuman intelligences working with someone who has spent a few millennia exploring what it means to truly be themselves.
* 2017 tech, albeit at great expense because half a quadrillion transistors is expensive to build and to run
For example, I can easily picture superhuman intelligences that have neither the patience nor interest in the kinds of things that humans are interested in, except in so far as the humans ask politely. A creature like that could create fabulous works of art in the human mode, but would have no desire to do so besides sublimating the desire of the humans around it.
I’d much rather have the Federal Republic of Germany and Google than Emperor Charles V and the Inquisition.
Who’s to say that we can’t make similar progress in the next 500 years too?
e.g., a small snowball could be nearly perfectly enmeshed with the surrounding snow on top of a steep hill, but that doesn't stop the small snowball from rolling down the hill, becoming a very large snowball in a few seconds, and wrecking some unfortunate passer-by at the bottom.
A few microns of freezing rain may have been the deciding factor, so even a 99.9% relative 'alignment' between snowball and snowy hilltop would still be irrelevant for the unlucky person, who may have walked by 10,000 times prior.
The more powerful the system is compared to you, the more any small difference is amplified.
AI that's about as powerful as an intern, great, no big deal for us. AI that's capable enough to run a company? If it's morally "good" then great; if not trade unions and strikes are a thing, as are "union busters". AI that's capable enough to run a country? If it's morally "good" then great; if not…
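To put toy numbers on that intern/company/country scaling (a minimal sketch in Python, with made-up figures, just to make the snowball point concrete): the misaligned fraction stays fixed while the absolute damage grows with the system's power.

    # Toy sketch with made-up numbers: hold misalignment fixed at 0.1%
    # and scale the system's optimization power; the absolute damage
    # from the misaligned fraction grows linearly with that power.
    def misaligned_impact(power: float, alignment: float = 0.999) -> float:
        """Impact attributable to the misaligned fraction of the system."""
        return power * (1.0 - alignment)

    for label, power in [("intern", 1), ("company", 1_000), ("country", 1_000_000)]:
        print(f"{label:>7}: misaligned impact ~ {misaligned_impact(power):g}")
    # intern: ~0.001, company: ~1, country: ~1000 (arbitrary units)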
Between SLAPP cases, FUD, lobbying, and the way all these harms occur despite the entities responsible being made out of humans, there are already a bunch of non-AI ways for powerful entities that harm us to make it difficult to organise ourselves against the harm.
> arguing that AIs can't be a problem ... because companies already are superhuman
Quite the opposite, actually: corporations can potentially be very destructive "paperclip optimizers".
But I think individual humans have always been narratively secondary in the story of humanity.
And I think that's fine, because "story" is a fiction we use to manage a big world in the 3 pounds of headmeat we all get. Reducing all of humanity to a single story is really the dehumanizing part, whether it involves AIs or not. We all have our own stories.
And you can leave. There are always parts of the Culture splitting off or joining back. You can request and get a ship with Star Trek-level AI and go on your merry way.
A Culture Mind losing a single human(oid) isn't like having a pet run away, it's like losing an eyelash — your peers may comment about the "bald patch" if you lose a lot all at once, but not any single individual one.
Yet at the same time, these Minds are written to care very much indeed: this particular Mind was appalled at having killed the 3492 (of 310 million) who refused to evacuate three other Orbitals that needed to be destroyed in the course of a war.
> doesn’t make them have agency in any meaningful way
these two sentences can't be true at the same time
The humans in the Culture are similarly “free”, in that they’d have to give up their lavish and safe lifestyle for true freedom and self-determination. They choose not to, but they can.
Some pets run away. Most don’t.
"humans are left to nibble at the margins and dance to the tune of their betters." Isn't that society today, but without the wealth of Culture society?
If you consider many of the great post-AI civilizations in sci-fi (The Matrix, Foundation, Dune, the Culture, Blade Runner, etc.), they're all shaped by the consequences of AI:
- The Matrix: AI won and enslaved humans.
- Foundation: humans won and a totalitarian empire banned AI, leading to the inevitable fall of Trantor because nobody could understand the whole system.
- Dune: humans won (Butlerian Jihad) and AI was banned by the Great Houses, which led to the rise of Mentats.
- The Culture series: benign AI (Minds) runs the utopian civilization according to Western values.
I'm also a fan of the Hyperion Cantos, where AI and humans found a mutually beneficial balance of power. Which future would you prefer?
How much of the series did you read? The Fall of Hyperion makes it quite clear that the Core did not actually have humanity's best interests in mind.
And when they aren't, Banks writes them as going off on their own to do what pleases them. And even those, as with the Gray Area, tend to have a deep sense of respect for their fellow thinking beings, humans included.
And if I recall rightly, Banks paints this as a conscious choice of the Culture and its Minds. There was a bit somewhere about "perfect AIs always sublime", where AIs without instilled values promptly fuck off to whatever's next.
And I think those values are a big part of what Banks was exploring in his work. The Affront especially comes to mind. What does kindness do with cruelty? Or the Empire of Azad, which creates a similar contrast. What the Culture was up to in both those stories was about something much richer than a machine's pets.
I think "pets" does describe the relationship pretty well, and your attempt to refute it just confirms it: pets are "fascinating, amusing and exasperating" and cared for by humans in a kind of pseudo-"parental" relationship. It's not a true parental relationship, because pets will always and forever be inferior. That inferiority means their direct influence on the "society" they live in is pretty much nil, and they have no agency and are reduced to being basically an object kept for the owners own reasons.
That's exactly what's going on in the Culture universe: the Minds keep the humans for their own reasons. The Culture (as depicted) is no longer the story of the humans in it; it's the story of the Minds.
I definitely prefer "Player". But everyone gets to enjoy what they enjoy. I'd love to have had more Banks to love or hate as I chose :(
That's one of the reasons why this book is better than the other "Culture" novels.
The Command System train as a lance smashing into the inverted chalice (grail) dome of the station at the end. Death by water. Running round in a ring. Tons of other parallels if you dig/squint.
I do think stating CP is the best of the series is also quite definitively a contrarian take.
How about reflecting upon Horza's reasons to side with the Idirans? The later installments of the "Culture" novels are, in comparison, just empty triumphalism: "Rah rah rah, the good guys won and lived happily ever after."
If you just have omniscient gods control society, then culture becomes meaningless. There is no reason to explore what cultural adaptations might arise in a spacefaring society.
I am deeply baffled by the people who claim (1) we can somehow build something much smarter than us, and (2) this would not pose any worrying risks. That has the same energy as parents who say, "Of course my teenagers will always follow the long list of rules I gave them."
Is it better to have suffering and scarcity because that affords meaning to life in overcoming those challenges?
There's a paradoxical implication: if overcoming adversity is what gives life meaning, then reaching the goal state, in which those problems are overcome, robs life of meaning. That would seem to be a big problem.
The hope is maybe that there are levels of achievement or expansions to consciousness which would present meaningful challenges even when the more mundane ones are taken care of.
As far as the Culture's own answer goes, what aspects of agency or meaningful activity that you currently pursue would you be unable to pursue in the Culture?
And as far as possible futures go, if we assume that at some point there will be machines that far surpass human intelligence, we can't hope for much better than that they be benign.
Dajeil Gelian spends something like 40 years bending the Sleeper Service to her will in Excession. The helplessness of the Minds to override free will is kind of a core theme of Excession IMO
I got the impression that the Minds are proud of how many humans choose to live in their GSV or Orbital, when they are free to live anywhere and they appear to care deeply about humans in general and often individuals too.
Also, the Minds are not perfect Gods. They have god-like faculties, but they are deliberately created as flawed imperfect beings.
One novel (Consider Phlebas?) explained that The Culture can create perfect Minds, but they tend to be born and then instantly sublime away to more interesting dimensions.
That shouldn't happen. No way would I trust an AI that claims to be super, but can't solve pretty basic GOFAI + plausible reasoning AI alignment. In theory a 1980s/1990s/old Lesswrong style AI of a mere few exabytes of immutable code should do exactly what the mind creating it wants.
The civilisations in Banks' stories that align their AIs are the bad guys.
I don't think the Minds would be willing to actually not know the result, despite what they probably claim.
We could easily build AIs that just model the world, without really trying to make them do stuff, or have particular inclinations. We could approach AI as a very pure thing, to just try to find patterns in the world without any regard to anything. A purely abstract endeavour, but one which still leads to powerful models.
I personally believe that this is preferable, because I think humans in control of AI is what has the potential to be dangerous.
Remember that "a few exabytes" refers to the immutable code. It has way more storage for data, because it's an old-school Lesswrong style AI.
Not like a neural network or an LLM. Sure, we dead-ended on those, but an ASI should be able to write one.
> A Culture Mind would be deeply offended if you called it "An AI" to its avatars face :P
That's how they get you to let them out of the AI box.
What human agency, in the way that you mean it, exists in such a world that doesn't in the Culture? You don't have Minds, but you don't really need them for a world without disease, poverty, or scarcity of basically anything. No one's life would really mean anything, because everyone would have everything they could want. Eventually, we will have mastered all technologies that can be developed. The world will be finished. The only thing left is to wander around the universe in giant ships with all the comforts of home, playing games, etc.
You can get ~99.9% of the world of the Culture without superintelligence. Just fast forward current normal human development a few hundred years and you get the same world where all instrumental value of things ceases to exist, where very little remains to be discovered, and nothing you do is going to meaningfully change anything except what happens to you and those who care about you.
Of course even now, arguably, that's the case for virtually everyone.
So this is what inspired The Outer Limits: Season 5, Episode 7 'Human Operators' https://theouterlimits.fandom.com/wiki/The_Human_Operators
There's that insinuation that humans are somehow more special than godlike machines.
For example, why would you want to keep around a creature that can Gödel attack you, even if you're an ASI? Humans not being wholly material is more incentive to wipe them out and thus prevent them from causally interacting with you, not less.
On a similar note, you ever read Egan's "Permutation City"?
I think the basic logic makes sense, it's sort of analogous to the ultimatum game in game theory. I don't know any good theories of rationality that suggest taking bad deals in the ultimatum game, even if in theory they "get you out" of the universe somehow.
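(For anyone who hasn't run into it, here's a minimal sketch of the ultimatum game with illustrative payoffs, not drawn from any of the books: a proposer splits a pot, the responder accepts or rejects, and a rejection leaves both with nothing. Being credibly known to refuse bad deals is exactly what moves the proposer's offer.)

    # Minimal ultimatum-game sketch (illustrative numbers only).
    POT = 10

    def best_offer(reject_below: int) -> int:
        """Payoff-maximizing offer against a responder known to reject
        anything below a threshold: the smallest offer still accepted."""
        return min(offer for offer in range(POT + 1) if offer >= reject_below)

    for threshold in (0, 3, 5):
        offer = best_offer(threshold)
        print(f"rejects below {threshold}: offer {offer}, proposer keeps {POT - offer}")
    # rejects below 0: offer 0, proposer keeps 10
    # rejects below 3: offer 3, proposer keeps 7
    # rejects below 5: offer 5, proposer keeps 5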
On Permutation City, well, I'm somewhat skeptical of how it was written to work.
I think some version of this future is unfortunately the optimistic outcome, or else we change ourselves into something unrecognizable.
The Minds are far more than benevolent, detached caretakers, and some Mind organizations do take an active role in shaping the society. It just seems there isn't any written ideology or law to what they want beyond "don't fuck with these pets that I like". Like I said, anarchic.
There's a post here that lists quite a few of the problems:
"Against the Culture" https://archive.is/gv0lG https://www.gleech.org/culture
The main sections I like there are "partial reverse alignment" and "the culture as a replicator", with either this or Why the Culture Wins talking about what happens when the Culture runs out of moral patients.
"Partial reverse alignment" means brainwashing/language control/constraints on allowed positions in the space of all minds, by the way.
You can think what you want about the Culture, and more crudely blatant gamer fantasies like the Optimalverse stuff and Yudkowsky's Fun Sequences, but I consider them all near 100% eternal loss conditions. The Culture's a loss condition anyway because there's no actual humans in it, but even if you swapped those in it's still a horrible end.
Edit: the Optimalverse stuff is really only good if you want to be shocked out of the whole glob of related ideas, assuming you don't like the idea of being turned into a brainwashed cartoon-pony-like creature. Otherwise avoid it.
If the minds are intelligent beings, why shouldn’t they have parity with humans?
Human chauvinism (like many other forms of chauvinism) is based on an assumption of superiority.
Also, uhh, there are a lot fewer than a trillion Minds (uppercase M, the massive AIs of the Culture). In fun space they're probably blocked out to make the computation feasible (essentially all the minds in a particular fun space are really the same mind that's playing the "game" of fun space).
Also, I don't think they suffer. If they claim to, it's probably a trick (easy AI box escape method).
If you think human suffering is bad, you've got some thinking to do.
Human preferences are so complicated that the bet to make is on humanity itself, and not a substitute.
I'd argue the alternatives at this point are literally infinite, are they not?
We're in the realm of speculation, and the idea that the only acceptable ending is "humans are the boss" seems unimaginative to me. Especially so since I know humans can and do (especially at the group vs individual level) demonstrate short-sightedness, cruelty, and at best a reluctance toward conservation. How open will humanity be to recognizing non-humans as having lives of equal value to our own? I wouldn't want to be an alien species meeting a superior-technology humanity, that's for sure.
As in it is morally correct and rational to defect, not just that they predict it would happen.
So like current late stage capitalism, except the AIs are more interested in our comfort than the billionaires are.
That aspect of Banks always felt a bit handwavey as to the specifics (i.e., good/freedom triumphs over evil/tyranny because it's a superior philosophy).
At galactic scale, across civilizational timespans, it's not as apparent why that should hold true.
I would have hoped that Banks, had he lived longer, would have delved into this in detail.
Granted, Vinge takes a similar approach, constructing his big bad from an obviously-not-equivalent antagonist, sidestepping the direct comparison.
The closest I got from either of them was that they posited that civilizations that tolerate and encourage diversity and individual autonomy persist for longer, are thus older, and that older counts for a lot at galactic scale.
^ Note: I'm asking more about the Idiran Empire than an OCP.
The Wikipedia articles about the books go into spoiler-heavy detail about the Culture's interactions with those civilizations.
A lot of the civilizations that would be militarily or economically more powerful than the Culture aren't a problem because they've already sublimed.
I've always gotten the impression that the resources dedicated to maintaining and serving the humans in the Culture are dwarfed by the resources invested into developing their science and AI.
(I mean, basically they have von Neumann machines controlling whole star systems; obviously this level of industry beats most biological enemies.)
It's worth noting, though, that there are several other cultures/species that are more advanced than the Culture. E.g., the ascended species that stewards the tomb worlds (and is in essence like a god compared to the Culture). There are many other examples.
I don't think it was ever claimed that the Culture was predestined to be the strongest, or anything like that.
"Anything unique to Culture" as a solution begs the question "Why is that unique to the Culture?"
It isn't clear why super-spy-diplomat-warriors would only be produced by the Culture.
As soon as both sides have a thing, it ceases to be a competitive advantage.
So SC as a solution implies that no other civilizations have their version of SC. Why not?
More than conceivable, we see them in some of the books.
I think the only real glimpse we get of other civilizations' versions is in Surface Detail, where it's mentioned (but never directly observed) that the other side of the war has agents influencing events exactly like the Culture does.
If you haven't, I would recommend Player of Games, which is one of the few Culture novels I have read, but which I think deals with this topic directly as the main idea of the book.
If you have read it, it's possible your criticism is running deeper and you feel the way it's handled in that book is handwavey. I can't really address that criticism, it's perfectly valid! I'm not sure if other books do any better of a job, but it felt on par to Asimov writing political/military intrigue in Foundation: entertaining and a little cute, if somewhat shallow.
Which is to say, if they both mobilized for brute force total war, the Culture could steamroll them.
Which makes it a "send the Terminator back in time to kill Sarah Connor" solution -- strangle a potential future competitor in the womb.
That makes for an interesting book on the ruthless realpolitik behavior of otherwise morally and ethically superior civilizations, and how they actually maintain their superiority. (Probably Banks' point)
But less-so on the hot tech-vogue generalization of "Isn't the Culture so dreamy? They've evolved beyond our ignorance."
- Yes, the Culture is way more technologically advanced than the Azadians. But the point is that a basic Culture human of standard intelligence (albeit with extensive knowledge of strategy games) can, after a relatively small amount of training, defeat an Azadian who has devoted their entire life to Azad. And the reason is that the Culture human's strategies reflect the values and philosophy of the Culture. The subtext is that the non-hierarchical nature of the Culture leads to a more effective use of resources.
- The Culture-Idiran war is an example of them confronting an enemy that is of comparable technological development. Again, it's implied that the Culture wins because their decentralised and non-hierarchical existence let them move rapidly throughout the galaxy while switching quickly to a full war-footing.
- It sounds like you have the impression that the Culture dominates all other civilisations. This is not true. For example, in Excession they discover that they're being observed by a vastly superior civilization that has some ability to hop between dimensions in a way that they cannot. There are civilisations like the Gzilt or Homomdans who are on a level with the Culture, and ancient Elder species like the Involucra who also give Culture ships a run for their money in combat.
> It sounds like you have the impression that the Culture dominates all other civilisations.
There are three places I'm getting this from, more as a pervasive slant than an absolute truth.
(1) Iain Banks' writings on the Culture (outside of the books themselves) demonstrate a very strong belief that the Culture approach to things is superior (to those used on current Earth, and to many alternatives)
(2) The Culture's zeitgeist moment is very focused on how superior it is to alternatives (civilization-evolution-wise, if not explicitly). Rarely do people post blog entries about how interesting and aspirational other civilizations from the Culture series are
(3) If Banks had wanted to set up a peer-powered civilization that made alternative choices, he would have. He didn't. At least aside from oddities or deus ex machina
The most convincing point I've heard in this thread is that essentially the Culture is OP not because of inherent superiority of their beliefs, but specifically because their belief system keeps most of them hanging around past the technological stage other civilizations would have transcended.
> Although the Culture has more advanced technology and a more powerful economy than the vast majority of known civilizations, it is only one of the "Involved" civilizations that take an active part in galactic affairs. The much older Homomda are slightly more advanced at the time of Consider Phlebas (this is, however, set several centuries before the other books, and Culture technology and martial power continue to advance in the interim);[b] the Morthanveld have a much larger population and economy, but are hampered by a more restrictive attitude to the role of AI in their society.[4] The capabilities of all such societies are vastly exceeded by those of the Elder civilisations (semi-retired from Galactic politics but who remain supremely potent) and even more so by those of the Sublimed, entities which have abandoned their material form for existence in the form of non-corporeal, multi-dimensional energy being. The Sublimed generally refrain from intervention in the material world.[5]
> Again, it's implied that the Culture wins because their decentralised and non-hierarchical existence let them move rapidly throughout the galaxy while switching quickly to a full war-footing.
The Culture didn't win because of their "decentralised and non-hierarchical existence"; they won for exactly the same reason stormtroopers couldn't shoot Luke Skywalker: they're the main character in the story, and they have Banks' sympathies.
> The initial stages of the war were defined by a hasty withdrawal of the Culture from vast galactic spaces invaded by the Idirans, who tried to inflict as many civilian casualties as possible in the hope of making the Culture sue for peace. However, the Culture was able—often by bodily moving its artificial worlds out of harm's way—to escape into the vastness of space, while it in turn geared up its productive capabilities for war, eventually starting to turn out untold numbers of extremely advanced warships.
> The later stages of the war began with Culture strikes deep within the new Idiran zones of influence. As the Idirans were religiously committed to holding on to all of their conquests, these strikes forced them to divide their attentions.
It also cites the Culture's more liberal attitude to AI as a deciding factor.
[0] https://theculture.fandom.com/wiki/Idiran%E2%80%93Culture_Wa...
The Culture basically had god (Banks) on its side, to ensure its victory, just like Skywalker had god (Lucas) on his side, who made it so the stormtroopers could never shoot straight when he was around. I bet Banks, consciously or unconsciously, played up the Culture's strengths, and played down the strengths of its adversaries (or engineered them with exploitable weaknesses).
[1] If I recall correctly. It is ~40 years since I read Foundation.
I'd argue A Fire Upon the Deep clearly has the antagonist winning, and the life we recognize is only saved by basically staying within a safe area of the galaxy. You haven't conquered the monster just because you're safe under your blanket.
The Culture is also very cautious about running high-fidelity simulations, for ethical reasons. Useful simulations approach the real in fidelity, which makes them problematic to delete. But they still do it.
If the Culture has Minds, why wouldn't other civilizations?
And why would Culture-esque Minds be superior to less-Culture-y Minds?
See underlying textual creepiness about "what exactly are the humans for?"
I.e. an inverted Matrix, with the humans kept in the physical
The thing hidden in plain sight is the haves and have-nots of the Culture, though. Some have processing power, and some must beg those with processing power to run all the things.
The Gzilt minds are specifically compared to Culture Minds and deemed comparable in capabilities.
Well, that's not really realistic. Maybe they are only pretending to be based on organics, but really aren't, and just put up a facade.
It mixes the semi-absurdity and silliness of the absurdly powerful Minds (AIs in control of ships), individual 'humans' in a post-scarcity civilization, and the deadly seriousness of games of galactic civilizations.
It also has an absolutely great sequence of the Minds having an online conversation.
I do agree with his Consider Phlebas hesitancy. I still enjoy it, but it is clearly his early ideas, and he's still sounding out his literary sci-fi tone and what the Culture is. And you can skip the section where the protagonist gets trapped on an island with a cannibal. I think it was influenced by the sort of J.G. Ballard horror from the same period, and it doesn't really work. He never really does something like that again in any of the Culture books.
It does seem a bit silly to argue about the particulars of a fantasy utopia. Banks posits the conceit that super AI Minds will be benevolent, and of course this need not be the case (plenty of counter-examples in SF, one of my favorites being Greg Benford's Galactic Center series). But note that within the Culture, mind reading (let alone mind control) without permission will get a Mind immediately ostracized from society, one of the few things that gets this treatment. For example, the Grey Area uses such a power to suss out genocidal guilt and is treated with extreme disdain by other Minds. See https://theculture.fandom.com/wiki/Grey_Area
As for "personal agency over the broad course of the future", note that the vast majority of humans don't have that, and will never have that, with or without Minds. If one can have benevolent AI gods at the cost of the very few egos affected, on utilitarian grounds that is an acceptable trade-off.
On a personal note, I think the relationship between people and AIs will be far more complex than just all good (Culture) or all bad (Skynet). In fact, I expect the reality to be a combination with absurd atrocities that would make Terry Pratchett giggle in his grave.
The mind control is in the design of the language and the constraint the Minds place on the brain configurations and tech of the other characters. Banks is quite subtle about it, but it's pretty clearly there.
Excession, Player of Games and Use of Weapons are probably my top Culture novels. With Feersum Endjinn up there in his general SF work, though I must admit I've only read it once as I don't have a copy.
Probably far too late for this thread, but lots of his non-SF work is amazing too. Though often dark and dealing with uncomfortable themes. The plots and themes of A Song of Stone + The Wasp Factory have popped into my head for decades after I read them.
http://strangehorizons.com/non-fiction/articles/a-few-questi...
In the "Reasons: the Culture" section of the appendices in Consider Phlebas there's the line, "The only desire the Culture could not fulfil from within itself. . . was the urge not to feel useless." In that need alone it is not self-sufficient, and so it has to go out into the rest of the galaxy to prove to itself, and to others, that its own high opinion of itself is somehow justified. At its worst, it is the equivalent of the lady of the manor going out amongst the peasants of the local village with her bounteous basket of breads and sweetmeats, but it's still better than nothing. And while the lady might—through her husband and the economic control he exerts over his estate and therefore the village—might be partly responsible for the destitution she seeks, piecemeal, to alleviate, the Culture isn't. It's just trying terribly hard to be helpful and nice, in situations it did nothing to bring into being.
I also love Consider Phlebas. Maybe because it was the first I read, but I’ve found it to be a great comfort read. Look to Windward and Player Of Games next. Use Of Weapons is always fantastic, but less fun.
His non-sci-fi fiction is great too. I loved Complicity and have read it many times. His whisky book is fantastic.
P.S. Robin Sloan is a wonderful science-fiction and fantasy writer. I was first introduced to him in the excerpts of Cambridge Secondary Checkpoint English exam papers, but only got around to reading his books many years later. I would recommend them to anybody.
That's fine; the Culture is really against subliming.
Here is one suggestion that I think surpasses Banks in scope and complexity, and yes, perhaps even with a whiff of optimism about the future:
Hannu Rajaniemi’s Jean Le Flambeur/Quantum Thief Trilogy (2010 to 2014)
https://en.wikipedia.org/wiki/The_Quantum_Thief
https://www.goodreads.com/series/57134-jean-le-flambeur
He manages plots with both great intricacy and with more plot integrity than Banks often manages. And he is much more of a computer and physics geek too, so the ideas are even farther out.
Probably also an HN reader :-)
Also set in a comparatively near future. The main “problem” with the Quantum Thief trilogy is the steep learning curve—Rajaniemi throws the reader in at the deep end without a float. But I highly recommend persevering!
Without my prompting it ended up in our local sci-fi/fantasy book club's rotation a couple months back, and there was some enjoyment, but overall the major mood seemed to be pretty befuddled and confused, somewhat hurt by some of the trauma/bad stuff (which isn't better in book 2!). But man, it worked so well for me. As you say, a very deep-end book. But there's so much fun stuff packed in, such vibes; I loved puzzling it through the first time, being there for the ride, and found only more to dig into the next time.
Still feels very different from Banks, where "space hippies with guns" has such a playful side. Quantum Thief is still a story of a solar system and the pressures within it, but there are such amorphous and vast extents covered by the Culture, so many, many things happening in that universe. The books get to take us through Special Circumstances, through such interesting edge cases, whereas the plot of Quantum Thief orbits the existing major powers of its universe.
As in, it's the best that I know to exist.
It's not "realistic", because they need to keep AI development confined for story reasons, and I don't think what they do with black holes really makes sense, but most of the individual "objects" or "processes" are pulled directly from physics or computer science papers.
The book's kind of a museum- or zoo-like experience, because the author needs to keep these things unrealistically separate in order to show them off how he wants to, but it's not generally gibberish. It just wouldn't work in real life.
Edit: or do you mean you personally don't like it because the world isn't solid enough? (generally unchanging, a good base for a story) If so, I agree.
That friend gifted me The Player of Games and Consider Phlebas from esteemed Folio Society (https://www.foliosociety.com/usa/the-player-of-games.html, https://www.foliosociety.com/usa/consider-phlebas.html), gorgeous editions, great paper, lovely bindings, great illustrations. I've been eyeing them for a while and it's so nice to have good friends who notice and are so generous.
We're already kinda cyborgs. We use our tools as extensions of ourselves. Certainly my phone and computer are becoming more and more a part of me.
A chess playing AI and a human beat a chess playing AI alone.
The future I'd like to see is one where we stay in control but make better decisions because of our mental enhancements.
With this logic, the most important thing now is not 'safe AI' but tools which do not manipulate us. Tools should, as a human right, be agents of the owner's control alone, in the same way that a hand is.
AI isn't separate from us. It's part of us.
Seeing it as separate puts us on a darker path.
https://medium.com/@Fares_Mohamed_Amine/demystifying-neural-...
The worlds that Alastair Reynolds builds in the Revelation Space series grips me the most though.
The Conjoiners, with their augmented, self-healing, interstellar-travelling yet still a little human characteristics, are both believable and beyond the familiar all at the same time. Highly recommended.
I have similar issues with The Three-Body Problem.
One of my favourites! Excession most of all though. Agree with starting with Player of Games.
edit: and if you enjoyed Banks' Horror you'll probably get into Dan Simmons' stuff, another SF (Hyperion) and Horror writer. The Song of Kali was excellent.
However, the article has one point that I viscerally reacted to:
“we have been, at this point, amply cautioned.
Vision, on the other hand: I can’t get enough.”
Amen.
The audiobooks are absolutely the best way to enjoy Iain M Banks.
The Algebraist read by Anton Lesser is one of the best audiobooks ever made.
Equal best with Excession read by Peter Kenny.
These two narrators are incredibly good actors.
I could never go back to the books after hearing these audiobooks.
Lots of what we see "on screen" (but obviously it's a book) is really a somewhat detail-less alien action book dressed up as 1990s Earth (that's why the characters are called humans, and why certain things are described inaccurately).
A good entry in an action game series started in the 90s isn't a bad bet. As I said, it's close to what good parts of the books portray themselves as anyway.
Mass Effect has a lot of fun with a galactic society. If you like the crew-based action of Consider Phlebas and want more of a story then that or (believe it or not) the Guardians of the Galaxy game that came out a couple of years ago are good choices.
Deus Ex and Cyberpunk 2077 tackle trans-humanism to an extent, but they're Earth-bound. Primordia is a point-and-click adventure where humans are extinct and our creations live on. I could probably think of some more, but the dinner alarm is going off!
Maybe most of space belongs to robots, and humans are left with the planets in the habitable zone.
Well, let us see Player of Games.
Let's play Maximum Happy Imagination.
I'll start small.
Flying cars.
The interesting and unique aspect of the series is that Banks manages to write interesting stories about a utopia.
A decent chunk of the stories has the reader follow around Special Circumstances, which is effectively a sort of space CIA interfering in the affairs of other cultures. The entire plot of Player of Games, spoiler alert for people who haven't read it, is that both the protagonist of the story and the reader (via the narrator) have been misled and used as pawns to facilitate the overthrow of the government of another civilization, which afterwards collapses into chaos.
To me you can straight up read most of the books as satire on say, a Fukuyama-esque America of the late 20th century rather than a futuristic utopia.
Well, better put:
A lot of people, when confronted with these revelations/questions, have an existential crisis. For some, the solution is to deny the Culture.
Long story short, the Culture is a true Utopia and some people just can't handle utopias
The earlier novels (I am going completely from memory here), "Use of Weapons" and "Consider Phlebas":
The humans are pets of the machines.
The machines are vicious, and righteous. They use extreme, and creative, violence when they feel it is in their interests.
In the later novels I feel Banks lost his way a bit, got too attached to the characters. In the later Culture novels the main characters are invulnerable; he does not let any of them die. Whereas in Consider Phlebas (spoiler alert) Horza dies at the end and his whole race is genocided.
Yes but that was a population that could actually completely change sex in terms of biological function, using futuristic technology to do so.
So not really the same as the contemporary idea of "trans" which is just cosmetic changes to approximately mimic the opposite sex.
Some related reading:
https://www.theguardian.com/books/1997/may/20/fiction.scienc...
https://www.scotsman.com/news/interview-iain-banks-a-merger-...