It certainly seems true that AI can easily replicate small projects and relatively narrowly scoped things. I'm thinking specifically of blog posts where people share their first steps and simple programs as they learn something new, like "here is how I set up a Flask website" or "here is how I trained a neural network on MNIST".
But if AI is empowering people to take on more complex projects, perhaps it takes the same amount of time to replicate the execution of a more advanced project?
In other words, maybe in the past, it would take me 10 hours to do a "small" project, which today I could do in 1 hour with the assistance of AI.
And now, with the assistance of AI, I can go much farther in 10 hours and deliver a more complex project. But that means that someone else trying to replicate this execution is still going to need around 10 hours to replicate it.
Basically, I'm agreeing that AI reduces the barrier to replicating the execution of another person's project, while at the same time letting us build more complex projects that are harder to replicate. So a basic SaaS CRUD app is trivial now, but a multi-disciplinary, domain-specific app that integrates multiple systems is still going to be hard to replicate.
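To make "trivial" concrete, here's roughly the scale of project I mean; a minimal sketch of a Flask app with one in-memory CRUD resource (the resource name and routes are hypothetical, purely illustrative of what AI assistance now scaffolds in minutes):

```python
# Minimal Flask app with one in-memory CRUD resource -- the kind of
# "small project" that is now near-instant to replicate.
from flask import Flask, jsonify, request

app = Flask(__name__)
notes = {}   # hypothetical resource; in-memory store: id -> text
next_id = 1

@app.route("/notes", methods=["POST"])
def create_note():
    global next_id
    notes[next_id] = request.get_json()["text"]
    next_id += 1
    return jsonify(id=next_id - 1), 201

@app.route("/notes/<int:note_id>", methods=["GET"])
def read_note(note_id):
    if note_id not in notes:
        return jsonify(error="not found"), 404
    return jsonify(text=notes[note_id])

if __name__ == "__main__":
    app.run()
```

The point isn't the code itself; it's that this entire category of project no longer differentiates anyone.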
Scientists who hold back from publishing breakthroughs haven't guaranteed that they'll be the sole discoverer, only that someone else will inevitably be credited when they reach the same conclusions.
Oh no, the terrible dystopia where anyone can benefit from anyone else's good ideas without restrictions! And without any gatekeepers or licensing and not even a lawyer in sight!
If this is the dark future that AI use brings for us, I say bring it. Even if it means that somebody gets filthy rich in the process, while making the rest of humanity better off.
> The very act of resisting feeds what you resist and makes it less fragile to future resistance.
At least along certain dimensions. I don't think the labs themselves are antifragile. Obviously we all know the labs are training on everything (so write/act the way you want future AIs to perceive you), but I hadn't really focused on how they're absorbing the innovation that they stimulate. There's probably a biological analog...
Well, there are many, and I quote this AI response here for its chilling parallels:
> Parasitic castrators and host manipulators do something related. Some parasites redirect a host’s resources away from reproduction and into body maintenance or altered tissue states that benefit the parasite. A classic example is parasites that make hosts effectively become growth/support machines for the parasite. It is not always “stimulate more tissue, then eat it,” but it is “stimulate more usable host productivity, then exploit it.” (ChatGPT 5.4 Thinking. Emphasis mine.)
Poetically expressed, but ultimately based on a false notion of what a business actually is.
It is true that the original "The Dark Forest" book made an impression on me, so I often found myself thinking about its theories and trying to apply them to various situations.
> “First: Survival is the primary need of civilization. Second: Civilization continuously grows and expands, but the total matter in the universe remains constant. One more thing: To derive a basic picture of cosmic sociology from these two axioms, you need two other important concepts: chains of suspicion and the technological explosion.”
1. you can never know the intentions of other entities, and they cannot know yours (chain of suspicion)
2. technology level grows unpredictably (technological explosion)
3. the goal of civilization is survival
4. resources are finite but growth is infinite
As soon as you identify another entity in the forest, even if they cannot annihilate you at present and they signal peace, both facts could change without warning. Therefore, the only rational move is to eradicate the other immediately. (Especially if you believe the other will deduce the same.)
Elimination in the book is basically sending a nuke, not a costly invasion force.
Not sure it actually is true, but that's the argument in the book.
Some rebuttals, going point by point...
1. you can know the intentions of other entities by observing and communicating with them.
2. technology explosions, like pretty much all exponential phenomena, are self-limiting. They necessarily consume the medium that makes them possible.
3. and 4. civilizations aren't necessarily sentient (ours certainly isn't) and don't have agency, much less goals. Individuals have goals, and some may work for the survival of the civilization they belong to. But others may decide they can profit if they work with the aliens.
On 4 again: multiple civilizations may well come into competition over resources, but that's more of an argument for why the forest would not be dark.
Practically speaking, a civilization that opts to focus on massive, vastly expensive efforts to find and exterminate far-flung civilizations, because they may become rivals in the future, may easily be outcompeted by civilizations that learn to communicate and work with the other civilizations they encounter.
I think the gist is: sure, we humans can't conceive of getting to anyone else in the universe on any timescale, but if we can keep from destroying ourselves, we'll eventually figure it out. And we'll spread. And we'll kill everything that isn't us in the process, as we've done as explorers on this planet.
So really, in 3BP: it's inexpensive to eradicate, but insanely expensive to misjudge the intentions of any other civilization you encounter. They might kill you.
(again, this is just my interpretation of what 3BP said)
It denies that more advanced civilizations might have better models of the universe, in which they know this isn't an issue, and that we're just stupid teenagers in the neighborhood playing dangerous games, with them merely taking a look every now and then to see whether we prove we can survive ourselves.
Bringing it back to the dark forest of idea space, it's an interesting question whether the space of feasibly executable ideas is inherently small (as this essay assumes), or whether it only looks small because of our inability to navigate/travel it very well.
If the former, then yes it probably is/will be a dark forest. If the latter, then I would think the jury is still out.
Reminds me of Dan Carlin's take on aircraft carriers in World War II: if your carrier spotted an opposing carrier and you didn't send everything you had before it spotted you, you were dead. The only move was to go all in, every time.
You might or might not fatally cripple the opponent, but retaliation can do that too and you cannot be sure that it won't. It's MAD all over again.
I have my own theory of the dark forest and AGIs: that there's some collection of AGIs out there that lets evolution develop intelligence wherever it happens, takes the civilization out once it produces an AGI, and performs a reset if it doesn't. They have literally all the time available to them, and can easily travel the vast distances if needed.
In fact, the whole article is filled with slopisms, just with the em dashes swapped for regular dashes and some improper spacing around ellipses to make you think a human wrote it.
That's not exactly a new phenomenon, and it doesn't require AI. If anything, it was worse in the '90s, with Microsoft starving out pretty much any would-be competitor they could find.
And it wasn't just Microsoft: https://en.wikipedia.org/wiki/Sherlock_(software)#Sherlocked...
What is different is that LLM platforms literally hold the world's thoughts, ideas, and conversations, plus a big part of its code, and can generate more of it. It's like "pre-crime": they could copy your idea, or catch a brewing trend and replicate it, before you even released it.
My hope is the opposite. Integrative, resonant computing (https://resonantcomputing.org/ https://news.ycombinator.com/item?id=46659456, although I have some qualms with its focus on privacy), with open social protocols baked in, seems like it just might eat some of the vicious, consumptive technocapital, in a way that capital's orientation prevents it from effectively competing with. MCP is already blowing up the old rules, tearing down strong gates, making systems more fluid / interface-y / intertwingular again, after a long interregnum of everything closing its APIs / borders.
People seem so tired and exhausted, so aware of how predatory the technosystems around us are. But it's still so unclear whether people will move, shift, much less fund and support the better world. The AT proto Atmosphereconf is happening right now, and there's been a long mantra of "we can just build things"; that's finding adoption, but there's also what conference organizer Boris said yesterday: "maybe we can just pay for things", i.e. support the projects doing amazing work. That's a huge unknown, and it's essential to actually steering us out of the dark technology: where none of us get to see, or have any say in, how the software-eaten world around us runs; where mankind, for the first time in tens or hundreds of thousands of years, has been cut off from the world OS, removed from the gods' enlightenment and from our Homo erectus, mankind-the-toolmaker, natural-scientist role.
I think the answer to the Dark Forest fear is building together. To be a radiant civilization, together. To energize ourselves & lead ourselves towards better systems, where we all can do things, make things, grow things, in integrative, socially empowering ways.
But I don't see a trend of big companies really opening up. They usually open up only if it benefits them (which can happen, and did happen, in various scenarios). Everybody is accepting and open while trying to grow, and closes up once it can reach a monopoly.
One thing I would have expected someone who knows their history to realize: forget LLMs, this is how startups have worked for decades now. You're only as good as your idea, your ability to execute, and your moat. And the small fish get eaten.
> The original Dark Forest assumes civilizations hide from hunters - other civilizations that might destroy them. But in the cognitive dark forest, the most dangerous actor is not your peer. It’s the forest itself.
Note the needless undercutting of the metaphor for the sake of the limp rhetorical flourish.
> I wrote this knowing it feeds the thing I’m warning you about. That’s not a contradiction. That’s the condition. You can’t step outside the forest to warn people about the forest. There is no outside.
Quite dramatic!
Except literally going outside and just talking to people? Using whiteboards?
Also, you fed it when you used a model to write this blog post. You didn't have to do that.