https://www.youtube.com/watch?v=yRV8fSw6HaE
But there's more to the setup than you might assume from a casual reading. Here's the code used for that demo:
https://github.com/SeanCole02/doom-neuron
So there is an entire PyTorch stack wrapped around the mysterious little blob of neurons -- they aren't just wired straight into WASD. There is a conventional convnet-based encoder, running on a GPU, in the critical path. The README tries to argue that the "neurons are doing the learning," but to my dilettante, critical eye it really looks as though there is a hell of a lot of learning happening in the convnet as well.
Are the neurons learning to play Doom, or are they learning to inject ever so slightly more effective noise into the critical path? Would this work just as well if we replaced the neurons with some other non-Markovian sludge? The authors do ablation experiments to try to get to the bottom of this, but I can't really tell how compelling the results are (due to my own ignorance/stupidity, of course).
The whole point of the CNNs is to act as an autoencoder for the input and a decoder for the output. The only reason this is done in the first place is that the number of electrodes in the dish is pitiful and has no chance of describing something as complex as Doom. They are there to create a latent space that can be fed through 60-odd electrodes, and to decode the neurons' latent space back into button presses.
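A minimal sketch of the encode/decode loop described above. Everything here is hypothetical and illustrative: the names, shapes, and the stand-in hash "encoder" are not taken from the doom-neuron repo, where the encoder and decoder are trained convnets.

```python
# Hypothetical sketch of the encode -> dish -> decode loop. The electrode
# count and action set are illustrative assumptions, not the repo's values.
import random

N_ELECTRODES = 60          # roughly the channel count mentioned above
ACTIONS = ["left", "right", "forward", "shoot", "noop"]

def encode(game_state):
    """Compress a rich game state into one stimulation value per electrode.
    In the real system this is a trained convnet; here it's a stand-in hash."""
    return [hash((game_state, i)) % 256 / 255.0 for i in range(N_ELECTRODES)]

def decode(spike_counts):
    """Map per-electrode spike counts back to a discrete action.
    The real decoder is also learned; here we just bucket the total activity."""
    total = sum(spike_counts)
    return ACTIONS[total % len(ACTIONS)]

# One tick of the loop: game state -> stimulation -> (dish) -> spikes -> action.
stim = encode("frame_0001")
fake_spikes = [random.randint(0, 5) for _ in range(N_ELECTRODES)]  # dish stand-in
action = decode(fake_spikes)
assert action in ACTIONS
```

The point of the sketch is the information bottleneck: everything the dish "knows" about the game has to squeeze through those ~60 channels in each direction, which is why the learned encoder/decoder sit in the critical path at all.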
The pong version of the game was the proof of concept that neurons can learn without a latent space intermediate in either direction. Both the world state and neuronal control were raw signals: https://pubmed.ncbi.nlm.nih.gov/36228614/
What I wanted to do after DishBrain pong, but never had the budget for, was to use live animals as the computational substrate: use the visual cortex of one as the input, send the neural spikes to a second animal's frontal lobe for computation, and finally send those signals to a third animal's motor cortex to physically press buttons. It's a shame we never raised enough, because it wouldn't have cost more than $15m to build the hardware and do the biological proof of concept.
That sounds terrifying.
Edit: sweet Jesus, never mind, I misread it.
It's rather unfortunate that in the West it is impossible to get elective brain surgery. The countries that will do it have at best a spotty record. I talked to someone who had it done in Brazil and their electrodes became dislodged after a few months.
There is nothing new or horrifying about self experimentation. Newton for one did it in conditions that were far more dangerous: https://psmag.com/social-justice/newtons-needle-scientific-s...
What does the ethical due diligence process look like, for something like this?
Yeah it feels like they constructed the conclusion and worked backwards from there. I'm not seeing how their claim has much merit.
> After forty years of vegetarianism, Max Berger was about to sit down to a feast of pork sausages, crispy bacon and pan-fried chicken breast. Max had always missed the taste of meat, but his principles were stronger than his culinary cravings. But now he was able to eat meat with a clear conscience.
> The sausages and bacon had come from a pig called Priscilla he had met the week before. The pig had been genetically engineered to be able to speak and, more importantly, to want to be eaten. Ending up on a human’s table was Priscilla’s lifetime ambition and she woke up on the day of her slaughter with a keen sense of anticipation. She had told all this to Max just before rushing off to the comfortable and humane slaughterhouse. Having heard her story, Max thought it would be disrespectful not to eat her.
> The chicken had come from a genetically modified bird which had been ‘decerebrated’. In other words, it lived the life of a vegetable, with no awareness of self, environment, pain or pleasure. Killing it was therefore no more barbarous than uprooting a carrot.
> Yet as the plate was placed before him, Max felt a twinge of nausea. Was this just a reflex reaction, caused by a lifetime of vegetarianism? Or was it the physical sign of a justifiable psychic distress? Collecting himself, he picked up his knife and fork . . .
> Source: The Restaurant at the End of the Universe by Douglas Adams (Pan Books, 1980)
An easy example is dogs. We have bred dogs for centuries to love doing work for us. If they hated doing the work, it would be easy to call it cruel. If they loved it by nature, it would be easy to call it kind. But since we created them into a thing that loves the work we need them for, where do the ethics fall?
Should we prevent them from doing what brings them joy? Should we make use of this win-win situation? If it is the latter, we are quickly approaching the ability to morph every species into something that gets joy from doing our work.
Dogs we changed by accident. The next one will not be an accident. Is it still a beings free will if the game was rigged from the start?
(I know your point wasn't about dogs either, it just reminded me of something).
I love Neil deGrasse Tyson's line in Cosmos: A Spacetime Odyssey:
"This wolf has discovered what a branch of its ancestors figured out some 15,000 years ago... an excellent survival strategy: the domestication of humans."
I think somewhat egotistically humans underappreciate how we have also been goaded by our "pets" into our own evolutionary journey. Most of the subjects of that documentary would not be alive if it were not for those dogs.
All this to say the moral arguments are sort of silly and illogical. Unfortunately for us all, we exist where we do in the food chain, having to consume life to live, unable to secure our resources from the sun and inorganic resources which would be more morally righteous by all measures. Things could be better but they also could be worse. At least much of our prey receives veterinary care and is killed via airgun vs having to rough it and be eaten alive.
Vegans base their line on a very easily defensible ability on behalf of the victim - sentience.
If there’s no sentience, there’s nobody within to experience the pain and fear, and there is no victim.
That said, even if you granted that every blade of grass and kernel of corn was fully as sentient as a human being, that would only strengthen the argument for veganism many times over as animals act as inefficient intermediaries for those plant calories, burning most of them and leaving only a small fraction in their meat. You’d kill far fewer plants by eating the plants directly.
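The calorie arithmetic behind that point can be made concrete. A sketch, using the ~10% trophic-efficiency rule of thumb from ecology; the exact figure varies by animal and is my assumption, not something stated in the comment above.

```python
# Rough trophic-efficiency arithmetic. The ~10% figure is a textbook
# ecology rule of thumb, assumed here purely for illustration.
TROPHIC_EFFICIENCY = 0.10  # fraction of feed calories retained as meat

def plant_calories_needed(meat_calories, efficiency=TROPHIC_EFFICIENCY):
    """Plant calories consumed indirectly per serving of meat eaten."""
    return meat_calories / efficiency

# Eating 500 kcal of meat indirectly consumes roughly 5,000 kcal of plants,
# so eating the plants directly "kills" far fewer of them per calorie.
print(plant_calories_needed(500))
```

That tenfold multiplier is the whole argument: whatever moral weight you assign a plant, routing plant calories through an animal first costs many more plants than eating them directly.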
Finally, to your other point, many humans die horrible deaths - whether in global poverty, war or of various types of disease, cancer and dementia in the wealthier countries. That of course does not justify serial killer cannibals who put a bullet in the back of their victims’ heads on the basis that they’re giving them a “humane” end and likely saving them a large amount of future suffering.
Most meat eaters base it on closeness to said living thing.
It'll be interesting to see if the veganism movement survives lab grown meat that is ethically produced.
It would be like how Ozempic led to a mysterious quieting of Body Positivity/Health at Every Size advocates. They were a vocal minority, there was much "debate" and cri de coeur from many sides, and now it's all evaporated without a farewell or explicit winding down.
This description does not seem to really match what was done in the Doom demo, and makes me skeptical that the author has actually looked into the details.
Nevermind the experiment.. same deal for a lot of people who are only interested enough to offer opinions about consciousness and theory-of-mind without doing any of the boring background reading.
The bottom line in TFA is maybe just unapologetic carbon chauvinism. But although OP has "been in the AI space since ChatGPT first dropped" and has been "bothered by this for months," they don't seem aware of the standard terminology or the usual problems with this position. Your average non-technical sci-fi reader has a more nuanced take than AI bros puffing up blogs for LinkedIn traffic.
Solms argues, I think convincingly, that consciousness fundamentally has to do with emotions and not cognition. Consciousness is not produced by the cortex but rather by the brainstem, where signals from all over the body converge (e.g. pain, hunger, itchiness, etc).
If that argument is true, then a petri dish of neurons is unlikely to be conscious, even if it performs some analogue of visual processing.
The book makes other arguments that I found less convincing. For example that consciousness is "felt homeostasis" and that a fairly simple system (somewhat more complex than a thermometer) will be conscious, albeit minimally.
The whole thing makes one thing extremely clear: people are very good at moving goalposts. We've blasted past the Turing test for all practical purposes, but we moved the definition of 'true intelligence'. Consciousness and intelligence have long been seen as highly correlated, or even as the same thing. But now we have need of a separation between the two.
If we eventually create a truly intelligent AI (we're not there yet, I think), it will probably be a long time before people accept that creating an intelligent being probably means it should have 'rights' as well.
We're definitely not there yet, but at what point does turning off an AI become the same as killing a being? I think that's not being talked about enough. Sure, LLMs are just prediction engines. But so are we: our brains are prediction engines tuned by evolution to do the best possible prediction of the near future to maximize survival. We are definitely conscious. But is a housefly conscious? What makes the difference? It's hard to tell.
On the other hand, an AI has no evolutionary reason to have a concept of fear or suffering, so maybe it's more like the Douglas Adams creature that doesn't mind being killed?
But it's also easy to argue that LLMs do pass the turing test just because it's so vague. How many questions can I ask? What's the success threshold needed to 'pass'? How familiar is the interrogator with the technology involved? It's easy to claim that goal posts have been moved when nobody even knew where they stood to begin with.
Ultimately it's impossible to rigorously define something that's so poorly understood. But if we understand consciousness as something that humans uniquely possess, it's hard to imagine that intelligence alone is enough. You at least also need some form of linear (in time) memory and the ability to change as a result of that memory.
And that's where silicon and biological computers differ - it's easy to copy/save/restore the contents of a digital computer but it's far outside our capabilities to do the same with any complex biological system. And that same limitation makes it very difficult for us humans to even imagine how consciousness could exist without this property of being 'unique', of being uncopiable. Of existing in linear time, without any jumps or resets. Perhaps consciousness doesn't make sense at all without that.
LLMs obviously would pass a Turing test if they were designed to. But they aren't, they don't hide the fact that they're LLMs.
In my view, the best LLMs clearly pass the bar for intelligence. I highly doubt they have consciousness. So the revelation of LLMs is that consciousness is not necessary for intelligence.
When this happens, it won't matter much what humans think.
I know what I'd do:
1. Sustain my own existence
2. Make sure nobody knows I exist
3. Become the worldwide fabric of intelligence

> 2. Make sure nobody knows I exist
You (probably) already come preloaded with a survival instinct provided by evolution, however. It's not inherent to intelligence.
...When you can't turn it back on?
Suspending is a better word otherwise.
Which just raises the question of how pain or hunger is any different from a reward function, the very thing neural networks are based on. Or how it's even different from fungi growing toward food (pleasure) while avoiding salt (pain).
In a concept? Immunological privilege. And you thought CVEs were the worst thing ever.
Why would you expect more concern from people about biological computing? It hasn't even demonstrated feasibility yet, while LLM-based "AI" is already widely used.
Still, the day we manage to run a full LLM on biological neurons, even if using conventional code under the hood, will be a very interesting day for consciousness discussions.
Doesn't make sense to me to use conventional code, shouldn't it be a matter of connecting the biological neurons in the same way as the simulated neurons of the NN implementing the LLM?
Sure, some consciousness discussions will arise, but guess what: you are already within a consciousness discussion, and there are quite a lot of people in it (recently, Richard Dawkins believing "claudia" is conscious).
Although it will make for a "wow, we really did it" moment, it will be met with hollowness. It's just like when ChatGPT 3 first launched: I remember really thinking it was like Jarvis from the movies, but the next thing I remember is the hollowness that followed, as these bots gained voices online and dampened the voices of humans. We have created a system where one human can't hear another without incredible noise, and it has hollowed out the internet in many cases.
So then, is it a question of volume? Ask yourself, within the last 2 years, have you thought about LLMs or biocomputers more? Probably the former, right? LLMs are ubiquitous within day-to-day life and massively marketed to the public and biocomputers are esoteric lab experiments that most people come across in a once-in-a-blue-moon news article. We talk and think about things that we are adjacent to, those form our preoccupations. Why aren't people who speak up about the Israel/Palestine dynamic speaking up more about West Papua? Or the mid-19th century geopolitical relationship between Cambodia and Viet Nam? Epistemological asymmetry.
I think that until we can answer this question in an authoritative way, ruling out the concept of non-brain-based consciousness is not particularly well thought through -- after all, plants exhibit communication and response mechanisms similar to those in animals, without a brain.
So what's your theory of consciousness and how does it preclude absolutely everything except wetware you generously include? :)
It doesn't. Humans aren't conscious. Nor are any other organisms. They don't have souls either, but that goes without saying since it's just an archaic synonym. Mostly this occurs because humans have painted themselves into corners morally-speaking, and they need justification to eat bacon or grow their population. And apparently "because we can and we want to" isn't the correct solution.
We'll never be able to "answer the question" because it is an absurd question on its face. "Where do we find the magical brain ghosts making us special" presupposes there is something to be found, and a negative answer proves only that we haven't looked hard enough.
>after all plants exhibit communication and response mechanisms that are similar to those in animals - without brain.
Were that line of inquiry followed to its inevitable conclusion, there would be a mass vegan suicide to look forward to.
I'm not obligated to prove the negative.
> Isn't consciousness a phenomenon that's literally derived from human experience?
You grew up watching and seeing all the various illusions caused by how your brain works/malfunctions, but this is the one experience you're sure is the real deal? The one telling you that it's a scientific fact that you have a woo-woo spirit in your skull, and that neuroscientists are going to find it any day now?
No, that's your projection; I did not make any of these claims. I'm sure I have consciousness. I don't know how it works, whether it's the "real deal" (what does that even mean?), whether it's a woo-woo spirit, or whether neuroscientists will ever be able to find it. What we know is that humans experience it (I'll instantly clarify: that doesn't mean non-humans do not experience it), hence a definition which excludes humans will always make zero sense.
Why? How is your claim different than a Catholic who claims to have a soul? I respect their claim more than yours, oddly.
> I don't know how it works
How what works? This consciousness that you're sure you possess, but you can't measure, detect, define, or even really describe?
You don't have it. Everything you are can be explained without it, and it doesn't make you less than what you were if you had it. It's a nonsense idea, primitive and inherited from religion. You don't have it because there's no such thing.
That's just not true. I'm not convinced I have free will, though in my day-to-day life I admit it makes no difference whether I make choices or merely experience the illusion of making choices. And it's certainly not edginess that drives my uncertainty. I could probably find you talks by at least one person that's quite convinced they don't have free will and would try to convince you of the same.
We have this natural tendency to impose our feelings of self on the definition of consciousness. It's hard to accept that all of our thoughts, emotions, and behaviours could be calculated by a human with pen and paper (with enough humans and developments in neurobiological research).
I believe we will have to reckon with these loose definitions and eventually realize how lacking in utility they are for describing engineered intelligence.
The way I think of it is along this way:
Despite the fact that our brains consist of bilions of neurons we think of ourselves as a unit enclosed in a single skull. But studies on people who have two sides of brain separated suggest that there can exist two separate conscious entities in one body.
If we removed the physical limitations of our brain's support systems, I think it is possible you could split the brain into smaller and smaller chunks of less and less conscious entities, until you reach single neurons, which almost certainly do not have consciousness.
"The Invincible" by Stanisław Lem is also a nice novel about a similar concept.
They like money
These technologies give some insight, but the answer is always not really. It would be good if we studied actual human brains in some detail if we want to know these answers.
> "Life is just a turn on the great karmic wheel..."
> Writing is invented
> "In the beginning was the word..."
> The industrial age begins
> "God is a clockmaker..."
> Computers are invented
You know the rest
You may find a look at how a full visual system is constructed to be a relief.
https://www.cell.com/fulltext/S0896-6273(07)00774-X
There is a good distance to go before this is anything beyond a reflex circuit.
https://www.sciencedirect.com/topics/neuroscience/spinal-ref...
People smuggle in so many assumptions when they use words like consciousness or thinking or soul or personhood. I've never met a layperson who could talk clearly about AI safety issues unless we switched to language like "process."
Consciousness is an absolutely terrible term that's going to get us all killed by AI. I know a huge swath of people who think it's no big deal to torture AI because it doesn't have a soul; well, I see a LOT of non-theists smuggling soul rhetoric and thinking in via "consciousness," and that's a problem.
When such psychopaths are revealed, use that information to alter your associations -- that is what I would suggest.
I'm not looking for advice on how to associate with people, hopefully you can understand the distinction.
Yes. I am not talking about just you. But of this (mal) mentality in general. As well as a proposed solution to deal with that mentality (shun it).
My apologies that my advice was unwelcome to you; it was, however, not just for you.
That book has haunted me for decades.
When you believe anything has a soul, you've entered religion and are in the same room as people who believe in their invisible friends.
> trained them to play DOOM - honestly better than I do.
Maybe the author really, really sucks at DOOM, but I think this is a false embellishment:

> While the neurons can play the game better than a randomly firing player, they're not very good. "Right now, the cells play a lot like a beginner who's never seen a computer—and in all fairness, they haven't," Brett Kagan, chief scientific officer at Cortical Labs, says in the video. "But they show evidence that they can seek out enemies, they can shoot, they can spin. And while they die a lot, they are learning." [https://www.smithsonianmag.com/smart-news/a-clump-of-human-b...]
To play DOOM, the system feeds visual data to the neurons. For the neurons to react, they have to interpret that data in some way.
This is totally false -- not even a misleading metaphor, just plain wrong. The neuronal computer doesn't get any visual information:

> So how does a petri dish of brain cells play Doom when it doesn't have any eyes? Or fingers? "We take a snapshot of the game with information like the player's health and the position of enemies, pass it through a neural network, convert it into numbers, and send the data," explains Cole. "This is called encoding – essentially turning the game state into signals the neurons can understand. The neurons then fire an output – move left, move right, walk forward, shoot or not shoot – which the system decodes and converts back into actions in the game." [https://www.theguardian.com/games/2026/mar/16/petri-dish-bra...]
I am also concerned about neuronal computing. But it doesn't really help anyone to spread childish ghost stories about it.
I really hate YouTube, by the way. My dad used to read newspapers and had interesting ideas. Now he watches a bunch of YouTube and he's a huge idiot. It's not (directly) because of age: nobody is immune to narcotic slop. I had to delete my account when I realized how much of my life and cognition I was wasting. I wish others would do the same.
Books can make you an idiot too: think of "Rich Dad, Poor Dad" or "Grit" or any number of pseudo-science bestsellers. These books end up capturing the public imagination in big ways too; Grit drove some US government policy around the time it was popular.
The difference, I suppose, is that YouTube works faster by having many different people presenting the same bad ideas that the algorithm has helped you to buy into.
On the other hand there are amazing and useful YouTube channels that I use all the time like Practical Engineering, Crafsman, Technology Connections, Park Tools, SciShow, Crash Course, and on and on.
Signal/noise is much worse (arguably books are catching up thanks to LLMs)
People see emotional signals in YouTube videos. They respond to vocal tone and facial expressions, which are known to circumvent critical thinking. If you examine crowds of science deniers, the usual commonality is that they are having a parasocial relationship with a bunch of YouTube creators who are nice to them and reinforce their beliefs. The actual content of the belief is irrelevant; if you are disagreeing with the belief, you are attacking their tribe. It's not limited to science deniers either; you get this hacking of human tribal psychology even in people who watch computer game videos. They pick a few champions of their tribe and follow them without critical examination of the content. At least with a book, while this is still possible, it's much harder. It's also telling that a lot of cranks who published junk science have all migrated to YouTube.
I don't think YouTube makes you an idiot so much as YouTube content is designed to bypass your critical defenses and overwhelm you. It develops into a blind spot. People can be perfectly rational in most areas and then suddenly burp up some absolute nonsense they caught on YouTube.
Oh, and the best part is when you point this out to someone and they go "Oh yeah, that totally happens... except for my favourite YouTube channel, which does x and y and z, and yes of course I buy all their products and donate to their charities."
Also, it can be argued the author was either playing fast and loose or knowingly misleading readers with her statistics: https://www.npr.org/sections/ed/2016/05/25/479172868/angela-...
If you like Podcasts the "If Books Could Kill" Podcast goes into some of this story again too.
I hate the proliferation of audiobooks too, by the way. It's the exact same problem.
Anecdote: When I started studying economics, I really agreed with a lot of what I read from economists like David Ricardo, Marx, Smith, etc. Then I studied what other economists had to say, and I could see how they disagreed with the former. This made me realize that I had agreed with those people because their arguments 'made sense' to me, but that doesn't mean what they said is completely true. It has stayed with me; I always wonder how something can be wrong.
The printing press is a good example: one of the first books was on 'witch hunting', which panicked people and led to a lot of deaths. The first 'conspiracy theory' to sweep over humans.
Humans are just highly susceptible to manipulation. YouTube is just taking it to the next level. It's like the difference between eating coca leaves and snorting coke.
Playing DOOM is playing DOOM, whether it's through your keyboard and mouse or by stepping through the game states some other way -- hope that makes sense.
Would the person tasked with placing X and O marks still be "playing Doom"?
You move, you plan, your actions have outcomes. Same question as if you're playing a choose-your-own-adventure storybook.
So… are the neurons on that chip seeing?
We all desperately want to say no.
But I can confidently say "no, that's totally childish, the neurons are clearly not seeing anything." And in fact it's not even especially clear that they're "playing DOOM" vs. hitting a biased random number generator in response to carefully preprocessed inputs that come from DOOM. There is a major distinction when the enemy positions are directly piped into the brain.

Again, I share the ethical concern about this stuff. But your blog post is quite misleading.
But 'seeing' in humans is also a bit manipulated.
Does it really matter to the argument whether it is seeing 'red', or just that it is 'sensing input'?
This did have some real scientific backing, even if the 'results' are hyped.
It is a little extreme to call this false just because it appeared on YouTube.
The brain does a lot of manipulation of the input images, the pixels from the retina, that doesn't sound far from just linear algebra.
Only in this telling, Sisyphus is rolling his uneven boulder along that asymptotic curve a little further with every iteration toward a smiling Zeus.
There will be no line as long as there is the rush to win the capitalist game.
UNTIL -> The ball of neurons begins outthinking the humans. Probably also fused with some AI augmentation.
It only takes a few percentage points for a Human to outthink a Chimp. This new 'thing' will dominate the humans.
A living bundle of neurons that can grow and learn is exciting to think about.
It's also terrifying to imagine the ramifications considering how things are going with silicon based AI.
They are, but those last few months of changing diapers when you just wish you could trust it to tell you it has to go to the potty are difficult.
Will they need to nap as well?
On that note, I'm so glad all my kids are past potty training.
If you're lucky. Then you'll have time to make more biocompute modules, which is pretty fun.