C. Elegans: The worm that no computer scientist can crack - https://news.ycombinator.com/item?id=43490290 - March 2025 (130 comments)
On one extreme, we cannot even solve the underlying physics equations for single atoms beyond hydrogen, let alone molecules, let alone complex proteins, etc. etc. all the way up to cells and neuron clusters. So that level of "good" seems enormously far off.
On the other hand, there are lots of useful approximations to be made.
If it looks like a duck and quacks like a duck, is it a duck?
If it squidges like a nematode and squirms like a nematode, is it a [simulation of a] nematode?
(if it talks like a human and makes up answers like a human, is it a human? ;)
ISTM that the answer is "in a way yes, in a way no".
Yes, in that we reasonably conclude something is a duck if it seems like a duck.
No, in that seeming like a duck is not a cause of its being a duck (rather, it's the other way round).
When we want to figure out what something is, we reason from effect to cause. We know this thing is a duck because it waddles, quacks, lays eggs, etc etc. We figure out everything in reality this way. We know what a thing is by means of its behavior.
But ontologically -- ie outside our minds -- the opposite is happening from how we reason. Something waddles, quacks & lays eggs because it is a duck. Our reason goes from effect (the duck's behavior) to cause (the duck), but reality goes in the other direction.
Our reasoning (unlike reality) can be mistaken. We might be mistaking the model of a duck or a robot-duck for a real duck. But it doesn't follow from this that a model duck or a robot-duck is a duck. It just means a different cause is producing [some of] the same effects. This is true no matter how realistic the robot-duck is.
So we may (may!) be able to theoretically simulate a nematode, though the difficulty level must be astronomical, but that doesn't mean we've thereby created a nematode. This seems to be the case for attempting to simulate anything.
At least this is my understanding, I could be mistaken somewhere.
I think this is also one possible answer to the famous 'zombie' question.
> Something waddles, quacks & lays eggs because it is a duck.
Or: something does those things, period. We notice several such somethings doing similar things, and come up with an umbrella term for them, for our own convenience: "duck." I'm not sure how far different that is from "is a duck", but it feels like a nonzero amount.
I guess where I'm going is: our labels for things are different from the "is-ness" of those things. Really, duck A and duck B are distinct from each other in many ways, and to call them by one name is in itself a coarse approximation.
So if "duckness" is a label that is purely derived from our observations, and separate from the true nature of the thing that waddles and quacks, then does some other thing (the robot duck) which also produces the same observations, also win the label?
Luckily, I'm a solipsist, so I don't have to worry about other things actually existing. Phew.
It's amazing how many philosophical debates end up at the question of universals that you've just alluded to.
My own position, very briefly, is that when we predicate 'duck' (as in "this is a duck") of a given thing, we are describing reality, not just conveniently labeling some part of it in our own minds. If 'duck' is merely a label that we apply to something, then anything we predicate of 'duck' is merely something we predicate of our own mental categories. But this isn't so: the sentence 'ducks quack' refers to something real, not just our thoughts. But at the same time, the sentence is not referring to Duck A or Duck B, but to ducks in general. From this, it seems to follow that some general 'ducky-ness' must have a kind of existence (otherwise how could we predicate of it?), and that this 'ducky-ness' must be shared by everything that is a duck (otherwise, by what is it a duck?).
In the opposite scenario that you've described, all predication would be limited to our thoughts. Someone could say "ducks quack", and someone else could say "ducks never quack", and both would be right, because both would merely be describing their own thoughts. Obviously, all reason, science, possibility of communication, etc, is finished at this point :-)
Of course our labels can be wrong. Someone could mistake a swan for a duck. Also, there is infinite variation from duck to duck, so the 'ducky-ness' of each duck in no way tells us everything about that duck. Duck A and B are unique individuals. Also, the 'ducky-ness' only ever exists in a given duck; it's not like it has some independent ethereal existence.
No, if it doesn't do everything else a duck does. You can have a robot dog, but you won't need to take it to the vet, feed it, sweep up its hair, let it outside to go potty, put up a warning sign for the mailman, or take it for a walk. You can have a simulated dog do all those things, but then how accurate will the biological functions be in trying to model its physiology over time?
Will it give us insights into real dog psychology so we can better interact with our pets? Or does that need to happen with real dogs and real human researchers? Wildlife biologists aren't going to refer to simulated ducks to research their behavior in more depth. They'll go out and observe them, or bring them into the lab.
Not to say I'm fully convinced, but I can see the appeal.
I'm pretty sure behavior is simulated all the time in everything from migration to predator-prey dynamics to population dynamics, and so on. If we don't use simulations to understand all the little nuances and idiosyncrasies of behavior right now, that's probably just because at present that's extremely difficult to model. But I suspect they absolutely would be used if such things were available. Of course, they would be treated as complementary to other forms of data, not disregarded outright.
What I'm trying to say is: as long as the simulation fulfils the objectives set out, it's useful, even if it is very far from the real thing.
Then the next question is: what are the objectives here?
Agreed, it depends on what data you want out of the simulation. If you want to see how your dog will react to a duck, maybe it's good enough. If on the other hand you want to see how a duck will react to getting poked, well... your raspberry pi is worse than useless.
Is there truly so little that makes up the soul of a duck? No mention of laying eggs? Caring for its young? Viciously chasing children across the lawn of the local park? (I know that's usually the purview of geese; however, I have seen ducks launch the occasional offensive against too-curious little ones.)
If it looks like a duck and quacks like a duck but isn't made of duck meat, you probably don't want to eat it.
If it looks like a duck and quacks like a duck but doesn't have feathers, you probably don't stuff its skin covering in your pillows.
> (if it talks like a human and makes up answers like a human, is it a human? ;)
GP, in their parenthetical, was insinuating that if it (which I take to mean an LLM) talks like a human and makes up answers, as an LLM is wont to do, like a human, is it human?
While I don't subscribe to the idea that humans have a soul, or some other dualist take, I do think that there is far more to a human than just our cognitive properties. So to convince me that something is human takes more than just listening to it talk to me or make things up during the discussion.
So, too, with a duck. Sure, if all I have to go on is hearing a quack then I would say yeah that's most probably a duck.
Just like if you told me a barn was red when we saw just one side, I'd say it's probably red.
I know, I know, I am fun at parties.
For example you can simulate traffic without simulating the inner workings of every car's engine, or even understanding how the engine works.
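The traffic point can be made concrete with a toy sketch in the spirit of the Nagel-Schreckenberg cellular-automaton model (a minimal, hypothetical illustration: each car is just a position and a speed on a ring road, and the engine, fuel, and driver physiology are abstracted away entirely):

```python
import random

ROAD_LEN = 100   # cells on a circular road
V_MAX = 5        # maximum speed (cells per step)
P_SLOW = 0.1     # probability of random slowdown ("driver hesitation")

def step(cars):
    """One parallel update of a Nagel-Schreckenberg-style traffic model.
    `cars` maps position -> speed; no engine internals are modeled."""
    positions = sorted(cars)
    new_cars = {}
    for i, pos in enumerate(positions):
        v = min(cars[pos] + 1, V_MAX)                                # accelerate
        gap = (positions[(i + 1) % len(positions)] - pos - 1) % ROAD_LEN
        v = min(v, gap)                                              # don't hit the car ahead
        if v > 0 and random.random() < P_SLOW:
            v -= 1                                                   # random hesitation
        new_cars[(pos + v) % ROAD_LEN] = v
    return new_cars

cars = {p: 0 for p in range(0, 40, 2)}  # 20 cars bunched together at a standstill
for _ in range(100):
    cars = step(cars)
# Jams form and dissolve realistically; the car count stays at 20 throughout.
```

Models at this level reproduce emergent phenomena like phantom traffic jams, which is the sense in which a simulation can be useful without descending to the lowest physical layer.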
Or maybe by "working understanding" you mean "we have a black box that does the thing we wanted."
https://en.wikipedia.org/wiki/Philosophical_zombie
We really have no idea whether consciousness is something that can arise from computation, or whether it is somehow dependent on the physical substrate of the universe. Maybe we can create a virtual brain that, from the outside, is indistinguishable from a physical brain, and which will argue vociferously that it is a real person, and yet experiences no more conscious qualia than an equation written on a piece of paper.
I don't understand this argument. How is the computer running the computation not part of the "physical substrate of the universe"? _Everything_ is part of the universe almost by definition.
I think you're kinda right, but the tricky thing here is that the computation itself is physical too. The abstraction may just be whatever it is that the computation has in common with the thing it's modeling in brains, which could mean it, too, has consciousness, or is 'doing' consciousness in some sense.
If the physical process is "the important part" then that can be modeled in an abstract way, too.
https://en.wikipedia.org/wiki/Billiard-ball_computer
What I’m saying is that we don’t know that consciousness is just an algorithm. If the physical implementation matters, then modelling it might be useful or interesting, but it wouldn’t actually create the real thing. Maybe consciousness arises from a specific pattern of movement of electrons, but not from any pattern of billiard ball movements.
I caught that, I think(?). I would flag that the upshot or implication can be (1) something outside of physics altogether which I think, while romantic, is at the extreme end of extreme in terms of tenuous and inviting bad metaphysics, bad notions of emergentism etc, but there's also (2) something about the difference between how something is "embodied" which, as you note, still is about billiard ball style simulation at the end of the day, but can raise interesting questions about what kinds of simulations work.
I also do wonder if there's some kind of physicalist essentialism working its way in there. If there's something about electrons that's importantly different, that something (hopefully) is a physical property and as such able to be modeled. If consciousness is intrinsically and preferentially tied to a certain kind of matter, e.g. atoms, or brain-stuff, that starts to sound a little woo-ey.
The answer to that would appear to be, no.
Given that brains are fundamentally governed by the same physical laws as everything else, there shouldn't be anything about them which cannot be replicated in some way by something sufficiently capable of emulation of their processes.
That's not to say it's simple. Just that brains obey the laws of physics, and as long as that's true, they should be able to be replicable.
Unless your contention is that brains are somehow able to operate outside the constraints of the laws of physics, in which case we're going to have a fundamental difference of opinion as to the nature of the universe and whether things with brains are particularly special.
We are unable to get two biologically identical (or at least extremely close) brains of identical twins to develop in the same way, let alone two distinct brains, or a simulated version of a brain.
The claim is potentially equivalent to a claim that, since the universe is theoretically computable, we'll eventually be able to simulate it.
Still, to me the fact that it could be theoretically achieved with computers is not very useful if it can't be achieved practically, and that certainly makes "biological" computation different from synthetic computation.
If I take every atom/molecule from one brain (assume a snapshot in time) and replicate it one by one at a different location, and replicate the external I/O (stimulus, glucose...), what evidence do we have that this won't work? Likely not much.
Now instead of replicating ALL the atoms/molecules exactly, I replace one of the higher level entities like a single neuron with a computational equivalent - a tiny computer of sorts that perfectly replaces a neuron within the error bars of the biological neuron. Will this not work? I mean, will it not behave in the same exact way as the original biological brain with consciousness? (We have some evidence that we can replace certain circuits in the brain with man-made equivalents and it continues to work.)
You know where I'm going with this... FindAll, ReplaceAll. Why would it be any different?
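The FindAll/ReplaceAll substitution argument can be sketched in code (a toy, hypothetical model: neurons reduced to weighted threshold units, which is of course exactly the simplification under debate):

```python
class BioNeuron:
    """Stand-in for the biological original: weighted inputs in, spike out."""
    def __init__(self, weights, threshold):
        self.weights, self.threshold = weights, threshold

    def fire(self, inputs):
        return sum(w * x for w, x in zip(self.weights, inputs)) >= self.threshold

class SiliconNeuron(BioNeuron):
    """Drop-in replacement: different substrate, identical input/output behavior
    (within the 'error bars' of the biological neuron)."""
    pass

def run(network, inputs):
    return [n.fire(inputs) for n in network]

original = [BioNeuron([1.0, -0.5], 0.2), BioNeuron([0.3, 0.9], 0.5)]
patched = list(original)
patched[0] = SiliconNeuron([1.0, -0.5], 0.2)   # FindAll, ReplaceAll, one at a time

stimulus = [0.8, 0.4]
assert run(original, stimulus) == run(patched, stimulus)  # behavior unchanged
```

The whole philosophical dispute is over whether input/output equivalence at the neuron level is all that matters, which is precisely what a sketch like this assumes rather than proves.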
---
If I had to argue that it wouldn't be the same, here's a quick braindump off the top of my head:
- some entities like neurons literally cannot be replicated without the goo. physics limitation? but the existence of the goo is a proof of existence. but still, maybe the goo has properties that cannot be replicated with other substances
- our model of the physical world has serious limitations. on the order of pre-knowing-speed-of-light-limitation. maybe putting the building blocks together does not create the full thing. maybe building blocks + magic is needed to create the whole.
- other fun limitation of our physical model
> What you really mean is is there any meaningful difference in what can be processed by biological computing and non-biological computing.
> The answer to that would appear to be, no.
So, specifically on "appear to be, no": I would rather ask, what evidence is there that we cannot do a brain on non-gooey stuff?
Because we haven't practically done it despite decades of trying? I don't think this should stop us from trying, and it's pretty obvious it won't. But there is no proof either way; potentially the problem is so complex that we never get there in practice.
(Also note that proving a general negative statement is pretty tricky and usually avoided — we usually look for counter-examples, evaluate a full finite/countable set of scenarios, etc)
Basically, even if it's a simple computation engine, can we put that simulation through the stimulus our brain experiences (not easily), and will the lack of that stimulus turn it into an entirely differently behaving system?
(It's been a few months since the last time I rambled aimlessly through my house muttering consciousness is a parasite under my breath.)
First, there's the BLIT stories by David Langford. Several of these are online.
https://www.infinityplus.co.uk/stories/blit.htm
https://www.nature.com/articles/44964 (did you know that Nature did science fiction short stories? https://www.nature.com/nature/articles?type=futures )
https://www.lightspeedmagazine.com/fiction/different-kinds-o...
Then, you go to Accelerando
https://www.antipope.org/charlie/blog-static/fiction/acceler...
> Luckily, infowar turns out to be more survivable than nuclear war – especially once it is discovered that a simple anti-aliasing filter stops nine out of ten neural-wetware-crashing Langford fractals from causing anything worse than a mild headache.
This is followed by its sequel-ish Glasshouse https://www.goodreads.com/book/show/17866.Glasshouse
It's not technically a sequel, but one can see the universe of Glasshouse following from the ending of Accelerando.
A quick diversion to Vernor Vinge with The Peace War and Marooned in Realtime https://www.goodreads.com/series/57273-across-realtime (there's a short story in there titled The Ungoverned)
Implied Spaces by Walter Jon Williams https://www.goodreads.com/book/show/2059573.Implied_Spaces takes another approach to the unexplored events leading into Glasshouse and a possible path of escalation. The reference passage in this book is:
> “I and my confederates,” Aristide said, “did our best to prevent that degree of autonomy among artificial intelligences. We made the decision to turn away from the Vingean Singularity before most people even knew what it was. But—” He made a gesture with his hands as if dropping a ball. “—I claim no more than the average share of wisdom. We could have made mistakes.”
This then ends with... {spoilers}.
Ah, so this is where 45% of my salary goes.
https://www.humanbrainproject.eu/en/follow-hbp/news/2023/09/...
https://www.abc.net.au/news/science/2025-03-05/cortical-labs...
In short - we are a long way from being able to simulate a nervous system. Our knowledge of neuronal biochemistry is not there yet.
So many philosophical, ethical and legal questions. And unsettling possibilities.
We will probably have to deal with this someday.
This is quite an extraordinary claim with no extraordinary evidence.
As said elsewhere in this thread we can at this moment not even simulate single atoms.
I see no reason to believe at all that we will ever be able to simulate a human brain.
Unless you want my simulation here:
if is_hungry:
    eat(find_food())
else:
    practice_fingersnowboarding()
And that's with us having a pretty solid understanding of how fluid dynamics works. We have an extremely poor understanding of how a brain works, doubly so for something of the complexity of the human brain. We are fundamentally unable to study it during operation, because we don't have non-invasive, high-resolution access to its internals. We are basically butchers sticking electrodes into living tissue.
The article itself proposes that we may - barely - be able to study the workings of the brain of an extremely simple organism.
A rocketry analogy would be Archimedes dreaming about people traveling to the stars.
> A rocketry analogy would be Archimedes dreaming about people traveling to the stars.
What's wrong with that? The claim was "some day", not in 5 years. It could take a few centuries or longer.
That alone wouldn't be enough to fully clone a person's consciousness. There is information stored in the actively firing synapses. For example, short-term memory seems to be stored by sending signals in a loop, and there might be more such mechanisms. Those signals are obviously lost once the brain is dead. Another issue is hormones. The same brain regulated by a different (simulated) body might behave completely differently. And then there are probably a lot of unknown unknowns. Despite decades of research there are still a lot of open questions, and more questions will become apparent once we actually start simulating complex brains.
But that doesn't mean that those early methods wouldn't be useful, both for science and for more questionable efforts. For example, accessing the long-term memory of the recently deceased might be comparatively viable, given enough funding.
1: https://edition.cnn.com/2024/05/15/world/human-brain-map-har...
I suspect this wasn't your intention, but I feel this heavily undersells how much work is involved in "scaling up" to simulating a human brain. I wouldn't even say that it is inevitable, because there are so many unsolved questions and unknown-unknowns.
There are decades of research behind this and we are still an unknown and large number of years away from doing it. Fusion power is more tractable than this.
It's not even clear whether our current approach to computation will ever be able to do this. We might need completely novel types of computers, maybe organic-machine hybrids.
I'm not even touching on the very real and serious ethical questions of simulating human level consciousnesses.
Also note how my "just" only applies to scaling from mapping a grain-of-rice-sized piece of human brain to mapping a full human brain. Going from there to simulating it would be another big leap, never mind the challenges to simulate it in a way that actually produces results comparable to the actual brain of that person.
To wit, no one expects human brains to be capable of arbitrarily complex computation.
This does not follow from any theorem of Gödel.
I'm actually happy it's a long way off. Feels like the richer humans would live with cheat codes, and the others wouldn't.
Ego death is a brutal suboptimum. It's tragic that any entity brought into and knowing of its own existence has to die and be forever annihilated.
If humanity has only one goal, and that goal was to achieve immortality for all humans henceforth [1], that would be a noble cause for our species.
I hate that those I care about will cease to exist.
Fuck death.
[1] Maybe we get lucky and they master physics, reverse the lightcone, and they pull each of us out of the ether of time with perfect memories to join them. Sign me up. I consent.
Just like we can't really predict weather (as another complex system) too far ahead, we can't really predict how something this significant changes brain development — IMHO at least.
I do share your view that a positive direction is not a given, but what evidence do we have that it would be worse than right now? Maybe we should just be cautious of the risks.
Heck, so much money has gone into preventing hair loss, and there does not seem to be a simple answer to that either ;)
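The weather analogy is the standard one for sensitive dependence on initial conditions; even a one-line chaotic system shows why long-range prediction fails (this is the textbook logistic map, nothing brain-specific):

```python
def logistic(x, r=4.0, steps=50):
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic(0.400000000)
b = logistic(0.400000001)  # initial conditions differ in the 9th decimal place
# After 50 iterations the tiny initial difference has been amplified until the
# two trajectories are effectively unrelated, though each stays within [0, 1].
```

The same structure (deterministic rules, rigid bounds, yet rapidly compounding error) is why simulating a developing brain forward in time is a much harder ask than merely knowing its physical laws.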
I kindly disagree :-). I think I'd rather not be immortal but live in a world with nature and animals than be immortal in a jar. Right now we don't manage to be immortal, and we are extinguishing the animals... the worst of both worlds?
Nearly all mystics (and many if not most neuroscientists) also come to the conclusion that our world of the senses is an illusion. This doesn't mean that the illusion doesn't have rigid laws, but it does challenge the materialistic assumption that the soul, or consciousness, becomes nothing at the time of physical/biological death.
If that is too fuzzy and mystical, I'd also suggest reflecting more deeply on the concept of technologically facilitated immortality of physical life on earth. For me, it is clearly a dead end. It can only lead to a complete annihilation of every human value.
could you please elaborate on this? why is it clearly a dead end and why would human values clearly end? any resources you can point to would be great. thank you.
If there's consciousness after death (in whatever form), then it is clearly not the end, just a part of a much longer - possibly infinite - journey. Even better!
In either case: it's better to stop worrying about what may come after and enjoy the journey to the fullest!
Thought you might find it interesting.
[1] https://en.wikipedia.org/wiki/%C4%80tman_(Hinduism) [2] https://en.wikipedia.org/wiki/Sa%E1%B9%83s%C4%81ra [3] https://en.wikipedia.org/wiki/Dharma [4] https://en.wikipedia.org/wiki/Karma_in_Hinduism [5] https://en.wikipedia.org/wiki/Moksha [6] https://en.wikipedia.org/wiki/Brahman
> If death is oblivion, then you won't be able to feel sad that you don't exist since there won't be a consciousness to interpret these feelings.
I've also taken this to mean that any pleasure and pain in this life are meaningless on a geological time span. Apart from what comfort we need to keep our mental health while still alive [1], we don't need to optimize for self-gratification, pleasure, or wealth accumulation. It's my excuse for grinding so hard at the few things that bring me meaning.
We're all traversing these gradients differently. The sum of what we learn and build will feed into the next generations. I hope the future is brighter for them than the wildest of what we could dream of today.
[1] Time with friends and family; enough resources to not worry about food, shelter, or bills; a fun distraction here or there
I need to croak so that there's room in the world for my great-grandchildren.
>If humanity has only one goal,
Humanity pursues, best that I can tell, extinction instead of immortality. It has this really weird premature transcendence hangup.
Except that's not the goal and never will be the goal. If some immortality technology is ever created, it won't be for all. The Elon Musks, Sam Altmans, and Donald Trumps of the world will live forever. You will die.
> I hate that those I care about will cease to exist.
> Fuck death.
There's a much simpler and more achievable solution to that problem: change your belief system.
Are you saying the gp needs to rethink their ideas on death? Wouldn't that be like accepting defeat because the problem is hard?