Reverse-Engineering the Wetware: Spiking Networks and the End of Matrix Math
29 points by pgte | 2 days ago | 6 comments | metaduck.com
bob1029
2 hours ago
The biggest problem with recurrent spiking neural networks is searching for them.

Neuromorphic chips won't help because we don't even know what topology makes sense. Searching for topologies is unbelievably slow. The only thing you can do is run a simulation on an actual problem and measure the performance each time. These simulations turn into tar pits as the power law of spiking activity kicks in. Biology really seems to have the only viable solution to this one. I don't think we can emulate it in any practical way. Chasing STDP and membrane thresholds as some kind of schematic for AI is absolutely the wrong path.

We should be leaning into what our machines do better than biology. Not what they do worse. My CPU doesn't have to leak charge or simulate any delay if I don't want it to. I can losslessly copy and process information at rates that far exceed biological plausibility.
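The search loop the parent describes can be sketched roughly like this (a hypothetical illustration; the LIF model, parameter values, and the spike-count scoring are all assumptions, not from the comment):

```python
import numpy as np

def simulate_lif(W, steps=500, threshold=1.0, leak=0.95, input_rate=0.2, seed=0):
    """Simulate a leaky integrate-and-fire network with recurrent weights W.
    Returns the total spike count as a crude activity score."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    v = np.zeros(n)          # membrane potentials
    spikes = np.zeros(n)     # spikes from the previous step
    total = 0
    for _ in range(steps):
        v = leak * v + W @ spikes + rng.random(n) * input_rate
        spikes = (v >= threshold).astype(float)
        v[spikes == 1] = 0.0  # reset neurons that fired
        total += int(spikes.sum())
    return total

def random_topology_search(n=50, candidates=20, seed=0):
    """Score each candidate recurrent topology by running a full
    simulation: every candidate costs an entire rollout, which is
    why topology search turns into a tar pit."""
    rng = np.random.default_rng(seed)
    best_score, best_W = -1, None
    for _ in range(candidates):
        mask = rng.random((n, n)) < 0.1           # sparse random connectivity
        W = mask * rng.normal(0, 0.5, (n, n))
        score = simulate_lif(W)                   # expensive inner loop
        if score > best_score:
            best_score, best_W = score, W
    return best_score, best_W
```

The point of the sketch is the structure, not the numbers: the only feedback signal about a topology comes from running the whole simulation, so the search has no gradient to follow.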

reply
RaftPeople
1 day ago
From article:

> Cause and Effect: If Neuron A fires just a few milliseconds before Neuron B, the brain assumes A caused B. The synapse between them gets stronger.

A recent study from Stanford found that it's more complex than this rule: some synapses followed it, some did the opposite, etc.
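The quoted cause-and-effect rule is usually formalized as a pair-based STDP window. A minimal sketch (the exponential form is standard; the specific amplitude and time-constant values are illustrative, not from the article):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).
    Pre fires before post (dt > 0) -> potentiation; post before
    pre -> depression, both decaying exponentially with |dt|.
    An anti-Hebbian synapse would flip the sign of this update."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

Synapses that "did the opposite," as the parent describes, correspond to flipping the sign of the window rather than obeying it.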

reply
xico
7 minutes ago
Non-Hebbian and Anti-Hebbian potentiation have been well studied for decades. Anti-Hebbian notably for inhibitory connections.
reply
kevlened
3 hours ago
> A recent study from Stanford

Source?

reply
mike_hearn
1 day ago
I guess the obvious question is whether something that mimics biology closer is actually useful. Computers are useful exactly because they aren't the same as us. LLMs are useful because they aren't the same as us. The goal is not to be as close to biology as possible, it's to be useful.
reply
miki123211
1 hour ago
If you could get biology, but:

* taking less than 18 years to produce a new chip

* able to task-switch instantly (being a doctor one minute and being a lawyer the next, scaling up/down instantly based on current workload)

* having millions of identical clones that people intuitively understand how to work with

* with no need for toilet breaks, sleep, family emergencies, holidays, weekends and all that

It would be pretty damn useful.

reply
9wzYQbTYsAIc
3 hours ago
Neural networks have turned out to be pretty useful. The goal of distributed parallel processing wasn't to recreate the brain but to recreate its capabilities.
reply
7777777phil
2 days ago
Neuromorphic chips have been 5 years away for 15 years now. Nevertheless, the Schultz dopamine/TD-error convergence is one of the coolest results in neuroscience.
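The result being referenced: midbrain dopamine firing tracks the temporal-difference reward-prediction error, delta = r + gamma * V(s') - V(s). A minimal sketch (the two-state setup, learning rate, and state names are illustrative assumptions):

```python
def td_error(reward, v_next, v_now, gamma=0.9):
    """Temporal-difference error: delta = r + gamma*V(s') - V(s).
    Schultz's finding: dopamine bursts for unexpected reward (delta > 0)
    and dips when a predicted reward is omitted (delta < 0)."""
    return reward + gamma * v_next - v_now

# Cue -> reward, TD(0) learning: the error at reward time shrinks
# toward zero as the cue's value comes to predict the reward.
V = {"cue": 0.0}
alpha = 0.5
errors = []
for _ in range(20):
    delta = td_error(1.0, 0.0, V["cue"], gamma=1.0)  # reward of 1 follows cue
    V["cue"] += alpha * delta
    errors.append(delta)
# errors[0] is 1.0 (fully unexpected); errors[-1] is near zero (predicted)
```

The convergence is that the recorded dopamine signal behaves like `delta` across this learning curve, which is why it reads as the brain running something like TD learning.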
reply
geremiiah
2 hours ago
Interesting topic, but why am I reading an LLM generated summary?
reply
voidUpdate
2 hours ago
> "If you’ve been following my recent posts on Metaduck, you know I spend my days building infrastructure for AI agents and wrangling LLMs into production"

Because LLM users use LLMs for everything

reply
IshKebab
1 hour ago
Interesting, but SpiNNaker (ugh) has been around for 6 years now, and presumably they had smaller options than that before. Has it actually produced anything useful?

It seems very premature to say "let's build spiking NN hardware!" (or a million core cluster) before we even know how to write the software.

Spiking NNs need their AlexNet before it makes any sense to build dedicated hardware, IMO.

reply