In particular, the connection between the typical weighted-sum-plus-activation-function unit and a simplistic spiking model, where the output is read off simply as the spiking rate, was illuminating (section 3).
[1]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9313413/ "Spiking Neural Networks and Their Applications: A Review"
Haven't checked if there's enough there to build it.
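To make the rate connection concrete, here's a toy sketch (parameters made up by me, not taken from [1]): the steady firing rate of a leaky integrate-and-fire neuron under a constant input (the weighted sum) is zero below a threshold and grows roughly linearly above it, which is more or less the shape of a ReLU-style activation.

```python
# Toy illustration (my own assumed constants, not from the review):
# analytic firing rate of a leaky integrate-and-fire (LIF) neuron as a
# function of a constant input current, i.e. the weighted sum of inputs.
import math

TAU = 0.02    # membrane time constant in seconds (assumed toy value)
THETA = 1.0   # spike threshold (assumed toy value)

def lif_rate(weighted_sum: float) -> float:
    """Steady firing rate (Hz) for a constant input current."""
    if weighted_sum <= THETA:
        return 0.0  # below threshold the neuron never fires
    # Time to charge from reset (0) up to threshold, repeated forever.
    interval = TAU * math.log(weighted_sum / (weighted_sum - THETA))
    return 1.0 / interval

for drive in (0.5, 1.01, 1.5, 2.0, 4.0, 8.0):
    print(f"weighted sum {drive:4.2f} -> rate {lif_rate(drive):6.1f} Hz")
```

The curve sits at zero, turns on at the threshold, and approaches rate ≈ input / (TAU * THETA) for large inputs, so "activation(weighted sum)" is a fair stand-in for the spike rate.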
spiking neural networks are artificial neural networks that actually simulate the dynamics of spiking neurons. rather than sums, ramps, and squashing, they simulate actual spike trains and the integration of charge that occurs in the dendrites.
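a minimal sketch of the kind of unit being simulated (toy parameters and plain euler integration, my own invention rather than any particular snn framework): the membrane potential integrates incoming current, leaks back toward rest, and fires a spike plus reset when it crosses threshold.

```python
# toy leaky integrate-and-fire unit (assumed constants, euler steps)
import random

DT = 0.001        # time step (s)
TAU = 0.02        # membrane time constant (s)
THRESHOLD = 1.0   # potential at which a spike is emitted
V_RESET = 0.0     # potential right after a spike

v = 0.0
spikes = []
for step in range(1000):                            # one simulated second
    drive = 1.6 if random.random() < 0.9 else 0.0   # noisy input current
    v += DT * (drive - v) / TAU                     # leak + integrate
    if v >= THRESHOLD:
        spikes.append(step * DT)                    # emit a spike...
        v = V_RESET                                 # ...and reset
print(f"{len(spikes)} spikes in 1 simulated second")
```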
neuromorphic hardware can range from specialized asics for doing these simulations efficiently to more experimental hybrid analog-digital systems that use analog elements to do more of the computation.
it's all very cool stuff, but i tend to think of snns as similar to the wings on the Avion III, whereas simplified unit functions are more like a modern jet wing.
but who knows, maybe the neuromorphic route will open the door to far more efficient computations. personally, i'm very excited about potential wins that could come from novel computational substrates!
I wouldn’t call PageRank intelligent, even though I can give it a text prompt and get relevant information back.
In my view, the only difference between that and an LLM is the natural language interface.
I’m no expert on intelligence, but I’d expect being able to introspect and continually learn to be part of it.
One way to help you notice this is to try and estimate how many billions of people you've defined out of "being intelligent" with your latest goalpost movement.
Be honest, how many people do you think "introspect and continually learn" on a daily basis?
That's wild if you think that isn't quite literally one of the defining features of human consciousness (and many would say other animals as well).
If you think people thinking differently than you means they don't still indeed...think...then I don't know what to tell you.
I guess I just think more highly of my fellow humans.
As an article of faith, yes. But I don't see what this adds to the discussion.
I disagree with this statement, which is the crux of their argument against my definition of intelligence.
I don’t think any credible survey of the intelligence, or lack thereof, of a large enough population exists (due to there not being a common binary measure of intelligence), so it’s an issue you kind of need to take on faith.
Thanks for playing.
I think… I’m done talking with you now.
That's what you're doing when you keep moving the goalposts of "real intelligence" further and further right on the bell curve. You're denying the intelligence and consciousness of billions of people (and counting) just so you don't have to admit there's nothing magical about intelligence.
Sometime in the next 10 years, you'll have to start thinking of yourself as a soulless automaton to keep up the delusion. Good luck with that.
You’re the one here denying. I think the vast majority (if not all) of humans are intelligent under my definition. You do not.
I don’t think LLMs or other statistical models are.
So what's your plan when the fraction keeps shrinking? When you're no longer in it?
This is simple interpolation. It is plainly obvious that at some point soon, you will be faced with the fact that there's nothing magical about intelligence. When that happens, will you concede that, or start thinking of yourself as a soulless automaton?
If you can't project that far forward, I question whether you meet any meaningful definition of "intelligent" right now.
What fraction? How would it shrink?
I don’t think that humans, as a species, are becoming non-intelligent en masse. In fact, I think that we are, by default, intelligent.
That’s where our opinions seem to irreconcilably differ.
> you will be faced with the fact that there's nothing magical about intelligence.
I don’t think there’s anything “magical” about anything. I just don’t think that a statistical model can achieve intelligence as we think of it with regard to humans.
You may see the recent trend of text generation models as new intelligent machines, but I’ve been studying and working with these kinds of statistical models for about a decade (since 2016) and have seen these opinions spouted only to quiet down once the flat part of the logarithmic improvement curve is reached. I don’t see any reason why these LLMs wouldn’t follow the same pattern.
> This is simple interpolation
Interpolation of what? You’re assuming that the goalpost will always shift, but in reality we just don’t have a generally agreed-upon definition at all. Either way, any definition of intelligence that rules out the majority of humanity is an incorrect definition right off the bat, as pretty much all humans are intelligent.
There exists some accurate definition of intelligence such that almost any human satisfies it, but statistical models do not. I’m sure if I studied the philosophy of intelligence I could put one into words, but I’m ill-equipped to do so.
> If you can't project that far forward, I question whether you meet any meaningful definition of "intelligent" right now.
Are you just trying to be mean, or do you actually believe that people who disagree with you are not intelligent?
We’ll see in 5 years that this intelligence hype will fade just like the last 2 AI booms.
This isn’t at all to say that we will never make a machine with intelligence that rivals humans, just that I don’t think the statistical model route will get us there… and it hasn’t.
Here's a short ChatGPT convo about how personification bias can cause people to believe that statistical models are intelligent. I think it's what's fooling so many people.
https://chatgpt.com/share/670549a3-2f9c-8001-81c1-d950c626ad...
At the very least, every single person who plays sports or video games, tries finding a way around traffic, a faster route home, a way to do less work or take a longer break, or a way to save some extra money on food.
Literally any optimization task at all requires observation, analysis (read: introspection), and adjustment. That’s why we model training loops as optimization problems.
We spoof that with ReAct prompts in LLMs, but it becomes clear after a few iterations that there’s no real optimization going on, just guessing at tokens (a gross oversimplification, as this guessing has real uses). It’s doing what it was trained to do: complete text. Not to mention that those steps all disappear when the prompt is changed.
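For what it's worth, that observe/analyze/adjust loop is literally the shape of an optimization loop. A made-up 1-D toy (not any particular training setup): minimize (x - 3)^2 by observing the loss, analyzing the gradient, and adjusting the parameter.

```python
# Hypothetical toy: gradient descent on f(x) = (x - 3)^2.
LEARNING_RATE = 0.1

x = 0.0
for step in range(50):
    loss = (x - 3.0) ** 2        # observe: how badly are we doing?
    grad = 2.0 * (x - 3.0)       # analyze: which direction is downhill?
    x -= LEARNING_RATE * grad    # adjust: step that way
print(f"x = {x:.4f}, loss = {(x - 3.0) ** 2:.8f}")  # x converges to 3.0
```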
love this, I will use this in future rants.
There used to be a strident faction that would say "but AI can't produce original art/a symphony/novel/etc". My answer was usually (correctly), "neither can you."
Sure, but I think most people are intelligent according to my definition, whereas AI is not…
You’re already coming from the assumption that people are “soulless automatons,” which is probably why the idea of a machine being “intelligent” is so easy for you to accept.
> There used to be a strident faction that would say "but AI can't produce original art/a symphony/novel/etc". My answer was usually (correctly), "neither can you."
This is a dumb apples-and-oranges comparison. AI as a concept is different from a concrete person.
AI as a concept can do anything, it’s a conceptual placeholder for an everything machine.
do your people obey the laws of physics? is the soul magical or physical?
as far as i can see, the achievement would just be a spiking multimodal variant of transformers on neuromorphic hardware.
Words are useful to the extent they effectively communicate with the intended audience.
This can be accomplished by a mix of familiarity (has this word already been used enough in the target audience with the intended meaning?) and the ability to evoke new meanings via intuitive derivation rules (word composition, affixes, ...).
In the case of this title, fwiw, it was perfectly clear to me what this was about because I'm already familiar with related topics and they were using the same terminology.