Thermodynamic Computing https://knowm.org/thermodynamic-computing/
It's the most high-influence, low-exposure essay I've ever read. As far as I'm concerned, this dude is a silent, prescient genius working quietly for DARPA, and I had a sneak peek into future science when I read it. It's affected my thinking and trajectory for the past 8 years.
The explanations about quantum mechanics are also imprecise and go nowhere towards the point of the article. Add a couple janky images and the "crank rant" impression is complete.
To be honest, I don't expect much in the way of sorting through these fuzzy, high-dimensional topics from someone like yourself who gravitates toward formal logic systems as a way to understand the world. I would expect someone from your world to dismiss such things.
Anyhow, sorry for the initial disrespect, and thanks for taking the time to offer feedback
"Physics–and more broadly the pursuit of science–has been a remarkably successful methodology for understanding how the gears of reality turn. We really have no other methods–and based on humanity’s success so far we have no reason to believe we need any."
Physics, which is to say, physical methods have indeed been remarkably successful...for the types of things physical methods select for! To say it is exhaustive not only begs the question, but the claim itself is not even demonstrable by these methods.
The second claim contains the same error, but with more emphasis. This is just off-the-shelf scientism, and scientism, quite apart from the withering refutations it has received, should be obviously self-refuting. Is the claim that "we have no other methods but physics" (where physics is the paradigmatic empirical science; substitute accordingly) a scientific claim? Obviously not. It is a philosophical claim. That alone refutes it.
Thus, philosophy has entered the chat, and this is no small concession.
It seems unlikely you could suggest a concrete alternative to physics that explains observable phenomena as well and makes generalizable predictions. Demonstrating one would advance your theoretical philosophy. In the meantime, the rest of us will stick to physics, because nobody has a coherent alternative that explains our observations.
I just did. Please reread my post. Physics cannot claim to be "all there is", as that is not a proposition that belongs to physics, but to philosophy. And philosophy cannot make this claim, because it would be self-refuting.
As far as observables are concerned, physics can only deal with what is quantifiable. So anything that isn't quantifiable is by definition beyond the grasp of physical methods. It is circular to say that all that exists is that which is quantifiable. It would be putting the cart before the horse. Some call this the "streetlight effect".
Now, physics vs. other scientific disciplines, sure. Physicists love to claim dominion, just like mathematicians do. It is generally true, however, that physics = math + reality, and we don't actually have any evidence of anything in this world existing outside a physical description (e.g. a lot of physics combined = chemistry, a lot of chemistry = biology, a lot of biology = sociology, etc.). Thus it's reasonable to assume that the chemistry in this world is 100% governed by the laws of physics, and transitively this is true for sociology too (indeed, game theory is one way we quantifiably explain the physical reality of why people behave the way they do). We also see this in math, where different disciplines have different "bridges" between them. Does that mean they're actually separate disciplines, or just that we've chosen to name features on the topology as such?
Physics, the biological sciences: these are tools the mind uses to try to make guesses about the future based on past events. But the abstraction isn't perfect, and it's questionable whether it could or should one day be.
The clear example is that large breakthroughs in science often come from rethinking this fundamental abstraction to explain problems the old implementation had trouble with. Case in point: quantum physics, which warped how we originally understood Newtonian physics. Einstein fucking hated quantum because he felt it undermined the idea of objective reality.
The reality (pun intended) is that it is much more complex than abstractions like science, and we would do well to remember that they are pragmatic tools, ultimately unconcerned with metaphysics, the study of the underlying nature of reality.
This all seems like philosophical rambling until we get to little lines like this. Scientism, the belief that science is the primary and only necessary lens for understanding the world, falls into the same trap as religion: thinking you have the answer to reality, so anything outside it is either unnecessary or even dangerous to one who holds these views.
Often such attempts try to place themselves wholly outside the realm of science, which I don't think puts them on strong footing. Just as updates to the standard model still have to explain our current understanding of quantum mechanics and relativity, alternate methodologies for observing reality have to hold up to scientific scrutiny.
But I claim ignorance here. What better mechanisms has humanity developed for observing and understanding reality?
Excuse me? According to whom? This is preposterously false. The claim you are making is a philosophical one, and an extremely naive one, and I invite you to study the philosophical analysis of that claim. A good, lucid, and synthetic introduction to the philosophy of science might be this book [0], but there are many others, and many less modest in scope. Thomas Nagel, for example, is known for his criticisms of physicalism. Bertrand Russell also had some things to say about the nature of physics that you might find interesting.
> It is generally true however that physics = math + reality
What does that mean? And how can math be so useful in physics if it has nothing to do with reality? [1]
I encourage you then to read mathematicians and physicists rather than philosophers. Math is basically a self-consistent formal language for describing ideas. With a language you can express all sorts of ideas that aren't actually physically possible. Physics is the exploration of which mathematical ideas can be used to describe observed reality. Math is useful precisely because it is formal and self-consistent. You can verify that the math is correct before you start building experiments to test whether reality conforms to a given idea. For example, while we know black holes exist, physicists generally acknowledge that our understanding of the interior is unlikely to be accurate, because our current mathematical equations spit out infinities for the interior.
> Excuse me? According to whom? This is preposterously false. The claim you are making is a philosophical one, and an extremely naive one, and I invite you to study the philosophical analysis of that claim
Logic, math, and the scientific method came out of philosophy. Philosophy's contributions basically end there, as far as I'm concerned. Philosophy continued to dick around with empirical approaches to logic and gave up in the 17th century, until mathematicians revived it using formal methods and proofs. It poses hypothetical problems as if they're unanswerable, only for science and mathematics to come by and render them obsolete (from Zeno's paradox, which is answered by infinite sums, to the trolley problem, whose answer is to avoid such artificial constructions at all costs rather than to know how you'd behave in that scenario, which is also how we've trained AI to operate).
> Thomas Nagel, for example, is known for his criticisms of physicalism
Right. It's pretty expected for religious people to have a problem with physicalism, because it's pretty atheistic in nature. The underlying philosophy of the scientific method is "everything that's physical follows the rules of physics, and we can explore it, test it, observe it and understand it, while everything else is not science". Religious people have a problem with this because it clearly separates God and science, and they try to reinject God by falsely equating physicalism with a religion of its own.
> Bertrand Russell also had some things to say about the nature of physics that you might find interesting
I have no qualm with Bertrand Russell. He's a brilliant thinker and I agree with his critiques which aren't a refutation but a clarification that we can know the structure of the physical world (as expressed in equations and laws) but not necessarily the intrinsic nature of its entities.
I think a lot of scientists and physicists agree with this, insofar as "intrinsic" is by definition unknowable. There's an alternate branch of thought which says "reality is what the math says". The beauty of this is that it's a metaphysically unknowable argument, like religion, so which side is right or wrong is completely undecidable and outside the realm of science.
Russell saw physics and philosophy as mutually reinforcing. He believed that philosophy must take into account the best scientific knowledge, but also that science rests on philosophical foundations (like logic, mathematics, and epistemology). The problem I have is that while it does, those philosophical foundations are unchanging and not what people think of as "modern philosophy", which just rehashes the same old tired arguments without actually coming up with ideas on how to solve them (which is what something like Zeno's paradox was). The "true" philosophy of "here's a problem we don't know how to solve" remains in the realm of physics (e.g. the open problems about gravity and quantum mechanics, the origin of the universe, etc.).
Did Sandia pay list price? Or did SpiNNcloud Systems give it to Sandia for free (or at least at a heavily subsidised price)? I conjecture the latter. Maybe someone from Sandia is on the list here and can provide details?
SpiNNcloud Systems is known for making misleading claims, e.g. their home page https://spinncloud.com/ lists DeepMind, DeepSeek, Meta and Microsoft as "Examples of algorithms already leveraging dynamic sparsity", giving the false impression that those companies use SpiNNcloud Systems machines, or the specific computer architecture SpiNNcloud Systems sells. Their claims about energy efficiency (like "78x more energy efficient than current GPUs") seem sketchy. How do they measure energy consumption and trade it off against compute capacity? E.g. a Raspberry Pi uses less absolute energy than an NVIDIA Blackwell, but is that a meaningful comparison?
I'd also like to know how to program this machine. Neuromorphic computers have so far been terribly difficult to program. E.g. have JAX, TensorFlow and PyTorch been ported to SpiNNaker 2? I doubt it.
So naturally, this thing should be called Deep Spike, Deep Spin, Deep Discount, or -- given its storage-free design -- Deep Void.
If it can accelerate nested 2D FORTRAN loops, you could even call it Deep DO DO, and the next deeper version would naturally be called Deep #2.
JD Vance and Peter Thiel could gang up, think long and hard, go all in, and totally get behind vigorously pushing and fully funding a sexy supercomputer with more comfortably upholstered, luxuriously lubricated, passively penetrable cushioned seating than even a Cray-1, called Deep Couch. And the inevitable jealous break-up would be more fun to watch than the Musk-Trump Bromance!
Sounds like the big brother of Dev Null? :)
If you don't handle effects in precisely the correct order, the simulation becomes more about architecture, network topology, and how race conditions resolve. We need to simulate a spike preceding another spike in exactly the right way, or things like STDP will wildly misfire. The "online learning" promised land will turn into a slip-and-slide.
A priority queue using a quaternary min-heap implementation is approximately the fastest way I've found to serialize spikes on typical hardware. This obviously isn't how it works in biology, but we are trying to simulate biology on a different substrate so we must make some compromises.
I wouldn't argue that you couldn't achieve wild success in a distributed & more non-deterministic architecture, but I think it is a very difficult battle that should be fought after winning some easier ones.
There are some counterintuitive design choices that emerge, and it inevitably leads to vector machines with arbitrary-order tensor capabilities.
Have a wonderful day =)
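For the curious, a quaternary (4-ary) min-heap spike queue like the one mentioned above can be sketched in a few lines of Python. This is a hypothetical toy, not the commenter's actual implementation; spikes are assumed to be (time, neuron_id) pairs ordered by timestamp:

```python
# Toy 4-ary min-heap for serializing spike events by timestamp.
# Illustrative sketch only; real simulators batch and pool these operations.

class SpikeQueue:
    def __init__(self):
        self._h = []  # flat array of (time, neuron_id) tuples

    def push(self, time, neuron_id):
        # Append, then sift up: parent of node i lives at (i - 1) // 4.
        self._h.append((time, neuron_id))
        i = len(self._h) - 1
        while i > 0:
            parent = (i - 1) // 4
            if self._h[i] < self._h[parent]:
                self._h[i], self._h[parent] = self._h[parent], self._h[i]
                i = parent
            else:
                break

    def pop(self):
        # Remove the earliest spike, move the last leaf to the root,
        # then sift down among up to 4 children per node.
        h = self._h
        top = h[0]
        last = h.pop()
        if h:
            h[0] = last
            i, n = 0, len(h)
            while True:
                first = 4 * i + 1
                if first >= n:
                    break
                child = min(range(first, min(first + 4, n)), key=h.__getitem__)
                if h[child] < h[i]:
                    h[i], h[child] = h[child], h[i]
                    i = child
                else:
                    break
        return top
```

A 4-ary heap is roughly half as deep as a binary heap, trading a few extra comparisons per level for fewer levels of pointer-chasing, which tends to be cache-friendly on typical hardware.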
- 152 cores per chip, equivalent to ~128 CUDA cores per SM
- per-chip SRAM (20 MB) equivalent to SM high-speed shared memory
- per-board DRAM (96 GB across 48 chips) equivalent to GPU global memory
- boards networked together with something akin to NVLink
I wonder if they use HBM for the DRAM, or do anything like coalescing memory accesses.
John von Neumann's concept of the instruction counter was great for the short run, but eventually we'll all learn it was a premature optimization. All those transistors tied up as RAM, sitting idle most of the time: a huge waste.
In the end, high speed computing will be done on an evolution of FPGAs, where everything is pipelined and parallel as heck.
Oh... 138,240 Gigabytes of RAM.
Ok.
So a paltry 2,304 GB RAM
1,440 boards × 96 GB/board = 138,240 GB
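Spelled out as a quick sanity check, using the board count and per-board DRAM figures from the thread:

```python
# Total DRAM across the machine (figures taken from the thread above).
boards = 1_440
dram_per_board_gb = 96

total_gb = boards * dram_per_board_gb  # 138,240 GB
total_tib = total_gb / 1024            # 135 TiB, nowhere near 138,240 TB

print(total_gb, total_tib)
```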
On the TRS-80 Model III, the reset button was a bright red recessed square to the right of the attached keyboard.
It was irresistible to anyone who had no idea what you were doing as you worked, lost in the flow, insensitive to the presence of another human being, until...
--
Then there was the Kaypro. Many of their systems had a bug, software or hardware, that would occasionally cause an unplanned reset the first time you tried writing to the disk after turning the machine on. Exactly the wrong moment.
It was comically vulnerable -- just begging to be pressed. The early models had one so soft and easy to trigger that your cat could reboot your Apple ][ with a single curious paw. Later revisions stiffened the spring a bit, but it was still a menace.
There was a whole cottage industry aftermarket of defensive accessories: plastic shields that slid over the reset key, mail-order kits to reroute it through an auxiliary switch, or firmware mods to require CTRL-RESET. You’d find those in the classified ads in Nibble or Apple Orchard magazines, nestled between ASCII art of wizards and promises to triple your RAM.
Because nothing says "I live dangerously" like writing your 6502 assembly in memory with the mini assembler without saving, then letting your little brother near the keyboard.
I got sweet, sweet revenge by writing a "Flakey Keyboard Simulator" in assembly that hooked into the keyboard input vector and drove him bonkers by occasionally missing, mistyping, and repeating keystrokes, indistinguishable from a genuinely flaky keyboard or a drunk typist.
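In spirit (and in Python rather than 6502 assembly), the hook did something like this. The function name, probabilities, and "mistype" rule below are made up for illustration; on a real Apple II the trick was patching the keyboard input vector so every read passed through your routine:

```python
# Illustrative sketch of a "flaky keyboard" shim: wrap a key-reading
# function so some fraction of keystrokes are dropped, mistyped, or repeated.
import random

def flaky(read_key, p=0.1, rng=random.random):
    """Return a wrapper around read_key that misbehaves with probability p,
    split evenly between dropping, mistyping, and repeating a keystroke."""
    def wrapped():
        key = read_key()
        r = rng()
        if r < p / 3:
            return None                # drop the keystroke entirely
        if r < 2 * p / 3:
            return chr(ord(key) + 1)   # "mistype": emit an adjacent code
        if r < p:
            return key + key           # repeat the keystroke
        return key                     # behave normally
    return wrapped
```

Passing `rng` in explicitly makes the misbehavior deterministic for testing; the original, of course, just read whatever entropy a 1 MHz 6502 had lying around.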
RESET on the Apple II was a warm reset. You could change the reset vector in low memory (at $3F2-$3F4) so that pressing it caused a cold start (many apps did that), but even then, memory is not fully erased on startup, so you'd probably be kind of OK.
https://cointelegraph.com/news/neuromorphic-computing-breakt...
At the end of the day, processors really just load data, process, and store back to durable data (or generate some visible side effect).
But in this case, one wouldn't be subject to macro-scale nonlinear effects arising from the uncertainty principle when trying to "restore" the system.
It's interesting that the article doesn't say that's what it's actually going to be used for - just event driven (message passing) simulations, with application to defense.
Wasn't that the plot of the movie War Games?
It took a lot of effort but it actually worked!
Cool experiment, but is this actually a practical path forward or just a dead end with a great headline? Someone convince me I'm wrong...
But sometimes you just have to let the academics cook for a few decades, and then something fantastical pops out the other end. If we ever make something that is truly AGI, its architecture is probably going to look more like this SpiNNaker machine than anything we are currently using.
It doesn't have built-in storage, but that doesn't mean it can't connect to external storage, or that its memory cannot be retrieved from a front-end computer.
If you have two systems with opposite bottlenecks you can build a composite system with the bottlenecks reduced.
Yes, a different usage pattern could give you the worst of both worlds.
That's a very deep insight from queueing theory and similar topics. I don't know if you wrote what you wrote with that in mind (I'm not being an ass, I just want to highlight the observation).
A system's behavior is highly dependent, to the point of it being fundamental, on how it is going to be used.
There's plenty to learn from endeavors like this, even if this particular approach isn't the one that e.g. achieves AGI.
One thing to remember is an operating system is just another computer program.
I also don't understand why this machine is interesting. It has a lot of RAM.... ok, and? I could get a consumer-grade PC with a large amount of RAM (admittedly not quite as much), put my applications in a ramdisk, e.g. tmpfs, and get the same benefit.
In short, what is the big deal?
(Sandia means watermelon in Spanish)
:(
NVIDIA step up your game. Now I want to run stuff on based cores.