Ask HN: Weirdest Computer Architecture?
94 points
by bckr
2 months ago
| 55 comments
My limited understanding of “the stack” is:

  Physical substrate: Electronics
  Computation theory: Turing machines
  Smallest logical/physical parts: Transistors, Logic gates
  Largest logical/physical parts: Chips
  Lowest level programming: ARM/x64 instructions 
  First abstractions of programming: Assembly, C compiler
  Software architecture: Unix kernel, Binary interfaces
  User interface: Screen, Mouse, Keyboard, GNU
Does there exist a computer stack that changes all of these components?

Or at the least uses electronics but substitutes something else for Turing machines and above.

runjake
2 months ago
[-]
Here are some architectures that might interest you. Note these are links that lead to rabbit holes.

1. Transmeta: https://en.wikipedia.org/wiki/Transmeta

2. Cell processor: https://en.wikipedia.org/wiki/Cell_(processor)

3. VAX: https://en.wikipedia.org/wiki/VAX (Was unusual for its time, but many concepts have since been adopted)

4. IBM zArchitecture: https://en.wikipedia.org/wiki/Z/Architecture (This stuff is completely unlike conventional computing, particularly the "self-healing" features.)

5. IBM TrueNorth processor: https://open-neuromorphic.org/blog/truenorth-deep-dive-ibm-n... (Cognitive/neuromorphic computing)

reply
nrr
2 months ago
[-]
For those really wanting to dig into z/Architecture: <https://www.ibm.com/docs/en/module_1678991624569/pdf/SA22-78...>

The Wikipedia link has it as its first reference, but it's honestly worth linking directly here. I highly recommend trying to get someone to give you a TSO/E account and picking up HLASM.

reply
skissane
2 months ago
[-]
I put MVS 3.8J in a Docker image: https://github.com/skissane/mvs38j

Not TSO/E, just plain old TSO. Not HLASM, but its predecessor Assembler F (IFOX00). Still, if you get the hang of the 1970s version, the 2020s version is just adding stuff. And some of the stuff it adds is actually more familiar (like Unix and Java).

reply
nrr
2 months ago
[-]
About the only thing that's truly limiting about using such an old release of MVS is the 24-bit addressing and maybe the older pre-XA I/O architecture.

Having a simulated 3033 running at 10+ MIPS is pretty nice though. (:

reply
skissane
2 months ago
[-]
> About the only thing that's truly limiting about using such an old release of MVS is the 24-bit addressing

I've never used it, but there's a hacked up version that adds 31-bit addressing [0].

It is truly a hack though – porting 24-bit MVS to XA is a monumental task, not primarily due to address mode (you can always ignore new processor modes, just like how a 286-only OS will happily run on a 386 without realising it is doing so), but rather due to the fundamental incompatibility in the IO architecture.

The "proper" way to do it would be to have some kind of hypervisor which translates 370 I/O operations to XA – which already exists, and has for decades: it is called VM/XA, but it is sadly under IBM copyright restrictions just like MVS/XA. I suppose someone could always write their own mini-hypervisor that did this, but the set of people with the time, inclination and necessary skills is approximately (if not exactly) zero.

So instead the hacky approach is to modify Hercules to invent a new hybrid "S/380" architecture which combines XA addressing with 370 IO. Given it never physically existed, I doubt Hercules will ever agree to upstream it.

Also, IIRC, it doesn't implement memory protection/etc for above-the-line addresses, making "MVS/380" essentially a single-tasking OS as far as 31-bit code goes. But the primary reason for its existence is that GCC can't compile itself under 24-bit since doing so consumes too much memory, and for that limited purpose you'd likely only ever run one program at a time anyway.

I guess the other way of solving the problem would have been to modify GCC to do application-level swapping to disk - which is what a lot of historical compilers did to fit big compiles into limited memory. But making those kinds of radical changes to GCC is probably too much work for a hobby. Or pick a different compiler altogether – GCC was designed from the beginning for 32-bit machines, and alternatives would probably fare better in very limited memory – but people had already implemented 370 code generation for GCC (contemporary GCC supports 64-bit and 31-bit; I don't think contemporary mainline GCC supports 24-bit code generation any more, but people use an old version or branch which did). I wonder about OpenWatcom, since that's originally a 370 compiler, and I believe the 370 code generator is still in the source tree, although I'm not sure if anybody has tried to use it.

[0] https://mvs380.sourceforge.net/

reply
nrr
2 months ago
[-]
Yeah, I've wondered what the lift would be to backport XA I/O to MVS 3.8j, among other things, but given that it's a pretty pervasive change to the system, I'm not surprised to learn that it's pretty heavy.

To your note about a hypervisor though: I did consider going this route. I already find VM/370 something of a more useful system anyway, and having my own VM/XA of sorts is an entertaining prospect.

reply
skissane
2 months ago
[-]
It arguably doesn't require anything remotely as complex/feature-rich as full VM/XA: it wouldn't need to support multiple virtual machines or complicated I/O virtualisation.

Primarily just intercept SIO/etc instructions, and replace them with the XA equivalent.

Another idea that comes to mind: you could locate the I/O instructions in MVS 3.8J and patch them over with branches to some new "I/O translation" module. The problem with that, I think, is that while the majority of I/O goes through a few central places in the code (IOS, invoked via the SVC call made by EXCP), there are I/O instructions splattered everywhere in less central parts of the system (VTAM, TCAM, utilities, etc).

reply
nrr
2 months ago
[-]
I leaned into the "well, what if my own VM/XA" because, uh, VM/CMS has the odd distinction among IBM's operating systems of the era of being both (1) source available and (2) site-assemblable. I've gone through my fair share of CMS and CP generations, which felt like a more complete rebuild of those nuclei than the MVS sysgens I've done.

That there makes me feel a little less confident in an MVS 3.8j patching effort.

reply
patterner
2 months ago
[-]
i loved working with z/Arch assembly. best job i ever had.
reply
PaulHoule
2 months ago
[-]
I wouldn't say the VAX was unusual even though it was a pathfinder in that it showed what 32-bit architectures were going to look like. In the big picture the 386, 68040, SPARC and other chips since then have looked a lot like a VAX, particularly in how virtual memory works. There's no fundamental problem with getting a modern Unix to run on a VAX except for all the details.

Z is definitely interesting given its history with the IBM 360 and its 24-bit address space (24-bit micros existed in the 1980s, such as the 286, but I never had one that was straightforward to program in 24-bit mode), which, around the time the VAX came out, got expanded to 31 bits:

https://en.wikipedia.org/wiki/IBM_System/370-XA

reply
skissane
2 months ago
[-]
> 24 bit micros existed in the 1980s such as the 286 but I never had one that was straightforward to program in 24 bit mode

Making clear that we are talking about 24-bit physical or virtual addressing (machines with a 24-bit data word were quite rare, mainly DSPs, plus some non-IBM mainframes like the SDS 940):

286’s 16-bit protected mode was heavily used by Windows 3.x in Standard mode. And even though 386 Enhanced Mode used 32-bit addressing, from an application developer viewpoint it was largely indistinguishable from 286 protected mode, prior to Win32s. And then Windows NT and 95 changed all that.

286’s protected mode was also heavily used by earlier DOS extenders, OS/2 1.x, earlier versions of NetWare and earlier versions of Unix variants such as Xenix. Plus 32-bit operating systems such as Windows 9x/Me/NT/2000/XP/etc and OS/2 2.x+ still used it for backward compatibility when running older 16-bit software (Windows 3.x and OS/2 1.x)

Other widely used CPU architectures with 24-bit addressing included anything with a Motorola 68000 or 68010 (32-bit addressing was only added with the 68020 onwards, while the 68012 had 31-bit addressing). So that includes early versions of classic MacOS, AmigaOS, Atari TOS - and also Apple Lisa, various early Unix workstations, and umpteen random operating systems which ran under 68K which you may have never heard of (everything from CP/M-68K to OS/9 to VERSAdos to AMOS/L).

ARM1 (available as an optional coprocessor for the BBC Micro) and ARM2 (used in the earliest RISC OS systems) were slightly more than 24-bit, with 26-bit addressing. And some late pre-XA IBM mainframes actually used 26-bit physical addressing despite only having 24-bit virtual addresses. Rather similar to how 32-bit Intel processors ended up with 36-bit physical addressing via PAE

reply
PaulHoule
2 months ago
[-]
I understand the 286 protected mode still made you mess around with segment registers; if you wanted to work with 24-bit long pointers you would have to emulate that behavior with the segment registers, and it was a hassle.

I didn't think the segment registers were a big hassle in 8086 real mode; in fact, I thought it was fun to do crazy stuff in assembly, such as using segment register values as long pointers to large objects (with 16-byte granularity). I think the segment registers would have felt like more of a hassle if I were writing larger programs (e.g. 64k data + 64k code + 64k stack gets you further towards utilizing a 640k address space than it does towards a 16M address space).
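
For anyone who never did real-mode programming, the trick works because the 8086 forms a 20-bit physical address as segment*16 + offset, which is exactly where the 16-byte granularity comes from. A quick Python sketch of the arithmetic (illustrative only):

  def real_mode_address(segment: int, offset: int) -> int:
      """8086 real mode: physical address = segment * 16 + offset, wrapped to 20 bits."""
      return ((segment << 4) + offset) & 0xFFFFF

  # The same physical byte is reachable from many segment:offset pairs:
  assert real_mode_address(0xB800, 0x0000) == real_mode_address(0xB000, 0x8000) == 0xB8000

  # Treating a bare segment value as a "long pointer" to a 16-byte-aligned object:
  obj_segment = 0x1234                      # refers to physical 0x12340
  print(hex(real_mode_address(obj_segment, 0)))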

I recently discovered

https://en.wikipedia.org/wiki/Zilog_eZ80

and I think that 2001 design is close to an ideal 24-bit micro in that both the regular and index registers are extended to 24 bits, so you have long pointers and all the facilities to work in a 24-bit "problem space" on an architecture that is reasonable to write compilers for. It would be nice to have an MMU so you could have a real OS (even something with bounds registers would please me), but with many reasonably priced dev boards like

https://www.tindie.com/products/lutherjohnson/makerlisp-ez80...

it is a dream you can live. I don't think anything else comes close, certainly not

https://en.wikipedia.org/wiki/WDC_65C816

where emulating long pointers would have been a terrible hassle and which didn't do anything to address the compiler hostility of the 6502.

---

Now if I wanted the huge register file of the old 360, I'd go to the thoroughly 8-bit AVR-8, where I sometimes have enough registers for my inner loop and interrupt handler variables. I use 24-bit pointers on AVR-8 to build data structures stored in flash for graphics and such, and since even 16-bit operations are spelled out, 24-bit is a comfortable stop on the way to larger things.

reply
skissane
2 months ago
[-]
> I understand the 286 protected mode still made you mess around with segment registers, if you wanted to work with 24-bit long pointers you would have to emulate that behavior with the segment registers and it was a hassle.

As an application programmer it wasn't much different from 16-bit real mode. Windows 3.x, OS/2 1.x and 16-bit DOS extenders gave you APIs for manipulating the segments (GDT/LDT/etc). You'd say you wanted to allocate a 64KB segment of memory, and it would give you a selector number you could load into your DS or ES register – not fundamentally different from DOS. From an OS programmer's perspective it was more complex, of course.

It was true that with 16-bit real mode you could relatively easily acquire >64KB of contiguous memory, in 16-bit protected mode that was more difficult to come by. (Although the OS could allocate you adjacent selector numbers–but I'm not sure if 16-bit Windows / OS/2 / etc actually offered that as an option.)

That said, 16-bit GDTs/LDTs have a relatively sensible logical structure. Their 32-bit equivalents were a bit of a mess due to backward compatibility requirements (the upper bits of the base and limit being stored in separate fields from the lower bits). And while the 386 has a much richer feature set than the 286, those added features bring a lot of complexity that 286 OS developers didn't need to worry about, even if you try your hardest (as contemporary OSes such as Linux and 32-bit Windows do) to avoid the 386's more esoteric features (such as hardware task switching and call gates).
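
To make the "bit of a mess" concrete, here's a quick Python sketch (mine, purely illustrative) that decodes an 8-byte 386 descriptor and shows how the base and limit ended up scattered across the entry:

  def decode_descriptor(d: bytes) -> dict:
      """Decode a 386 GDT/LDT entry. The 286 defined bytes 0-5; the 386 squeezed
      the extra base/limit bits into the formerly reserved bytes 6-7, so both
      fields end up split across non-adjacent places."""
      assert len(d) == 8
      limit = d[0] | (d[1] << 8) | ((d[6] & 0x0F) << 16)        # limit[15:0] + limit[19:16]
      base = d[2] | (d[3] << 8) | (d[4] << 16) | (d[7] << 24)   # base[15:0], [23:16], [31:24]
      if d[6] & 0x80:                                           # G bit: limit is in 4 KiB pages
          limit = (limit << 12) | 0xFFF
      return {"base": base, "limit": limit, "access": d[5]}

  # A flat 4 GiB ring-0 code segment (base 0, limit 0xFFFFF, G=1), as 32-bit OSes use:
  print(decode_descriptor(bytes([0xFF, 0xFF, 0x00, 0x00, 0x00, 0x9A, 0xCF, 0x00])))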

reply
abs0
2 months ago
[-]
> "There's no fundamental problem with getting a modern Unix to run on a VAX except for all the details"

Pretty much. The main issue with a modern Un*x on a VAX is memory size & performance, which combine to make native compiling under recent gcc versions "problematic", so cross-building with gcc-10 or 12 is much easier.

The profusion of (from today's perspective) wacky addressing modes has made maintaining gcc for VAX more effort than it would be otherwise, but it's still there and in use for one modern Un*x: https://wiki.netbsd.org/ports/vax/ :)

You can download https://opensimh.org/ to get a VAX emulator and boot up to play

Simh also emulates a selection of other interesting and unusual architectures https://opensimh.org/simulators/

reply
Bluestein
2 months ago
[-]
> Transmeta

Whatever happened to them ...

They had a somewhat serious go at being "third wheel" back in the early 2000s, mid 1990s?

  PS. Actually considered getting a Crusoe machine back in the day ...
reply
runjake
2 months ago
[-]
They released a couple processors with much lower performance than the market expected, shut that down, started licensing their IP to Intel, Nvidia, and others, and then got acquired.
reply
sliken
2 months ago
[-]
They had a great, promising plan, and Intel was focused entirely on the Pentium 4, which had high clocks for bragging rights, a long pipeline (related to the high clocks), and high power usage.

However, between Transmeta's idea and shipping a product, Intel's Israel team came up with the Intel Core series: MUCH more energy efficient, much better performance per clock, and ideal for lower-power platforms like laptops.

Sadly Transmeta no longer had a big enough advantage, sales decreased, and I heard many of the engineers ended up at Nvidia, which did use some of their ideas in an Nvidia product.

reply
Bluestein
2 months ago
[-]
> Sadly Transmeta no longer had a big enough advantage, sales decreased, and I heard many of the engineers ended up at Nvidia, which did use some of their ideas in an Nvidia product.

Funny how that came about. Talent finds a way. Now they're all sitting in a 3T ship.-

reply
em-bee
2 months ago
[-]
i did get a sony picturebook with a transmeta processor. the problem was that as a consumer i didn't notice anything special about it. for transmeta to make it they would have had to either be cheaper or faster or use less power to be attractive for consumer devices.
reply
Bluestein
2 months ago
[-]
I seem to recall them machines not being cheaper, which was my main hope at the time :)
reply
em-bee
2 months ago
[-]
i got mine as a gift, so i don't remember the price, but i don't think it was cheap. however that would not even bother me. they would have at least had to be cheaper for manufacturers to make it worth it to put them into more devices.
reply
ramses0
2 months ago
[-]
Excellent summary, add "Water Computers" to the mix for completeness. https://www.ultimatetechnologiesgroup.com/blog/computer-ran-...
reply
mass_and_energy
2 months ago
[-]
The use of the Cell processor in the PlayStation 3 was an interesting choice by Sony. It was the perfect successor to the PS2's VU0 and VU1, so if you were a game developer coming from the PS2 space and were well-versed in the concept of "my program's job is to feed the VUs", you could scale that knowledge up to keep the cores of the Cell working. The trick seems to be in managing synchronization between them all.
reply
lormayna
2 months ago
[-]
Why did the Cell processor not have success in AI/DL applications?
reply
crote
2 months ago
[-]
It was released a decade and a half too early for that, and at the time it was too weird and awkward to use to stay relevant once CUDA caught on.
reply
mass_and_energy
2 months ago
[-]
This. CUDA handles a lot of overhead that the dev is responsible for on the Cell architecture. Makes you wonder what PS3 games would have looked like with CUDA-style abstraction of the Cell's capabilities
reply
sillywalk
2 months ago
[-]
I don't know the details of CUDA, so this may not be a good comparison, but there were efforts to abstract Cell's weirdness. It wasn't for the PS3, but for supercomputers, in particular Roadrunner, which had both Opteron and Cell processors. It was called CellFS and was based on the 9P protocol from Plan 9. The papers report 10-15% overhead, which may not have worked for PS3 games.

https://fmt.ewi.utwente.nl/media/59.pdf

https://www.usenix.org/system/files/login/articles/546-mirtc...

http://doc.cat-v.org/plan_9/IWP9/2007/11.highperf.lucho.pdf

reply
jecel
2 months ago
[-]
"Computer architecture" is used in several different ways and that can lead to some very confusing conversations. Your proposed stack has some of this confusion. Some alternative terms might help:

"computational model": finite state machine, Turing machine, Petri nets, data-flow, stored program (a.k.a. Von Neumann, or Princeton), dual memory (a.k.a. Harvard), cellular automata, neural networks, quantum computers, analog computers for differential equations

"instruction set architecture": ARM, x86, RISC-V, IBM 360

"instruction set style": CISC, RISC, VLIW, MOVE (a.k.a TTA - Transport Triggered Architecture), Vector

"number of addresses": 0 (stack machine), 1 (accumulator machine), 2 (most CISCs), 3 (most RISCs), 4 (popular with sequential memory machines like Turing's ACE or the Bendix G-15)

"micro-architecture": single cycle, multi-cycle, pipelines, super-pipelined, out-of-order

"system organization": distributed memory, shared memory, non uniform memory, homogeneous, heterogeneous

With these different dimensions for "computer architecture" you will have different answers for which was the weirdest one.
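
To make the "number of addresses" axis concrete, here's a toy sketch (my own, not tied to any real ISA) of the same statement in 3-address and 0-address form, plus a few lines of Python that execute the stack version:

  # d = a*b + c
  #
  # 3-address style (most RISCs):     0-address / stack style:
  #   mul t1, a, b                      push a
  #   add d, t1, c                      push b
  #                                     mul
  #                                     push c
  #                                     add
  #                                     pop d

  def run_stack_machine(program, variables):
      """Tiny 0-address interpreter: all operands live on an implicit stack."""
      stack = []
      for op, *arg in program:
          if op == "push":
              stack.append(variables[arg[0]])
          elif op == "pop":
              variables[arg[0]] = stack.pop()
          elif op == "mul":
              b, a = stack.pop(), stack.pop()
              stack.append(a * b)
          elif op == "add":
              b, a = stack.pop(), stack.pop()
              stack.append(a + b)
      return variables

  env = {"a": 2, "b": 3, "c": 4}
  prog = [("push", "a"), ("push", "b"), ("mul",),
          ("push", "c"), ("add",), ("pop", "d")]
  print(run_stack_machine(prog, env)["d"])   # 10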

reply
defrost
2 months ago
[-]
Setun: a three-valued ternary logic computer instead of the common binary: https://en.wikipedia.org/wiki/Setun

Not 'weird' but any architecture that doesn't have an 8-bit byte causes questions and discussion.

E.g. the Texas Instruments DSP chip family for digital signal processing: they're all about deeply pipelined FFT computations with floats and doubles, not piddling about with 8-bit ASCII .. there are no hardware-level bit operations to speak of, and the smallest addressable memory size is either 32 or 64 bits.

reply
mikewarot
2 months ago
[-]
BitGrid is my hobby horse. It's a Cartesian grid of cells, each with 4 bits in and 4 bits out via LUTs (lookup tables), latched in alternating phases to eliminate race conditions.

It's the response to the observation that most of the transistors in a computer are idle at any given instant.

There's a full rabbit hole's worth of advantages to this architecture once you really dig into it.

Description https://esolangs.org/wiki/Bitgrid

Emulator https://github.com/mikewarot/Bitgrid
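
Here's a rough Python sketch of the cell model from that description (my own simplification: the wiring order and phase scheme are invented for illustration, and connecting outputs to neighbouring inputs is left out to keep it short):

  def make_cell(lut):
      """A cell is a 16-entry lookup table: 4 input bits -> 4 output bits."""
      return {"lut": lut, "inputs": [0, 0, 0, 0], "outputs": [0, 0, 0, 0]}

  def step_phase(grid, phase):
      """Update only the cells whose checkerboard parity matches `phase`.
      Latching the two halves alternately is what removes race conditions:
      a cell never reads a neighbour that is changing in the same instant."""
      for r, row in enumerate(grid):
          for c, cell in enumerate(row):
              if (r + c) % 2 != phase:
                  continue
              index = sum(bit << i for i, bit in enumerate(cell["inputs"]))
              out = cell["lut"][index]
              cell["outputs"] = [(out >> i) & 1 for i in range(4)]

  # Example LUT: if input bit 0 is set, drive all four outputs high.
  broadcast = [0xF if (i & 1) else 0x0 for i in range(16)]
  grid = [[make_cell(broadcast) for _ in range(2)] for _ in range(2)]
  grid[0][0]["inputs"][0] = 1
  step_phase(grid, 0)
  step_phase(grid, 1)
  print(grid[0][0]["outputs"])   # [1, 1, 1, 1]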

reply
bjconlan
2 months ago
[-]
On reading this I thought "oh someone's doing the Green Arrays thing" but this looks like it pre-dates those CPUs by some time.

But since, surprisingly, nobody has mentioned it yet: https://www.greenarraychips.com/ (albeit perhaps not weird, just different)

reply
mikewarot
2 months ago
[-]
The GreenArrays chips are quite interesting in their own right. The ability to have a grid of CPUs each working on part of a problem could be used to parallelize a lot of things, including the execution of LLMs.

There are secondary consequences of breaking computation down to a directed acyclic graph of binary logic operations. You can guarantee runtime, as you know a priori how long each step will take. Splitting up computation to avoid the complications of Amdahl's law should be fairly easy.

I hope to eventually build a small array of Raspberry Pi Pico modules that can emulate a larger array than any one module can handle. Linear scaling is a given.

reply
LargoLasskhyfv
2 months ago
[-]
Nag nag...(meant inspirational in case you're unaware of it)...

Regarding Amdahl's law and avoiding its complications this fits:

https://duckduckgo.com/?q=Apple-CORE+D-RISC+SVP+Microgrids+U...

(Not limited to SPARC, conceptually it's applicable almost anywhere else)

From the software/programming/compiler side, this fits right on top of it:

https://duckduckgo.com/?q=Dybvig+Nanopass

(Also conceptually, doesn't have to be Scheme, but why not? It's nice.)

reply
MaxBarraclough
2 months ago
[-]
Agreed, they belong on this list. 18-bit Forth computers, available today and intended for real-world use in low-power contexts.

Website: https://www.greenarraychips.com/home/documents/index.php#arc...

PDF with technical overview of one of their chips: https://www.greenarraychips.com/home/documents/greg/PB003-11...

Discussed:

* https://news.ycombinator.com/item?id=23142322

* https://comp.lang.forth.narkive.com/y7h1mSWz/more-thoughts-o...

reply
jononor
2 months ago
[-]
That is quite interesting. Seems quite easy and efficient to implement in an FPGA. Heck, one could make an ASIC for it via TinyTapeout - https://tinytapeout.com/
reply
jy14898
2 months ago
[-]
Transputer

> The name, from "transistor" and "computer", was selected to indicate the role the individual transputers would play: numbers of them would be used as basic building blocks in a larger integrated system, just as transistors had been used in earlier designs.

https://en.wikipedia.org/wiki/Transputer

reply
jy14898
2 months ago
[-]
Weird for its time, not so much today
reply
amy-petrik-214
2 months ago
[-]
there was some interesting funk in the 80s:

Lisp machine: https://en.wikipedia.org/wiki/Lisp_machine (these were very hot in 1980s-era AI)

Connection Machine: https://en.wikipedia.org/wiki/Connection_Machine (a gorillion monobit processor supercluster)

let us also not forget The Itanic

reply
ithkuil
2 months ago
[-]
The CDC 6000 series' peripheral processors formed a barrel processor: https://en.m.wikipedia.org/wiki/Barrel_processor

Mill CPU (so far only patent-ware but interesting nevertheless) : https://millcomputing.com/

reply
sshine
2 months ago
[-]
These aren't implemented in hardware, but they're examples of esoteric architectures:

zk-STARK virtual machines:

https://github.com/TritonVM/triton-vm

https://github.com/risc0/risc0

They're "just" bounded Turing machines with extra cryptography. The VM architectures have been optimized for certain cryptographic primitives so that you can prove properties of arbitrary programs, including the cryptographic verification itself. This lets you e.g. play turn-based games where you commit to make a move/action without revealing it (cryptographic fog-of-war):

https://www.ingonyama.com/blog/cryptographic-fog-of-war

The reason why this requires a specialised architecture is that in order to prove something about the execution of an arbitrary program, you need to arithmetize the entire machine (create a set of equations that are true when the machine performs a valid step, where these equations also hold for certain derivatives of those steps).
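
As a toy illustration of what "arithmetize the machine" means (far simpler than a real zk-STARK VM; the machine and field here are made up for the example): take a machine whose only step is "increment a counter", write its transition rule as a polynomial that must vanish over a prime field, and check a claimed execution trace against it:

  P = 2**31 - 1   # a prime field; real systems use much larger, carefully chosen fields

  def transition_constraint(state, next_state):
      """Valid step of the toy 'increment machine': next = state + 1.
      Expressed as a polynomial that must be zero (mod P) on every step."""
      return (next_state - state - 1) % P

  trace = [0, 1, 2, 3, 4]   # claimed execution trace
  valid = all(transition_constraint(trace[i], trace[i + 1]) == 0
              for i in range(len(trace) - 1))
  print(valid)              # True: every step satisfies the constraint

  # A prover commits to such a trace and proves succinctly, without revealing it,
  # that every constraint polynomial vanishes on it; a whole instruction set
  # becomes one big family of these constraints.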

reply
mikewarot
2 months ago
[-]
I thought magnetic logic was an interesting technology when I first heard of it. It's never going to replace semiconductors, but if you want to compute on the surface of Venus, you just might be able to make it work there.

The basic limits are the Curie point of the cores and the source of clock drive signals.

https://en.m.wikipedia.org/wiki/Magnetic_logic

reply
mikewarot
2 months ago
[-]
Vacuum tubes would be the perfect thing to generate the clock pulses, as they can be made to withstand the temperatures, vibrations, etc. I'm thinking a nuclear reactor to provide heat via thermopiles might be the way to power it.

However... it's unclear how thermally conductive the "atmosphere" there is; it might make heat engines unworkable, no matter how powerful.

reply
dann0
2 months ago
[-]
The AMULET Project was an asynchronous version of the ARM microprocessors. Maybe one could design away the clock, as with these? https://en.wikipedia.org/wiki/AMULET_(processor)
reply
mikewarot
2 months ago
[-]
In the case of magnetic logic, the multi-phase clock IS the power supply. Vacuum tubes are quite capable of operating for years in space, if properly designed. I assume the same could be done for the elevated pressures and temperatures on the surface of Venus. As long as you can keep the cathode significantly hotter than the anode, to drive thermionic emission in the right direction, that is.
reply
drakonka
2 months ago
[-]
This reminds me of a talk I went to at the 2020 ALIFE conference, in which the speaker presented an infinitely-scalable architecture called the "Movable Feast Machine". He suggested relinquishing hardware determinism - the hardware can give us wrong answers and the software has to recover, and in some cases the hardware may fail catastrophically. The hardware is a series of tiles with no CPU. Operations are local and best-effort, determinism not guaranteed. The software then has to reconcile that.

It was quite a while ago and my memory is hazy tbh, but I put some quick notes here at the time: https://liza.io/alife-2020-soft-alife-with-splat-and-ulam/

reply
theideaofcoffee
2 months ago
[-]
I was hoping someone was going to mention Dave Ackley and the MFM. It has really burrowed down into my mind and I start to see applications of it even when I'm not looking for them. It really is a mindfuck and I wish it were a bit more top of mind for people. I really think it will be useful when computation becomes even more ubiquitous than it is now, when we'll have to think even more about failure and make it a first-class citizen.

Though I think it will be difficult to shift the narrative from the better-performance-at-all-costs mindset toward something like this. For almost every application, you'd probably be better off worrying about integrity than raw speed.

reply
sitkack
2 months ago
[-]
Dave Ackley

Now working on the T2 Tile Project https://www.youtube.com/@T2TileProject

reply
CalChris
2 months ago
[-]
Intel's iAPX 432. 1975. Instructions were bit-aligned, stack based, 32-bit operations, segmented, capabilities, .... It was so late+slow that the 16-bit 8086 was created.

https://en.wikipedia.org/wiki/Intel_iAPX_432

reply
wallstprog
2 months ago
[-]
I thought it was a brilliant design, but it was dog-slow on the hardware of the time. I keep hoping someone will revive the design for current silicon; it would be a good impedance match for modern languages and OSes.
reply
muziq
2 months ago
[-]
The Apple ‘Scorpius’ thing they bought the Cray in the '80s to emulate: RISC, multi-core, but it could put all the cores in lockstep to operate as pseudo-SIMD. Or failing that, the 32-bit 6502 successor, the MCS65E4: https://web.archive.org/web/20221029042214if_/http://archive...
reply
mac3n
2 months ago
[-]
FPGA: non-sequential programming

Lightmatter: matrix multiply via optical interferometers

Parametron: coupled oscillator phase logic

rapid single flux quantum logic: high-speed pulse logic

asynchronous logic

https://en.wikipedia.org/wiki/Unconventional_computing

reply
nailer
2 months ago
[-]
The giant global computers that are Solana mainnet / devnet / testnet. The programs are compiled from Rust into (slightly tweaked) eBPF binaries, and state updates every 400ms, using VDFs to sync clocks between the leaders that are allowed to update state.
reply
yen223
2 months ago
[-]
A lot of things are Turing-complete. The funniest one to me is PowerPoint slides.

https://beza1e1.tuxen.de/articles/accidentally_turing_comple...

https://gwern.net/turing-complete

reply
jasomill
2 months ago
[-]
I prefer the x86 MOV instruction:

https://web.archive.org/web/20210214020524/https://stedolan....

Removing all but the mov instruction from future iterations of the x86 architecture would have many advantages: the instruction format would be greatly simplified, the expensive decode unit would become much cheaper, and silicon currently used for complex functional units could be repurposed as even more cache. As long as someone else implements the compiler.
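
As I understand the paper, the central trick is that computed addressing can stand in for comparison and selection, so control flow turns into data movement. A Python sketch of that select-by-indexing idea (just the shape of it, not actual x86):

  # Branch-free "if" in the spirit of mov-only computation: store both candidate
  # results, compute an index, and let a single indexed load pick the answer.

  def branchless_max(a: int, b: int) -> int:
      table = [a, b]          # both possible results are written out
      index = int(b > a)      # in real mov-only code even this test is done with
                              # aliased addresses rather than a compare instruction
      return table[index]     # one indexed "mov" selects the result

  print(branchless_max(3, 7))   # 7
  print(branchless_max(9, 2))   # 9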

reply
trealira
2 months ago
[-]
> As long as someone else implements the compiler.

A C compiler exists already, based on LCC, and it's called the movfuscator.

https://github.com/xoreaxeaxeax/movfuscator

reply
metaketa
2 months ago
[-]
HVM, using interaction nets as an alternative to Turing computation, deserves a mention. Google: HigherOrderCompany
reply
0xdeadbeer
2 months ago
[-]
I heard of counter machines on Computerphile https://www.youtube.com/watch?v=PXN7jTNGQIw
reply
AstroJetson
2 months ago
[-]
Huge fan of the Burroughs Large Systems Stack Machines.

https://en.wikipedia.org/wiki/Burroughs_Large_Systems

They had an attached scientific processor to do vector and array computations.

https://bitsavers.org/pdf/burroughs/BSP/BSP_Overview.pdf

reply
convivialdingo
2 months ago
[-]
Incredibly odd system which had no assembler. Almost a pure stack machine; some models had a 51-bit word (48 bits plus a 3-bit tag). One of the first machines with NUMA & SMP, segmented memory, and virtual memory (but with all the oddness of a first innovator).

I knew some of the engineers at Unisys who were still supporting Clearpath and Libra. Even they thought Burroughs was weird...

reply
AstroJetson
2 months ago
[-]
Assembler was possible in both ESPOL and NEWP; there was a way to load the initial stack at boot time to get things in place, but it was only a few instructions' worth. There was actually a patch to the Algol compiler from UC Davis that let you put any instruction in place. I used that to create named pipes for applications to use.

There were interesting procedure names in the Master Control Program (yes, Tron fans, the real MCP): JEDGARHOOVER was central to system-level security. I taught the customer-facing MCP class for a few years.

In the early days they gave you the source code and it wasn't uncommon for people to make patches and share them around. Everyone sent patches into the plant and in a release or two you would see them come back as official code.

reply
GistNoesis
2 months ago
[-]
https://en.wikipedia.org/wiki/Unconventional_computing

There is also soap bubble computing, and various forms of annealing computing (like quantum annealing or adiabatic quantum computation), where you set up your computation so that its answer is the optimal state of a physical system you can define.

reply
elkekeer
2 months ago
[-]
A multi-core Propeller processor by Parallax (https://en.wikipedia.org/wiki/Parallax_Propeller) in which multitasking is done by cores (called cogs) taking turns: first, code is executed on the first cog, then, after a while, on the second, then on the third, etc.
reply
yencabulator
2 months ago
[-]
Linked in another comment, that seems to be an example of https://en.m.wikipedia.org/wiki/Barrel_processor
reply
vismit2000
2 months ago
[-]
How about a water computer? https://youtu.be/IxXaizglscw
reply
yencabulator
2 months ago
[-]
Just the operating system, but I like Barrelfish's idea of having a separate kernel on every core and doing message passing. Each "CPU driver" is single-threaded, non-preemptible (no interrupts), shares no state, bounded-time, and runs to completion. Userspace programs can access shared memory, but the low-level stuff doesn't do that. Bounded-time run to completion kinda makes me think of seL4, if it was designed to be natively multicore.

https://en.wikipedia.org/wiki/Barrelfish_(operating_system)

https://barrelfish.org/publications/TN-000-Overview.pdf

reply
Joker_vD
2 months ago
[-]
IBM 1401. One of the weirdest ISAs I've ever read about, with basically human readable machine code thanks to BCD.
reply
jonjacky
2 months ago
[-]
Yes, it had a variable word length - a number was a string of decimal digits of any length, with a terminator at the end, kind of like a C character string.

Machine code including instructions and data was all printable characters, so you could punch an executable program right onto a card, no translation needed. You could put a card in the reader, press a button, and the card image would be read into memory and executed, no OS needed. Some useful utilities -- list a card deck on the printer, copy a card deck to tape -- fit on a single card.

https://en.wikipedia.org/wiki/IBM_1401
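
A toy model of the variable-word-length idea in Python (a deliberate simplification along the lines of the description above; the real 1401 used word marks and did its arithmetic digit by digit):

  # Memory is one printable string; a numeric field is a run of digits ended by '#'.
  memory = "A123#B45#"   # two fields, 123 and 45; opcodes and labels are printable too

  def read_field(mem: str, start: int) -> int:
      """Read digits from `start` until the field terminator."""
      end = mem.index("#", start)
      return int(mem[start:end])

  def add_fields(mem: str, a: int, b: int) -> int:
      """Add two fields of arbitrary length."""
      return read_field(mem, a) + read_field(mem, b)

  print(add_fields(memory, 1, 6))   # 123 + 45 = 168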

reply
29athrowaway
2 months ago
[-]
The Soviet water integrator: an analog, water-based computer for solving partial differential equations.

https://en.m.wikipedia.org/wiki/Water_integrator

reply
trealira
2 months ago
[-]
The ENIAC, the first computer, didn't have assembly language. You programmed it by fiddling with circuits and switches. Also, it didn't use binary integers, but decimal ones, with 10 vacuum tubes to represent the digits 0-9.
reply
jareklupinski
2 months ago
[-]
Physical Substrate: Carbon / Water / Sodium

Computation Theory: Cognitive Processes

Smallest parts: Neurons

Largest parts: Brains

Lowest level language: Proprietary

First abstractions of programming: Bootstrapped / Self-learning

Software architecture: Maslow's Theory of Needs

User Interface: Sight, Sound

reply
theandrewbailey
2 months ago
[-]
The big problem is that machines built using these technologies tend to be unreliable. Sure, they are durable, self-repairing, and can run for decades, but they can have a will of their own. While loading a program, there is a non-zero chance that the machine will completely ignore the program and tell you to go f*** yourself.
reply
variadix
2 months ago
[-]
From the creator of Forth https://youtu.be/0PclgBd6_Zs

144 small computers in a grid that can communicate with each other

reply
RecycledEle
2 months ago
[-]
Using piloted pneumatic valves as logic gates blew my mind.

If you are looking for strangeness: microcontrollers from the 1990s to early 2000s had I/O ports, but every single I/O port was different. None of them had a standard so that we could (for example) plug in a 10-pin header and connect the same peripheral to any of the I/O ports on a single microcontroller, much less any microcontroller in a family of microcontrollers.

reply
mbfg
2 months ago
[-]
I've got to believe x86 is in the running. We don't think of it because it is the dominant architecture, but it's kind of crazy.
reply
PeterStuer
2 months ago
[-]
In the '80s our lab lobbied the university to get a CM-1. We failed and they got a Cray instead. The Connection Machine was a really different architecture aimed at massively parallel execution: https://en.wikipedia.org/wiki/Connection_Machine
reply
dwrodri
2 months ago
[-]
If you really want to see some esoteric computer architecture ideas, check out Mill Computing: https://millcomputing.com/wiki/Architecture. I don't think they've etched any of their designs into silicon, but very fascinating ideas nonetheless.
reply
jacknews
2 months ago
[-]
Of course there are things like the molecular mechanical computers proposed/popularised by Eric Drexler etc.

I think Transport-triggered architecture (https://en.wikipedia.org/wiki/Transport_triggered_architectu...) is something still not fully explored.

reply
BarbaryCoast
2 months ago
[-]
Look at the earliest computers, that is, those around the time of ENIAC. Most were electro-mechanical, some were entirely relay machines. I believe EDSAC was the first _electronic_ digital computer.

As for weird, try this: ENIAC instructions modified themselves. Back then, an "instruction" (they called them "orders") included the addresses of the operands and destination (which was usually the accumulator). So if you wanted to sum the numbers in an array, you'd put the address of the first element in the instructions, and as ENIAC repeated that instruction (a specified number of times), the address in the instruction would be auto-incremented.

Or how about this: a computer with NO 'jump' or 'branch' instruction? The ATLAS-1 was a landmark of computing, having invented most of the things we take for granted now, like virtual memory, paging, and multi-programming. But it had NO instruction for altering the control flow. Instead, the programmer would simply _write_ to the program counter (PC). Then the next instruction would be fetched from the address in the PC. If the programmer wanted to return to the previous location (a "subroutine call"), they'd be obligated to save what was in the PC before overwriting it. There was no stack, unless you count a programmer writing the code to save a specific number of PC values, and adding code to all subroutines to fetch the old value and restore it to the PC. I do admire the simplicity -- want to run code at a different address? Tell me what it is and I'll just go there, no questions asked.
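
To make the "no jump instruction" style concrete, here's a toy interpreter (my own sketch, not the actual ATLAS-1 order code) where the only way to change control flow is to store into the PC, and a subroutine call is just saving the PC by hand before overwriting it:

  # Toy machine with no jump/branch/call: the program counter "PC" is an ordinary
  # writable register, and a "return address" is whatever you remembered to save.

  def run(program):
      regs = {"PC": 0, "LINK": 0, "ACC": 0}
      while regs["PC"] < len(program):
          pc = regs["PC"]
          regs["PC"] += 1                      # default: fall through to the next order
          op, *args = program[pc]
          if op == "set":                      # set reg, value
              regs[args[0]] = args[1]
          elif op == "add":                    # add reg, value
              regs[args[0]] += args[1]
          elif op == "copy":                   # copy dst, src ("copy PC, x" is the only jump)
              regs[args[0]] = regs[args[1]]
      return regs

  program = [
      ("set", "LINK", 3),      # save where the subroutine should come back to
      ("set", "PC", 5),        # "call": overwrite the PC; no call instruction exists
      ("add", "ACC", 0),       # (never reached)
      ("add", "ACC", 100),     # execution resumes here after the "return"
      ("set", "PC", 99),       # halt by running off the end of the program
      ("add", "ACC", 1),       # --- subroutine body ---
      ("copy", "PC", "LINK"),  # "return": write the saved address back into the PC
  ]
  print(run(program)["ACC"])   # 101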

Or maybe these shouldn't count as "weird", because no one had yet figured out what a computer should be. There was no "standard" model (despite Von Neumann) for the design of a machine, and cost considerations plus new developments (spurred by wanting better computers) meant that the "best" design was constantly changing.

Consider that post-WWII, some materials were hard to come by. So much so that one researcher used a Slinky (yes, the toy) as a memory storage device. And had it working. They wanted acoustic delay lines (the standard of the time), but the Slinky was more available. So it did the same job, just with a different medium.

I've spent a lot of time researching these early machines, wanting to trace the path of each item in a now-standard model of an idealized computer. It's full of twists and turns, dead ends and unintentional invention.

reply
supercoffee
2 months ago
[-]
I'm fascinated by the mechanical fire control computers of WW2 battleships.

https://arstechnica.com/information-technology/2020/05/gears...

reply
ChristopherDrum
2 months ago
[-]
Mythic produces an analog processor https://mythic.ai/

There is also the analog computer The Analog Thing https://the-analog-thing.org/

reply
sshb
2 months ago
[-]
This unconventional computing magazine came to my mind: http://links-series.com/links-series-special-edition-1-uncon...

Compute with mushrooms, compute near black holes, etc.

reply
t312227
2 months ago
[-]
hello,

great collection of interesting links - kudos to all! :=)

idk ... but isn't the "general" architecture of most of our computers "von neumann"!?

* https://en.wikipedia.org/wiki/Von_Neumann_architecture

but what i miss from the various lists is the "transputer" architecture / ecosystem from INMOS - a concept of heavily networked arrays of small cores from the 1980s

about transputers

* https://en.wikipedia.org/wiki/Transputer

about INMOS

* https://en.wikipedia.org/wiki/Inmos

i had the chance to take a look at a "real life" ATW - atari transputer workstation - back in the day at my university / CS department :))

mainly used with the Helios operating-system

* https://en.wikipedia.org/wiki/HeliOS

to be programmed in occam

* https://en.wikipedia.org/wiki/Occam_(programming_language)

the "atari transputer workstation" ~ more or less a "smaller" atari mega ST as the "host node" connected to an (extendable) array of extension-cards containing the transputer-chips:

* https://en.wikipedia.org/wiki/Atari_Transputer_Workstation

just my 0.02€

reply
madflame991
2 months ago
[-]
> but isn't the "general" architecture of most of our computers "von neumann"!?

That's something I was also curious about and it turns out Arduinos use the Harvard architecture. You might say Arduinos are not really "computers" but after a bit of googling I found an Apple II emulator running on an Arduino and, well, an Apple II is generally accepted to be a computer :)

reply
HeyLaughingBoy
2 months ago
[-]
One of the most popular microcontroller series in history, the Intel (and others') 8051, used a Harvard architecture.
reply
t312227
2 months ago
[-]
hello,

if i remember correctly, the main difference of the "harvard" architecture was that it uses separate data and program/instruction buses ...

* https://en.wikipedia.org/wiki/Harvard_architecture

i think texas instruments 320x0 signal processors used this architecture ... back in, you guessed it!, the 1980s.

* https://en.wikipedia.org/wiki/TMS320

ah, they use a modified harvard architecture :))

* https://en.wikipedia.org/wiki/Modified_Harvard_architecture

cheers!

reply
dsr_
2 months ago
[-]
There are several replacements for electronic logic; some of them have even been built.

https://en.wikipedia.org/wiki/Logic_gate#Non-electronic_logi...

reply
solardev
2 months ago
[-]
Analog computers, quantum computers, light based computers, DNA based computers, etc.
reply
osigurdson
2 months ago
[-]
I'm not sure what the computer architecture was, but I recall the engine controller for the V22 Osprey (AE1107) used odd formats like 11 bit floating point numbers, 7 bit ints, etc.
reply
CoastalCoder
2 months ago
[-]
Why past tense? Does the Osprey no longer use that engine or computer?
reply
joehosteny
2 months ago
[-]
The PipeRench runtime-reconfigurable FPGA out of CMU:

https://research.ece.cmu.edu/piperench/

reply
ranger_danger
2 months ago
[-]
9-bit bytes, 27-bit words... middle endian.

https://dttw.tech/posts/rJHDh3RLb

reply
vapemaster
2 months ago
[-]
since this is a bit of a catch-all thread, i'll toss Anton into the ring: a whole bunch of custom ASICs to do Molecular Dynamics simulations from D.E. Shaw Research

https://en.wikipedia.org/wiki/Anton_(computer)

reply
dongecko
2 months ago
[-]
Motorola used to have a one bit microprocessor, the MC14500B.
reply
Dr_Jefyll
2 months ago
[-]
"One Bit Computing at 60 Hz" describes a one-bit design of my own that folks have repeatedly posted to HN. It's notable for NOT using the MC14500... (and for puzzling some of the readers!)

The original 2019 post by Garbage [1] attracted the most comments. But in a reply to one of the subsequent posts [2] I talk a bit about actually coding for the thing. :)

[1] https://news.ycombinator.com/item?id=7616831

[2] https://news.ycombinator.com/item?id=20565779

reply
prosaic-hacker
2 months ago
[-]
Breadboard implementation using the MC14500b

Usagi Electric 1-Bit Breadboard Computer P.01 – The MC14500 and 555 Clock Circuit https://www.youtube.com/watch?v=oPA8dHtf_6M&list=PLnw98JPyOb...

reply
ksherlock
2 months ago
[-]
The tinker toy computer doesn't even use electricity.
reply
gjvc
2 months ago
[-]
rekursiv
reply