Notes on the Intel 8086 processor's arithmetic-logic unit
47 points | 3 hours ago | 1 comment | righto.com

kens | 3 hours ago
Author here for all your 8086 questions...

bcrl | 1 hour ago
Thanks for publishing your blog! The articles are quite enlightening, and it's interesting to see how semiconductors evolved in the '70s, '80s, and '90s. Having grown up in that era, I feel it was a great time to learn, since one could understand an entire computer, yet details like this were completely inaccessible back then. Keep up the good work knowing that it is appreciated!

A more personal question: is your reverse-engineering work just a hobby, or is it tied in with your day-to-day work?

kens | 1 hour ago
Thanks! The reverse engineering is just for fun. I have a software background, so I'm entirely unqualified to be doing this :-) But I figure that if I'm a programmer, I should know how computers really work.

gruturo | 1 hour ago
Awesome article, Ken; I feel spoiled! It's always nice to see your posts hit HN!

Out of curiosity: is there anything you feel they could have done better in hindsight? Useless instructions, inefficient ones, or "missing" ones? Either down at the transistor level, or in the high-level design/philosophy? (The segment/offset mechanism creating 20-bit addresses out of two 16-bit registers, with thousands of overlaps, sure comes to mind; a flat model outright is probably asking too much of a 1979 design and its transistor limitations, I guess.)
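
For anyone who hasn't met the mechanism: the physical address is segment * 16 + offset, so (for most addresses) up to 4096 different segment:offset pairs land on the same 20-bit address. A minimal C sketch of the arithmetic, purely illustrative; the helper name is mine:

    #include <stdint.h>
    #include <stdio.h>

    /* 8086 real mode: physical address = segment * 16 + offset,
       truncated to 20 bits (the chip has 20 address pins). */
    static uint32_t physical(uint16_t seg, uint16_t off) {
        return (((uint32_t)seg << 4) + off) & 0xFFFFF;
    }

    int main(void) {
        /* Two different segment:offset pairs, one physical address. */
        printf("%05X\n", (unsigned)physical(0x1234, 0x0010)); /* 12350 */
        printf("%05X\n", (unsigned)physical(0x1235, 0x0000)); /* 12350 */
        return 0;
    }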

Thanks!

kens | 18 minutes ago
That's an interesting question. Keep in mind that the 8086 was built as a stopgap processor to sell until Intel's iAPX 432 "micro-mainframe" processor was completed. Moreover, the 8086 was designed to be assembly-language compatible with the 8080 (through translation software) so it could take advantage of existing software. It was also designed to be compatible with the 8080's 16-bit addressing while supporting more memory.

Given those constraints, the design of the 8086 makes sense. In hindsight, though, considering that the x86 architecture has lasted for decades, there are a lot of things that could have been done differently. For example, the instruction encoding is a mess and didn't have an easy path for extending the instruction set. Trapping on invalid instructions would have been a good idea. The BCD instructions are not useful nowadays. Treating a register as two overlapping 8-bit registers (AL, AH) makes register renaming difficult in an out-of-order execution system. A flat address space would have been much nicer than segmented memory, as you mention. The concept of I/O operations vs memory operations was inherited from the Datapoint 2200; memory-mapped I/O would have been better. Overall, a more RISC-like architecture would have been good.
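
To make the AL/AH point concrete: AX is a single 16-bit register whose two halves are separately addressable, so a write to AL silently changes AX as well. A rough C model of the aliasing (a union, not the actual hardware; assumes a little-endian host):

    #include <stdint.h>
    #include <stdio.h>

    /* Toy model of AX: AL and AH are the two bytes of one
       16-bit register. Byte layout assumes little-endian. */
    typedef union {
        uint16_t ax;
        struct { uint8_t al, ah; } half;
    } reg_ax;

    int main(void) {
        reg_ax r = { .ax = 0x1234 };     /* AH = 0x12, AL = 0x34 */
        r.half.al = 0xFF;                /* partial write to AL... */
        printf("AX = %04X\n", (unsigned)r.ax);  /* ...prints 12FF */
        return 0;
    }

This partial-register dependency is what makes renaming painful: a renamed AX has to merge the freshly written AL with the old value of AH.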

I can't really fault the 8086 designers for their decisions, since they made sense at the time. But if you could go back in a time machine, you could certainly give them a lot of advice!

gruturo | 8 minutes ago
> I can't really fault the 8086 designers for their decisions, since they made sense at the time. But if you could go back in a time machine, you could certainly give them a lot of advice!

Thanks for capturing my feeling very precisely! I was indeed thinking about what they could have done better with roughly the same number of transistors and the benefit of a time traveler :) And yes, the constraints you mention (8080 compatibility, etc.) do limit their leeway, so maybe we'd have to point the time machine a few years earlier and influence the 8080 first.
