Of course, the deeper you dig anywhere, the more complexity gets unearthed, and the more fairly credit must be distributed across more clever engineers, diluting the "single genius" picture that movie makers, and sadly often journalists too, try to portray ("reality distortion field").
I would quite like a minimalistic b/w GUI like the PERQ had in the screenshot.
Leaving out all the transparency/rounded-corners nonsense, this should be blazing fast, too, with today's graphics capabilities.
EDIT: typo fixed
Before the Apple Lisa, the SUN-1, the Mac, ...
These are Lisp Machine images/screen shots from MIT from 1980:
https://bitsavers.org/pdf/symbolics/LM-2/LM_Screen_Shots_Jun...
* a dual screen machine with an electronics design system
* Macsyma with plots
* Inspector, Window Debugger, Font Editor
* Music notation
* first tiled screen/window manager
* another electronic CAD tool
The UI was developed using Flavors, the object-oriented system for the Lisp Machine (message passing, classes&objects, multiple inheritance, mixins, ...).
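Flavors' mixin style maps loosely onto multiple inheritance in modern languages. A rough Python analogy (all class and method names here are invented for illustration; this is not the actual Lisp Machine window-system hierarchy):

```python
# Toy analogy to Flavors-style mixins using Python's multiple
# inheritance. Each mixin decorates the base behavior, roughly the
# way a Flavors daemon/wrapper method wraps the primary method.

class Window:
    def draw(self):
        return ["draw contents"]

class BorderMixin:
    def draw(self):
        # add behavior around the base flavor's method
        return ["draw border"] + super().draw()

class LabelMixin:
    def draw(self):
        return ["draw label"] + super().draw()

class LabeledBorderWindow(LabelMixin, BorderMixin, Window):
    """Combines behaviors the way a defflavor lists its mixins."""
    pass

print(LabeledBorderWindow().draw())
# the method resolution order combines all three draw methods
```

The linearization of `LabelMixin`, `BorderMixin`, and `Window` plays the role that Flavors' method combination played on the Lisp Machine.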
i do think they made noticeably (and fascinatingly) different choices not only in what they designed for their user interfaces but also what they thought of as a good user interface
other lispm-related links, other than the one lispm posted above:
https://github.com/lisper/cpus-caddr #Lisp machine #gateware for the MIT CADR in modern #Verilog. “Boots and runs.” By #lisper (Brad Parker)
https://metebalci.com/blog/cadr-lisp-machine-and-cadr-proces... #CADR #Lisp-machine #retrocomputing
at some point i came across a treasure trove of lispm screenshots showing many aspects of the genera ui, but apparently it's not in my bookmarks file
one quibble: i don't think flavors was around until quite a bit later, maybe 01979, so the lispm software wasn't oo until quite late. weinreb and moon's aim-602 on it wasn't until 01980: https://apps.dtic.mil/sti/pdfs/ADA095523.pdf and of the five keywords they chose to describe it, one was 'smalltalk'
Though software written originally for the Xerox Interlisp systems got ported to the MIT Lisp Machine descendants, since that hardware was more capable, with more memory. For example, Intellicorp KEE (Knowledge Engineering Environment) was ported. It retained much of its original UI (which included extensive graphics features and an interface builder) when running on Symbolics Genera. It looks as if one is using a Xerox Interlisp UI when running it on a Symbolics. https://vimeo.com/909417132 For example, at 08:30 the screen looks like an Interlisp system, even though it is Symbolics Genera.
Xerox PARC also had real MIT CADRs on their network. I remember seeing a photo of an office where a Xerox PARC employee had both an Interlisp workstation and (IIRC) a Symbolics. There is also a video from IJCAI 81 (International Joint Conference on Artificial Intelligence), with demos of an Interlisp system and an MIT Lisp Machine, both recorded at Xerox PARC.
> https://github.com/lisper/cpus-caddr #Lisp machine #gateware for the MIT CADR in modern #Verilog. “Boots and runs.” By #lisper (Brad Parker)
It doesn't boot or work -- there are CDC issues and other stuff, plus it doesn't work on anything that is easily found (help wanted to get that stuff working again!). The current version of the FPGA CADR is at https://tumbleweed.nu/r/uhdl -- and CADR restoration and related work is at https://tumbleweed.nu/lm-3 .

> one quibble: i don't think flavors was around until quite a bit later, maybe 01979, so the lispm software wasn't oo until quite late. weinreb and moon's aim-602 on it wasn't until 01980 [...]
I wouldn't call it "quite late" -- it was only a 2-3 year gap from the
system booting to very heavy usage (1977 - 1979); the Lisp Machine
(specifically the MIT and LMI branches) survived for 10 more years
after that. Message passing was added on or around 03/09/79, and things got quickly adjusted and flushed after that.
MMCM@MIT-AI 03/09/79 21:26:00
To: (BUG LISPM) at MIT-AI
Band 6 on CADR1 is a new experimental world load.
In addition to quite a few accumulated changes, this load
has some important changes to the message passing stuff.
CLASS-CLASS now has three instance variables where formerly
it had two. Most importantly, instead of the SELECT-METHOD
being in the function cell of the xxx-CLASS symbol (typically) there is now a
gensym (called the CLASS-METHOD-SYMBOL) created for each instance
of CLASS-CLASS which holds the select-method. The CLASS-METHOD-SYMBOL
for a given class can be obtained by sending the class a :CLASS-METHOD-SYMBOL
message.
A forthcomming essay will describe the motovation for doing this,
etc. For the present, I note some incompatibilities and changes to
operating procedure which this makes necessary.
Incompatibilites:
(1) The <-AS macro has been changed, and any file which uses it
must be recompiled. If this is not done, the symptom will usually
be a xxx-CLASS undefined function error.
(2) Any other file which uses DEFMETHOD should eventually be recompiled.
There is a "bridge" which causes old files to win for the time being,
but this will eventually go away.
Since new QFASL files wont work in old loads, its probably a good
idea to rename your old QFASL files if you compile with the new load.
When you do a DEFCLASS, you always get a fresh class with no methods
defined. If you are redefining a class this means two things:
instances of the old class will continue to work
(this was not always true before. However, we still have the
problem that method names can conflict between new and old versions
of the class).
all the methods used by a class need to be reloaded for new instances of the
new class to win completely. Also subclasses of the given class
need to be reloaded.
Since reloading is a gross pain, you should not redo DEFCLASSes
unless you really mean it. Probably DEFCLASSes should be put in a
separate file so the rest of the file can be reprocessed without
redoing the DEFCLASS. Hopefully,
there will eventually exist system features to make this more convenient.
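The scheme the mail describes — a class object owning a SELECT-METHOD that dispatches on a message keyword — can be sketched roughly as follows. All names and structure here are invented simplifications for illustration; the real system stored the select-method on a per-class gensym, as the mail explains:

```python
# Toy sketch of early Lisp Machine message passing: a class object
# owns a dispatch table (standing in for the SELECT-METHOD), and
# sending a message means looking the keyword up in that table.

class LispClass:
    def __init__(self, name):
        self.name = name
        self.methods = {}               # plays the role of the select-method

    def defmethod(self, message, fn):
        self.methods[message] = fn

    def send(self, instance, message, *args):
        # dispatch: find the handler for this message keyword and call it
        return self.methods[message](instance, message, *args)

point_class = LispClass("POINT-CLASS")
point_class.defmethod(":print-self",
                      lambda self, msg: f"#<point {self['x']},{self['y']}>")
point_class.defmethod(":class-method-symbol",
                      lambda self, msg: "G0042")   # made-up gensym stand-in

pt = {"x": 3, "y": 4}                   # a toy "instance"
print(point_class.send(pt, ":print-self"))
```

The point of the sketch is only the indirection: methods live in a structure owned by the class, so redefining the class (as with DEFCLASS in the mail) gives you a fresh, empty table.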
01979 is still a lot later than smalltalk-72 or the original cadr design from 01974 (https://www.researchgate.net/publication/221213025_A_LISP_ma... says 'several suitable [semiconductor memory] devices are strongly rumored to be slated for fourth quarter 74 announcement' so maybe they didn't actually get it working until 01975 when they ported emacs to it)
Neither Flavors nor DEFCLASS got replaced -- they have different purposes, though one can often be substituted for the other.
The PERQ was just in the timeline between the Alto and the LISA/Mac. But there isn't much of a relationship to the latter two, and not much to the Alto. The Alto was used to write stuff like its OS in BCPL, the office software written in Mesa, Smalltalk, Interlisp, ... but AFAIK no UNIX and no PASCAL system. The PERQ did not run Alto software.
Background information about the PERQ history: http://bitsavers.informatik.uni-stuttgart.de/pdf/perq/RD_Dav...
Followed by all the languages those influenced as well.
The article states (and it makes sense) that the PERQ had influence on the modern Mac, the Macs running Mac OS X and later. Since NeXT and then Apple used UNIX on a Mach Kernel, the latter came as a direct influence from CMU.
Influence on the LISA or the early Mac (before Mac OS X)? Not so much...; those were influenced directly by Xerox PARC ideas, people and technology.
Related to the PERQ: the SPICE project from CMU, using PERQs, was also developing SPICE Lisp, which was influenced by the Lisp work at MIT, including an EMACS variant influenced by the ZWEI editor from the Lisp Machine. SPICE Lisp on the PERQ evolved into CMU CL and later SBCL.
But the Xerox PARC -> Apple route is broad and well documented. Steve Jobs got a demo of the secret Xerox stuff in exchange for some Apple stock, including the Smalltalk system (though the Smalltalk team was less than happy about it). He hired Larry Tesler. Apple was also one of the selected companies getting Smalltalk-80 access and ported it to their machines (later the Apple sources were open sourced). Steve was also interested in the office domain; desktop publishing was one of the first applications of the Mac (incl. Postscript, the LaserWriter, Adobe PageMaker, ...).
the story of smalltalk at apple is fairly complex
apple did also implement smalltalk-80 on the lisa, and there's a screenshot of that in the smalltalk-80 book (01984), but it was apparently never released, even as proprietary software. apple shipped a macintosh smalltalk in august 01985 (according to http://basalgangster.macgui.com/RetroMacComputing/The_Long_V...) but i don't think squeak (the apple smalltalk that eventually became open-source) was derived from it
alan kay didn't join apple until 01984 (the year after jobs hired sculley as ceo), jobs left in september 01985, and squeak wasn't announced until 01996, which is about the time kay left, and i think squeak as a whole departed for disney at that point. jobs came back in 01997, and though he didn't kill squeak, he did kill hypercard, the other main environment for individual computational autonomy. jobs's apple did eventually make squeak open-source—in 02006!
jecel assumpçao in https://news.ycombinator.com/item?id=23392074 explains that the reason for finally open sourcing squeak a decade after the project ended was that olpc (itself fairly locked down, though less so than iphones) demanded it. also, though, he does heavily imply that it descended from the original apple smalltalk
It wasn’t until Tim Cook took over that Macs became more locked-down in terms of user-serviceability and expandability, culminating with the switch to ARM, where Apple sells no Macs with user-upgradable RAM anymore.
Had Apple’s leadership been more focused in the “interregnum” years of 1985-1996, we could be using Dynabooks running some sort of modern Dylan/Common Lisp machine architecture with a refined Macintosh interface. Apple had all the pieces (Newton’s prototype Lisp OS, SK8, Dylan, OpenDoc, etc.), but unfortunately Apple was unfocused (Pink, Copland, etc.) while Microsoft gained a foothold with DOS/Windows. What could’ve been...my dream side project is to make this alternate universe a reality by building what’s essentially a modern Lisp machine.
Actually that ship has sailed. The M1 MacBook Air was a big step up over any prior "user serviceable" Mac. It's portable, fast, extremely efficient, lightweight, robust and totally silent. Upgrading RAM has mostly been a non-issue. The Symbolics Genera emulator on the M1 runs roughly 80 times faster than my hardware Symbolics board in my Mac IIfx. That hardware was fragile and expensive. I'm much happier now, given that this stuff runs much better.
I loved my 2006 MacBook. It was lightweight for the time, and it was remarkably easy to make RAM and storage upgrades. I also enjoyed my 2013 Mac Pro, which I purchased refurbished in 2017. While it didn’t have expansion slots, I did upgrade the RAM during the pandemic from 12GB to 64GB, which was wonderful!
This also applies to Pharo, at least for the initial versions as they forked out of Squeak.
Clascal came to be because Smalltalk was too demanding for the Lisa's hardware.
I don't think there was ever a move to use Smalltalk in Apple products, anyway. Besides a pre-product version of Apple Smalltalk-80 itself, which was available for a short time.
Eventually PASCAL (with its BEGIN/END) also lost out at Apple to the curly braces of C/C++/Objective-C/Java/JavaScript/Swift.
One thing to keep in mind: the UI state of the art was evolving fast, and applications on some of these systems might have UIs different from the underlying operating system's. That was also true on the Mac: HyperCard had a look & feel very different from the underlying Mac OS.
For example Xerox developed "The Analyst" in Smalltalk 80 for the CIA: http://www.bitsavers.org/pdf/xerox/xsis/XSIS_Smalltalk_Produ...
I would think that NoteCards (written by Xerox in Interlisp-D) had similar customers and that there might also be some joint UI influence.
in this context, it's amusing that p.6/36 of that scan you linked cites user interface uniformity as a key advantage of smalltalk: 'the environment's window facilities consistently adhere to a small number of user interface conventions that are quickly learned by casual and experienced users alike.'
[39]: https://dl.acm.org/doi/10.5555/2092.2093 "The future of interactive systems and the emergence of direct manipulation, by Ben Shneiderman, originally presented as the Keynote Address at the NYU Symposium on User Interfaces, 01982-05-26–28, then published with numerous typographical errors in 01982, Behaviour & Information Technology, 1:3, 237-256, DOI 10.1080/01449298208914450"
However, I think the author puts too fine a point on the literal geographic position of the technology, rather than on the historical and material forces at work. Obviously not every computing advance occurred in sunny Palo Alto (just reading where your device was "assembled" will tell you that). But even in an article highlighting the other places where all of this was going on, the author cannot escape the massive forces coming out of the Bay Area. This is most obvious when the author has to mention Xerox PARC without interrogating _why_ Xerox chose that of all locations to let a "wild unsupervised west-coast lab" get started.
https://en.wikipedia.org/wiki/Augmentation_Research_Center
Very much a personal nitpick on a very well written entry so I hope this doesn't come off overly negative.
Imagine all the human hours, stretching back decades, that went into developing this single model of computing.
I'm starting to believe that oral history and tradition are what move the world along. All the written texts are transient. What we pass directly to each generation is our continuity of culture.
some current intel and amd parts also support 'hyperthreading', but as i understand it they sometimes run many instructions from the same thread sequentially, unlike the others mentioned above (except the padauk pmc251), and they are limited to 2 or 4 threads, again unlike the others mentioned except the pmc251
i'm a little unclear on the extent to which current gpu hardware supports this kind of every-clock-cycle alternation between different instruction streams; does anyone know?
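The every-clock-cycle alternation described above (barrel processing, or fine-grained multithreading) can be modeled as a scheduler that issues one instruction from a different hardware thread each cycle. A toy model, not any real core:

```python
from itertools import cycle

# Toy model of a barrel processor: each clock cycle issues the next
# instruction from a different hardware thread, round-robin, so no
# thread ever runs two instructions back to back. (Contrast SMT,
# which may issue several instructions from one thread in a row.)

def barrel_issue(threads, cycles):
    """threads: list of per-thread instruction lists.
    Returns a (thread_id, instruction) trace, one entry per cycle."""
    counters = [0] * len(threads)       # program counter per thread
    order = cycle(range(len(threads)))  # strict round-robin
    trace = []
    for _ in range(cycles):
        t = next(order)
        if counters[t] < len(threads[t]):
            trace.append((t, threads[t][counters[t]]))
            counters[t] += 1
    return trace

print(barrel_issue([["a0", "a1"], ["b0", "b1"], ["c0", "c1"]], 6))
```

With three threads, consecutive cycles always come from threads 0, 1, 2, 0, 1, 2, which is what hides per-thread latency on such designs.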
I want to make big furniture-sized circuits for living environments. This family of chips represents about the most complexity I want to consider. I could have the largest circuit create symbols through a busy-board interface. The symbols would be understood at a human level and could also be monitored by more complex computing processes.
I used its animated icon tool "cedra" to make Tintin's Captain Haddock blow smoke out his ears.
We had the ICL JV one. A beauty in reddish brown and cream. Made outside Edinburgh near Dalkeith, I believe.
Someone also added the PERQ A1 to MAME in 0.192, but as of now it is still marked as MACHINE_IS_SKELETON
I'm glad they didn't start out with only 128 K of RAM, that would have sucked.
It's interesting there's a heritage of code stretching all the way back to these old machines, although of course the changes since then have been massive.
For a long time I did not know where SBCL got its name, until someone explained that Carnegie got his fortune from steel and Mellon ran a bank.
http://www.bitsavers.org/pdf/perq/accent_S5/Accent_UsersManu...
I see windows and bitmap graphics in the screenshots I can find.
But I don't see menus, a desktop, standardized buttons, scroll bars, etc. In other words I don't see the hallmarks of the Xerox Star, Apple Lisa, and Macintosh. It looks influenced by the Xerox products but not as advanced.
the only reason my x-windows desktop at that time would have recognizable buttons was that at first i was running mwm, which was from motif, osf's ugly but comprehensible effort to fight off open look. later i switched to athena's twm (uglier still but customizable), then the much nicer olvwm, then fvwm, which was similarly comprehensible but looked good
PERQ Reference Manual: http://www.vonhagen.org/perqsystems/perq-cpu-ref.pdf
PERQ Workstations: http://www.bitsavers.org/pdf/perq/RD_Davis/Davis-PERQ_Workst...
PERQ FAQ: http://www.vonhagen.org/perq-gen-faq.html
PERQ History -- Overview: https://www.chilton-computing.org.uk/acd/sus/perq_history/
PERQ Publicity: https://www.chilton-computing.org.uk/acd/sus/perq_pr/
PERQ System Users Short Guide: https://www.chilton-computing.org.uk/acd/pdfs/perq_p001.pdf
More PERQ notes (click "Further Reading" for more pages): https://www.chilton-computing.org.uk/acd/literature/notes/di...
PERQ Book: Contents: https://www.chilton-computing.org.uk/acd/literature/books/pe...
1. Perq System Users Short Guide
2. Perq files information
3. Editor Quick Guide
4. Perq Pascal Extensions
5. Perq Pascal Extensions Addendum
6. Perq Hard Disk Primer
7. Perq Operating System Programmers Guide
8. Perq QCode Reference Manual
9. Perq Microprogrammers Guide
10. Perq Fault Dictionary
11. Installation Guide
12. New PERQ Tablet and Cursor Interface
13. System B.1 Stream Package
14. Changes to Pix in System B.1
15. Installation of POS Version B.1
https://en.wikipedia.org/wiki/PERQ
>"Processor
The PERQ CPU was a microcoded discrete logic design, rather than a microprocessor. It was based around 74S181 bit-slice ALUs and an Am2910 microcode sequencer. The PERQ CPU was unusual in having 20-bit wide registers and a writable control store (WCS), allowing the microcode to be redefined.[4] The CPU had a microinstruction cycle period of 170 ns (5.88 MHz).[5]"
So the CPU board was all logic chips implementing the P-code machine language; it wasn't a CPU chip with supporting logic.
That gives you an idea of computing in the old days.
Back in the day PASCAL was the main teaching language at CMU.
(Edit) There seems to be some pushback on what I'm pointing out here, but it's true: the CPU board is not built around a CPU chip; they built a microcode sequencer, ALU, etc. to execute a p-code variant.
You can read about it here: http://bitsavers.org/pdf/perq/PERQ_CPU_Tech_Ref.pdf
Schematics here: http://bitsavers.org/pdf/perq/perq1/PERQ1A_Schematics/03_CPU...
Pic: http://bitsavers.org/pdf/perq/perq1/PERQ1A_PCB_Pics/CPU_top....
based on this it seems reasonable to continue believing that, as graydon says, it ran pascal via a p-code interpreter, but that that interpreter was implemented in microcode
and i don't think it's accurate to say 'the cpu board was all logic chips implementing the p-code machine language'. the logic chips implemented microcode execution; the microcode implemented p-code
i agree that this is the same extent to which lisp machines implemented lisp—but evidently the perq also ran cmucl, c, and fortran, so i don't think it's entirely accurate to describe it as 'a pascal machine'
yes
It would only be accurate when one looks at BOTH the CPU and the microcode. The Xerox Interlisp-D machine was a Lisp machine with the specific microcode. It was a Smalltalk machine and a Mesa machine -- each with its own microcode.
The original MIT Lisp Machine was also microcoded, though I don't know of any microcode other than the one for Lisp. The early Symbolics Lisp Machines were also microcoded, but again only for the Lisp OS, with the microcode evolving over time to support features of Common Lisp, Prolog and CLOS.
There were complaints that the microcode on the Lisp Machines was very complex, which contributed to the death of the machines. For example, here is an interview with Keith Diefendorff, who was also an architect for the TI Explorer Lisp Machine. In the interview he talks about the Explorer project and the microcode topic: https://www.youtube.com/watch?v=9la7398ruXQ
EDIT: An example: The CADR has a nice API from Lisp to the CHAOSNET hardware, the microcode wakes up a stack group (thread) and passes it a buffer to process. Later machines had Ethernet but there isn't any microcode support for the hardware, Lisp code just polls the status of the ethernet controller and copies packets around a byte at a time. The microcode buffer handling routines for CHAOSNET could have been reused for Ethernet.
The CADR already had support for (pre-)Ethernet via microcode very early (~1979) and did it more or less the same way as for Chaosnet. The Lambda, I think, modified this quite heavily into something else though ...
Second, I'm just making a distinction about what people refer to as emulation. Although you could change the microcode, that typically meant you had to reprogram the board. Microcode is typically inaccessible outside of the CPU. Microcode provides sub-operations within the CPU.
i don't think we have any substantive disagreements left, we're just getting tangled up in confusing, ambiguous terminology like 'native' and 'emulation'
Article:
> [...] user-written microcode and custom instruction sets, and the PERQ ran Pascal P-code. Through a microcode emulator. Things were wild.
PDF:
> It will also prove useful to advanced programmers who wish to modify the PERQ’s internal microcode and therefore need to understand how this microcode controls the operation of the CPU, [...]
It sounds like it came with microcode that interpreted P-code, but that was user-changeable.
The "wild" part is doing p-code interpretation in microcode, instead of a normal program. See also https://en.wikipedia.org/wiki/Pascal_MicroEngine
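For contrast, here is what p-code interpretation looks like as an ordinary program: a minimal stack-machine loop. The opcodes are an invented toy subset, not the real UCSD p-code encoding; on the PERQ, a loop of essentially this shape was written in microcode instead:

```python
# Minimal stack-machine interpreter, illustrating the kind of
# fetch/decode/execute loop the PERQ's microcode implemented in
# hardware. The opcodes here are an invented toy subset.

def run(pcode):
    stack, pc = [], 0
    while pc < len(pcode):
        op, *args = pcode[pc]     # fetch and decode
        pc += 1
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "jz":          # jump to args[0] if top of stack is zero
            if stack.pop() == 0:
                pc = args[0]
    return stack

# (3 + 4) * 2
print(run([("push", 3), ("push", 4), ("add",), ("push", 2), ("mul",)]))
```

The difference is only where the dispatch loop lives: as a compiled program on a conventional CPU, or as microcode driving the sequencer and ALU directly.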
There is microcode inside cpu chips today too, they are used to implement parts of the instruction set. The microcode is not typically accessible outside of the cpu, and it is not considered the native machine language, the instruction set.
The article you link to uses the word “emulator” once, to describe emulation on top of another system without this native support.
microcode became a popular implementation technique in the 01960s and fell out of favor with the meteoric rise of risc in the 80s
i think it's reasonable to say that an instruction set emulated by the microcode is a 'native instruction set' or 'native machine language'; it's as native as 8086 code was on the 8086 or lisp primitives were on the cadr. but in this case there were evidently several machine languages implemented in microcode, p-code being only one of them. so it's incorrect to say that p-code was the native machine language, it's incorrect to say that 'the cpu board was all logic chips implementing the p-code machine language', it's incorrect to say that 'they built a microcode sequencer (...) to execute a p-code variant', and it's incorrect to say 'they designed a board to execute p-code directly'
The other thing the CPU board is designed for is fast graphics -- the rasterop hardware is set up so that with the right carefully designed microcode sequences it can do a 'load two sources, do a logical op and write to destination' as fast as the memory subsystem will let you do the 64 bit memory operations. It takes about four CPU cycles to do a memory operation, so you kick it off, do other microcode ops in the meantime, and then you can read the result in the microinsn that executes in 4 cycles' time. The rasterops microcode source code is all carefully annotated with comments about which T state each insn executes in, so it stays in sync with the memory cycles.
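That "kick off the fetch, do other work, read the result four cycles later" discipline can be modeled as a fixed-latency pipeline. A toy model only; the four-cycle figure comes from the comment above, everything else is invented:

```python
# Toy model of latency hiding in the rasterop microcode: a memory
# fetch started at cycle t completes at t + 4, and the microcode
# fills the intervening cycles with other work.

MEM_LATENCY = 4

def schedule(ops):
    """ops: list of ("mem", tag) to start a fetch, or ("alu", tag).
    Returns a list of (cycle, event) entries."""
    log = []
    pending = {}                       # completion cycle -> fetch tag
    for cycle, (kind, tag) in enumerate(ops):
        if cycle in pending:           # a fetch started 4 cycles ago is done
            log.append((cycle, "result ready: " + pending.pop(cycle)))
        if kind == "mem":
            pending[cycle + MEM_LATENCY] = tag
            log.append((cycle, "start fetch: " + tag))
        else:
            log.append((cycle, "alu: " + tag))
    return log

for entry in schedule([("mem", "src1"), ("alu", "x"), ("alu", "y"),
                       ("alu", "z"), ("alu", "combine")]):
    print(entry)
```

The fetch issued at cycle 0 becomes readable at cycle 4, with the three ALU micro-ops slotted into the gap, which is the scheduling the T-state comments in the real microcode keep straight.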
The other fun thing is that the microcode sequencer gives you a branch "for free" in most insns -- there's a "next microinsn" field that is only sometimes used for other purposes. So the microcode assembler will happily scatter flow of execution all over the 4K of microcode ram as it places fragments to ensure that the parts that do need to go in sequence are in sequence and the parts at fixed addresses are at their fixed locations, and then fills in the rest wherever...
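A control store with an explicit next-address field in every word can be sketched like this. Addresses and op names are invented for illustration; the point is only that "sequential" flow is really a chain of free branches, so fragments can land anywhere in the store:

```python
# Toy model of a microcode control store where every micro-word
# carries an explicit "next" address, so logically sequential
# fragments can be scattered anywhere in the 4K store.

store = {
    0x000: ("begin", 0x7F2),   # entry point branches into the middle
    0x7F2: ("step-1", 0x013),
    0x013: ("step-2", 0x3A0),
    0x3A0: ("end", None),      # no successor: stop
}

def run_microcode(store, start):
    trace, addr = [], start
    while addr is not None:
        op, nxt = store[addr]
        trace.append(op)
        addr = nxt             # the "free branch": next address is in the word
    return trace

print(run_microcode(store, 0x000))
```

An assembler for such a machine is effectively a placement problem: pin the fixed-address words, keep the truly sequential runs together, and thread everything else through the next fields.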
next-microinstruction fields are pretty common even in vertical microcode like this. do you have the microcode assembler running? have you written custom perq microcode?
So I think we are quibbling, but it’s their words.
i think we agree that it supports p-code as a native instruction set, but it's easy to draw incorrect inferences from that statement, such as your claim that the microcode sequencer executed a p-code variant. it would be reasonable inference from the literature you quote, but it's wrong
Fun fact: the CPU board ran Pascal P-code as a machine language. The CPU wasn't a chip; they designed a board to execute P-code directly.
I had access to a PERQ in about 1985, and it was running Pascal firmware - at the time, Pascal was the baseline language for CS courses. I seem to recall it had a tiling WM and a mouse about the size of a softball. There were Altos upstairs though I think they only acted as queues for the building's laser printers (which implemented some pre-PS page description language). But those were the days when 9600 baud was the norm...
https://en.wikipedia.org/wiki/List_of_the_largest_software_c...
or do you mean that shenzhen is closer to taiwan than boston is to menlo park? i wouldn't dismiss the importance of intel, nvidia, micron, berkeley, stanford, asml, samsung, apple, etc., just yet
https://www.amazon.com/Dispersing-Population-America-Learn-E...
which describes a nearly universal perception in Europe at the time that it was a problem that economic and cultural power was concentrated in cities like Paris and London. (It takes a different form in the US, in that the US decided to put the capital in a place that wasn't a major center, just as most states did the same; so it is not that people are resentful of Washington alone, but rather of a set of cities that includes it, along with New York, Los Angeles and several others.)
At that time there was more fear of getting bombed with H-bombs, but in the 1980s, once you had Reagan and Thatcher and "globalization", there was very much a sense that countries had to reinforce their champion cities so they could compete against other champion cities, so the "geographic inclusion" of Shenzhen and Tel Aviv is linked to the redlining of 98% of America, and to similar phenomena in those countries as well.
It is not so compatible with a healthy democracy, because the "left behind" vote, so you get things like Brexit, which are ultimately destructive; but I'd blame the political system's incapacity for an effective response for these occasional spasms of anger.
i'm not quite sure what you're saying about reagan and shenzhen
Until then China was more aligned with Russia, but Nixon and Kissinger really hit it off with Mao and Zhou Enlai, and eventually you got Deng Xiaoping, who had slogans like "To get rich is glorious", and China went on a path of encouraging "capitalist" economic growth that went through various phases. Early on China brought cheap labor to the table; right now they bring a willingness to invest (e.g. out-capitalize us) such that, at their best, they make investments like this mushroom factory, which is a giant vertical farm where people only handle the mushrooms with forklifts
https://www.finc-sh.com/en/about.aspx#fincvideo
(I find that video endlessly fascinating because of all the details like photos of the founders using the same shelving and jars that my wife did when she ran a mushroom lab)
Contrast that with the "Atlas Shrugged" situation we have here, where a movie studio thinks it can't afford to spend $150M to make a movie that makes $200M at the box office (never mind the home video, merchandise, and streaming rights, which multiply that), which is the capitalist version of a janitor deciding they deserve $80 an hour.
This book by a left-leaning economist circa 1980
https://www.amazon.com/Zero-Sum-Society-Distribution-Possibi...
points out how free trade won hearts and minds: how the US steel industry didn't want to disinvest in obsolete equipment, which had harmful impacts on consumers and the rest of our industry. All people remember, though, is that the jobs went away
https://www.youtube.com/watch?v=BHnJp0oyOxs
That focus on winning at increased international competition meant that there was no oxygen in the room for doing anything about interregional inequality in countries.
i have two doubts about this thesis, i'm not sure there were dozens of cities where you could launch something like perq 50 years ago, particularly considering they went bankrupt in 01985, possibly because of being in pittsburgh. i also suspect there are dozens of cities where you could do it today—i don't think you have to be in a 'champion city' to do a successful hardware startup. it's a little hard to answer that question empirically, because we have to wait 10 years to see which of today's hot startups survive that long
but maybe we can look at the computer companies that are newly-ish big, restricting our attention to mostly hardware, since that's what three rivers was doing and where you'd expect the most concentration. xiaomi, founded 02010 in beijing. huawei, founded 01987 in shenzhen. tencent, founded 01998 nominally in the caymans but really at shenzhen university. loongson, founded 02010 in beijing. sunway (jiangnan computing lab), founded 02009 (?) in wuxi. dji, founded 02006 in hong kong, but immediately moved to shenzhen. allwinner, founded 02007 in zhuhai (across the river from shenzhen and hong kong). mediatek, founded 01997 in hsinchu, where umc is. nanjing qinheng microelectronics (wch, a leading stm32 clone vendor), founded 02004 in nanjing. espressif, founded 02008 in shanghai. rockchip, founded 02001 in fuzhou. nvidia, founded 01993 in silicon valley. alibaba (parent company of t-head/pingtouge), founded 01999 in hangzhou, which is also where pingtouge is. bitmain, founded 02013 in beijing (?). hisilicon, founded 01991 in shenzhen, which designed the kirin 9000s cpu huawei's new phones use and is the largest domestic designer. smic, which csis calls 'the most advanced chinese logic chip manufacturer' (someone should tell csis about tsmc), and makes the kirin 9000s, founded 02000 in shanghai. smee, ('the most advanced chinese lithography company' —csis) founded 02002 in shanghai. mobileye, founded in jerusalem in 01999. biren, the strategic gpu maker, founded 02019 in shanghai. moore threads, the other strategic gpu maker, founded 02020 in beijing. unisoc (spreadtrum), fourth largest cellphone cpu company, founded 02001 in shanghai. tenstorrent, founded 02016 in toronto. zhaoxin, the amd64 vendor, founded 02013 in shanghai. changxin (innotron), the dram vendor, founded 02016 in hefei. fujian jinhua, the other dram vendor, founded 02016 in, unsurprisingly, fujian, i think in fuzhou. raspberry pi, founded 02009 in cambridge (uk). 
ymtc, the nand vendor, founded 02016 in wuhan. wingtech, the parent company of nexperia (previously philips), founded 02006 in jiaxing. huahong, founded 01996 in shanghai. phytium, makers of feiteng, founded 02014 in tianjin. oculus, founded 02012 in los angeles. zte, founded 01985 in shenzhen. gigadevice, founded 02005 in beijing. piotech, founded 02010 in shenyang. amec, founded 02004 in shanghai. ingenic, founded 02005 in beijing. silan, founded 01997 in hangzhou. nexchip, founded 02015 in hefei. united nova technology, founded 02018 in shaoxing. cetc, the defense contractor, founded 02002 in beijing. naura, the largest semiconductor equipment manufacturer in china, founded 02001 in beijing.
maybe my sampling here is not entirely unbiased, but we have 7 companies in beijing, 7 in shanghai, 6 in shenzhen, 2 in hangzhou, 2 in hefei, 2 in fuzhou, and then one each in each of wuxi, zhuhai, hsinchu, nanjing, silicon valley, jerusalem, cambridge, wuhan, toronto, jiaxing, tianjin, los angeles, and shaoxing. that seems like 19 cities where you could plausibly start a new computer maker today, given that somebody did in the last 30 years. did the usa ever have that many in the 01970s?
to me this doesn't support your thesis 'countries had to reinforce their champion cities so they can compete against other champion cities [resulting in] the redlining of 98% of america and similar phenomena in those countries as well' so that 'focus on (...) international competition [promoted] interregional inequality in countries'. rather the opposite: it looks like winning the international competition and heavy state investment has resulted in increasingly widespread computer-making startups throughout china, mostly in the prc, while computer makers founded elsewhere in the last few decades mostly failed. i mean, yes, none of these startups are in xinjiang or mississippi. but they're not all in one province of china, either; they're distributed across hebei province (sort of!), jiangsu, anhui, hubei, zhejiang, fujian, israel, taiwan, california, england, shanghai, and guangdong. taiwan and 7 of the 22 provinces† of the prc are represented in this list. they tend to be the more populous provinces, though there are some provinces conspicuously missing from the list, like shandong and henan
guangdong: 126 million
jiangsu: 85 million
hebei: 75 million
zhejiang: 65 million
anhui: 61 million
hubei: 58 million
fujian: 41 million
shanghai (city): 25 million
beijing (city): 22 million
tianjin (city): 14 million
the total population of the above regions is roughly 570 million
taiwan is another 24 million people, bringing the total to some 600 million, not quite half the population of china. so while there is surely some 'redlining' going on in china, i don't think it's against 98% or even 60% of the population
that is, probably nobody is going to start a company in inner mongolia making photolithography equipment for new chip fabs, but inner mongolia only has a population of 24 million people despite being 12% of china's land area, bigger than france and spain together. it has no ocean ports because it's nowhere near an ocean. it's a long airplane flight away from where people are building the chip fabs, and you can't fit a vacuum ultraviolet photolithography machine under your airline seat, or possibly on an airliner at all. wuhai didn't get its first airport until 02003. so, though i obviously know very little, i don't think even massive state investment to promote an electronics industry in chifeng would be successful
especially if we add california, israel, england, and ontario to the list, i think it's clear that the computer industry today is far more geographically inclusive than it was 50 years ago
but it's possible i'm just not understanding what you mean. what metrics of geographic inclusivity would you suggest?
______
† the prc also contains 5 'autonomous regions', two 'special administrative regions' (hong kong and macau), and four direct-administered municipalities, of which three are on the list
One option stands out: "Memory Parity Option - $500". Ahh... how times don't change, with ECC RAM still being a premium feature today.
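For what it's worth, what that $500 bought was one extra bit per byte: enough to detect (but, unlike ECC, not correct) a single flipped bit. A toy sketch of even parity, not the PERQ's actual circuitry:

```python
def parity_bit(byte):
    """Even parity: choose the extra bit so the total count of 1s is even."""
    return bin(byte).count("1") % 2

def check(byte, stored_parity):
    """A mismatch means an odd number of bits flipped since the write."""
    return parity_bit(byte) == stored_parity

# "Write": store the data byte alongside its parity bit.
data = 0b1011_0010          # four 1-bits, so even parity bit is 0
p = parity_bit(data)

# A single-bit error is detected...
assert check(data, p)
assert not check(data ^ 0b0000_0100, p)
# ...but a double-bit error slips through undetected, which is part of
# why ECC (which can also correct single-bit errors) costs more.
assert check(data ^ 0b0011_0000, p)
```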
https://imgur.com/gallery/3-rivers-computer-corporation-perq...
Blit (computer terminal):
https://en.wikipedia.org/wiki/Blit_(computer_terminal)
>The folk etymology for the Blit name is that it stands for Bell Labs Intelligent Terminal, and its creators have also joked that it actually stood for Bacon, Lettuce, and Interactive Tomato. However, Rob Pike's paper on the Blit explains that it was named after the second syllable of bit blit, a common name for the bit-block transfer operation that is fundamental to the terminal's graphics.[2] Its original nickname was Jerq, inspired by a joke used during a demo of a Three Rivers' PERQ graphic workstation and used with permission.
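Incidentally, the bit-block transfer the name refers to is just copying a rectangle of pixels from one bitmap to another. A toy sketch of the plain-copy case (real bitblt hardware and microcode moved whole words at a time, handled bit alignment, and supported raster ops like AND/OR/XOR):

```python
def bitblt(src, dst, sx, sy, dx, dy, w, h):
    """Copy a w-by-h pixel rectangle from src to dst.

    Framebuffers here are lists of rows; each row is a list of 0/1
    pixels, as in a 1-bit-deep display like the Blit's.
    """
    for row in range(h):
        for col in range(w):
            dst[dy + row][dx + col] = src[sy + row][sx + col]

# Blit a 2x2 block from the top-left of one "screen" to (1, 1) of another.
src = [[1, 1, 0],
       [1, 1, 0],
       [0, 0, 0]]
dst = [[0] * 3 for _ in range(3)]
bitblt(src, dst, 0, 0, 1, 1, 2, 2)
assert dst == [[0, 0, 0],
               [0, 1, 1],
               [0, 1, 1]]
```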
https://inbox.vuxu.org/tuhs/CAKzdPgz37wwYfmHJ_7kZx_T=-zwNJ50...
From: Rob Pike <robpike@gmail.com>
To: Norman Wilson <norman@oclsc.org>
Cc: The Eunuchs Hysterical Society <tuhs@tuhs.org>
Subject: Re: [TUHS] Blit source
Date: Thu, 19 Dec 2019 11:26:47 +1100 [thread overview]
Message-ID: <CAKzdPgz37wwYfmHJ_7kZx_T=-zwNJ50PhS7r0kCpuf_F1mDkww@mail.gmail.com> (raw)
In-Reply-To: <1576714621.27293.for-standards-violators@oclsc.org>
Your naming isn't right, although the story otherwise is accurate.
The Jerq was the original name for the 68K machines hand-made by Bart. The
name, originally coined for a fun demo of the Three Rivers Perq by folks at
Lucasfilm, was borrowed with permission by us but was considered unsuitable
by Sam Morgan as we reached out to make some industrially, by a company
(something Atlantic) on Long Island. So "Blit" was coined. The Blit name
later stuck unofficially to the DMD-5620, which was made by Teletype and,
after some upheavals, had a Western Electric BellMac 32000 CPU.
If 5620s were called Jerqs, it was an accident. All the software with that
name would be for the original, Locanthi-built and -designed 68K machines.
The sequence is thus Jerq, Blit, DMD-5620. DMD stood for dot-mapped rather
than bit-mapped, but I never understood why. It seemed a category error to
me.
-rob
https://inbox.vuxu.org/tuhs/CAKzdPgxreqfTy+55qc3-Yx5zZPVVwOW...
The original name was Jerq, which was first the name given by friends at
Lucasfilm to the Three Rivers PERQ workstations they had, for which the
Pascal-written software and operating system were unsatisfactory. Bart
Locanthi and I (with Greg Chesson and Dave Ditzel?) visited Lucasfilm in
1981 and we saw all the potential there with none of the realization. My
personal aha was that, as on the Alto, only one thing could be running at a
time and that was a profound limitation. When we began to design our answer
to these problems a few weeks later, we called Lucasfilm to ask if they
minded us borrowing their excellent rude name, and they readily agreed.
Our slogan: A jerq at every desk.