But maybe the easiest way to do (very low resolution) photolithography at home is with dry film photoresist: a tape-like film you laminate onto a copper PCB, then expose and etch. A cheap roll is ~$20 from eBay/Amazon.
[1] https://docs.hackerfab.org/home [2] https://dspace.mit.edu/handle/1721.1/93835 [3] https://www.inchfab.com/
The photographic steps are pretty accessible.
It's not lithography, but you can build a working processor out of small surface mount chips, and you can solder these chips with lead-free solder. That seems very achievable for a motivated engineer, and probably involves far fewer toxic chemicals?
Another project is growing large salt crystals in saturated solution.
The Unitech Electric Static Wand Toy from Amazon was also popular last year (it's a poorly built mini Van de Graaff generator).
Glow-in-the-dark wall paint and a 5-second strobe light is also a classic silhouette demo.
Could also look for linear polarizing sheets, thermochromic sheets, and "Magnetic Viewing film".
Some will like this stuff, others only want to stare at a screen. =3
Don't get me wrong, I'm excited too about it, and can't wait to personally do some experiments as well, although not at the same scale. But I'm not sure it's world changing, at least until I've actually seen any changes :)
I started programming on an 8 MHz Mac Plus in the late 1980s and got a bachelors degree in computer engineering in the late 1990s. From my perspective, a kind of inverse Moore's Law happened, where single-threaded performance stays approximately constant as the number of transistors doubles every 18 months.
Wondering why that happened is a bit like asking how high the national debt would have to get before we tax rich people, or how many millions of people have to die in a holocaust before the world's economic superpowers stop it. In other words, it just did.
But I think that we've reached such an astounding number of transistors per chip (100 billion or more) that we finally have a chance to try competitive alternative approaches. So few transistors are actually in use per instruction that it wouldn't take much to beat status quo performance. Note that I'm talking about multicore desktop computing here, not GPUs (whose SIMD performance actually has increased).
I had hoped that FPGAs would allow us to do this, but their evolution seems to have been halted by the powers that be. I also have some ideas for MIMD on SIMD, which is the only other way that I can see this happening. If the author can reach the CMOS compatibility they spoke of, if home lithography could be provided by an open source device the way 3D printing happened, and if we could get above 1 million transistors running over 100 MHz, then we could play around with cores having the performance of a MIPS, PowerPC or Pentium.
In the meantime, it might be fun to prototype with AI and build a transputer at home with local memories. Looks like a $1 Raspberry Pi RP2040 (266 MIPS, 2 core, 32 bit, 264 kB on-chip RAM) could be a contender. It has about 5 times the MIPS of an early 32 bit PowerPC or Pentium processor.
For comparison, the early Intel i7-920 did about 12,000 MIPS (at 64 bits), so the RP2040 is roughly 50 times slower (not too shabby for a $1 chip). But where the i7 had 731 million transistors, the RP2040 has only 134,000 (not a typo). Roughly 50 times the performance from over 5000 times the transistors means that the i7 delivers only about 1% of the RP2040's per-transistor performance.
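As a sanity check on that arithmetic, here's the comparison worked out from the figures quoted in this thread (the MIPS and transistor counts are as stated above, not independently verified):

```python
# Rough MIPS-per-transistor comparison, using the numbers quoted above.
rp2040 = {"mips": 266, "transistors": 134_000}       # $1 dual-core chip
i7_920 = {"mips": 12_000, "transistors": 731_000_000}

perf_ratio = i7_920["mips"] / rp2040["mips"]                       # ~45x
transistor_ratio = i7_920["transistors"] / rp2040["transistors"]   # ~5455x
per_transistor = perf_ratio / transistor_ratio                     # ~0.8%

print(f"{perf_ratio:.0f}x the performance from {transistor_ratio:.0f}x the transistors")
print(f"i7 per-transistor efficiency: {per_transistor:.1%} of the RP2040's")
```

The result (~0.8%) is where the "about 1%" claim comes from.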
I'm picturing an array of at least 256 of these low-cost cores and designing an infinite-thread programming language that auto-parallelizes code without having to manually use intrinsics. Then we could really start exploring stuff like genetic algorithms, large agent simulations and even artificial life without having to manually transpile our code to whatever non-symmetric multiprocessing runtime we're forced to use currently.
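A rough sketch of that programming model, emulated with a Python thread pool (the core count, pool, and `agent_step` function are all hypothetical illustrations, not a real runtime):

```python
# Toy emulation of "one thread per agent on an array of small cores".
# NUM_CORES and agent_step are made up for illustration; the imagined
# language would parallelize an ordinary map like this automatically,
# with no intrinsics.
from concurrent.futures import ThreadPoolExecutor

NUM_CORES = 256  # imagined array of RP2040-class cores

def agent_step(state: int) -> int:
    # Stand-in for one agent's update in a large simulation.
    return state * 2 + 1

states = list(range(NUM_CORES))
with ThreadPoolExecutor(max_workers=16) as pool:
    # Explicit map here; the hypothetical language would hide this.
    states = list(pool.map(agent_step, states))

print(states[:4])  # [1, 3, 5, 7]
```

Each "core" just runs its agent's step function independently, which is the symmetric, local-memory style of parallelism a transputer-like array would encourage.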
During the fifties and sixties, and even into the early seventies, it was common to publish research papers very unlike those published today, where the concrete information is minimal.
In the early research papers about semiconductor devices and integrated circuits, it was normal to give complete recipes, including quantities of chemicals, temperatures and times for the processing steps, and so on. After reading such a paper, you could reproduce the recipe, make the device described, and measure for yourself how true the paper's claims were.
That open sharing of information drove the very quick evolution of semiconductor technology during the early years, until more traditional business-oriented management began to restrict the information provided to the public.
It is said that such sharing of information still exists in China in many fields, and it is the source of their rapid progress.
Curious to know why you think this cutthroat approach is 'traditional'. Is there another historical background to it? Every account that I've seen, including the origin story of free software (at MIT) and even the rest of your own explanation, seems to suggest that such institutionalized confiscation and hoarding of knowledge is a recent phenomenon - since about the 70s. Am I missing something?
The open sharing approach is traditional for research and academia, while the information restricting approach is traditional for business-oriented thinking.
So, a young field will typically start out fairly open and then get increasingly closed down. The long-term trajectory differs by field, and the modern open-source landscape shows that there can be a fair bit of oscillation.
We're seeing the same basic shape of story play out in generative AI.
Where can we read more about this?
Of course, that’s what they are doing it seems! https://atomicsemi.com/
It seems to me that if there were as much of a customer base for custom ICs as there is for PCBs, a fabricator like TSMC could easily offer a batch prototyping service on a 28 nm node, where you buy just a small slice of a wafer, provided you keep to some restrictive design and packaging rules.
One problem is that you need to create a photolithography mask set regardless of fabrication volume, and those aren’t cheap. But that’s far from the _only_ problem with small volume.
However, you cannot make useful digital circuits that way. For digital circuits, the best you can do is design them and buy an FPGA to implement them, instead of attempting to manufacture a custom IC.
With the kind of digital circuits that you could make in a garage, the most complex thing that you could do would be something like a very big table or wall digital clock, made not with a single IC like today, but with a few dozen ICs.
Anything more complex than that would need far too many ICs.
You can do lithography at small feature sizes, but it's slow and expensive. And small features mean you need a full layer stack, which is even more expensive. At small sizes, defectivity and variation become really difficult.
So if you want a paradigmatic shift, you need low cost patterning, and the best way I can see is to use clever chemistry and a much different design style.
Allegedly Jim Keller is one of the investors!
Followed a couple of links and ended up on his brother's page, reading about another example of the anti-immigrant insanity that's taken hold of this country: https://adam.zeloof.xyz/2025/04/01/karim/ . So sad.
Interesting you mention that: a few threads ago you were adamant that the EU wanting to enforce its speech laws on Twitter was 100% anti-free-speech insanity.
It would seem that for you the insanity of the sheer fact of enforcement (since you clearly weren't talking about the character of enforcement) depends on your underlying sentiment on the given topic. Is that really intentional on your part? Sounds a bit perilous to me reasoning wise, if so.
Or did you simply change your mind since our last discussion?
Because the whole "they're ordering around an American company" defense falls apart pretty quickly when said American company also operates within (read: is accessible from) EU borders, and in general can be used by citizens of EU member states (independent of their location).
https://www.statista.com/statistics/1541464/europe-quality-l...
https://www.visualcapitalist.com/ranked-european-countries-w...
* Developed countries obviously develop more slowly than developing ones. It's easier to improve if your economy is shit, especially if you join a union of more advanced countries.
* Polish immigration actually skyrocketed recently, since Russia invaded Ukraine. It hasn't harmed employment, safety, growth rates, or anything else yet.
A display of "just doing things", no permission needed and no need for barriers and red tape.
It is another reason why I have high hopes for Substrate [1], founded by James Proud (a UK native who moved to the US), another display of "just doing things".
However in Europe and the UK, it's "this law allows you to do this, this and this", "we've changed the law, here is a massive immediate fine", "ban encryption" (this nearly happened), "ban maths", "we are the first to regulate and ban this".
It is no wonder the US will continue to be great at building things.
Regulations can be bad but they can also stop environmental disasters from happening.
It makes me wonder how bad the situation is when you feel the need to start your sentence with 'regulations can be bad' while corporations fight you for their right to release PFAS into your drinking water sources.
https://www.reddit.com/r/Semiconductors/s/jpuI772PJB
If Europe has an overregulation problem, the US may also have a grifter problem.
Right now it's OK to be a fraudster, so long as you make at least a billion dollars doing the fraud.