> There’s an interesting observation here: The biggest programs so far are the ones designed for humans to interact with. Turns out humans are complicated and making computers do things they want is hard. There’s the argument that humans need to learn computers better, with simpler and more composable primitives, and it will result in great benefits in smaller and more powerful programs. Then there’s the counter-argument that if a program is written once and used many, many times, the extra time a programmer puts into making the learning curve shallower will rapidly be made up for by the time saved by the users. Both these arguments are valid. “Design” is the art of striking a balance between them, and different people will want different balances.
I get that this isn't the main point the author was trying to make (the post is a language/ecosystem comparison, focused on their approaches to the dependency problem), but still.
I suppose containers or Ubuntu's snaps might be examples (for good or bad).
Instead, the AI creates ed(1) scripts that refactor the code. It's not just the rewritten code that has to be human-readable; the transform itself must be too.
These codemods modify library source at build time. If you don't want the UI, those calls get edited out or swapped for no-ops.
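A minimal sketch of what that could look like (all names here — `widget.c`, `notify`, `show_popup` — are hypothetical; I'm using `sed -f` rather than `ed -s` only because ed isn't installed everywhere, and the two share the same `s///` command style):

```shell
# A toy "library" source file with a UI call baked in.
cat > widget.c <<'EOF'
void notify(const char *msg) {
    show_popup(msg);   /* pulls in the UI toolkit */
}
EOF

# The codemod: a short editor script that swaps the UI call for a
# no-op. The transform itself is readable and reviewable.
cat > no_ui.sed <<'EOF'
s/show_popup(msg);.*/(void)msg;  \/* UI edited out at buildtime *\//
EOF

# Applied during the build, before compilation.
sed -f no_ui.sed widget.c > widget_noui.c
cat widget_noui.c
```

The point being that the script, not the rewritten file, is the reviewable artifact.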
Maybe we could monetize code edits. The catch is that each successive edit has to reward the preceding ones. Like a chip wafer, each processing step makes the whole thing more valuable.
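One hypothetical scheme (the chain length, price, and proportional weighting are all made up for illustration): when the final artifact is sold, every edit in the chain gets a cut, with later edits weighted more because they build on everything before them — yet the earliest edits still earn on every sale.

```shell
# Split a $100 sale across a chain of 4 edits, weighted by position
# (shares 1+2+3+4 = 10). Earlier edits earn less per sale but earn
# on every downstream sale that reuses them.
awk 'BEGIN {
  n = 4; price = 100; total = n * (n + 1) / 2
  for (i = 1; i <= n; i++)
    printf "edit %d gets $%.2f\n", i, price * i / total
}' > shares.txt
cat shares.txt
```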
How to do that?