I think it's possible the amount of new software that will be written for an audience of 1-10 will be greater in 2026 than in any previous year, and then the same again for many years to come. I also think a lot of this software will be essentially 'hidden' - people just writing this stuff for themselves because the cost to say things to an agent is very low compared with the cost of actually planning out a software design and so forth.
Interoperability will probably be important in the next few years and I wonder if this is something solvable at the agent/LLM level (standing instructions like 'typically, use sqlite, use plaintext, use open standards' or whatever). I also think observability and ops will be pretty important - many people want personal software but don't care for the maintenance and upkeep.
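To make the standing-instructions idea concrete, here's a hedged sketch of what such a file could look like (the filename and exact wording are hypothetical, not any tool's official convention):

```text
# ~/.agent-instructions (hypothetical example)
- Prefer SQLite or plain files over bespoke binary formats for storage.
- Emit and consume plaintext where possible: CSV, JSON, Markdown.
- When a relevant open standard exists (iCalendar, vCard, RSS), use it.
- Expose data behind a small CLI or local HTTP endpoint so other tools
  can interoperate without reading your code.
```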
I now have tailor-made apps with all kinds of bells and whistles that commercial products can't offer easily (I fall under non-commercial usage, which opens a lot of doors), and that free software might offer, but later.
I have also learnt a lot technically in the process, since I've been able to venture into what was for me unknown territory, but at controlled cost.
I plan to create more such apps in the future. What is certain though is that my cooking app has immediately displaced all the others on the market, because none of the others cater to my requirements.
The production side is indeed of specific interest - most users don't run production software, so I had to think about that one. Tailscale and Cloudflare came in quite handy, and there is indeed a market here.
Basically, I am prepared to accept that there is a friction that LLMs lubricate away, but what is the source of the friction, and why am I (and a bunch of other colleagues) not feeling that friction daily in our practice?
[1]: And if so, where did we programmers and computer scientists go wrong? Were subroutines and macros not sufficient for automating all of that excess typing? Were Emacs and Vim simply not saving enough keystrokes? Did people forget how to touch-type?
Interestingly, I also converged on the "reverse dictionary" usage of LLMs, in around 2024[1], mostly to indulge in (human) language-learning.
An excerpt from the post below:
```
It is a phenomenal reverse dictionary (i.e. which English words mean "of a specific but unspecified character, quality, or degree"). It not only works for English, but also for Esperanto (i.e. which Esperanto words mean "of a specific but unspecified character, quality, or degree"), as well as my own obscure native language. This is a huge time-saver when learning languages (normal dictionaries won't cut it, and bilingual dictionaries are limited, if they are available at all). Even if you are just using a language you are fluent in, a reverse-dictionary-prompt can help you find words and usages, and can also help you find "dark spots" in the language's lexicon.
```
[1]: https://galacticbeyond.com/chat-room-dispatches-intelligence...
Well. You should have seen the look on their faces. I might as well have morphed into the Steve Buscemi meme "How do you do, fellow kids?" They looked at me like I was a total relic or greybeard and said things like "Nah, nobody reads tech books anymore; I learned TypeScript from YouTube videos."
I write plenty of code at my job, and generally don't have the desire to write more code as a hobby, except in rare cases when the mood really strikes.
Too bad this is all on the work computer and I need to bring it to my personal one but can't copy-paste, lol. It's been thrilling building and using them, and the time from ideating a small enhancement/optimization to actually using it is like 5 to 15 minutes. So cool.
I think the instinct that APIs, validation layers, and so on take on a much higher importance is right. I have a few internal tools that made sense to make libraries out of, and once the first library is good and the test suite is comprehensive, porting to a bunch of different languages is extremely simple.
Given that, it's also going to be simple for someone to hook up to this library with custom tooling.
Really interesting period in computing, for sure.
What period were we in for the past 50 years?
Identifying a vulnerability that can be exploited against many thousands or millions of targets is perhaps more attractive than one that works against a single target of low value.
This of course would assume that vulnerabilities are in fact unique (which is admittedly questionable).
My wm, shell, terminal, editor, file manager, pop-up menu (dmenu-like) are all pure ruby (including font rendering and X11 bindings). These all started before I started using Claude to improve them, so they're still mostly hand-written, but that is changing.
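For anyone curious, pure-Ruby bindings like these typically go through Fiddle rather than a compiled C extension. A minimal sketch of the technique, shown against libc so it runs anywhere headless (an X11 binding would `dlload 'libX11.so.6'` and declare `XOpenDisplay` and friends the same way):

```ruby
require 'fiddle'
require 'fiddle/import'

# Bind a C function with no compiled extension: Fiddle parses the C
# signature and wires up the call at runtime.
module LibC
  extend Fiddle::Importer
  dlload Fiddle::Handle::DEFAULT   # symbols already loaded into the process (libc)
  extern 'int getpid(void)'
end

puts LibC.getpid  # the current process id, via the hand-rolled binding
```

The same pattern scales up: declare structs with `Fiddle::Importer#struct` and keep adding `extern` lines for each Xlib entry point you need.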
They're messy, they have bugs and "misfeatures" that work for me but likely would be painful for others.
Like OP, I don't really recommend anyone else use my code, at least not directly, and that is extremely liberating.
Overall, the projects cover the largest surface of what I use beyond the kernel, a browser, and Xorg (I'm so, so tempted, but I think an LLM will need to get a lot further first before I could fit it into my schedule).
It doesn't need to be polished because it's mostly for me. It's okay for them to have bugs as long as they work better for me than the alternatives.
I strongly believe more people should do this. It's both a great learning experience, and it gives you a system that has exactly the features you actually want and use.
And it's only going to get easier to do this.
[1]: https://fortune.com/2026/04/28/nvidia-executive-cost-of-ai-i...
[2]: https://www.briefs.co/news/uber-torches-entire-2026-ai-budge...
Since it's a hobby, normal rates don't apply; I just don't want to be misleading about the equivalent cost.
The whole point of this sentiment is that the personal tools wouldn't EXIST due to the time sink needed.
The tradeoff makes sense for a lot of people even if it's not a good fit for you.
A word of warning: a reliable lock tool for X11 is difficult. You should look at XSecureLock, which uses a multiprocess approach to avoid leaving the desktop unprotected in case of a crash. It also implements a number of countermeasures to ensure the desktop stays locked and the locker stays at the front of the display. It's small too, so easy to audit (but written in C).
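The multiprocess idea is roughly: a tiny supervisor forks the actual locker and respawns it the instant it dies, so a crash in the locker never leaves the screen unlocked. A hedged sketch of that shape in Ruby (an illustration of the pattern only, not XSecureLock's actual design; `supervise` and the simulated crashy locker are made up):

```ruby
require 'tmpdir'

# Supervisor: fork the locker, wait on it, and respawn immediately if it
# crashed. Only a clean exit (user authenticated) ends the loop.
def supervise(max_respawns: 3, &locker)
  attempts = 0
  loop do
    attempts += 1
    pid = fork(&locker)              # child process runs the locker body
    Process.wait(pid)
    return attempts if $?.success?   # clean exit: unlock and stop
    break if attempts > max_respawns # give up (a real locker would not)
  end
  attempts
end

# Simulate a locker that crashes twice, then exits cleanly, using a file
# to count runs across the forked children.
counter = File.join(Dir.mktmpdir, 'runs')
runs = supervise do
  n = File.exist?(counter) ? File.read(counter).to_i : 0
  File.write(counter, (n + 1).to_s)
  exit!(1) if n < 2                  # crash on the first two runs
end
puts runs                            # 3: two crashes, then a clean exit
```

A real locker adds more (grabbing keyboard/pointer, staying topmost), but the supervisor loop is what keeps a crash from exposing the desktop.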
Some of the folks who make things will go on to make things that suit not just their preferences but also those of a small audience.
Some of those audiences will go on to grow and grow and disrupt the big players.
The capital-intensive part of software construction is melting away and being converted to opex (pay-as-you-go token costs and your time), and that will blast open the possibility space and lead to a massive new commons.
If the thing was so cheap to create, why not open-source it!
And if you like someone else’s open source thing but don’t want to take it wholesale why not give it to your agent and say “put the ideas from this onto my thing”!
It’s a new way of thinking about code too.
Another thing to watch for is how chatty the internet is about to become. A great many of these apps will hit APIs, ping each other, and so forth.
On this software itself: I’d like to know how this feels to use. It’s so very lightweight. Does it feel categorically different to what we are used to?
One of the things I miss about the 1980s home computers is that they booted into a usable command line in a handful of seconds, from a few KB in ROM. Imagine what today’s HW could do if we’d retained that level of efficiency.
We waste a ton of energy on inefficiencies in hardware and software today, all because we managed to "just go faster".
There are big benefits to using a language that has good static analysis with LLMs.
Still a cool project, thanks for sharing.
I have wondered about having LLMs output machine code directly and skipping the compiler/assembler altogether. Then you'd just commit your spec/prompt and run it through the LLM to get your binary.
Rust can do that. You can run a hyper-stripped-down Rust that was made for embedded devices, specifically because those devices don't have room for a runtime.
And it apparently can. And very well.
One advantage seems to be that the complete asm file fits easily into CC's context window.
well, I can respect that for sure
Would it be possible to share the jsonl files too, like how Mario Zechner shared his chats with the AI, while working on his Pi coding agent?
I struggle to understand why, though.
Most software is done after the first or second version, and the developers just keep working on it to justify their jobs, adding features no one needs that just get in the way or make the program worse. It'll be nice when the software I have does exactly what I need and doesn't change until I tell it to change for something I need.
The only feature macOS has shipped in the past 10 years that I actually like is AirDrop. Everything else is a PITA annoyance or, as I've found out from upgrading, just bug-ridden slop that doesn't work well anymore.
Brother mine, you will learn that the future you is ignorant of all the things, and every bit of documentation goes a long way.
The other thing is that other people's applications are rarely useful. Their libraries are, the feature description READMEs are, but the software itself is full of attempts at generality that make them overly annoying for me to use. Instead I have extremely idiosyncratic software - anyone else would find it insufferable.
The wild thing, though, is that my software is outrageously useful for me. I can see why Anthropic and OpenAI are (or shortly will be) the trillion-dollar behemoths they are. They are enabling a personal productivity increase of epic proportions[2]. The highly specific functionality also means strange things performance-wise. I don't need to use Electron or Tauri or whatever. Instead, my thing is Rust with objc2 and it starts instantaneously. On my M1 Max, it's the fastest text viewer I can start: hundreds of megabytes of JSON, and its launch is imperceptible, pretty-printing is instantaneous, breadcrumbs are live.
Because I can make it do only the thing I want it to do. It can't do other things. I cannot edit or auto-complete or anything. And this is great. Useless to others and fantastic to me.
Likewise, my blog is on Mediawiki (which I like so anyone can edit) but the authoring flow is kind of annoying. Uploading images causes a break from writing, and requires a lot of form-filling that interrupts my thought. So I now have this software that does everything I want: link autocompletion, background image uploads, post-hoc publishing, previews and diffs, built-in Wikipedia search to interwiki link. Who would want this but me? It only brings me pleasure.
What a revolution in software.
0: https://wiki.roshangeorge.dev/w/Blog/2026-04-25/The_rise_of_...
1: https://wiki.roshangeorge.dev/w/Blog/2026-04-30/Personal_Sof...
2: Predictably, I have chosen to use the spare time on leisure.
Also, reading it is probably not the intended use. It’s probably: “Hey Claude, give me a TLDR of this”
But the incessant "AI was used here, thus it is garbage" take is long past due for the grave.
I usually stop reading at the first LLM-ism, but I found the premise of this post interesting enough to keep going - and guess what, the entire article was literally just "I prompt CC to make software tailored for me" blown out to 8 sections.
The parent comment does. Why do you care that they care?