- "embedded" (rpi) controller for a boxfan that runs in my lab
- VSR (Viewstamped Replication) distributed consistency protocol library
- dead simple CQRS library
- OT library
I now have the CQRS library deployed to do accounting for a small SaaS that might generate revenue for me...
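For anyone unfamiliar with the pattern, the core of a dead-simple CQRS setup is just a split between a write side (commands that append events) and a read side (queries against projections folded from those events). A minimal sketch in TypeScript, with hypothetical names, not taken from the actual library mentioned above:

```typescript
// Minimal CQRS sketch: commands append events on the write side;
// queries read from a projection folded from those events.

type DomainEvent = { type: string; payload: unknown };

class EventStore {
  private events: DomainEvent[] = [];
  private subscribers: Array<(e: DomainEvent) => void> = [];

  append(event: DomainEvent): void {
    this.events.push(event);
    for (const fn of this.subscribers) fn(event);
  }

  subscribe(fn: (e: DomainEvent) => void): void {
    this.subscribers.push(fn);
    for (const e of this.events) fn(e); // replay history for late subscribers
  }
}

// Write side: a command handler validates input and emits an event.
function recordPayment(store: EventStore, invoiceId: string, amountCents: number): void {
  if (amountCents <= 0) throw new Error("invalid amount");
  store.append({ type: "PaymentRecorded", payload: { invoiceId, amountCents } });
}

// Read side: a projection folds events into a queryable view.
const balances = new Map<string, number>();
const store = new EventStore();
store.subscribe((e) => {
  if (e.type === "PaymentRecorded") {
    const { invoiceId, amountCents } = e.payload as { invoiceId: string; amountCents: number };
    balances.set(invoiceId, (balances.get(invoiceId) ?? 0) + amountCents);
  }
});

recordPayment(store, "inv-42", 1999);
console.log(balances.get("inv-42")); // 1999
```

The nice property for accounting is that the event log doubles as an audit trail, and new views can be added later just by replaying it.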
On the docket is:
- yard watering "embedded" (rpi) device
- fully personalized home thermostat
etc.
I don't think that "just don't use AI" is really a solution here. It can feel really pointless doing something the hard way when you know there is an easy way, even if you prefer the hard way.
Oh just you wait!
---
You can get the challenge back by designing something instead of coding it. Lots of wonderfully designed things are not actually that remarkable from the implementation / manufacturing standpoint.
Create a new board game. Completely unchallenging from a coding standpoint, vibe away. But fast coding opens up the ability to actually explore and adjust gameplay in real time. Start by replicating a favorite game.
Create your own organizational software tools. Whatever you would use where other tools have disappointed you.
Those are just examples. Go creative on what a thing does, how it looks, etc.
Nintendo’s generations of game hardware are a repeated lesson in great design despite, even because of, modest internals.
And well, I entered the field professionally because I liked it, and I feel sort of like the rug was pulled out from under me. Sucks to be one of us, I guess.
you interface with a machine to get the machine to do what you, a human, want it to do relevant to your human purposes
but out of necessity - turns out we need to control a lot of machines - we made the act ergonomic
this fit the aesthetic of some people. All to do with them and little to do with the act. Akin to alchemists or other purveyors of exotic knowledge, they held a skill whose relevance always had a countdown on it
all that's left is to circle back. Coding is instrumental. Now our alchemy is one more level abstracted from the artifice of the machine. Good. It's closer to people now - like management, now. That's bad for the current generation of alchemist but good for the world
earnest RIP. On the upside, there's always a next generation
If you don't have any content for your personal blog, work on that first, find some niche thing to obsess over and have something to say about.
When I want to stimulate my brain coding, I do things like AOC or Euler, and when I want to test out a quick prototype app I have AI do all the grunt work of setting it up and getting it to a point to see if I even think it's a good idea or not.
I'm not sure how I feel about it. On one hand, in certain situations it speeds up my work 10x. On the other, it's way, way too easy to just stop thinking and have the AI iterate on a problem for an hour or two only to end up with utter gibberish, revert everything, and realize the fix was trivial with the slightest application of critical thinking.
I'm certainly a faster programmer with AI, but I'm not sure if I'm any more productive or producing better code instead of just more.
The one thing it's utterly amazing at is my most hated part: going from a blank page to something. Once there's some code and a hint of structure, it's much easier for me to get started.
I will say that I was shocked at how well codex handled "transform this react native typescript into a C++ library". I estimated a week or two of manual refactoring. Codex did it in half an hour (and used 25% of my weekly token budget).
I'm using AI plenty but looking at my use with a different lens. I like to code. It's fun. It's rewarding. I produce things with it. But it is also practically a means to an end for me. My job isn't purely code but also analysis, strategy, etc.
So I'm having lots of fun zooming through code problems that slowed me down in the past, and I have more time for the analysis/strategy/etc.
I'm not a professional dev, but I would encourage the author to find a similar lens in their work, if possible. Not saying it's easy! And if that solution isn't helping or attainable, maybe it's time to move on?
Mr. Bryce died of cancer a couple years back, and the web site he had up hosting all his materials has since bit-rotted away, so I'm giving you a Wayback link.
Anyhoo, back when it came out, it fomented much discussion and anger among programmers, including here on Hackernews:
https://news.ycombinator.com/item?id=990185
The problem is, Tim Bryce was absolutely, 100% correct. He stood by those words until his dying breath, and he related how programmers would be angry or offended, but management at the companies that employed programmers found the essay to be accurate.
You have to put the "Theory P" essay in broader context. Tim was a management consultant and salesperson for his father Milt, who developed the first software methodology for general commercial use in 1971: PRIDE (Profitable Information by Design). At the time, Milt already had about 20 years of experience in the software field, starting in the UNIVAC days; he was there from the very beginning of commercial computing.

Milt discovered that one of the things we learned about LLMs applies to human programmers as well: unless you give them careful guidance, structured and detailed specifications, rigorous standards for how the code should be written, and hold them accountable to those specifications and standards, programmers will go off and develop the wrong thing, wasting the organization's time and money. Unless the programmers know what to build, they're worse than useless.

Theory P is more about correcting the overvaluation of programmers from an organizational standpoint; in 2005, organizations, particularly in the United States, were still under the sway of the myth of the "genius programmer" who could solve the company's problems and make them lots of money if you just left them alone to hack code. This is explicitly not the case; building information systems needs standards, specifications, and process control just like any factory assembly line. (We are still struggling to learn the lessons Milt figured out in 1971. If you have heard of a data dictionary or a "software bill of materials", those were concepts introduced by PRIDE. They are just two elements of a complete IRM solution.)
In light of this, PRIDE is incredibly comprehensive. The purpose of a methodology is to define WHO is to perform WHAT task, WHERE, WHEN, WHY, and HOW, and most modern "methodologies" fail in that regard; PRIDE defines these essentials organization-wide, for everyone involved with an information system (including its users). PRIDE is actually an information systems methodology, not just a software methodology; under PRIDE, writing code is just one of the last technical steps of implementing an information system, which includes software and computers but is not coterminous with them. Business information systems also encompass telecommunications links, pieces of paper such as forms and correspondence, and the most important element: people, and the information they need or provide.
The hard part of building an information system for an organization, the Bryces found out, is not programming but systems analysis and design. Consequently, and unsurprisingly, PRIDE is a Big Design Up Front methodology, which makes programmers tetchy: done properly, a PRIDE project spends about 60% of its time in systems analysis and design and only about 15% actually coding. There is some room for iteration, but not a lot; by the time the first line of code has been written, the major decisions about how the software (or rather, the information system) should function have already been made, and not by a programmer. Those decisions are encoded in the extensive documents and flowcharts prepared by the systems analysts and approved by management.
Now enter AI. Tim Bryce also wrote about the maturity of an organization's information systems (https://web.archive.org/web/20160407164521fw_/http://phmains...), tracing it from the Before Times, when organizations mainly used paper and ink, to the introduction of the computer, when organizations took a "tool-oriented approach": figuring out what the computer could do and applying it to various tasks without concern for when and how it is properly applied at the organizational level. Mature organizations, by contrast, take a "management-oriented approach", in which the information needs of the business are defined and codified by management, and computers are deployed and software is written specifically to address those needs. The problem is that programmers, not systems people, have dominated the information-systems departments of organizations since about the late 1960s, meaning that many of these departments are stuck in a "tool-oriented approach"!

With AI, that last step, the final 15% (actually translating the specifications for the information system into code), can be automated. Most programmers are now obsolete. Nobody cares if you have fun coding; the good times for coders, when they were overvalued to the point of being made linchpins for billion+ dollar businesses, are now over. The relevant skill now becomes understanding what you want to build and laying it out for the AI in sufficient detail, which is exactly the role of the systems analyst in PRIDE! So programmers who do not take a "management-oriented approach", and are unwilling to develop the high-level systems thinking and people skills it takes to work at an organizational level and communicate with users and management, are going to have a very, very bad time indeed! Unless you have fun designing systems and seeing them rolled out to production, your job is going to be extremely unfun, if not eliminated entirely!
And that would be just fine with Tim Bryce. He hated programmers, and said as much. I'm kind of curious what he would say if he had lived just a few years more into the current era.