Of course, we have had compilers and tooling, but those are the pencil and drafting board of the draftsperson. An ecosystem of packages, dependencies and APIs has evolved, but those are often just spells the software magician invokes after reading the spellbook^H^H^H^H^H^H^H^H^H stackoverflow^H^H^H^H^H^H^H^H^H^H^H^H^H API documentation.
We are going to need to build a new set of boundaries and abstractions with new handover protocols to manage this mess.
It's the people who claim to "do agile" that invariably don't do it. But software development used to fail most of the time, and it doesn't anymore.
So much of what makes high-functioning teams work is a sense of ownership and stewardship, and what makes low-functioning teams break is a lack thereof. Someone with pride, drive, and a high standard feeling responsible for a particular area or thing.
In the past, that ownership could be individual or collective, but with AI and a lot more lane-crossing, ownership should tend toward smaller groups (or individuals).
A developer can design, but a designer needs to review it. A designer can code, but the owner of the code must review it.
This might feel like gatekeeping, but it's the only way.
Wait...
I've said this before, but people gloss over this fact.
>Someone with pride, drive, and a high standard feeling responsible for a particular area or thing.
I've also said this before, but AI-glazers just respond with "I think we may just have to let go of pride & kudos and their connection to our identity."
Most people who vibecode don't give a shit about their work. Any solution is a solution as long as it works.
>This might feel like gatekeeping, but it's the only way.
Gatekeeping is not inherently bad. We want gatekeeping.
If I'm getting surgery, I want an actual doctor with proven credentials to do it.
And to anyone claiming that software doesn't kill, please look up "Therac-25" or the 65 people that died due to Tesla's "Full Self-Driving".
Now, we all know horrible managers who didn't keep up to date or use their own thinking. This will happen with AI usage too. What's more, we are expecting people who are engineers to have a manager's mindset (by managing AI agents, product requirements, etc.). Many engineers are horrible at this and have no desire or ability to become a manager. That's why they went into engineering in the first place.
Bingo. If I wanted to spend my life managing incompetent sycophants, I would've studied for an MBA and tried to rise through the ranks at McKinsey.
The funny part is that these are the same people who are upset that these folks up the food chain "do nothing".
My primary editor is vim, and for a significant amount of time I used it in almost puritan fashion; this was before LLMs were mainstream.
However, I could not use vim to edit Java, even with a language server. I tried, but each time I went back to IntelliJ. The rest of the codebase, in Python, Ruby, and TypeScript, was typically fine.
The reason was twofold: everyone was using all of the features IntelliJ had to offer, so the code was structured around IntelliJ, and around the Java design patterns that were popular at the time. Everything went through factories and managers and interfaces, and tracking them through a plain editor was almost impossible. The IDE handled it for you.
But everything else? Things I or others had to build from the ground up were built with this cognitive limitation in mind, which means everything fits nicely in my head and I can edit with vim at high efficiency, even without a language server.
That cognitive limitation is good for the software. It's easy to explain, easy to debug, easy to add to and subtract from. And I've come to disregard the IntelliJ way, and the vibe-code-till-it-works approach that is common everywhere now. The principle is KISS: keep it simple, stupid. If AI will not do that, then you have to. It is a simple philosophical question that is more important than ever. And sadly most people still don't realize it: they will happily tack on the next "feature" with scaling they don't need at the time and a design pattern they don't need at the time, and prematurely optimize themselves into cognitive and technical bankruptcy.
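To make the contrast concrete, here's a toy sketch (my own hypothetical example, not from any real codebase) of the factory-and-manager indirection versus the KISS version:

```python
# Toy illustration: the "IDE-navigable" style, where reaching the actual
# work takes three hops (interface -> factory -> manager), versus a direct
# function any plain editor can grep and follow at a glance.
from abc import ABC, abstractmethod


class Notifier(ABC):
    @abstractmethod
    def send(self, msg: str) -> None: ...


class EmailNotifier(Notifier):
    def send(self, msg: str) -> None:
        print(f"email: {msg}")


class NotifierFactory:
    def create(self, kind: str) -> Notifier:
        return {"email": EmailNotifier}[kind]()


class NotificationManager:
    def __init__(self, factory: NotifierFactory) -> None:
        self._factory = factory

    def notify(self, kind: str, msg: str) -> None:
        self._factory.create(kind).send(msg)


# The KISS version: one function does the same thing.
def send_email(msg: str) -> None:
    print(f"email: {msg}")


NotificationManager(NotifierFactory()).notify("email", "hello")  # three hops
send_email("hello")                                              # zero hops
```

Both print the same line; only one of them needs an IDE to navigate.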
That's the neat trick, kiddo: they won't. Across the industry, the messaging is clear: use AI and be more productive. Management is salivating at the idea of getting rid of people and keeping a higher share of profits for themselves. Most ICs I talk to are increasingly expressing feelings of burnout, fear of losing their jobs, and resentment at the way AI is being pushed. I've had more than a few conversations where people clearly expressed that they are mostly focused on keeping their jobs. They don't care about cognitive debt, and some are looking forward to the time when the debt comes due.
It is depressing, but it is the reality.
I think it's great for writing tests and sanity-checking changes, but I wouldn't let it write core driver code (I'm a systems programmer, so YMMV). Maybe in a month I'll think differently.
The software is necessarily complex due to legislative requirements, and the corpus of documentation the AI has access to just doesn't seem to capture the complexities and subtleties of the system and its related platforms.
I can churn out ACs (acceptance criteria) quicker, but if I just move on to the next thing as if they're 'done', then quality is going to decline sharply. I'm currently entirely re-writing the first set of ACs it generated because the base premise was off.
This is both a prompt engineering problem and an availability-of-enough-context documentation problem, but both of those involve fairly long learning curves. Not many places do knowledge management very well, so the requisite base information may simply not be complete enough, and one missing 'patch' can very much change a lot of contexts.
I did a live demo in front of the CPAs, using their documentation, and Claude asked clarification questions they hadn't thought of and exposed gaps in the old manual processes.
Smaller teams have more agency to move, and usually team members with broader responsibility for and understanding of the systems. They're also possibly closer to stakeholders, so they're already involved in specification creation and know where automation can add value. Add an AI agent and they can pick and choose where they can be most effective at a system level.
Bigger teams have clear boundaries that stop agency: blockers due to cross-team dependencies, potentially no idea what stakeholders want, just piecemeal incremental change of a bigger system specified by someone else. If all they can do is automate that limited scope, it's really just faster typing.
Not every company is going to see those boundaries and stakeholders as features, and they'll be under pressure to "mitigate those blockers to execution". That's where the cognitive debt skyrockets.
Large teams prioritize service resilience and depth of coverage.
In that situation, coming in cold to a library that you haven't worked on before to make a change is the normal case, not "cognitive debt."
If you have common coding standards that all your libraries abide by, then it's much easier to dive into a new one.
Also, being able to ask an AI questions about an unfamiliar library might actually help?
The ability to generate code has seemingly transposed what people think of as a "high-performing team" from one that produces quality to one that produces quantity, with the short-term gains obviously increasing long-term technical debt.
That just sounds like everyone is going to be management. Blindly setting goals and demanding features of a black box, formerly the development team, soon to be 'AI' agents.
AI is making people afraid as they run into these things. It's a little sad that they don't have the historical context or perspective to realize these are old problems. I imagine this is what Samurai felt like as flintlock guns came in and completely upended hundreds of years of martial tradition. How will they be able to defend themselves if they don't learn Kenjutsu? What will happen to our Bushido?
And I do think the fear is warranted. But I don't think people are going to act differently once they realize this unfortunate status quo hasn't yet led to the collapse of civilization, or their paychecks. Once the fear has passed, we will move on into the new normal, willfully ignorant and mildly disappointed.
> Cognitive Debt, Like Technical Debt, Must Be Repaid
In quite a few circumstances, cognitive debt doesn't entirely need to be repaid. I personally found with multiple projects that certain directions weren't the ones I wanted to go in. But I only found that out after fully fleshing them out with Claude Code and then, by using my own app, realizing that certain things I thought would work don't.
For example, I created library.aliceindataland.com (a narrative-driven SQL course). After a while, I noticed that the grading scheme was off and needed to be rewritten. The same goes for how I wanted to implement the cheatsheet, or for lessons not following the standard format. Of course, I need to understand the new code, but I don't need to understand the old code.
With other small pieces of code, I just don't really need to know how things work because they're that simple. For example, every 5 minutes I track which wifi network I'm connected to. It's mostly useful for simply knowing whether I went to the office that day or not. A Python script retrieves the data, and when I look at it, I can recognize that it's correct. Doing it this way is sure a lot faster than active recall.
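Roughly the kind of script this describes, as a minimal sketch (assuming Linux with wireless-tools, where `iwgetid -r` prints the active SSID; the log path and cron scheduling are illustrative choices, not the original setup):

```python
#!/usr/bin/env python3
# Minimal sketch: append a timestamped record of the current wifi SSID
# to a CSV file. Run it every 5 minutes, e.g. from cron:
#   */5 * * * * /usr/bin/python3 /home/me/wifi_log.py
import csv
import subprocess
from datetime import datetime
from pathlib import Path

LOG = Path.home() / "wifi_log.csv"  # illustrative location

def current_ssid() -> str:
    # `iwgetid -r` prints the active SSID on Linux (wireless-tools);
    # it prints nothing when not connected.
    result = subprocess.run(["iwgetid", "-r"], capture_output=True, text=True)
    return result.stdout.strip() or "disconnected"

with LOG.open("a", newline="") as f:
    csv.writer(f).writerow([datetime.now().isoformat(timespec="seconds"),
                            current_ssid()])
```

A glance at the CSV answers "did I go to the office on Tuesday?" without anyone needing to understand more than two function calls.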
At work, I've had similar things. At my previous job I created SEO and SEA tools for marketing experts. I remember creating this whole app that gave experts insights into SEO things that Ahrefs and similar sites don't, as it was tailored to the data of the company I worked at. The feedback I basically got was: the data is great, the insights are necessary, but the way the app works is unusable for us. I was a bit perplexed, as I personally didn't find it that complicated. But then, I'm not the one using it. So I created a second version, and that was way more usable. The second version assumed a completely different front-end app and front-end architecture, though. All the cognitive debt of V1? No payback needed.
The reasons this is the case, as it seems to me, fall under a few categories:
1. Experimenting with technologies. If you have certain assumptions about how a technology works but it turns out you're wrong, or you learn through the process that an adjacent technology works way better, then you need to redo it. Back when coding by hand was still the norm, I had this with a collaborative drawing project called Doodledocs (2019). I didn't know whether browsers supported pressure sensitivity or how easy it would be to implement. It required a few programming experiments.
2. It's a small and simple script, not much more to it.
3. Experimenting with usability. A lot of the time, we don't know how usable our app is. In my experience, this is either because (1) it's a hobby project or (2) the UX people were fired years ago. In those cases, more often than not, UX becomes an afterthought. But with LLMs, delivering a 95%-working version of a greenfield project usually takes about a week. That 95%-working version is an amazing high-fidelity interaction prototype (95%, no less). Once you do that for a few iterations, you understand what you really need. And once you understand what you really need, then you can start repaying the cognitive debt.
I've found it's usually category 3, sometimes 2 and rarely 1.
So the logical next step is to focus on Biological Immortality, and short of that, Digital Immortality. Godspeed, everyone.
Lack of documentation, failed onboarding, poor architectural understanding, missing tests, review fatigue — if all of these are simply grouped together as “cognitive debt,” isn’t that just a failure to build a proper workflow?
The scope is too broad. It reminds me of Stepanov, the creator of the STL, saying that if everything is an object, then nothing is.
When an abstraction tries to cover too many things, that abstraction inevitably fails.
The way AI specifically amplifies this problem is through the difference between direct work and indirect work. The core issue is that “it works” can easily create the illusion that “I understand it.”
Another thing I felt while reading this essay is that it almost seems to go against the direction of modern software engineering. Once software grows beyond a certain size, it is already impossible for anyone except perhaps the original designer to understand the entire system. The goal is not for everyone to understand everything.
The real goal is to make local changes safely, and to ensure that the system keeps running without major disruption when one replaceable part — including a person — leaves.
At this point, many things being described in the industry as “cognitive debt” look to me like rhetorical tools for selling essays.
Reading this, I even wondered: if I write about trendy terms like cognitive debt or spec-driven development on my own blog, will people pay more attention?
To be honest, spec-driven development has a similar issue. When you go from a specification down into implementation, information loss is inevitable. LLMs cannot fully solve that. In the end, a human supervisor still has to iterate several times and tune the result precisely. The real question should be: how far down should the specification go? In other words, at what local scope does it become faster for a human programmer to modify the code directly than to keep steering the AI-generated code?
But that discussion is often missing.
As people sometimes say, “when you start talking about Agile, it stops being agile.” In the same way, I think the “cognitive debt” frame may be a flawed abstraction of the current phenomenon.
The moment a living practice is nominalized, packaged, and turned into a consulting product, it loses its original dynamism and context-dependence, becoming a dead template.
It puts various discomforts that emerged after AI adoption — review burden, lack of understanding, fatigue — into a single box.
Then it attaches the economic metaphor of “debt” to emphasize the seriousness of the problem, and subtly injects the normative idea that “this must eventually be repaid.”
Thinking back to Parnas’s 1972 work on information hiding, software engineering was built on the principle that local understanding should be sufficient, and global understanding is not the goal.
The cognitive debt framing seems to implicitly reverse that principle by treating “shared understanding” as something that must be preserved as a global unit. I do not understand why the discussion keeps moving toward the idea that everything must be understood.
It reminds me of Bjarne’s onion metaphor for abstraction: if an abstraction works, you do not necessarily need to peel it apart without reason.
My main issue with the current cognitive debt framing is that the layer it tries to cover is too broad.