A feature works. The tests pass. The PR is not huge. The business wants to test it live. Nobody wants to block value delivery because of an architecture concern that may sound abstract in the moment.
But making that call seems to be getting harder.
AI-assisted development, vibe coding, internal tooling, and better frameworks all reduce the friction of producing code. That is useful. Teams can prototype faster and ship experiments sooner.
The problem is that architectural judgment has not become equally cheap.
The code may work and still make the system worse: duplicated logic, unclear ownership, inconsistent patterns, security gaps, bad boundaries, one-off components that should have been reusable, or features that are hard to remove later.
One option is to force more architecture into code review. But then PRs become slow, frustrating, and full of design debates that are difficult to resolve after the code already exists.
Another option is to merge faster, while making the architecture feedback loop after merge much more explicit. Architecture should already be continuous, but faster code creation may require stronger post-merge mechanisms: reviewing what changed at the system level, checking reuse opportunities, reassessing security assumptions, scheduling refactors, keeping features behind flags, and being willing to disable or rewrite things.
That only works if “refactor later” is an actual process, not a wish.
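The "disable or rewrite" option in the post-merge loop is only cheap if new code paths are actually toggleable. A minimal sketch of that pattern is below; the flag names, the checkout example, and the environment-variable convention are all hypothetical illustrations, not anything from the original post (a real system would likely use a flag service with runtime toggling rather than environment variables):

```python
import os

def flag_enabled(name: str, default: bool = False) -> bool:
    # Hypothetical convention: FEATURE_NEW_CHECKOUT=on enables the flag.
    raw = os.environ.get(f"FEATURE_{name.upper()}", "")
    if not raw:
        return default
    return raw.strip().lower() in ("1", "true", "on", "yes")

def legacy_checkout(cart: list) -> str:
    # Existing, trusted path.
    return f"legacy:{len(cart)}"

def new_checkout(cart: list) -> str:
    # Newly merged path; may still be revised after system-level review.
    return f"new:{len(cart)}"

def checkout(cart: list) -> str:
    # Merge fast, but keep the new path behind an off switch so a
    # post-merge architecture review can disable it without a revert.
    if flag_enabled("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

The point is that the flag check is the explicit hook for "refactor later": the new path can be turned off, A/B-tested, or deleted without touching call sites.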
Has your team changed how it handles architecture as code has become easier to produce? Do you handle this before merge, after merge, or through some continuous review process?
For P0, I write the code myself and use AI only for verification. This covers core business logic and any area where failure is unacceptable. Typical examples are JWT authentication, API key handling, and, in PLC-related work, equipment interlocks, deletion logic, and machine control.
For P1, I use AI when writing logic that connects the backend and frontend. Even if something goes wrong, the damage is relatively limited, and I have found that AI often writes this kind of code better than I do, especially when it is built on top of P0 logic that I have already defined.
For P2, I let AI write the code and only verify that it builds successfully. This mostly applies to frontend-related work. Of course, if the frontend includes core animation logic, I may write that myself, but in most cases I let AI handle it.