The larger the project, the more ways there are to decompose it, but only some of those decompositions lead to good outcomes: a concise, flexible implementation that is easy to read, write, debug, and extend.
You are mentally exploring the alternatives, trying to find the factorization that minimizes complexity, keeps the interfaces between parts simple and friction-free, and yields an implementation where the code almost reads like a high-level description of the requirements, with additional detail exposed only as you descend each level of the implementation.
I can't off the top of my head think of a super pithy way of expressing it, but optimizing the factorization and the representations exchanged between parts (the two go hand in hand) is the key: how do you reduce the requirements to a design with the fewest moving parts and the simplest interfaces between them? It's a kind of co-evolution.
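A small sketch of what "code that reads like a high-level description of the requirements" can look like in practice. All of the names and the record format here are invented for illustration: the top-level function states the requirement, and each helper exposes one more level of detail.

```python
def summarize_orders(orders):
    """Total revenue of valid orders, grouped by customer."""
    valid = discard_invalid(orders)
    return totals_by_customer(valid)

def discard_invalid(orders):
    # One level down: an order is valid if it names a customer
    # and has a positive amount.
    return [o for o in orders if o.get("customer") and o.get("amount", 0) > 0]

def totals_by_customer(orders):
    # Lowest level: the actual accumulation mechanics.
    totals = {}
    for o in orders:
        totals[o["customer"]] = totals.get(o["customer"], 0) + o["amount"]
    return totals

orders = [
    {"customer": "ada", "amount": 30},
    {"customer": "ada", "amount": 12},
    {"customer": "", "amount": 5},      # invalid: no customer
    {"customer": "bob", "amount": -1},  # invalid: non-positive amount
    {"customer": "bob", "amount": 7},
]
print(summarize_orders(orders))  # {'ada': 42, 'bob': 7}
```

The interfaces between the parts are just plain lists and dicts, which is part of the point: the representations exchanged between parts are as simple as the factorization allows.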
Design needs theory to be intentional. It can, of course, be accidental ("seems to work, I guess") or intuitive ("I know in my gut this is right but can't explain it").
While both can produce functional systems, if you can't vocalize the design journey, the system is not very maintainable in the industrial sense (hence: theory is the vocalization of the design and the forces that influenced it).
Most of my posts have aged terribly in the age of AI (especially the ones I didn't finish... so long, extended discussion of how to use a lab notebook when debugging, we hardly knew ye; Claude fixes our bugs now), but one job that engineers still have is the collection and retention of context that AI doesn't have and can't easily get.
The larger and more varied the projects you have designed from scratch, the more you start to understand what programming/designing is really about.