Once you have a comprehensive plan together, or a fairly full context window, agents have a lot of trouble zooming out. This is particularly painful in some coding agents: since they load your existing code into context, they get weighted down heavily by what already exists (which makes them good at other tasks) versus what may be significantly simpler and better for net-new work or for the more nascent areas of your codebase.
>> Small decisions have to be made by design/eng based on discovery of product constraints, but communicating this to stakeholders is hard and time consuming and often doesn’t work.
This implies that a great deal of extraneous work and headaches result from the stakeholders not having a clear mental model of what they need the software to do, versus what is either secondary or could be disposed of with minor tweaks to some operational flow, usage guidance, or terms of service document. In my experience, even more valuable than having my own mental model of a large piece of software is having an interlocutor representing the stakeholders and end users, who understands the business model completely and has the authority to say: (A) We absolutely need to remove this constraint, or (B) If this is going to cost an extra 40 hours of coding, maybe we can find a workflow on our side that gets around it, or find a shortcut, and shelve this for now so you can move on with the rest of the project.
Clients usually have a poor understanding of where the constraints are and why some seemingly easy problems are very hard, or why some problems that seem hard to them are actually quite easy. I find that giving them a clear idea of the effort involved in each part of fulfilling a request often leads to me talking directly with someone who can make the call on whether it's actually necessary.
https://www.linkedin.com/pulse/tail-wagging-dog-tim-bryce/
Systems analysis is about to make a roaring return, as the need for human programmers wanes thanks to LLMs generating all the code.
There's something to be said for attention: the sustained window of attention provided by a human who wrote something, versus an LLM that has to guess at its intention anew every time it reboots.
Any coder who understands what they're building could theoretically be a systems analyst for the greater part of whatever their code is going to be embedded in. This is a loosely supported argument, but dropping the lower links in the chain and substituting LLMs for them is exactly where I think communication is bound to break down. Or: you can try to herd cats all day, but if you move up the chain and all you have below you is cats relying on LLMs, you've just shifted the same problem up to your level in the organization.
You ask the coding assistant for a brand new feature.
The coding assistant says there are two, three, or four different paths we could take, and maybe it recommends a specific one. Once you pick an option, the coding assistant can ask more specific questions.
The database looks like this right now. Should we modify this table, which would be the simplest solution, or create a new one? If you might eventually want a many-to-one relationship for this component, we should create a new table and reference it via a join table. Which approach do you prefer?
What about the frontend? We can surface controls for this on our existing pages; however, for reasons x, y, and z I'd recommend creating a new page for the CRUD operations on this new feature. Which would you prefer?
Now that we've gotten the big questions squared away, do you want to proceed with code generation, or would you like to dig deeper into either the backend or the frontend implementation?
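To make the database half of that hypothetical exchange concrete, here is a minimal sketch of the two options such an assistant might lay out. The document/label feature, the table names, and the use of SQLite are my own illustration, not anything taken from the discussion above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# An existing table the assistant found in the codebase (illustrative only).
cur.execute("CREATE TABLE document (id INTEGER PRIMARY KEY, title TEXT)")

# Option A: modify the existing table. Simplest change, fine as long as
# each document only ever needs a single label.
cur.execute("ALTER TABLE document ADD COLUMN label TEXT")

# Option B: a new table plus a join table, so the relationship can grow
# beyond one label per document later without another schema migration.
cur.execute("CREATE TABLE label (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
cur.execute("""
    CREATE TABLE document_label (
        document_id INTEGER NOT NULL REFERENCES document(id),
        label_id    INTEGER NOT NULL REFERENCES label(id),
        PRIMARY KEY (document_id, label_id)
    )
""")
conn.commit()
```

Option A is less work today; option B costs an extra table and a migration now but leaves room for the relationship to grow, and that is exactly the kind of one-sentence trade-off a stakeholder can weigh on the spot.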
Codex seems to be more thorough about it, but it needs a lot of babysitting; Claude will happily tell you it's done while missing half of them, but it will implement through the whole stack.
The tests will generally be crap from both of them.
So while I'm happy to have those, it doesn't replace development knowledge.
Claude will also happily kill security features to make it work.
I'd turn it around: this is the reason asking questions does work! When you don't know what you want, someone asking you for more specifics is sometimes very illuminating, whether that someone is real or not.
LLMs have played this role well for me in some situations, and atrociously in others.
We humans can imagine it in our minds because we have used PCs a lot. But it is still hard for us to anticipate how the actual system will feel for the end users. Therefore we build a prototype, and once we use the prototype we learn that, hey, this cannot possibly work productively, so we must try something else. The LLM does not try out a virtual prototype and then learn that it is hard to use. Unlike Bill Clinton, it doesn't feel our pain.
It can. It's totally able to refuse and then give me options for how it thinks it should do something.
In summary, the user research we have conducted thus far uncovered the central tension that underlies the use of coding assistants:
1. Most technical constraints require cross-functional alignment, but communicating them during stakeholder meetings is challenging due to the context gap and cognitive load
2. Code generation cannibalizes the implementation phase where additional constraints were previously caught, shifting the burden of discovery to code review — where it’s even harder and more expensive to resolve
How to get around this conundrum? The context problem must be addressed at its inception: during product meetings, where there is cross-functional presence and different ideas can be entertained without rework cost. If AI handles the implementation, then the planning phase has to absorb the discovery work that manual implementation used to provide.
They're emphasizing one thing too much and another not enough. First, the communication problem. Either the humans are getting the right information and communicating it, or they aren't. The AI has nothing to do with this; it's not preventing communication at all. If anything, it will now demand more of it, which is good.
Second, the "implementation feedback". Yes, 'additional constraints' were previously encountered by developers trying to implement asinine asks, and would force them to go back and ask for more feedback. But now the AI goes ahead and implements crap. And this is perfectly fine, because after it churns out the software in a day rather than a week, anyone who tries to use the software will see the problem, and then go back and ask for more detail. AI is making the old feedback loop faster. It's just not at implementation-time anymore.
How do you explain the constraints to the stakeholders if you didn't try to solve them yourself and you don't fully understand why they are constraints?
[edit] Just to add to this thought: It might be more useful to do the initial exploratory work oneself, to find out what's involved in fulfilling a request and where the constraints are, and then ask an AI to summarize that for a client along with an estimate of the work involved. Because to me, the pain point in those meetings is getting mired in explaining technical details about asynchronous operational/code processes or things like that, trying to convey the trade-offs involved.