However, AI agents don't share these problems in the classical sense. Building agents is about context attention, relevance, and information density inside a single ordered buffer. The distributed part is creating an orchestrator that manages these things. At noetive.io we are currently working on the context-relevance part with our contextual broker, Semantik.
Good architecture, actor models, and collaboration patterns do not emerge magically from “more agents”.
Maybe what’s missing is the architect’s role.
If that could have been achieved, it would have been very profitable, too. There's no shortage of cheap, motivated interns/3rd-world devs, and the executive class prefers to rely on disposable resources even when it costs more overall.
The net result was always the opposite, though: one or two juniors on a leash could be productive, but more than that always caused more problems than it solved.
I'm seeing the same problem with agents. Multi-agent orchestration seems like a scam to manufacture demand for tokens.
You can certainly prove that mathematically, and if you give that proof to an LLM, it will give you the correct answer. That's a prompting technique, by the way, for improving the accuracy of an LLM's results.
That said, more recent LLMs have built-in approaches to answering this question.
Gemma4, for example:
-----
The user is asking for the number of 'r's in the word "strawberry".
* S
* T
* *R* (1)
* A
* W
* B
* E
* *R* (2)
* *R* (3)
* Y
There are 3 'r's.
-----
GPT-OSS-120b, for example:
-----
The user asks: "How many r's in strawberry." This is a straightforward counting problem: count the letter 'r' in the word "strawberry". The word "strawberry" contains letters: s t r a w b e r r y. Count r's: there are three 'r's? Let's check: s(1) t(2) r(3) a(4) w(5) b(6) e(7) r(8) r(9) y(10). Actually the word is "strawberry". Let's write out: s, t, r, a, w, b, e, r, r, y. So there are three r's: at positions 3, 8, 9. Yes, three r's. So answer: 3.
But we may also note that "r" appears three times.
Thus answer: 3.
We can provide a short answer.
Thus final: There are three r's in "strawberry".
-----
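For contrast, the count the two models walk through above is trivial when you operate on characters directly rather than tokens. A minimal Python sketch of the same check:

```python
word = "strawberry"

# Scan the characters one by one, exactly as the model traces do --
# no tokenization involved, so there is nothing to miscount.
count = sum(1 for ch in word if ch == "r")
print(count)  # → 3
```

The same result falls out of the built-in `word.count("r")`; the point is only that code sees letters, while an LLM sees tokens and has to reason its way to the spelling.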
In the end, in both cases, it's a back-and-forth with an LLM, and every request has its own lifecycle. So it's unfortunately at least a networked-systems problem. I think your point works with an infinite context window and one-shotting the whole repo every time... Maybe quantum LLM models will enable that.
This might be obvious to everyone, but it's a nice way for me to view it (sort of restating the non-waterfall (agile?) approach to specification discovery).
I.e., waterfall design without coding is too under-specified, hence the agile waterfall of using code iteratively to find an exact specification.