hleichsenring 2 hours ago
Most multi-agent demos run parallel discussions that look impressive but are impossible to audit: no trace, no decision owner, no reproducibility. That's fine for atomic tasks, but not for team-based decision-making.
I built Agent Smith (open source) around a different principle: commands execute sequentially on a flat linked-list pipeline, with runtime insertion. Roles (Architect, DevOps, Backend Dev, Security) discuss in structured rounds with explicit AGREE/OBJECTION verdicts. If consensus fails after 3 rounds, the system escalates to a human instead of forcing a decision.
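The round/verdict loop is simple to sketch. This is a hypothetical illustration, not the actual Agent Smith API; the names `Verdict`, `run_consensus`, and the role-as-callable shape are my assumptions here:

```python
from enum import Enum
from typing import Callable, Dict

class Verdict(Enum):
    AGREE = "AGREE"
    OBJECTION = "OBJECTION"

def run_consensus(roles: Dict[str, Callable[[str, int], Verdict]],
                  proposal: str, max_rounds: int = 3) -> dict:
    """Structured rounds: every role returns an explicit verdict each round.
    If any OBJECTION survives max_rounds, escalate to a human instead of
    forcing a decision."""
    verdicts: Dict[str, Verdict] = {}
    for round_no in range(1, max_rounds + 1):
        # One round: every role votes on the current proposal.
        verdicts = {name: vote(proposal, round_no) for name, vote in roles.items()}
        if all(v is Verdict.AGREE for v in verdicts.values()):
            return {"decision": "accepted", "round": round_no,
                    "verdicts": {k: v.value for k, v in verdicts.items()}}
    # Consensus failed within the round budget: hand off, don't decide.
    return {"decision": "escalate_to_human", "round": max_rounds,
            "verdicts": {k: v.value for k, v in verdicts.items()}}
```

The point of the explicit return value is that every outcome, including the escalation, carries the per-role verdicts that produced it.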
Every step is logged in an execution trail with command, duration, cost, and insertions. The question isn't how many agents you can run in parallel. It's whether you can explain to an auditor how the system reached its conclusion.
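The pipeline and trail described above can be sketched in a few lines. Again a hypothetical sketch, not the real implementation: `Node`, `Pipeline`, and `insert_after_current` are names I made up, and the trail entry is trimmed to command, duration, and the insertion flag (cost tracking omitted):

```python
import time
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    """One command in the flat linked-list pipeline."""
    name: str
    run: Callable[["Pipeline"], None]
    next: Optional["Node"] = None
    inserted: bool = False  # was this node spliced in at runtime?

class Pipeline:
    """Sequential execution over a flat linked list, with runtime
    insertion and an execution trail for auditing."""

    def __init__(self) -> None:
        self.head: Optional[Node] = None
        self.cursor: Optional[Node] = None  # node currently executing
        self.trail: list = []               # audit log, one entry per step

    def append(self, node: Node) -> None:
        if self.head is None:
            self.head = node
            return
        tail = self.head
        while tail.next:
            tail = tail.next
        tail.next = node

    def insert_after_current(self, node: Node) -> None:
        # Runtime insertion: splice the new command directly after the
        # one currently executing, so it runs next.
        node.inserted = True
        node.next = self.cursor.next
        self.cursor.next = node

    def run(self) -> None:
        self.cursor = self.head
        while self.cursor:
            start = time.perf_counter()
            self.cursor.run(self)
            self.trail.append({
                "command": self.cursor.name,
                "duration_s": time.perf_counter() - start,
                "inserted": self.cursor.inserted,
            })
            self.cursor = self.cursor.next
```

Because execution is strictly sequential, the trail is a total order: an auditor can replay exactly which command ran, when, and whether it was planned up front or inserted mid-run.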