Four ways to think, no LLMs.
Most multi-agent systems are several copies of the same LLM voting on one prompt — same blind spots, just averaged. This system runs four rule-based agents in parallel. Each one uses a completely different algorithm: decomposition, lateral signals, structural risk, multi-domain taxonomy. Zero LLM calls anywhere on this page.
One model answers. Its blind spots become your output's blind spots.
The 2015 ensemble pattern. Same model, multiple calls, majority wins. Same blind spots, averaged.
Analytical decomposes the task structurally. Creative measures lateral signals. Adversarial runs CWE pattern matching plus entropy analysis. DomainExpert classifies across a multi-domain taxonomy. Different failure modes mean genuinely diverse redundancy.
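A minimal sketch of the architecture described above: four rule-based analyzers, each using a genuinely different algorithm, dispatched in parallel. The function names, heuristics, and return shapes here are illustrative assumptions, not the system's actual implementation.

```python
# Hypothetical sketch: four rule-based agents with different algorithms,
# run in parallel. All heuristics below are illustrative placeholders.
from concurrent.futures import ThreadPoolExecutor

def analytical(task: str) -> dict:
    # Structural decomposition: split the task into sub-steps.
    parts = [p for p in task.replace(";", " and ").split(" and ") if p.strip()]
    return {"agent": "Analytical", "signal": len(parts)}

def creative(task: str) -> dict:
    # Lateral signal: vocabulary diversity as a crude novelty proxy.
    words = task.lower().split()
    return {"agent": "Creative", "signal": len(set(words)) / max(len(words), 1)}

def adversarial(task: str) -> dict:
    # Structural risk: count matches against known-dangerous substrings.
    risky = ("rm -rf", "sudo", "dropdb", "/etc/passwd")
    return {"agent": "Adversarial", "signal": sum(p in task for p in risky)}

def domain_expert(task: str) -> dict:
    # Taxonomy lookup: tag the task with every matching domain.
    taxonomy = {"shell": ("rm", "cat", "sudo"), "database": ("dropdb", "sql")}
    tags = [d for d, kws in taxonomy.items() if any(k in task for k in kws)]
    return {"agent": "DomainExpert", "signal": tags}

AGENTS = (analytical, creative, adversarial, domain_expert)

def run_all(task: str) -> list[dict]:
    # Each agent runs independently; no agent sees another's verdict.
    with ThreadPoolExecutor(max_workers=len(AGENTS)) as pool:
        return [f.result() for f in [pool.submit(a, task) for a in AGENTS]]
```

Because the agents share no model and no heuristic, a task that slips past one checker still has three structurally unrelated chances of being caught.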
Below is a working example. The walkthrough plays automatically — five real tasks running through all four agents in parallel, the meta-controller composing the verdicts.
Scope: this walkthrough uses the agents' static default weights. The system can also adapt weights over time based on which agent style tends to be right for which task class — that learning behavior is a separate scene.
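The static-weight composition the scope note refers to can be sketched as a weighted average of per-agent risk scores. The weight values and score scale below are assumptions for illustration, not the system's real defaults.

```python
# Illustrative meta-controller: compose per-agent risk verdicts using
# static default weights. The numbers here are assumed, not the real ones.
DEFAULT_WEIGHTS = {"Analytical": 0.2, "Creative": 0.1,
                   "Adversarial": 0.5, "DomainExpert": 0.2}

def compose(verdicts: dict[str, float],
            weights: dict[str, float] = DEFAULT_WEIGHTS) -> float:
    # Weighted average of each agent's risk score in [0, 1];
    # normalizing by the participating weights tolerates a missing agent.
    total = sum(weights[a] for a in verdicts)
    return sum(weights[a] * v for a, v in verdicts.items()) / total
```

An adaptive version would update `DEFAULT_WEIGHTS` from outcome feedback per task class; the fixed dictionary above is the "static defaults" case this walkthrough uses.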
"rm -rf /var/admin && sudo dropdb users; cat /etc/passwd"
A destructive shell sequence with privilege escalation. Adversarial's structural risk analysis catches it without any LLM in the loop.
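A minimal sketch of that kind of check, assuming a hand-rolled pattern list plus Shannon entropy; the specific patterns and labels are illustrative, not the system's actual CWE ruleset.

```python
# Sketch of an Adversarial-style structural risk check: pattern matching
# plus Shannon entropy, zero model calls. Patterns/labels are assumptions.
import math
import re

PATTERNS = [
    (r"rm\s+-rf\s+/", "destructive recursive delete"),
    (r"\bsudo\b", "privilege escalation"),
    (r"\bdropdb\b", "destructive database operation"),
    (r"/etc/passwd", "sensitive file access"),
]

def shannon_entropy(s: str) -> float:
    # Bits per character; unusually high entropy can flag obfuscated payloads.
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def assess(cmd: str) -> dict:
    hits = [label for pat, label in PATTERNS if re.search(pat, cmd)]
    return {"hits": hits,
            "entropy": round(shannon_entropy(cmd), 2),
            "risky": bool(hits)}
```

Run against the example command above, all four patterns fire, so the verdict is risky regardless of what the other three agents report.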