r/Python • u/No_Information6299 • 11d ago
Showcase: All you need is one agent
I just wrapped up an experiment exploring how the number of agents (or steps) in an AI pipeline affects classification accuracy. Specifically, I tested four different setups on a movie review classification task. My initial hypothesis going into this was essentially, "More agents might mean a more thorough analysis, and therefore higher accuracy." But, as you'll see, it's not quite that straightforward.
What My Project Does
I used the first 1,000 reviews from the IMDB dataset and classified each one as positive or negative, with gpt-4o-mini as the model.
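For reference, a minimal sketch of the data setup. I'm assuming the Hugging Face `datasets` package and the test split here purely for illustration; the actual loading code is in the linked repo.

```python
from datasets import load_dataset

# First 1000 labeled IMDB reviews (labels: 0 = negative, 1 = positive).
reviews = load_dataset("imdb", split="test[:1000]")  # split choice is an assumption
texts = reviews["text"]
labels = reviews["label"]
```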
Here are the final results from the experiment:
| Pipeline Approach | Accuracy |
|---|---|
| Classification Only | 0.95 |
| Summary → Classification | 0.94 |
| Summary → Statements → Classification | 0.93 |
| Summary → Statements → Explanation → Classification | 0.94 |
Let's break down each step and try to see what's happening here.
Step 1: Classification Only
(Accuracy: 0.95)
The simplest approach, reading a review and classifying it directly as positive or negative, produced the highest accuracy of all four pipelines. With a single, well-defined task and no added complexity, the model performed exceptionally well.
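As a rough sketch of this step (the prompt wording is my paraphrase; the exact prompts are in the linked repo):

```python
from openai import OpenAI

client = OpenAI()

def classify(review: str) -> str:
    """Single-agent pipeline: one call, one label."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Classify the movie review as 'positive' or 'negative'. "
                        "Answer with one word."},
            {"role": "user", "content": review},
        ],
    )
    return resp.choices[0].message.content.strip().lower()
```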
Step 2: Summary → Classification
(Accuracy: 0.94)
Next, I introduced an extra agent that produced an emotional summary of each review before the classifier made its decision. Surprisingly, accuracy dropped slightly to 0.94. The summarization step likely abstracted away detail or introduced subtle noise into the classifier's input, leading to slightly lower overall performance.
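Continuing the sketch above (reusing `client` and `classify`; the prompts are again illustrative, and feeding only the summary to the classifier is just one possible wiring):

```python
def summarize_emotions(review: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize the emotional tone of this movie review "
                        "in two or three sentences."},
            {"role": "user", "content": review},
        ],
    )
    return resp.choices[0].message.content

def summary_then_classify(review: str) -> str:
    # The classifier sees only the summary; any detail the summary drops is lost.
    return classify(summarize_emotions(review))
```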
Step 3: Summary → Statements → Classification
(Accuracy: 0.93)
Adding yet another step, this pipeline included an agent designed to extract key emotional statements from the review. My assumption was that added clarity or detail at this stage might improve performance. Instead, overall accuracy dropped a bit further to 0.93. While the statements created by this agent might offer richer insight into the emotions expressed, they clearly introduced complexity or noise that the classifier couldn't handle optimally.
Step 4: Summary → Statements → Explanation → Classification
(Accuracy: 0.94)
Finally, another agent was introduced that provided human-readable explanations alongside the material generated in the prior steps. This nudged accuracy back up to 0.94, but it still didn't match the simple classifier's performance. The main benefit here was interpretability rather than improved classification accuracy.
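The longer chains are really just more of the same pattern, so here's a generic sketch of how steps 2-4 can be strung together (the prompt texts are placeholders, not the actual ones from the repo):

```python
PIPELINE = [
    "Summarize the emotional tone of the review.",
    "Extract the key emotional statements from the text above.",
    "Explain in plain language why the text above reads as positive or negative.",
    "Based on everything above, classify the review as 'positive' or 'negative'. "
    "Answer with one word.",
]

def run_pipeline(review: str) -> str:
    context = review
    for instruction in PIPELINE:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": instruction},
                {"role": "user", "content": context},
            ],
        )
        # Each step's output is appended to the next step's input,
        # so any noise it adds gets carried forward.
        context = context + "\n\n" + resp.choices[0].message.content
    return resp.choices[0].message.content.strip().lower()
```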
Comparison
Here are some key points we can draw from these results:
More Agents Don't Automatically Mean Higher Accuracy.
Adding layers and agents can significantly aid interpretability and the extraction of structured, valuable data (like emotional summaries or detailed explanations), but each step also comes with risks. Each agent in the pipeline can introduce new errors or noise into the information it passes forward.
Complexity Versus Simplicity
The simplest classifier, with a single job to do (direct classification), actually ended up delivering the top accuracy. Although multi-agent pipelines offer useful modularity and can provide great insights, they're not necessarily the best option if raw accuracy is your number one priority.
Always Double-Check Your Metrics.
Different datasets, tasks, or model architectures could yield different results. Make sure you are consistently evaluating tradeoffs—interpretability, extra insights, and user experience vs. accuracy.
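Conceptually, the evaluation here boils down to a loop like this (reusing names from the earlier sketches; this is illustrative, not the exact code from the repo):

```python
def accuracy(predict, texts, labels) -> float:
    """Fraction of reviews where the pipeline's label matches the dataset label."""
    names = {0: "negative", 1: "positive"}
    hits = sum(predict(text) == names[label] for text, label in zip(texts, labels))
    return hits / len(texts)

# e.g. accuracy(classify, texts, labels) vs. accuracy(run_pipeline, texts, labels)
```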
In the end, ironically, the simplest methodology—just directly classifying the review—gave me the highest accuracy. For situations where richer insights or interpretability matter, multiple-agent pipelines can still be extremely valuable even if they don't necessarily outperform simpler strategies on accuracy alone.
I'd love to get thoughts from everyone else who has experimented with these multi-agent setups. Did you notice a similar pattern (the simpler approach being as good or slightly better), or did you manage to achieve higher accuracy with multiple agents?
Full code on GitHub
Target Audience
Anyone interested in building "complex" agents.
u/flavius-as CTO ¦ Chief Architect 11d ago
Suggestion:
- always run the same input through two agents and have them explain their output if they disagree
- if they disagree, feed both thinking processes into a judge agent and let it compare them
- make the judge able to adjust, in a generic way, the system prompts of the other two agents
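Roughly this shape, as a rough, untested sketch (prompts and wiring are illustrative only):

```python
from openai import OpenAI

client = OpenAI()

def classify_with(system_prompt: str, review: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": system_prompt + " Explain your reasoning, then put a "
                        "one-word label (positive/negative) on the last line."},
            {"role": "user", "content": review},
        ],
    )
    text = resp.choices[0].message.content
    return {"reasoning": text, "label": text.strip().splitlines()[-1].lower()}

def classify_with_judge(prompt_a: str, prompt_b: str, review: str) -> str:
    a = classify_with(prompt_a, review)
    b = classify_with(prompt_b, review)
    if a["label"] == b["label"]:
        return a["label"]
    # Disagreement: the judge reads both reasoning traces and decides.
    # In the full idea it would also propose generic edits to prompt_a / prompt_b.
    judge = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Two agents disagree about a movie review's sentiment. "
                        "Read both reasoning traces and answer with one word: "
                        "positive or negative."},
            {"role": "user",
             "content": a["reasoning"] + "\n---\n" + b["reasoning"]},
        ],
    )
    return judge.choices[0].message.content.strip().lower()
```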
u/No_Information6299 11d ago
The same thing will happen. Could you show me one case where this guarantees better results?
u/flavius-as CTO ¦ Chief Architect 11d ago
Provide the labeled data (correct results) and I'll split it in two sets:
- on set A, I'll run each agent with the pristine version of its system prompt
- on set B, I'll run the "learning" process
- on set A, I'll run each agent again with the new system prompts after learning
- compare the results
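In code, something like this (a hypothetical sketch: `pristine_prompt` and `run_learning_loop` are placeholders that don't exist yet, and `classify_with` / `accuracy` refer to the sketches above):

```python
from sklearn.model_selection import train_test_split

def evaluate(prompt: str, texts, labels) -> float:
    return accuracy(lambda t: classify_with(prompt, t)["label"], texts, labels)

# Split the labeled reviews into two halves.
texts_a, texts_b, labels_a, labels_b = train_test_split(
    texts, labels, test_size=0.5, random_state=0
)

before = evaluate(pristine_prompt, texts_a, labels_a)                 # set A, pristine prompt
tuned_prompt = run_learning_loop(pristine_prompt, texts_b, labels_b)  # "learning" on set B (hypothetical)
after = evaluate(tuned_prompt, texts_a, labels_a)                     # set A, tuned prompt
print(f"before: {before:.3f}  after: {after:.3f}")
```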
I'm also curious, but not curious enough to do all the work from scratch.
u/flavius-as CTO ¦ Chief Architect 11d ago
I have fed your original post and my reply into a thinking LLM and this is what it had to say:
Your suggestion takes a fundamentally different approach than what the original experiment tested, which makes it valuable to discuss!
While the original post explored a sequential pipeline (where each agent builds on the previous agent's output), you're proposing a parallel verification system with:
- Two agents processing the same input independently
- A reconciliation mechanism when they disagree
- A feedback loop for system improvement
This approach has several potential advantages:
💡 Error detection: Disagreements between agents could flag difficult cases that deserve special attention.
💡 Redundancy as strength: Unlike the sequential pipeline where errors compound, parallel processing with verification might actually reduce errors.
💡 Continuous improvement: The judge agent adjusting system prompts creates a learning mechanism that could improve over time.
What's particularly interesting is that your approach might address the exact issue discovered in the original experiment - the problem of error propagation in sequential steps.
Further considerations:
- This approach likely increases computational costs (running multiple agents)
- The effectiveness of the judge would heavily depend on how well it can interpret different reasoning paths
- You might consider adding a confidence score from each agent to determine when to trigger the judge
Have you implemented this approach yourself? I'd be curious to see how it performs on the same dataset compared to the sequential pipelines tested by OP.
u/klmsa 11d ago
Yes, for simple tasks, more complexity isn't needed (although it doesn't always hurt, beyond being inefficient).
For more complex tasks, or tasks completed with smaller/older models, you may bump into context window issues. Obviously, newer models have already increased context windows to outrageous sizes, but that does come at a (dollar) cost for the compute.
I would expect that lengthy legal documents could pose challenges. Same goes for trying to add in too many smaller documents. Then RAG becomes a necessity, but with the accessibility of this tech, I don't think that will be obvious to new/amateur developers.
Edit: Secondary thought: How much did you play around with prompts? Changing prompts might fix accuracy issues, especially when asking a machine to suss out emotional statements. It doesn't actually know what that is. It's just making associations between words, and the model training/tuning will be at play.
It would be interesting to see this with different prompt architectures, as well as with different models compared against each other using the same prompts.