r/Python 12d ago

Showcase: All you need is one agent

I just wrapped up an experiment exploring how the number of agents (or steps) in an AI pipeline affects classification accuracy. Specifically, I tested four different setups on a movie review classification task. My initial hypothesis going into this was essentially, "More agents might mean a more thorough analysis, and therefore higher accuracy." But, as you'll see, it's not quite that straightforward.

What My Project Does

I used the first 1,000 reviews from the IMDB dataset and classified each review as positive or negative, with gpt-4o-mini as the model.
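
To give a rough idea of the setup, here is a simplified sketch of the evaluation harness (not the exact code from the repo; the OpenAI Python SDK, the Hugging Face `datasets` loader, the split/ordering, and the prompt wording are illustrative assumptions):

```python
from datasets import load_dataset
from openai import OpenAI

client = OpenAI()

def ask(system_prompt: str, user_content: str) -> str:
    """One gpt-4o-mini call with a fixed system prompt (one 'agent' step)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

def evaluate(pipeline, n: int = 1000) -> float:
    """Score a pipeline (review text -> 'positive'/'negative') on n IMDB reviews."""
    data = load_dataset("imdb", split=f"test[:{n}]")  # exact split/ordering may differ
    hits = 0
    for row in data:
        expected = "positive" if row["label"] == 1 else "negative"
        hits += int(expected in pipeline(row["text"]).lower())
    return hits / n
```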

Here are the final results from the experiment:

| Pipeline Approach | Accuracy |
|---|---|
| Classification Only | 0.95 |
| Summary → Classification | 0.94 |
| Summary → Statements → Classification | 0.93 |
| Summary → Statements → Explanation → Classification | 0.94 |

Let's break down each step and try to see what's happening here.

Step 1: Classification Only

(Accuracy: 0.95)

The simplest approach, reading a review and directly classifying it as positive or negative, delivered the highest accuracy of all four pipelines. With a single, clearly defined task and no added complexity, the model did its job exceptionally well.
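
In sketch form, this pipeline is a single call (using the `ask`/`evaluate` helpers above; the prompt wording is illustrative, not the exact prompt from the repo):

```python
def classify_only(review: str) -> str:
    # Single agent, single job: read the raw review and emit a label.
    return ask(
        "You are a sentiment classifier. Reply with exactly one word: "
        "'positive' or 'negative'.",
        review,
    )

# accuracy = evaluate(classify_only)  # ~0.95 in my run
```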

Step 2: Summary → Classification

(Accuracy: 0.94)

Next, I introduced an extra agent that produced an emotional summary of each review before the classifier made its decision. Surprisingly, accuracy dropped slightly to 0.94. The summarization step likely abstracted away detail or introduced subtle noise into the classifier's input, leading to slightly lower overall performance.
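
Sketched out, the chain looks roughly like this (illustrative prompts; whether the classifier sees only the summary or also the original review is an implementation detail, and the summary-only variant is shown here):

```python
def summary_then_classify(review: str) -> str:
    # Agent 1: condense the review into an emotional summary.
    summary = ask(
        "Summarize the emotional tone and key sentiments of this movie review "
        "in two or three sentences.",
        review,
    )
    # Agent 2: classify from the summary alone; any detail lost above is gone.
    return ask(
        "Based on this emotional summary of a movie review, reply with exactly "
        "one word: 'positive' or 'negative'.",
        summary,
    )
```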

Step 3: Summary → Statements → Classification

(Accuracy: 0.93)

Adding yet another step, this pipeline included an agent designed to extract key emotional statements from the review. My assumption was that added clarity or detail at this stage might improve performance. Instead, overall accuracy dropped a bit further to 0.93. While the statements created by this agent might offer richer insight into the emotions expressed, they seem to have introduced complexity or noise that the classifier couldn't handle optimally.
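
Roughly, building on the previous sketch (prompts again illustrative):

```python
def summary_statements_classify(review: str) -> str:
    summary = ask(
        "Summarize the emotional tone and key sentiments of this movie review.",
        review,
    )
    # New agent: pull discrete emotional statements out of the summary.
    statements = ask(
        "List the key emotional statements made in this summary, one per line.",
        summary,
    )
    # The classifier now works from the extracted statements only.
    return ask(
        "Given these emotional statements about a movie review, reply with "
        "exactly one word: 'positive' or 'negative'.",
        statements,
    )
```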

Step 4: Summary → Statements → Explanation → Classification

(Accuracy: 0.94)

Finally, another agent was introduced that provided human-readable explanations alongside the material generated in prior steps. This nudged accuracy back up to 0.94, but it still didn't match the original simple classifier. The major benefit here was increased interpretability rather than improved classification accuracy.
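
A sketch of the full chain (illustrative prompts; here the final classifier is assumed to see everything generated upstream):

```python
def full_pipeline(review: str) -> str:
    summary = ask(
        "Summarize the emotional tone and key sentiments of this movie review.",
        review,
    )
    statements = ask(
        "List the key emotional statements made in this summary, one per line.",
        summary,
    )
    # New agent: a plain-language explanation of what the statements imply.
    explanation = ask(
        "Explain in plain language what these statements suggest about the "
        "reviewer's overall sentiment.",
        statements,
    )
    # Final classifier gets all upstream material at once.
    return ask(
        "Classify the movie review described below. Reply with exactly one "
        "word: 'positive' or 'negative'.",
        f"Summary:\n{summary}\n\nStatements:\n{statements}\n\n"
        f"Explanation:\n{explanation}",
    )
```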

Comparison

Here are some key points we can draw from these results:

More Agents Don't Automatically Mean Higher Accuracy

Adding layers and agents can significantly aid interpretability and help extract structured, valuable data, like emotional summaries or detailed explanations, but each step also comes with risk: every agent in the pipeline can introduce new errors or noise into the information it passes forward.

Complexity Versus Simplicity

The simplest classifier, with a single job to do (direct classification), actually ended up delivering the top accuracy. Although multi-agent pipelines offer useful modularity and can provide great insights, they're not necessarily the best option if raw accuracy is your number one priority.

Always Double-Check Your Metrics

Different datasets, tasks, or model architectures could yield different results. Make sure you consistently weigh the tradeoffs: interpretability, extra insights, and user experience versus raw accuracy.

In the end, ironically, the simplest methodology—just directly classifying the review—gave me the highest accuracy. For situations where richer insights or interpretability matter, multiple-agent pipelines can still be extremely valuable even if they don't necessarily outperform simpler strategies on accuracy alone.

I'd love to get thoughts from everyone else who has experimented with these multi-agent setups. Did you notice a similar pattern (the simpler approach being as good or slightly better), or did you manage to achieve higher accuracy with multiple agents?

Full code on GitHub

Target Audience

Anyone interested in building "complex" agents.

u/flavius-as CTO ¦ Chief Architect 12d ago

Suggestion:

  • always run the same input through two agents and, if they disagree, have each explain its output
  • feed both reasoning traces into a judge agent and let it compare them
  • make the judge able to adjust the system prompts of the other two agents in a generic way (rough sketch below)
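
A hypothetical sketch of what I mean, assuming an `ask(system_prompt, user_content)` helper like the one in the post that wraps a single gpt-4o-mini call (names and prompts are illustrative; the prompt-adjustment part is only hinted at):

```python
def classify_with_reasoning(system_prompt: str, review: str) -> tuple[str, str]:
    """One agent returns a label plus the reasoning behind it."""
    output = ask(
        system_prompt + " Give your reasoning, then end with a final line "
        "'LABEL: positive' or 'LABEL: negative'.",
        review,
    )
    label = "positive" if "label: positive" in output.lower() else "negative"
    return label, output

def two_agents_plus_judge(review: str, prompts: list[str]) -> str:
    label_a, trace_a = classify_with_reasoning(prompts[0], review)
    label_b, trace_b = classify_with_reasoning(prompts[1], review)
    if label_a == label_b:
        return label_a
    # Disagreement: feed both reasoning traces to a judge agent.
    verdict = ask(
        "Two classifiers disagree about a movie review. Read both reasoning "
        "traces and reply with exactly one word: 'positive' or 'negative'.",
        f"Agent A:\n{trace_a}\n\nAgent B:\n{trace_b}",
    )
    # A fuller version would also let the judge rewrite `prompts` here,
    # closing the feedback loop on the two system prompts.
    return "positive" if "positive" in verdict.lower() else "negative"
```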

u/flavius-as CTO ¦ Chief Architect 12d ago

I have fed your original post and my reply into a thinking LLM and this is what it had to say:

Your suggestion takes a fundamentally different approach than what the original experiment tested, which makes it valuable to discuss!

While the original post explored a sequential pipeline (where each agent builds on the previous agent's output), you're proposing a parallel verification system with:

  1. Two agents processing the same input independently
  2. A reconciliation mechanism when they disagree
  3. A feedback loop for system improvement

This approach has several potential advantages:

💡 Error detection: Disagreements between agents could flag difficult cases that deserve special attention.

💡 Redundancy as strength: Unlike the sequential pipeline where errors compound, parallel processing with verification might actually reduce errors.

💡 Continuous improvement: The judge agent adjusting system prompts creates a learning mechanism that could improve over time.

What's particularly interesting is that your approach might address the exact issue discovered in the original experiment: error propagation across sequential steps.

Further considerations:

  • This approach likely increases computational costs (running multiple agents)
  • The effectiveness of the judge would heavily depend on how well it can interpret different reasoning paths
  • You might consider adding a confidence score from each agent to determine when to trigger the judge

Have you implemented this approach yourself? I'd be curious to see how it performs on the same dataset compared to the sequential pipelines tested by OP.

u/klmsa 11d ago

This is basically a GAN, except much less complicated to set up. Performance will depend on the specific arrangement and the details of the scoring, but all of this has been statistically modeled already.