arxiv:2503.02682

MPO: Boosting LLM Agents with Meta Plan Optimization

Published on Mar 4
· Submitted by xwm on Mar 5
#2 Paper of the day

Abstract

Recent advancements in large language models (LLMs) have enabled LLM-based agents to successfully tackle interactive planning tasks. However, despite their successes, existing approaches often suffer from planning hallucinations and require retraining for each new agent. To address these challenges, we propose the Meta Plan Optimization (MPO) framework, which enhances agent planning capabilities by directly incorporating explicit guidance. Unlike previous methods that rely on complex knowledge, which either require significant human effort or lack quality assurance, MPO leverages high-level general guidance through meta plans to assist agent planning and enables continuous optimization of the meta plans based on feedback from the agent's task execution. Our experiments conducted on two representative tasks demonstrate that MPO significantly outperforms existing baselines. Moreover, our analysis indicates that MPO provides a plug-and-play solution that enhances both task completion efficiency and generalization capabilities in previously unseen scenarios.
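To make the core idea concrete, here is a minimal, illustrative sketch of how high-level guidance (a meta plan) could be injected into an agent's prompt at each step. The function names, prompt wording, model name, and the OpenAI-style client are assumptions for illustration only, not the interface used in the paper or its repository:

```python
# Minimal sketch of meta-plan-guided planning (illustrative; not the authors' implementation).
from openai import OpenAI

client = OpenAI()

def generate_meta_plan(task_description: str) -> str:
    """Ask a planner model for short, high-level guidance (a meta plan) for the task."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name (assumption)
        messages=[
            {"role": "system",
             "content": "Write concise, high-level guidance (3-5 steps) for an agent to follow."},
            {"role": "user", "content": task_description},
        ],
    )
    return resp.choices[0].message.content

def agent_step(task_description: str, meta_plan: str, history: list[str]) -> str:
    """One agent action, conditioned on the task, the meta plan, and the trajectory so far."""
    prompt = (
        f"Task: {task_description}\n"
        f"High-level guidance:\n{meta_plan}\n"
        "Trajectory so far:\n" + "\n".join(history) + "\n"
        "Next action:"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # the agent model itself is untouched; only its prompt gains the meta plan
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

Because the guidance enters only through the prompt, the underlying agent needs no retraining, which is the sense in which the framework is plug-and-play.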

Community

Paper author · Paper submitter

The implementation is open-sourced at https://github.com/WeiminXiong/MPO.
The key points we want to highlight for MPO (a rough usage sketch follows the list):

  1. Introduce explicit guidance to steer the agent's planning process, with the ability to refine that guidance based on agent feedback.
  2. Serve as a plug-and-play module that significantly enhances the performance of any agent across tasks without requiring retraining.
  3. Effectively improve the agent's generalization on out-of-distribution tests while boosting task completion efficiency.
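
As a rough illustration of points 1 and 2, the sketch below rolls an agent out under a meta plan and rewrites the plan when the trajectory fails. The environment interface (`env.reset` / `env.step`), the simple reject-and-rewrite loop, and all names are simplifying assumptions standing in for the actual optimization procedure described in the paper and the repository:

```python
# Rough sketch of refining a meta plan from execution feedback (illustrative only).

def run_episode(task: str, meta_plan: str, env, max_steps: int = 20):
    """Roll the agent out under a given meta plan; return (success, trajectory)."""
    history: list[str] = []
    obs = env.reset(task)                                # hypothetical environment interface
    history.append(f"Observation: {obs}")
    for _ in range(max_steps):
        action = agent_step(task, meta_plan, history)    # from the sketch above
        obs, done, success = env.step(action)            # hypothetical return signature
        history.append(f"Action: {action}\nObservation: {obs}")
        if done:
            return success, history
    return False, history

def refine_meta_plan(task: str, env, rounds: int = 3) -> str:
    """Keep a meta plan that leads to success; otherwise rewrite it using the failure trajectory."""
    meta_plan = generate_meta_plan(task)
    for _ in range(rounds):
        success, history = run_episode(task, meta_plan, env)
        if success:
            return meta_plan
        failure_evidence = "\n".join(history[-3:])       # last few steps as feedback
        meta_plan = generate_meta_plan(
            f"{task}\nThe previous guidance led to failure. Recent trajectory:\n"
            f"{failure_evidence}\nWrite improved high-level guidance."
        )
    return meta_plan
```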

Models citing this paper 0

Datasets citing this paper 0

Spaces citing this paper 0

Collections including this paper 5