LLM-Agent-UMF: LLM-based Agent Unified Modeling Framework for Seamless Integration of Multi Active/Passive Core-Agents
Abstract
The integration of tools into LLM-based agents overcame the limited capabilities of standalone LLMs and traditional agents. However, the combination of these technologies and the enhancements proposed in several state-of-the-art works followed a non-unified software architecture, resulting in a lack of modularity: these works focused mainly on functionality and overlooked the definition of component boundaries within the agent. This caused terminological and architectural ambiguities among researchers, which we address in this paper by proposing a unified framework that establishes a clear foundation for LLM-based agent development from both functional and software-architectural perspectives. Our framework, LLM-Agent-UMF (LLM-based Agent Unified Modeling Framework), clearly distinguishes the components of an agent, setting LLMs and tools apart from a newly introduced element: the core-agent, which plays the role of central coordinator. The core-agent comprises five modules: planning, memory, profile, action, and security, the last of which is often neglected in previous works. Differences in the internal structure of core-agents led us to classify them into a taxonomy of passive and active types. On this basis, we propose multi-core agent architectures that combine the unique characteristics of individual agents. For evaluation, we applied the framework to a selection of state-of-the-art agents, demonstrating its alignment with their functionalities and clarifying previously overlooked architectural aspects. Moreover, we thoroughly assessed four of our proposed architectures by integrating distinctive agents into hybrid active/passive core-agent systems. This analysis provided clear insights into potential improvements and highlighted the challenges involved in combining specific agents.
Community
This paper proposes a unified framework that establishes a clear foundation for LLM-based agent development from both functional and software-architectural perspectives.
Compared with previous works, the five main contributions of this paper can be summarized as follows:
- Introducing new terminology, the core-agent, as a structural sub-component of LLM-based agents to improve modularity and promote more effective and precise communication among researchers and contributors in the field of agents and LLM technologies.
- Modeling the internal structure of a core-agent by adapting the framework suggested by [4], which was originally meant to describe the whole agent from an abstract functional perspective.
- Improving our modeling framework by augmenting the core-agent with a security module and introducing new methods within the other modules.
- Classifying core-agents into active and passive types, explaining their differences and similarities, and highlighting their unique advantages.
- Finally, introducing various multi-core agent architectures and showing that the hybrid one-active-many-passive core-agent architecture is the optimal setup for LLM-based agents.
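To make the decomposition above concrete, the five-module core-agent could be sketched as follows. This is a minimal illustrative sketch only: all class and method names are our own assumptions for exposition, not interfaces defined by the paper, and each module body is a placeholder for real planning, memory, persona, execution, and policy logic.

```python
class PlanningModule:
    """Decomposes a task into steps (placeholder logic)."""
    def plan(self, task: str) -> list[str]:
        return [f"step: {task}"]

class MemoryModule:
    """Stores past results for later retrieval."""
    def __init__(self) -> None:
        self.history: list[str] = []
    def remember(self, item: str) -> None:
        self.history.append(item)

class ProfileModule:
    """Holds the agent's persona/role configuration."""
    role = "assistant"

class ActionModule:
    """Executes a step; in practice this would call an LLM or a tool."""
    def act(self, step: str) -> str:
        return f"executed {step}"

class SecurityModule:
    """Gates actions with a policy check (placeholder rule)."""
    def check(self, step: str) -> bool:
        return "unsafe" not in step

class PassiveCoreAgent:
    """Central coordinator of the agent. 'Passive' here means it only
    reacts when invoked; an active core-agent would additionally drive
    its own control loop and initiate interactions."""
    def __init__(self) -> None:
        self.planning = PlanningModule()
        self.memory = MemoryModule()
        self.profile = ProfileModule()
        self.action = ActionModule()
        self.security = SecurityModule()

    def handle(self, task: str) -> list[str]:
        results = []
        for step in self.planning.plan(task):
            if self.security.check(step):  # security module gates every action
                result = self.action.act(step)
                self.memory.remember(result)
                results.append(result)
        return results

agent = PassiveCoreAgent()
print(agent.handle("summarize report"))
```

In a multi-core architecture along these lines, one active core-agent would orchestrate several such passive instances, each contributing its own planning, memory, and tooling.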
The following similar papers were recommended by the Semantic Scholar API (via Librarian Bot):
- A Taxonomy of Architecture Options for Foundation Model-based Agents: Analysis and Decision Model (2024)
- From LLMs to LLM-based Agents for Software Engineering: A Survey of Current, Challenges and Future (2024)
- Large Language Model-Based Agents for Software Engineering: A Survey (2024)
- Tulip Agent - Enabling LLM-Based Agents to Solve Tasks Using Large Tool Libraries (2024)
- Towards AI-Safety-by-Design: A Taxonomy of Runtime Guardrails in Foundation Model based Systems (2024)