Bread-F committed on
Commit 265826a · verified · 1 Parent(s): b9e68ee

Update README.md

Files changed (1)
  1. README.md +8 -532
README.md CHANGED
@@ -1,532 +1,8 @@
- <div id="top"></div>
- <div align="center">
- <img src="docs/imgs/lagent_logo.png" width="450"/>
-
- [![docs](https://img.shields.io/badge/docs-latest-blue)](https://lagent.readthedocs.io/en/latest/)
- [![PyPI](https://img.shields.io/pypi/v/lagent)](https://pypi.org/project/lagent)
- [![license](https://img.shields.io/github/license/InternLM/lagent.svg)](https://github.com/InternLM/lagent/tree/main/LICENSE)
- [![issue resolution](https://img.shields.io/github/issues-closed-raw/InternLM/lagent)](https://github.com/InternLM/lagent/issues)
- [![open issues](https://img.shields.io/github/issues-raw/InternLM/lagent)](https://github.com/InternLM/lagent/issues)
- ![Visitors](https://api.visitorbadge.io/api/visitors?path=InternLM%2Flagent%20&countColor=%23263759&style=flat)
- ![GitHub forks](https://img.shields.io/github/forks/InternLM/lagent)
- ![GitHub Repo stars](https://img.shields.io/github/stars/InternLM/lagent)
- ![GitHub contributors](https://img.shields.io/github/contributors/InternLM/lagent)
-
- </div>
-
- <p align="center">
- 👋 join us on <a href="https://twitter.com/intern_lm" target="_blank">𝕏 (Twitter)</a>, <a href="https://discord.gg/xa29JuW87d" target="_blank">Discord</a> and <a href="https://r.vansin.top/?r=internwx" target="_blank">WeChat</a>
- </p>
-
- ## Installation
-
- Install from source:
-
- ```bash
- git clone https://github.com/InternLM/lagent.git
- cd lagent
- pip install -e .
- ```
-
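- Lagent is also published on PyPI (see the badge above), so the latest release can alternatively be installed with pip:
-
- ```bash
- pip install lagent
- ```
-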
- ## Usage
-
- Lagent is inspired by the design philosophy of PyTorch. We expect the analogy to neural network layers to make the workflow clearer and more intuitive, so users only need to focus on creating layers and defining message passing between them in a Pythonic way. This is a short tutorial to get you quickly started with building multi-agent applications.
-
- ### Models as Agents
-
- Agents use `AgentMessage` for communication.
-
- ```python
- from typing import Dict, List
- from lagent.agents import Agent
- from lagent.schema import AgentMessage
- from lagent.llms import VllmModel, INTERNLM2_META
-
- llm = VllmModel(
-     path='Qwen/Qwen2-7B-Instruct',
-     meta_template=INTERNLM2_META,
-     tp=1,
-     top_k=1,
-     temperature=1.0,
-     stop_words=['<|im_end|>'],
-     max_new_tokens=1024,
- )
- # "Your answer must be exactly one of the three characters 典, 孝 or 急."
- system_prompt = '你的回答只能从“典”、“孝”、“急”三个字中选一个。'
- agent = Agent(llm, system_prompt)
-
- user_msg = AgentMessage(sender='user', content='今天天气情况')  # "How is the weather today?"
- bot_msg = agent(user_msg)
- print(bot_msg)
- ```
-
- ```
- content='急' sender='Agent' formatted=None extra_info=None type=None receiver=None stream_state=<AgentStatusCode.END: 0>
- ```
-
- ### Memory as State
-
- Both input and output messages are added to the memory of `Agent` in each forward pass. This happens in `__call__` rather than `forward`, as the following pseudocode shows:
-
- ```python
- def __call__(self, *message):
-     message = pre_hooks(message)
-     add_memory(message)                # input messages are recorded
-     message = self.forward(*message)
-     add_memory(message)                # and so is the output of `forward`
-     message = post_hooks(message)
-     return message
- ```
-
- Inspect the memory in two ways:
-
- ```python
- memory: List[AgentMessage] = agent.memory.get_memory()
- print(memory)
- print('-' * 120)
- dumped_memory: Dict[str, List[dict]] = agent.state_dict()
- print(dumped_memory['memory'])
- ```
-
- ```
- [AgentMessage(content='今天天气情况', sender='user', formatted=None, extra_info=None, type=None, receiver=None, stream_state=<AgentStatusCode.END: 0>), AgentMessage(content='急', sender='Agent', formatted=None, extra_info=None, type=None, receiver=None, stream_state=<AgentStatusCode.END: 0>)]
- ------------------------------------------------------------------------------------------------------------------------
- [{'content': '今天天气情况', 'sender': 'user', 'formatted': None, 'extra_info': None, 'type': None, 'receiver': None, 'stream_state': <AgentStatusCode.END: 0>}, {'content': '急', 'sender': 'Agent', 'formatted': None, 'extra_info': None, 'type': None, 'receiver': None, 'stream_state': <AgentStatusCode.END: 0>}]
- ```
-
- Clear the memory of this session (`session_id=0` by default):
-
- ```python
- agent.reset()  # clears session 0 by default
- ```
-
- ### Custom Message Aggregation
-
- `DefaultAggregator` is called under the hood to assemble `AgentMessage` objects and convert them to the OpenAI message format.
-
- ```python
- def forward(self, *message: AgentMessage, session_id=0, **kwargs) -> Union[AgentMessage, str]:
-     formatted_messages = self.aggregator.aggregate(
-         self.memory.get(session_id),
-         self.name,
-         self.output_format,
-         self.template,
-     )
-     llm_response = self.llm.chat(formatted_messages, **kwargs)
-     ...
- ```
-
- Implement a simple aggregator that accepts few-shot examples:
-
- ```python
- from typing import List, Union
- from lagent.memory import Memory
- from lagent.prompts import StrParser
- from lagent.agents.aggregator import DefaultAggregator
-
- class FewshotAggregator(DefaultAggregator):
-     def __init__(self, few_shot: List[dict] = None):
-         self.few_shot = few_shot or []
-
-     def aggregate(self,
-                   messages: Memory,
-                   name: str,
-                   parser: StrParser = None,
-                   system_instruction: Union[str, dict, List[dict]] = None) -> List[dict]:
-         _message = []
-         if system_instruction:
-             _message.extend(
-                 self.aggregate_system_intruction(system_instruction))  # spelling follows the upstream method name
-         _message.extend(self.few_shot)
-         messages = messages.get_memory()
-         for message in messages:
-             if message.sender == name:
-                 _message.append(
-                     dict(role='assistant', content=str(message.content)))
-             else:
-                 user_message = message.content
-                 if len(_message) > 0 and _message[-1]['role'] == 'user':
-                     _message[-1]['content'] += user_message
-                 else:
-                     _message.append(dict(role='user', content=user_message))
-         return _message
-
- agent = Agent(
-     llm,
-     aggregator=FewshotAggregator(
-         [
-             {"role": "user", "content": "今天天气"},  # "Today's weather"
-             {"role": "assistant", "content": "【晴】"},  # "[Sunny]"
-         ]
-     )
- )
- user_msg = AgentMessage(sender='user', content='昨天天气')  # "Yesterday's weather"
- bot_msg = agent(user_msg)
- print(bot_msg)
- ```
-
- ```
- content='【多云转晴,夜间有轻微降温】' sender='Agent' formatted=None extra_info=None type=None receiver=None stream_state=<AgentStatusCode.END: 0>
- ```
-
- ### Flexible Response Formatting
-
- In `AgentMessage`, the `formatted` field is reserved to store information parsed from the model output by `output_format`.
-
- ```python
- def forward(self, *message: AgentMessage, session_id=0, **kwargs) -> Union[AgentMessage, str]:
-     ...
-     llm_response = self.llm.chat(formatted_messages, **kwargs)
-     if self.output_format:
-         formatted_messages = self.output_format.parse_response(llm_response)
-         return AgentMessage(
-             sender=self.name,
-             content=llm_response,
-             formatted=formatted_messages,
-         )
-     ...
- ```
-
- Use a tool parser as follows:
-
- ````python
- from lagent.prompts.parsers import ToolParser
-
- # "Analyze step by step and write Python code to solve the following problem."
- system_prompt = "逐步分析并编写Python代码解决以下问题。"
- parser = ToolParser(tool_type='code interpreter', begin='```python\n', end='\n```\n')
- llm.gen_params['stop_words'].append('\n```\n')
- agent = Agent(llm, system_prompt, output_format=parser)
-
- user_msg = AgentMessage(
-     sender='user',
-     content='Marie is thinking of a multiple of 63, while Jay is thinking of a '
-     'factor of 63. They happen to be thinking of the same number. There are '
-     'two possibilities for the number that each of them is thinking of, one '
-     'positive and one negative. Find the product of these two numbers.')
- bot_msg = agent(user_msg)
- print(bot_msg.model_dump_json(indent=4))
- ````
-
- ````
- {
-     "content": "首先,我们需要找出63的所有正因数和负因数。63的正因数可以通过分解63的质因数来找出,即\\(63 = 3^2 \\times 7\\)。因此,63的正因数包括1, 3, 7, 9, 21, 和 63。对于负因数,我们只需将上述正因数乘以-1。\n\n接下来,我们需要找出与63的正因数相乘的结果为63的数,以及与63的负因数相乘的结果为63的数。这可以通过将63除以每个正因数和负因数来实现。\n\n最后,我们将找到的两个数相乘得到最终答案。\n\n下面是Python代码实现:\n\n```python\ndef find_numbers():\n # 正因数\n positive_factors = [1, 3, 7, 9, 21, 63]\n # 负因数\n negative_factors = [-1, -3, -7, -9, -21, -63]\n \n # 找到与正因数相乘的结果为63的数\n positive_numbers = [63 / factor for factor in positive_factors]\n # 找到与负因数相乘的结果为63的数\n negative_numbers = [-63 / factor for factor in negative_factors]\n \n # 计算两个数的乘积\n product = positive_numbers[0] * negative_numbers[0]\n \n return product\n\nresult = find_numbers()\nprint(result)",
-     "sender": "Agent",
-     "formatted": {
-         "tool_type": "code interpreter",
-         "thought": "首先,我们需要找出63的所有正因数和负因数。63的正因数可以通过分解63的质因数来找出,即\\(63 = 3^2 \\times 7\\)。因此,63的正因数包括1, 3, 7, 9, 21, 和 63。对于负因数,我们只需将上述正因数乘以-1。\n\n接下来,我们需要找出与63的正因数相乘的结果为63的数,以及与63的负因数相乘的结果为63的数。这可以通过将63除以每个正因数和负因数来实现。\n\n最后,我们将找到的两个数相乘得到最终答案。\n\n下面是Python代码实现:\n\n",
-         "action": "def find_numbers():\n # 正因数\n positive_factors = [1, 3, 7, 9, 21, 63]\n # 负因数\n negative_factors = [-1, -3, -7, -9, -21, -63]\n \n # 找到与正因数相乘的结果为63的数\n positive_numbers = [63 / factor for factor in positive_factors]\n # 找到与负因数相乘的结果为63的数\n negative_numbers = [-63 / factor for factor in negative_factors]\n \n # 计算两个数的乘积\n product = positive_numbers[0] * negative_numbers[0]\n \n return product\n\nresult = find_numbers()\nprint(result)",
-         "status": 1
-     },
-     "extra_info": null,
-     "type": null,
-     "receiver": null,
-     "stream_state": 0
- }
- ````
-
- ### Consistency of Tool Calling
-
- `ActionExecutor` uses the same communication data structure as `Agent`, but requires the content of an input `AgentMessage` to be a dict containing:
-
- - `name`: tool name, e.g. `'IPythonInterpreter'`, `'WebBrowser.search'`.
- - `parameters`: keyword arguments of the tool API, e.g. `{'command': 'import math;math.sqrt(2)'}`, `{'query': ['recent progress in AI']}`.
-
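- For instance, such a message can be fed to an executor directly (a minimal sketch; the tool and parameter names follow the hook example below):
-
- ```python
- from lagent.actions import ActionExecutor, IPythonInteractive
- from lagent.schema import AgentMessage
-
- executor = ActionExecutor(actions=[IPythonInteractive()])
- tool_msg = AgentMessage(
-     sender='user',
-     content=dict(name='IPythonInteractive', parameters={'command': 'import math; print(math.sqrt(2))'}),
- )
- print(executor(tool_msg))
- ```
-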
- You can register custom hooks for message conversion.
-
- ```python
- from lagent.hooks import Hook
- from lagent.schema import ActionReturn, ActionStatusCode, AgentMessage
- from lagent.actions import ActionExecutor, IPythonInteractive
-
- class CodeProcessor(Hook):
-     def before_action(self, executor, message, session_id):
-         # convert a `ToolParser`-formatted message into the executor's dict format
-         message = message.copy(deep=True)
-         message.content = dict(
-             name='IPythonInteractive', parameters={'command': message.formatted['action']}
-         )
-         return message
-
-     def after_action(self, executor, message, session_id):
-         # unwrap the `ActionReturn` into a plain string response
-         action_return = message.content
-         if isinstance(action_return, ActionReturn):
-             if action_return.state == ActionStatusCode.SUCCESS:
-                 response = action_return.format_result()
-             else:
-                 response = action_return.errmsg
-         else:
-             response = action_return
-         message.content = response
-         return message
-
- executor = ActionExecutor(actions=[IPythonInteractive()], hooks=[CodeProcessor()])
- bot_msg = AgentMessage(
-     sender='Agent',
-     content='首先,我们需要...',  # "First, we need to..." (truncated thought)
-     formatted={
-         'tool_type': 'code interpreter',
-         'thought': '首先,我们需要...',
-         'action': 'def find_numbers():\n # 正因数\n positive_factors = [1, 3, 7, 9, 21, 63]\n # 负因数\n negative_factors = [-1, -3, -7, -9, -21, -63]\n \n # 找到与正因数相乘的结果为63的数\n positive_numbers = [63 / factor for factor in positive_factors]\n # 找到与负因数相乘的结果为63的数\n negative_numbers = [-63 / factor for factor in negative_factors]\n \n # 计算两个数的乘积\n product = positive_numbers[0] * negative_numbers[0]\n \n return product\n\nresult = find_numbers()\nprint(result)',
-         'status': 1
-     })
- executor_msg = executor(bot_msg)
- print(executor_msg)
- ```
-
- ```
- content='3969.0' sender='ActionExecutor' formatted=None extra_info=None type=None receiver=None stream_state=<AgentStatusCode.END: 0>
- ```
-
- **For convenience, Lagent provides `InternLMActionProcessor`, which is adapted to messages formatted by `ToolParser` as mentioned above.**
-
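- With it, the custom hook above is unnecessary for `ToolParser`-style messages, e.g.:
-
- ```python
- from lagent.hooks import InternLMActionProcessor
-
- executor = ActionExecutor(actions=[IPythonInteractive()], hooks=[InternLMActionProcessor()])
- ```
-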
- ### Dual Interfaces
-
- Lagent adopts a dual-interface design: almost every component (LLMs, actions, action executors...) has a corresponding asynchronous variant whose identifier is prefixed with 'Async'. Synchronous agents are recommended for debugging, and asynchronous ones for large-scale inference to make the most of idle CPU and GPU resources.
-
- However, ensure the internal consistency of agents: asynchronous agents should be equipped with asynchronous LLMs and asynchronous action executors that drive asynchronous tools.
-
- ```python
- from lagent.llms import VllmModel, AsyncVllmModel, LMDeployPipeline, AsyncLMDeployPipeline
- from lagent.actions import ActionExecutor, AsyncActionExecutor, WebBrowser, AsyncWebBrowser
- from lagent.agents import Agent, AsyncAgent, AgentForInternLM, AsyncAgentForInternLM
- ```
-
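- Putting a consistent asynchronous stack together (a minimal sketch, assuming an OpenAI key is configured as in the blogging example below):
-
- ```python
- import asyncio
- from lagent.agents import AsyncAgent
- from lagent.llms import AsyncGPTAPI
- from lagent.schema import AgentMessage
-
- async def main():
-     # an async agent must be driven by an async LLM
-     llm = AsyncGPTAPI(model_type='gpt-4o-2024-05-13', retry=5, max_new_tokens=1024)
-     agent = AsyncAgent(llm, 'You are a helpful assistant.')
-     # asynchronous agents are awaited; `session_id` works as in the synchronous API
-     reply = await agent(AgentMessage(sender='user', content='Hello'), session_id=0)
-     print(reply.content)
-
- asyncio.run(main())
- ```
-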
- ______________________________________________________________________
-
- ## Practice
-
- - **Try to implement `forward` instead of `__call__` in subclasses unless necessary.**
- - **Always include the `session_id` argument explicitly: it isolates memory, LLM requests and tool invocation (e.g. maintaining multiple independent IPython environments) under concurrency. See the sketch after this list.**
-
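- ```python
- # A sketch of session isolation, reusing the `agent` and the memory APIs from
- # the Usage section above: each `session_id` keeps its own history, so
- # concurrent dialogues never mix.
- agent(AgentMessage(sender='user', content='hi'), session_id=0)
- agent(AgentMessage(sender='user', content='hello'), session_id=1)
- print(agent.memory.get(0).get_memory())  # only the session-0 exchange
- print(agent.memory.get(1).get_memory())  # only the session-1 exchange
- ```
-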
- ### Single Agent
-
- Math agents that solve problems by programming:
-
- ````python
- from lagent.agents.aggregator import InternLMToolAggregator
-
- # reuses `VllmModel`, `INTERNLM2_META`, `ToolParser`, `ActionExecutor`,
- # `IPythonInteractive` and `CodeProcessor` from the sections above
- class Coder(Agent):
-     def __init__(self, model_path, system_prompt, max_turn=3):
-         super().__init__()
-         llm = VllmModel(
-             path=model_path,
-             meta_template=INTERNLM2_META,
-             tp=1,
-             top_k=1,
-             temperature=1.0,
-             stop_words=['\n```\n', '<|im_end|>'],
-             max_new_tokens=1024,
-         )
-         self.agent = Agent(
-             llm,
-             system_prompt,
-             output_format=ToolParser(
-                 tool_type='code interpreter', begin='```python\n', end='\n```\n'
-             ),
-             # `InternLMToolAggregator` is adapted to `ToolParser` for aggregating
-             # messages with tool invocations and execution results
-             aggregator=InternLMToolAggregator(),
-         )
-         self.executor = ActionExecutor([IPythonInteractive()], hooks=[CodeProcessor()])
-         self.max_turn = max_turn
-
-     def forward(self, message: AgentMessage, session_id=0) -> AgentMessage:
-         for _ in range(self.max_turn):
-             message = self.agent(message, session_id=session_id)
-             if message.formatted['tool_type'] is None:
-                 return message
-             message = self.executor(message, session_id=session_id)
-         return message
-
- coder = Coder('Qwen/Qwen2-7B-Instruct', 'Solve the problem step by step with assistance of Python code')
- query = AgentMessage(
-     sender='user',
-     content='Find the projection of $\\mathbf{a}$ onto $\\mathbf{b} = '
-     '\\begin{pmatrix} 1 \\\\ -3 \\end{pmatrix}$ if $\\mathbf{a} \\cdot \\mathbf{b} = 2.$'
- )
- answer = coder(query)
- print(answer.content)
- print('-' * 120)
- for msg in coder.state_dict()['agent.memory']:
-     print('*' * 80)
-     print(f'{msg["sender"]}:\n\n{msg["content"]}')
- ````
-
- ### Multiple Agents
-
- Asynchronous blogging agents that improve writing quality by self-refinement ([original AutoGen example](https://microsoft.github.io/autogen/0.2/docs/topics/prompting-and-reasoning/reflection/)):
-
- ```python
- import asyncio
- import os
- from lagent.agents import AsyncAgent
- from lagent.hooks import Hook
- from lagent.llms import AsyncGPTAPI
- from lagent.schema import AgentMessage
-
- os.environ['OPENAI_API_KEY'] = 'YOUR_API_KEY'
-
- class PrefixedMessageHook(Hook):
-     def __init__(self, prefix: str, senders: list = None):
-         self.prefix = prefix
-         self.senders = senders or []
-
-     def before_agent(self, agent, messages, session_id):
-         # prepend the prefix to messages from the listed senders
-         for message in messages:
-             if message.sender in self.senders:
-                 message.content = self.prefix + message.content
-
- class AsyncBlogger(AsyncAgent):
-     def __init__(self, model_path, writer_prompt, critic_prompt, critic_prefix='', max_turn=3):
-         super().__init__()
-         llm = AsyncGPTAPI(model_type=model_path, retry=5, max_new_tokens=2048)
-         self.writer = AsyncAgent(llm, writer_prompt, name='writer')
-         self.critic = AsyncAgent(
-             llm, critic_prompt, name='critic', hooks=[PrefixedMessageHook(critic_prefix, ['writer'])]
-         )
-         self.max_turn = max_turn
-
-     async def forward(self, message: AgentMessage, session_id=0) -> AgentMessage:
-         for _ in range(self.max_turn):
-             message = await self.writer(message, session_id=session_id)
-             message = await self.critic(message, session_id=session_id)
-         return await self.writer(message, session_id=session_id)
-
- blogger = AsyncBlogger(
-     'gpt-4o-2024-05-13',
-     writer_prompt="You are a writing assistant tasked to write an engaging blogpost. You try to generate the best blogpost possible for the user's request. "
-     "If the user provides critique, then respond with a revised version of your previous attempts",
-     critic_prompt="Generate critique and recommendations on the writing. Provide detailed recommendations, including requests for length, depth, style, etc.",
-     critic_prefix='Reflect and provide critique on the following writing. \n\n',
- )
- user_prompt = (
-     "Write an engaging blogpost on the recent updates in {topic}. "
-     "The blogpost should be engaging and understandable for a general audience. "
-     "It should have more than 3 paragraphs but no longer than 1000 words.")
- bot_msgs = asyncio.get_event_loop().run_until_complete(
-     asyncio.gather(
-         *[
-             blogger(AgentMessage(sender='user', content=user_prompt.format(topic=topic)), session_id=i)
-             for i, topic in enumerate(['AI', 'Biotechnology', 'New Energy', 'Video Games', 'Pop Music'])
-         ]
-     )
- )
- print(bot_msgs[0].content)
- print('-' * 120)
- for msg in blogger.state_dict(session_id=0)['writer.memory']:
-     print('*' * 80)
-     print(f'{msg["sender"]}:\n\n{msg["content"]}')
- print('-' * 120)
- for msg in blogger.state_dict(session_id=0)['critic.memory']:
-     print('*' * 80)
-     print(f'{msg["sender"]}:\n\n{msg["content"]}')
- ```
-
420
- A multi-agent workflow that performs information retrieval, data collection and chart plotting ([original LangGraph example](https://vijaykumarkartha.medium.com/multiple-ai-agents-creating-multi-agent-workflows-using-langgraph-and-langchain-0587406ec4e6))
421
-
422
- <div align="center">
423
- <img src="https://miro.medium.com/v2/resize:fit:1400/format:webp/1*ffzadZCKXJT7n4JaRVFvcQ.jpeg" width="850" />
424
- </div>
425
-
426
- ````python
427
- import json
428
- from lagent.actions import IPythonInterpreter, WebBrowser, ActionExecutor
429
- from lagent.agents.stream import get_plugin_prompt
430
- from lagent.llms import GPTAPI
431
- from lagent.hooks import InternLMActionProcessor
432
-
433
- TOOL_TEMPLATE = (
434
- "You are a helpful AI assistant, collaborating with other assistants. Use the provided tools to progress"
435
- " towards answering the question. If you are unable to fully answer, that's OK, another assistant with"
436
- " different tools will help where you left off. Execute what you can to make progress. If you or any of"
437
- " the other assistants have the final answer or deliverable, prefix your response with {finish_pattern}"
438
- " so the team knows to stop. You have access to the following tools:\n{tool_description}\nPlease provide"
439
- " your thought process when you need to use a tool, followed by the call statement in this format:"
440
- "\n{invocation_format}\\\\n**{system_prompt}**"
441
- )
442
-
443
- class DataVisualizer(Agent):
444
- def __init__(self, model_path, research_prompt, chart_prompt, finish_pattern="Final Answer", max_turn=10):
445
- super().__init__()
446
- llm = GPTAPI(model_path, key='YOUR_OPENAI_API_KEY', retry=5, max_new_tokens=1024, stop_words=["```\n"])
447
- interpreter, browser = IPythonInterpreter(), WebBrowser("BingSearch", api_key="YOUR_BING_API_KEY")
448
- self.researcher = Agent(
449
- llm,
450
- TOOL_TEMPLATE.format(
451
- finish_pattern=finish_pattern,
452
- tool_description=get_plugin_prompt(browser),
453
- invocation_format='```json\n{"name": {{tool name}}, "parameters": {{keyword arguments}}}\n```\n',
454
- system_prompt=research_prompt,
455
- ),
456
- output_format=ToolParser(
457
- "browser",
458
- begin="```json\n",
459
- end="\n```\n",
460
- validate=lambda x: json.loads(x.rstrip('`')),
461
- ),
462
- aggregator=InternLMToolAggregator(),
463
- name="researcher",
464
- )
465
- self.charter = Agent(
466
- llm,
467
- TOOL_TEMPLATE.format(
468
- finish_pattern=finish_pattern,
469
- tool_description=interpreter.name,
470
- invocation_format='```python\n{{code}}\n```\n',
471
- system_prompt=chart_prompt,
472
- ),
473
- output_format=ToolParser(
474
- "interpreter",
475
- begin="```python\n",
476
- end="\n```\n",
477
- validate=lambda x: x.rstrip('`'),
478
- ),
479
- aggregator=InternLMToolAggregator(),
480
- name="charter",
481
- )
482
- self.executor = ActionExecutor([interpreter, browser], hooks=[InternLMActionProcessor()])
483
- self.finish_pattern = finish_pattern
484
- self.max_turn = max_turn
485
-
486
- def forward(self, message, session_id=0):
487
- for _ in range(self.max_turn):
488
- message = self.researcher(message, session_id=session_id, stop_words=["```\n", "```python"]) # override llm stop words
489
- while message.formatted["tool_type"]:
490
- message = self.executor(message, session_id=session_id)
491
- message = self.researcher(message, session_id=session_id, stop_words=["```\n", "```python"])
492
- if self.finish_pattern in message.content:
493
- return message
494
- message = self.charter(message)
495
- while message.formatted["tool_type"]:
496
- message = self.executor(message, session_id=session_id)
497
- message = self.charter(message, session_id=session_id)
498
- if self.finish_pattern in message.content:
499
- return message
500
- return message
501
-
502
- visualizer = DataVisualizer(
503
- "gpt-4o-2024-05-13",
504
- research_prompt="You should provide accurate data for the chart generator to use.",
505
- chart_prompt="Any charts you display will be visible by the user.",
506
- )
507
- user_msg = AgentMessage(
508
- sender='user',
509
- content="Fetch the China's GDP over the past 5 years, then draw a line graph of it. Once you code it up, finish.")
510
- bot_msg = visualizer(user_msg)
511
- print(bot_msg.content)
512
- json.dump(visualizer.state_dict(), open('visualizer.json', 'w'), ensure_ascii=False, indent=4)
513
- ````
514
-
515
- ## Citation
516
-
517
- If you find this project useful in your research, please consider cite:
518
-
519
- ```latex
520
- @misc{lagent2023,
521
- title={{Lagent: InternLM} a lightweight open-source framework that allows users to efficiently build large language model(LLM)-based agents},
522
- author={Lagent Developer Team},
523
- howpublished = {\url{https://github.com/InternLM/lagent}},
524
- year={2023}
525
- }
526
- ```
527
-
528
- ## License
529
-
530
- This project is released under the [Apache 2.0 license](LICENSE).
531
-
532
- <p align="right"><a href="#top">🔼 Back to top</a></p>
 
+ title: Lagent
+ emoji: 🔥
+ colorFrom: pink
+ colorTo: red
+ sdk: spacse
+ pinned: false
+ license: apache-2.0
+ short_description: '1'