from CustomLLMMistral import CustomLLMMistral
from tools.robot_information import robot_information
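# Both imports above are project-local modules that are not shown in this listing:
# CustomLLMMistral is assumed to be a LangChain-compatible wrapper around a Mistral
# model, and robot_information a LangChain tool that returns details about the robot.
# A minimal, hypothetical sketch of such a tool (adapt names and data to your project):
#
#   from langchain_core.tools import tool
#
#   @tool
#   def robot_information(query: str) -> str:
#       """Return information about the robot."""
#       return "..."  # look up and return the robot details here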
import os
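# Optional: enable LangSmith tracing so every agent step is logged to the
# "InfiniFleetTrace" project. Remove these lines if you do not use LangSmith.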
os.environ["LANGCHAIN_TRACING_V2"] = "true" | |
os.environ["LANGCHAIN_PROJECT"] = f"InfiniFleetTrace" | |
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com" | |
os.environ["LANGCHAIN_API_KEY"] = "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" # Update to your API key | |
llm = CustomLLMMistral()

# info = robot_information.invoke("test")
# print(info)

tools = [robot_information]
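# System prompt: tells the model to solve the task step by step and to answer each
# step with a JSON blob containing "thought", "action" and "action_input". The
# {tool_names} and {tools} placeholders are filled in by create_json_chat_agent below.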
system=""" | |
You are designed to solve tasks. Each task requires multiple steps that are represented by a markdown code snippet of a json blob. | |
The json structure should contain the following keys: | |
thought -> your thoughts | |
action -> name of a tool | |
action_input -> parameters to send to the tool | |
These are the tools you can use: {tool_names}. | |
These are the tools descriptions: | |
{tools} | |
If you have enough information to answer the query use the tool "Final Answer". Its parameters is the solution. | |
If there is not enough information, keep trying. | |
""" | |
human=""" | |
Add the word "STOP" after each markdown snippet. Example: | |
```json | |
{{"thought": "<your thoughts>", | |
"action": "<tool name or Final Answer to give a final answer>", | |
"action_input": "<tool parameters or the final output"}} | |
``` | |
STOP | |
This is my query="{input}". Write only the next step needed to solve it. | |
Your answer should be based in the previous tools executions, even if you think you know the answer. | |
Remember to add STOP after each snippet. | |
These were the previous steps given to solve this query and the information you already gathered: | |
""" | |
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
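# Assemble the chat prompt: the system message, an optional chat history, the human
# message carrying the user's query, and the agent scratchpad that accumulates the
# intermediate tool calls and observations.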
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", human),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)
from langchain.agents import create_json_chat_agent, AgentExecutor
from langchain.memory import ConversationBufferMemory
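# Build the JSON chat agent. stop_sequence=["STOP"] makes generation stop once the
# model emits the STOP marker requested in the prompt, and template_tool_response
# controls how each tool observation is fed back to the model on the next step.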
agent = create_json_chat_agent(
    tools=tools,
    llm=llm,
    prompt=prompt,
    stop_sequence=["STOP"],
    template_tool_response="{observation}",
)
# Conversation memory that fills the "chat_history" placeholder in the prompt.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=True, handle_parsing_errors=True)
agent_executor.invoke({"input": "Who are you?"})
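# A hypothetical follow-up query: because the executor carries the memory created
# above, the first exchange is injected into "chat_history" on this second call.
result = agent_executor.invoke({"input": "What information can you give me about the robot?"})
print(result["output"])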