When building our own features with LangChain, if we want the model to process data and invoke our methods in a defined order, we need an Agent.
Since my use case involves only a single method, I use OpenAI Functions here. That said, I recommend switching to OpenAI Tools, which supports parallel tool calls.
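As a rough illustration of why parallelism is possible (mock data below, no API call; the payload shape is assumed from the chat-completions tools format): a single tools-style assistant message can carry several tool calls at once, whereas a functions-style message carries at most one.

```python
import json

# Mock of an OpenAI "tool_calls" payload: one assistant message may
# request several tool invocations, so they can be executed in parallel.
mock_tool_calls = [
    {"id": "call_1", "function": {"name": "get_word_length", "arguments": '{"word": "hello"}'}},
    {"id": "call_2", "function": {"name": "get_word_length", "arguments": '{"word": "world"}'}},
]

results = []
for call in mock_tool_calls:
    args = json.loads(call["function"]["arguments"])
    if call["function"]["name"] == "get_word_length":
        results.append(len(args["word"]))

print(results)  # both calls handled from one model round trip
```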
First, the indispensable model:
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
Next we define the method the agent can call:
from langchain.agents import tool

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

tools = [get_word_length]
Then we define what this AI is actually for via a custom system prompt; if we also want to use conversation history, we can define the prompt like this:
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are very powerful assistant, but bad at calculating lengths of words.",
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)
Next, bind our methods to the model:
llm_with_tools = llm.bind_functions(tools)
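Under the hood, binding converts each tool into an OpenAI function schema sent with every request. The dict below is a hand-written, simplified sketch of that shape (it is not produced by LangChain here), just to show what the model sees:

```python
# Simplified, hand-written sketch of the OpenAI function schema that a
# tool like get_word_length is turned into when bound to the model.
function_schema = {
    "name": "get_word_length",
    "description": "Returns the length of a word.",
    "parameters": {
        "type": "object",
        "properties": {"word": {"type": "string"}},
        "required": ["word"],
    },
}
```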
If you want the tool's result returned directly, you can add a custom output parser. Stepping through this part with a debugger is a good way to see the internal steps and understand the flow.
import json

from langchain_core.agents import AgentActionMessageLog, AgentFinish


def parse(output):
    # If no function was invoked, return to the user
    if "function_call" not in output.additional_kwargs:
        return AgentFinish(return_values={"output": output.content}, log=output.content)

    # Parse out the function call
    function_call = output.additional_kwargs["function_call"]
    name = function_call["name"]
    inputs = json.loads(function_call["arguments"])

    # If the cusResponse function was invoked, return to the user with the function inputs
    if name == "cusResponse":
        return AgentFinish(return_values=inputs, log=str(function_call))

    # Otherwise, return an agent action
    return AgentActionMessageLog(
        tool=name, tool_input=inputs, log="", message_log=[output]
    )
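To see the branching without calling the model, you can run the same decision logic against a mocked message. `MockOutput` and `classify` below are hypothetical stand-ins for the LLM output object and the parser, kept dependency-free:

```python
import json

class MockOutput:
    """Hypothetical stand-in for the model message that parse() receives."""
    def __init__(self, content, additional_kwargs):
        self.content = content
        self.additional_kwargs = additional_kwargs

def classify(output):
    # Mirrors parse(): no function_call -> final answer for the user
    if "function_call" not in output.additional_kwargs:
        return ("finish", output.content)
    function_call = output.additional_kwargs["function_call"]
    inputs = json.loads(function_call["arguments"])
    if function_call["name"] == "cusResponse":
        return ("finish", inputs)
    return ("action", function_call["name"], inputs)

plain = MockOutput("Hi there!", {})
tool_msg = MockOutput(
    "", {"function_call": {"name": "get_word_length", "arguments": '{"word": "hello"}'}}
)
print(classify(plain))     # plain text ends the run
print(classify(tool_msg))  # a function call becomes an agent action
```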
Create the agent:
from langchain.agents.format_scratchpad import format_to_openai_function_messages

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
        "chat_history": lambda x: x["chat_history"],
    }
    | prompt
    | llm_with_tools
    | parse
)
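The `|` chaining here is LCEL composition: each stage transforms the previous stage's output. Conceptually it works like the toy classes below (a minimal sketch, not LangChain's actual implementation):

```python
class Step:
    """Toy runnable: wraps a function and overloads | for chaining."""
    def __init__(self, fn):
        self.fn = fn
    def __or__(self, other):
        # Chaining produces a new Step that pipes this output into the next
        return Step(lambda x: other.fn(self.fn(x)))
    def invoke(self, x):
        return self.fn(x)

# Three stages mimicking prompt -> llm -> output parser
build_prompt = Step(lambda x: f"Question: {x['input']}")
fake_llm = Step(lambda p: p.upper())
parse_out = Step(lambda s: {"output": s})

chain = build_prompt | fake_llm | parse_out
print(chain.invoke({"input": "hi"}))  # {'output': 'QUESTION: HI'}
```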
Run the agent through an executor:
from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=False).with_config(
    {"run_name": "Agent"}
)
A normal (non-streaming) run:
result = agent_executor.invoke({"input": content, "chat_history": chat_history})
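Note that the streaming variants (`astream`, `astream_events`) return async generators, so they must be consumed with `async for` inside a coroutine. A minimal pure-Python illustration, with `fake_stream` as a hypothetical stand-in for the executor:

```python
import asyncio

async def fake_stream(text):
    # Stand-in for agent_executor.astream(): yields chunks one at a time
    for word in text.split():
        yield word

async def main():
    chunks = []
    async for chunk in fake_stream("streaming agent output"):
        chunks.append(chunk)
    return chunks

chunks = asyncio.run(main())
print(chunks)  # ['streaming', 'agent', 'output']
```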
Here, though, I want to stream the output:
async for event in agent_executor.astream_events(
    {"input": content, "chat_history": chat_history},
    version="v1",
):
    kind = event["event"]
    if kind == "on_chain_start":
        if (
            event["name"] == "Agent"
        ):  # Was assigned when creating the agent with `.with_config({"run_name": "Agent"})`
            print(
                f"Starting agent: {event['name']} with input: {event['data'].get('input')}"
            )
    elif kind == "on_chain_end":
        if (
            event["name"] == "Agent"
        ):  # Was assigned when creating the agent with `.with_config({"run_name": "Agent"})`
            print()
            print("--")
            print(
                f"Done agent: {event['name']} with output: {event['data'].get('output')['output']}"
            )
    elif kind == "on_chat_model_stream":
        content = event["data"]["chunk"].content
        if content:
            # Empty content in the context of OpenAI means
            # that the model is asking for a tool to be invoked.
            # So we only print non-empty content
            print(content, end="|")
    elif kind == "on_tool_start":
        print("--")
        print(
            f"Starting tool: {event['name']} with inputs: {event['data'].get('input')}"
        )
    elif kind == "on_tool_end":
        print(f"Done tool: {event['name']}")
        print(f"Tool output was: {event['data'].get('output')}")
        print("--")
That wraps up calling tools with LangChain agents. If you have any questions, or want the latest AI news and more AI resources, you can reach me via the official account below.