This article is translated and adapted from: Multi-agent supervisor
https://langchain-ai.github.io/langgraph/tutorials/multi_agent/agent_supervisor/
1. Introduction
The previous example routed messages automatically based on the output of the initial research agent.
We can also choose to use an LLM to orchestrate the different agents.
Below, we will create an agent group with an agent supervisor to help delegate tasks.
To simplify the code in each agent node, we will use LangGraph's prebuilt create_react_agent. This and the other "advanced agent" notebooks are designed to show how to implement certain design patterns in LangGraph. If the pattern suits your needs, we recommend combining it with some of the other fundamental patterns described elsewhere in the docs for best performance.
2. Setup
First, let's install the required packages and set our API keys:
```python
%%capture --no-stderr
%pip install -U langgraph langchain langchain_openai langchain_experimental langsmith pandas
```

```python
import getpass
import os


def _set_if_undefined(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"Please provide your {var}")


_set_if_undefined("OPENAI_API_KEY")
_set_if_undefined("TAVILY_API_KEY")
```
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot problems and improve the performance of your LangGraph projects.
LangSmith lets you use trace data to debug, test, and monitor the LLM applications you build with LangGraph; read more about how to get started here.
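If you want the runs below to be traced, a minimal sketch is to set the standard LangSmith environment variables before building the graph (the project name here is just an illustrative placeholder):

```python
# Minimal sketch: enable LangSmith tracing through environment variables.
# `os` and `_set_if_undefined` come from the setup cell above; the project
# name is an arbitrary example.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
_set_if_undefined("LANGCHAIN_API_KEY")
os.environ["LANGCHAIN_PROJECT"] = "multi-agent-supervisor"  # optional
```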
3. Create Tools
For this example, you will have one agent do web research with a search engine and one agent create plots. Define the tools they will use below:
```python
from typing import Annotated

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_experimental.tools import PythonREPLTool

tavily_tool = TavilySearchResults(max_results=5)

# This executes code locally, which can be unsafe
python_repl_tool = PythonREPLTool()
```
API Reference: TavilySearchResults | PythonREPLTool
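As a quick sanity check (not part of the original tutorial), both tools follow the standard LangChain tool interface and can be invoked directly before any agent is involved. Remember that the REPL tool really executes whatever code it receives:

```python
# Sketch: exercise each tool on its own via the standard .invoke() interface.
# The REPL runs code locally, so only pass code you trust.
print(python_repl_tool.invoke("print(2 + 2)"))

# Returns up to `max_results` search hits as a list of dicts.
results = tavily_tool.invoke("What is LangGraph?")
print(len(results))
```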
4. Helper Utilities
Define a helper function that we will use to create the nodes in the graph: it takes care of converting the agent's response into a human message. This is important because that is how we will add it to the global state of the graph.
```python
from langchain_core.messages import HumanMessage


def agent_node(state, agent, name):
    result = agent.invoke(state)
    return {
        "messages": [HumanMessage(content=result["messages"][-1].content, name=name)]
    }
```
API Reference: HumanMessage
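To make the wrapping behavior concrete, here is a small illustration with a hypothetical stand-in agent (the `EchoAgent` class is not part of the tutorial; real agents are built with `create_react_agent` below). Whatever the agent answers comes back as a single `HumanMessage` tagged with the worker's name:

```python
# Hypothetical stand-in whose invoke() mimics the {"messages": [...]} shape
# returned by create_react_agent.
from langchain_core.messages import AIMessage


class EchoAgent:
    def invoke(self, state):
        return {"messages": state["messages"] + [AIMessage(content="All done.")]}


out = agent_node({"messages": [HumanMessage(content="hi")]}, EchoAgent(), name="Researcher")
print(out)
# -> {'messages': [HumanMessage(content='All done.', ..., name='Researcher')]}
```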
Create the Agent Supervisor
It will use function calling to choose the next worker node or to finish processing.
```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI
from pydantic import BaseModel
from typing import Literal

members = ["Researcher", "Coder"]
system_prompt = (
    "You are a supervisor tasked with managing a conversation between the"
    " following workers: {members}. Given the following user request,"
    " respond with the worker to act next. Each worker will perform a"
    " task and respond with their results and status. When finished,"
    " respond with FINISH."
)
# Our team supervisor is an LLM node. It just picks the next agent to process
# and decides when the work is completed
options = ["FINISH"] + members


class routeResponse(BaseModel):
    next: Literal[*options]


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="messages"),
        (
            "system",
            "Given the conversation above, who should act next?"
            " Or should we FINISH? Select one of: {options}",
        ),
    ]
).partial(options=str(options), members=", ".join(members))

llm = ChatOpenAI(model="gpt-4o")


def supervisor_agent(state):
    supervisor_chain = prompt | llm.with_structured_output(routeResponse)
    return supervisor_chain.invoke(state)
```
API Reference: ChatPromptTemplate | MessagesPlaceholder | ChatOpenAI
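You can also try the supervisor in isolation before wiring it into the graph. A sketch (which worker it picks depends on the model):

```python
# Sketch: call the supervisor on a minimal state and inspect its routing decision.
decision = supervisor_agent(
    {"messages": [HumanMessage(content="Plot the Fibonacci sequence up to n = 10.")]}
)
print(decision.next)  # one of "Researcher", "Coder", or "FINISH"
```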
5. Construct the Graph
We're ready to start building the graph. Below, define the state and the worker nodes using the functions we just defined.
```python
import functools
import operator
from typing import Sequence
from typing_extensions import TypedDict

from langchain_core.messages import BaseMessage
from langgraph.graph import END, StateGraph, START
from langgraph.prebuilt import create_react_agent


# The agent state is the input to each node in the graph
class AgentState(TypedDict):
    # The annotation tells the graph that new messages will always
    # be added to the current states
    messages: Annotated[Sequence[BaseMessage], operator.add]
    # The 'next' field indicates where to route to next
    next: str


research_agent = create_react_agent(llm, tools=[tavily_tool])
research_node = functools.partial(agent_node, agent=research_agent, name="Researcher")

# NOTE: THIS PERFORMS ARBITRARY CODE EXECUTION. PROCEED WITH CAUTION
code_agent = create_react_agent(llm, tools=[python_repl_tool])
code_node = functools.partial(agent_node, agent=code_agent, name="Coder")

workflow = StateGraph(AgentState)
workflow.add_node("Researcher", research_node)
workflow.add_node("Coder", code_node)
workflow.add_node("supervisor", supervisor_agent)
```
API Reference: BaseMessage | END | StateGraph | START | create_react_agent
Now connect all the edges in the graph.
```python
for member in members:
    # We want our workers to ALWAYS "report back" to the supervisor when done
    workflow.add_edge(member, "supervisor")

# The supervisor populates the "next" field in the graph state
# which routes to a node or finishes
conditional_map = {k: k for k in members}
conditional_map["FINISH"] = END
workflow.add_conditional_edges("supervisor", lambda x: x["next"], conditional_map)

# Finally, add entrypoint
workflow.add_edge(START, "supervisor")

graph = workflow.compile()
```
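Optionally, you can render the compiled graph to double-check the wiring. This assumes a notebook environment that can display images, and the Mermaid PNG renderer may need network access:

```python
# Sketch: visualize the supervisor/worker topology of the compiled graph.
from IPython.display import Image, display

display(Image(graph.get_graph().draw_mermaid_png()))
```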
6. Invoke the Team
With the graph created, we can now invoke it and see how it performs!
```python
for s in graph.stream(
    {
        "messages": [
            HumanMessage(content="Code hello world and print it to the terminal")
        ]
    }
):
    if "__end__" not in s:
        print(s)
        print("----")
```
```output
{'supervisor': {'next': 'Coder'}}
----
```

```output
Python REPL can execute arbitrary code. Use with caution.
```

````output
{'Coder': {'messages': [HumanMessage(content='The code to print "Hello, World!" in the terminal has been executed successfully. Here is the output:\n\n```\nHello, World!\n```', additional_kwargs={}, response_metadata={}, name='Coder')]}}
----
{'supervisor': {'next': 'FINISH'}}
----
````
```python
for s in graph.stream(
    {"messages": [HumanMessage(content="Write a brief research report on pikas.")]},
    {"recursion_limit": 100},
):
    if "__end__" not in s:
        print(s)
        print("----")
```
```output
{'supervisor': {'next': 'Researcher'}}
----
{'Researcher': {'messages': [HumanMessage(content='### Research ...s/pikas.pdf)\n7. NatureMapping Foundation - American Pika: [Link](http://naturemappingfoundation.org/natmap/facts/american_pika_712.html)\n8. USDA Forest Service - Conservation Status of Pikas: [Link](https://www.fs.usda.gov/psw/publications/millar/psw_2022_millar002.pdf)', additional_kwargs={}, response_metadata={}, name='Researcher')]}}
----
{'supervisor': {'next': 'Coder'}}
----
{'Coder': {'messages': [HumanMessage(content='### R...(http://naturemappingfoundation.org/natmap/facts/american_pika_712.html)\n8. USDA Forest Service - Conservation Status of Pikas: [Link](https://www.fs.usda.gov/psw/publications/millar/psw_2022_millar002.pdf)', additional_kwargs={}, response_metadata={}, name='Coder')]}}
----
{'supervisor': {'next': 'FINISH'}}
----
```
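If you only care about the final answer rather than the intermediate steps, the compiled graph can also be run with `invoke()`. A sketch (the exact content of the result depends on the LLM):

```python
# Sketch: run the same request without streaming and read the last message.
final_state = graph.invoke(
    {"messages": [HumanMessage(content="Write a brief research report on pikas.")]},
    {"recursion_limit": 100},
)
print(final_state["messages"][-1].content)
```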
2024-10-18 (Fri)