1. Background
Following up on the previous post (link), which introduced the basic working principles of AI agents, this section studies some of LangGraph's components.
Course link: (link)
Code link: (link)
2. Getting Started
2.1 Basics
In the previous lesson we built an agent from scratch, as shown in the figure:
This time we implement the agent with LangGraph, learning its components and features along the way. LangGraph's main job is to help describe and orchestrate the control flow. In other words, it lets you create cyclic graphs, and it comes with built-in persistence so that context is not lost.
LangGraph's three core concepts are nodes, edges, and conditional edges, as shown in the figure:
Nodes are agents or functions; edges connect these nodes; and conditional edges decide, at a decision point, which node to go to next. This is visualized in the figure below, followed by a small code sketch.
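To make the three concepts concrete, here is a minimal toy sketch of my own (not from the course code); the state, node, and routing-function names are invented for illustration:
# A toy graph, separate from the course code: one node plus one conditional edge.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class ToyState(TypedDict):
    count: int

def step(state: ToyState):
    return {"count": state["count"] + 1}   # a node is just a function (or an agent)

def should_continue(state: ToyState):
    return "loop" if state["count"] < 3 else "done"

g = StateGraph(ToyState)
g.add_node("step", step)
g.set_entry_point("step")                  # entry node
g.add_conditional_edges(                   # conditional edge: picks the next node
    "step",
    should_continue,
    {"loop": "step", "done": END},         # loop back to "step" or finish
)
print(g.compile().invoke({"count": 0}))    # {'count': 3}
The routing function's return value is looked up in the mapping to choose the next node; this is the same pattern the course agent uses below.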
2.2 Code Practice: Importing the Basic Packages
# Import the package for loading environment variables from a .env file
from dotenv import load_dotenv
_ = load_dotenv()
from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
import operator
from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, ToolMessage
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
# Set up the basic tool. Tavily is an online search tool with 1,000 free API calls per month; you need to register for your own API key.
tool = TavilySearchResults(max_results=4) #increased number of results
print(type(tool))
print(tool.name)
Output:
<class 'langchain_community.tools.tavily_search.tool.TavilySearchResults'>
tavily_search_results_json
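Before wiring the tool into a graph, it can be useful to call it directly as a sanity check. Here is a small sketch of my own (it assumes TAVILY_API_KEY is set in the .env file loaded above):
# Standalone check of the Tavily tool (my own example, assumes TAVILY_API_KEY is set)
results = tool.invoke({"query": "current weather in San Francisco"})
print(len(results))        # at most 4 results, per max_results above
print(results[0]["url"])   # each result is a dict with 'url' and 'content' keys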
2.3 Writing the Agent
If you are not familiar with typing annotations in Python, refer to this link: Support for type hints
# Define the AgentState class; its messages field holds a list of AnyMessage items
# operator.add means new updates are appended to messages rather than replacing them
class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], operator.add]
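To see what the operator.add reducer implies, here is a small illustration of my own (not from the course code): node outputs are appended to messages rather than overwriting them.
# My own illustration of the reducer: LangGraph conceptually merges a node's
# partial update into the state using operator.add for the annotated field.
from langchain_core.messages import AIMessage, HumanMessage

previous = {"messages": [HumanMessage(content="What is the weather in sf?")]}
update = {"messages": [AIMessage(content="Let me search for that.")]}
merged = previous["messages"] + update["messages"]   # list + list via operator.add
print([m.content for m in merged])
# ['What is the weather in sf?', 'Let me search for that.']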
# Throughout the run, the agent's state is crucial: it is what determines whether execution has finished
class Agent:
    def __init__(self, model, tools, system=""):
        self.system = system
        graph = StateGraph(AgentState)
        graph.add_node("llm", self.call_openai)      # add the LLM node
        graph.add_node("action", self.take_action)   # add the action node, run when tool calls exist
        graph.add_conditional_edges(
            "llm",
            self.exists_action,
            {True: "action", False: END}
        )  # after "llm", use exists_action to decide whether to act or finish
        graph.add_edge("action", "llm")   # route the result of "action" back to "llm"
        graph.set_entry_point("llm")      # set the graph's entry node
        self.graph = graph.compile()      # compile the constructed graph
        self.tools = {t.name: t for t in tools}
        self.model = model.bind_tools(tools)

    def exists_action(self, state: AgentState):
        result = state['messages'][-1]
        return len(result.tool_calls) > 0

    def call_openai(self, state: AgentState):
        messages = state['messages']
        if self.system:
            messages = [SystemMessage(content=self.system)] + messages
        message = self.model.invoke(messages)
        return {'messages': [message]}

    def take_action(self, state: AgentState):
        tool_calls = state['messages'][-1].tool_calls
        results = []
        for t in tool_calls:
            print(f"Calling: {t}")
            if t['name'] not in self.tools:  # check for bad tool name from LLM
                print("\n ....bad tool name....")
                result = "bad tool name, retry"  # instruct LLM to retry if bad
            else:
                result = self.tools[t['name']].invoke(t['args'])
            results.append(ToolMessage(tool_call_id=t['id'], name=t['name'], content=str(result)))
        print("Back to the model!")
        return {'messages': results}
2.4 Prompt and Instantiation
prompt = """You are a smart research assistant. Use the search engine to look up information. \
You are allowed to make multiple calls (either together or in sequence). \
Only look up information when you are sure of what you want. \
If you need to look up some information before asking a follow up question, you are allowed to do that!
"""
model = ChatOpenAI(model="gpt-3.5-turbo") #reduce inference cost
abot = Agent(model, [tool], system=prompt)
# Visualize the compiled graph
from IPython.display import Image
Image(abot.graph.get_graph().draw_png())
Output:
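Note that draw_png() relies on pygraphviz being installed. If it is not, a possible fallback (assuming your langchain-core version supports Mermaid rendering) is:
# Fallback visualization via Mermaid, assuming draw_mermaid_png() is available
from IPython.display import Image
Image(abot.graph.get_graph().draw_mermaid_png())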
2.5 Running the Agent
messages = [HumanMessage(content="What is the weather in sf?")]
result = abot.graph.invoke({"messages": messages})
Output:
Calling: {'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_ByYZ7fIJmNcWM3uWY3C56gqG'}
Back to the model!
result
Output:
{'messages': [HumanMessage(content='What is the weather in sf?'),
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_ByYZ7fIJmNcWM3uWY3C56gqG', 'function': {'arguments': '{"query":"weather in San Francisco"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 21, 'prompt_tokens': 153, 'total_tokens': 174}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-978e86b8-d1fe-4e7c-81e1-b5621cc3e8d1-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_ByYZ7fIJmNcWM3uWY3C56gqG'}]),
ToolMessage(content='[{\'url\': \'https://www.weatherapi.com/\', \'content\': "{\'location\': {\'name\': \'San Francisco\', \'region\': \'California\', \'country\': \'United States of America\', \'lat\': 37.78, \'lon\': -122.42, \'tz_id\': \'America/Los_Angeles\', \'localtime_epoch\': 1721562338, \'localtime\': \'2024-07-21 4:45\'}, \'current\': {\'last_updated_epoch\': 1721562300, \'last_updated\': \'2024-07-21 04:45\', \'temp_c\': 13.8, \'temp_f\': 56.9, \'is_day\': 0, \'condition\': {\'text\': \'Patchy rain nearby\', \'icon\': \'//cdn.weatherapi.com/weather/64x64/night/176.png\', \'code\': 1063}, \'wind_mph\': 10.5, \'wind_kph\': 16.9, \'wind_degree\': 270, \'wind_dir\': \'W\', \'pressure_mb\': 1014.0, \'pressure_in\': 29.93, \'precip_mm\': 0.01, \'precip_in\': 0.0, \'humidity\': 90, \'cloud\': 50, \'feelslike_c\': 12.6, \'feelslike_f\': 54.7, \'windchill_c\': 12.6, \'windchill_f\': 54.7, \'heatindex_c\': 13.8, \'heatindex_f\': 56.9, \'dewpoint_c\': 12.2, \'dewpoint_f\': 53.9, \'vis_km\': 10.0, \'vis_miles\': 6.0, \'uv\': 1.0, \'gust_mph\': 14.9, \'gust_kph\': 24.0}}"}, {\'url\': \'https://www.timeanddate.com/weather/usa/san-francisco/historic\', \'content\': \'San Francisco Weather History for the Previous 24 Hours Show weather for: Previous 24 hours July 18, 2024 July 17, 2024 July 16, 2024 July 15, 2024 July 14, 2024 July 13, 2024 July 12, 2024 July 11, 2024 July 10, 2024 July 9, 2024 July 8, 2024 July 7, 2024 July 6, 2024 July 5, 2024 July 4, 2024 July 3, 2024\'}, {\'url\': \'https://weatherspark.com/h/y/557/2024/Historical-Weather-during-2024-in-San-Francisco-California-United-States\', \'content\': \'San Francisco Temperature History 2024\\nHourly Temperature in 2024 in San Francisco\\nCompare San Francisco to another city:\\nCloud Cover in 2024 in San Francisco\\nDaily Precipitation in 2024 in San Francisco\\nObserved Weather in 2024 in San Francisco\\nHours of Daylight and Twilight in 2024 in San Francisco\\nSunrise & Sunset with Twilight and Daylight Saving Time in 2024 in San Francisco\\nSolar Elevation and Azimuth in 2024 in San Francisco\\nMoon Rise, Set & Phases in 2024 in San Francisco\\nHumidity Comfort Levels in 2024 in San Francisco\\nWind Speed in 2024 in San Francisco\\nHourly Wind Speed in 2024 in San Francisco\\nHourly Wind Direction in 2024 in San Francisco\\nAtmospheric Pressure in 2024 in San Francisco\\nData Sources\\n See all nearby weather stations\\nLatest Report — 3:56 PM\\nWed, Jan 24, 2024\\xa0\\xa0\\xa0\\xa013 min ago\\xa0\\xa0\\xa0\\xa0UTC 23:56\\nCall Sign KSFO\\nTemp.\\n60.1°F\\nPrecipitation\\nNo Report\\nWind\\n6.9 mph\\nCloud Cover\\nMostly Cloudy\\n1,800 ft\\nRaw: KSFO 242356Z 18006G19KT 10SM FEW015 BKN018 BKN039 16/12 A3004 RMK AO2 SLP171 T01560122 10156 20122 55001\\n While having the tremendous advantages of temporal and spatial completeness, these reconstructions: (1) are based on computer models that may have model-based errors, (2) are coarsely sampled on a 50 km grid and are therefore unable to reconstruct the local variations of many microclimates, and (3) have particular difficulty with the weather in some coastal areas, especially small islands.\\n We further caution that our travel scores are only as good as the data that underpin them, that weather conditions at any given location and time are unpredictable and variable, and that the definition of the scores reflects a particular set of preferences that may not agree with those of any particular reader.\\n 2024 Weather History in San Francisco California, United States\\nThe data for 
this report comes from the San Francisco International Airport.\'}, {\'url\': \'https://world-weather.info/forecast/usa/san_francisco/july-2024/\', \'content\': \'Extended weather forecast in San Francisco. Hourly Week 10 days 14 days 30 days Year. Detailed ⚡ San Francisco Weather Forecast for July 2024 - day/night 🌡️ temperatures, precipitations - World-Weather.info.\'}]', name='tavily_search_results_json', tool_call_id='call_ByYZ7fIJmNcWM3uWY3C56gqG'),
AIMessage(content='The current weather in San Francisco is 13.8°C (56.9°F) with patchy rain nearby. The wind speed is 10.5 mph (16.9 kph) coming from the west. The humidity is at 90%, and the visibility is 10.0 km (6.0 miles).', response_metadata={'token_usage': {'completion_tokens': 68, 'prompt_tokens': 1369, 'total_tokens': 1437}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-fb75ceb5-3af5-4d40-af4c-fbe9269c2bc6-0')]}
result['messages'][-1].content
Output:
'The current weather in San Francisco is 13.8°C (56.9°F) with patchy rain nearby. The wind speed is 10.5 mph (16.9 kph) coming from the west. The humidity is at 90%, and the visibility is 10.0 km (6.0 miles).'
As you can see, running the agent returns the weather query result shown above.
2.6 A Different Question: Two Locations
messages = [HumanMessage(content="What is the weather in SF and LA?")]
result = abot.graph.invoke({"messages": messages})
Output:
Calling: {'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_XvYRF6SENKCrHWPFjKGDszTS'}
Calling: {'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_Y6bXqsInHrzw4nQZNcv9HWao'}
Back to the model!
As you can see, the model made both search calls as expected (both tool calls were issued in a single LLM turn and executed together). Let's look at the final result:
result['messages'][-1].content
Output:
'The current weather in San Francisco is 56.9°F with patchy rain nearby. The wind speed is 16.9 kph coming from the west. The humidity is at 90%, and the visibility is 6.0 miles.\n\nIn Los Angeles, the current temperature is 74.9°F with clear skies. The wind speed is 4.3 kph from the south-southwest direction, and the humidity is at 54%. The visibility is also 6.0 miles.'
2.7 Trying Another Question
# Note, the query was modified to produce more consistent results.
# Results may vary per run and over time as search information and models change.
query = "Who won the super bowl in 2024? In what state is the winning team headquarters located? \
What is the GDP of that state? Answer each question."
messages = [HumanMessage(content=query)]
model = ChatOpenAI(model="gpt-4o") # requires more advanced model
abot = Agent(model, [tool], system=prompt)
result = abot.graph.invoke({"messages": messages})
Output:
Calling: {'name': 'tavily_search_results_json', 'args': {'query': 'who won the Super Bowl 2024'}, 'id': 'call_NsQLpXo7nwykpIE1LSpoJInz'}
Calling: {'name': 'tavily_search_results_json', 'args': {'query': 'where is the headquarters of the 2024 Super Bowl winning team located?'}, 'id': 'call_qRPcyagBjzfskrdDJ6EsjroT'}
Back to the model!
Calling: {'name': 'tavily_search_results_json', 'args': {'query': 'What is the GDP of Missouri?'}, 'id': 'call_cMusVClqywmUffVVSHeyzGWR'}
Back to the model!
As you can see, the model made three tool calls: the GDP lookup required a second pass through the graph, after the model had seen the results of the first two searches. Let's look at the result directly:
print(result['messages'][-1].content)
Output:
1. **Who won the Super Bowl in 2024?**
- The Kansas City Chiefs won the Super Bowl in 2024.
2. **In what state is the winning team's headquarters located?**
- The Kansas City Chiefs' headquarters is located in Missouri.
3. **What is the GDP of that state?**
- As of the 3rd quarter of 2023, the GDP of Missouri was $423.6 billion.
3. Summary
The LangGraph components essentially take the chained agent execution process, construct it as a graph, and execute it. To understand exactly how it runs and which functions get called, it is worth stepping through the code once with a debugger and watching the execution flow; a lighter-weight alternative is sketched below.
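As that lighter-weight alternative, you can stream the graph and print what each node returns at every step. A small sketch of my own, reusing the abot instance defined above:
# Stream the graph step by step (assumes the abot instance from section 2.4).
# With stream_mode="updates", each yielded item maps a node name ("llm" or
# "action") to the partial state update that node returned.
messages = [HumanMessage(content="What is the weather in sf?")]
for step in abot.graph.stream({"messages": messages}, stream_mode="updates"):
    for node_name, update in step.items():
        print(node_name, "->", update["messages"][-1])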