Implementing "Time Travel" in LangGraph: Giving Chatbot Conversations "Undo" and "Retry" Capabilities

Based on the official LangGraph documentation tutorial: Part 6: Time Travel

Thanks to Grok 3 Think for helping translate and organize the original English documentation, and to Grok 3 for adding concise comments to the implementation code.

Why Do We Need "Time Travel"?

In real-world chatbot conversations, users may have needs such as:

  1. Exploring alternatives: a user may wonder what would have happened if a different option had been chosen at some point in the conversation;
  2. Correcting mistakes: if the bot misunderstood the user's intent or gave a wrong answer, the user may want to roll back to the state before the error occurred and steer the conversation again;
  3. Optimizing strategies: in complex tasks (e.g., an autonomous software engineer), the user may want to return to a key decision point and try a different approach to the problem.

These needs are hard to meet in a traditional linear conversation system; LangGraph's "time travel" feature provides a powerful solution.

How to Implement "Time Travel"?

LangGraph provides the get_state_history method, which makes it easy to fetch the conversation's historical checkpoints and operate on them. The steps are:

  1. Fetch the state history

    Call graph.get_state_history() to get the list of all historical checkpoints in the conversation. Each checkpoint records the conversation's state at a particular point in time.

  2. Pick a checkpoint

    From that list, choose the state you want to roll back to, for example the state just before a mistake occurred, or a key decision point.

  3. Resume and continue

    Restore the conversation state from the selected checkpoint, then continue from that point. You can supply new input, or let the bot handle the subsequent steps differently.

With these three steps, you can "rewind" or "branch" the conversation to meet all of the needs above.
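Before looking at the real LangGraph code below, the rewind-and-branch semantics of the three steps can be made concrete with a tiny plain-Python stand-in for the checkpointer. Note that `History` and `Checkpoint` here are hypothetical names invented for this sketch, not LangGraph classes:

```python
# A toy stand-in for LangGraph's checkpointing, to illustrate the
# list-history / pick-checkpoint / branch semantics. `History` and
# `Checkpoint` are illustrative names only, not part of LangGraph.
from dataclasses import dataclass, field

@dataclass
class Checkpoint:
    step: int
    messages: list

@dataclass
class History:
    checkpoints: list = field(default_factory=list)

    def save(self, messages):
        # Store a copy of the current message list as a new checkpoint
        self.checkpoints.append(Checkpoint(len(self.checkpoints), list(messages)))

# Simulate a conversation, saving a checkpoint after each turn
history = History()
messages = []
for turn in ["hi", "tell me about LangGraph", "thanks"]:
    messages.append(turn)
    history.save(messages)

# Step 1: list the history (newest first, as get_state_history does)
for cp in reversed(history.checkpoints):
    print(cp.step, cp.messages)

# Step 2: pick a checkpoint, e.g. the one holding exactly 2 messages
to_replay = next(cp for cp in history.checkpoints if len(cp.messages) == 2)

# Step 3: branch from it with a different continuation
branched = to_replay.messages + ["actually, explain checkpoints instead"]
print(branched)
```

Because each checkpoint stores a copy of the state, branching from an old checkpoint does not disturb the original timeline, which is exactly what makes "time travel" safe.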

Implementation Code

We use Ollama to call the llama3.1:latest model and wire up the Tavily AI search engine as a tool (the code follows the official LangGraph documentation tutorial). For a detailed walkthrough of the code, see the blog post 基于LangGraph、Groq和Tavily打造可以调用外部搜索引擎工具的对话机器人.

1. Import the required libraries

from typing import Annotated
from langchain_ollama import ChatOllama
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
import os
os.environ["LANGCHAIN_API_KEY"] = "your_langchain_api_key"  # Get this from smith.langchain.com
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["TAVILY_API_KEY"] = "your_tavily_api_key"  # Required by the Tavily search tool; get it from tavily.com

2. Build the state graph

# Define the state class with TypedDict; it holds the list of messages
class State(TypedDict):
    messages: Annotated[list, add_messages]  # message list; add_messages appends instead of overwriting

# Create the state-graph builder
graph_builder = StateGraph(State)

# Initialize the search tool, capped at 2 results
tool = TavilySearchResults(max_results=2)
tools = [tool]  # tool list
# Initialize the language model: llama3.1:latest with temperature 0.8
llm = ChatOllama(model="llama3.1:latest", 
                 temperature=0.8)
# Bind the tools to the language model
llm_with_tools = llm.bind_tools(tools)

# Chatbot node: takes the state and returns the model-generated message
def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

# Add the chatbot node to the graph
graph_builder.add_node("chatbot", chatbot)

# Create the tool node and add it to the graph
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

# Conditional edge: route from chatbot to tools when the model makes a tool call
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,  # condition function deciding whether to call the tools
)
# Edge from tools back to chatbot
graph_builder.add_edge("tools", "chatbot")
# Edge from the entry point to chatbot
graph_builder.add_edge(START, "chatbot")

# Initialize the in-memory checkpoint saver
memory = MemorySaver()
# Compile the graph with checkpointing enabled
graph = graph_builder.compile(checkpointer=memory)

3. Run the graph

# Configure a thread ID to track the session state
config = {"configurable": {"thread_id": "1"}}

# Stream the graph: process the user input and stream the results
events = graph.stream(
    {
        "messages": [  # input message list
            {
                "role": "user",  # user role
                "content": (  # the user's question
                    "I'm learning LangGraph. "
                    "Could you do some research on it for me?"
                ),
            },
        ],
    },
    config,  # pass in the config
    stream_mode="values",  # stream full state values
)

# Iterate over the event stream and print the latest message
for event in events:
    if "messages" in event:  # check whether the event contains messages
        event["messages"][-1].pretty_print()  # pretty-print the last message

The output is as follows:

================================ Human Message =================================

I'm learning LangGraph. Could you do some research on it for me?
================================== Ai Message ==================================
Tool Calls:
  tavily_search_results_json (ab102856-1069-4d55-b523-a1381e4401cc)
 Call ID: ab102856-1069-4d55-b523-a1381e4401cc
  Args:
    query: LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json

[{"url": "https://github.com/langchain-ai/langgraph", "content": "GitHub - langchain-ai/langgraph: Build resilient language agents as graphs. LangGraph is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows. Let's build a tool-calling ReAct-style agent that uses a search tool! The simplest way to create a tool-calling agent in LangGraph is to use create_react_agent: Define the tools for the agent to use Define the tools for the agent to use This means that after tools is called, agent node is called next. workflow.add_edge(\"tools\", 'agent') Normal edge: after the tools are invoked, the graph should always return to the agent to decide what to do next LangGraph adds the input message to the internal state, then passes the state to the entrypoint node, \"agent\"."}, {"url": "https://www.langchain.com/langgraph", "content": "Build and scale agentic applications with LangGraph Platform. Design agent-driven user experiences with LangGraph Platform's APIs. Quickly deploy and scale your application with infrastructure built for agents. LangGraph sets the foundation for how we can build and scale AI workloads — from conversational agents, complex task automation, to custom LLM-backed experiences that 'just work'. The next chapter in building complex production-ready features with LLMs is agentic, and with LangGraph and LangSmith, LangChain delivers an out-of-the-box solution to iterate quickly, debug immediately, and scale effortlessly.” LangGraph sets the foundation for how we can build and scale AI workloads — from conversational agents, complex task automation, to custom LLM-backed experiences that 'just work'. LangGraph Platform is a service for deploying and scaling LangGraph applications, with an opinionated API for building agent UXs, plus an integrated developer studio."}]
================================== Ai Message ==================================

Based on the search results, LangGraph is a library for building stateful, multi-actor applications with LLMs (Large Language Models). It allows users to create resilient language agents as graphs and can be used to build complex task automation, conversational agents, and custom LLM-backed experiences.

To use LangGraph, you need to define the tools that the agent will use and then create a react-style agent using the `create_react_agent` function. This function takes in two parameters: `tools` and `entrypoint`. The `tools` parameter is used to specify the tools that the agent will use, while the `entrypoint` parameter specifies the node where the agent will start.

Here's an example of how you can create a LangGraph application:

```python
import langgraph

# Define the tools for the agent to use
tools = ['tool1', 'tool2']

# Create a react-style agent using the create_react_agent function
agent = langgraph.create_react_agent(tools, 'entrypoint')

# Add an edge to the workflow after the tools are invoked
workflow.add_edge('tools', 'agent')
```

In this example, we first define the tools that the agent will use. We then create a react-style agent using the `create_react_agent` function, specifying the tools and entrypoint node. Finally, we add an edge to the workflow after the tools are invoked.

LangGraph also has a platform component called LangGraph Platform, which is used for deploying and scaling LangGraph applications. This platform provides an opinionated API for building agent UXs (user experiences) and includes an integrated developer studio.

Overall, LangGraph is a powerful tool for building complex AI workloads and can be used to create conversational agents, task automation, and custom LLM-backed experiences with ease.

4. Roll back to a specific historical state

# The state to replay; initialized to None
to_replay = None

# Iterate over the state history for this config
for state in graph.get_state_history(config):
    # Print the number of messages in the state and the next node to run
    print("Num Messages: ", len(state.values["messages"]), "Next: ", state.next)
    # Print a separator line
    print("-" * 80)
    # Check whether the state holds exactly 2 messages
    if len(state.values["messages"]) == 2:
        # Select this particular state for the replay
        to_replay = state  # keep it for the replay below

The printed result:

Num Messages:  4 Next:  ()
--------------------------------------------------------------------------------
Num Messages:  3 Next:  ('chatbot',)
--------------------------------------------------------------------------------
Num Messages:  2 Next:  ('tools',)
--------------------------------------------------------------------------------
Num Messages:  1 Next:  ('chatbot',)
--------------------------------------------------------------------------------
Num Messages:  0 Next:  ('__start__',)
--------------------------------------------------------------------------------

The code above rewinds the graph to the state with 2 messages, i.e., the state right after the chatbot node finished its first run, when the graph is about to execute the tools node.

We can print some information about the to_replay historical state:

print(to_replay.next)
# ('tools',)
print(to_replay.config)
# {'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1eff3262-dcab-6916-8001-9cd5e940f73b'}}
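Selecting by message count is just one heuristic. Since each snapshot also carries a `next` field, you could equally pick the checkpoint by the node that is about to run. A small sketch over mock snapshots (the `Snap` namedtuple is only a stand-in for the fields we use from LangGraph's StateSnapshot; the checkpoint IDs are made up):

```python
from collections import namedtuple

# Mock of the StateSnapshot fields used here (illustration only)
Snap = namedtuple("Snap", ["num_messages", "next", "config"])

# History as printed earlier: newest first
snapshots = [
    Snap(4, (), {"checkpoint_id": "d"}),
    Snap(3, ("chatbot",), {"checkpoint_id": "c"}),
    Snap(2, ("tools",), {"checkpoint_id": "b"}),
    Snap(1, ("chatbot",), {"checkpoint_id": "a"}),
    Snap(0, ("__start__",), {"checkpoint_id": "start"}),
]

# Pick the snapshot where the graph is about to execute the tools node
to_replay = next(s for s in snapshots if s.next == ("tools",))
print(to_replay.config)
```

Either way, what matters is that the selected snapshot's config contains the checkpoint_id, which is all the graph needs to resume from that point.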

5. Replay from the historical state

# Passing None as the input resumes execution from the checkpoint in to_replay.config
for event in graph.stream(None, to_replay.config, stream_mode="values"):
    if "messages" in event:
        event["messages"][-1].pretty_print()

The output is as follows (starting from the Ai Message generated after the first run of the chatbot node):

================================== Ai Message ==================================
Tool Calls:
  tavily_search_results_json (d21f8486-712f-4ca4-bac6-5ce429936826)
 Call ID: d21f8486-712f-4ca4-bac6-5ce429936826
  Args:
    query: LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json

[{"url": "https://github.com/langchain-ai/langgraph", "content": "GitHub - langchain-ai/langgraph: Build resilient language agents as graphs. LangGraph is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows. Let's build a tool-calling ReAct-style agent that uses a search tool! The simplest way to create a tool-calling agent in LangGraph is to use create_react_agent: Define the tools for the agent to use Define the tools for the agent to use This means that after tools is called, agent node is called next. workflow.add_edge(\"tools\", 'agent') Normal edge: after the tools are invoked, the graph should always return to the agent to decide what to do next LangGraph adds the input message to the internal state, then passes the state to the entrypoint node, \"agent\"."}, {"url": "https://www.langchain.com/langgraph", "content": "Build and scale agentic applications with LangGraph Platform. Design agent-driven user experiences with LangGraph Platform's APIs. Quickly deploy and scale your application with infrastructure built for agents. LangGraph sets the foundation for how we can build and scale AI workloads — from conversational agents, complex task automation, to custom LLM-backed experiences that 'just work'. The next chapter in building complex production-ready features with LLMs is agentic, and with LangGraph and LangSmith, LangChain delivers an out-of-the-box solution to iterate quickly, debug immediately, and scale effortlessly.” LangGraph sets the foundation for how we can build and scale AI workloads — from conversational agents, complex task automation, to custom LLM-backed experiences that 'just work'. LangGraph Platform is a service for deploying and scaling LangGraph applications, with an opinionated API for building agent UXs, plus an integrated developer studio."}]
================================== Ai Message ==================================

Based on the search results, it appears that LangGraph is a library for building stateful, multi-actor applications with LLMs (Large Language Models). It allows you to create agent and multi-agent workflows, making it suitable for complex tasks such as conversational agents, task automation, and custom LLM-backed experiences.

To use LangGraph, you can follow these steps:

1. Define the tools for the agent to use.
2. Create a react-style agent using `create_react_agent`.
3. Add an edge to the graph after the tools are invoked, so that the graph always returns to the agent node.

LangGraph also has a platform called LangGraph Platform, which provides a service for deploying and scaling LangGraph applications. It includes an opinionated API for building agent UXs (user experiences) and an integrated developer studio.

Overall, LangGraph seems like a powerful tool for building complex AI workloads, and its platform makes it easy to deploy and scale these applications.

Here are some key points to consider when learning LangGraph:

* Stateful, multi-actor applications
* LLM-backed experiences
* Conversational agents and task automation
* Agent-driven user experiences

By understanding these concepts and following the steps outlined above, you can start building complex AI workloads with LangGraph.

Summary

LangGraph's "time travel" feature lets you fetch the conversation's historical checkpoints via get_state_history() and pick one to restore and continue from. It addresses the user's need to roll back to fix a mistake or to explore alternative conversation paths. With it, your chatbot becomes smarter and more flexible, giving users a better interactive experience.
