2. Tool Calling in LangGraph (How to handle tool calling errors)

1. Tool Definition

from langchain_core.tools import tool


@tool
def get_weather(location: str):
    """Call to get the current weather."""
    if location == "san francisco":
        raise ValueError("Input queries must be proper nouns")
    elif location == "San Francisco":
        return "It's 60 degrees and foggy."
    else:
        raise ValueError("Invalid input.")

2. A Graph Without Error Handling

2.1 Graph Definition

from typing import Literal

from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode

from langchain_openai import ChatOpenAI

tool_node = ToolNode([get_weather])

llm = ChatOpenAI(
    temperature=0,
    model="GLM-4",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/"
)
model_with_tools = llm.bind_tools([get_weather])


def should_continue(state: MessagesState):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END


def call_model(state: MessagesState):
    messages = state["messages"]
    response = model_with_tools.invoke(messages)
    return {"messages": [response]}


workflow = StateGraph(MessagesState)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)

workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", should_continue, ["tools", END])
workflow.add_edge("tools", "agent")

app = workflow.compile()
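
Conceptually, the compiled graph just cycles between the `agent` and `tools` nodes until the model stops emitting tool calls. A plain-Python sketch of that control flow (`FakeMessage`, `fake_model`, and `fake_tool` are illustrative stand-ins, not LangChain APIs):

```python
# Plain-Python sketch of the agent <-> tools cycle. FakeMessage and
# fake_model are illustrative stand-ins, not LangChain APIs.

class FakeMessage:
    def __init__(self, content, tool_calls=None):
        self.content = content
        self.tool_calls = tool_calls or []

def fake_model(messages):
    # Pretend the model requests the tool once, then answers directly.
    if not any(m.content.startswith("TOOL:") for m in messages):
        return FakeMessage(
            "",
            tool_calls=[{"name": "get_weather", "args": {"location": "San Francisco"}}],
        )
    return FakeMessage("It's 60 degrees and foggy.")

def fake_tool(args):
    return f"TOOL: It's 60 degrees and foggy in {args['location']}."

messages = [FakeMessage("what is the weather in San Francisco?")]
while True:
    response = fake_model(messages)   # "agent" node
    messages.append(response)
    if not response.tool_calls:       # should_continue routes to END
        break
    for call in response.tool_calls:  # "tools" node
        messages.append(FakeMessage(fake_tool(call["args"])))

print(messages[-1].content)  # It's 60 degrees and foggy.
```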

2.2 Graph Visualization

from IPython.display import Image, display

try:
    display(Image(app.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass


2.3 Error Handling

When you invoke this graph, you can see the model call the tool with the wrong input, causing the tool to raise an error. The prebuilt ToolNode has some built-in error handling: it catches the error and passes it back to the model as a tool message, so the model can retry.
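
That built-in behavior can be sketched in plain Python: catch the tool's exception and return it to the model as an observation string instead of letting it crash the graph. The message format below approximates the ToolNode output shown further down, and `get_weather_plain` is a stand-in for the tool above:

```python
# Sketch of ToolNode's default error handling: catch the tool's exception
# and return it to the model as an observation instead of crashing the
# graph. The message format approximates ToolNode's; it is not the exact API.

def get_weather_plain(location: str) -> str:
    # Stand-in for the get_weather tool defined above.
    if location == "San Francisco":
        return "It's 60 degrees and foggy."
    raise ValueError("Input queries must be proper nouns")

def safe_tool_call(tool, **kwargs) -> str:
    try:
        return tool(**kwargs)
    except Exception as e:
        # The model sees this string and can correct its input on retry.
        return f"Error: {e!r}\n Please fix your mistakes."

print(safe_tool_call(get_weather_plain, location="san francisco"))
# Error: ValueError('Input queries must be proper nouns')
#  Please fix your mistakes.
print(safe_tool_call(get_weather_plain, location="San Francisco"))
# It's 60 degrees and foggy.
```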

response = app.invoke(
    {"messages": [("human", "what is the weather in san francisco?")]},
)

for message in response["messages"]:
    string_representation = f"{message.type.upper()}: {message.content}\n"
    print(string_representation)

Output:

HUMAN: what is the weather in san francisco?

AI: 

TOOL: Error: ValueError('Input queries must be proper nouns')
 Please fix your mistakes.

AI: 

TOOL: It's 60 degrees and foggy.

AI: The current weather in San Francisco is 60 degrees and foggy.

3. A Graph With Error Handling

3.1 A Custom Fallback Strategy

from langchain_core.output_parsers import StrOutputParser
from pydantic import BaseModel, Field


class HaikuRequest(BaseModel):
    topic: list[str] = Field(
        max_length=3,
        min_length=3,
    )



@tool
def master_haiku_generator(request: HaikuRequest):
    """Generates a haiku based on the provided topics."""
    model = ChatOpenAI(
        temperature=0,
        model="GLM-4",
        openai_api_key="your api key",
        openai_api_base="https://open.bigmodel.cn/api/paas/v4/",
    )
    chain = model | StrOutputParser()
    topics = ", ".join(request.topic)
    haiku = chain.invoke(f"Write a haiku about {topics}")
    return haiku


tool_node = ToolNode([master_haiku_generator])


llm = ChatOpenAI(
    temperature=0,
    model="GLM-4",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/"
)

model_with_tools = llm.bind_tools([master_haiku_generator])


def should_continue(state: MessagesState):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END


def call_model(state: MessagesState):
    messages = state["messages"]
    response = model_with_tools.invoke(messages)
    return {"messages": [response]}


workflow = StateGraph(MessagesState)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)

workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", should_continue, ["tools", END])
workflow.add_edge("tools", "agent")

app = workflow.compile()

response = app.invoke(
    {"messages": [("human", "Write me an incredible haiku about water.")]},
    {"recursion_limit": 10},
)

for message in response["messages"]:
    string_representation = f"{message.type.upper()}: {message.content}\n"
    print(string_representation)

Output:

HUMAN: Write me an incredible haiku about water.

AI: 

TOOL: Error: 1 validation error for master_haiku_generator
request.topic
  Input should be a valid list [type=list_type, input_value={'Items': ['water', 'ocean', 'wave']}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.8/v/list_type
 Please fix your mistakes.

AI: 

TOOL: Error: 1 validation error for master_haiku_generator
request.topic
  Input should be a valid list [type=list_type, input_value={'Items': ['water', 'ocean', 'wave']}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.8/v/list_type
 Please fix your mistakes.

AI: 

TOOL: Water's embrace vast,
Ocean's salty caress,
Waves dance with the moon.

AI: Here is an incredible haiku about water:

Water's embrace vast,
Ocean's salty caress,
Waves dance with the moon.
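
The two validation errors in the trace above come from the model passing a dict (`{'Items': [...]}`) where `HaikuRequest` expects a bare list. The constraint can be sketched without pydantic (`validate_topic` is an illustrative stand-in for the schema check, not part of the graph):

```python
# Plain-Python sketch of the constraint HaikuRequest encodes: `topic` must
# be a list of exactly three items. validate_topic is an illustrative
# stand-in for the pydantic validation, not part of the graph.

def validate_topic(value):
    if not isinstance(value, list):
        raise TypeError(f"Input should be a valid list, got {type(value).__name__}")
    if len(value) != 3:
        raise ValueError("topic must contain exactly 3 items")
    return value

validate_topic(["water", "ocean", "wave"])  # valid: a bare list of three
try:
    # What the model actually sent: a dict wrapping the list.
    validate_topic({"Items": ["water", "ocean", "wave"]})
except TypeError as e:
    print(e)  # Input should be a valid list, got dict
```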

3.2 Using a Better Model

import json

from langchain_core.messages import AIMessage, ToolMessage
from langchain_core.messages.modifier import RemoveMessage


@tool
def master_haiku_generator(request: HaikuRequest):
    """Generates a haiku based on the provided topics."""
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(
        temperature=0,
        model="GLM-4",
        openai_api_key="your api key",
        openai_api_base="https://open.bigmodel.cn/api/paas/v4/"
    )
    chain = llm | StrOutputParser()
    topics = ", ".join(request.topic)
    haiku = chain.invoke(f"Write a haiku about {topics}")
    return haiku


def call_tool(state: MessagesState):
    tools_by_name = {master_haiku_generator.name: master_haiku_generator}
    messages = state["messages"]
    last_message = messages[-1]
    output_messages = []
    for tool_call in last_message.tool_calls:
        try:
            tool_result = tools_by_name[tool_call["name"]].invoke(tool_call["args"])
            output_messages.append(
                ToolMessage(
                    content=json.dumps(tool_result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
        except Exception as e:
            # Return the error if the tool call fails
            output_messages.append(
                ToolMessage(
                    content="",
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                    additional_kwargs={"error": e},
                )
            )
    return {"messages": output_messages}


llm = ChatOpenAI(
    temperature=0,
    model="GLM-4",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/"
)
model_with_tools = llm.bind_tools([master_haiku_generator])

better_model = ChatOpenAI(
    temperature=0,
    model="GLM-4-plus",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/"
)
better_model_with_tools = better_model.bind_tools([master_haiku_generator])


def should_continue(state: MessagesState):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END


def should_fallback(
    state: MessagesState,
) -> Literal["agent", "remove_failed_tool_call_attempt"]:
    messages = state["messages"]
    failed_tool_messages = [
        msg
        for msg in messages
        if isinstance(msg, ToolMessage)
        and msg.additional_kwargs.get("error") is not None
    ]
    if failed_tool_messages:
        return "remove_failed_tool_call_attempt"
    return "agent"


def call_model(state: MessagesState):
    messages = state["messages"]
    response = model_with_tools.invoke(messages)
    return {"messages": [response]}


def remove_failed_tool_call_attempt(state: MessagesState):
    messages = state["messages"]
    # Remove all messages from the most recent
    # instance of AIMessage onwards.
    last_ai_message_index = next(
        i
        for i, msg in reversed(list(enumerate(messages)))
        if isinstance(msg, AIMessage)
    )
    messages_to_remove = messages[last_ai_message_index:]
    return {"messages": [RemoveMessage(id=m.id) for m in messages_to_remove]}


# Fallback to a better model if a tool call fails
def call_fallback_model(state: MessagesState):
    messages = state["messages"]
    response = better_model_with_tools.invoke(messages)
    return {"messages": [response]}


workflow = StateGraph(MessagesState)

workflow.add_node("agent", call_model)
workflow.add_node("tools", call_tool)
workflow.add_node("remove_failed_tool_call_attempt", remove_failed_tool_call_attempt)
workflow.add_node("fallback_agent", call_fallback_model)

workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", should_continue, ["tools", END])
workflow.add_conditional_edges("tools", should_fallback)
workflow.add_edge("remove_failed_tool_call_attempt", "fallback_agent")
workflow.add_edge("fallback_agent", "tools")

app = workflow.compile()
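
The routing decision in `should_fallback` can be exercised without LangChain; `SimpleToolMessage` below is an illustrative stand-in for LangChain's `ToolMessage`:

```python
# Sketch of the should_fallback routing decision. SimpleToolMessage is an
# illustrative stand-in for LangChain's ToolMessage.

class SimpleToolMessage:
    def __init__(self, content, error=None):
        self.content = content
        self.additional_kwargs = {"error": error} if error is not None else {}

def route(messages):
    # Any tool message carrying an "error" triggers the cleanup node.
    failed = [
        m for m in messages
        if isinstance(m, SimpleToolMessage)
        and m.additional_kwargs.get("error") is not None
    ]
    return "remove_failed_tool_call_attempt" if failed else "agent"

print(route([SimpleToolMessage("ok")]))                         # agent
print(route([SimpleToolMessage("", error=ValueError("bad"))]))  # remove_failed_tool_call_attempt
```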

Visualization

try:
    display(Image(app.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass


Example

from pprint import pprint
stream = app.stream(
    {"messages": [("human", "Write me an incredible haiku about water.")]},
    {"recursion_limit": 10},
)

for chunk in stream:
    for key, value in chunk.items():
        # Node
        pprint(f"Node '{key}':")

    pprint("\n---\n")
pprint(value['messages'][0].content)

Output:

"Node 'agent':"
'\n---\n'
"Node 'tools':"
'\n---\n'
"Node 'remove_failed_tool_call_attempt':"
'\n---\n'
"Node 'fallback_agent':"
'\n---\n'
"Node 'tools':"
'\n---\n'
"Node 'agent':"
'\n---\n'
('"Ripples dance, still,\\nCurrents weave through silent streams,\\nPeace in '
 'flow\'s embrace."')

Reference: https://langchain-ai.github.io/langgraph/how-tos/tool-calling-errors/#custom-strategies
If you have any questions, feel free to ask in the comments.
