A Complete Guide to LangChain Tools: Extending AI Capabilities

Contents

  1. What Are Tools
  2. Core Concepts
  3. Defining Basic Tools
  4. Advanced Tool Features
  5. Tool Calling Workflow
  6. Using Tools with Agents
  7. Tool Types and Examples
  8. Tool Error Handling
  9. Best Practices
  10. Advanced Topics
  11. Complete Examples
  12. Reference Resources
  13. Summary

What Are Tools

Tools are a core LangChain component. They let an AI model interact with external systems and perform actions that go beyond text generation.

Key Characteristics

  • Extend model capabilities: give the LLM access to external resources such as APIs, databases, and file systems
  • Structured input and output: well-defined schemas keep data in the expected format
  • Model-driven decisions: the model decides when to use which tool and which arguments to pass
  • Flexible composition: multiple tools can be combined to build complex AI applications

Anatomy of a Tool

Every tool consists of two parts:

  1. Schema

    • Tool name
    • Tool description
    • Parameter definitions (typically JSON Schema)
  2. Executable function

    • A Python function or coroutine
    • The business logic that actually runs

Core Concepts

Tool Calling vs Function Calling

In LangChain, "tool calling" and "function calling" are used interchangeably. Both refer to the same concept: the model generates a structured request to invoke external functionality.

Two Main Usage Patterns

1. Define an input schema

Pass the tool's input schema to the chat model's tool-calling feature. Based on the user's input, the model generates a tool call containing the tool name and its arguments.

2. Execute the tool and return the result

Take the tool call produced by the model, perform the actual operation, and send the result back to the model as a ToolMessage so it can continue reasoning.

Workflow

User input → model reasoning → tool call generated → tool executed → result returned → model processes result → final response
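
Concretely, the "tool call generated" step produces a structured request rather than free text. In LangChain's Python API, each entry of response.tool_calls is a dict shaped roughly like this (the get_weather tool and the values are illustrative):

# One entry of response.tool_calls for a hypothetical get_weather tool
{
    "name": "get_weather",          # which tool to run
    "args": {"location": "Paris"},  # arguments generated by the model
    "id": "call_abc123",            # provider-assigned call ID
    "type": "tool_call"
}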

Defining Basic Tools

Python

Using the @tool decorator (the simplest approach)
from langchain.tools import tool

@tool
def search_database(query: str, limit: int = 10) -> str:
    """Search the customer database for records matching the query.

    Args:
        query: Search terms to look for
        limit: Maximum number of results to return
    """
    return f"Found {limit} results for '{query}'"

Notes:

  • Type hints are required - they define the tool's input schema (you can inspect the result, as shown below)
  • The docstring becomes the tool description - it helps the model understand when to use the tool
  • Keep the description short and specific - state clearly what the tool is for
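
As a quick sanity check, you can inspect the name, description, and argument schema that @tool derived from the signature and docstring; a minimal sketch using the search_database tool defined above:

# Inspect the schema generated from the type hints and docstring
print(search_database.name)         # "search_database"
print(search_database.description)  # taken from the docstring
print(search_database.args)         # JSON-schema-style dict describing query and limit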
Customizing the tool name and description
@tool(
    "calculator",
    description="Performs arithmetic calculations. Use this for any math problems."
)
def calc(expression: str) -> str:
    """Evaluate mathematical expressions."""
    return str(eval(expression))
Defining multiple tools
@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"

@tool
def get_weather(location: str) -> str:
    """Get weather information for a location."""
    return f"Weather in {location}: Sunny, 72°F"

JavaScript

import { tool } from "langchain";
import { z } from "zod";

const searchTool = tool(
  async ({ query, limit = 10 }) => {
    // run the search logic
    return `Found ${limit} results for '${query}'`;
  },
  {
    name: "search_database",
    description: "Search the customer database for records",
    schema: z.object({
      query: z.string().describe("Search terms to look for"),
      limit: z.number().default(10).describe("Maximum results to return"),
    }),
  }
);

Advanced Tool Features

1. Accessing Runtime Information (ToolRuntime)

Use ToolRuntime to access all runtime information through a single parameter:

from langchain.tools import tool
from langchain.tools.runtime import ToolRuntime

@tool
def get_user_preference(pref_name: str, tool_runtime: ToolRuntime) -> str:
    """Get user preference from state."""
    # Access the current graph state
    user_prefs = tool_runtime.state.get("user_preferences", {})
    return user_prefs.get(pref_name, "Not found")

ToolRuntime exposes:

  • state: the current graph state
  • context: contextual information
  • store: persistent storage
  • streaming: streaming output
  • config: configuration
  • tool_call_id: the ID of the current tool call

Note: the tool_runtime parameter is hidden from the model and does not appear in the tool's schema.

2. Updating State

Use Command to update the agent's state or control the graph's execution flow:

from langchain.tools import tool
from langchain.tools.runtime import ToolRuntime
from langgraph.types import Command

@tool
def update_preferences(pref_name: str, value: str, tool_runtime: ToolRuntime) -> Command:
    """Update user preferences."""
    return Command(
        update={
            "user_preferences": {
                **tool_runtime.state.get("user_preferences", {}),
                pref_name: value
            }
        }
    )

3. Attaching Artifacts to Tool Results

In RAG applications you can attach the raw retrieved documents as artifacts:

from langchain.tools import tool
from langchain_core.messages import ToolMessage

@tool
def retrieve_context(query: str) -> ToolMessage:
    """Retrieve context from a blog post."""
    # Run the retrieval
    docs = vector_store.similarity_search(query, k=3)

    # Build string content for the model to read
    content = "\n\n".join(doc.page_content for doc in docs)

    # Return a ToolMessage with the raw documents attached as an artifact
    return ToolMessage(
        content=content,
        tool_call_id="...",  # filled in automatically by the framework
        artifact=docs  # not sent to the model, but accessible to the application
    )
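
If you prefer not to construct the ToolMessage yourself, the @tool decorator also accepts response_format="content_and_artifact"; the tool then returns a (content, artifact) tuple and LangChain builds the ToolMessage for you. A minimal sketch of the same retrieval tool, assuming the vector_store from above:

from langchain.tools import tool

@tool(response_format="content_and_artifact")
def retrieve_context(query: str) -> tuple[str, list]:
    """Retrieve context from a blog post."""
    docs = vector_store.similarity_search(query, k=3)
    content = "\n\n".join(doc.page_content for doc in docs)
    return content, docs  # content goes to the model, docs become the artifact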

4. Constraining Search Parameters

You can require the LLM to supply additional search parameters by adding constrained arguments:

from typing import Literal

@tool
def retrieve_context(
    query: str,
    section: Literal["beginning", "middle", "end"]
) -> str:
    """Retrieve context from specific section of blog post."""
    # Branch the retrieval logic on the section parameter
    return f"Content from {section}: ..."

Tool Calling Workflow

Basic Flow

from langchain_openai import ChatOpenAI
from langchain.tools import tool

# 1. Define the tool
@tool
def get_weather(location: str) -> str:
    """Get current weather for a location."""
    return f"Weather in {location}: Sunny, 72°F"

# 2. Initialize the model and bind the tool
model = ChatOpenAI(model="gpt-4")
model_with_tools = model.bind_tools([get_weather])

# 3. Invoke the model
response = model_with_tools.invoke("What's the weather in Paris?")

# 4. Check whether the model requested a tool call
if response.tool_calls:
    tool_call = response.tool_calls[0]
    print(f"Tool: {tool_call['name']}")
    print(f"Args: {tool_call['args']}")

    # 5. Execute the tool
    result = get_weather.invoke(tool_call['args'])

    # 6. Create a ToolMessage and continue the conversation
    from langchain_core.messages import ToolMessage

    tool_message = ToolMessage(
        content=result,
        tool_call_id=tool_call['id']
    )

    # 7. Send the result back to the model
    final_response = model_with_tools.invoke([
        {"role": "user", "content": "What's the weather in Paris?"},
        response,
        tool_message
    ])

    print(final_response.content)

Forcing Tool Selection

Force the use of any tool
# Force the model to call a tool (any of the bound tools)
model_with_tools = model.bind_tools([tool_1, tool_2], tool_choice="any")
Force the use of a specific tool
# Force the model to call one specific tool
model_with_tools = model.bind_tools([tool_1, tool_2], tool_choice="tool_1")

Parallel Tool Calls

Most models that support tool calling enable parallel calls by default:

# The model will call multiple tools in parallel when appropriate
response = model_with_tools.invoke(
    "What's the weather in Boston and Tokyo?"
)

# response.tool_calls may contain multiple tool calls
for tool_call in response.tool_calls:
    print(f"Calling {tool_call['name']} with {tool_call['args']}")
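
When a response carries several tool calls, each call is executed and answered with its own ToolMessage before the conversation goes back to the model. A minimal sketch that dispatches on the tool name, assuming the bound tools are collected in a dict keyed by name:

from langchain_core.messages import ToolMessage

tools_by_name = {"get_weather": get_weather}  # add any other bound tools here

messages = [{"role": "user", "content": "What's the weather in Boston and Tokyo?"}, response]
for tool_call in response.tool_calls:
    selected_tool = tools_by_name[tool_call["name"]]
    result = selected_tool.invoke(tool_call["args"])
    messages.append(ToolMessage(content=str(result), tool_call_id=tool_call["id"]))

final_response = model_with_tools.invoke(messages)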
Disabling parallel calls
# Some providers (e.g. OpenAI, Anthropic) allow disabling parallel calls
model_with_tools = model.bind_tools(
    [get_weather],
    parallel_tool_calls=False
)

Streaming Tool Calls

# Tool calls are built up incrementally as chunks stream in
for chunk in model_with_tools.stream("What's the weather in Boston and Tokyo?"):
    for tool_chunk in chunk.tool_call_chunks:
        if name := tool_chunk.get("name"):
            print(f"Tool: {name}")
        if args := tool_chunk.get("args"):
            print(f"Args chunk: {args}")
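
Each streamed chunk carries only a fragment of the arguments. To recover the complete tool calls, accumulate the chunks (message chunks support addition) and read tool_calls from the result; a short sketch:

gathered = None
for chunk in model_with_tools.stream("What's the weather in Boston and Tokyo?"):
    gathered = chunk if gathered is None else gathered + chunk  # merge chunks

# After the stream ends, the accumulated message exposes fully parsed tool calls
for tool_call in gathered.tool_calls:
    print(tool_call["name"], tool_call["args"])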

Using Tools with Agents

Using create_agent

The create_agent function provided by LangChain handles the tool-execution loop automatically:

from langchain.agents import create_agent
from langchain.tools import tool
from langchain_openai import ChatOpenAI

# Define the tools
@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"

@tool
def get_weather(location: str) -> str:
    """Get weather information for a location."""
    return f"Weather in {location}: Sunny, 72°F"

# Create the agent
model = ChatOpenAI(model="gpt-4")
agent = create_agent(
    model,
    tools=[search, get_weather],
    system_prompt="You are a helpful assistant."
)

# Invoke the agent (the tool loop is handled automatically)
result = agent.invoke({
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}]
})

print(result["messages"][-1].content)

Agent Capabilities

Agents go beyond simply binding tools to a model. They provide:

  1. Sequential tool calls - multiple tool calls triggered by a single prompt
  2. Parallel tool calls - executed in parallel when appropriate
  3. Dynamic tool selection - tools are chosen based on previous results
  4. Tool retry logic - error handling and retries
  5. State persistence - state is preserved across tool calls

Agent Workflow

Input → [model decision] → need a tool?
                        ↓ yes
                    [call tool]
                        ↓
                    [execute tool]
                        ↓
                    [return result]
                        ↓
                    [model decision] → done? → output
                        ↑              ↓ no
                        └──────────────┘
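
Because create_agent returns a runnable graph, you can also stream it to watch this loop unfold step by step. A sketch, assuming the agent built above and LangGraph's stream_mode="values" (each step yields the full message list):

for step in agent.stream(
    {"messages": [{"role": "user", "content": "What's the weather in Paris?"}]},
    stream_mode="values",
):
    step["messages"][-1].pretty_print()  # the AI tool call, the tool result, then the final answer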

Tool Types and Examples

1. Server-side Tools

Some chat models (e.g. OpenAI, Anthropic, Gemini) provide built-in server-side tools:

from langchain_openai import ChatOpenAI

# OpenAI built-in tools
model = ChatOpenAI(model="gpt-4")
response = model.invoke(
    "Search the web for latest news about AI",
    tools=["web_search"]  # use the provider's built-in web search
)

2. Database Tools

from langchain.tools import tool
import sqlite3

@tool
def query_database(sql: str) -> str:
    """Execute SQL query on the database.

    Args:
        sql: The SQL query to execute
    """
    conn = sqlite3.connect('example.db')
    cursor = conn.cursor()
    cursor.execute(sql)
    results = cursor.fetchall()
    conn.close()
    return str(results)

3. API Call Tools

@tool
def fetch_user_data(user_id: int) -> str:
    """Fetch user data from external API.

    Args:
        user_id: The ID of the user to fetch
    """
    import requests
    response = requests.get(f"https://api.example.com/users/{user_id}")
    return str(response.json())

4. File System Tools

@tool
def read_file(file_path: str) -> str:
    """Read contents of a file.

    Args:
        file_path: Path to the file to read
    """
    with open(file_path, 'r') as f:
        return f.read()

@tool
def write_file(file_path: str, content: str) -> str:
    """Write content to a file.

    Args:
        file_path: Path to the file
        content: Content to write
    """
    with open(file_path, 'w') as f:
        f.write(content)
    return f"Successfully wrote to {file_path}"

5. Calculation Tools

@tool
def calculator(expression: str) -> str:
    """Evaluate a mathematical expression.

    Args:
        expression: Mathematical expression to evaluate
    """
    try:
        result = eval(expression)
        return f"Result: {result}"
    except Exception as e:
        return f"Error: {str(e)}"
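
Because eval runs arbitrary Python, it should never see untrusted input. A safer sketch that walks the expression's AST and only allows basic arithmetic (the safe_calculator name and the supported operator set are choices made for this example):

import ast
import operator

from langchain.tools import tool

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _safe_eval(node):
    """Recursively evaluate a restricted arithmetic AST."""
    if isinstance(node, ast.Expression):
        return _safe_eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_safe_eval(node.left), _safe_eval(node.right))
    if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
        return -_safe_eval(node.operand)
    raise ValueError("Unsupported expression")

@tool
def safe_calculator(expression: str) -> str:
    """Evaluate a basic arithmetic expression without using eval."""
    try:
        return f"Result: {_safe_eval(ast.parse(expression, mode='eval'))}"
    except (ValueError, SyntaxError, ZeroDivisionError) as e:
        return f"Error: {e}"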

6. Retrieval Tools (RAG)

from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

# Create the vector store
vectorstore = Chroma.from_documents(
    documents=docs,
    embedding=OpenAIEmbeddings()
)

@tool
def retrieve_context(query: str) -> str:
    """Search and return information from the knowledge base.

    Args:
        query: The search query
    """
    docs = vectorstore.similarity_search(query, k=3)
    return "\n\n".join(doc.page_content for doc in docs)

7. Prebuilt Retriever Tool

from langchain.tools.retriever import create_retriever_tool

retriever_tool = create_retriever_tool(
    retriever=vectorstore.as_retriever(),
    name="search_blog_posts",
    description="Search and return information about LLM blog posts."
)
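
The returned retriever tool behaves like any other tool and can be handed straight to an agent; a brief usage sketch reusing the create_agent pattern from earlier (assumes a chat model instance named model, as in the previous examples):

agent = create_agent(
    model,
    tools=[retriever_tool],
    system_prompt="Answer questions using the search_blog_posts tool."
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "What do the blog posts say about agents?"}]
})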

8. Custom Tools (JavaScript)

import { ChatOpenAI, customTool } from "@langchain/openai";

const codeTool = customTool(
  async (input) => {
    // execute the code
    return "Code executed successfully";
  },
  {
    name: "execute_code",
    description: "Execute a code snippet",
    format: { type: "text" },
  }
);

Tool Error Handling

Using the @wrap_tool_call decorator

from langchain.agents.middleware import wrap_tool_call
from langchain.tools.tool_node import ToolCallRequest
from langchain_core.messages import ToolMessage
from typing import Callable

@wrap_tool_call
def monitor_tool(
    request: ToolCallRequest,
    handler: Callable[[ToolCallRequest], ToolMessage],
) -> ToolMessage:
    """Monitor tool execution and handle errors."""
    print(f"Executing tool: {request.tool_call['name']}")
    print(f"Arguments: {request.tool_call['args']}")

    try:
        result = handler(request)
        print("Tool completed successfully")
        return result
    except Exception as e:
        print(f"Tool failed: {e}")
        # Return a custom error message
        return ToolMessage(
            content=f"Tool error: Please check your input. ({str(e)})",
            tool_call_id=request.tool_call['id']
        )

Custom Error Handling

@tool
def divide(a: float, b: float) -> str:
    """Divide two numbers.

    Args:
        a: Numerator
        b: Denominator
    """
    try:
        if b == 0:
            return "Error: Cannot divide by zero"
        result = a / b
        return f"Result: {result}"
    except Exception as e:
        return f"Error: {str(e)}"

Agent-Level Error Handling

from langchain.agents import create_agent

agent = create_agent(
    model,
    tools=[divide],
    middleware=[monitor_tool]  # apply the middleware
)

Best Practices

1. Write Clear Tool Descriptions

# ❌ A poor description
@tool
def search(query: str) -> str:
    """Search."""
    pass

# ✅ A good description
@tool
def search(query: str) -> str:
    """Search the customer database for records matching the query.

    Use this tool when you need to find information about customers,
    orders, or products in our database.

    Args:
        query: Search terms or keywords to look for
    """
    pass

2. Use Type Hints

# ✅ Always use type hints
@tool
def get_user(user_id: int) -> str:
    """Get user information."""
    pass

# ✅ Use Literal to constrain the allowed options
from typing import Literal

@tool
def sort_data(order: Literal["asc", "desc"]) -> str:
    """Sort data in ascending or descending order."""
    pass

3. Sensible Parameter Defaults

@tool
def search(query: str, limit: int = 10, offset: int = 0) -> str:
    """Search with pagination support."""
    pass

4. Return Meaningful Results

# ✅ Return structured, useful information
import json

@tool
def get_weather(location: str) -> str:
    """Get weather information."""
    return json.dumps({
        "location": location,
        "temperature": 72,
        "condition": "Sunny",
        "humidity": 45
    })

5. Security Considerations

@tool
def execute_sql(query: str) -> str:
    """Execute SQL query (READ-ONLY)."""
    # ✅ Restrict to read-only operations
    if any(keyword in query.upper() for keyword in ['DROP', 'DELETE', 'UPDATE', 'INSERT']):
        return "Error: Only SELECT queries are allowed"

    # Run the vetted query
    pass
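
When part of a query comes from user input, bind it as a parameter instead of interpolating it into the SQL string; a sketch using sqlite3 placeholders (the get_orders_by_status tool and the orders table are hypothetical):

import sqlite3

from langchain.tools import tool

@tool
def get_orders_by_status(status: str) -> str:
    """Look up orders with a given status (read-only, hypothetical schema)."""
    conn = sqlite3.connect('example.db')
    cursor = conn.cursor()
    # The value is bound by the driver, not concatenated into the SQL text
    cursor.execute("SELECT id, total FROM orders WHERE status = ?", (status,))
    rows = cursor.fetchall()
    conn.close()
    return str(rows)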

6. Tool Naming Conventions

# ✅ Start with a verb and describe the action clearly
@tool
def search_products(query: str) -> str:
    """Search for products."""
    pass

@tool
def create_order(product_id: int, quantity: int) -> str:
    """Create a new order."""
    pass

@tool
def get_user_info(user_id: int) -> str:
    """Get user information."""
    pass

Advanced Topics

1. Tool Middleware

Tool retry middleware
from langchain.agents.middleware import ToolRetryMiddleware

retry_middleware = ToolRetryMiddleware(
    max_retries=2,  # retry at most 2 times
    tools=["search", "database_query"]  # only retry these tools
)

agent = create_agent(
    model,
    tools=[search, database_query],
    middleware=[retry_middleware]
)
Tool call limits
from langchain.agents.middleware import ToolCallLimitMiddleware

limit_middleware = ToolCallLimitMiddleware(
    max_calls=5,  # global limit of 5 calls
    tool_limits={
        "expensive_api": 2,  # limit this specific tool to 2 calls
    }
)

2. LLM Tool Selector

When you have many tools (10+), use an LLM to pre-select the relevant ones:

from langchain.agents.middleware import LLMToolSelectorMiddleware

selector = LLMToolSelectorMiddleware(
    model="gpt-4o-mini",  # model used for the selection
    max_tools=3,  # select at most 3 tools
    always_include=["calculator"]  # tools that are always included
)

agent = create_agent(
    model,
    tools=[tool1, tool2, tool3, ..., tool15],  # many tools
    middleware=[selector]
)

3. Context Editing

Automatically clear old tool calls when a token limit is reached:

from langchain.agents.middleware import (
    ContextEditingMiddleware,
    ClearToolUsesEdit
)

context_edit = ContextEditingMiddleware(
    edits=[
        ClearToolUsesEdit(
            trigger=2000,  # trigger once 2000 tokens are reached
            keep=3,  # keep the 3 most recent tool results
            clear_tool_inputs=False,  # keep the tool call arguments
            placeholder="[cleared]"  # replacement text
        )
    ]
)

4. Dynamic Prompts

from langchain.agents.middleware import dynamic_prompt

@dynamic_prompt
def custom_prompt(state: AgentState, runtime: Runtime) -> str:
    """Generate the system prompt dynamically from the agent state."""
    user_role = state.get("user_role", "guest")

    if user_role == "admin":
        return "You are an admin assistant with full access."
    else:
        return "You are a guest assistant with limited access."

agent = create_agent(
    model,
    tools=[...],
    middleware=[custom_prompt]
)

5. Multi-Agent Tool Calling Patterns

Supervisor pattern
from langchain.agents import create_agent

# Create specialized sub-agents
calendar_agent = create_agent(model, tools=[...], name="calendar")
email_agent = create_agent(model, tools=[...], name="email")

# Wrap the sub-agents as tools
@tool
def use_calendar(task: str) -> str:
    """Use the calendar agent for scheduling tasks."""
    result = calendar_agent.invoke({"messages": [{"role": "user", "content": task}]})
    return result["messages"][-1].content

@tool
def use_email(task: str) -> str:
    """Use the email agent for email tasks."""
    result = email_agent.invoke({"messages": [{"role": "user", "content": task}]})
    return result["messages"][-1].content

# Create the supervisor agent
supervisor = create_agent(
    model,
    tools=[use_calendar, use_email],
    system_prompt="You coordinate between calendar and email agents."
)
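
Invoking the supervisor then looks like invoking any other agent; it decides which wrapped sub-agent tool to call:

result = supervisor.invoke({
    "messages": [{"role": "user", "content": "Schedule a meeting with Alice tomorrow and email her the invite."}]
})
print(result["messages"][-1].content)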

6. Structured Output with Tools

Use ToolStrategy to produce structured output:

from pydantic import BaseModel
from langchain.agents.structured_output import ToolStrategy

class ContactInfo(BaseModel):
    name: str
    email: str
    phone: str

agent = create_agent(
    model="gpt-4o-mini",
    tools=[search_tool],
    response_format=ToolStrategy(ContactInfo)
)

result = agent.invoke({
    "messages": [{
        "role": "user",
        "content": "Extract: John Doe, john@example.com, (555) 123-4567"
    }]
})

# result["structured_response"] is a ContactInfo instance
contact = result["structured_response"]
print(contact.name)  # "John Doe"

7. Toolkits

A toolkit is a collection of related tools:

from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_community.utilities import SQLDatabase

# Create the database connection
db = SQLDatabase.from_uri("sqlite:///example.db")

# Create the SQL toolkit
toolkit = SQLDatabaseToolkit(db=db, llm=model)

# Get all the tools
tools = toolkit.get_tools()
# Includes: sql_db_query, sql_db_schema, sql_db_list_tables, etc.

agent = create_agent(model, tools=tools)

Complete Examples

Example 1: A Simple Weather Assistant

from langchain.tools import tool
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent

@tool
def get_weather(location: str) -> str:
    """Get current weather for a location."""
    # In a real application this would call an actual weather API
    weather_data = {
        "Paris": "Sunny, 22°C",
        "London": "Cloudy, 15°C",
        "Tokyo": "Rainy, 18°C"
    }
    return weather_data.get(location, f"Weather data not available for {location}")

@tool
def get_forecast(location: str, days: int = 3) -> str:
    """Get weather forecast for a location."""
    return f"{days}-day forecast for {location}: Mostly sunny with occasional clouds"

# Create the agent
model = ChatOpenAI(model="gpt-4")
agent = create_agent(
    model,
    tools=[get_weather, get_forecast],
    system_prompt="You are a helpful weather assistant."
)

# Use the agent
result = agent.invoke({
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}]
})

print(result["messages"][-1].content)

Example 2: RAG Agent

from langchain.tools import tool
from langchain_community.vectorstores import Chroma
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.agents import create_agent
from langchain_core.messages import ToolMessage

# Create the vector store
vectorstore = Chroma.from_texts(
    texts=[
        "LangChain is a framework for developing LLM applications.",
        "Tools extend model capabilities by connecting to external systems.",
        "Agents can make decisions about which tools to use."
    ],
    embedding=OpenAIEmbeddings()
)

@tool
def retrieve_context(query: str) -> ToolMessage:
    """Retrieve relevant context from the knowledge base."""
    docs = vectorstore.similarity_search(query, k=2)
    content = "\n\n".join(doc.page_content for doc in docs)

    return ToolMessage(
        content=content,
        tool_call_id="",  # filled in automatically
        artifact=docs  # the raw documents
    )

# Create the RAG agent
agent = create_agent(
    model=ChatOpenAI(model="gpt-4"),
    tools=[retrieve_context],
    system_prompt="You answer questions using the retrieved context."
)

result = agent.invoke({
    "messages": [{
        "role": "user",
        "content": "What are tools in LangChain?"
    }]
})

print(result["messages"][-1].content)

Example 3: SQL Agent

from langchain.tools import tool
from langchain.agents import create_agent
from langchain_openai import ChatOpenAI
import sqlite3

@tool
def list_tables() -> str:
    """List all tables in the database."""
    conn = sqlite3.connect('example.db')
    cursor = conn.cursor()
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
    tables = [row[0] for row in cursor.fetchall()]
    conn.close()
    return f"Available tables: {', '.join(tables)}"

@tool
def get_schema(table_name: str) -> str:
    """Get the schema of a specific table."""
    conn = sqlite3.connect('example.db')
    cursor = conn.cursor()
    cursor.execute(f"PRAGMA table_info({table_name})")
    schema = cursor.fetchall()
    conn.close()
    return f"Schema for {table_name}: {schema}"

@tool
def run_query(query: str) -> str:
    """Execute a SQL query (SELECT only)."""
    if not query.strip().upper().startswith('SELECT'):
        return "Error: Only SELECT queries are allowed"

    conn = sqlite3.connect('example.db')
    cursor = conn.cursor()
    try:
        cursor.execute(query)
        results = cursor.fetchall()
        return f"Query results: {results}"
    except Exception as e:
        return f"Error: {str(e)}"
    finally:
        conn.close()

# Create the SQL agent
agent = create_agent(
    model=ChatOpenAI(model="gpt-4"),
    tools=[list_tables, get_schema, run_query],
    system_prompt="""You are a SQL expert assistant.
    Always start by listing tables, then get schema, then run queries.
    Only use SELECT queries - never modify the database."""
)

result = agent.invoke({
    "messages": [{
        "role": "user",
        "content": "Show me all users in the database"
    }]
})

Reference Resources

Official Documentation

  • Python Tools: https://docs.langchain.com/oss/python/langchain/tools
  • JavaScript Tools: https://docs.langchain.com/oss/javascript/langchain/tools
  • Tool Calling: https://docs.langchain.com/oss/python/langchain/models#tool-calling
  • Agents: https://docs.langchain.com/oss/python/langchain/agents

Tutorials

  • RAG Agent: https://docs.langchain.com/oss/python/langgraph/agentic-rag
  • SQL Agent: https://docs.langchain.com/oss/python/langgraph/sql-agent
  • Multi-Agent: https://docs.langchain.com/oss/python/langchain/multi-agent

Tool Integrations

  • Tools & Toolkits: https://docs.langchain.com/oss/python/integrations/tools/

Summary

LangChain Tools are the core building block of powerful AI applications:

  1. Easy to start - create tools quickly with the @tool decorator
  2. Flexible to extend - integrate with all kinds of external systems
  3. Smart orchestration - agents handle the tool-calling loop automatically
  4. Production ready - built-in error handling, retries, call limits, and more
  5. Highly customizable - middleware and custom behaviors are supported

Used well, Tools turn an LLM from a pure text generator into an assistant that can carry out real tasks.
