Large Models from Beginner to Application: LangChain Agents (Tools: Multi-Input Tools and Tool Input Schemas)

### Multi-Input Tools
This section shows how to use tools that require multiple inputs with an agent. The recommended way to do this is with the `StructuredTool` class.

```python
import os
os.environ["LANGCHAIN_TRACING"] = "true"

from langchain import OpenAI
from langchain.agents import initialize_agent, AgentType
from langchain.tools import StructuredTool

llm = OpenAI(temperature=0)

def multiplier(a: float, b: float) -> float:
    """Multiply the provided floats."""
    return a * b

tool = StructuredTool.from_function(multiplier)

# Structured tools are compatible with the STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION agent type.
agent_executor = initialize_agent(
    [tool], llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent_executor.run("What is 3 times 4")
```

Log output:

```
> Entering new AgentExecutor chain...
Thought: I need to multiply 3 and 4
Action:
{
  "action": "multiplier",
  "action_input": {"a": 3, "b": 4}
}

Observation: 12
Thought: I know what to respond
Action:
{
  "action": "Final Answer",
  "action_input": "3 times 4 is 12"
}

> Finished chain.
```

Output:

```
'3 times 4 is 12'
```
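As a quick sanity check (not from the original post, and the exact printed schema can vary across LangChain versions), you can inspect the argument schema that `StructuredTool.from_function` infers from the type hints, and call the tool directly with named arguments:

```python
# Illustrative only: StructuredTool derives an args schema from the function signature,
# so each argument is exposed to the agent by name and type.
print(tool.name)  # 'multiplier'
print(tool.args)  # e.g. {'a': {'title': 'A', 'type': 'number'}, 'b': {'title': 'B', 'type': 'number'}}
print(tool.run({"a": 3.0, "b": 4.0}))  # 12.0
```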
#### Multi-Input Tools with a String Format

An alternative to structured tools is to use the regular `Tool` class and accept a single string as input. The tool then has to handle the parsing logic itself and extract the relevant values from the text, which couples the tool's representation tightly to the agent's prompt. This is still useful when the underlying language model cannot reliably generate a structured schema.

Let's take the multiplication function as an example. To use it, we tell the agent to generate the Action Input as a comma-separated list of length two. We then write a simple wrapper that splits the string in two on the comma and passes the two parsed values to the multiplication function as integers.

```python
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
```

Here is the multiplication function together with its string parser:

```python
def multiplier(a, b):
    return a * b

def parsing_multiplier(string):
    a, b = string.split(",")
    return multiplier(int(a), int(b))

llm = OpenAI(temperature=0)
tools = [
    Tool(
        name="Multiplier",
        func=parsing_multiplier,
        description=(
            "useful for when you need to multiply two numbers together. "
            "The input to this tool should be a comma separated list of numbers of length two, "
            "representing the two numbers you want to multiply together. "
            "For example, `1,2` would be the input if you wanted to multiply 1 by 2."
        ),
    )
]
mrkl = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
mrkl.run("What is 3 times 4")
```

Log output:

```
> Entering new AgentExecutor chain...
 I need to multiply two numbers
Action: Multiplier
Action Input: 3,4
Observation: 12
Thought: I now know the final answer
Final Answer: 3 times 4 is 12

> Finished chain.
```

Output:

```
'3 times 4 is 12'
```
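Because the parsing now lives inside the tool, the tool is only as robust as its parser. The sketch below (the name `parsing_multiplier_safe` and its error messages are our own, not from the original) tolerates whitespace and returns an error string the agent can react to, instead of raising an exception:

```python
def parsing_multiplier_safe(text: str) -> str:
    """Defensive parser: tolerates spaces and reports malformed input back to the agent."""
    parts = [p.strip() for p in text.split(",")]
    if len(parts) != 2:
        return "Error: expected exactly two comma-separated numbers, e.g. `3,4`."
    try:
        a, b = (float(p) for p in parts)
    except ValueError:
        return "Error: both inputs must be numbers, e.g. `3,4`."
    return str(multiplier(a, b))

print(parsing_multiplier_safe(" 3 , 4 "))  # '12.0'
```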

### Tool Input Schemas

By default, a tool infers its argument schema by inspecting the function signature. For stricter requirements, you can specify a custom input schema together with custom validation logic.

```python
from typing import Any, Dict

from langchain.agents import AgentType, initialize_agent
from langchain.llms import OpenAI
from langchain.tools.requests.tool import RequestsGetTool, TextRequestsWrapper
from pydantic import BaseModel, Field, root_validator

llm = OpenAI(temperature=0)
```

We also need to install tldextract:

```
!pip install tldextract > /dev/null
```

Input:

```python
import tldextract

_APPROVED_DOMAINS = {
    "langchain",
    "wikipedia",
}

class ToolInputSchema(BaseModel):
    url: str = Field(...)

    @root_validator
    def validate_query(cls, values: Dict[str, Any]) -> Dict:
        url = values["url"]
        domain = tldextract.extract(url).domain
        if domain not in _APPROVED_DOMAINS:
            raise ValueError(f"Domain {domain} is not on the approved list:"
                             f" {sorted(_APPROVED_DOMAINS)}")
        return values

tool = RequestsGetTool(args_schema=ToolInputSchema, requests_wrapper=TextRequestsWrapper())
agent = initialize_agent([tool], llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False)

# This will succeed, since there aren't any arguments that will be triggered during validation
answer = agent.run("What's the main title on langchain.com?")
print(answer)
```

Output:

```
The main title of langchain.com is "LANG CHAIN 🦜️🔗 Official Home Page"
```
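Since the validation lives in the pydantic model itself, you can also exercise it directly, without going through the agent. A small illustrative check (not from the original post):

```python
from pydantic import ValidationError

# A URL on an approved domain passes validation.
ToolInputSchema(url="https://docs.langchain.com/")

# A URL on any other domain is rejected before any HTTP request is made.
try:
    ToolInputSchema(url="https://www.google.com/")
except ValidationError as e:
    print(e)  # -> Domain google is not on the approved list: ['langchain', 'wikipedia']
```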

An input that will raise an error:

```python
agent.run("What's the main title on google.com?")
```

Output:

```
---------------------------------------------------------------------------

ValidationError                           Traceback (most recent call last)

Cell In[7], line 1
----> 1 agent.run("What's the main title on google.com?")


File ~/code/lc/lckg/langchain/chains/base.py:213, in Chain.run(self, *args, **kwargs)
    211     if len(args) != 1:
    212         raise ValueError("`run` supports only one positional argument.")
--> 213     return self(args[0])[self.output_keys[0]]
    215 if kwargs and not args:
    216     return self(kwargs)[self.output_keys[0]]


File ~/code/lc/lckg/langchain/chains/base.py:116, in Chain.__call__(self, inputs, return_only_outputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116     raise e
    117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
    118 return self.prep_outputs(inputs, outputs, return_only_outputs)


File ~/code/lc/lckg/langchain/chains/base.py:113, in Chain.__call__(self, inputs, return_only_outputs)
    107 self.callback_manager.on_chain_start(
    108     {"name": self.__class__.__name__},
    109     inputs,
    110     verbose=self.verbose,
    111 )
    112 try:
--> 113     outputs = self._call(inputs)
    114 except (KeyboardInterrupt, Exception) as e:
    115     self.callback_manager.on_chain_error(e, verbose=self.verbose)


File ~/code/lc/lckg/langchain/agents/agent.py:792, in AgentExecutor._call(self, inputs)
    790 # We now enter the agent loop (until it returns something).
    791 while self._should_continue(iterations, time_elapsed):
--> 792     next_step_output = self._take_next_step(
    793         name_to_tool_map, color_mapping, inputs, intermediate_steps
    794     )
    795     if isinstance(next_step_output, AgentFinish):
    796         return self._return(next_step_output, intermediate_steps)


File ~/code/lc/lckg/langchain/agents/agent.py:695, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps)
    693         tool_run_kwargs["llm_prefix"] = ""
    694     # We then call the tool on the tool input to get an observation
--> 695     observation = tool.run(
    696         agent_action.tool_input,
    697         verbose=self.verbose,
    698         color=color,
    699         **tool_run_kwargs,
    700     )
    701 else:
    702     tool_run_kwargs = self.agent.tool_run_logging_kwargs()


File ~/code/lc/lckg/langchain/tools/base.py:110, in BaseTool.run(self, tool_input, verbose, start_color, color, **kwargs)
    101 def run(
    102     self,
    103     tool_input: Union[str, Dict],
   (...)
    107     **kwargs: Any,
    108 ) -> str:
    109     """Run the tool."""
--> 110     run_input = self._parse_input(tool_input)
    111     if not self.verbose and verbose is not None:
    112         verbose_ = verbose


File ~/code/lc/lckg/langchain/tools/base.py:71, in BaseTool._parse_input(self, tool_input)
     69 if issubclass(input_args, BaseModel):
     70     key_ = next(iter(input_args.__fields__.keys()))
---> 71     input_args.parse_obj({key_: tool_input})
     72 # Passing as a positional argument is more straightforward for
     73 # backwards compatability
     74 return tool_input


File ~/code/lc/lckg/.venv/lib/python3.11/site-packages/pydantic/main.py:526, in pydantic.main.BaseModel.parse_obj()


File ~/code/lc/lckg/.venv/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()


ValidationError: 1 validation error for ToolInputSchema
__root__
  Domain google is not on the approved list: ['langchain', 'wikipedia'] (type=value_error)
```
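In application code you would normally guard the call rather than let this error bubble out of `agent.run`. A minimal sketch, assuming the failure surfaces as a pydantic `ValidationError` as in the traceback above (depending on the LangChain version it may be wrapped differently):

```python
from pydantic import ValidationError

try:
    agent.run("What's the main title on google.com?")
except ValidationError as err:
    print(f"Request rejected by the tool's input schema: {err}")
```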

