LangChain Agent Type: JSON Chat

LangChain is an open-source framework that simplifies the development of applications based on large language models. It supports Python and JavaScript, provides integrations with external data sources and software workflows, and supports many LLM providers, including OpenAI, Google, and IBM.
Prompt used:
prompt = hub.pull("hwchase17/react-chat-json")
SYSTEM
Assistant is a large language model trained by OpenAI.

Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.

Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.

Overall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
 
PLACEHOLDER
chat_history
 
HUMAN
TOOLS
------
Assistant can ask the user to use tools to look up information that may be helpful in answering the user's original question. The tools the human can use are:
 
{tools}
 
RESPONSE FORMAT INSTRUCTIONS
----------------------------

When responding to me, please output a response in one of two formats:

**Option 1:**
Use this if you want the human to use a tool.
Markdown code snippet formatted in the following schema:
 
```json
{{
     "action": string, \ The action to take. Must be one of {tool_names}
     "action_input": string \ The input to the action
}}
```
 
**Option #2:**
Use this if you want to respond directly to the human.
Markdown code snippet formatted in the following schema:
 
```json
{{
     "action": "Final Answer",
     "action_input": string \ You should put what you want to return to use here
}}
```
 
USER'S INPUT
--------------------
Here is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):
 
{input}
 
PLACEHOLDER
agent_scratchpad
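The JSON blob format above is what the agent's output parser later extracts from the model's markdown reply. As a rough stdlib-only illustration of that extraction step (a simplified sketch, not the actual implementation of LangChain's `JSONAgentOutputParser`):

```python
import json
import re

def parse_json_action(text: str) -> dict:
    """Pull the first fenced JSON blob out of a model reply and decode it.

    Simplified stand-in for what LangChain's JSONAgentOutputParser does:
    the real parser additionally maps "Final Answer" to AgentFinish and
    everything else to AgentAction.
    """
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    blob = match.group(1) if match else text  # fall back to the raw text
    return json.loads(blob)

# A reply shaped like Option 1 from the prompt above.
reply = (
    "```json\n"
    '{"action": "tavily_search_results_json", "action_input": "LangChain"}\n'
    "```"
)
parsed = parse_json_action(reply)
```

On this input, `parsed["action"]` names the tool to call and `parsed["action_input"]` carries its argument, which is exactly the contract the prompt asks the model to follow.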
Agent code
def create_json_chat_agent(
    llm: BaseLanguageModel,
    tools: Sequence[BaseTool],
    prompt: ChatPromptTemplate,
    stop_sequence: Union[bool, List[str]] = True,
    tools_renderer: ToolsRenderer = render_text_description,
) -> Runnable:
    """Create an agent that uses JSON to format its logic, built for Chat Models.

    Args:
        llm: LLM to use as the agent.
        tools: Tools this agent has access to.
        prompt: The prompt to use. See Prompt section below for more.
        stop_sequence: bool or list of str.
            If True, adds a stop token of "Observation:" to avoid hallucinations.
            If False, does not add a stop token.
            If a list of str, uses the provided list as the stop tokens.
            
            Default is True. You may want to set this to False if the LLM you are using
            does not support stop sequences.
        tools_renderer: This controls how the tools are converted into a string and
            then passed into the LLM. Default is `render_text_description`.

    Returns:
        A Runnable sequence representing an agent. It takes as input all the same input
        variables as the prompt passed in does. It returns as output either an
        AgentAction or AgentFinish.
    """  # noqa: E501

    # Make sure the required variables are present in the prompt
    missing_vars = {"tools", "tool_names", "agent_scratchpad"}.difference(
        prompt.input_variables
    )
    if missing_vars:
        raise ValueError(
            "Prompt missing required variables: {}".format(missing_vars)
        )

    # Add the tools names and render the tools to the prompt
    prompt = prompt.partial(
        tools=tools_renderer(list(tools)),
        tool_names=", ".join([t.name for t in tools]),
    )

    # Set up the stop sequence if needed
    if stop_sequence:
        # If True, set the stop sequence to "\nObservation"
        # If a list of strings, use those strings as the stop sequence
        stop = ["\nObservation"] if stop_sequence is True else stop_sequence
        # Bind the stop sequence to the language model
        llm_to_use = llm.bind(stop=stop)
    else:
        # If False, don't add a stop sequence
        llm_to_use = llm

    # Create the agent
    agent = (
        # Add a step to take the intermediate_steps and format them into
        # a Message object with a "tool_response" type
        RunnablePassthrough.assign(
            agent_scratchpad=lambda x: format_log_to_messages(
                x["intermediate_steps"], template_tool_response=TEMPLATE_TOOL_RESPONSE
            )
        )
        # Run the prompt with the input variables
        | prompt
        # Run the LLM with the output of the prompt
        | llm_to_use
        # Parse the output of the LLM as JSON to get the action and input
        | JSONAgentOutputParser()
    )

    return agent
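The `format_log_to_messages` step above turns each `(AgentAction, observation)` pair in `intermediate_steps` into a pair of chat messages: the agent's own JSON reply, then a human message built from `TEMPLATE_TOOL_RESPONSE` wrapping the tool's observation. A stdlib-only sketch of that idea (the real helper returns LangChain `AIMessage`/`HumanMessage` objects, and the template text here is illustrative rather than the exact constant):

```python
# Illustrative stand-in for TEMPLATE_TOOL_RESPONSE; the real constant's
# wording differs, but it likewise interpolates the tool observation.
TOOL_RESPONSE_TEMPLATE = "TOOL RESPONSE:\n---------------------\n{observation}"

def format_steps_to_messages(intermediate_steps):
    """Mimic format_log_to_messages: each (agent_reply, observation) pair
    becomes an AI message (the agent's JSON blob) plus a human message
    carrying the tool's observation back to the model."""
    messages = []
    for agent_reply, observation in intermediate_steps:
        messages.append(("ai", agent_reply))
        messages.append(("human", TOOL_RESPONSE_TEMPLATE.format(observation=observation)))
    return messages

steps = [
    ('{"action": "search", "action_input": "LangChain"}',
     "LangChain is a framework for LLM applications."),
]
scratchpad = format_steps_to_messages(steps)
```

This is why the prompt needs the `agent_scratchpad` placeholder: on every turn, the whole tool-call history is replayed to the model as alternating AI/human messages.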
Inference output


> Entering new AgentExecutor chain...
{
    "action": "tavily_search_results_json",
    "action_input": "LangChain"
}
[{'url': 'https://www.ibm.com/topics/langchain', 'content': 'LangChain is essentially a library of abstractions for Python and Javascript, representing common steps and concepts  LangChain is an open source orchestration framework for the development of applications using large language models  other LangChain features, like the eponymous chains.  LangChain provides integrations for over 25 different embedding methods, as well as for over 50 different vector storesLangChain is a tool for building applications using large language models (LLMs) like chatbots and virtual agents. It simplifies the process of programming and integration with external data sources and software workflows. It supports Python and Javascript languages and supports various LLM providers, including OpenAI, Google, and IBM.'}]
{
    "action": "Final Answer",
    "action_input": "LangChain is an open source orchestration framework for the development of applications using large language models. It simplifies the process of programming and integration with external data sources and software workflows. It supports Python and Javascript languages and supports various LLM providers, including OpenAI, Google, and IBM."
}

> Finished chain.
{'input': 'what is LangChain?',
 'output': 'LangChain is an open source orchestration framework for the development of applications using large language models. It simplifies the process of programming and integration with external data sources and software workflows. It supports Python and Javascript languages and supports various LLM providers, including OpenAI, Google, and IBM.'}
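The trace above comes from an `AgentExecutor` driving a simple loop: the model emits a JSON action, the executor runs the named tool, feeds the observation back via the scratchpad, and stops once the action is "Final Answer". A stdlib-only simulation of that control flow, with a hard-coded fake model standing in for the LLM (`fake_model` and `search_tool` are illustrative names, not LangChain APIs):

```python
import json

def fake_model(turn: int) -> str:
    """Stand-in for the LLM: first asks for a tool, then gives the final answer."""
    replies = [
        '{"action": "search", "action_input": "LangChain"}',
        '{"action": "Final Answer", "action_input": "LangChain is an open source framework."}',
    ]
    return replies[turn]

def search_tool(query: str) -> str:
    """Stand-in for a search tool such as Tavily."""
    return f"Results for {query!r}: LangChain is an open source framework."

def run_agent(max_turns: int = 5) -> str:
    """Mimic the AgentExecutor loop: parse the action, dispatch the tool, repeat."""
    intermediate_steps = []
    for turn in range(max_turns):
        blob = json.loads(fake_model(turn))
        if blob["action"] == "Final Answer":
            return blob["action_input"]          # AgentFinish
        observation = search_tool(blob["action_input"])
        intermediate_steps.append((blob, observation))  # replayed as the scratchpad
    raise RuntimeError("agent did not finish within max_turns")

answer = run_agent()
```

In the real executor the parsing is done by `JSONAgentOutputParser`, the tool dispatch by the registered `tools`, and the history replay by `format_log_to_messages`; the loop shape is the same.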

                