LangChain v0.2 Documentation Translation: 2.4 Tutorial - Build an Agent

  1. Introduction
  2. Tutorials
    2.1. Build a Simple LLM Application
    2.2. Build a Chatbot
    2.3. Build a Vector Store and Retriever
    2.4. Build an Agent (click to view the original)

Build an Agent

By themselves, language models can't take actions; they only output text. A big use case for LangChain is creating agents. Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent, which decides whether more actions are needed or whether it is okay to finish.

In this tutorial we will build an agent that can interact with multiple different tools: one being a local database, the other a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.

Concepts

We will cover the following concepts:

  • Using language models, in particular their tool-calling ability
  • Creating a Retriever to expose specific information of interest to our agent
  • Using a search tool to look up information online
  • Using LangGraph agents, which use an LLM to think about what to do and then execute it
  • Debugging and tracing your application using LangSmith

Setup

Jupyter Notebook

This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See here for instructions on how to install it.

Installation

This tutorial requires the langchain, langchain-chroma, and langchain-openai packages:

  • pip
pip install langchain
  • conda
conda install langchain -c conda-forge

For more details, see our Installation guide.
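One caveat on the package list above: the code in this tutorial also imports from langchain_community (for the Tavily tool and document loaders), langchain_openai, and langgraph, and the FAISS vector store used below needs the faiss-cpu package. If you hit missing-module errors later, installing the following as well should cover everything used here (this list is inferred from the imports in this tutorial, not an official requirements list):

pip install -U langchain-community langchain-openai langgraph faiss-cpu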

LangSmith

Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls.
As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.
The best way to do this is with LangSmith.
After you sign up at the link above, make sure to set your environment variables to start logging traces:

export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."

Or, if you're in a notebook, you can set them with:

import getpass
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()

Define tools

We first need to create the tools we want to use. We will use two tools: Tavily (to search online) and a retriever over a local index that we will create.

Tavily

LangChain has a built-in tool that makes it easy to use the Tavily search engine as a tool.
Note that this requires an API key; they have a free tier, but if you don't have one or don't want to create one, you can always ignore this step.
Once you create your API key, you will need to export it as:

export TAVILY_API_KEY="..."

from langchain_community.tools.tavily_search import TavilySearchResults

search = TavilySearchResults(max_results=2)

search.invoke("what is the weather in SF")

[{'url': 'https://weather.com/weather/tenday/l/San Francisco CA USCA0987:1:US',
'content': "Comfy & Cozy\nThat's Not What Was Expected\nOutside\n'No-Name Storms' In Florida\nGifts From On High\nWhat To Do For Wheezing\nSurviving The Season\nStay Safe\nAir Quality Index\nAir quality is considered satisfactory, and air pollution poses little or no risk.\n Health & Activities\nSeasonal Allergies and Pollen Count Forecast\nNo pollen detected in your area\nCold & Flu Forecast\nFlu risk is low in your area\nWe recognize our responsibility to use data and technology for good. recents\nSpecialty Forecasts\n10 Day Weather-San Francisco, CA\nToday\nMon 18 | Day\nConsiderable cloudiness. Tue 19\nTue 19 | Day\nLight rain early…then remaining cloudy with showers in the afternoon. Wed 27\nWed 27 | Day\nOvercast with rain showers at times."},
{'url': 'https://www.accuweather.com/en/us/san-francisco/94103/hourly-weather-forecast/347629',
'content': 'Hourly weather forecast in San Francisco, CA. Check current conditions in San Francisco, CA with radar, hourly, and more.'}]
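Like any LangChain tool, this one carries a name and a description, which is what the model will see when deciding whether to call it. A quick way to inspect them (a sanity check, not part of the original tutorial):

print(search.name)         # tavily_search_results_json
print(search.description)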

Retriever

We will also create a retriever over some data of our own. For a deeper explanation of each step here, see this tutorial.

from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = WebBaseLoader("https://docs.smith.langchain.com/overview")
docs = loader.load()
documents = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200
).split_documents(docs)
vector = FAISS.from_documents(documents, OpenAIEmbeddings())
retriever = vector.as_retriever()
retriever.invoke("how to upload a dataset")[0]

Document(page_content='import Clientfrom langsmith.evaluation import evaluateclient = Client()# Define dataset: these are your test casesdataset_name = "Sample Dataset"dataset = client.create_dataset(dataset_name, description="A sample dataset in LangSmith.")client.create_examples( inputs=[ {"postfix": "to LangSmith"}, {"postfix": "to Evaluations in LangSmith"}, ], outputs=[ {"output": "Welcome to LangSmith"}, {"output": "Welcome to Evaluations in LangSmith"}, ], dataset_id=dataset.id,)# Define your evaluatordef exact_match(run, example): return {"score": run.outputs["output"] == example.outputs["output"]}experiment_results = evaluate( lambda input: "Welcome " + input[\'postfix\'], # Your AI system goes here data=dataset_name, # The data to predict and grade over evaluators=[exact_match], # The evaluators to score the results experiment_prefix="sample-experiment", # The name of the experiment metadata={ "version": "1.0.0", "revision_id":', metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'Getting started with LangSmith | 🦜️🛠️ LangSmith', 'description': 'Introduction', 'language': 'en'})

Now that we have populated the index that we will do retrieval over, we can easily turn it into a tool (the format needed for an agent to properly use it).

from langchain.tools.retriever import create_retriever_tool
retriever_tool = create_retriever_tool(
    retriever,
    "langsmith_search",
    "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
)
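Before handing the tool to an agent, you can sanity-check it by invoking it directly; it returns the retrieved chunks concatenated into a single string (a quick check, not part of the original tutorial):

print(retriever_tool.invoke({"query": "how to upload a dataset"})[:200])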
Tools

Now that we have created both, we can create a list of tools that we will use downstream.

tools = [search, retriever_tool]
Using Language Models

Next, let's learn how to use a language model to call tools. LangChain supports many different language models that you can use interchangeably; below, we use OpenAI as an example:

pip install -qU langchain-openai
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()

from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4")
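
Any chat model with tool-calling support can be swapped in here. For example, assuming you have the langchain-anthropic package installed and an ANTHROPIC_API_KEY set, an Anthropic model works the same way:

from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-sonnet-20240229")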

You can call the language model by passing in a list of messages. By default, the response is a content string.

from langchain_core.messages import HumanMessage

response = model.invoke([HumanMessage(content="hi!")])
response.content

'Hello! How can I assist you today?'

Now, we can see what it is like to enable this model to do tool calling. To enable that, we use .bind_tools to give the language model knowledge of these tools:

model_with_tools = model.bind_tools(tools)

Now we can call the model. Let's first call it with a normal message and see how it responds. We can look at both the content field and the tool_calls field.

response = model_with_tools.invoke([HumanMessage(content="Hi!")])

print(f"ContentString: {response.content}")
print(f"ToolCalls: {response.tool_calls}")

ContentString: Hello! How can I assist you today?
ToolCalls: []

Now, let's try calling it with some input that would expect a tool to be called.

response = model_with_tools.invoke([HumanMessage(content="What's the weather in SF?")])

print(f"ContentString: {response.content}")
print(f"ToolCalls: {response.tool_calls}")

ContentString:
ToolCalls: [{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in SF'}, 'id': 'call_nfE1XbCqZ8eJsB8rNdn4MQZQ'}]

We can see that there's now no text content, but there is a tool call! It wants us to call the Tavily Search tool.

This isn't calling that tool yet; it's just telling us to call it. In order to actually call it, we'll want to create our agent.
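To make the division of labor concrete: the model only describes the call it wants, and something else has to execute it. The loop below is roughly what the agent will automate for us, hand-rolled here as an illustrative sketch using the tools defined above:

# Look up each requested tool by name and run it with the model's arguments
tools_by_name = {tool.name: tool for tool in tools}
for tool_call in response.tool_calls:
    selected_tool = tools_by_name[tool_call["name"]]
    print(selected_tool.invoke(tool_call["args"]))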

Create the agent

Now that we have defined the tools and the LLM, we can create the agent. We will be using LangGraph to construct the agent.
Currently, we are using a high-level interface to construct the agent, but the nice thing about LangGraph is that this high-level interface is backed by a low-level, highly controllable API in case you want to modify the agent logic.

Now, we can initialize the agent with the LLM and the tools.

Note that we are passing in model, not model_with_tools. That is because create_tool_calling_executor will call .bind_tools for us under the hood.

from langgraph.prebuilt import chat_agent_executor

agent_executor = chat_agent_executor.create_tool_calling_executor(model, tools)
Run the agent

We can now run the agent on a few queries! Note that for now, these are all stateless queries (it won't remember previous interactions). Note that the agent will return the final state at the end of the interaction (which includes any inputs; we will see later how to get only the outputs).

First up, let's see how it responds when there's no need to call a tool:

response = agent_executor.invoke({"messages": [HumanMessage(content="hi!")]})

response["messages"]

[HumanMessage(content='hi!',
id='1535b889-10a5-45d0-a1e1-dd2e60d4bc04'), AIMessage(content='Hello! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 129, 'total_tokens': 139}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-2c94c074-bdc9-4f01-8fd7-71cfc4777d55-0')]
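If you only want the final answer rather than the whole state, the last message in the list is the agent's reply (a small convenience sketch, not part of the original tutorial):

print(response["messages"][-1].content)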

In order to see exactly what is happening under the hood (and to make sure it's not calling a tool), we can take a look at the LangSmith trace.

Let's now try it out on an example where it should be invoking the retriever:

response = agent_executor.invoke(
    {"messages": [HumanMessage(content="how can langsmith help with testing?")]}
)
response["messages"]

[HumanMessage(content='how can langsmith help with testing?', id='04f4fe8f-391a-427c-88af-1fa064db304c'), AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_FNIgdO97wo51sKx3XZOGLHqT', 'function': {'arguments': '{\n "query": "how can LangSmith help with testing"\n}', 'name': 'langsmith_search'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 135, 'total_tokens': 157}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-51f6ea92-84e1-43a5-b1f2-bc0c12d8613f-0', tool_calls=[{'name': 'langsmith_search', 'args': {'query': 'how can LangSmith help with testing'}, 'id': 'call_FNIgdO97wo51sKx3XZOGLHqT'}]), ToolMessage(content="Getting started with LangSmith | 🦜️🛠️ LangSmith\n\nSkip to main contentLangSmith API DocsSearchGo to AppQuick StartUser GuideTracingEvaluationProduction Monitoring & AutomationsPrompt HubProxyPricingSelf-HostingCookbookQuick StartOn this pageGetting started with LangSmithIntroduction\u200bLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!Install LangSmith\u200bWe offer Python and Typescript SDKs for all your LangSmith needs.PythonTypeScriptpip install -U langsmithyarn add langchain langsmithCreate an API key\u200bTo create an API key head to the setting pages. Then click Create API Key.Setup your environment\u200bShellexport LANGCHAIN_TRACING_V2=trueexport LANGCHAIN_API_KEY=# The below examples use the OpenAI API, though it's not necessary in generalexport OPENAI_API_KEY=Log your first trace\u200bWe provide multiple ways to log traces\n\nLearn about the workflows LangSmith supports at each stage of the LLM application lifecycle.Pricing: Learn about the pricing model for LangSmith.Self-Hosting: Learn about self-hosting options for LangSmith.Proxy: Learn about the proxy capabilities of LangSmith.Tracing: Learn about the tracing capabilities of LangSmith.Evaluation: Learn about the evaluation capabilities of LangSmith.Prompt Hub Learn about the Prompt Hub, a prompt management tool built into LangSmith.Additional Resources\u200bLangSmith Cookbook: A collection of tutorials and end-to-end walkthroughs using LangSmith.LangChain Python: Docs for the Python LangChain library.LangChain Python API Reference: documentation to review the core APIs of LangChain.LangChain JS: Docs for the TypeScript LangChain libraryDiscord: Join us on our Discord to discuss all things LangChain!FAQ\u200bHow do I migrate projects between organizations?\u200bCurrently we do not support project migration betwen organizations. While you can manually imitate this by\n\nteam deals with sensitive data that cannot be logged. How can I ensure that only my team can access it?\u200bIf you are interested in a private deployment of LangSmith or if you need to self-host, please reach out to us at sales@langchain.dev. Self-hosting LangSmith requires an annual enterprise license that also comes with support and formalized access to the LangChain team.Was this page helpful?NextUser GuideIntroductionInstall LangSmithCreate an API keySetup your environmentLog your first traceCreate your first evaluationNext StepsAdditional ResourcesFAQHow do I migrate projects between organizations?Why aren't my runs aren't showing up in my project?My team deals with sensitive data that cannot be logged. How can I ensure that only my team can access it?CommunityDiscordTwitterGitHubDocs CodeLangSmith SDKPythonJS/TSMoreHomepageBlogLangChain Python DocsLangChain JS/TS DocsCopyright © 2024 LangChain, Inc.", name='langsmith_search', id='f286c7e7-6514-4621-ac60-e4079b37ebe2', tool_call_id='call_FNIgdO97wo51sKx3XZOGLHqT'), AIMessage(content="LangSmith is a platform that can significantly aid in testing by offering several features:\n\n1. Tracing: LangSmith provides robust tracing capabilities that enable you to monitor your application closely. This feature is particularly useful for tracking the behavior of your application and identifying any potential issues.\n\n2. Evaluation: LangSmith allows you to perform comprehensive evaluations of your application. This can help you assess the performance of your application under various conditions and make necessary adjustments to enhance its functionality.\n\n3. Production Monitoring & Automations: With LangSmith, you can keep a close eye on your application when it's in active use. The platform provides tools for automatic monitoring and managing routine tasks, helping to ensure your application runs smoothly.\n\n4. Prompt Hub: It's a prompt management tool built into LangSmith. This feature can be instrumental when testing various prompts in your application.\n\nOverall, LangSmith helps you build production-grade LLM applications with confidence, providing necessary tools for monitoring, evaluation, and automation.", response_metadata={'token_usage': {'completion_tokens': 200, 'prompt_tokens': 782, 'total_tokens': 982}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-4b80db7e-9a26-4043-8b6b-922f847f9c80-0')]

Let's take a look at the LangSmith trace to see what is going on under the hood.

Note that the state we end up with also contains the tool call and the tool response messages.
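A quick way to see that structure is to print just the type of each message in the final state; for this run you would expect a HumanMessage, an AIMessage carrying the tool call, a ToolMessage with the retrieved content, and a final AIMessage (an illustrative snippet):

for message in response["messages"]:
    print(type(message).__name__)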

Now let's try one where it needs to call the search tool:

response = agent_executor.invoke(
    {"messages": [HumanMessage(content="whats the weather in sf?")]}
)
response["messages"]

[HumanMessage(content='whats the weather in sf?', id='e6b716e6-da57-41de-a227-fee281fda588'),
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_TGDKm0saxuGKJD5OYOXWRvLe', 'function': {'arguments': '{\n "query": "current weather in San Francisco"\n}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 23, 'prompt_tokens': 134, 'total_tokens': 157}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-fd7d5854-2eab-4fca-ad9e-b3de8d587614-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_TGDKm0saxuGKJD5OYOXWRvLe'}]),
ToolMessage(content='[{"url": "https://www.weatherapi.com/", "content": "{\'location\': {\'name\': \'San Francisco\', \'region\': \'California\', \'country\': \'United States of America\', \'lat\': 37.78, \'lon\': -122.42, \'tz_id\': \'America/Los_Angeles\', \'localtime_epoch\': 1714426800, \'localtime\': \'2024-04-29 14:40\'}, \'current\': {\'last_updated_epoch\': 1714426200, \'last_updated\': \'2024-04-29 14:30\', \'temp_c\': 17.8, \'temp_f\': 64.0, \'is_day\': 1, \'condition\': {\'text\': \'Sunny\', \'icon\': \'//cdn.weatherapi.com/weather/64x64/day/113.png\', \'code\': 1000}, \'wind_mph\': 23.0, \'wind_kph\': 37.1, \'wind_degree\': 290, \'wind_dir\': \'WNW\', \'pressure_mb\': 1019.0, \'pressure_in\': 30.09, \'precip_mm\': 0.0, \'precip_in\': 0.0, \'humidity\': 50, \'cloud\': 0, \'feelslike_c\': 17.8, \'feelslike_f\': 64.0, \'vis_km\': 16.0, \'vis_miles\': 9.0, \'uv\': 5.0, \'gust_mph\': 27.5, \'gust_kph\': 44.3}}"}, {"url": "https://www.wunderground.com/hourly/us/ca/san-francisco/94125/date/2024-4-29", "content": "Current Weather for Popular Cities . San Francisco, CA warning 59 \u00b0 F Mostly Cloudy; Manhattan, NY 56 \u00b0 F Fair; Schiller Park, IL (60176) warning 58 \u00b0 F Mostly Cloudy; Boston, MA 52 \u00b0 F Sunny …"}]', name='tavily_search_results_json', id='aa0d8c3d-23b5-425a-ad05-3c174fc04892', tool_call_id='call_TGDKm0saxuGKJD5OYOXWRvLe'),
AIMessage(content='The current weather in San Francisco, California is sunny with a temperature of 64.0°F (17.8°C). The wind is coming from the WNW at a speed of 23.0 mph. The humidity level is at 50%. There is no precipitation and the cloud cover is 0%. The visibility is 16.0 km. The UV index is 5.0. Please note that this information is as of 14:30 on April 29, 2024, according to Weather API.', response_metadata={'token_usage': {'completion_tokens': 117, 'prompt_tokens': 620, 'total_tokens': 737}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-2359b41b-cab6-40c3-b6d9-7bdf7195a601-0')]

We can check out the LangSmith trace to make sure it's calling the search tool effectively.

Streaming Messages

We've seen how the agent can be called with .invoke to get back a final response. If the agent is executing multiple steps, that may take a while. In order to show intermediate progress, we can stream back messages as they occur.

for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="whats the weather in sf?")]}
):
    print(chunk)
    print("----")
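Each streamed chunk is a dict keyed by the graph node that produced it; with this prebuilt executor you should see an "agent" entry for each model step and a "tools" entry for each tool execution, each carrying the newly produced messages (this mirrors the chunk shapes shown in the memory section below). A compact view, assuming that shape:

for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="whats the weather in sf?")]}
):
    for node, payload in chunk.items():
        # node is "agent" or "tools"; payload holds the new messages
        print(node, [type(m).__name__ for m in payload["messages"]])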
Streaming tokens

In addition to streaming back messages, it is also useful to stream back tokens.
We can do this with the .astream_events method.

Note: this .astream_events method only works with Python 3.11 or higher.

async for event in agent_executor.astream_events(
    {"messages": [HumanMessage(content="whats the weather in sf?")]}, version="v1"
):
    kind = event["event"]
    if kind == "on_chain_start":
        if (
            event["name"] == "Agent"
        ):  # Was assigned when creating the agent with `.with_config({"run_name": "Agent"})`
            print(
                f"Starting agent: {event['name']} with input: {event['data'].get('input')}"
            )
    elif kind == "on_chain_end":
        if (
            event["name"] == "Agent"
        ):  # Was assigned when creating the agent with `.with_config({"run_name": "Agent"})`
            print()
            print("--")
            print(
                f"Done agent: {event['name']} with output: {event['data'].get('output')['output']}"
            )
    if kind == "on_chat_model_stream":
        content = event["data"]["chunk"].content
        if content:
            # Empty content in the context of OpenAI means
            # that the model is asking for a tool to be invoked.
            # So we only print non-empty content
            print(content, end="|")
    elif kind == "on_tool_start":
        print("--")
        print(
            f"Starting tool: {event['name']} with inputs: {event['data'].get('input')}"
        )
    elif kind == "on_tool_end":
        print(f"Done tool: {event['name']}")
        print(f"Tool output was: {event['data'].get('output')}")
        print("--")
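The loop above assumes an environment with a running event loop, such as a Jupyter notebook where top-level await works. In a plain Python script, you would wrap it in a coroutine and drive it with asyncio.run; a minimal sketch that only streams tokens:

import asyncio

async def stream_tokens():
    async for event in agent_executor.astream_events(
        {"messages": [HumanMessage(content="whats the weather in sf?")]}, version="v1"
    ):
        if event["event"] == "on_chat_model_stream":
            content = event["data"]["chunk"].content
            if content:  # empty content means the model is emitting a tool call
                print(content, end="|")

asyncio.run(stream_tokens())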
Adding in memory

As mentioned earlier, this agent is stateless. This means it does not remember previous interactions. To give it memory, we need to pass in a checkpointer. When passing in a checkpointer, we also have to pass in a thread_id when invoking the agent (so it knows which thread/conversation to resume from).

from langgraph.checkpoint.sqlite import SqliteSaver

memory = SqliteSaver.from_conn_string(":memory:")
agent_executor = chat_agent_executor.create_tool_calling_executor(
    model, tools, checkpointer=memory
)

config = {"configurable": {"thread_id": "abc123"}}
for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="hi im bob!")]}, config
):
    print(chunk)
    print("----")

{'agent': {'messages': [AIMessage(content='Hello Bob! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 131, 'total_tokens': 142}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-607733e3-4b8d-4137-ae66-8a4b8ccc8d40-0')]}}


for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="whats my name?")]}, config
):
    print(chunk)
    print("----")

{'agent': {'messages': [AIMessage(content='Your name is Bob. How can I assist you further?', response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 154, 'total_tokens': 167}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-e1181ba6-732d-4564-b479-9f1ab6bf01f6-0')]}}
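
Because memory is keyed by thread_id, starting over with a different thread id gives the agent a fresh slate, and it should no longer know your name (illustrative, reusing the executor above):

config = {"configurable": {"thread_id": "xyz123"}}
for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="whats my name?")]}, config
):
    print(chunk)
    print("----")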


Conclusion

In this quick start we covered how to create a simple agent.
We then showed how to stream back a response, not only with the intermediate steps, but also with tokens!
We also added in memory so you can have a conversation with the agent.
Agents are a complex topic, and there's lots to learn!
For more information on agents, check out the LangGraph documentation. It has its own set of concepts, tutorials, and how-to guides.
