Learning LangChain: Using the Four Memory Modes

1. Background

In the LangChain for LLM Application Development course, I learned the basic skills of using the LangChain framework to extend what language models can do in application development, so I am organizing these notes in preparation for later applications. Video: 基于LangChain的大语言模型应用开发+构建和评估 (LangChain-based LLM Application Development: Building and Evaluating).

2. The Four Memory Modes

This experiment is run in a Jupyter Notebook.
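
If the required packages are not already installed, something like the following should work in the notebook (the package names are an assumption based on the classic LangChain layout used in the course; newer releases split the OpenAI integration into a separate langchain-openai package):

!pip install langchain openai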

2.1 ConversationBufferMemory

import warnings
warnings.filterwarnings('ignore')
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory


llm = ChatOpenAI(
    api_key = "XXXX",
    base_url = "XXXX",
    temperature=0.0
)

memory = ConversationBufferMemory()
conversation = ConversationChain(
    llm=llm, 
    memory = memory,
    verbose=True  # Set to True to see the chain's full formatted prompt at each step
)
conversation.predict(input="Hi, my name is Andrew")

The output is:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi, my name is Andrew
AI:

> Finished chain.
"Hello Andrew! It's nice to meet you. My name is AI and I'm here to assist you with any questions or tasks you have. How can I help you today?"

Continue testing:

conversation.predict(input="What is 1+1?")

The output is:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, my name is Andrew
AI: Hello Andrew! It's nice to meet you. My name is AI and I'm here to assist you with any questions or tasks you have. How can I help you today?
Human: What is 1+1?
AI:

> Finished chain.
'1+1 equals 2. This is a basic mathematical operation that can be solved by adding the two numbers together. Is there anything else I can assist you with?'

Now ask about something mentioned earlier:

conversation.predict(input="What is my name?")

The output is:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, my name is Andrew
AI: Hello Andrew! It's nice to meet you. My name is AI and I'm here to assist you with any questions or tasks you have. How can I help you today?
Human: What is 1+1?
AI: 1+1 equals 2. This is a basic mathematical operation that can be solved by adding the two numbers together. Is there anything else I can assist you with?
Human: What is my name?
AI:

> Finished chain.
'Your name is Andrew.'

We can clearly see what the LLM has remembered at each turn. Let's look at what the buffer contains:

print(memory.buffer)

The output is:

Human: Hi, my name is Andrew
AI: Hello Andrew! It's nice to meet you. My name is AI and I'm here to assist you with any questions or tasks you have. How can I help you today?
Human: What is 1+1?
AI: 1+1 equals 2. This is a basic mathematical operation that can be solved by adding the two numbers together. Is there anything else I can assist you with?
Human: What is my name?
AI: Your name is Andrew.

We can also read the memory back as a dictionary of variables:

memory.load_memory_variables({})

The output is:

{'history': "Human: Hi, my name is Andrew\nAI: Hello Andrew! It's nice to meet you. My name is AI and I'm here to assist you with any questions or tasks you have. How can I help you today?\nHuman: What is 1+1?\nAI: 1+1 equals 2. This is a basic mathematical operation that can be solved by adding the two numbers together. Is there anything else I can assist you with?\nHuman: What is my name?\nAI: Your name is Andrew."}

Next, let's operate on ConversationBufferMemory directly.

# Re-initialize the memory
memory = ConversationBufferMemory()
memory.save_context({"input": "Hi"}, 
                    {"output": "What's up"})
print(memory.buffer)

The output is:

Human: Hi
AI: What's up

Read the memory back as variables:

memory.load_memory_variables({})

The output is:

{'history': "Human: Hi\nAI: What's up"}

Save more content into the memory:

memory.save_context({"input": "Not much, just hanging"}, 
                    {"output": "Cool"})
memory.load_memory_variables({})

The output is:

{'history': "Human: Hi\nAI: What's up\nHuman: Not much, just hanging\nAI: Cool"}

As you can see, everything that was stored is read back. In general, ConversationBufferMemory keeps and returns the entire conversation history.
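
As a side note, ConversationBufferMemory can also hand the history back as a list of message objects rather than one long string by passing return_messages=True (a standard parameter of the class); a minimal sketch:

# Sketch: return the history as message objects instead of a single string
memory = ConversationBufferMemory(return_messages=True)
memory.save_context({"input": "Hi"}, {"output": "What's up"})
memory.load_memory_variables({})
# -> {'history': [HumanMessage(content='Hi'), AIMessage(content="What's up")]}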

2.2 ConversationBufferWindowMemory

from langchain.memory import ConversationBufferWindowMemory

# Keep only the most recent k=1 conversation turn in the buffer window
memory = ConversationBufferWindowMemory(k = 1)
memory.save_context({"input": "Hi"},
                    {"output": "What's up"})
memory.save_context({"input": "Not much, just hanging"},
                    {"output": "Cool"})
memory.load_memory_variables({})

The output is:

{'history': 'Human: Not much, just hanging\nAI: Cool'}

Note that the history now contains only one exchange. Let's test this in an actual human-AI conversation.

memory = ConversationBufferWindowMemory(k = 1)
conversation = ConversationChain(
    llm=llm, 
    memory = memory,
    verbose= False
)

conversation.predict(input="Hi, my name is Andrew")

The output is:

"Hello Andrew! It's nice to meet you. My name is AI. I have access to a vast amount of information, so feel free to ask me anything you'd like to know!"

Continue the conversation:

conversation.predict(input="What is 1+1?")

The output is:

'1 + 1 equals 2. Is there anything else you would like to know?'

Continue the conversation:

conversation.predict(input="What is my name?")

The output is:

"I'm sorry, I do not have access to that information. Is there anything else you would like to ask?"

As you can see, with the window size set to 1, the memory only keeps the last exchange, so the model no longer remembers the name.
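
For comparison, here is a minimal sketch of the same experiment with k=2; only the last two exchanges should survive in the history:

memory = ConversationBufferWindowMemory(k=2)
memory.save_context({"input": "Hi"}, {"output": "What's up"})
memory.save_context({"input": "Not much, just hanging"}, {"output": "Cool"})
memory.save_context({"input": "Anything new?"}, {"output": "Not really"})
memory.load_memory_variables({})
# -> history keeps only the "Not much..." and "Anything new?" exchanges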

2.3 ConversationTokenBufferMemory

First install the tiktoken package, since ConversationTokenBufferMemory relies on it for token counting.

!pip install tiktoken
from langchain.memory import ConversationTokenBufferMemory


# Cap the conversation memory at a maximum of 30 tokens
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=30)
memory.save_context({"input": "AI is what?!"},
                    {"output": "Amazing!"})
memory.save_context({"input": "Backpropagation is what?"},
                    {"output": "Beautiful!"})
memory.save_context({"input": "Chatbots are what?"}, 
                    {"output": "Charming!"})
memory.load_memory_variables({})

The output is:

{'history': 'AI: Beautiful!\nHuman: Chatbots are what?\nAI: Charming!'}

As the output shows, only the most recent messages that fit within the 30-token limit are kept; the earlier exchanges are dropped.
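
To see roughly why the history is cut where it is, you can count tokens yourself. llm.get_num_tokens is part of LangChain's base language-model interface (exact counts depend on the model's tokenizer, so treat the numbers as illustrative):

# Rough check of how many tokens each saved message costs
for text in ["AI is what?!", "Amazing!", "Backpropagation is what?", "Beautiful!"]:
    print(text, "->", llm.get_num_tokens(text), "tokens")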

2.4 ConversationSummaryBufferMemory

from langchain.memory import ConversationSummaryBufferMemory


schedule = "There is a meeting at 8am with your product team. \
You will need your powerpoint presentation prepared. \
9am-12pm have time to work on your LangChain \
project which will go quickly because Langchain is such a powerful tool. \
At noon, lunch at the Italian restaurant with a customer who is driving \
from over an hour away to meet you to understand the latest in AI. \
Be sure to bring your laptop to show the latest LLM demo."

memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=100)
memory.save_context({"input": "Hello"}, {"output": "What's up"})
memory.save_context({"input": "Not much, just hanging"},
                    {"output": "Cool"})
memory.save_context({"input": "What is on the schedule today?"}, 
                    {"output": f"{schedule}"})
memory.load_memory_variables({})

The output is:

{'history': "System: The human and AI have a brief exchange of greetings, and the AI informs the human of the day's schedule, including a meeting, work time on a project, and a lunch appointment with a customer interested in AI technology."}

Because the maximum token limit is set to 100 and a summary buffer is used, the LLM summarizes the older conversation content before storing it in memory. Let's continue the conversation with verbose enabled to see the chain details.

conversation = ConversationChain(
    llm = llm, 
    memory = memory,
    verbose = True
)
conversation.predict(input="What would be a good demo to show?")

The output is:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
System: The human and AI have a brief exchange of greetings, and the AI informs the human of the day's schedule, including a meeting, work time on a project, and a lunch appointment with a customer interested in AI technology.
Human: What would be a good demo to show?
AI:

> Finished chain.
"I suggest showcasing our AI's natural language processing capabilities, as well as its ability to analyze and interpret large datasets for actionable insights. We could also demonstrate its real-time decision-making capabilities and its integration with various platforms and applications. Additionally, we could show its ability to automate repetitive tasks and streamline workflow processes."

Read the memory contents again:

memory.load_memory_variables({})

The output is:

{'history': "System: The human and AI have a brief exchange of greetings, and the AI informs the human of the day's schedule, including a meeting, work time on a project, and a lunch appointment with a customer interested in AI technology.\nHuman: What would be a good demo to show?\nAI: I suggest showcasing our AI's natural language processing capabilities, as well as its ability to analyze and interpret large datasets for actionable insights. We could also demonstrate its real-time decision-making capabilities and its integration with various platforms and applications. Additionally, we could show its ability to automate repetitive tasks and streamline workflow processes."}

3. Reflections

When building an application, if token usage is expensive, choose an appropriate conversation memory type so that only the key information is stored and sent to the model.
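
As a closing sketch (using only the classes covered above), switching strategies is just a matter of swapping the memory object passed to ConversationChain:

from langchain.memory import (
    ConversationBufferMemory,          # full history: highest fidelity, highest token cost
    ConversationBufferWindowMemory,    # last k turns: constant size, forgets older turns
    ConversationTokenBufferMemory,     # hard cap on history tokens
    ConversationSummaryBufferMemory,   # compresses older turns into a summary
)

# Pick the strategy that matches your cost/quality trade-off, e.g. a summary buffer:
memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=100)
conversation = ConversationChain(llm=llm, memory=memory, verbose=False)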
