ConversationTokenBufferMemory usage
Context length is controlled by token count: max_token_limit sets how many tokens of history are retained.
Before running the code below, set the key in your env file; the key is obtained from OpenAI.
from langchain.memory import ConversationTokenBufferMemory
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(temperature=0.0)
# Create a conversation buffer capped by token count
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=30)
# Save three rounds of conversation
memory.save_context({"input": "AI is what?!"},
{"output": "Amazing!"})
memory.save_context({"input": "Backpropagation is what?"},
{"output": "Beautiful!"})
memory.save_context({"input": "Chatbots are what?"},
{"output": "Charming!"})
print(memory.load_memory_variables({}))
Prints
{'history': 'AI: Beautiful!\nHuman: Chatbots are what?\nAI: Charming!'}
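To see why only the most recent exchanges survive, here is a minimal sketch of the pruning idea in plain Python, with no LangChain or API key required. The class name TokenBufferMemory and the whitespace-split token counter are illustrative stand-ins; the real ConversationTokenBufferMemory asks the LLM's tokenizer for exact counts.

```python
def count_tokens(message: str) -> int:
    # Crude stand-in for a real tokenizer: one token per whitespace-separated word.
    return len(message.split())

class TokenBufferMemory:
    def __init__(self, max_token_limit: int):
        self.max_token_limit = max_token_limit
        self.buffer: list[str] = []

    def save_context(self, human: str, ai: str) -> None:
        self.buffer.append(f"Human: {human}")
        self.buffer.append(f"AI: {ai}")
        # Drop the oldest messages until the history fits under the token limit.
        while sum(count_tokens(m) for m in self.buffer) > self.max_token_limit:
            self.buffer.pop(0)

    def load_memory_variables(self) -> dict:
        return {"history": "\n".join(self.buffer)}

memory = TokenBufferMemory(max_token_limit=8)
memory.save_context("AI is what?!", "Amazing!")
memory.save_context("Backpropagation is what?", "Beautiful!")
memory.save_context("Chatbots are what?", "Charming!")
print(memory.load_memory_variables()["history"])
```

With a limit of 8 toy tokens, the earliest messages are evicted and the printed history keeps only the tail of the conversation, mirroring the truncated output shown above.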
VectorStoreRetrieverMemory usage
Stores memory in a vector database and retrieves the entries most relevant to the user's input.
import faiss
from langchain.docstore import InMemoryDocstore
from langchain.memory import VectorStoreRetrieverMemory
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
embedding_size = 1536 # Dimensions of the OpenAIEmbeddings
index = faiss.IndexFlatL2(embedding_size)
embedding_fn = OpenAIEmbeddings().embed_query
vectorstore = FAISS(embedding_fn, index, InMemoryDocstore({}), {})
# In actual usage, you would set `k` to be a higher value, but we use k=1 to show that
# the vector lookup still returns the semantically relevant information
retriever = vectorstore.as_retriever(search_kwargs=dict(k=1))
memory = VectorStoreRetrieverMemory(retriever=retriever)
# When added to an agent, the memory object can save pertinent information from conversations or used tools
memory.save_context({"input": "My favorite food is pizza"}, {"output": "that's good to know"})
memory.save_context({"input": "My favorite sport is soccer"}, {"output": "..."})
memory.save_context({"input": "I don't like the Celtics"}, {"output": "ok"})
print(memory.load_memory_variables({"prompt": "what sport should i watch?"})["history"])
Output
input: My favorite sport is soccer
output: ...
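The core idea, embed each saved exchange and return the k nearest to the query, can be sketched in plain Python without OpenAI or FAISS. The names RetrieverMemory, embed, and cosine are hypothetical; a bag-of-words vector stands in for a real 1536-dimension embedding, but the retrieval flow is the same.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy "embedding": lowercase word counts instead of a dense vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class RetrieverMemory:
    def __init__(self, k: int = 1):
        self.k = k
        self.docs: list[str] = []

    def save_context(self, human: str, ai: str) -> None:
        # Each exchange is stored as one retrievable document.
        self.docs.append(f"input: {human}\noutput: {ai}")

    def load_memory_variables(self, prompt: str) -> dict:
        # Rank stored exchanges by similarity to the prompt; keep the top k.
        query = embed(prompt)
        ranked = sorted(self.docs,
                        key=lambda d: cosine(embed(d), query),
                        reverse=True)
        return {"history": "\n".join(ranked[: self.k])}

memory = RetrieverMemory(k=1)
memory.save_context("My favorite food is pizza", "that's good to know")
memory.save_context("My favorite sport is soccer", "...")
memory.save_context("I don't like the Celtics", "ok")
print(memory.load_memory_variables("what sport should i watch?")["history"])
```

As with the real VectorStoreRetrieverMemory, the query about sport pulls back the soccer exchange even though it was not the most recently saved one: retrieval is by relevance, not recency.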