Question: ValueError: One input key expected, got ['text_one', 'text_two'] in LangChain with memory and multiple inputs
Background:
I'm trying to run a chain in LangChain with memory and multiple inputs. The closest error I could find was posted here, but in that case only one input was being passed.
Here is the setup:
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory

llm = OpenAI(
    model="text-davinci-003",
    openai_api_key=environment_values["OPEN_AI_KEY"],  # Used dotenv to store the API key
    temperature=0.9,
    client="",
)
memory = ConversationBufferMemory(memory_key="chat_history")
prompt = PromptTemplate(
    input_variables=[
        "text_one",
        "text_two",
        "chat_history"
    ],
    template=(
        """You are an AI talking to a human. Here is the chat
history so far:
{chat_history}
Here is some more text:
{text_one}
and here is even more text:
{text_two}
"""
    )
)
chain = LLMChain(
    llm=llm,
    prompt=prompt,
    memory=memory,
    verbose=False
)
When I run
output = chain.predict(
    text_one="Hello",
    text_two="World"
)
I get ValueError: One input key expected got ['text_one', 'text_two']
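The error comes from the memory, not the prompt: when ConversationBufferMemory has no input_key configured, it tries to infer which chain input is the human message by excluding its own memory variables, and it refuses to guess when more than one candidate remains. Here is a simplified sketch of that inference logic (an illustration, not LangChain's actual source code):

```python
def infer_input_key(inputs, memory_variables):
    """Guess which chain input holds the human message.

    Simplified sketch of the inference ConversationBufferMemory
    performs when no input_key is configured (not LangChain's
    actual source).
    """
    # Every chain input the memory does not itself supply is a
    # candidate for the "human" side of the conversation.
    candidates = [k for k in inputs if k not in memory_variables]
    if len(candidates) != 1:
        raise ValueError(f"One input key expected got {candidates}")
    return candidates[0]


# One free input: the key is unambiguous.
print(infer_input_key({"question": "Hi"}, ["chat_history"]))

# Two free inputs: the situation above, so it raises.
try:
    infer_input_key({"text_one": "Hello", "text_two": "World"},
                    ["chat_history"])
except ValueError as exc:
    print(exc)
```

With a single non-memory input the guess succeeds, which is why the single-input examples in other posts work without any extra configuration.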
I've looked at a blog post, which suggests trying:
output = chain(
    inputs={
        "text_one": "Hello",
        "text_two": "World"
    }
)
which gives the exact same error. In the spirit of trying different things, I've also tried:
output = chain.predict(  # Also tried .run() here
    inputs={
        "text_one": "Hello",
        "text_two": "World"
    }
)
which gives Missing some input keys: {'text_one', 'text_two'}.
I've also looked at the same issue in another blog post, which suggests passing the llm into the memory, i.e.
# Everything the same except...
memory = ConversationBufferMemory(llm=llm, memory_key="chat_history") # Note the llm here
and I still get the same error. If someone knows a way around this error, please let me know. Thank you.
Solution:
While drafting this question, I came across the answer.
When defining the memory variable, pass input_key="human_input" and make sure each prompt has a human_input variable defined.
memory = ConversationBufferMemory(
    memory_key="chat_history",
    input_key="human_input"
)
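Setting input_key short-circuits the inference entirely: the memory records exactly that field as the human turn and never has to choose among the remaining prompt variables. A minimal sketch of the resolution order (again an illustration, not LangChain's source):

```python
def resolve_input_key(inputs, memory_variables, input_key=None):
    # Illustrative sketch, not LangChain's source: with input_key
    # configured, the memory uses it directly and never counts the
    # remaining inputs, so multiple prompt variables are fine.
    if input_key is not None:
        return input_key
    candidates = [k for k in inputs if k not in memory_variables]
    if len(candidates) != 1:
        raise ValueError(f"One input key expected got {candidates}")
    return candidates[0]


inputs = {"human_input": "", "text_one": "Hello", "text_two": "World"}
print(resolve_input_key(inputs, ["chat_history"], input_key="human_input"))
```

Without input_key, the same three inputs would hit the ambiguous branch and raise the ValueError from the question.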
Then, in each prompt, make sure there is a human_input input variable.
prompt = PromptTemplate(
    input_variables=[
        "text_one",
        "text_two",
        "chat_history",
        "human_input",  # Even if it's blank
    ],
    template=(
        """You are an AI talking to a human. Here is the chat
history so far:
{chat_history}
Here is some more text:
{text_one}
and here is even more text:
{text_two}
{human_input}
"""
    )
)
Then, build your chain:
chain = LLMChain(
    llm=llm,
    prompt=prompt,
    memory=memory,  # Contains the input_key
    verbose=False
)
And then run it as:
output = chain.predict(
    human_input="",  # or whatever you want
    text_one="Hello",
    text_two="World"
)
print(output)
# On my machine, it outputs: '\nAI: Hi there! How can I help you?'
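For intuition about what actually lands in chat_history: on each call the buffer memory stores one human/AI pair, taking the human side from the configured input_key, so text_one and text_two never reach the stored history. A rough sketch of that bookkeeping (hypothetical helper, not LangChain's implementation; the "text" output key matches LLMChain's default):

```python
def save_context(buffer, inputs, outputs, input_key, output_key="text"):
    # Hypothetical helper mimicking ConversationBufferMemory's
    # bookkeeping: only the input_key field is stored as the human
    # turn; the other prompt variables never reach the history.
    buffer.append(f"Human: {inputs[input_key]}")
    buffer.append(f"AI: {outputs[output_key]}")
    return buffer


history = []
save_context(history,
             {"human_input": "", "text_one": "Hello", "text_two": "World"},
             {"text": "Hi there! How can I help you?"},
             input_key="human_input")
print("\n".join(history))
```

This is why an empty human_input is harmless: it only means the stored human turn is blank, while text_one and text_two still reach the prompt template on every call.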