Problem: when using the OpenAI API (specifically its text completion feature, e.g. the GPT-series models), it does not return the text result you expect.
Background:
I am using Node.js and want to use the OpenAI API.
I just copied the code from the OpenAI Playground, and it looks like this:
export const askOpenAi = async () => {
  const response = await openai.createCompletion("text-davinci-001", {
    prompt: "\ninput: What is human life expectancy in the United States?\n",
    temperature: 0,
    max_tokens: 100,
    top_p: 1,
    frequency_penalty: 0,
    presence_penalty: 0,
    stop: ["\n", "\ninput:"],
  });
  return response.data;
};
The data OpenAI returns looks like this:
{
  id: '~~~',
  object: 'text_completion',
  created: ~~~,
  model: 'text-davinci:001',
  choices: [ { text: '', index: 0, logprobs: null, finish_reason: 'stop' } ]
}
In the Playground, this prompt works very well. How can I get the right response?
Solution:
It should be something like this:
export const askOpenAi = async () => {
  const prompt = `input: What is human life expectancy in the United States?
output:`;
  const response = await openai.createCompletion("text-davinci-001", {
    prompt: prompt,
    temperature: 0,
    max_tokens: 100,
    top_p: 1,
    frequency_penalty: 0,
    presence_penalty: 0,
    stop: ["input:"],
  });
  return response.data;
};
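With the stop sequences fixed, the answer arrives in choices[0].text of the response body. A minimal sketch of reading it out, assuming the response shape shown above (extractAnswer is a hypothetical helper, not part of the openai SDK; the mock object below is made up for illustration):

```javascript
// Hypothetical helper (not part of the openai SDK): read the completion
// text out of a createCompletion response body.
function extractAnswer(data) {
  const choice = data.choices && data.choices[0];
  // finish_reason 'stop' means the model hit a stop sequence or ended naturally
  return choice ? choice.text.trim() : "";
}

// Mock response with the same shape as the one shown earlier:
const mock = {
  object: "text_completion",
  choices: [{ text: " 78 years.", index: 0, logprobs: null, finish_reason: "stop" }],
};
console.log(extractAnswer(mock)); // "78 years."
```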
Here, first of all, remove the \n from the stop array, because otherwise it will stop the completion after every newline (an answer could span multiple lines). Secondly, there is no need to add an extra \n before input:; it actually doesn't matter.
Finally, remember to give the model a clue about the completion you are expecting by adding output: at the end of your prompt.
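To see why having "\n" in the stop array produced an empty choices[0].text, here is an illustrative sketch. applyStop is a made-up helper that only mimics the truncation the API performs server-side; the completion string is invented sample text:

```javascript
// Illustrative only: mimic how stop sequences truncate a completion.
// The generated text here begins with "\n", so a "\n" stop sequence
// cuts the completion off before any characters survive.
function applyStop(completion, stops) {
  let cut = completion.length;
  for (const s of stops) {
    const i = completion.indexOf(s);
    if (i !== -1 && i < cut) cut = i;
  }
  return completion.slice(0, cut);
}

const completion = "\nThe life expectancy in the United States is about 78 years.";
console.log(JSON.stringify(applyStop(completion, ["\n", "\ninput:"]))); // "" — stopped immediately
console.log(applyStop(completion, ["input:"]).trim()); // the full answer survives
```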
By the way, this kind of question-answering completion can also be achieved through OpenAI's newer instruct models.
const prompt = `Answer the following question:
What is human life expectancy in the United States?
{}`;

const response = await openai.createCompletion("text-davinci-001", {
  prompt: prompt,
  temperature: 0.7,
  max_tokens: 100,
  top_p: 1,
  frequency_penalty: 0,
  presence_penalty: 0,
  stop: ["{}"],
});
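The instruct-style prompt above can be built for any question in the same way. A small sketch, assuming the same "{}" marker used as the stop sequence in the snippet above (buildInstructPrompt is a hypothetical helper, not an SDK function):

```javascript
// Hypothetical helper: wrap any question in the instruct-style template
// used above, ending with the "{}" marker that doubles as the stop sequence.
function buildInstructPrompt(question) {
  return `Answer the following question:\n${question}\n{}`;
}

console.log(buildInstructPrompt("What is human life expectancy in the United States?"));
```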