Original: https://lglightflow.github.io/ ; there may be errors and omissions, corrections are welcome.
1. Quick install
A quick summary of the installation steps (assuming Python is already installed).
In cmd or a shell:
pip install openai
setx OPENAI_API_KEY "your-api-key-here"
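Note that setx is Windows-only. On Linux/macOS shells, set the variable with export instead (a sketch; the key value is a placeholder):

```shell
# Linux/macOS equivalent of setx; add this line to ~/.bashrc or ~/.zshrc to persist it
export OPENAI_API_KEY="your-api-key-here"
```

The OpenAI client reads OPENAI_API_KEY from the environment automatically, so no key needs to appear in the code.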
Now let's talk to ChatGPT from code:
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "帮我把“你好”翻译为英文"},
    ]
)
print(response.choices[0].message.content)
print(response)
Output:
Printing response shows its content; let's analyze its structure:
ChatCompletion(
    id='chatcmpl-8Sh2mR0pjDRqkmV9As',
    choices=[Choice(
        finish_reason='stop',
        index=0,
        message=ChatCompletionMessage(
            content='"你好"在英语中可以翻译为"Hello"或"Hi"。这两种表达方式都是常见且通用的问候方式。',
            role='assistant',
            function_call=None,
            tool_calls=None))],
    created=1701836,
    model='gpt-3.5-turbo-0613',
    object='chat.completion',
    system_fingerprint=None,
    usage=CompletionUsage(completion_tokens=43, prompt_tokens=77, total_tokens=120))
If you are familiar with network programming, this should feel natural: it is the same idea as building a request and reading a response.
The request is constructed from the arguments passed to client.chat.completions.create; the response is the content of the response object shown above.
2. Request
Let's look at the request first:
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "帮我把“你好”翻译为英文"},
    ]
)
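Under the hood, the SDK simply POSTs JSON to the chat completions endpoint at https://api.openai.com/v1/chat/completions. A minimal sketch of the payload it builds (the dict below is an illustration of the wire format, not the SDK's internal code; no request is actually sent here):

```python
import json

# The JSON body that corresponds to the create() call above
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "帮我把“你好”翻译为英文"},
    ],
}

# Authentication travels in an HTTP header, not in the body
headers = {
    "Authorization": "Bearer YOUR_OPENAI_API_KEY",
    "Content-Type": "application/json",
}

print(json.dumps(payload, ensure_ascii=False, indent=2))
```

Seeing the raw payload makes it clear that the SDK adds convenience, not magic: every keyword argument to create() becomes a JSON field.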
The most important parameter is messages. Each message must have a role (system, user, or assistant) and content; together they are the conversation you would otherwise type into the ChatGPT web UI.
The messages list is also how you steer GPT's behavior. For example, if I change the system content to:
{"role": "system", "content": "You are a helpful, prudent, and courteous assistant."}
the output changes accordingly.
To give ChatGPT memory, you must feed the previous result back to it via an assistant message.
Let's switch examples and have ChatGPT remember a number, then tell us what that number was:
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "请记住这个数字:7"},
    ]
)
print(response.choices[0].message.content)
print("\n\n\n")

response2 = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "请记住这个数字:7"},
        {"role": "assistant", "content": response.choices[0].message.content},
        {"role": "user", "content": "我告诉你的数字是?"},
    ]
)
print(response2.choices[0].message.content)
This does feel rather wasteful of tokens, though, since the whole history is resent on every turn.
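Because the full history is resent on every call, long conversations get expensive. One common mitigation is trimming old turns while always keeping the system prompt. A minimal sketch (the trim_history helper and max_turns cutoff are my own illustration, not part of the OpenAI SDK):

```python
def trim_history(messages, max_turns=4):
    """Keep the system message(s) plus only the last max_turns other messages."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns:]

# A hypothetical conversation that has grown over several turns
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "请记住这个数字:7"},
    {"role": "assistant", "content": "好的,我记住了数字7。"},
    {"role": "user", "content": "我告诉你的数字是?"},
    {"role": "assistant", "content": "你告诉我的数字是7。"},
    {"role": "user", "content": "谢谢!"},
]

trimmed = trim_history(history, max_turns=4)
print(len(trimmed))  # system message + last 4 turns = 5
```

Note the trade-off: trimming can drop the very message you wanted remembered, so real applications often summarize older turns instead of discarding them.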
3. Response
Back to the response body shown earlier:
ChatCompletion(
    id='chatcmpl-8Sh2mR0pjDRqkmV9As',
    choices=[Choice(
        finish_reason='stop',
        index=0,
        message=ChatCompletionMessage(
            content='"你好"在英语中可以翻译为"Hello"或"Hi"。这两种表达方式都是常见且通用的问候方式。',
            role='assistant',
            function_call=None,
            tool_calls=None))],
    created=1701836,
    model='gpt-3.5-turbo-0613',
    object='chat.completion',
    system_fingerprint=None,
    usage=CompletionUsage(completion_tokens=43, prompt_tokens=77, total_tokens=120))
Field meanings:
finish_reason: one of stop, length, function_call, content_filter, null
- stop: the API returned a complete message;
- length: the output was cut off because max_tokens was set too low, or the context limit was reached;
- function_call: the model decided to call a function;
- content_filter: content was omitted because the content filter flagged it;
- null: the response is still in progress, or the call failed.
I'll summarize the other fields later.
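When consuming responses programmatically, it helps to branch on finish_reason rather than assume the message is complete. A small sketch (the describe_finish function and its messages are my own illustration, not from the SDK):

```python
def describe_finish(finish_reason):
    """Map a finish_reason value to a human-readable note."""
    notes = {
        "stop": "complete message returned",
        "length": "truncated: raise max_tokens or shorten the prompt",
        "function_call": "model wants to call a function",
        "content_filter": "content withheld by the content filter",
    }
    # None / unknown: still in progress or the call failed
    return notes.get(finish_reason, "in progress or failed")

print(describe_finish("stop"))    # complete message returned
print(describe_finish("length"))  # truncated: raise max_tokens or shorten the prompt
```

In real code, you would read the value from response.choices[0].finish_reason before trusting response.choices[0].message.content.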
4. Managing tokens
ChatGPT counts usage with a tokenizer that splits text into tokens (word pieces, not necessarily whole words); the number of pieces is the token count.
The official example: "ChatGPT is great!" encodes to 6 tokens: ["Chat", "G", "PT", " is", " great", "!"].
The total for one call is the tokens in the request messages plus the tokens in the response. You can check the totals via usage:
Code:
print(response2.usage)
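The usage fields reported in the earlier response illustrate the arithmetic: prompt_tokens plus completion_tokens equals total_tokens.

```python
# Numbers taken from the CompletionUsage shown in the response dump above
prompt_tokens = 77      # tokens in the request messages
completion_tokens = 43  # tokens in the generated reply
total_tokens = prompt_tokens + completion_tokens
print(total_tokens)  # 120
```

You are billed for total_tokens, so both long prompts and long replies cost money.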
To count tokens before submitting a request:
First install tiktoken:
pip install tiktoken
Once installed, use tiktoken to encode the text and count the tokens.
Complete Python code:
import tiktoken
from openai import OpenAI

def num_tokens_from_messages(messages, model="gpt-3.5-turbo"):
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        encoding = tiktoken.get_encoding("cl100k_base")
    if model == "gpt-3.5-turbo":  # note: future models may deviate from this
        num_tokens = 0
        for message in messages:
            num_tokens += 4  # every message follows <im_start>{role/name}\n{content}<im_end>\n
            for key, value in message.items():
                num_tokens += len(encoding.encode(value))
                if key == "name":  # if there's a name, the role is omitted
                    num_tokens += -1  # role is always required and always 1 token
        num_tokens += 2  # every reply is primed with <im_start>assistant
        return num_tokens
    else:
        raise NotImplementedError(f"""num_tokens_from_messages() is not presently implemented for model {model}.
See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens.""")
messages1 = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "请记住这个数字:7"},
]
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages1
)
print(response.choices[0].message.content)
print(response)
print("\n\n\n")

messages2 = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "请记住这个数字:7"},
    {"role": "assistant", "content": response.choices[0].message.content},
    {"role": "user", "content": "我告诉你的数字是?"},
]
response2 = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages2
)
print(response2.choices[0].message.content)
print(response2)
print("\n\n\n")
print("tokens in messages1: " + str(num_tokens_from_messages(messages1)))
print("tokens in messages2: " + str(num_tokens_from_messages(messages2)))
Note: the num_tokens_from_messages function above provides this token counting;
Output:
Comparing the function's estimates against the token counts the API actually reported:
First response: function estimate 27, actual tokens reported by the response: 26
Second response: function estimate 73, actual tokens reported by the response: 70
5. The temperature parameter
The higher the temperature, the more creative (and more random) GPT's output; lower values make it more deterministic. Usage:
response = client.chat.completions.create(
    model=model,
    messages=messages,
    temperature=0,
)
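Conceptually, temperature rescales the model's next-token probabilities before sampling: the scores (logits) are divided by the temperature before a softmax. A pure-Python sketch of that effect (the logits below are made-up illustrative numbers, not real model output):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities; lower temperature sharpens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens

for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])
```

At low temperature, almost all probability mass lands on the top token, so the output is nearly deterministic; at high temperature the distribution flattens, giving more varied, "creative" output. temperature=0 is treated by the API as effectively greedy decoding.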
6. ChatGPT with audio, images, etc.
I'll write this up when I have time. Anyone willing to gift me a GPT-4 subscription? TAT
Template
Quick install (assuming Python is already installed).
In cmd or a shell:
pip install openai
setx OPENAI_API_KEY "your-api-key-here"
Python code
from openai import OpenAI

###################### step 1
messages1 = [
    {"role": "system", "content": "description of the assistant"},
    {"role": "user", "content": "your request to GPT"},
]
client = OpenAI()
response = client.chat.completions.create(
    model="gpt model name",
    messages=messages1,
)
print(response.choices[0].message.content)
print()
######################
######################
messages1.append({"role": "assistant", "content": response.choices[0].message.content})
messages1.append({"role": "user", "content": "new request"})
response = client.chat.completions.create(
    model="gpt model name",
    messages=messages1,
)
print(response.choices[0].message.content)
print()
###################### repeat this region for each new turn
######################