How to use the Python openai client with both Azure and OpenAI at the same time?

The question: how can the Azure and OpenAI Python clients be used at the same time?

Background:

OpenAI offers a Python client, currently in version 0.27.8, which supports both Azure and OpenAI.

Here are examples of how to use it to call the ChatCompletion for each provider:

# openai_chatcompletion.py

"""Test OpenAI's ChatCompletion endpoint"""
import os
import openai
import dotenv
dotenv.load_dotenv()
openai.api_key = os.environ.get('OPENAI_API_KEY')

# Hello, world.
api_response = openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "user", "content": "Hello!"}
  ],
  max_tokens=16,
  temperature=0,
  top_p=1,
  frequency_penalty=0,
  presence_penalty=0,
)

print('api_response:', type(api_response), api_response)
print('api_response.choices[0].message:', type(api_response.choices[0].message), api_response.choices[0].message)

And:

# azure_openai_35turbo.py

"""Test Microsoft Azure's ChatCompletion endpoint"""
import os
import openai
import dotenv
dotenv.load_dotenv()

openai.api_type = "azure"
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") 
openai.api_version = "2023-05-15"
openai.api_key = os.getenv("AZURE_OPENAI_KEY")


# Hello, world.
# In addition to the `api_*` properties above, mind the difference in arguments
# as well between OpenAI and Azure:
# - OpenAI from OpenAI uses `model="gpt-3.5-turbo"`!
# - OpenAI from Azure uses `engine="‹deployment name›"`! ⚠️
#   > You need to set the engine variable to the deployment name you chose when
#   > you deployed the GPT-35-Turbo or GPT-4 models.
#  This is the name of the deployment I created in the Azure portal on the resource.
api_response = openai.ChatCompletion.create(
  engine="gpt-35-turbo", # engine = "deployment_name".
  messages=[
    {"role": "user", "content": "Hello!"}
  ],
  max_tokens=16,
  temperature=0,
  top_p=1,
  frequency_penalty=0,
  presence_penalty=0,
)

print('api_response:', type(api_response), api_response)
print('api_response.choices[0].message:', type(api_response.choices[0].message), api_response.choices[0].message)

That is, api_type and the other settings are module-level globals of the Python library.

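As a minimal sketch (assuming the 0.27.x client shown above), the settings live as attributes on the openai module itself, so assigning to them affects every subsequent call made anywhere in the same process:

# globals_sketch.py (illustration only)

"""Minimal sketch: the configuration is stored as attributes of the openai
module, so it is shared by every caller in the process."""
import openai

print(openai.api_type)   # "open_ai" by default
print(openai.api_base)   # "https://api.openai.com/v1" by default

openai.api_type = "azure"   # from now on, every call in this process takes the Azure code path
print(openai.api_type)      # "azure"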

Here is a third example to transcribe audio (it uses Whisper, which is available on OpenAI but not on Azure):

# openai_transcribe.py

"""
Test the transcription endpoint
https://platform.openai.com/docs/api-reference/audio
"""
import os
import openai
import dotenv
dotenv.load_dotenv()


openai.api_key = os.getenv("OPENAI_API_KEY")
audio_file = open("minitests/minitests_data/bilingual-english-bosnian.wav", "rb")
transcript = openai.Audio.transcribe(
    model="whisper-1",
    file=audio_file,
    prompt="Part of a Bosnian language class.",
    response_format="verbose_json",
)
print(transcript)

These are minimal examples but I use similar code as part of my webapp (a Flask app).

Now my challenge is that I'd like to:

  • Use the ChatCompletion endpoint from Azure; but:

  • Use the Transcribe endpoint from OpenAI (since it's not available on Azure)

Is there any way to do so?

I have a few options in mind:

  • Changing the globals before every call (a sketch of this follows the list). But I'm worried that this might cause side-effects I did not expect.
  • Duplicating/Forking the library to have two versions run concurrently, one for each provider, but this also feels very messy.
  • Use an alternative client for OpenAI's Whisper, if any.
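
To make the first option concrete, here is a hedged sketch (the context-manager helper is hypothetical; the attribute names are the 0.27.x globals used above) of swapping the module-level settings for a single call and restoring them afterwards:

# config_swap_sketch.py (hypothetical helper, illustration only)

"""Temporarily swap the openai module's global configuration, then restore
whatever was there before."""
import contextlib
import os
import openai

@contextlib.contextmanager
def openai_config(**overrides):
    # Save the current module-level values, apply the overrides, and always
    # restore the saved values on exit, even if the API call raises.
    saved = {name: getattr(openai, name) for name in overrides}
    try:
        for name, value in overrides.items():
            setattr(openai, name, value)
        yield
    finally:
        for name, value in saved.items():
            setattr(openai, name, value)

# Chat completion against Azure; the OpenAI globals are restored on exit.
with openai_config(
    api_type="azure",
    api_base=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_version="2023-05-15",
    api_key=os.getenv("AZURE_OPENAI_KEY"),
):
    api_response = openai.ChatCompletion.create(
        engine="gpt-35-turbo",  # deployment name
        messages=[{"role": "user", "content": "Hello!"}],
        max_tokens=16,
    )

Even wrapped like this, the swap mutates shared state and is not thread-safe, which is exactly the kind of unexpected side-effect that makes this option risky inside a Flask app.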

I'm not too comfortable with these and feel I may have missed a more obvious solution.

Or of course… Alternatively, I could just use Whisper with a different provider (e.g. Replicate) or an alternative to Whisper altogether.

See also

This issue has been reported on GitHub (openai/openai-python), but no solution was provided there.

Solution:

Each API in the library accepts per-method overrides for the configuration options. If you want to access the Azure API for chat completions, you can explicitly pass in your Azure config. For the transcribe endpoint, you can explicitly pass the OpenAI config. For example:

import os
import openai

api_response = openai.ChatCompletion.create(
    api_base=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_KEY"),
    api_type="azure",
    api_version="2023-05-15",
    engine="gpt-35-turbo",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
    max_tokens=16,
    temperature=0,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
)
print(api_response)



audio_file = open("minitests/minitests_data/bilingual-english-bosnian.wav", "rb")
transcript = openai.Audio.transcribe(
    api_key=os.getenv("OPENAI_API_KEY"),
    model="whisper-1",
    file=audio_file,
    prompt="Part of a Bosnian language class.",
    response_format="verbose_json",
)
print(transcript)
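
Since the original examples end up inside a Flask app, one way to keep the two configurations from touching the globals at all is to wrap each per-call override in a small helper. The helper names below are hypothetical; the arguments simply mirror the two snippets above:

# providers_sketch.py (hypothetical helpers, illustration only)

import os
import openai

def azure_chat(messages, **kwargs):
    """Chat completion against Azure, passing the Azure config on every call."""
    return openai.ChatCompletion.create(
        api_type="azure",
        api_base=os.getenv("AZURE_OPENAI_ENDPOINT"),
        api_version="2023-05-15",
        api_key=os.getenv("AZURE_OPENAI_KEY"),
        engine="gpt-35-turbo",  # deployment name
        messages=messages,
        **kwargs,
    )

def openai_transcribe(audio_file, **kwargs):
    """Whisper transcription against OpenAI, passing the OpenAI key on every call."""
    return openai.Audio.transcribe(
        api_key=os.getenv("OPENAI_API_KEY"),
        model="whisper-1",
        file=audio_file,
        **kwargs,
    )

# Example usage from the web app:
# reply = azure_chat([{"role": "user", "content": "Hello!"}], max_tokens=16)
# transcript = openai_transcribe(open("audio.wav", "rb"), response_format="verbose_json")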
