I recently started working with the openai-agents-sdk and found that its workflow design, agent configuration, and invocation are indeed remarkably streamlined, and structured input and output are very easy to control. Although the SDK natively targets OpenAI's own models, in today's landscape of many competing models it also generously supports plugging in external models.
Official documentation: https://openai.github.io/openai-agents-python/
For example, to connect to a locally deployed QwQ-32B, import AsyncOpenAI and OpenAIChatCompletionsModel. The setup is similar to using the regular OpenAI client; the locally deployed QwQ-32B is configured as follows:
external_client = AsyncOpenAI(api_key='empty', base_url='http://192.168.xxx.xxx:pppp/v1/')
external_model = OpenAIChatCompletionsModel(model='QwQ-32B-AWQ', openai_client=external_client)
A few notes:
1) The models served by vLLM can be listed at http://192.168.xxx.xxx:pppp/v1/models; the model id returned there, 'QwQ-32B-AWQ', is exactly the name to pass as `model` to OpenAIChatCompletionsModel.
2) When using a local model, tracing must be turned off: set_tracing_disabled(disabled=True)
3) Custom tools are defined with the `function_tool` decorator.
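For reference, the /v1/models endpoint returns a JSON payload along the lines of the sample below (field values here are illustrative, not taken from a real server); the `id` field of each entry in `data` is what you pass to OpenAIChatCompletionsModel:

```python
import json

# Illustrative sample of the JSON returned by vLLM's /v1/models endpoint
# (the exact fields on a real server may differ).
sample = '''
{
  "object": "list",
  "data": [
    {"id": "QwQ-32B-AWQ", "object": "model", "owned_by": "vllm"}
  ]
}
'''

payload = json.loads(sample)
model_id = payload["data"][0]["id"]
print(model_id)  # QwQ-32B-AWQ
```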
The complete code for a weather-query assistant follows:
import asyncio
from openai import AsyncOpenAI
from agents import Agent, Runner, OpenAIChatCompletionsModel, ModelSettings, set_tracing_disabled, function_tool

# Tracing must be disabled when using a non-OpenAI model.
set_tracing_disabled(disabled=True)

@function_tool
async def fetch_weather(city: str) -> str:
    """Fetch the weather for a given city.

    Args:
        city: The city to fetch the weather for.
    """
    # In real life, we'd fetch the weather from a weather API
    print('[debug]: agent is calling this tool.')
    return "sunny"

external_client = AsyncOpenAI(api_key='empty', base_url='http://192.168.xxx.xxx:pppp/v1/')
external_model = OpenAIChatCompletionsModel(model='QwQ-32B-AWQ', openai_client=external_client)

agent = Agent(
    name="weather assistant",
    model=external_model,
    model_settings=ModelSettings(temperature=0.1),
    instructions="You are a weather assistant; call the tool `fetch_weather` to look up a city's weather when needed.",
    tools=[fetch_weather],
)

async def ask_agent(question):
    result = await Runner.run(agent, question)
    return result.final_output

async def main():
    try:
        queries = ["What's the weather like in Beijing?", "How is the weather in Shanghai?"]
        tasks = [ask_agent(q) for q in queries]
        results = await asyncio.gather(*tasks)
        for q, item in zip(queries, results):
            print(q)
            print(item)
            print('*' * 20)
    except Exception as e:
        print(f"Error occurred: {e}")

if __name__ == "__main__":
    asyncio.run(main())
Thanks, everyone!