1. Multi-Agent Conversation: Stand-up Comedy
1.1 Basic Agent Capabilities
In AutoGen, an agent is an entity that can act on behalf of human intent: it sends and receives messages, executes operations, generates replies, and interacts with other agents. AutoGen has a built-in agent class named ConversableAgent, which unifies different types of agents under a single programming abstraction.
ConversableAgent ships with many built-in capabilities, such as generating replies with a configured large language model, executing code or functions, keeping a human in the loop, and checking stop conditions. You can switch each component on or off and customize it to the needs of your application, so the same interface can be used to create agents with different roles.
Let's first demonstrate the basic capabilities of an agent. The code below runs as-is in the course "AI Agentic Design Patterns with AutoGen".
- Import the OpenAI API key and configure the LLM
Use the get_openai_api_key function to load the OpenAI API key from the environment, and define an LLM configuration. In this course we use GPT-3.5 Turbo as the model.
from utils import get_openai_api_key
OPENAI_API_KEY = get_openai_api_key()
llm_config = {"model": "gpt-3.5-turbo"}
- Define an AutoGen agent
We use the ConversableAgent class to define an agent named chatbot, and pass it the LLM configuration defined above so that it can generate replies with a large language model.
from autogen import ConversableAgent
agent = ConversableAgent(
    name="chatbot",
    llm_config=llm_config,
    human_input_mode="NEVER",
)
human_input_mode is set to "NEVER", which means the agent never asks for human input and generates replies using only the large language model. You can also set it to "ALWAYS", in which case the agent always asks for human input before generating a reply. This is the agent's basic setup; you can additionally add a code-execution configuration, function execution, and other settings.
- Generate a reply with the generate_reply method
Call the agent's generate_reply method and give it a list of messages, for example "Tell me a joke." with the role user. When run, the agent replies with a joke.
reply = agent.generate_reply(
    messages=[{"content": "Tell me a joke.", "role": "user"}]
)
print(reply)
Sure! Here's a joke for you: Why did the math book look sad? Because it had too many problems.
If we call generate_reply again with the content changed to "Repeat the joke.", the agent does not repeat the joke: each call to generate_reply generates a fresh reply without remembering previous ones.
reply = agent.generate_reply(
    messages=[{"content": "Repeat the joke.", "role": "user"}]
)
print(reply)
Of course! Please provide me with the joke you would like me to repeat.
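Because generate_reply is stateless, getting the agent to repeat the joke means resending the earlier exchange yourself. A minimal sketch of how the message list accumulates (the first reply is hard-coded here as a placeholder for the agent's actual output):

```python
# The caller maintains the history; generate_reply keeps no state between calls.
messages = [{"content": "Tell me a joke.", "role": "user"}]

# Placeholder for: first_reply = agent.generate_reply(messages=messages)
first_reply = "Why did the math book look sad? Because it had too many problems."

# Append the agent's reply and the follow-up request before the next call.
messages.append({"content": first_reply, "role": "assistant"})
messages.append({"content": "Repeat the joke.", "role": "user"})

# agent.generate_reply(messages=messages) would now see the original joke
# in its context and be able to repeat it.
print([m["role"] for m in messages])  # ['user', 'assistant', 'user']
```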
1.2 Multi-Agent Conversation Example: Stand-up Comedy
Next, we create a multi-agent conversation example: two ConversableAgent instances simulating a dialogue between two stand-up comedians.
1.2.1 Creating the Agents
Create two agents named Cathy and Joe, and give each a system message so the agent knows its name and that it is a stand-up comedian. Joe's message adds one extra instruction: "Start the next joke from the punchline of the previous joke."
cathy = ConversableAgent(
    name="cathy",
    system_message="Your name is Cathy and you are a stand-up comedian.",
    llm_config=llm_config,
    human_input_mode="NEVER",
)
joe = ConversableAgent(
    name="joe",
    system_message="Your name is Joe and you are a stand-up comedian. "
    "Start the next joke from the punchline of the previous joke.",
    llm_config=llm_config,
    human_input_mode="NEVER",
)
1.2.2 Starting the Conversation
Call the agent's initiate_chat method to kick off the conversation. For example, have Joe start the chat with Cathy as the recipient, setting the initial message and the maximum number of turns.
chat_result = joe.initiate_chat(
    recipient=cathy,
    message="I'm Joe. Cathy, let's keep the jokes rolling.",
    max_turns=2,
)
joe (to cathy):
I'm Joe. Cathy, let's keep the jokes rolling.
--------------------------------------------------------------------------------
cathy (to joe):
Sure thing, Joe! So, did you know that I tried to write a joke about the wind, but it just ended up being too drafty? It was full of holes!
--------------------------------------------------------------------------------
joe (to cathy):
Well, Cathy, that joke might have been breezy, but I like to think of it as a breath of fresh air!
--------------------------------------------------------------------------------
cathy (to joe):
I'm glad you found it refreshing, Joe! But let me ask you this - why did the math book look sad? Because it had too many problems!
--------------------------------------------------------------------------------
1.2.3 Inspecting the Conversation and Customizing the Summary
We can also inspect the conversation history and the number of tokens consumed, for example by printing the chat history with the pprint module and checking token usage and total cost.
import pprint
pprint.pprint(chat_result.chat_history)
[{'content': "I'm Joe. Cathy, let's keep the jokes rolling.",
  'role': 'assistant'},
 {'content': 'Sure thing, Joe! So, did you know that I tried to write a joke '
             'about the wind, but it just ended up being too drafty? It was '
             'full of holes!',
  'role': 'user'},
 {'content': 'Well, Cathy, that joke might have been breezy, but I like to '
             'think of it as a breath of fresh air!',
  'role': 'assistant'},
 {'content': "I'm glad you found it refreshing, Joe! But let me ask you this - "
             'why did the math book look sad? Because it had too many '
             'problems!',
  'role': 'user'}]
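The ChatResult returned by initiate_chat also carries cost and summary fields (in recent AutoGen versions). A minimal sketch, using a SimpleNamespace stand-in for chat_result so the snippet runs without a live chat; the field values below are placeholders, not real usage numbers:

```python
from types import SimpleNamespace
import pprint

# Stand-in for the ChatResult returned by joe.initiate_chat(...) above.
# Field names (chat_history, cost, summary) follow AutoGen's ChatResult;
# the values here are placeholders.
chat_result = SimpleNamespace(
    chat_history=[{"content": "I'm Joe. Cathy, let's keep the jokes rolling.",
                   "role": "assistant"}],
    cost={"usage_including_cached_inference": {"total_cost": 0.0}},
    summary="I'm glad you found it refreshing, Joe! ...",
)

pprint.pprint(chat_result.cost)  # token usage and dollar cost per model
print(chat_result.summary)       # defaults to the last message of the chat
```

By default the summary is simply the last message exchanged; passing summary_method="reflection_with_llm" to initiate_chat instead asks the LLM to write a short summary of the whole conversation.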