AI Agentic Design Patterns with AutoGen (Part 1): Sequential Chats and Agent Reflection

1. Multi-Agent Conversation: Stand-up Comedy

1.1 Basic Capabilities of an Agent

  In AutoGen, an agent is an entity that can act on behalf of human intent: it sends messages, receives messages, performs actions, generates replies, and interacts with other agents. AutoGen ships with a built-in agent class called ConversableAgent, which unifies different types of agents under a single programming abstraction.

  ConversableAgent comes with many built-in capabilities, such as generating replies with a configured large language model, executing code or functions, and keeping a human in the loop to review or stop a response. You can switch each component on or off and customize it to your application's needs; with these capabilities, you can create agents with different roles through the same interface.


  Let's first demonstrate the basic features of an agent. The code below runs as-is in the environment of the AI Agentic Design Patterns with AutoGen course.

  1. Import the OpenAI API key and configure the LLM
    Use the get_openai_api_key function to load the OpenAI API key from the environment, then define an LLM configuration. In this course, GPT-3.5 Turbo is used as the model.
from utils import get_openai_api_key
OPENAI_API_KEY = get_openai_api_key()
llm_config = {"model": "gpt-3.5-turbo"}
  2. Define an AutoGen agent
    We use the ConversableAgent class to define an agent named chatbot, passing it the LLM configuration defined above so it can generate replies with a large language model.
from autogen import ConversableAgent

agent = ConversableAgent(
    name="chatbot",
    llm_config=llm_config,
    human_input_mode="NEVER",
)

  With human_input_mode set to "NEVER", the agent never asks for human input and relies solely on the large language model to generate replies. You can instead set it to "ALWAYS", in which case the agent always asks for human input before generating a reply. This is the agent's basic setup; you can further add a code-execution configuration, function execution, and other settings.
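As a sketch of the "ALWAYS" setting: a common AutoGen pattern is a human-proxy agent that never calls an LLM and simply relays whatever the human types (the name human_proxy is just an illustrative choice, not part of the code above):

```python
from autogen import ConversableAgent

# With human_input_mode="ALWAYS" the agent pauses and waits for keyboard
# input at every turn; llm_config=False disables LLM-generated replies,
# so this agent speaks only with the human's words.
human_proxy = ConversableAgent(
    name="human_proxy",
    llm_config=False,
    human_input_mode="ALWAYS",
)
```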

  3. Generate a reply with the generate_reply method
    Call the agent's generate_reply method with a list of messages, for example "Tell me a joke." with the role set to user. When run, the agent replies with a joke.
reply = agent.generate_reply(
    messages=[{"content": "Tell me a joke.", "role": "user"}]
)
print(reply)
Sure! Here's a joke for you: Why did the math book look sad? Because it had too many problems.

  If you call generate_reply again with the content changed to "Repeat the joke.", the agent will not repeat the joke: each call to generate_reply produces a fresh reply and does not remember previous ones.

reply = agent.generate_reply(
    messages=[{"content": "Repeat the joke.", "role": "user"}]
)
print(reply)
Of course! Please provide me with the joke you would like me to repeat.
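Because generate_reply is stateless, the way to make the agent "remember" is to pass the earlier exchange back in yourself. A minimal sketch (the joke string is copied from the reply above; `agent` is the ConversableAgent defined in step 2):

```python
# generate_reply keeps no memory, so we supply the transcript ourselves:
# every earlier turn goes back in as part of the messages list.
history = [
    {"content": "Tell me a joke.", "role": "user"},
    {"content": "Why did the math book look sad? "
                "Because it had too many problems.", "role": "assistant"},
    {"content": "Repeat the joke.", "role": "user"},
]
# With the earlier turns visible, the model can now repeat the joke:
# reply = agent.generate_reply(messages=history)
```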

1.2 A Multi-Agent Conversation Example: Stand-up Comedy

  Next, we build a multi-agent conversation example: two ConversableAgents that simulate a conversation between two stand-up comedians.

1.2.1 Creating the Agents

Create two agents named Cathy and Joe, each with a system message telling the agent its name and that it is a stand-up comedian. Joe's message adds one extra instruction: "Start the next joke from the punchline of the previous joke."

cathy = ConversableAgent(
    name="cathy",
    system_message=
    "Your name is Cathy and you are a stand-up comedian.",
    llm_config=llm_config,
    human_input_mode="NEVER",
)

joe = ConversableAgent(
    name="joe",
    system_message=
    "Your name is Joe and you are a stand-up comedian. "
    "Start the next joke from the punchline of the previous joke.",
    llm_config=llm_config,
    human_input_mode="NEVER",
)
1.2.2 Starting the Conversation

  Call an agent's initiate_chat method to start the conversation. For example, have Joe start the chat by setting the recipient to Cathy, along with an initial message and the maximum number of turns.

chat_result = joe.initiate_chat(
    recipient=cathy, 
    message="I'm Joe. Cathy, let's keep the jokes rolling.",
    max_turns=2,
)
joe (to cathy):

I'm Joe. Cathy, let's keep the jokes rolling.

--------------------------------------------------------------------------------
cathy (to joe):

Sure thing, Joe! So, did you know that I tried to write a joke about the wind, but it just ended up being too drafty? It was full of holes!

--------------------------------------------------------------------------------
joe (to cathy):

Well, Cathy, that joke might have been breezy, but I like to think of it as a breath of fresh air!

--------------------------------------------------------------------------------
cathy (to joe):

I'm glad you found it refreshing, Joe! But let me ask you this - why did the math book look sad? Because it had too many problems!

--------------------------------------------------------------------------------
1.2.3 Inspecting the Conversation and Customizing the Summary

  We can also inspect the chat history and the tokens consumed, for example by printing the chat history with the pprint library and checking token usage and total cost.

import pprint

pprint.pprint(chat_result.chat_history)
[{'content': "I'm Joe. Cathy, let's keep the jokes rolling.",
  'role': 'assistant'},
 {'content': 'Sure thing, Joe! So, did you know that I tried to write a joke '
             'about the wind, but it just ended up being too drafty? It was '
             'full of holes!',
  'role': 'user'},
 {'content': 'Well, Cathy, that joke might have been breezy, but I like to '
             'think of it as a breath of fresh air!',
  'role': 'assistant'},
 {'content': "I'm glad you found it refreshing, Joe! But let me ask you this - "
             'why did the math book look sad? Because it had too many '
             'problems!',
  'role': 'user'}]
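Besides chat_history, the ChatResult object returned by initiate_chat also exposes cost and summary attributes, and initiate_chat accepts a summary_method argument for customizing the summary. A sketch reusing joe, cathy, and chat_result from above:

```python
# Token usage and dollar cost of the conversation:
pprint.pprint(chat_result.cost)

# By default the summary is just the last message; to have the LLM write a
# short recap instead, request it when starting the chat:
chat_result = joe.initiate_chat(
    recipient=cathy,
    message="I'm Joe. Cathy, let's keep the jokes rolling.",
    max_turns=2,
    summary_method="reflection_with_llm",  # LLM-generated summary
)
pprint.pprint(chat_result.summary)
```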
