About AgentOps
AgentOps is a Python SDK for AI agent monitoring, LLM cost tracking, benchmarking, and more. It integrates with most LLM and agent frameworks, including CrewAI, LangChain, and AutoGen.
- GitHub: https://github.com/AgentOps-AI/agentops (2.2k stars as of 2024-11)
- Website: https://agentops.ai/
AgentOps helps developers build, evaluate, and monitor AI agents, from prototype to production.
📊 Replay Analytics and Debugging | Step-by-step agent execution graphs |
💸 LLM Cost Management | Track spend with LLM foundation model providers |
🧪 Agent Benchmarking | Test your agents against 1,000+ evals |
🔐 Compliance and Security | Detect common prompt injection and data exfiltration exploits |
🤝 Framework Integrations | Native integrations with CrewAI, AutoGen, and LangChain |
Quick Start ⌨️
pip install agentops
Session replays in 2 lines of code
Initialize the AgentOps client and automatically get analytics on every LLM call.
Get an API key
import agentops
# Beginning of your program (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)
...
# End of program
agentops.end_session('Success')
All of your sessions can be viewed on Your Dashboard.
Agent Debugging
Session Replays
Summary Analytics
First-Class Developer Experience
Add powerful observability to your agents, tools, and functions with as little code as possible: one line at a time.
Refer to our documentation
# Automatically associate all Events with the agent that originated them
from agentops import track_agent

@track_agent(name='SomeCustomName')
class MyAgent:
    ...

# Automatically create ToolEvents for tools that agents will use
from agentops import record_tool

@record_tool('SampleToolName')
def sample_tool(...):
    ...

# Automatically create ActionEvents for other functions.
from agentops import record_action

@record_action('sample function being recorded')
def sample_function(...):
    ...

# Manually record any other Events
from agentops import record, ActionEvent

record(ActionEvent("received_user_input"))
Integrations 🦾
CrewAI 🛶
Build Crew agents with observability in just 2 lines of code. Simply set an AGENTOPS_API_KEY in your environment, and your crews will get automatic monitoring on the AgentOps dashboard.
pip install 'crewai[agentops]'
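As a hedged sketch of what activation amounts to: the extra auto-initializes AgentOps when AGENTOPS_API_KEY is present in the environment, so a pre-flight check can catch a missing key before a long crew run starts without monitoring. The helper below is illustrative only, not part of the CrewAI or AgentOps API.

```python
import os

# Illustrative helper (not CrewAI/AgentOps API): the integration activates
# when AGENTOPS_API_KEY is set, so check for it before kicking off a crew.
def agentops_monitoring_active(env=None) -> bool:
    env = os.environ if env is None else env
    return bool(env.get("AGENTOPS_API_KEY"))

print(agentops_monitoring_active({"AGENTOPS_API_KEY": "sk-demo"}))  # True
print(agentops_monitoring_active({}))                               # False
```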
AutoGen 🤖
Add full observability and monitoring to AutoGen agents in only two lines of code: set an AGENTOPS_API_KEY in your environment and call agentops.init().
Langchain 🦜🔗
AgentOps works seamlessly with applications built using Langchain. To use the handler, install Langchain as an optional dependency:
Installation
pip install agentops[langchain]
To use the handler, import and set it up:
import os
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from agentops.partners.langchain_callback_handler import LangchainCallbackHandler

AGENTOPS_API_KEY = os.environ['AGENTOPS_API_KEY']
OPENAI_API_KEY = os.environ['OPENAI_API_KEY']  # needed by ChatOpenAI below

handler = LangchainCallbackHandler(api_key=AGENTOPS_API_KEY, tags=['Langchain Example'])

llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY,
                 callbacks=[handler],
                 model='gpt-3.5-turbo')

agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True,
                         callbacks=[handler],  # You must pass in a callback handler to record your agent
                         handle_parsing_errors=True)
Check out the Langchain Examples Notebook
Cohere ⌨️
First-class support for Cohere (>=5.4.0). This is a living integration; should you need any added functionality, message us on Discord!
Installation
pip install cohere
import cohere
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

co = cohere.Client()

chat = co.chat(
    message="Is it pronounced ceaux-hear or co-hehray?"
)

print(chat)

agentops.end_session('Success')
import cohere
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

co = cohere.Client()

stream = co.chat_stream(
    message="Write me a haiku about the synergies between Cohere and AgentOps"
)

for event in stream:
    if event.event_type == "text-generation":
        print(event.text, end='')

agentops.end_session('Success')
Anthropic
Track agents built with the Anthropic Python SDK (>=0.32.0).
Installation
pip install anthropic
import os
import anthropic
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = anthropic.Anthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

message = client.messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Tell me a cool fact about AgentOps",
        }
    ],
    model="claude-3-opus-20240229",
)

print(message.content)

agentops.end_session('Success')
Streaming
import os
import anthropic
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = anthropic.Anthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

stream = client.messages.create(
    max_tokens=1024,
    model="claude-3-opus-20240229",
    messages=[
        {
            "role": "user",
            "content": "Tell me something cool about streaming agents",
        }
    ],
    stream=True,
)

response = ""
for event in stream:
    if event.type == "content_block_delta":
        response += event.delta.text
    elif event.type == "message_stop":
        print("\n")
        print(response)
        print("\n")
Async
import asyncio
import os
from anthropic import AsyncAnthropic

client = AsyncAnthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)


async def main() -> None:
    message = await client.messages.create(
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async agents",
            }
        ],
        model="claude-3-opus-20240229",
    )
    print(message.content)


asyncio.run(main())
Mistral 〽️
Track agents built with the Mistral Python SDK.
AgentOps integration example
Installation
pip install mistralai
Sync
import os
from mistralai import Mistral
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

message = client.chat.complete(
    messages=[
        {
            "role": "user",
            "content": "Tell me a cool fact about AgentOps",
        }
    ],
    model="open-mistral-nemo",
)

print(message.choices[0].message.content)

agentops.end_session('Success')
Streaming
import os
from mistralai import Mistral
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

message = client.chat.stream(
    messages=[
        {
            "role": "user",
            "content": "Tell me something cool about streaming agents",
        }
    ],
    model="open-mistral-nemo",
)

response = ""
for event in message:
    if event.data.choices[0].finish_reason == "stop":
        print("\n")
        print(response)
        print("\n")
    else:
        response += event.data.choices[0].delta.content

agentops.end_session('Success')
Async
import asyncio
import os
from mistralai import Mistral

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)


async def main() -> None:
    message = await client.chat.complete_async(
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async agents",
            }
        ],
        model="open-mistral-nemo",
    )
    print(message.choices[0].message.content)


asyncio.run(main())
Async Streaming
import asyncio
import os
from mistralai import Mistral

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)


async def main() -> None:
    message = await client.chat.stream_async(
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async streaming agents",
            }
        ],
        model="open-mistral-nemo",
    )
    response = ""
    async for event in message:
        if event.data.choices[0].finish_reason == "stop":
            print("\n")
            print(response)
            print("\n")
        else:
            response += event.data.choices[0].delta.content


asyncio.run(main())
LiteLLM 🚅
AgentOps provides support for LiteLLM (>=1.3.1), allowing you to call 100+ LLMs using the same input/output format.
Installation
pip install litellm
# Do not use LiteLLM like this
# from litellm import completion
# ...
# response = completion(model="claude-3", messages=messages)
# Use LiteLLM like this
import litellm
...
response = litellm.completion(model="claude-3", messages=messages)
# or
response = await litellm.acompletion(model="claude-3", messages=messages)
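The module-level call style matters because of Python name binding: instrumentation patches the attribute on the module, while a name bound via `from litellm import completion` was resolved before the patch and bypasses it. A small stdlib-only demonstration of that binding behavior (using a stand-in module, not LiteLLM itself):

```python
import types

# Stand-in for a third-party module whose function gets instrumented.
fake_llm = types.ModuleType("fake_llm")
fake_llm.completion = lambda: "raw"

# `from fake_llm import completion` style: binds the function object NOW.
completion = fake_llm.completion

# Instrumentation (what an observability SDK does at init time) patches
# the module attribute afterwards.
fake_llm.completion = lambda: "instrumented"

print(completion())           # "raw" -- the early-bound name bypasses the patch
print(fake_llm.completion())  # "instrumented" -- attribute access sees the patch
```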
LlamaIndex 🦙
AgentOps works seamlessly with applications built using LlamaIndex, a framework for building context-augmented generative AI applications with LLMs.
Installation
pip install llama-index-instrumentation-agentops
To use the handler, import and set it up:
from llama_index.core import set_global_handler
# NOTE: Feel free to set your AgentOps environment variables (e.g., 'AGENTOPS_API_KEY')
# as outlined in the AgentOps documentation, or pass the equivalent keyword arguments
# anticipated by AgentOps' AOClient as **eval_params in set_global_handler.
set_global_handler("agentops")
For more details, see the LlamaIndex docs
Time Travel Debugging 🔮
Agent Arena 🥊
(coming soon!)
Evaluations Roadmap 🧭
Platform | Dashboard | Evals |
---|---|---|
✅ Python SDK | ✅ Multi-session and cross-session metrics | ✅ Custom eval metrics |
🚧 Evaluation builder API | ✅ Custom event tag tracking | 🔜 Agent scorecards |
✅ Javascript/Typescript SDK | ✅ Session replays | 🔜 Evaluation playground + leaderboard |
Debugging Roadmap 🧭
Performance testing | Environments | LLM testing | Reasoning and execution testing |
---|---|---|---|
✅ Event latency analysis | 🔜 Non-stationary environment testing | 🔜 LLM non-deterministic function detection | 🚧 Infinite loop and recursive thought detection |
✅ Agent workflow execution pricing | 🔜 Multi-modal environments | 🚧 Token limit overflow flags | 🔜 Faulty reasoning detection |
🚧 Success validators (external) | 🔜 Execution containers | 🔜 Context limit overflow flags | 🔜 Generative code validators |
🔜 Agent controllers/skill tests | ✅ Honeypot and prompt injection detection (PromptArmor) | 🔜 API bill tracking | 🔜 Error breakpoint analysis |
🔜 Information context constraint testing | 🔜 Anti-agent roadblocks (e.g., Captchas) | 🔜 CI/CD integration checks | |
🔜 Regression testing | 🔜 Multi-agent framework visualization | | |
Why AgentOps? 🤔
Without the right tools, AI agents are slow, expensive, and unreliable. Our mission is to bring your agent from prototype to production. Here's why AgentOps stands out:
- Comprehensive Observability: Track your AI agents' performance, user interactions, and API usage.
- Real-Time Monitoring: Get instant insights with session replays, metrics, and live monitoring tools.
- Cost Control: Monitor and manage your spend on LLM and API calls.
- Failure Detection: Quickly identify and respond to agent failures and multi-agent interaction issues.
- Tool Usage Statistics: Understand how your agents use external tools with detailed analytics.
- Session-Wide Metrics: Gain a holistic view of your agents' sessions with comprehensive statistics.
AgentOps is designed to make agent observability, testing, and monitoring easy.
Popular projects using AgentOps
Repository | Stars |
---|---|
geekan / MetaGPT | 42787 |
run-llama / llama_index | 34446 |
crewAIInc / crewAI | 18287 |
camel-ai / camel | 5166 |
superagent-ai / superagent | 5050 |
iyaja / llama-fs | 4713 |
BasedHardware / Omi | 2723 |
MervinPraison / PraisonAI | 2007 |
AgentOps-AI / Jaiqu | 272 |
strnad / CrewAI-Studio | 134 |
alejandro-ao / exa-crewai | 55 |
tonykipkemboi / youtube_yapper_trapper | 47 |
sethcoast / cover-letter-builder | 27 |
bhancockio / chatgpt4o-analysis | 19 |
breakstring / Agentic_Story_Book_Workflow | 14 |
MULTI-ON / multion-python | 13 |
Generated using github-dependents-info, by Nicolas Vuillamy