[AI Application Development Framework] Use phidata to quickly build your own agents (e.g. a personal knowledge base, automated stock picking)

1. What is phidata?

phidata is an AI app development framework for quickly building agents or assistants with memory, knowledge bases, and tool use.

2. What does the framework look like?

In short: an Assistant wraps an LLM and wires it up to memory, knowledge, and tools, and you then serve it behind a UI or API framework of your choice.

3. Why phidata?

Problem: LLMs have limited context and cannot take concrete actions on their own.

Solution: add memory, knowledge, and tools.

  • Memory: store chat history in a database so that LLMs can carry on long-term conversations.

  • Knowledge: store information in a vector database to give LLMs business context.

  • Tools: let LLMs take actions such as pulling data from an API, sending an email, or querying a database.

Memory and knowledge make LLMs smarter, while tools make them autonomous. All three map directly onto Assistant parameters, as sketched below.
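The following is a minimal sketch of an Assistant that combines all three, reusing the local Postgres/pgvector setup from the examples further down. The PgAssistantStorage import path, the storage parameter, and the add_chat_history_to_messages flag are assumptions about this phidata version; check the official docs if they differ.

from phi.assistant import Assistant
from phi.llm.ollama import Ollama
from phi.tools.duckduckgo import DuckDuckGo
from phi.knowledge.website import WebsiteKnowledgeBase
from phi.vectordb.pgvector import PgVector2
# Assumed import path for the Postgres-backed chat-history store; see the official docs
from phi.storage.assistant.postgres import PgAssistantStorage

db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"

# Knowledge: business context stored in a vector database (pgvector)
knowledge_base = WebsiteKnowledgeBase(
    urls=["https://docs.phidata.com/introduction"],
    vector_db=PgVector2(collection="website_documents", db_url=db_url),
)
knowledge_base.load(recreate=False)

assistant = Assistant(
    llm=Ollama(model="llama3"),
    # Knowledge: retrieved chunks are injected into the prompt
    knowledge_base=knowledge_base,
    add_references_to_prompt=True,
    # Memory: chat history persisted to Postgres across runs (assumed parameter names)
    storage=PgAssistantStorage(table_name="assistant_runs", db_url=db_url),
    add_chat_history_to_messages=True,
    # Tools: concrete actions the LLM can take
    tools=[DuckDuckGo()],
    show_tool_calls=True,
)

assistant.print_response("What is phidata and what is new about it this week?")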

4. How do you use it?

  • Step 1: install it with pip install -U phidata

  • Step 2: create an Assistant

  • Step 3: add tools (functions), knowledge (a vector DB), and memory (a database)

  • Step 4: build your AI application with Streamlit, FastAPI, or Django (see the sketch below)
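Step 4 just means wrapping the Assistant behind whatever web framework you prefer. Here is a minimal FastAPI sketch; it assumes Assistant.run(message, stream=False) returns the complete reply as a string, and the /ask endpoint name is made up for illustration.

from fastapi import FastAPI
from phi.assistant import Assistant
from phi.llm.ollama import Ollama

app = FastAPI()
assistant = Assistant(llm=Ollama(model="llama3"), markdown=True)

@app.get("/ask")
def ask(question: str) -> dict:
    # stream=False is assumed to return the full answer as a string
    answer = assistant.run(question, stream=False)
    return {"answer": answer}

Run it with, for example, uvicorn app:app --reload (where "app" is your module name) and call /ask?question=...; the same Assistant object could equally be embedded in a Streamlit or Django view.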

5. Examples

(1) Web search

from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo
from phi.llm.ollama import Ollama

assistant = Assistant(
    llm=Ollama(model="llama3"),
    # Show the tool calls the Assistant makes
    show_tool_calls=True,
    # Let the Assistant search the web using DuckDuckGo
    tools=[DuckDuckGo()],
    # Log prompts and intermediate steps
    debug_mode=True,
    # Return answers in Markdown format
    markdown=True,
)

assistant.print_response("What's happening in France?")

(2) API calls (a custom function as a tool)

import json
import httpx

from phi.assistant import Assistant
from phi.llm.ollama import Ollama

def get_top_hackernews_stories(num_stories: int = 10) -> str:
    """Use this function to get top stories from Hacker News.

    Args:
        num_stories (int): Number of stories to return. Defaults to 10.

    Returns:
        str: JSON string of top stories.
    """

    # Fetch top story IDs
    response = httpx.get("https://hacker-news.firebaseio.com/v0/topstories.json")
    story_ids = response.json()

    # Fetch story details, dropping the (potentially long) "text" field
    stories = []
    for story_id in story_ids[:num_stories]:
        story_response = httpx.get(f"https://hacker-news.firebaseio.com/v0/item/{story_id}.json")
        story = story_response.json()
        story.pop("text", None)
        stories.append(story)
    return json.dumps(stories)


assistant = Assistant(
    llm=Ollama(model="llama3"),
    tools=[get_top_hackernews_stories],
    show_tool_calls=False,
    markdown=True,
    debug_mode=False,
)
assistant.print_response("Summarize the top stories on hackernews?")

(3) Tool use (built-in toolkits)

from phi.assistant import Assistant
from phi.llm.ollama import Ollama
from phi.tools.file import FileTools
from phi.tools.sql import SQLTools
from phi.tools.shell import ShellTools

db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"

assistant = Assistant(
    llm=Ollama(model="llama3"),
    tools=[
        SQLTools(
            db_url=db_url,
        ),
        ShellTools(),
        FileTools()
    ],
    show_tool_calls=True,
)

#assistant.print_response("List the tables in the database. Tell me about contents of one of the tables", markdown=True)
assistant.print_response("What is the most advanced LLM currently? Save the answer to a file.", markdown=True)
assistant.print_response("Show me the contents of the current directory", markdown=True)

(4) Using a knowledge base

from phi.knowledge.website import WebsiteKnowledgeBase
from phi.vectordb.pgvector import PgVector2
from phi.assistant import Assistant
from phi.llm.openai import OpenAIChat

db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"

# Create a knowledge base with the seed URLs
knowledge_base = WebsiteKnowledgeBase(
    urls=["https://docs.phidata.com/introduction"],
    # Number of links to follow from the seed URLs
    max_links=10,
    # Table name: ai.website_documents
    vector_db=PgVector2(
        collection="website_documents",
        db_url=db_url
    ),
)
# Load the knowledge base
knowledge_base.load(recreate=False)

# Create an assistant with the knowledge base
assistant = Assistant(
    llm=OpenAIChat(model="gpt-3.5-turbo"),
    knowledge_base=knowledge_base,
    add_references_to_prompt=True,
)

# Ask the assistant about the knowledge base
assistant.print_response("How does phidata work?")

(5) Multi-assistant collaboration

"""
Please install dependencies using:
pip install openai newspaper4k lxml_html_clean phidata
"""

from pathlib import Path
from shutil import rmtree
from phi.llm.ollama import Ollama

from phi.assistant import Assistant
from phi.tools.yfinance import YFinanceTools
from phi.tools.newspaper4k import Newspaper4k
from phi.tools.file import FileTools


reports_dir = Path(__file__).parent.joinpath("junk", "reports")
if reports_dir.exists():
    rmtree(path=reports_dir, ignore_errors=True)
reports_dir.mkdir(parents=True, exist_ok=True)

stock_analyst = Assistant(
    llm=Ollama(model="llama3"),
    name="Stock Analyst",
    role="Get current stock price, analyst recommendations and news for a company.",
    tools=[
        YFinanceTools(stock_price=True, analyst_recommendations=True, company_news=True),
        Newspaper4k(),
        FileTools(base_dir=reports_dir),
    ],
    description="You are an stock analyst tasked with producing factual reports on companies.",
    instructions=[
        "The investment lead will provide you with a list of companies to write reports on.",
        "Get the current stock price, analyst recommendations and news for the company",
        "If you find any news urls, read the article and include it in the report.",
        "Save your report to a file in markdown format with the name `company_name.md` in lower case.",
        "Let the investment lead know the file name of the report.",
    ],
    # debug_mode=True,
)
research_analyst = Assistant(
    llm=Ollama(model="llama3"),
    name="Research Analyst",
    role="Writes research reports on stocks.",
    tools=[FileTools(base_dir=reports_dir)],
    description="You are an investment researcher analyst tasked with producing a ranked list of companies based on their investment potential.",
    instructions=[
        "You will write your research report based on the information available in files produced by the stock analyst.",
        "The investment lead will provide you with the files saved by the stock analyst."
        "If no files are provided, list all files in the entire folder and read the files with names matching company names.",
        "Read each file 1 by 1.",
        "Then think deeply about whether a stock is valuable or not. Be discerning, you are a skeptical investor focused on maximising growth.",
        "Finally, save your research report to a file called `research_report.md`.",
    ],
    # debug_mode=True,
)

investment_lead = Assistant(
    llm=Ollama(model="llama3"),
    name="Investment Lead",
    team=[stock_analyst, research_analyst],
    tools=[FileTools(base_dir=reports_dir)],
    description="You are an investment lead tasked with producing a research report on companies for investment purposes.",
    instructions=[
        "Given a list of companies, first ask the stock analyst to get the current stock price, analyst recommendations and news for these companies.",
        "Ask the stock analyst to write its results to files in markdown format with the name `company_name.md`.",
        "If the stock analyst has not saved the file or saved it with an incorrect name, ask them to save the file again before proceeding."
        "Then ask the research_analyst to write a report on these companies based on the information provided by the stock analyst.",
        "Make sure to provide the research analyst with the files saved by the stock analyst and ask it to read the files directly."
        "The research analyst should save its report to a file called `research_report.md`.",
        "Finally, review the research report and answer the users question. Make sure to answer their question correctly, in a clear and concise manner.",
        "If the research analyst has not completed the report, ask them to complete it before you can answer the users question.",
        "Produce a nicely formatted response to the user, use markdown to format the response.",
    ],
    # debug_mode=True,
)
investment_lead.print_response(
    "How would you invest $10000 in META, GOOG, NVDA and TSLA? Tell me the exact amount you'd invest in each.",
    markdown=True,
)

(6) More

For workflows, template-based project creation, and other features, see the official documentation.

6. Miscellaneous

(1) Supported LLMs
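The examples above already use two backends (a local model via Ollama and a hosted one via OpenAIChat); switching between supported LLMs is just a matter of swapping the llm argument. A minimal sketch, with the same placeholder model names used elsewhere in this post:

from phi.assistant import Assistant
from phi.llm.ollama import Ollama
from phi.llm.openai import OpenAIChat

# Local model served by a running Ollama instance
local_assistant = Assistant(llm=Ollama(model="llama3"))

# Hosted model via the OpenAI API (requires the OPENAI_API_KEY environment variable)
hosted_assistant = Assistant(llm=OpenAIChat(model="gpt-3.5-turbo"))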

(2) Built-in toolkits

(3) Built-in knowledge base loaders
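Example (4) uses the WebsiteKnowledgeBase loader; other loaders follow the same pattern of source plus vector_db. Below is a sketch with the PDF-from-URL loader, assuming the PDFUrlKnowledgeBase class and phi.knowledge.pdf module path from the official examples; the URL is a placeholder.

from phi.knowledge.pdf import PDFUrlKnowledgeBase  # assumed module path
from phi.vectordb.pgvector import PgVector2

db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"

knowledge_base = PDFUrlKnowledgeBase(
    urls=["https://example.com/some-document.pdf"],  # placeholder PDF URL
    vector_db=PgVector2(collection="pdf_documents", db_url=db_url),
)
knowledge_base.load(recreate=False)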

(4) Supported vector databases

(5) Supported storage backends
