Using LangChain asynchronously

1. First, ask the AI assistant in the LangChain official docs


2. The LangChain async API

Honestly, a direct Google search works better 😂. It turned this up immediately; the docs' AI Q&A above never surfaced the link.


  • Official example

    import asyncio
    import time
    
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain
    
    
    def generate_serially():
        llm = OpenAI(temperature=0.9)
        prompt = PromptTemplate(
            input_variables=["product"],
            template="What is a good name for a company that makes {product}?",
        )
        chain = LLMChain(llm=llm, prompt=prompt)
        for _ in range(5):
            resp = chain.run(product="toothpaste")
            print(resp)
    
    
    async def async_generate(chain):
        resp = await chain.arun(product="toothpaste")
        print(resp)
    
    
    async def generate_concurrently():
        llm = OpenAI(temperature=0.9)
        prompt = PromptTemplate(
            input_variables=["product"],
            template="What is a good name for a company that makes {product}?",
        )
        chain = LLMChain(llm=llm, prompt=prompt)
        tasks = [async_generate(chain) for _ in range(5)]
        await asyncio.gather(*tasks)
    
    
    s = time.perf_counter()
    # If running this outside of Jupyter, use asyncio.run(generate_concurrently())
    await generate_concurrently()
    elapsed = time.perf_counter() - s
    print("\033[1m" + f"Concurrent executed in {elapsed:0.2f} seconds." + "\033[0m")
    
    s = time.perf_counter()
    generate_serially()
    elapsed = time.perf_counter() - s
    print("\033[1m" + f"Serial executed in {elapsed:0.2f} seconds." + "\033[0m")
    
  • However, running the official code as a plain script raised an error
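For reference, that failure is expected: top-level `await` (line 53 of the official snippet) is only valid inside a coroutine, or in Jupyter/IPython, which compile each cell specially. A minimal, LangChain-free sketch of the same error:

```python
# Top-level `await` is a SyntaxError in a plain Python script; Jupyter/IPython
# allow it only because they compile cells with special flags.
src = "result = await some_coroutine()"
try:
    compile(src, "<demo>", "exec")
    is_syntax_error = False
except SyntaxError:
    is_syntax_error = True
print(is_syntax_error)  # True
```

This is exactly what the comment in the official snippet hints at: outside Jupyter, wrap the call in `asyncio.run(...)`.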

  • I asked Copilot to fix it, and now it runs

    import time
    import asyncio
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain
    
    
    def generate_serially():
        llm = OpenAI(temperature=0.9)
        prompt = PromptTemplate(
            input_variables=["product"],
            template="What is a good name for a company that makes {product}?",
        )
        chain = LLMChain(llm=llm, prompt=prompt)
        for _ in range(5):
            resp = chain.run(product="toothpaste")
            print(resp)
    
    
    async def async_generate(chain):
        resp = await chain.arun(product="toothpaste")
        print(resp)
    
    
    async def generate_concurrently():
        llm = OpenAI(temperature=0.9)
        prompt = PromptTemplate(
            input_variables=["product"],
            template="What is a good name for a company that makes {product}?",
        )
        chain = LLMChain(llm=llm, prompt=prompt)
        tasks = [async_generate(chain) for _ in range(5)]
        await asyncio.gather(*tasks)
    
    
    async def main():
        s = time.perf_counter()
        await generate_concurrently()
        elapsed = time.perf_counter() - s
        print("\033[1m" + f"Concurrent executed in {elapsed:0.2f} seconds." + "\033[0m")
    
        s = time.perf_counter()
        generate_serially()
        elapsed = time.perf_counter() - s
        print("\033[1m" + f"Serial executed in {elapsed:0.2f} seconds." + "\033[0m")
    
    
    asyncio.run(main())
    

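The speedup comes from `asyncio.gather` overlapping the waits on network I/O. To see the effect without an API key, here is a self-contained sketch where the LLM call is replaced by `asyncio.sleep` (a stand-in I made up; real latencies vary):

```python
import asyncio
import time


async def fake_llm_call() -> str:
    # Stand-in for chain.arun(): waits on the event loop instead of calling an API.
    await asyncio.sleep(0.1)
    return "ok"


async def run_concurrently(n: int) -> float:
    # All n waits overlap, so total time is roughly one call's latency.
    s = time.perf_counter()
    await asyncio.gather(*(fake_llm_call() for _ in range(n)))
    return time.perf_counter() - s


async def run_serially(n: int) -> float:
    # Awaiting one call at a time adds the latencies up.
    s = time.perf_counter()
    for _ in range(n):
        await fake_llm_call()
    return time.perf_counter() - s


async def main() -> tuple:
    conc = await run_concurrently(5)
    ser = await run_serially(5)
    print(f"Concurrent: {conc:0.2f}s, serial: {ser:0.2f}s")
    return conc, ser


conc, ser = asyncio.run(main())
```

With 5 simulated calls of 0.1 s each, the concurrent run finishes in about 0.1 s versus roughly 0.5 s serially, mirroring the shape of the real results below.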

  • There is also an official blog post on this

3. Comparing serial vs. async speed

# Import the time and asyncio modules
import time
import asyncio
# Import the OpenAI class
from langchain.llms import OpenAI


# Define the async function async_generate, which takes an llm and a name
async def async_generate(llm, name):
    # Call OpenAI's agenerate method with the list ["Hello, how are you?"] and await the response
    resp = await llm.agenerate(["Hello, how are you?"])
    # Print the generated text along with this task's name
    print(f"{name}: {resp.generations[0][0].text}")


# Define the async function generate_concurrently
async def generate_concurrently():
    # Create an OpenAI instance with temperature set to 0.9
    llm = OpenAI(temperature=0.9)
    # Build a list of 10 async_generate tasks
    tasks = [async_generate(llm, f"Function {i}") for i in range(10)]
    # Run the tasks concurrently
    await asyncio.gather(*tasks)


# Entry point
# In a Jupyter notebook, you can simply `await generate_concurrently()` in a cell
# In a script or terminal, call asyncio.run(generate_concurrently()) instead
asyncio.run(generate_concurrently())

Free accounts are limited to 3 requests per minute, which is honestly pretty painful.

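One common workaround for rate limits (my own sketch, not from the LangChain docs) is to cap the number of in-flight requests with an `asyncio.Semaphore`; a true per-minute limit would additionally need sleeps or a token bucket. Here the LLM call is again replaced with `asyncio.sleep` so the sketch runs without an API key:

```python
import asyncio


async def fake_request(i: int) -> str:
    # Stand-in for an LLM call; sleeps instead of hitting the API.
    await asyncio.sleep(0.05)
    return f"resp {i}"


async def throttled(sem: asyncio.Semaphore, i: int) -> str:
    # The semaphore caps how many requests are in flight at once.
    async with sem:
        return await fake_request(i)


async def run_all(limit: int = 3, total: int = 9) -> list:
    sem = asyncio.Semaphore(limit)
    # gather preserves submission order in its result list.
    return await asyncio.gather(*(throttled(sem, i) for i in range(total)))


results = asyncio.run(run_all())
print(results[:3])  # ['resp 0', 'resp 1', 'resp 2']
```

With `limit=3`, at most three requests run concurrently, so bursts stay under the cap while still overlapping what the provider allows.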

  • Combining the blogger's code to compare the two speeds, though this call limit is really maddening

    import time
    import asyncio
    from langchain.llms import OpenAI
    
    
    async def async_generate(llm, name):
        resp = await llm.agenerate(["Hello, how are you?"])
        # print(f"{name}: {resp.generations[0][0].text}")
    
    
    async def generate_concurrently():
        llm = OpenAI(temperature=0.9)
        tasks = [async_generate(llm, f"Function {i}") for i in range(3)]
        await asyncio.gather(*tasks)
    
    
    def generate_serially():
        llm = OpenAI(temperature=0.9)
        for _ in range(3):
            resp = llm.generate(["Hello, how are you?"])
            # print(resp.generations[0][0].text)
    
    
    async def main():
        s = time.perf_counter()
        await generate_concurrently()
        elapsed = time.perf_counter() - s
        print("\033[1m" + f"Concurrent executed in {elapsed:0.2f} seconds." + "\033[0m")
    
        s = time.perf_counter()
        generate_serially()
        elapsed = time.perf_counter() - s
        print("\033[1m" + f"Serial executed in {elapsed:0.2f} seconds." + "\033[0m")
    
    
    asyncio.run(main())
    



  • One more blog post to look at
  • Reading the CSV kept throwing path errors; luckily I recently wrote up a blog post on how to get various directory paths in Python
    • So I just resolve the current script's directory

      import os
      import pandas as pd
      
      # Get the directory where the script is located
      script_directory = os.path.dirname(os.path.abspath(__file__))
      
      # Construct the path to the CSV file
      csv_path = os.path.join(script_directory, 'wine_subset.csv')
      
      # Read the CSV file
      df = pd.read_csv(csv_path)
      
      • I'll skip running sequential_run.py… my 200-calls-per-day quota is nearly gone
  • The main point is just to see the difference between the two approaches
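As an aside, the `os.path` snippet above for locating the CSV next to the script can also be written with `pathlib` (the filename `wine_subset.csv` is from the post; adjust to your own file):

```python
from pathlib import Path

# Resolve the directory containing this script, then join the CSV filename.
csv_path = Path(__file__).resolve().parent / "wine_subset.csv"
print(csv_path.name)
```

The resulting `Path` can be passed straight to `pd.read_csv(csv_path)`, since pandas accepts path-like objects.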