LangChain 39: Understanding LangChain Expression Language (LCEL) in Depth, Part 4 - Why Use LCEL

This article covers the core themes of the LangChain series: building composable model chains with LCEL, running them asynchronously and in batches, the expression language itself, integrating services such as OpenAI and Wikipedia, and debugging and managing intelligent agents. LCEL emphasizes a unified interface and composition primitives, which simplify complex tasks.

LangChain series articles

  1. LangChain: giving an animal a name
  2. LangChain 2: modular prompt templates and a Streamlit site for naming animals
  3. LangChain 3: using an Agent with Wikipedia and llm-math to compute a dog's average age
  4. LangChain 4: storing YouTube video transcripts in the Faiss vector database and searching them (indexes for information retrieval)
  5. LangChain 5: an internal Q&A system for the "YiSu Fresh Flowers" example
  6. LangChain 6: generating marketing copy from images with a HuggingFace image-caption model
  7. LangChain 7: the text model TextLangChain and the chat model ChatLangChain
  8. LangChain 8: Model I/O - input prompts, calling models, parsing output
  9. LangChain 9: Model I/O - ChatPromptTemplate and few-shot prompts (FewShotPrompt)
  10. LangChain 10: Chain of Thought - think step by step
  11. LangChain 11: Implementing the Tree of Thoughts in LangChain's Chain
  12. LangChain 12: calling HuggingFace models Llama2 and Google Flan-T5
  13. LangChain 13: Output Parsers and the auto-fixing parser
  14. LangChain 14: SequentialChain - linking different components
  15. LangChain 15: Router Chain - automatically routing questions to determine user intent
  16. LangChain 16: remembering conversation history with Memory
  17. LangChain 17: LangSmith - debugging, testing, evaluating, and monitoring chains and agents built on any LLM framework
  18. LangChain 18: monitoring and evaluating an Agent with LangSmith and creating the corresponding dataset
  19. LangChain 19: Agents (Reason + Action) - a custom agent to work around OpenAI's arithmetic shortcomings
  20. LangChain 20: Agents calling the Google Search API to look up market prices - ReAct: synergizing reasoning and acting in language models
  21. LangChain 21: Agents - Self-ask with search
  22. LangChain 22: LangServe for one-click deployment of LangChain applications
  23. LangChain 23: Tools in Agents for enhancing and extending agent capabilities
  24. LangChain 24: searching local documents with RAG (Retrieval-Augmented Generation)
  25. LangChain 25: SQL Agent - querying a SQLite database in natural language
  26. LangChain 26: callbacks - printing the prompt with verbose calls
  27. LangChain 27: AI Agents solving problems through role-playing, multi-turn conversations (CAMEL)
  28. LangChain 28: BabyAGI writing a weather report for San Francisco
  29. LangChain 29: Debugging with verbose output
  30. LangChain 30: LLMs take a string as input and return a string; Chat Models take a list of messages as input and return a message
  31. LangChain 31: reusable Prompt templates
  32. LangChain 32: Output parsers
  33. LangChain 33: LangChain Expression Language (LCEL)
  34. LangChain 34: one-stop LLM deployment with LangServe
  35. LangChain 35: security best practices and defense in depth
  36. LangChain 36: Understanding LangChain Expression Language (LCEL) in depth, part 1 - its advantages
  37. LangChain 37: Understanding LangChain Expression Language (LCEL) in depth, part 2 - prompt + model + output parser
  38. LangChain 38: Understanding LangChain Expression Language (LCEL) in depth, part 3 - RAG (retrieval-augmented generation)


1. Why use LangChain Expression Language (LCEL)

LCEL makes it easy to build complex chains from basic components. It does this by providing:

  1. A unified interface: every LCEL object implements the Runnable interface, which defines a common set of invocation methods (invoke, batch, stream, ainvoke, and so on). This means a chain of LCEL objects automatically supports the same invocations; in other words, every chain of LCEL objects is itself an LCEL object.
  2. Composition primitives: LCEL provides a number of primitives that make it easy to compose chains, parallelize components, add fallbacks, dynamically configure chain internals, and more (see the short sketch after this list).
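
To make these two points concrete, here is a minimal sketch (not from the original post; it assumes only that langchain_core is installed). Composing two trivial Runnables with | yields a new Runnable that supports the same invoke/batch/stream/ainvoke methods, while RunnableParallel and with_fallbacks are examples of the composition primitives mentioned above:

from langchain_core.runnables import RunnableLambda, RunnableParallel

add_one = RunnableLambda(lambda x: x + 1)
square = RunnableLambda(lambda x: x * x)

# Sequencing with |: the composed chain is itself a Runnable,
# so invoke/batch/stream/ainvoke all work on it.
toy_chain = add_one | square
print(toy_chain.invoke(2))         # 9
print(toy_chain.batch([1, 2, 3]))  # [4, 9, 16]

# Parallelizing components: both branches receive the same input.
parallel = RunnableParallel(plus_one=add_one, squared=square)
print(parallel.invoke(3))          # {'plus_one': 4, 'squared': 9}

# Adding a fallback: if toy_chain raises, square is tried instead.
robust_chain = toy_chain.with_fallbacks([square])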

To better understand the value of LCEL, it helps to see it in action and to think about how we would recreate similar functionality without it. In this walkthrough we do exactly that, using the basic example from the getting-started section. We take our simple prompt + model chain, which already defines a lot of functionality under the hood, and see what it would take to recreate all of it.

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser


prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo")
output_parser = StrOutputParser()

chain = prompt | model | output_parser

2. Invoke

In the simplest case, we just want to pass in a topic string and get back a joke string:

2.1 LCEL implementation

from langchain_core.runnables import RunnablePassthrough
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from dotenv import load_dotenv

load_dotenv()

prompt = ChatPromptTemplate.from_template(
    "Tell me a short joke about {topic}"
)
output_parser = StrOutputParser()
model = ChatOpenAI(model="gpt-3.5-turbo")
chain = (
    # The dict literal is coerced into a RunnableParallel, so the raw input
    # string is passed through and mapped to the "topic" key the prompt expects.
    {"topic": RunnablePassthrough()}
    | prompt
    | model
    | output_parser
)

response = chain.invoke("ice cream")
print('response >> ', response)

Output:

(develop)[1] % python LCEL/invoke_lcel.py                                                              ~/Workspace/LLM/langchain-llm-app
response >>  Why did the ice cream go to therapy?

Because it had too many toppings and couldn't hold it together!

2.2 Implementation without LCEL

from typing import List

import openai


prompt_template = "Tell me a short joke about {topic}"
client = openai.OpenAI()

def call_chat_model(messages: List[dict]) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo", 
        messages=messages,
    )
    return response.choices[0].message.content

def invoke_chain(topic: str) -> str:
    prompt_value = prompt_template.format(topic=topic)
    messages = [{"role": "user", "content": prompt_value}]
    return call_chat_model(messages)

response = invoke_chain("ice cream")
print('response >> ', response)

Output:

(develop)⚡ % python LCEL/invoke.py                                                                       ~/Workspace/LLM/langchain-llm-app
response >>  Why did the ice cream go to therapy? 
Because it had too many toppings!

3. Stream: streaming output, token by token

If we want to stream results, we need to change our function:

3.1 LCEL implementation

for chunk in chain.stream("ice cream"):
    print(chunk, end="", flush=True)

The output is printed token by token, just like ChatGPT:

Why did the ice cream go to therapy?

Because it had too many sprinkles of anxiety!%      

3.2 Implementation without LCEL

Essentially every function has to change:

from typing import List
from typing import Iterator
import openai
from dotenv import load_dotenv

load_dotenv()

prompt_template = "Tell me a short joke about {topic}"
client = openai.OpenAI()
def stream_chat_model(messages: List[dict]) -> Iterator[str]:
    stream = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        stream=True,
    )
    for response in stream:
        content = response.choices[0].delta.content
        if content is not None:
            yield content

def stream_chain(topic: str) -> Iterator[str]:
    prompt_value = prompt_template.format(topic=topic)
    return stream_chat_model([{"role": "user", "content": prompt_value}])


for chunk in stream_chain("ice cream"):
    print(chunk, end="", flush=True)

The output streams token by token:

(develop)[1] % python LCEL/invoke_stream.py                                                            ~/Workspace/LLM/langchain-llm-app
Why did the ice cream go to therapy?

Because it felt a little soft serve!%    

4. Batch: batch processing

If we want to run a batch of inputs in parallel, we again need a new function:

4.1 LCEL implementation

response = chain.batch(["ice cream", "spaghetti", "dumplings"])
print('response >> ', response)

Output:

(develop)⚡ % python LCEL/invoke_lcel.py                                                                  ~/Workspace/LLM/langchain-llm-app
response >>  ['Why did the ice cream go to therapy? \n\nBecause it was feeling a little rocky road!', 'Why did the spaghetti go to the party? \n\nBecause it heard it was going to "meat" some saucy friends!', 'Why did the dumpling go to the gym?\n\nBecause it wanted to get a little "dough" in shape!']
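
As a brief aside (this variant is not shown in the original post): batch also accepts a RunnableConfig, so the degree of parallelism can be capped, roughly mirroring the ThreadPoolExecutor(max_workers=5) used in the non-LCEL version below:

# Hedged variant: limit the batch to 5 concurrent model calls.
response = chain.batch(
    ["ice cream", "spaghetti", "dumplings"],
    config={"max_concurrency": 5},
)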

4.2 Implementation without LCEL

from typing import List

import openai
from concurrent.futures import ThreadPoolExecutor
from dotenv import load_dotenv

load_dotenv()

prompt_template = "Tell me a short joke about {topic}"
client = openai.OpenAI()

def call_chat_model(messages: List[dict]) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo", 
        messages=messages,
    )
    return response.choices[0].message.content

def invoke_chain(topic: str) -> str:
    prompt_value = prompt_template.format(topic=topic)
    messages = [{"role": "user", "content": prompt_value}]
    return call_chat_model(messages)

def batch_chain(topics: list) -> list:
    with ThreadPoolExecutor(max_workers=5) as executor:
        return list(executor.map(invoke_chain, topics))

response = batch_chain(["ice cream", "spaghetti", "dumplings"])
print('response >> ', response)

Output:

(develop)[1] % python LCEL/batch_chain.py                                                              ~/Workspace/LLM/langchain-llm-app
response >>  ['Why did the ice cream go to therapy?\n\nBecause it had too many sprinkles of anxiety!', 'Why did the spaghetti go to the spa? \n\nBecause it needed to pasta tense!', 'Why did the dumpling go to the bakery? \nBecause it wanted to make some dough!']

5. Async: asynchronous execution

If we need an asynchronous version:

5.1 LCEL implementation

chain.ainvoke("ice cream")

Calling it (with debug logging turned on):


from langchain.globals import set_debug

set_debug(True)

async def main():
    await chain.ainvoke("ice cream")

import asyncio
asyncio.run(main())

Debug output:

(develop)⚡ % python LCEL/invoke_lcel.py                                                                  ~/Workspace/LLM/langchain-llm-app
[chain/start] [1:chain:RunnableSequence] Entering Chain run with input:
{
  "input": "ice cream"
}
[chain/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel] Entering Chain run with input:
{
  "input": "ice cream"
}
[chain/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel > 3:chain:RunnablePassthrough] Entering Chain run with input:
{
  "input": "ice cream"
}
[chain/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel > 3:chain:RunnablePassthrough] [3ms] Exiting Chain run with output:
{
  "output": "ice cream"
}
[chain/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel] [11ms] Exiting Chain run with output:
{
  "topic": "ice cream"
}
[chain/start] [1:chain:RunnableSequence > 4:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
  "topic": "ice cream"
}
[chain/end] [1:chain:RunnableSequence > 4:prompt:ChatPromptTemplate] [3ms] Exiting Prompt run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "prompts",
    "chat",
    "ChatPromptValue"
  ],
  "kwargs": {
    "messages": [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "messages",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "Tell me a short joke about ice cream",
          "additional_kwargs": {}
        }
      }
    ]
  }
}
[llm/start] [1:chain:RunnableSequence > 5:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: Tell me a short joke about ice cream"
  ]
}
[llm/end] [1:chain:RunnableSequence > 5:llm:ChatOpenAI] [2.32s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "Why did the ice cream go to therapy?\n\nBecause it had too many toppings and couldn't cone-trol itself!",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "Why did the ice cream go to therapy?\n\nBecause it had too many toppings and couldn't cone-trol itself!",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 23,
      "prompt_tokens": 15,
      "total_tokens": 38
    },
    "model_name": "gpt-3.5-turbo",
    "system_fingerprint": null
  },
  "run": null
}
[chain/start] [1:chain:RunnableSequence > 6:parser:StrOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [1:chain:RunnableSequence > 6:parser:StrOutputParser] [3ms] Exiting Parser run with output:
{
  "output": "Why did the ice cream go to therapy?\n\nBecause it had too many toppings and couldn't cone-trol itself!"
}
[chain/end] [1:chain:RunnableSequence] [2.36s] Exiting Chain run with output:
{
  "output": "Why did the ice cream go to therapy?\n\nBecause it had too many toppings and couldn't cone-trol itself!"
}
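
Before recreating this without LCEL, note (as a hedged aside, not part of the original post) that the async streaming and batch variants also come for free, because the LCEL chain implements the full Runnable interface:

import asyncio

async def main():
    # Async streaming: chunks are printed as they are generated.
    async for chunk in chain.astream("ice cream"):
        print(chunk, end="", flush=True)
    # Async batch over several topics.
    responses = await chain.abatch(["ice cream", "spaghetti"])
    print(responses)

asyncio.run(main())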

5.2 Implementation without LCEL

from typing import List

import openai

prompt_template = "Tell me a short joke about {topic}"
async_client = openai.AsyncOpenAI()

async def acall_chat_model(messages: List[dict]) -> str:
    response = await async_client.chat.completions.create(
        model="gpt-3.5-turbo", 
        messages=messages,
    )
    return response.choices[0].message.content

async def ainvoke_chain(topic: str) -> str:
    prompt_value = prompt_template.format(topic=topic)
    messages = [{"role": "user", "content": prompt_value}]
    return await acall_chat_model(messages)

async def main():
    response = await ainvoke_chain("ice cream")
    print('response >> ', response)

import asyncio
asyncio.run(main())

Output:

(develop)⚡ % python LCEL/invoke.py                                                                       ~/Workspace/LLM/langchain-llm-app
response >>  Why did the ice cream go to therapy?

Because it had too many toppings and couldn't handle the sprinkles anymore!

Code

https://github.com/zgpeace/pets-name-langchain/tree/develop

References

https://python.langchain.com/docs/expression_language/why
