LangChain 51: In-depth LangChain Expression Language (LCEL), Part 14: accepting a RunnableConfig for automatic JSON repair

LangChain series articles

  1. LangChain 36: In-depth LangChain Expression Language (LCEL), Part 1: the advantages of LCEL
  2. LangChain 37: In-depth LangChain Expression Language (LCEL), Part 2: implementing prompt + model + output parser
  3. LangChain 38: In-depth LangChain Expression Language (LCEL), Part 3: implementing RAG (retrieval-augmented generation)
  4. LangChain 39: In-depth LangChain Expression Language (LCEL), Part 4: why use LCEL
  5. LangChain 40: A workaround for "Account deactivated" when calling the OpenAI ChatGPT API from LangChain: routing through a jump-server API
  6. LangChain 41: In-depth LangChain Expression Language (LCEL), Part 5: why use LCEL to call large language models (LLMs)
  7. LangChain 42: In-depth LangChain Expression Language (LCEL), Part 6: calling different LLMs at runtime
  8. LangChain 43: In-depth LangChain Expression Language (LCEL), Part 7: logging and Fallbacks for error handling
  9. LangChain 44: In-depth LangChain Expression Language (LCEL), Part 8: Runnable interface input and output schemas
  10. LangChain 45: In-depth LangChain Expression Language (LCEL), Part 9: Runnable invoke, stream, batch, and async calls
  11. LangChain 46: In-depth LangChain Expression Language (LCEL), Part 10: debug logging of intermediate states in Runnable calls
  12. LangChain 47: In-depth LangChain Expression Language (LCEL), Part 11: Runnable parallel processing
  13. LangChain 48: The definitive fix for "Account deactivated" when calling the OpenAI ChatGPT API from LangChain: routing through a jump-server API
  14. LangChain 49: In-depth LangChain Expression Language (LCEL), Part 12: passing data through with inputs unchanged
  15. LangChain 50: In-depth LangChain Expression Language (LCEL), Part 13: custom functions in a pipeline


1. Accepting a RunnableConfig

Runnable lambdas can optionally accept a RunnableConfig, which they can use to pass callbacks, tags, and other configuration information on to nested runs.
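For reference, a RunnableConfig is just a dictionary (a TypedDict) whose common keys include tags, metadata, callbacks, run_name, max_concurrency, recursion_limit, and configurable. Below is a minimal sketch, not part of the original example, of building such a dict and passing it to a Runnable; the function and tag names are made up for illustration.

# Minimal sketch (illustrative, not from the example below): build a RunnableConfig
# by hand and pass it to invoke() so it propagates to the wrapped function.
from langchain_core.runnables import RunnableConfig, RunnableLambda

config: RunnableConfig = {
    "tags": ["demo"],               # labels attached to every run in the trace
    "metadata": {"source": "doc"},  # arbitrary key/value pairs recorded with the run
    "run_name": "length-demo",      # display name for the top-level run
}

# A function wrapped in RunnableLambda may declare a second parameter named `config`
# to receive this dict when the runnable is invoked.
def measure(text: str, config: RunnableConfig):
    return {"length": len(text), "tags": config.get("tags", [])}

print(RunnableLambda(measure).invoke("hello", config))  # {'length': 5, 'tags': ['demo']}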

The example below recovers from JSON parsing failures: when json.loads fails, the malformed text and the error message are sent to the OpenAI API through a prompt, and the model is asked to repair the JSON automatically.

# Import the classes and functions needed from langchain and langchain_core
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda, RunnableConfig
import json  # standard-library module for parsing and generating JSON
from langchain.callbacks import get_openai_callback  # callback that tracks OpenAI token usage

from dotenv import load_dotenv  # loads environment variables (e.g. OPENAI_API_KEY) from a .env file
load_dotenv()  # actually load the environment variables

from langchain.globals import set_debug  # toggles LangChain's global debug mode
set_debug(True)  # enable debug mode so every intermediate run is logged

# Parse a string as JSON, or ask the model to fix it if it is malformed
def parse_or_fix(text: str, config: RunnableConfig):
    # Chain (prompt -> model -> string parser) that asks the model to repair the broken text
    fixing_chain = (
        ChatPromptTemplate.from_template(
            "Fix the following text:\n\n```text\n{input}\n```\nError: {error}"
            " Don't narrate, just respond with the fixed data."
        )
        | ChatOpenAI()  # send the prompt to the ChatOpenAI model
        | StrOutputParser()  # parse the chat model's reply into a plain string
    )

    # Try to parse the JSON up to three times; on each failure, run the fixing chain
    for _ in range(3):
        try:
            return json.loads(text)  # attempt to parse the text as JSON
        except Exception as e:  # parsing failed
            # Invoke the fixing chain with the malformed text and the error message,
            # forwarding the RunnableConfig so callbacks and tags propagate to the nested run
            text = fixing_chain.invoke({"input": text, "error": e}, config)
    return "Failed to parse"  # give up after three failed attempts

# Use the OpenAI callback context manager to track token usage while calling parse_or_fix
with get_openai_callback() as cb:
    # Wrap parse_or_fix in a RunnableLambda and invoke it with a JSON-like string
    # plus a config dict carrying tags and callbacks
    output = RunnableLambda(parse_or_fix).invoke(
        "{foo: bar}", {"tags": ["my-tag"], "callbacks": [cb]}
    )
    print(output)  # print the parsed result
    print(cb)  # print the token-usage summary collected by the callback

Run output

develop* $ python LCEL/config.py                                                                                             [21:49:30]
[chain/start] [1:chain:parse_or_fix] Entering Chain run with input:
{
  "input": "{foo: bar}"
}
[chain/start] [1:chain:parse_or_fix > 2:chain:RunnableSequence] Entering Chain run with input:
[inputs]
[chain/start] [1:chain:parse_or_fix > 2:chain:RunnableSequence > 3:prompt:ChatPromptTemplate] Entering Prompt run with input:
[inputs]
[chain/end] [1:chain:parse_or_fix > 2:chain:RunnableSequence > 3:prompt:ChatPromptTemplate] [2ms] Exiting Prompt run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "prompts",
    "chat",
    "ChatPromptValue"
  ],
  "kwargs": {
    "messages": [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "messages",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "Fix the following text:\n\n```text\n{foo: bar}\n```\nError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) Don't narrate, just respond with the fixed data.",
          "additional_kwargs": {}
        }
      }
    ]
  }
}
[llm/start] [1:chain:parse_or_fix > 2:chain:RunnableSequence > 4:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: Fix the following text:\n\n```text\n{foo: bar}\n```\nError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) Don't narrate, just respond with the fixed data."
  ]
}
[llm/end] [1:chain:parse_or_fix > 2:chain:RunnableSequence > 4:llm:ChatOpenAI] [2.00s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "{\n  \"foo\": \"bar\"\n}",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "{\n  \"foo\": \"bar\"\n}",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 9,
      "prompt_tokens": 56,
      "total_tokens": 65
    },
    "model_name": "gpt-3.5-turbo",
    "system_fingerprint": null
  },
  "run": null
}
[chain/start] [1:chain:parse_or_fix > 2:chain:RunnableSequence > 5:parser:StrOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [1:chain:parse_or_fix > 2:chain:RunnableSequence > 5:parser:StrOutputParser] [1ms] Exiting Parser run with output:
{
  "output": "{\n  \"foo\": \"bar\"\n}"
}
[chain/end] [1:chain:parse_or_fix > 2:chain:RunnableSequence] [2.01s] Exiting Chain run with output:
{
  "output": "{\n  \"foo\": \"bar\"\n}"
}
[chain/end] [1:chain:parse_or_fix] [2.38s] Exiting Chain run with output:
{
  "foo": "bar"
}
{'foo': 'bar'}
Tokens Used: 65
        Prompt Tokens: 56
        Completion Tokens: 9
Successful Requests: 1
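Because RunnableLambda(parse_or_fix) is itself a Runnable, it can also be composed into a larger LCEL chain with the | operator, so that malformed JSON coming out of a model is repaired before it reaches the caller. The sketch below is illustrative and not part of the original example; the prompt text and tag name are hypothetical.

# Illustrative sketch: reuse parse_or_fix at the end of a larger LCEL chain.
json_chain = (
    ChatPromptTemplate.from_template(
        "Return a JSON object with keys 'name' and 'color' describing {animal}."
    )
    | ChatOpenAI()
    | StrOutputParser()
    | RunnableLambda(parse_or_fix)  # repairs the output if the model returned broken JSON
)

result = json_chain.invoke({"animal": "a black cat"}, {"tags": ["json-repair"]})
print(result)  # expected to be a Python dict parsed from the model's JSON

Alternatively, tags and callbacks can be attached once with .with_config(tags=[...]) instead of being passed in the config argument of every invoke call.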

Code

https://github.com/zgpeace/pets-name-langchain/tree/develop

References

https://python.langchain.com/docs/expression_language/how_to/functions
