Implementing constructor callbacks in LangChain

Code

from typing import Any, Dict, List

from langchain_openai import ChatOpenAI
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.messages import BaseMessage
from langchain_core.outputs import LLMResult
from langchain_core.prompts import ChatPromptTemplate


class LoggingHandler(BaseCallbackHandler):
    def on_chat_model_start(
        self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs
    ) -> None:
        print("Chat model started")

    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        print(f"Chat model ended, response: {response}")

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs
    ) -> None:
        chain_name = serialized.get('name') if serialized else 'Unknown'
        print(f"Chain {chain_name} started")

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None:
        print(f"Chain ended, outputs: {outputs}")


callbacks = [LoggingHandler()]
llm = ChatOpenAI(
    temperature=0,
    model="GLM-4-Flash-250414",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/",
)
prompt = ChatPromptTemplate.from_template("What is 1 + {number}?")

chain = prompt | llm

chain.invoke({"number": "2"}, config={"callbacks": callbacks})

Output:

Chain Unknown started
Chain ChatPromptTemplate started
Chain ended, outputs: messages=[HumanMessage(content='What is 1 + 2?', additional_kwargs={}, response_metadata={})]
Chat model started
Chat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', generation_info={'finish_reason': 'stop', 'logprobs': None}, message=AIMessage(content='1 + 2 = 3', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414 ', 'system_fingerprint': None, 'id': '20250504114226d2d879c98ff0426b', 'finish_reason': 'stop', 'logprobs': None}, id='run-611d18e4-0d44-486e-b135-95ce31f092de-0', usage_metadata={'input_tokens': 15, 'output_tokens': 10, 'total_tokens': 25, 'input_token_details': {}, 'output_token_details': {}}))]] llm_output={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414 ', 'system_fingerprint': None, 'id': '20250504114226d2d879c98ff0426b'} run=None type='LLMResult'
Chain ended, outputs: content='1 + 2 = 3' additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414 ', 'system_fingerprint': None, 'id': '20250504114226d2d879c98ff0426b', 'finish_reason': 'stop', 'logprobs': None} id='run-611d18e4-0d44-486e-b135-95ce31f092de-0' usage_metadata={'input_tokens': 15, 'output_tokens': 10, 'total_tokens': 25, 'input_token_details': {}, 'output_token_details': {}}

Return value of invoke:

AIMessage(content='1 + 2 = 3', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414 ', 'system_fingerprint': None, 'id': '20250504114226d2d879c98ff0426b', 'finish_reason': 'stop', 'logprobs': None}, id='run-611d18e4-0d44-486e-b135-95ce31f092de-0', usage_metadata={'input_tokens': 15, 'output_tokens': 10, 'total_tokens': 25, 'input_token_details': {}, 'output_token_details': {}})

Code explanation

Code structure

  1. Imports:

    • langchain_openai, langchain_core.callbacks, langchain_core.messages, langchain_core.outputs, and langchain_core.prompts supply the OpenAI-compatible chat model and the callback machinery.
  2. The LoggingHandler class:

    • Inherits from BaseCallbackHandler and handles callbacks for the different lifecycle stages.
    • on_chat_model_start: prints a message when the chat model starts.
    • on_llm_end: prints the response when the chat model finishes.
    • on_chain_start: prints the chain's name when the chain starts.
    • on_chain_end: prints the outputs when the chain finishes.
  3. Callback setup:

    • A LoggingHandler instance is created and placed in the callbacks list.
  4. The ChatOpenAI instance:

    • A ChatOpenAI instance is created with the temperature, model name, API key, and base URL.
  5. ChatPromptTemplate:

    • A chat prompt is built from the template "What is 1 + {number}?".
  6. Building and invoking the chain:

    • The pipe operator | composes the prompt and the chat model into a chain.
    • The chain's invoke method is called with {"number": "2"} and the callbacks config.
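The sequence of events in items 1–6 follows a simple dispatch pattern: the runnable fires each registered handler's start hook, does its work, then fires the end hooks. A minimal sketch in plain Python (the Handler and Chain classes here are illustrative stand-ins, not LangChain internals):

```python
from typing import Any, Callable, Dict, List


class Handler:
    """Minimal stand-in for a callback handler with start/end hooks."""

    def on_chain_start(self, name: str, inputs: Dict[str, Any]) -> None:
        print(f"Chain {name} started")

    def on_chain_end(self, name: str, outputs: Any) -> None:
        print(f"Chain {name} ended")


class Chain:
    """Toy chain that fires every handler's hooks around its core function."""

    def __init__(self, name: str, fn: Callable[[Dict[str, Any]], Any]):
        self.name = name
        self.fn = fn

    def invoke(self, inputs: Dict[str, Any], callbacks: List[Handler]) -> Any:
        for h in callbacks:
            h.on_chain_start(self.name, inputs)
        result = self.fn(inputs)  # the actual work happens between the hooks
        for h in callbacks:
            h.on_chain_end(self.name, result)
        return result


add_chain = Chain("adder", lambda d: 1 + d["number"])
print(add_chain.invoke({"number": 2}, callbacks=[Handler()]))  # hooks print, then 3
```

The handlers never interrupt the computation; they only observe it, which is why LangChain callbacks are well suited to logging and metrics.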

What the code does

This code uses the langchain library to build a simple chat-model pipeline and records the model's start and end events through the callback mechanism. The LoggingHandler class logs the lifecycle events of both the model and the chain. By combining ChatPromptTemplate and ChatOpenAI, the code poses a simple arithmetic question and retrieves the answer.
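One point worth flagging: despite the title, the example passes callbacks at request time via config={"callbacks": ...}, which scopes them to that single invoke call; LangChain also accepts callbacks in a component's constructor, where they apply to every call on that object. The difference can be modeled in plain Python (a simplified sketch; Component, PrintHandler, and the merging logic are illustrative, not LangChain's actual classes):

```python
from typing import Any, Callable, Dict, List, Optional


class PrintHandler:
    def on_start(self, inputs: Dict[str, Any]) -> None:
        print(f"started with {inputs}")


class Component:
    """Toy runnable: constructor callbacks fire on every invoke;
    request callbacks fire only for the call that supplies them."""

    def __init__(
        self,
        fn: Callable[[Dict[str, Any]], Any],
        callbacks: Optional[List[Any]] = None,
    ):
        self.fn = fn
        self.constructor_callbacks = callbacks or []

    def invoke(
        self, inputs: Dict[str, Any], config: Optional[Dict[str, Any]] = None
    ) -> Any:
        request_callbacks = (config or {}).get("callbacks", [])
        # Both sets of handlers see the same events for this run
        for h in self.constructor_callbacks + request_callbacks:
            h.on_start(inputs)
        return self.fn(inputs)


# Constructor-scoped: the handler fires on every invoke of this object
always_logged = Component(lambda d: 1 + d["number"], callbacks=[PrintHandler()])
always_logged.invoke({"number": 2})

# Request-scoped: the handler fires only for this particular call
plain = Component(lambda d: 1 + d["number"])
plain.invoke({"number": 2}, config={"callbacks": [PrintHandler()]})
```

Constructor callbacks suit cross-cutting concerns such as always-on logging; request callbacks suit per-call tracing.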

A similar example

from typing import Any, Dict, List
import time

from langchain_openai import ChatOpenAI
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.messages import BaseMessage
from langchain_core.outputs import LLMResult
from langchain_core.prompts import ChatPromptTemplate

class StatsHandler(BaseCallbackHandler):
    def __init__(self):
        self.call_count = 0
        self.total_time = 0.0
        # Start times keyed by run_id, so nested chain runs (such as the
        # prompt sub-chain inside the sequence) don't overwrite each other.
        self.start_times = {}

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs
    ) -> None:
        # LangChain's callback manager passes a unique run_id for every run
        self.start_times[kwargs["run_id"]] = time.time()
        print(f"Chain started with inputs: {inputs}")

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None:
        duration = time.time() - self.start_times.pop(kwargs["run_id"])
        self.call_count += 1
        self.total_time += duration
        average_time = self.total_time / self.call_count
        print(f"Chain ended with outputs: {outputs}")
        print(f"Call count: {self.call_count}, Average response time: {average_time:.2f} seconds")

callbacks = [StatsHandler()]
llm = ChatOpenAI(
    temperature=0,
    model="GLM-4-Flash-250414",
    openai_api_key="your api key",
    openai_api_base="https://open.bigmodel.cn/api/paas/v4/"
)
prompt = ChatPromptTemplate.from_template("What is 1 + {number}?")

chain = prompt | llm

chain.invoke({"number": "2"}, config={"callbacks": callbacks})

Output:
Chain started with inputs: {'number': '2'}
Chain started with inputs: {'number': '2'}
Chain ended with outputs: messages=[HumanMessage(content='What is 1 + 2?', additional_kwargs={}, response_metadata={})]
Call count: 1, Average response time: 0.00 seconds
Chain ended with outputs: content='1 + 2 = 3' additional_kwargs={'refusal': None} response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414', 'system_fingerprint': None, 'id': '2025050411475196802f6ee9134ac8', 'finish_reason': 'stop', 'logprobs': None} id='run-7c41ec92-dbc1-4fbc-8dd8-708379db745f-0' usage_metadata={'input_tokens': 15, 'output_tokens': 10, 'total_tokens': 25, 'input_token_details': {}, 'output_token_details': {}}
Call count: 2, Average response time: 0.25 seconds

Return value of invoke:

AIMessage(content='1 + 2 = 3', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 15, 'total_tokens': 25, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'GLM-4-Flash-250414', 'system_fingerprint': None, 'id': '2025050411475196802f6ee9134ac8', 'finish_reason': 'stop', 'logprobs': None}, id='run-7c41ec92-dbc1-4fbc-8dd8-708379db745f-0', usage_metadata={'input_tokens': 15, 'output_tokens': 10, 'total_tokens': 25, 'input_token_details': {}, 'output_token_details': {}})
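The doubled "Chain started" lines in the output above are not a bug: the prompt sub-chain and the enclosing sequence are separate runs, and each fires the chain hooks. LangChain tells them apart by passing a unique run_id (and a parent_run_id) into every hook. The bookkeeping idea can be sketched in plain Python (TimingHandler and the hook names are illustrative, not LangChain internals):

```python
import time
import uuid
from typing import Dict


class TimingHandler:
    """Times each run separately by keying start times on a run ID,
    so nested runs cannot overwrite each other's start time."""

    def __init__(self):
        self.start_times: Dict[uuid.UUID, float] = {}
        self.durations: Dict[str, float] = {}

    def on_start(self, name: str, run_id: uuid.UUID) -> None:
        self.start_times[run_id] = time.time()

    def on_end(self, name: str, run_id: uuid.UUID) -> None:
        self.durations[name] = time.time() - self.start_times.pop(run_id)


handler = TimingHandler()

outer = uuid.uuid4()
handler.on_start("sequence", outer)   # outer run begins

inner = uuid.uuid4()
handler.on_start("prompt", inner)     # nested run begins before the outer one ends
time.sleep(0.01)
handler.on_end("prompt", inner)

time.sleep(0.01)
handler.on_end("sequence", outer)     # outer duration spans both sleeps

print(handler.durations)
```

This is why the StatsHandler above keys its start times on kwargs["run_id"]: with a single shared start_time attribute, the inner run would silently reset the outer run's clock.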