Custom LLMs in LangChain, a Detailed Step-by-Step Guide (Using Together as the Example)

This article shows how to use a Together AI LLM from LangChain by writing a custom LLM wrapper. It covers the two methods that must be implemented, `_call` and `_llm_type`, plus the optional `_identifying_params` property, and demonstrates type validation, exception handling, and logging with example code.

To use Together AI from LangChain, we must extend the base `LLM` abstract class.

Below is example code for a custom LLM wrapper, which we will improve with type validation, exception handling, and logging.

The official documentation explains how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain.

There are only two required things that a custom LLM needs to implement:

  • A `_call` method that takes in a string, some optional stop words, and returns a string.
  • A `_llm_type` property that returns a string. Used for logging purposes only.

There is a second optional thing it can implement:

  • An `_identifying_params` property that is used to help with printing of this class. Should return a dictionary.

TL;DR: you must implement the two methods `_call` and `_llm_type`; `_identifying_params` is optional.
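As a bare-bones sketch of those requirements, here is a hypothetical toy EchoLLM that simply returns the prompt (the full Together wrapper is built next):

from typing import Any, List, Optional
from langchain.llms.base import LLM

class EchoLLM(LLM):
    """Toy LLM: echoes the prompt back unchanged."""

    @property
    def _llm_type(self) -> str:
        return "echo"  # used for logging purposes only

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        # A real wrapper would call a model or an API here.
        return prompt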

Let's get started.

First, install and import the required packages:

pip install -U together langchain python-dotenv

import os
import logging
from typing import Any, List, Optional

import together
from langchain.llms.base import LLM
from langchain import PromptTemplate, LLMChain

from dotenv import load_dotenv  # load_dotenv() reads a .env file and loads its entries into the process environment, a common way to handle secrets such as API keys
# Load env variables
load_dotenv()

# Set up logging
logging.basicConfig(level=logging.INFO)
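
For `load_dotenv()` to pick up the key, a `.env` file along these lines (the value is a placeholder) should sit in the project root:

# .env (keep this file out of version control)
TOGETHER_API_KEY=your_together_api_key_here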

The langchain.llms.base module simplifies interaction with an LLM by offering a friendlier interface than implementing the `_generate` method directly. The class `langchain.llms.base.LLM` is an abstract base class: it provides a template for other classes and is not meant to be instantiated itself. It handles the complexity of the LLM internally, allowing users to interact with these models more easily.

The `__call__` method lets an instance of the class be called like a function; it checks the cache and then runs the LLM on the given prompt.

class TogetherLLM(LLM):
    """
    Together LLM integration.

    Attributes:
        model (str): Model endpoint to use.
        together_api_key (str): Together API key.
        temperature (float): Sampling temperature to use.
        max_tokens (int): Maximum number of tokens to generate.
    """

    model: str = "togethercomputer/llama-2-7b-chat"
    together_api_key: str = os.environ["TOGETHER_API_KEY"]
    temperature: float = 0.7
    max_tokens: int = 512

    @property
    def _llm_type(self) -> str:
        """Return type of LLM."""
        return "together"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> str:
        """Call the Together endpoint."""
        # `stop` is accepted to match the LLM interface; this simple
        # wrapper does not forward it to the API.
        try:
            logging.info("Making API call to Together endpoint.")
            return self._make_api_call(prompt)
        except Exception as e:
            logging.error(f"Error in TogetherLLM _call: {e}", exc_info=True)
            raise

    def _make_api_call(self, prompt: str) -> str:
        """Make the API call to the Together endpoint."""
        together.api_key = self.together_api_key
        output = together.Complete.create(
            prompt,
            model=self.model,
            max_tokens=self.max_tokens,
            temperature=self.temperature,
        )
        logging.info("API call successful.")
        # The completion text lives under output -> choices in the response dict.
        return output['output']['choices'][0]['text']

Using the LLM we created

# Instantiate the wrapper
llm = TogetherLLM(
    model = "togethercomputer/llama-2-7b-chat",
    max_tokens = 256,
    temperature = 0.8
)

# Create the chain
prompt_template = "You are a friendly bot, answer the following question: {question}"
prompt = PromptTemplate(
    input_variables=["question"], template=prompt_template
)

chat = LLMChain(llm=llm, prompt=prompt)


# chat
chat("Can AI take over developer jobs?")

Other examples

Just follow the same pattern!

The official example

from typing import Any, List, Mapping, Optional

from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM


class CustomLLM(LLM):
    n: int

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[: self.n]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"n": self.n}



llm = CustomLLM(n=10)
llm.invoke("This is a foobar thing")
# 'This is a '


print(llm)
# CustomLLM
# Params: {'n': 10}

The InternLM example

from langchain.llms.base import LLM
from typing import Any, List, Optional
from langchain.callbacks.manager import CallbackManagerForLLMRun
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

class InternLM_LLM(LLM):
    # Custom LLM class backed by a local InternLM model
    tokenizer: AutoTokenizer = None
    model: AutoModelForCausalLM = None

    def __init__(self, model_path: str):
        # model_path: path to the local InternLM model
        # Initialize the model from the local checkpoint
        super().__init__()
        print("Loading the model from the local path...")
        self.tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
        self.model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True).to(torch.bfloat16).cuda()
        self.model = self.model.eval()
        print("Finished loading the local model")

    def _call(self, prompt: str, stop: Optional[List[str]] = None,
              run_manager: Optional[CallbackManagerForLLMRun] = None,
              **kwargs: Any) -> str:
        # Override the call function
        system_prompt = """You are an AI assistant whose name is InternLM (书生·浦语).
        - InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.
        - InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文.
        """

        # Seed the history with the system prompt; InternLM's chat() takes
        # history as a list of (query, response) tuples.
        messages = [(system_prompt, '')]
        response, history = self.model.chat(self.tokenizer, prompt, history=messages)
        return response

    @property
    def _llm_type(self) -> str:
        return "InternLM"

References

LLM Inference Deployment (6): Together AI releases the world's fastest LLM inference engine, with performance three times that of vLLM and TGI

Very detailed and well worth a read.
