vLLM fails to load a BitsAndBytes-quantized model

ValueError: BitAndBytes quantization with TP or PP is not supported yet

When loading the HF model, I used load_in_8bit to quantize it (under the hood this actually calls bitsandbytes to do the quantization), then saved the result back to disk:

import argparse
import os
import torch

def parse_arguments():
    parser = argparse.ArgumentParser()
    parser.add_argument('--model_path',
                        help="model and tokenizer path",
                        default='/docker_shared/Baichuan2-7B-Chat-test2',
                        )
    return parser.parse_args()


def convert_bin2st_from_pretrained(model_path):
    from transformers import AutoModelForCausalLM
    # Load the checkpoint and quantize it to 8-bit on the fly via bitsandbytes.
    model = AutoModelForCausalLM.from_pretrained(
        pretrained_model_name_or_path=model_path,
        low_cpu_mem_usage=True,
        trust_remote_code=True,
        torch_dtype=torch.float16,
        load_in_8bit=True)
    # Write the (now quantized) weights back to the same directory as safetensors.
    model.save_pretrained(model_path, safe_serialization=True)

if __name__ == '__main__':
    args = parse_arguments()

    print(f"covert  {args.model_path} into safetensor")
    convert_bin2st_from_pretrained(args.model_path)
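
As an aside, newer transformers releases deprecate passing load_in_8bit directly to from_pretrained in favor of an explicit BitsAndBytesConfig. A minimal equivalent sketch, assuming a recent transformers with bitsandbytes installed (same path as above):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Same 8-bit load as above, expressed with the explicit quantization config object.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    '/docker_shared/Baichuan2-7B-Chat-test2',
    trust_remote_code=True,
    torch_dtype=torch.float16,
    quantization_config=bnb_config,
)
model.save_pretrained('/docker_shared/Baichuan2-7B-Chat-test2', safe_serialization=True)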

Then I tried to load the quantized model with vLLM, and it failed:

WARNING 09-07 23:25:16 config.py:318] bitsandbytes quantization is not fully optimized yet. The speed can be slower than non-quantized models.
........
File "/usr/local/lib/python3.10/dist-packages/vllm/config.py", line 353, in verify_with_parallel_config
    raise ValueError(
ValueError: BitAndBytes quantization with TP or PP is not supported yet.
ERROR 09-07 23:25:19 api_server.py:171] RPCServer process died before responding to readiness probe

This means vLLM does not support tensor-parallel acceleration for bitsandbytes-quantized models, i.e. --tensor-parallel-size must not be greater than 1.
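
The same config check is hit whether the model goes through the OpenAI-compatible api_server (as above) or the offline LLM class. A minimal offline sketch, assuming this vLLM version also requires load_format='bitsandbytes' alongside the quantization flag:

from vllm import LLM

# bitsandbytes in vLLM currently requires tensor_parallel_size == 1; anything larger
# raises "BitAndBytes quantization with TP or PP is not supported yet".
llm = LLM(
    model='/docker_shared/Baichuan2-7B-Chat-test2',
    quantization='bitsandbytes',
    load_format='bitsandbytes',
    trust_remote_code=True,
    tensor_parallel_size=1,
)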

Loading the model with --tensor-parallel-size 1 then runs into another error:

WARNING 09-07 23:44:11 config.py:318] bitsandbytes quantization is not fully optimized yet. The speed can be slower than non-quantized models.
WARNING 09-07 23:44:11 config.py:357] CUDA graph is not supported on BitAndBytes yet, fallback to the eager mode.
.......
File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/baichuan.py", line 405, in load_weights
    param = params_dict[name]
KeyError: 'model.layers.0.mlp.down_proj.SCB'
Loading safetensors checkpoint shards:   0% Completed | 0/2 [00:00<?, ?it/s]

ERROR 09-07 23:44:19 api_server.py:171] RPCServer process died before responding to readiness probe

The model still fails to load.
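
For context, the .SCB entries are the per-row int8 scales that bitsandbytes attaches to each quantized Linear layer, and save_pretrained writes them into the checkpoint; vLLM's Baichuan weight loader has no parameter by that name. A quick sketch to confirm they are in the saved shards (the shard filename is illustrative; see model.safetensors.index.json for the real names):

from safetensors import safe_open

# List the bitsandbytes int8 scale tensors (.SCB) that ended up in the checkpoint.
shard = '/docker_shared/Baichuan2-7B-Chat-test2/model-00001-of-00002.safetensors'
with safe_open(shard, framework='pt') as f:
    scb_keys = [k for k in f.keys() if k.endswith('.SCB')]
print(f'found {len(scb_keys)} .SCB tensors, e.g. {scb_keys[:3]}')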

Which quantization methods does vLLM support?

vLLM's help output lists the quantization methods it supports:

--quantization {aqlm,awq,deepspeedfp,tpu_int8,fp8,fbgemm_fp8,marlin,gguf,gptq_marlin_24,gptq_marlin,awq_marlin,gptq,squeezellm,compressed-tensors,bitsandbytes,qqq,experts_int8,None}, -q {aqlm,awq,deepspeedfp,tpu_int8,fp8,fbgemm_fp8,marlin,gguf,gptq_marlin_24,gptq_marlin,awq_marlin,gptq,squeezellm,compressed-tensors,bitsandbytes,qqq,experts_int8,None}
                        Method used to quantize the weights. If None, we first check the `quantization_config` attribute in the model config file. If that is None, we assume the model weights are not quantized and use
                        `dtype` to determine the data type of the weights.

These quantization methods are not applied by vLLM at startup; the models have to be quantized ahead of time, and vLLM only supports loading the resulting checkpoints. The quantization step itself lives outside vLLM.
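
As the help text says, when -q/--quantization is not given, vLLM falls back to the quantization_config block that a pre-quantized checkpoint (AWQ, GPTQ, etc.) carries in its config.json. A minimal sketch of what that lookup sees (the path is illustrative):

import json

# A checkpoint quantized ahead of time declares its method in config.json.
with open('/path/to/quantized-model/config.json') as f:
    cfg = json.load(f)
print(cfg.get('quantization_config', {}).get('quant_method'))  # e.g. 'awq' or 'gptq'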

The documentation on the quantization methods vLLM supports is at https://docs.vllm.ai/en/latest/quantization/supported_hardware.html, which also describes how to use each of them.
According to that page, bitsandbytes is supported as well, so it is not clear to me why the load above failed.
