Building llama.cpp for Large Language Models

1. The Large Model Deployment Tool llama.cpp

2. Quantizing a Model with llama.cpp

2.1 Cloning llama.cpp

Project URL:

https://github.com/ggerganov/llama.cpp

Typically you configure an SSH key on GitHub and clone via SSH, although cloning over HTTPS as shown below works just as well.

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
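If you have set up an SSH key as mentioned above, the equivalent SSH clone command is:

git clone git@github.com:ggerganov/llama.cpp.git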

Clone the project, then run an initial build.

➜  llama.cpp git:(master) ✗ make
I ccache not found. Consider installing it for faster compilation.
I llama.cpp build info:
I UNAME_S:   Darwin
I UNAME_P:   arm
I UNAME_M:   arm64
I CFLAGS:    -I. -Icommon -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -DNDEBUG -DGGML_USE_ACCELERATE -DACCELERATE_NEW_LAPACK -DACCELERATE_LAPACK_ILP64 -DGGML_USE_LLAMAFILE -DGGML_USE_METAL  -std=c11   -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -pthread -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion
I CXXFLAGS:  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread   -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -I. -Icommon -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -DNDEBUG -DGGML_USE_ACCELERATE -DACCELERATE_NEW_LAPACK -DACCELERATE_LAPACK_ILP64 -DGGML_USE_LLAMAFILE -DGGML_USE_METAL
I NVCCFLAGS: -std=c++11 -O3
I LDFLAGS:   -framework Accelerate -framework Foundation -framework Metal -framework MetalKit
I CC:        Apple clang version 14.0.3 (clang-1403.0.22.14.1)
I CXX:       Apple clang version 14.0.3 (clang-1403.0.22.14.1)

make: Nothing to be done for `default'.

The build log notes that ccache is not found; install it:

brew install ccache

After installing it, run make again:

➜  llama.cpp git:(master) ✗ make
I ccache found, compilation results will be cached. Disable with LLAMA_NO_CCACHE.
I llama.cpp build info:
I UNAME_S:   Darwin
I UNAME_P:   arm
I UNAME_M:   arm64
I CFLAGS:    -I. -Icommon -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -DNDEBUG -DGGML_USE_ACCELERATE -DACCELERATE_NEW_LAPACK -DACCELERATE_LAPACK_ILP64 -DGGML_USE_LLAMAFILE -DGGML_USE_METAL  -std=c11   -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -pthread -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion
I CXXFLAGS:  -std=c++11 -fPIC -O3 -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wmissing-declarations -Wmissing-noreturn -pthread   -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -I. -Icommon -D_XOPEN_SOURCE=600 -D_DARWIN_C_SOURCE -DNDEBUG -DGGML_USE_ACCELERATE -DACCELERATE_NEW_LAPACK -DACCELERATE_LAPACK_ILP64 -DGGML_USE_LLAMAFILE -DGGML_USE_METAL
I NVCCFLAGS: -std=c++11 -O3
I LDFLAGS:   -framework Accelerate -framework Foundation -framework Metal -framework MetalKit
I CC:        Apple clang version 14.0.3 (clang-1403.0.22.14.1)
I CXX:       Apple clang version 14.0.3 (clang-1403.0.22.14.1)

A series of executables is generated in the project directory:

  • main: run inference with a model
  • quantize: quantize a model
  • server: expose the model as an API service (see the sketch right after this list)
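For example, once a GGUF model exists (sections 2.3 and 2.4 below produce one), server can expose it over HTTP. A minimal sketch, with the context size and port chosen arbitrarily:

./server -m ./models/Llama-2-7b-chat-hf/ggml-model-q4_0.gguf -c 2048 --port 8080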

2.2 Preparing a Model Supported by llama.cpp

The model formats llama.cpp can convert include PyTorch's .pth, Hugging Face's .safetensors, and the ggmlv3 format that llama.cpp used previously.

Find a model in one of these formats on Hugging Face and download it into llama.cpp's models directory.

# run from the llama.cpp root so the clone lands under ./models
git clone https://huggingface.co/4bit/Llama-2-7b-chat-hf ./models/Llama-2-7b-chat-hf
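Note that Hugging Face model repositories store the large weight files with Git LFS; without it, the clone only fetches small pointer files instead of the actual weights. Install and enable it first if needed:

brew install git-lfs
git lfs install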

2.3 Converting to GGUF Format

2.3.1 Install dependencies
The llama.cpp project ships a requirements.txt file, so the dependencies can be installed directly from it.

pip install -r requirements.txt
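If you would rather keep these Python dependencies isolated from the system installation, a standard virtual environment works fine (optional):

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt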

If Python is not installed locally, install it first (Homebrew's python formula bundles pip, so no separate pip package is needed):

brew install python

2.3.2 Convert the model

python convert.py ./models/Llama-2-7b-chat-hf --vocabtype spm

params = Params(n_vocab=32000, n_embd=4096, n_mult=5504, n_layer=32, n_ctx=2048, n_ff=11008, n_head=32, n_head_kv=32, f_norm_eps=1e-05, f_rope_freq_base=None, f_rope_scale=None, ftype=None, path_model=PosixPath('models/Llama-2-7b-chat-hf'))
Loading vocab file 'models/Llama-2-7b-chat-hf/tokenizer.model', type 'spm'
...
Wrote models/Llama-2-7b-chat-hf/ggml-model-f16.gguf

--vocabtype specifies the tokenizer type; the default is spm, and if the model uses bpe it must be specified explicitly.
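So for a model that uses a BPE tokenizer, the conversion command names the vocab type explicitly (the model path here is only a placeholder):

python convert.py ./models/some-bpe-model --vocabtype bpe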


2.4 Quantizing the Model


The quantize tool supports quantization at a range of precisions; running it without arguments prints its usage and the supported types.

./quantize

usage: ./quantize [--help] [--allow-requantize] [--leave-output-tensor] model-f32.gguf [model-quant.gguf] type [nthreads]

  --allow-requantize: Allows requantizing tensors that have already been quantized. Warning: This can severely reduce quality compared to quantizing from 16bit or 32bit
  --leave-output-tensor: Will leave output.weight un(re)quantized. Increases model size but may also increase quality, especially when requantizing

Allowed quantization types:
   2  or  Q4_0   :  3.56G, +0.2166 ppl @ LLaMA-v1-7B
   3  or  Q4_1   :  3.90G, +0.1585 ppl @ LLaMA-v1-7B
   8  or  Q5_0   :  4.33G, +0.0683 ppl @ LLaMA-v1-7B
   9  or  Q5_1   :  4.70G, +0.0349 ppl @ LLaMA-v1-7B
  10  or  Q2_K   :  2.63G, +0.6717 ppl @ LLaMA-v1-7B
  12  or  Q3_K   : alias for Q3_K_M
  11  or  Q3_K_S :  2.75G, +0.5551 ppl @ LLaMA-v1-7B
  12  or  Q3_K_M :  3.07G, +0.2496 ppl @ LLaMA-v1-7B
  13  or  Q3_K_L :  3.35G, +0.1764 ppl @ LLaMA-v1-7B
  15  or  Q4_K   : alias for Q4_K_M
  14  or  Q4_K_S :  3.59G, +0.0992 ppl @ LLaMA-v1-7B
  15  or  Q4_K_M :  3.80G, +0.0532 ppl @ LLaMA-v1-7B
  17  or  Q5_K   : alias for Q5_K_M
  16  or  Q5_K_S :  4.33G, +0.0400 ppl @ LLaMA-v1-7B
  17  or  Q5_K_M :  4.45G, +0.0122 ppl @ LLaMA-v1-7B
  18  or  Q6_K   :  5.15G, -0.0008 ppl @ LLaMA-v1-7B
   7  or  Q8_0   :  6.70G, +0.0004 ppl @ LLaMA-v1-7B
   1  or  F16    : 13.00G              @ 7B
   0  or  F32    : 26.00G              @ 7B

./quantize ./models/Llama-2-7b-chat-hf/ggml-model-f16.gguf ./models/Llama-2-7b-chat-hf/ggml-model-q4_0.gguf Q4_0

llama_model_quantize_internal: model size  = 12853.02 MB
llama_model_quantize_internal: quant size  =  3647.87 MB
llama_model_quantize_internal: hist: 0.036 0.015 0.025 0.039 0.056 0.076 0.096 0.112 0.118 0.112 0.096 0.077 0.056 0.039 0.025 0.021
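With the quantized file in place, a quick sanity check with main might look like this (the prompt and token count are arbitrary):

./main -m ./models/Llama-2-7b-chat-hf/ggml-model-q4_0.gguf -p "Tell me a short joke." -n 128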

3. Integrating llama.cpp on Android

The project's examples directory contains an Android demo.
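Beyond the demo, llama.cpp itself can be cross-compiled for Android using CMake with the NDK toolchain file. A rough sketch, assuming $NDK points at an installed NDK and that the arm64-v8a ABI and API level 23 suit the target device:

mkdir build-android && cd build-android
cmake -DCMAKE_TOOLCHAIN_FILE=$NDK/build/cmake/android.toolchain.cmake \
      -DANDROID_ABI=arm64-v8a -DANDROID_PLATFORM=android-23 ..
make -j4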

3.2 Manually Installing BLIS

Clone the code:

git clone https://github.com/flame/blis.git
# or clone via SSH
git clone git@github.com:flame/blis.git
cd blis
./configure --enable-cblas -t openmp,pthreads auto

Start the build:

make -j4

The build fails because the -fopenmp flag is not supported; each parallel compile job prints the same error, so the messages interleave:

clang: fatal error: unsupported option '-fopenmp'
make: *** [obj/firestorm/kernels/armv8a/3/bli_gemm_armv8a_asm_d6x8.o] Error 1
make: *** Waiting for unfinished jobs....
make: *** [obj/firestorm/config/firestorm/bli_cntx_init_firestorm.o] Error 1
make: *** [obj/firestorm/kernels/armv8a/1m/bli_packm_armv8a_int_d6x8.o] Error 1
make: *** [obj/firestorm/kernels/armv8a/1m/bli_packm_armv8a_int_s8x12.o] Error 1
➜  blis git:(master)
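Apple's bundled clang does not support OpenMP, which is why -fopenmp is rejected. One workaround is to configure BLIS with pthreads threading only, dropping openmp from the -t list; a hedged sketch:

./configure --enable-cblas -t pthreads auto
make -j4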
