Running the InternLM (书生·浦语) Large Language Model on a Phone

Reference: https://llm.mlc.ai/docs/deploy/android.html#android-sdk

Environment Setup

1. Install Rust (Ubuntu)

Reference: https://forge.rust-lang.org/infra/other-installation-methods.html#which

export https_proxy=192.168.2.4:21083
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup default stable  
export PATH=~/.cargo/bin:$PATH
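
A quick sanity check that the toolchain is now on PATH:

rustc --version
cargo --version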

2. Install Android Studio

Reference: https://developer.android.com/studio
Install the NDK and CMake under Projects → SDK Manager → SDK Tools.
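If you prefer the command line, the same components can be installed with sdkmanager (a sketch; the path assumes the command-line tools are installed, and the package versions are assumptions to match against what the SDK Manager lists):

~/Android/Sdk/cmdline-tools/latest/bin/sdkmanager "ndk;27.0.12077973" "cmake;3.22.1"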

3. Set Environment Variables

export ANDROID_NDK=~/Android/Sdk/ndk/27.0.12077973
export TVM_NDK_CC=$ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android24-clang
export JAVA_HOME=~/Downloads/android-studio-2024.1.1.12-linux/android-studio/jbr
export PATH=/usr/local/cuda-12/bin:$PATH
export PATH=~/Android/Sdk/cmake/3.22.1/bin:$PATH
export PATH=~/.cargo/bin:$PATH
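
A minimal check that the exported paths actually exist before building:

for p in "$ANDROID_NDK" "$TVM_NDK_CC" "$JAVA_HOME"; do
    [ -e "$p" ] && echo "OK       $p" || echo "MISSING  $p"
done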

Convert the Model

1. Install mlc-llm

Reference: https://llm.mlc.ai/docs/install/mlc_llm.html
(install CUDA and a matching PyTorch build in advance)

conda create --name mlc-prebuilt python=3.11
conda activate mlc-prebuilt
python -m pip install --pre -U -f https://mlc.ai/wheels mlc-llm-nightly-cu122 mlc-ai-nightly-cu122
conda install -c conda-forge git-lfs
git clone https://github.com/mlc-ai/mlc-llm.git
cd mlc-llm
git submodule update --init --recursive
cd android/MLCChat  
export TVM_SOURCE_DIR=~/Documents/mlc-llm/3rdparty/tvm
export MLC_LLM_SOURCE_DIR=~/Documents/mlc-llm
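
To verify the install, import the package and check the CLI entry point (run inside the mlc-prebuilt env):

python -c "import mlc_llm; print(mlc_llm.__file__)"
mlc_llm --help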

If you see the following error, the CMake version is too old:

CMake Error at ../../3rdparty/tokenizers-cpp/CMakeLists.txt:132 (target_link_libraries): Cannot specify link libraries for target "tokenizers_c" which is not built by this project.

Use the CMake that ships with Android Studio:

export PATH=~/Android/Sdk/cmake/3.22.1/bin:$PATH
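
Confirm the right CMake is now picked up:

which cmake
cmake --version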

2. Clone the Model Locally and Run the Conversion

You can run this from the mlc-llm repo or from your own working directory; all platforms can share the same compiled/quantized weights. See the Compile Command Specification for details on convert_weight.
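
The command below expects the original weights under ./dist/models/. A sketch of fetching them with git-lfs (the Hugging Face repo id is an assumption; substitute your actual source):

mkdir -p dist/models && cd dist/models
git lfs install
git clone https://huggingface.co/internlm/internlm2-1_8b   # assumed repo id
cd ../..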

mlc_llm convert_weight ./dist/models/internlm2-1_8b/ \
    --quantization q4f16_1 \
    -o dist/internlm2-1_8b-q4f16_1-MLC

If you see the error No such file or directory: 'nvcc':

export PATH=/usr/local/cuda-12/bin:$PATH
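
Then confirm nvcc resolves:

which nvcc
nvcc --version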

3. Generate the Config (May Need Adjustment)

Use mlc_llm gen_config to generate mlc-chat-config.json and process the tokenizers. See the Compile Command Specification for details on gen_config.

mlc_llm gen_config ./dist/models/internlm2-1_8b/ \
    --quantization q4f16_1 --conv-template chatml  \
    -o dist/internlm2-1_8b-q4f16_1-MLC
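
Since this step may need adjustment, it is worth eyeballing what gen_config wrote, especially the conversation template (field names assumed from the mlc-chat-config.json format):

grep -E '"(model_type|quantization|conv_template)"' dist/internlm2-1_8b-q4f16_1-MLC/mlc-chat-config.json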

4. Compile to a Binary (for Testing)

mlc_llm compile ./dist/internlm2-1_8b-q4f16_1-MLC/mlc-chat-config.json \
    --device cuda -o dist/libs/internlm2-1_8b-q4f16_1-MLC-cuda.so

5. Test That the Compiled Model Behaves as Expected

(what runs on the phone behaves much like this test)

from mlc_llm import MLCEngine

# Create the engine from the quantized weights and the compiled model library
engine = MLCEngine(
    model="./dist/internlm2-1_8b-q4f16_1-MLC",
    model_lib="./dist/libs/internlm2-1_8b-q4f16_1-MLC-cuda.so",
)

# Stream a chat completion through the OpenAI-compatible API
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "你是谁?"}],
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print("\n")
engine.terminate()

6. Upload to Hugging Face

Reference: https://huggingface.co/timws/internlm2-1_8b-q4f16_1-MLC
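
One way to publish the converted weights is huggingface-cli (a sketch; replace the repo id with one under your own account, and log in first):

pip install -U huggingface_hub
huggingface-cli login
huggingface-cli upload timws/internlm2-1_8b-q4f16_1-MLC ./dist/internlm2-1_8b-q4f16_1-MLC .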

Package and Run

1. Edit the Config File mlc-package-config.json

For reference:

{
    "device": "android",
    "model_list": [
        {
            "model": "HF://mlc-ai/gemma-2b-it-q4f16_1-MLC",
            "model_id": "gemma-2b-q4f16_1-MLC",
            "estimated_vram_bytes": 3980990464
        },
        {
            "model": "HF://timws/internlm2-1_8b-q4f16_1-MLC",
            "model_id": "internlm2-1_8b-q4f16_1-MLC",
            "estimated_vram_bytes": 3980990464
        }
    ]
}

2. Run the Package Command

mlc_llm package
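
mlc_llm package reads mlc-package-config.json from the current directory (android/MLCChat here) and writes its output under dist/. A quick look at what it produced (the exact layout, e.g. a dist/lib/mlc4j library project, is an assumption based on the MLC docs):

find dist -maxdepth 2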

3. Build and Run

Open ./android/MLCChat in Android Studio.
Click “Build → Make Project” to build.
Connect the phone over Wi-Fi debugging (see the adb sketch after this list).
Click “Run → Run ‘app’” to deploy and debug.
The app needs external network access when running on the phone.
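
Wi-Fi debugging can also be driven from the command line with adb; the addresses and ports below are placeholders, shown on the phone under Developer options → Wireless debugging:

adb pair 192.168.2.50:37831     # enter the pairing code shown on the phone
adb connect 192.168.2.50:40001  # the separate connection port
adb devices                     # the phone should now be listed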
