Tutorial: Deploying DeepSeek-R1 on the RK3588 Android Platform

Development Environment

  • Ubuntu 20.04
  • Python 3.8
  • RK3588 or RK3576 development board

Setting Up the RKNN Environment on Ubuntu

Download rknn-llm:

 git clone https://github.com/airockchip/rknn-llm.git

Follow the doc/Rockchip_RKLLM_SDK_CN_1.1.0.pdf document in the repository to set up the environment.

RKLLM-Toolkit Installation
Install the miniforge3 tool

Check whether miniforge3 is installed by querying the conda version; if conda is already installed, you can skip this subsection.

conda -V

# "conda: command not found" means conda is not installed

# Otherwise it prints the version, e.g. conda 23.9.0

  • Download the miniforge3 installer
wget -c https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh
  • Install miniforge3
chmod 777 Miniforge3-Linux-x86_64.sh
./Miniforge3-Linux-x86_64.sh
Create the RKLLM-Toolkit Conda environment
  • Enter the Conda base environment
source ~/miniforge3/bin/activate
# (base) xxx@xxx-pc:~$
  • Create a Conda environment named RKLLM-Toolkit with Python 3.8 (the recommended version)
conda create -n RKLLM-Toolkit python=3.8
  • Activate the RKLLM-Toolkit Conda environment
conda activate RKLLM-Toolkit
# (RKLLM-Toolkit) xxx@xxx-pc:~$
Install RKLLM-Toolkit

In the RKLLM-Toolkit Conda environment, install the provided toolchain wheel directly with pip; during installation, pip automatically downloads the dependencies that RKLLM-Toolkit requires.

# The .whl path below is inside the rknn-llm repository downloaded earlier
pip3 install 1.1.4/rkllm-1.1.4/rkllm-toolkit/packages/rkllm_toolkit-1.1.4-cp38-cp38-linux_x86_64.whl

If the following commands run without errors, the installation succeeded:

(RKLLM-Toolkit) xxx@sys2206:~/temp/SDK$ python
Python 3.8.20 | packaged by conda-forge | (default, Sep 30 2024, 17:52:49)
[GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from rkllm.api import RKLLM
INFO: Note: NumExpr detected 64 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
INFO: NumExpr defaulting to 8 threads.
>>>

Converting DeepSeek-R1-1.5B from HuggingFace to an RKLLM Model

  • Download the DeepSeek-R1-Distill-Qwen-1.5B model
    Download every file in the repository to a local directory named DeepSeek-R1-Distill-Qwen-1.5B.
    Other models work the same way; download the corresponding model from the same source.
  • Write the conversion script transform.py and save it in the DeepSeek-R1-Distill-Qwen-1.5B directory

See doc/Rockchip_RKLLM_SDK_CN_1.1.0.pdf for details on the conversion API.

from rkllm.api import RKLLM
from datasets import load_dataset
from transformers import AutoTokenizer
from tqdm import tqdm
import torch
from torch import nn
import os
# os.environ['CUDA_VISIBLE_DEVICES']='1'

modelpath = '.'
llm = RKLLM()

# Load model
# Use 'export CUDA_VISIBLE_DEVICES=2' to specify GPU device
# options ['cpu', 'cuda']
ret = llm.load_huggingface(model=modelpath, model_lora = None, device='cpu')
# ret = llm.load_gguf(model = modelpath)
if ret != 0:
    print('Load model failed!')
    exit(ret)

# Build model
dataset = "./data_quant.json"
# JSON file format; note that the prompt must be included in the input, like this:
# [{"input":"Human: 你好!\nAssistant: ", "target": "你好!我是人工智能助手KK!"},...]

qparams = None
# qparams = 'gdq.qparams' # Use extra_qparams
ret = llm.build(do_quantization=True, optimization_level=1, quantized_dtype='w8a8',
                quantized_algorithm='normal', target_platform='rk3588', num_npu_core=3, extra_qparams=qparams, dataset=dataset)

#ret = llm.build(do_quantization=True, optimization_level=1, quantized_dtype='w8a8',
#                quantized_algorithm='normal', target_platform='rk3576', num_npu_core=2, extra_qparams=qparams, dataset=dataset)

if ret != 0:
    print('Build model failed!')
    exit(ret)

# Evaluate Accuracy
def eval_wikitext(llm):
    seqlen = 512
    tokenizer = AutoTokenizer.from_pretrained(
        modelpath, trust_remote_code=True)
    # Dataset download link:
    # https://huggingface.co/datasets/Salesforce/wikitext/tree/main/wikitext-2-raw-v1
    testenc = load_dataset(
        "parquet", data_files='./wikitext/wikitext-2-raw-1/test-00000-of-00001.parquet', split='train')
    testenc = tokenizer("\n\n".join(
        testenc['text']), return_tensors="pt").input_ids
    nsamples = testenc.numel() // seqlen
    nlls = []
    for i in tqdm(range(nsamples), desc="eval_wikitext: "):
        batch = testenc[:, (i * seqlen): ((i + 1) * seqlen)]
        inputs = {"input_ids": batch}
        lm_logits = llm.get_logits(inputs)
        if lm_logits is None:
            print("get logits failed!")
            return
        shift_logits = lm_logits[:, :-1, :]
        shift_labels = batch[:, 1:].to(lm_logits.device)
        loss_fct = nn.CrossEntropyLoss().to(lm_logits.device)
        loss = loss_fct(
            shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
        neg_log_likelihood = loss.float() * seqlen
        nlls.append(neg_log_likelihood)
    ppl = torch.exp(torch.stack(nlls).sum() / (nsamples * seqlen))
    print(f'wikitext-2-raw-1-test ppl: {round(ppl.item(), 2)}')

# eval_wikitext(llm)


# Chat with model
messages = "<|im_start|>system You are a helpful assistant.<|im_end|><|im_start|>user你好!\n<|im_end|><|im_start|>assistant"
kwargs = {"max_length": 128, "top_k": 1, "top_p": 0.8,
          "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
# print(llm.chat_model(messages, kwargs))


# Export rkllm model
ret = llm.export_rkllm("./deepseek-r1.rkllm")
if ret != 0:
    print('Export model failed!')
    exit(ret)
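For reference, the (commented-out) eval_wikitext helper above computes perplexity in the standard sliding-window way: each 512-token window contributes its mean cross-entropy loss scaled by the window length, and the total is normalized by the number of scored tokens:

```latex
\mathrm{PPL}
= \exp\!\left( \frac{\sum_{i=1}^{N} \ell_i \cdot s}{N \cdot s} \right)
= \exp\!\left( \frac{1}{N} \sum_{i=1}^{N} \ell_i \right)
```

where \( s = 512 \) is the window length, \( N \) the number of windows, and \( \ell_i \) the mean cross-entropy of window \( i \) (the code multiplies the mean loss by the full window length, a common approximation given the one-token label shift). A small rise in perplexity after w8a8 quantization indicates limited accuracy loss.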

  • Write the quantization calibration dataset data_quant.json
[{"input":"Human: 你好!\nAssistant: ", "target": "你好!我是人工智能助手!"}]

Save it in the DeepSeek-R1-Distill-Qwen-1.5B directory.
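A single calibration sample is enough for a smoke test, but quantization calibration generally benefits from more, workload-representative samples. A minimal sketch that writes data_quant.json in the expected format (the pairs list here is made up for illustration):

```python
import json

# Hypothetical prompt/response pairs; replace with data that is
# representative of your real workload for better calibration.
pairs = [
    ("Human: 你好!\nAssistant: ", "你好!我是人工智能助手!"),
    ("Human: What is 2 + 2?\nAssistant: ", "2 + 2 equals 4."),
]

# Each entry embeds the prompt in "input" and the reply in "target".
samples = [{"input": p, "target": t} for p, t in pairs]

with open("data_quant.json", "w", encoding="utf-8") as f:
    json.dump(samples, f, ensure_ascii=False)
```

Note the prompt format ("Human: ... Assistant: ") must match what the model will see at inference time.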

  • Run the conversion script transform.py
(RKLLM-Toolkit) xxx@sys2206:~/temp/DeepSeek-R1-Distill-Qwen-1.5B$ python transform.py
INFO: Note: NumExpr detected 64 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
INFO: NumExpr defaulting to 8 threads.
INFO: rkllm-toolkit version: 1.1.4
The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 8648.05it/s]
Extracting data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1655.86it/s]
Generating train split: 1 examples [00:00, 32.01 examples/s]
Optimizing model: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 28/28 [00:21<00:00,  1.31it/s]
Building model: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 399/399 [00:12<00:00, 31.35it/s]
WARNING: The bos token has two ids: 151646 and 151643, please ensure that the bos token ids in config.json and tokenizer_config.json are consistent!
INFO: The token_id of bos is set to 151646
INFO: The token_id of eos is set to 151643
INFO: The token_id of pad is set to 151643
Converting model: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 339/339 [00:00<00:00, 2160895.22it/s]
INFO: Exporting the model, please wait ....
[=================================================>] 597/597 (100%)
INFO: Model has been saved to ./deepseek-r1.rkllm!

On success this produces the converted model: deepseek-r1.rkllm

Running the Demo on the RK3588

Use DeepSeek-R1-Distill-Qwen-1.5B_Demo for testing and verification.

  • DeepSeek-R1-Distill-Qwen-1.5B_Demo code path
cd rknn-llm/examples/DeepSeek-R1-Distill-Qwen-1.5B_Demo/deploy
  • Build DeepSeek-R1-Distill-Qwen-1.5B_Demo

This example builds the Android version; download and unpack the NDK required for the build:

wget https://dl.google.com/android/repository/android-ndk-r21e-linux-x86_64.zip
unzip android-ndk-r21e-linux-x86_64.zip

Edit the build script to point ANDROID_NDK_PATH at the NDK:

vim build-android.sh
ANDROID_NDK_PATH=~/android-ndk-r21e

Run build-android.sh to start the build:

(RKLLM-Toolkit) xxx@sys2206:~/temp/SDK/rknn-llm/examples/DeepSeek-R1-Distill-Qwen-1.5B_Demo/deploy$ ./build-android.sh
-- Android: Targeting API '23' with architecture 'arm64', ABI 'arm64-v8a', and processor 'aarch64'
-- Android: Selected unified Clang toolchain
-- The C compiler identification is Clang 9.0.9
-- The CXX compiler identification is Clang 9.0.9
-- Check for working C compiler: /home/xxx/android-ndk-r21e/toolchains/llvm/prebuilt/linux-x86_64/bin/clang
-- Check for working C compiler: /home/xxx/android-ndk-r21e/toolchains/llvm/prebuilt/linux-x86_64/bin/clang -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /home/xxx/android-ndk-r21e/toolchains/llvm/prebuilt/linux-x86_64/bin/clang++
-- Check for working CXX compiler: /home/xxx/android-ndk-r21e/toolchains/llvm/prebuilt/linux-x86_64/bin/clang++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found OpenMP_C: -fopenmp=libomp (found version "3.1")
-- Found OpenMP_CXX: -fopenmp=libomp (found version "3.1")
-- Found OpenMP: TRUE (found version "3.1")
-- Configuring done
-- Generating done
-- Build files have been written to: /home/xxx/temp/SDK/rknn-llm/examples/DeepSeek-R1-Distill-Qwen-1.5B_Demo/deploy/build/build_android_arm64-v8a_Release
Scanning dependencies of target llm_demo
[ 50%] Building CXX object CMakeFiles/llm_demo.dir/src/llm_demo.cpp.o
[100%] Linking CXX executable llm_demo
[100%] Built target llm_demo
[100%] Built target llm_demo
Install the project...
-- Install configuration: "Release"
-- Installing: /home/xxx/temp/SDK/rknn-llm/examples/DeepSeek-R1-Distill-Qwen-1.5B_Demo/deploy/install/demo_Android_arm64-v8a/./llm_demo
-- Installing: /home/xxx/temp/SDK/rknn-llm/examples/DeepSeek-R1-Distill-Qwen-1.5B_Demo/deploy/install/demo_Android_arm64-v8a/lib/librkllmrt.so

After building, you must manually copy libomp.so from the NDK into install/demo_Android_arm64-v8a/lib/; otherwise the demo may fail at runtime with an error that libomp.so cannot be found.

(RKLLM-Toolkit) xxx@sys2206:~/temp/SDK/rknn-llm/examples/DeepSeek-R1-Distill-Qwen-1.5B_Demo/deploy$ cp ~/android-ndk-r21e/toolchains/llvm/prebuilt/linux-x86_64/lib64/clang/9.0.9/lib/linux/aarch64/libomp.so install/demo_Android_arm64-v8a/lib/

Package the build output so it can be pushed to the device:

(RKLLM-Toolkit) xxx@sys2206:~/temp/SDK/rknn-llm/examples/DeepSeek-R1-Distill-Qwen-1.5B_Demo/deploy$ tar -zcvf install/demo_Android_arm64-v8a.tar.gz install/demo_Android_arm64-v8a/
  • Run llm_demo
# push deepseek-r1.rkllm to device
adb push ~/206_samba/temp/DeepSeek-R1-Distill-Qwen-1.5B/deepseek-r1.rkllm data/
# push install dir to device
adb push ~/206_samba/temp/SDK/rknn-llm/examples/DeepSeek-R1-Distill-Qwen-1.5B_Demo/deploy/install/demo_Android_arm64-v8a.tar.gz data/
# Unzip the demo
xxx@xxx-pc:~/work/0_download$ adb shell
rk3588_u_evb7:/ # cd data
rk3588_u_evb7:/data # tar -zxvf demo_Android_arm64-v8a.tar.gz
rk3588_u_evb7:/data # cd install/demo_Android_arm64-v8a/
# Run Demo
rk3588_u_evb7:/data/install/demo_Android_arm64-v8a # export LD_LIBRARY_PATH=./lib
rk3588_u_evb7:/data/install/demo_Android_arm64-v8a # taskset f0 ./llm_demo /data/deepseek-r1.rkllm  2048 4096

# Running result                                                          
rkllm init start
rkllm init success
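The taskset f0 prefix in the run command pins llm_demo to the big cores: 0xf0 is a CPU affinity bitmask selecting CPUs 4-7, which are the four Cortex-A76 cores on RK3588 (CPUs 0-3 are the Cortex-A55 little cores, as the policy0/policy4 frequencies in the log below also suggest). A quick sketch of how a hex mask maps to CPU indices:

```python
def cpus_in_mask(mask: int):
    """Return the CPU indices selected by a taskset affinity bitmask."""
    return [cpu for cpu in range(mask.bit_length()) if mask >> cpu & 1]

print(cpus_in_mask(0xF0))  # [4, 5, 6, 7] -> the Cortex-A76 big cores
```

Pinning to the big cores avoids the scheduler migrating the token-generation threads onto the slower A55 cores.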

Start using DeepSeek

rkllm init start
rkllm init success

********************** Enter the number of a question below to get its answer, or type your own input **********************

[0] A cage holds some chickens and rabbits. Counting them, there are 14 heads and 38 legs in total. How many chickens and how many rabbits are there?
[1] 28 children stand in a row. Counting from the left, Xuedou is 10th. Counting from the right, what position is he in?

*************************************************************************


user:
robot: )

</think>

It seems like you might be asking about something related to technology, but the question is a bit unclear. Could you clarify or provide more details? Are you asking about:

1. A specific piece of technology?
2. How to use a particular tool or software?
3. Information about a topic I can explain?

Let me know how I can assist!

user: A cage holds some chickens and rabbits. Counting them, there are 14 heads and 38 legs in total. How many chickens and how many rabbits are there?
robot: <think>
First, let the number of chickens be x and the number of rabbits be y.

From the problem, each chicken has 1 head and 2 legs, and each rabbit also has 1 head but 4 legs. So the total number of heads is x + y = 14.

Next, consider the total number of legs. The chickens contribute 2x legs and the rabbits contribute 4y legs, so the total is 2x + 4y = 38.

We now have two equations:
1. x + y = 14
2. 2x + 4y = 38

Solving this system gives the values of x and y. From the first equation, x = 14 - y. Substituting this expression into the second equation:
2(14 - y) + 4y = 38
Expanding and simplifying:
28 - 2y + 4y = 38
Combining like terms:
2y = 10
which gives y = 5.

Then substituting the value of y into x = 14 - y gives x = 9.</think>

To solve this problem, let the number of chickens be \( x \) and the number of rabbits be \( y \). The conditions in the problem give the following equations:

1. **Total heads**: every chicken and rabbit has one head, so:
   \[
   x + y = 14
   \]

2. **Total legs**: chickens have two legs and rabbits have four, so:
   \[
   2x + 4y = 38
   \]

Next, we solve this system to find \( x \) and \( y \).

From the first equation:
\[
x = 14 - y
\]

Substituting this expression into the second equation:
\[
2(14 - y) + 4y = 38
\]
Expanding and simplifying:
\[
28 - 2y + 4y = 38 \\
2y = 10 \\
y = 5
\]

Then, substituting \( y = 5 \) into \( x = 14 - y \):
\[
x = 14 - 5 = 9
\]

**Final answer:**
There are \(\boxed{9}\) chickens and \(\boxed{5}\) rabbits.

user: 28 children stand in a row. Counting from the left, Xuedou is 10th. Counting from the right, what position is he in?
robot: <think>
First, there are 28 children in total.

Counting from the left, the 10th is Xuedou.

So the number of remaining positions is 28 - 10 = 18.

Counting from the right, position 18 is Xuedou's position.
</think>

**Solution:**

We have 28 children standing in a row. Counting from the left, the 10th is Xuedou. We need to determine his position when counting from the right.

**Steps:**

1. **Compute the number of remaining positions:**

   There are 28 children in total.

   Since the 10th from the left is Xuedou, the number of remaining positions is:

   \[
   28 - 10 = 18
   \]

2. **Determine the position counting from the right:**

   Counting the remaining 18 children from the right, Xuedou is in position 18.

**Final answer:**

RK3588 Resource Usage

Running DeepSeek-R1-Distill-Qwen-1.5B_Demo

Memory:
2,243,776K: llm_demo (pid 2132)
------------------------------------------------------
policy0 frequency: 1800000 Hz
policy4 frequency: 2256000 Hz
policy6 frequency: 2256000 Hz
dmc frequency: 2112000000 Hz
fb000000.gpu frequency: 1000000000 Hz
fdab0000.npu frequency: 1000000000 Hz
fdd90000.vop frequency: 850000000 Hz
soc-thermal: 50 °C
bigcore0-thermal: 51 °C
bigcore1-thermal: 51 °C
littlecore-thermal: 50 °C
center-thermal: 50 °C
gpu-thermal: 49 °C
npu-thermal: 49 °C
test_battery: 2 °C
GPU LOAD: 0%
NPU load:  Core0: 77%, Core1: 77%, Core2: 77%,
num of scheduler = 3
================= load ==================
scheduler[0]: rga3
         load = 0%
-----------------------------------
scheduler[1]: rga3
         load = 0%
-----------------------------------
scheduler[2]: rga2
         load = 0%
-----------------------------------
=========================================
<session>  <status>  <tgid>  <process>
cpu0: 8%
cpu1: 3%
cpu2: 1%
cpu3: 1%
cpu4: 100%
cpu5: 20%
cpu6: 19%
cpu7: 19%
------------------------------------------------------
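The per-core NPU load line above can be polled for monitoring (on RK3588 the RKNPU driver exposes it via debugfs, typically /sys/kernel/debug/rknpu/load — path is an assumption, verify on your board). A small parser sketch, assuming the format shown in the log:

```python
import re

def parse_npu_load(line: str) -> dict:
    """Parse per-core load percentages from an rknpu load line,
    e.g. 'NPU load:  Core0: 77%, Core1: 77%, Core2: 77%,'."""
    return {f"Core{m.group(1)}": int(m.group(2))
            for m in re.finditer(r"Core(\d+):\s*(\d+)%", line)}

line = "NPU load:  Core0: 77%, Core1: 77%, Core2: 77%,"
print(parse_npu_load(line))  # {'Core0': 77, 'Core1': 77, 'Core2': 77}
```

The roughly equal ~77% load across all three cores matches the num_npu_core=3 setting used when building the model with llm.build().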