Operating system
- Orange Pi 5's official Ubuntu 22 image (the non-GNOME build; on the GNOME one, Panfrost apparently prevents OpenCL from being used). Most drivers come preconfigured, which saves a lot of fiddling.
RK3588 GPU at a glance
- Arm Mali-G610
- OpenGL ES 1.1/2.0/3.2, OpenCL 2.2, Vulkan 1.2
- 2D hardware acceleration (librga)
- 600 MHz, ~610 GFLOPS
- glmark2-es can benchmark it; scores around 800+
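A quick way to reproduce a score like that (the package/binary name below is the usual Ubuntu one; depending on your display stack you may need the `-wayland` or `-drm` variant instead):

```shell
# Install glmark2's OpenGL ES variant and run it; the last line of output is the score.
sudo apt-get install -y glmark2-es2
glmark2-es2
```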
Reference links
- https://github.com/apache/tvm
- https://tvm.apache.org/docs/install/index.html
- https://zhuanlan.zhihu.com/p/584577656
- https://zhuanlan.zhihu.com/p/584849555
Setup steps
1. Build the C++ part
- Clone the TVM source onto the board; note that the submodules need to be cloned as well
- Then install the base dependencies:
sudo apt-get update
sudo apt-get install -y python3 python3-dev python3-setuptools gcc libtinfo-dev zlib1g-dev build-essential cmake libedit-dev libxml2-dev
- Install LLVM 14 (note: it must not be too new, or TVM fails to compile)
- apt.llvm.org provides an automated install script; running
sudo ./llvm.sh 14 all
completes the installation
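The full fetch-and-run sequence for that script looks like this (the URL and usage come from apt.llvm.org itself; run it on the board):

```shell
# Download the official apt.llvm.org helper and install LLVM 14 with all components.
wget https://apt.llvm.org/llvm.sh
chmod +x llvm.sh
sudo ./llvm.sh 14 all
```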
- Back in the source tree:
mkdir build
cp cmake/config.cmake build
- Edit build/config.cmake
- set USE_OPENCL to ON, to drive the GPU
- set USE_LLVM to ON, so TVM can read key CPU information
- set USE_LIBBACKTRACE to OFF; otherwise a line-ending issue makes the build fail
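Those three edits can be applied non-interactively with sed. The option names are real TVM config.cmake settings, but the default values assumed below (OFF/OFF/AUTO) come from the TVM template of that era and may differ in your checkout; the sketch works on a scratch stand-in file, so point the same sed lines at your real build/config.cmake in practice.

```shell
# Demo on a scratch stand-in for build/config.cmake (in a real checkout you
# would have done: mkdir build && cp cmake/config.cmake build).
cd "$(mktemp -d)"
mkdir build
printf 'set(USE_OPENCL OFF)\nset(USE_LLVM OFF)\nset(USE_LIBBACKTRACE AUTO)\n' > build/config.cmake

# Flip the three options discussed above.
sed -i 's/set(USE_OPENCL OFF)/set(USE_OPENCL ON)/' build/config.cmake
sed -i 's/set(USE_LLVM OFF)/set(USE_LLVM ON)/' build/config.cmake
sed -i 's/set(USE_LIBBACKTRACE AUTO)/set(USE_LIBBACKTRACE OFF)/' build/config.cmake
cat build/config.cmake
```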
cmake -DOpenCL_LIBRARIES=<path-to-libmali.so> ..
- 3rdparty only ships the OpenCL headers, so the location of Arm's OpenCL driver has to be passed in explicitly
- it usually lives at
/usr/lib/aarch64-linux-gnu/libmali.so
- Run make and wait for the compilation to finish; then
sudo make install
installs it onto the system
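Putting the build steps together, a sketch of the whole C++ build (the libmali path is the common default mentioned above, and the -j flag is an addition of mine; adjust both to your board):

```shell
# From the tvm source root, after editing build/config.cmake as described.
cd build
cmake -DOpenCL_LIBRARIES=/usr/lib/aarch64-linux-gnu/libmali.so ..
make -j"$(nproc)"   # this takes a while on the board
sudo make install
```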
2. Install the Python bindings
In the python folder of the source tree, run python setup.py install
and that's it. Since this also installs dependencies, a Baidu package mirror is recommended; on the Orange Pi many of the dependencies download much faster from Baidu's mirror.
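A quick sanity check that the bindings landed (this prints the TVM version and probes the OpenCL device; `tvm.opencl(0).exist` is TVM's stock device-existence check, and it only returns True once the Mali driver is actually usable):

```shell
python3 -c "import tvm; print(tvm.__version__)"
python3 -c "import tvm; print(tvm.opencl(0).exist)"
```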
3. Test code
TVM automatically downloads some cached files from the internet, so the network needs to be good
The code here is copied straight from the Zhihu post linked above
wget https://github.com/onnx/models/raw/b9a54e89508f101a1611cd64f4ef56b9cb62c7cf/vision/classification/resnet/model/resnet50-v2-7.onnx
wget https://s3.amazonaws.com/model-server/inputs/kitten.jpg
import onnx
import numpy as np
from scipy.special import softmax
from PIL import Image
import tvm
import tvm.relay as relay
import tvm.contrib.graph_executor as runtime
from tvm.contrib.download import download_testdata
import logging
logging.basicConfig(level=logging.INFO)
#ONNX model path
model_path = "resnet50-v2-7.onnx"
onnx_model = onnx.load(model_path)
np.random.seed(0)
# img_url = "https://s3.amazonaws.com/model-server/inputs/kitten.jpg"
# img_path = download_testdata(img_url, "imagenet_cat.png", module="data")
img_path = "kitten.jpg"
# Resize it to 224x224
resized_image = Image.open(img_path).resize((224, 224))
img_data = np.asarray(resized_image).astype("float32")
# Our input image is in HWC layout while ONNX expects CHW input, so convert the array
img_data = np.transpose(img_data, (2, 0, 1))
# Normalize according to the ImageNet input specification
imagenet_mean = np.array([0.485, 0.456, 0.406]).reshape((3, 1, 1))
imagenet_stddev = np.array([0.229, 0.224, 0.225]).reshape((3, 1, 1))
norm_img_data = (img_data / 255 - imagenet_mean) / imagenet_stddev
# Add the batch dimension, as we are expecting 4-dimensional input: NCHW.
img_data = np.expand_dims(norm_img_data, axis=0)
target = tvm.target.mali(model='rk3588')
target_host = tvm.target.arm_cpu(model='rk3588')
input_name = "data"
shape_dict = {input_name: img_data.shape}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=tvm.target.Target(target, host=target_host), params=params)
dev = tvm.device(str(target), 0)
module = runtime.GraphModule(lib["default"](dev))
dtype = "float32"
module.set_input(input_name, img_data)
ftimer = module.module.time_evaluator("run", dev, number=1, repeat=30)
prof_res = np.array(ftimer().results) * 1000 # multiply 1000 for converting to millisecond
print(
"%-20s %-19s (%s)" % ("resnet50", "%.2f ms" % np.mean(prof_res), "%.2f ms" % np.std(prof_res))
)
module.run()
output_shape = (1, 1000)
tvm_output = module.get_output(0, tvm.nd.empty(output_shape)).numpy()
# Download a list of labels
labels_url = "https://s3.amazonaws.com/onnx-model-zoo/synset.txt"
labels_path = download_testdata(labels_url, "synset.txt", module="data")
with open(labels_path, "r") as f:
    labels = [l.rstrip() for l in f]
# Open the output and read the output tensor
scores = softmax(tvm_output)
scores = np.squeeze(scores)
ranks = np.argsort(scores)[::-1]
for rank in ranks[0:5]:
    print("class='%s' with probability=%f" % (labels[rank], scores[rank]))
Results
- The program compiles operators for both the CPU and the GPU; if LLVM is not installed it errors out with
- Cannot parse Arm-based target features without LLVM support
- End to end, ResNet-50 comes in at roughly 300 ms, which is quite good for an embedded board