A First Look at ARM NEON Programming: A Simple BGR888-to-YUV444 Example in Detail

This article shows how to convert BGR888 images to YUV444 with the ARM NEON SIMD instruction set. By comparing floating-point and integer arithmetic, it demonstrates how NEON instructions speed up the processing, walks through the vectorization in detail, and gives conversion code written with NEON intrinsics.

For the original post, please click here.

Heads up: I'm also looking for development positions in heterogeneous computing / high-performance computing / CUDA / ARM optimization.

I've recently been learning ARM's SIMD instruction set, NEON, and found that material on it is surprisingly scarce, so I thought I'd drum up a little interest for NEON and share some notes in this introductory post.

When learning a new technology there are always a few classics you can't get around, and for NEON that essential manual is ARM's official NEON Programmer's Guide (hereafter "the Guide"). Don't be intimidated by its four-hundred-plus pages: only about a hundred of them are the main text and the rest is reference material, so reading it through is quite manageable. For that reason I'm not going to pad this post by translating the Guide. Instead, I'd like to use a simple example to show how I rewrote an ordinary, non-NEON program into a NEON program, and how I debugged and tuned it along the way. Of course, as an ARM beginner who has only been working with NEON for about two weeks, mistakes are inevitable, and corrections are very welcome.

BGR888ToYUV444

Among the many parallel patterns (Map, Reduce, Scatter, Stencil, and so on), the simplest is probably Map, so I picked a Map example: converting BGR888 to YUV444. Both are color-space formats. The former gives each pixel three color channels, B, G and R, 8 bits per channel, laid out in memory in the order B G R from the most significant byte to the least. The latter gives each pixel the three channels Y, U and V, also 8 bits each (the "444" only says the Y:U:V sampling ratio is 4:4:4; other subsampling schemes include YUV422 and YUV420), and we likewise assume it is laid out in memory in the order V U Y from the most significant byte to the least. How do we convert BGR888 to YUV444? From the conversion formulas on Wikipedia we can write the following code (clearly a one-to-one mapping, a textbook Map):

void BGR888ToYUV444(unsigned char *yuv, unsigned char *bgr, int pixel_num)
{
    int i;
    for (i = 0; i < pixel_num; ++i) {
        uint8_t r = bgr[i * 3];
        uint8_t g = bgr[i * 3 + 1];
        uint8_t b = bgr[i * 3 + 2];

        uint8_t y = 0.299 * r + 0.587 * g + 0.114 * b;
        uint8_t u = -0.169 * r - 0.331 * g + 0.5 * b + 128;
        uint8_t v = 0.5 * r - 0.419 * g - 0.081 * b + 128;

        yuv[i * 3] = y;
        yuv[i * 3 + 1] = u;
        yuv[i * 3 + 2] = v;
    }
}

The meaning of this code is simple: we loop over all the pixels, load the B, G and R channel values from memory, do some floating-point multiplications, additions and subtractions to obtain Y, U and V, and write them to the corresponding memory. So how can NEON make this program run faster?

As mentioned above, NEON is a SIMD (Single Instruction Multiple Data) instruction set. That is, NEON can pack several source operands into a single source register, perform the same operation on all of them, and produce several destination operands; this is also known as vectorization. How much data can be packed and processed at once is therefore limited by the register width. In the ARMv7 NEON unit the register file is 1024 bits in total and can be viewed either as 16 128-bit Q-registers (quadword registers) or as 32 64-bit D-registers (doubleword registers), so the widest register is 128 bits (see Chapter 1 of the Guide). Taking the BGR888ToYUV444 function above as an example, if we do the floating-point math with 32-bit single-precision float, we can pack at most 128/32 = 4 floats into a Q-register for SIMD, producing 4 YUV pixels from 4 BGR pixels per iteration, which raises throughput and reduces the number of loop iterations.

(A careful reader may ask about double-precision double. Unfortunately, according to the Guide, NEON does not support double. You could use VFP (Vector Floating Point) instead, but VFP is not a SIMD unit.)
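To make that concrete, here is a minimal hedged sketch of the float route for the Y channel only. It assumes the samples have already been converted into four-element float buffers r4, g4 and b4 (hypothetical names, not from the original code) and processes 4 pixels per call:

#include <arm_neon.h>

// Sketch only: y4, r4, g4, b4 are float buffers of length >= 4 (hypothetical helpers).
void y_from_rgb_f32x4(float *y4, const float *r4, const float *g4, const float *b4)
{
    float32x4_t r = vld1q_f32(r4);
    float32x4_t g = vld1q_f32(g4);
    float32x4_t b = vld1q_f32(b4);

    // Y = 0.299*R + 0.587*G + 0.114*B, four pixels at a time
    float32x4_t y = vmulq_n_f32(r, 0.299f);
    y = vmlaq_n_f32(y, g, 0.587f);
    y = vmlaq_n_f32(y, b, 0.114f);

    vst1q_f32(y4, y);
}

In the real conversion you would still have to widen the uint8_t samples to float and narrow the results back, which is part of why the integer route below looks attractive.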

From Floating-Point to Integer Arithmetic

So, can we push the vectorization even further? If we go back to the Wikipedia page, we find that early SIMD processors without floating-point support used the following integer arithmetic to convert BGR to YUV:

// Refer to: https://en.wikipedia.org/wiki/YUV (Full swing for BT.601)
void BGR888ToYUV444(unsigned char *yuv, unsigned char *bgr, int pixel_num)
{
    int i;
    for (i = 0; i < pixel_num; ++i) {
        uint8_t r = bgr[i * 3];
        uint8_t g = bgr[i * 3 + 1];
        uint8_t b = bgr[i * 3 + 2];

        // 1. Multiply transform matrix (Y′: unsigned, U/V: signed)
        uint16_t y_tmp = 76 * r + 150 * g + 29 * b;
        int16_t u_tmp = -43 * r - 84 * g + 127 * b;
        int16_t v_tmp = 127 * r - 106 * g - 21 * b;

        // 2. Scale down (">>8") to 8-bit values with rounding ("+128") (Y′: unsigned, U/V: signed)
        y_tmp = (y_tmp + 128) >> 8;
        u_tmp = (u_tmp + 128) >> 8;
        v_tmp = (v_tmp + 128) >> 8;

        // 3. Add an offset to the values to eliminate any negative values (all results are 8-bit unsigned)
        yuv[i * 3] = (uint8_t) y_tmp;
        yuv[i * 3 + 1] = (uint8_t) (u_tmp + 128);
        yuv[i * 3 + 2] = (uint8_t) (v_tmp + 128);
    }
}

From this code it is easy to see that the 32-bit float arithmetic has been replaced by 16-bit additions, subtractions, multiplications and shifts. That means we can pack at most 128/16 = 8 integers into a Q-register for SIMD, producing 8 YUV pixels from 8 BGR pixels per iteration and doubling the degree of vectorization again. Integer arithmetic has another advantage: in general, integer instructions need fewer clock cycles than the corresponding floating-point instructions. So I take this integer code as the baseline and use NEON to accelerate it. (A sharp reader may have spotted the flaw in that argument: although a single integer instruction is cheaper than the corresponding floating-point one, the integer BGR888ToYUV444 carries extra shift and add overhead compared with the float version, so the instruction counts differ and the total cycle count is not necessarily lower. Good point! After reading this article, you might rewrite the float version with NEON as an exercise and compare the two to reach the final verdict.)
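As a quick sanity check on those constants (a throwaway snippet of my own, not from the original article): they are just the float weights scaled by 256, e.g. 0.299 × 256 ≈ 76 and 0.587 × 256 ≈ 150, and for a sample pixel the fixed-point Y lands within one LSB of the float Y:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    // Arbitrary sample pixel, chosen only for this spot check
    uint8_t r = 200, g = 100, b = 50;

    double   y_float = 0.299 * r + 0.587 * g + 0.114 * b;                    // 124.2
    uint16_t y_fixed = (uint16_t)((76 * r + 150 * g + 29 * b + 128) >> 8);   // 124

    printf("float Y = %.1f, fixed-point Y = %u\n", y_float, (unsigned) y_fixed);
    return 0;
}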

NEON Intrinsics

Now we can actually write some NEON code. My advice for newcomers is not to jump straight into assembly. NEON provides intrinsics, which are essentially a set of function-style APIs callable from C/C++ (declared in arm_neon.h); the compiler takes care of turning them into NEON instructions (see Chapter 4 of the Guide), which makes things easier. Here is my BGR888ToYUV444 rewritten with NEON intrinsics:

void BGR888ToYUV444(unsigned char * __restrict__ yuv, unsigned char * __restrict__ bgr, int pixel_num)
{
    const uint8x8_t u8_zero = vdup_n_u8(0);
    const uint16x8_t u16_rounding = vdupq_n_u16(128);
    const int16x8_t s16_rounding = vdupq_n_s16(128);
    const int8x16_t s8_rounding = vdupq_n_s8(128);

    int count = pixel_num / 16;

    int i;
    for (i = 0; i < count; ++i) {
        // Load bgr
        uint8x16x3_t pixel_bgr = vld3q_u8(bgr);

        uint8x8_t high_r = vget_high_u8(pixel_bgr.val[0]);
        uint8x8_t low_r = vget_low_u8(pixel_bgr.val[0]);
        uint8x8_t high_g = vget_high_u8(pixel_bgr.val[1]);
        uint8x8_t low_g = vget_low_u8(pixel_bgr.val[1]);
        uint8x8_t high_b = vget_high_u8(pixel_bgr.val[2]);
        uint8x8_t low_b = vget_low_u8(pixel_bgr.val[2]);

        int16x8_t signed_high_r = vreinterpretq_s16_u16(vaddl_u8(high_r, u8_zero));
        int16x8_t signed_low_r = vreinterpretq_s16_u16(vaddl_u8(low_r, u8_zero));
        int16x8_t signed_high_g = vreinterpretq_s16_u16(vaddl_u8(high_g, u8_zero));
        int16x8_t signed_low_g = vreinterpretq_s16_u16(vaddl_u8(low_g, u8_zero));
        int16x8_t signed_high_b = vreinterpretq_s16_u16(vaddl_u8(high_b, u8_zero));
        int16x8_t signed_low_b = vreinterpretq_s16_u16(vaddl_u8(low_b, u8_zero));

        // NOTE:
        // declaration may not appear after executable statement in block
        uint16x8_t high_y;
        uint16x8_t low_y;
        uint8x8_t scalar = vdup_n_u8(76);
        int16x8_t high_u;
        int16x8_t low_u;
        int16x8_t signed_scalar = vdupq_n_s16(-43);
        int16x8_t high_v;
        int16x8_t low_v;
        uint8x16x3_t pixel_yuv;
        int8x16_t u;
        int8x16_t v;

        // 1. Multiply transform matrix (Y′: unsigned, U/V: signed)
        high_y = vmull_u8(high_r, scalar);
        low_y = vmull_u8(low_r, scalar);

        high_u = vmulq_s16(signed_high_r, signed_scalar);
        low_u = vmulq_s16(signed_low_r, signed_scalar);

        signed_scalar = vdupq_n_s16(127);
        high_v = vmulq_s16(signed_high_r, signed_scalar);
        low_v = vmulq_s16(signed_low_r, signed_scalar);

        scalar = vdup_n_u8(150);
        high_y = vmlal_u8(high_y, high_g, scalar);
        low_y = vmlal_u8(low_y, low_g, scalar);

        signed_scalar = vdupq_n_s16(-84);
        high_u = vmlaq_s16(high_u, signed_high_g, signed_scalar);
        low_u = vmlaq_s16(low_u, signed_low_g, signed_scalar);

        signed_scalar = vdupq_n_s16(-106);
        high_v = vmlaq_s16(high_v, signed_high_g, signed_scalar);
        low_v = vmlaq_s16(low_v, signed_low_g, signed_scalar);

        scalar = vdup_n_u8(29);
        high_y = vmlal_u8(high_y, high_b, scalar);
        low_y = vmlal_u8(low_y, low_b, scalar);

        signed_scalar = vdupq_n_s16(127);
        high_u = vmlaq_s16(high_u, signed_high_b, signed_scalar);
        low_u = vmlaq_s16(low_u, signed_low_b, signed_scalar);

        signed_scalar = vdupq_n_s16(-21);
        high_v = vmlaq_s16(high_v, signed_high_b, signed_scalar);
        low_v = vmlaq_s16(low_v, signed_low_b, signed_scalar);

        // 2. Scale down (">>8") to 8-bit values with rounding ("+128") (Y′: unsigned, U/V: signed)
        // 3. Add an offset to the values to eliminate any negative values (all results are 8-bit unsigned)
        high_y = vaddq_u16(high_y, u16_rounding);
        low_y = vaddq_u16(low_y, u16_rounding);

        high_u = vaddq_s16(high_u, s16_rounding);
        low_u = vaddq_s16(low_u, s16_rounding);

        high_v = vaddq_s16(high_v, s16_rounding);
        low_v = vaddq_s16(low_v, s16_rounding);

        pixel_yuv.val[0] = vcombine_u8(vqshrn_n_u16(low_y, 8), vqshrn_n_u16(high_y, 8));

        u = vcombine_s8(vqshrn_n_s16(low_u, 8), vqshrn_n_s16(high_u, 8));
        v = vcombine_s8(vqshrn_n_s16(low_v, 8), vqshrn_n_s16(high_v, 8));

        u = vaddq_s8(u, s8_rounding);
        pixel_yuv.val[1] = vreinterpretq_u8_s8(u);

        v = vaddq_s8(v, s8_rounding);
        pixel_yuv.val[2] = vreinterpretq_u8_s8(v);

        // Store
        vst3q_u8(yuv, pixel_yuv);

        bgr += 3 * 16;
        yuv += 3 * 16;
    }

    // Handle leftovers. Note that bgr and yuv have already been advanced past the
    // pixels handled by the NEON loop, so the tail is indexed from 0 here.
    for (i = 0; i < pixel_num - count * 16; ++i) {
        uint8_t r = bgr[i * 3];
        uint8_t g = bgr[i * 3 + 1];
        uint8_t b = bgr[i * 3 + 2];

        uint16_t y_tmp = 76 * r + 150 * g + 29 * b;
        int16_t u_tmp = -43 * r - 84 * g + 127 * b;
        int16_t v_tmp = 127 * r - 106 * g - 21 * b;

        y_tmp = (y_tmp + 128) >> 8;
        u_tmp = (u_tmp + 128) >> 8;
        v_tmp = (v_tmp + 128) >> 8;

        yuv[i * 3] = (uint8_t) y_tmp;
        yuv[i * 3 + 1] = (uint8_t) (u_tmp + 128);
        yuv[i * 3 + 2] = (uint8_t) (v_tmp + 128);
    }
}

Most of what this function uses are very common NEON intrinsics; I suggest looking them up in Appendix D of the Guide to get familiar with them. Here I'll only comment on a few points:

I use vld3q_u8 to load 16 pixels (that is, the uint8_t values of the B, G and R channels) from memory in one go, placing the 16 values of each channel into its own Q-register (so three Q-registers in total: one holding 16 B values, one 16 G values and one 16 R values); the vst3q_u8 store works the same way. This makes full use of the 128-bit width and keeps the number of memory accesses as low as possible. Note, however, that the arithmetic afterwards is no more vectorized than before: we can still only do 8 16-bit integer operations at a time, so for the arithmetic part the ideal speedup is 8, not 16. (Of course, how much data to load per iteration is a design choice with no single right answer; feel free to try other schemes.)
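For anyone new to the structure load/store instructions, here is a tiny hedged illustration (a toy buffer of my own, not from the original article) of how vld3q_u8 de-interleaves the data and vst3q_u8 interleaves it again:

#include <arm_neon.h>
#include <stdint.h>

void deinterleave_demo(void)
{
    // 48 consecutive bytes 0..47, standing in for 16 interleaved 3-byte pixels
    uint8_t toy[48];
    for (int k = 0; k < 48; ++k) {
        toy[k] = (uint8_t) k;
    }

    // vld3q_u8 de-interleaves them into three 16-lane vectors:
    //   lanes.val[0] = bytes 0, 3, 6, ..., 45  (what the scalar code reads as R)
    //   lanes.val[1] = bytes 1, 4, 7, ..., 46  (G)
    //   lanes.val[2] = bytes 2, 5, 8, ..., 47  (B)
    uint8x16x3_t lanes = vld3q_u8(toy);

    // vst3q_u8 re-interleaves them, writing the original 48-byte pattern back
    vst3q_u8(toy, lanes);
}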

When the pixel count is not a multiple of 16, the NEON code above leaves between 1 and 15 pixels unprocessed. I call these leftovers and simply run the earlier scalar for loop on them. If leftovers account for a large share of the work, there are smarter approaches; see Section 6.2, "Handling non-multiple array lengths", in the Guide.
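One such approach, sketched below under my own assumptions rather than quoted from the Guide, is to overlap: as long as at least one full vector has been processed, cover the tail with one extra 16-pixel NEON iteration that revisits a few already-converted pixels, which is harmless here because this Map writes the same result for the same input.

// Hypothetical helper: one 16-pixel iteration of the NEON loop body above.
void convert16_neon(unsigned char *yuv, const unsigned char *bgr);

// Hedged sketch: handle the tail with a single overlapping vector iteration
// instead of a scalar loop.
void handle_tail_by_overlap(unsigned char *yuv, unsigned char *bgr, int pixel_num)
{
    int count = pixel_num / 16;
    int rem = pixel_num % 16;
    if (count > 0 && rem != 0) {
        int start = pixel_num - 16;   // final 16-pixel window, overlapping the previous one
        convert16_neon(yuv + 3 * start, bgr + 3 * start);
    }
}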

To explain a few of these intrinsics, I've pulled part of the Y channel computation together below and annotated it; most of the other operations are analogous:

// Duplicate the uint16_t value 128 into all 8 lanes of a Q-register; the corresponding C variable is u16_rounding
const uint16x8_t u16_rounding = vdupq_n_u16(128);

// Duplicate the uint8_t value 76 into all 8 lanes of a D-register; the corresponding C variable is scalar
uint8x8_t scalar = vdup_n_u8(76);

// pixel_bgr.val[0] is a Q-register holding 16 uint8_t values, the R channel
// pixel_bgr.val[1] is a Q-register holding 16 uint8_t values, the G channel
// pixel_bgr.val[2] is a Q-register holding 16 uint8_t values, the B channel
uint8x16x3_t pixel_bgr = vld3q_u8(bgr);

// Each Q-register maps onto two D-registers.
// Here we take the high D-register of the Q-register holding the R channel, i.e. 8 R values.
// See the "Register overlap" section of the Guide (this part is important!).
// low_r, high_g and the others are obtained the same way and omitted here.
uint8x8_t high_r = vget_high_u8(pixel_bgr.val[0]);

// Multiply the 8 R values by 76.
// vmull is a lengthening instruction (usually marked by a trailing L): the source operands are two uint8x8_t vectors and the destination is a uint16x8_t vector.
// At this point: Y = R * 76
// low_y is handled the same way and omitted.
high_y = vmull_u8(high_r, scalar);

// Replace the 8 copies of 76 in scalar with 8 copies of 150
scalar = vdup_n_u8(150);

// Multiply-accumulate
// At this point: Y = R * 76 + G * 150
high_y = vmlal_u8(high_y, high_g, scalar);

// Replace the 8 copies of 150 in scalar with 8 copies of 29
scalar = vdup_n_u8(29);

// Multiply-accumulate
// At this point: Y = R * 76 + G * 150 + B * 29
high_y = vmlal_u8(high_y, high_b, scalar);

// At this point: Y = (R * 76 + G * 150 + B * 29) + 128
high_y = vaddq_u16(high_y, u16_rounding);

// vqshrn_n_u16 is a narrowing instruction (usually marked by a trailing N, for Narrow): it shifts right and narrows the uint16x8_t vector down to uint8x8_t.
// At this point: Y = ((R * 76 + G * 150 + B * 29) + 128) >> 8
// vcombine_u8 packs two D-registers into one Q-register.
pixel_yuv.val[0] = vcombine_u8(vqshrn_n_u16(low_y, 8), vqshrn_n_u16(high_y, 8));

You may have noticed that in the full BGR888ToYUV444 function above I did not group each channel's arithmetic together as in this walkthrough; instead I deliberately interleaved the Y, U and V computations. This is done to reduce stalls caused by data dependencies, and it is one of the key directions to consider when optimizing NEON code.

A careful reader with some NEON background might ask: why not use the "by scalar" forms, such as NEON's vmul_n, for constants like 128 and 76, instead of first duplicating the constant into a vector and then doing a vector-by-vector operation? The reason is that many of the source operands here are int8x8_t or uint8x8_t vectors, and the vmul_n family unfortunately does not support those element types. A friendly reminder: when writing NEON intrinsics or assembly, always check the operand formats listed in the Guide's appendices, otherwise the compiler will often emit errors that point in completely the wrong direction and lead you astray. I spent far too long stuck in exactly those unrelated rabbit holes.
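For the 16-bit U/V accumulators, though, the by-scalar forms do exist, so a hedged variant (my own simplification, not the original code) could drop the explicit vdupq_n_s16 there; the 8-bit Y path has no vmull_n_u8, which is why it keeps its duplicated vector:

// Sketch only, using the variables from the function above: the G-channel
// contribution to U written with vmlaq_n_s16 instead of vdupq_n_s16(-84)
// followed by vmlaq_s16. Whether this is actually faster is something to
// measure, not assume.
high_u = vmlaq_n_s16(high_u, signed_high_g, -84);
low_u  = vmlaq_n_s16(low_u,  signed_low_g,  -84);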

Cross-Compiling and Debugging

NEON Assembly

GCC Inline Assembly

From Intrinsics to Assembly

How to Debug Assembly Code

Further Optimization
