Deep Learning and bfloat16 (BF16)

Deep learning has spurred interest in novel floating point formats. Algorithms often don’t need as much precision as standard IEEE-754 doubles or even single-precision floats. Lower precision makes it possible to hold more numbers in memory, reducing the time spent swapping numbers in and out of memory. Also, low-precision circuits are far less complex. Together these benefits can give a significant speedup.

BF16 (bfloat16) is becoming a de facto standard for deep learning. It is supported by several deep learning accelerators (such as Google’s TPU), and will be supported in Intel processors two generations from now.

The BF16 format is sort of a cross between FP16 and FP32, the 16- and 32-bit formats defined in the IEEE 754-2008 standard, also known as half precision and single precision.

Format  Bits  Exponent  Fraction  Sign
FP32      32         8        23     1
FP16      16         5        10     1
BF16      16         8         7     1
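
As a concrete illustration of the table, the short sketch below (plain Python, standard library only; the helper names fp32_fields and bf16_bits are purely illustrative, not from any library) unpacks the sign, exponent, and fraction fields of an FP32 value. Because BF16 keeps the same 1-bit sign and 8-bit exponent and merely shortens the fraction to 7 bits, a BF16 encoding of a value is essentially the upper 16 bits of its FP32 bit pattern.

    import struct

    def fp32_fields(x: float):
        """Split the IEEE-754 single-precision encoding of x into its bit fields."""
        bits = struct.unpack(">I", struct.pack(">f", x))[0]  # raw 32-bit pattern
        sign = bits >> 31               # 1 bit
        exponent = (bits >> 23) & 0xFF  # 8 bits, biased by 127
        fraction = bits & 0x7FFFFF      # 23 bits
        return sign, exponent, fraction

    def bf16_bits(x: float) -> int:
        """BF16 keeps the sign and 8-bit exponent and shortens the fraction to
        7 bits, so its encoding is the upper 16 bits of the FP32 pattern.
        (Truncation here for simplicity; real converters usually round.)"""
        return struct.unpack(">I", struct.pack(">f", x))[0] >> 16

    print(fp32_fields(3.14159))     # (sign, biased exponent, 23-bit fraction)
    print(hex(bf16_bits(3.14159)))  # 0x4049 -- the upper 16 bits of the FP32 pattern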

BF16 has more exponent bits than FP16 (the same number as FP32) but fewer fraction bits. This design shows that, within a 16-bit budget, the designers chose to give up precision (even less than FP16 offers) in exchange for a wider dynamic range.
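
To make this trade-off concrete, here is a rough sketch assuming NumPy is available; to_bf16 is our own illustrative helper (it simply zeroes the low 16 bits of the FP32 encoding), not a library function. BF16 inherits FP32's exponent range, so its largest finite value is around 3.4e38 while FP16 tops out at 65504; in exchange, BF16's 7 fraction bits cannot tell 1.001 apart from 1.0, whereas FP16's 10 fraction bits can.

    import numpy as np

    def to_bf16(x) -> np.float32:
        """Emulate BF16 by zeroing the low 16 bits of the FP32 encoding.
        (Plain truncation; real hardware typically rounds to nearest and
        treats NaN specially.)"""
        bits = np.float32(x).view(np.uint32)
        return (bits & np.uint32(0xFFFF0000)).view(np.float32)

    # Dynamic range: BF16 reuses FP32's 8-bit exponent.
    print(np.finfo(np.float16).max)           # 65504.0 -- largest finite FP16 value
    print(to_bf16(np.finfo(np.float32).max))  # ~3.39e38 -- largest finite BF16 value

    # Precision: only 7 fraction bits in BF16 versus 10 in FP16.
    print(np.float16(1.001))  # ~1.001, still distinguishable from 1.0 in FP16
    print(to_bf16(1.001))     # 1.0 -- the difference is below BF16's resolution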
