Original article: https://www.yuque.com/yahei/hey-yahei/quantization-retrain_improved_qat
Citing and reposting are welcome, but please credit the source~
Quantize Aware Training (QAT) recovers the accuracy of a quantized model by inserting quantize and dequantize operations into the training process. But consider the quantization step itself:
![image.png](https://img-blog.csdnimg.cn/img_convert/a86094a7396694cbba5791bd488c0b63.png)
$$w_q = q(w) = \alpha \cdot \mathrm{Clamp}\left(\mathrm{Round}\left(\frac{w}{\alpha}\right)\right)$$
$$\frac{\partial L}{\partial w} = \frac{\partial L}{\partial w_q} \cdot \frac{\partial w_q}{\partial w} \mathop{\approx}\limits_{\mathrm{STE}} \frac{\partial L}{\partial w_q} \cdot 1 = \frac{\partial L}{\partial w_q}$$
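As a minimal sketch (assuming a PyTorch setup, which the original text does not specify), the fake-quantize forward pass and the straight-through estimator (STE) backward pass could be wired together with a custom autograd function like the one below; the class name `FakeQuantSTE` and the scale value are illustrative only.

```python
import torch

class FakeQuantSTE(torch.autograd.Function):
    """Quantize-then-dequantize in the forward pass; pass the gradient
    straight through (STE) in the backward pass."""

    @staticmethod
    def forward(ctx, w, alpha, qmin=-128, qmax=127):
        # w_q = alpha * Clamp(Round(w / alpha))
        return alpha * torch.clamp(torch.round(w / alpha), qmin, qmax)

    @staticmethod
    def backward(ctx, grad_output):
        # STE: dL/dw ≈ dL/dw_q * 1 (gradients for alpha/qmin/qmax are not needed here)
        return grad_output, None, None, None


# Usage sketch: quantize a weight tensor during training
w = torch.randn(8, requires_grad=True)
alpha = 0.05  # quantization step size (scale); hypothetical value
w_q = FakeQuantSTE.apply(w, alpha)
loss = (w_q ** 2).sum()
loss.backward()
print(w.grad)  # equals dL/dw_q, because the STE treats the rounding as identity
```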