***Open-source frameworks***:
Intel Distiller (PyTorch): https://github.com/NervanaSystems/distiller
Tencent AI Lab PocketFlow: https://github.com/Tencent/PocketFlow
https://www.jiqizhixin.com/articles/2017-10-29 ***Survey paper: an overview of current methods for compressing and accelerating deep neural network models
https://blog.csdn.net/wspba/article/details/75671573 ***A survey of deep learning model compression methods (Part 1)
https://blog.csdn.net/shuzfan/article/details/51678499 ***Network compression: a comparison of quantization methods
https://www.jianshu.com/p/4f0735462a83 ***Notes on Deep Compression (1): introduction and background
https://www.jianshu.com/p/46a645c0e56c ***Notes on Deep Compression (2): pruning
https://www.jianshu.com/p/89ef257235f6 ***Notes on Deep Compression (3): quantization
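The quantization stage described in the Deep Compression posts above shares weights through k-means clustering: each weight is replaced by its cluster centroid, so only a small codebook plus per-weight cluster indices need to be stored. A minimal numpy sketch of that idea (the function name and linear centroid initialization are illustrative assumptions, not code from the paper):

```python
import numpy as np

def kmeans_quantize(weights, n_clusters=4, n_iter=20):
    """Quantize a weight array by clustering values into a shared codebook.

    Returns the quantized weights (same shape) and the codebook of centroids.
    Hypothetical sketch of Deep Compression-style weight sharing.
    """
    flat = weights.ravel()
    # Linear initialization between min and max, as Deep Compression recommends.
    centroids = np.linspace(flat.min(), flat.max(), n_clusters)
    for _ in range(n_iter):
        # Assign every weight to its nearest centroid.
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # Move each centroid to the mean of its assigned weights.
        for k in range(n_clusters):
            members = flat[assign == k]
            if members.size:
                centroids[k] = members.mean()
    assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
    return centroids[assign].reshape(weights.shape), centroids
```

With 2^b clusters, each weight index fits in b bits, which is where the bulk of the storage savings in the quantization stage comes from.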
https://stanford.edu/~songhan/index.html ***Song Han's homepage
https://www.oreilly.com/ideas/compressing-and-regularizing-deep-neural-networks ***Compressing and regularizing deep neural networks
https://blog.csdn.net/may0324/article/details/52935869 ***Deep Compression: reading notes and Caffe source-code modifications
http://machinethink.net/blog/compressing-deep-neural-nets/ ***Compressing deep neural nets: prune whole convolution filters (reducing channel counts) rather than individual connections to get a fast and small network
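The "Compressing deep neural nets" post argues for removing whole filters (channels) instead of individual connections, since that shrinks the actual tensor shapes rather than just making them sparse. A minimal numpy sketch of filter pruning by L1 norm (function name and the keep-ratio parameter are illustrative assumptions):

```python
import numpy as np

def prune_filters(conv_w, next_w, keep_ratio=0.5):
    """Keep only the filters with the largest L1 norms.

    conv_w: (out_c, in_c, kh, kw) weights of the layer being pruned.
    next_w: (out2_c, out_c, kh, kw) weights of the following conv layer,
            whose input channels must shrink to match.
    Hypothetical sketch; real pruning is followed by fine-tuning.
    """
    out_c = conv_w.shape[0]
    n_keep = max(1, int(out_c * keep_ratio))
    # Rank filters by the sum of absolute weight values.
    norms = np.abs(conv_w).reshape(out_c, -1).sum(axis=1)
    keep = np.sort(np.argsort(norms)[-n_keep:])
    # Slice both the pruned layer's filters and the next layer's input channels.
    return conv_w[keep], next_w[:, keep]
```

Because both the output channels of one layer and the input channels of the next are sliced, the pruned network runs with smaller dense tensors and needs no sparse-kernel support.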
http://machinethink.net/blog/how-fast-is-my-model/ ***How fast is my model?
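The "How fast is my model?" post estimates a conv layer's cost from its multiply-accumulate (MAC) count, which for a standard convolution is K x K x C_in x C_out x H_out x W_out. A one-line helper capturing that formula (the function name is an assumption for illustration):

```python
def conv_macs(k, c_in, c_out, h_out, w_out):
    """Multiply-accumulate count of a standard KxK convolution layer.

    Each of the h_out * w_out * c_out output values costs k * k * c_in MACs.
    """
    return k * k * c_in * c_out * h_out * w_out
```

This is why channel pruning pays off quadratically when applied to consecutive layers: halving both c_in and c_out cuts the MAC count to a quarter.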
https://www.jiqizhixin.com/articles/2018-06-01-11 ***A comprehensive summary of quantized models for neural network acceleration (with code)
https://www.jiqizhixin.com/articles/2018-06-22-9 ***Intel releases Distiller, a neural network compression library for quickly applying state-of-the-art compression algorithms to PyTorch models
https://www.jiqizhixin.com/articles/2018-09-20-7 ***TensorFlow releases a model optimization toolkit that can shrink models by up to 75%
https://www.tensorflow.org/performance/model_optimization
https://www.jiqizhixin.com/articles/2018-09-17-6?from=synced&keyword=%E6%A8%A1%E5%9E%8B%E5%8E%8B%E7%BC%A9 ***Tencent open-sources the PocketFlow framework