Model compression

***Open-source toolkits***:

Intel Distiller (PyTorch): https://github.com/NervanaSystems/distiller

Tencent AI Lab PocketFlow: https://github.com/Tencent/PocketFlow

 

https://www.jiqizhixin.com/articles/2017-10-29   ***Survey: a quick overview of current methods for compressing and accelerating deep neural network models

https://blog.csdn.net/wspba/article/details/75671573   ***A survey of deep learning model compression methods (part 1)

 

https://blog.csdn.net/shuzfan/article/details/51678499   ***Network compression: a comparison of quantization methods

https://www.jianshu.com/p/4f0735462a83   ***Notes on Deep Compression (part 1): introduction and background

https://www.jianshu.com/p/46a645c0e56c   ***Notes on Deep Compression (part 2): pruning

https://www.jianshu.com/p/89ef257235f6   ***Notes on Deep Compression (part 3): quantization
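The pruning stage discussed in the Deep Compression posts above reduces to magnitude-based weight pruning: drop the smallest-magnitude weights until a target sparsity is reached. A minimal NumPy sketch (function name and threshold choice are illustrative, not taken from any of the linked code):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights so that roughly
    a `sparsity` fraction of the tensor becomes zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # the k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned = magnitude_prune(w, 0.9)
```

In Deep Compression this step is followed by fine-tuning the remaining weights and iterating, which is what recovers the accuracy lost by pruning.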

 

https://stanford.edu/~songhan/index.html   ***Song Han's homepage

https://www.oreilly.com/ideas/compressing-and-regularizing-deep-neural-networks   ***Compressing and regularizing deep neural networks

https://blog.csdn.net/may0324/article/details/52935869   ***Reading Deep Compression in depth, with Caffe source-code modifications

 

http://machinethink.net/blog/compressing-deep-neural-nets/   ***Compressing deep neural nets: instead of deleting individual connections, delete whole convolution filters (prune the channel count) to get a fast and small network
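The point made in the annotation above is that removing whole filters keeps the layers dense and therefore directly faster, unlike scattered connection pruning. A hedged sketch of the common L1-norm filter-selection heuristic (shapes and names are my own illustration, not the post's code):

```python
import numpy as np

def prune_filters(conv_w, keep_ratio):
    """Keep the `keep_ratio` fraction of output filters with the largest
    L1 norm. conv_w is assumed to have shape (c_out, c_in, k_h, k_w)."""
    c_out = conv_w.shape[0]
    n_keep = max(1, int(round(keep_ratio * c_out)))
    norms = np.abs(conv_w).reshape(c_out, -1).sum(axis=1)
    keep = np.sort(np.argsort(norms)[-n_keep:])  # kept filter indices, in order
    return conv_w[keep], keep

rng = np.random.default_rng(1)
w = rng.normal(size=(32, 16, 3, 3))
pruned_w, kept = prune_filters(w, 0.5)
```

Note that after dropping output filters here, the *next* layer's weights must be sliced along its input-channel axis with the same `kept` indices, and the network fine-tuned.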

http://machinethink.net/blog/how-fast-is-my-model/   ***How fast is my model?
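The "How fast is my model?" post estimates inference cost by counting multiply-accumulate operations (MACs). For a standard convolution the count is a one-liner (the formula is standard; the example layer sizes are mine):

```python
def conv_macs(h_out, w_out, k_h, k_w, c_in, c_out):
    """MACs for a standard conv layer: each output element needs
    k_h * k_w * c_in multiply-accumulates, for every output channel."""
    return h_out * w_out * c_out * k_h * k_w * c_in

# Example: a 3x3 conv from 64 to 128 channels producing a 56x56 map
macs = conv_macs(56, 56, 3, 3, 64, 128)
print(macs)  # 231211008, about 0.23 GMACs
```

This is why channel pruning pays off twice: halving both `c_in` and `c_out` cuts the MAC count by roughly 4x.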

 

https://www.jiqizhixin.com/articles/2018-06-01-11   ***A comprehensive summary of quantized models for neural network acceleration (with code)
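Most quantization schemes covered in surveys like the one above start from simple linear quantization. A minimal symmetric int8 version in NumPy (a sketch of the general idea, not any particular framework's implementation):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric linear quantization: map [-max|x|, max|x|] onto [-127, 127]."""
    scale = float(np.max(np.abs(x))) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(2)
w = rng.normal(size=1000).astype(np.float32)
q, scale = quantize_int8(w)
max_err = float(np.max(np.abs(dequantize(q, scale) - w)))
```

Storing `q` plus one float `scale` per tensor shrinks 32-bit weights by about 4x; the round-trip error per weight stays below half a quantization step.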

 

https://www.jiqizhixin.com/articles/2018-06-22-9   ***Intel releases the Distiller neural network compression library: quickly apply state-of-the-art compression algorithms to PyTorch models

 

https://www.jiqizhixin.com/articles/2018-09-20-7   ***TensorFlow releases a model optimization toolkit that can shrink models by up to 75%

https://www.tensorflow.org/performance/model_optimization   

https://www.jiqizhixin.com/articles/2018-09-17-6?from=synced&keyword=%E6%A8%A1%E5%9E%8B%E5%8E%8B%E7%BC%A9   ***Tencent open-sources the PocketFlow framework

 
