Deep Learning: A Collection of Model Compression/Acceleration Resources

1. Model Compression (Pruning) Projects from CVPR over the Years


https://github.com/he-y/Awesome-Pruning

2. Structure

Searching for MobileNetV3
arxiv:https://arxiv.org/abs/1905.02244v1
Chinese write-up: 重磅!MobileNetV3 来了!
 

[BMVC2018] IGCV3: Interleaved Low-Rank Group Convolutions for Efficient Deep Neural Networks
arxiv:https://arxiv.org/abs/1806.00178
github:https://github.com/homles11/IGCV3


[CVPR2018] IGCV2: Interleaved Structured Sparse Convolutional Neural Networks

arxiv:https://arxiv.org/abs/1804.06202


[CVPR2018] MobileNetV2: Inverted Residuals and Linear Bottlenecks

arxiv:https://arxiv.org/abs/1801.04381
github:https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet


[ECCV2018] ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design

arxiv:https://arxiv.org/abs/1807.11164


3. Quantization


Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
intro: binary networks (weights and activations constrained to +1/-1); see the sketch below
arxiv:https://arxiv.org/abs/1602.02830
github: https://github.com/MatthieuCourbariaux/BinaryNet
https://github.com/itayhubara/BinaryNet
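
For reference, the common core of these binary-network papers is to binarize weights (and activations) to ±1 in the forward pass while updating full-precision latent weights, with gradients passed back through a straight-through estimator (STE). A minimal PyTorch sketch of that idea (names are illustrative, not the authors' code):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Forward: map values to {-1, +1}. Backward: straight-through estimator,
    i.e. pass the gradient through but zero it where |x| > 1."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

# Usage: binarize the latent full-precision weights before the layer's forward pass.
w = torch.randn(8, 3, requires_grad=True)   # latent weights kept in float
w_bin = BinarizeSTE.apply(w)                # values in {-1, +1}
loss = w_bin.sum().pow(2)
loss.backward()                             # gradient reaches w via the STE
```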


[FPGA2017] FINN: A Framework for Fast, Scalable Binarized Neural Network Inference
intro: binarized networks (FPGA inference)
pdf:http://www.idi.ntnu.no/~yamanu/2017-fpga-finn-preprint.pdf
github:https://github.com/Xilinx/FINN


DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
intro: low bit-width weights, activations, and gradients
arxiv:https://arxiv.org/abs/1606.06160
github:https://github.com/tensorpack/tensorpack/tree/master/examples/DoReFa-Net


[ECCV2016] XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
intro: from the Darknet team
arxiv:https://arxiv.org/abs/1603.05279
github:https://github.com/allenai/XNOR-Net


Ternary Weight Networks

arxiv:https://arxiv.org/abs/1605.04711
github:https://github.com/fengfu-chris/caffe-twns
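
As a quick reference, the paper approximates each weight tensor with values in {-α, 0, +α} using a simple threshold rule. A sketch of that rule (my reading of the paper, not the released Caffe code):

```python
import torch

def ternarize(w: torch.Tensor) -> torch.Tensor:
    """Ternary-Weight-Networks-style quantization: the threshold delta is a
    fixed fraction of mean |w|, and alpha is the mean |w| of the kept weights."""
    delta = 0.7 * w.abs().mean()
    mask = (w.abs() > delta).to(w.dtype)
    alpha = (w.abs() * mask).sum() / mask.sum().clamp(min=1.0)
    return alpha * torch.sign(w) * mask      # values in {-alpha, 0, +alpha}
```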


[CVPR2018] Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
intro: from Google; see the sketch below
arxiv:https://arxiv.org/abs/1712.05877
github:https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/quantize
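
The training trick in this line of work is "fake quantization": simulate the integer affine quantization (scale + zero point) in the forward pass but keep floats end to end so gradients still flow. A rough sketch of the quantize/dequantize math, assuming per-tensor uint8 quantization (this is not TensorFlow's API):

```python
import torch

def fake_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Simulate affine (asymmetric) integer quantization during training:
    quantize to integers, then immediately dequantize back to float."""
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = x.min(), x.max()
    scale = (x_max - x_min).clamp(min=1e-8) / (qmax - qmin)
    zero_point = torch.round(qmin - x_min / scale).clamp(qmin, qmax)
    q = torch.round(x / scale + zero_point).clamp(qmin, qmax)   # integer grid
    return (q - zero_point) * scale                             # back to float
```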


[ACM2017] Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
intro:QNNs
arxiv:https://arxiv.org/abs/1609.07061
github:https://github.com/peisuke/qnn


[CVPR2018] Two-Step Quantization for Low-bit Neural Networks

paper:http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Two-Step_Quantization_for_CVPR_2018_paper.pdf

 

4. Pruning

Channel pruning


[NIPS2018] Discrimination-aware Channel Pruning for Deep Neural Networks

arxiv:https://arxiv.org/abs/1810.11809
github:https://github.com/Tencent/PocketFlow (supports DisChnPrunedLearner)


[ICCV2017] Channel Pruning for Accelerating Very Deep Neural Networks
intro: LASSO-regression-based channel selection
arxiv:https://arxiv.org/abs/1707.06168
github:https://github.com/yihui-he/channel-pruning


[ECCV2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices
intro: automated, learning-based compression
arxiv:https://arxiv.org/abs/1802.03494
Chinese translation: https://www.jiqizhixin.com/articles/AutoML-for-Model-Compression-and-Acceleration-on-Mobile-Devices
github:https://github.com/Tencent/PocketFlow


[ICCV2017] Learning Efficient Convolutional Networks through Network Slimming
intro: Zhuang Liu; see the sketch below
arxiv:https://arxiv.org/abs/1708.06519
github:https://github.com/Eric-mingjie/network-slimming
https://github.com/foolwood/pytorch-slimming
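
The method trains with an L1 penalty on the BatchNorm scale factors (gamma) and then removes the channels whose gamma is smallest. A condensed sketch of those two steps (illustrative PyTorch, not the linked repos):

```python
import torch
import torch.nn as nn

def bn_l1_penalty(model: nn.Module, lam: float = 1e-4):
    """Sparsity term added to the training loss: L1 norm of all BN gammas,
    which pushes unimportant channels' scale factors toward zero."""
    return lam * sum(m.weight.abs().sum()
                     for m in model.modules() if isinstance(m, nn.BatchNorm2d))

def channels_to_prune(model: nn.Module, prune_ratio: float = 0.5):
    """After training, rank channels globally by |gamma| and mark the smallest
    `prune_ratio` fraction for removal (selection step only)."""
    gammas = torch.cat([m.weight.abs().detach().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, prune_ratio)
    return {name: (m.weight.abs() <= threshold)     # per-layer boolean prune mask
            for name, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)}
```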


[ICLR2018] Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers

arxiv:https://arxiv.org/abs/1802.00124
github (PyTorch): https://github.com/jack-willturner/batchnorm-pruning
github (TensorFlow): https://github.com/bobye/batchnorm_prune


[CVPR2018] NISP: Pruning Networks using Neuron Importance Score Propagation

arxiv:https://arxiv.org/abs/1711.05908


[ICCV2017] ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression

web:http://lamda.nju.edu.cn/luojh/project/ThiNet_ICCV17/ThiNet_ICCV17_CN.html
github:https://github.com/Roll920/ThiNet
https://github.com/Roll920/ThiNet_Code
 

Sparsity


SBNet: Sparse Blocks Network for Fast Inference
intro: Uber
arxiv:https://arxiv.org/abs/1801.02108
github:https://github.com/uber/sbnet


To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression
intro: sparsity (gradual magnitude pruning); see the sketch below
arxiv:https://arxiv.org/abs/1710.01878
github:https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/model_pruning
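
The key ingredient is a gradual pruning schedule: the target sparsity ramps from an initial to a final value along a cubic curve while training continues. A sketch of that schedule (following the formula in the paper; parameter names are mine):

```python
def sparsity_schedule(step: int, start_step: int, end_step: int,
                      initial_sparsity: float = 0.0,
                      final_sparsity: float = 0.9) -> float:
    """Cubic sparsity ramp for gradual magnitude pruning: at each step the
    smallest-magnitude weights are masked until this target fraction is zero."""
    if step < start_step:
        return initial_sparsity
    if step >= end_step:
        return final_sparsity
    progress = (step - start_step) / (end_step - start_step)
    return final_sparsity + (initial_sparsity - final_sparsity) * (1.0 - progress) ** 3
```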


Submanifold Sparse Convolutional Networks
intro:Facebook
arxiv:https://arxiv.org/abs/1706.01307
github:https://github.com/facebookresearch/SparseConvNet
 

5. Distillation


[NIPS2014] Distilling the Knowledge in a Neural Network
intro: from Hinton; see the sketch below
arxiv:https://arxiv.org/abs/1503.02531
github:https://github.com/peterliht/knowledge-distillation-pytorch
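
The distillation loss combines a soft-target term (teacher vs. student logits softened by a temperature T, scaled by T^2) with the usual hard-label cross-entropy. A minimal sketch of that loss (illustrative, not the linked repo):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Hinton-style knowledge distillation: KL divergence between softened
    teacher and student distributions (scaled by T^2) plus hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```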
 

6. Comprehensive


[ICLR2016] Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding
intro: a pioneering work; see the sketch below
arxiv:https://arxiv.org/abs/1510.00149
github:https://github.com/songhan
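
The pipeline has three stages: magnitude pruning, weight sharing via clustering (quantization), and Huffman coding of the result. A toy sketch of the first two stages (uniform levels stand in for the paper's k-means clustering, and Huffman coding is omitted):

```python
import torch

def magnitude_prune(w: torch.Tensor, sparsity: float = 0.9) -> torch.Tensor:
    """Stage 1: zero out the smallest-magnitude weights."""
    threshold = torch.quantile(w.abs().flatten(), sparsity)
    return w * (w.abs() > threshold).to(w.dtype)

def share_weights(w: torch.Tensor, bits: int = 5) -> torch.Tensor:
    """Stage 2: map surviving weights onto 2**bits shared values.
    (The paper uses k-means; uniform levels keep this example short.)"""
    nonzero = w[w != 0]
    if nonzero.numel() == 0:
        return w
    levels = torch.linspace(nonzero.min().item(), nonzero.max().item(), 2 ** bits)
    idx = torch.bucketize(nonzero, levels).clamp(max=levels.numel() - 1)
    out = w.clone()
    out[w != 0] = levels[idx]
    return out

# Usage: w_compressed = share_weights(magnitude_prune(torch.randn(256, 256)))
```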


Model Distillation with Knowledge Transfer from Face Classification to Alignment and Verification
intro: extensive experiments, well suited to practical engineering
arxiv:https://arxiv.org/abs/1709.02929
