Table of Contents
VGG
2014
Very Deep Convolutional Networks for Large-Scale Image Recognition
ResNet
2015
Deep Residual Learning for Image Recognition
- Residual Representations / Shortcut Connections
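The shortcut-connection idea can be sketched in a few lines of numpy: the block computes F(x) + x, so when the residual branch F learns nothing (weights at zero, a hypothetical toy setup here), the block degenerates to an identity mapping instead of degrading the signal.

```python
import numpy as np

def residual_block(x, weight):
    """Toy residual unit: output = F(x) + x (shortcut connection).

    F here is a single linear layer followed by ReLU, standing in for
    the conv layers of a real ResNet block; the identity shortcut lets
    the signal (and gradients) flow directly through the addition.
    """
    fx = np.maximum(0.0, x @ weight)  # residual branch F(x)
    return fx + x                     # identity shortcut

x = np.ones((2, 4))
w = np.zeros((4, 4))  # with F == 0 the whole block is an identity mapping
y = residual_block(x, w)
print(np.allclose(y, x))
```

This is only a linear-algebra analogy for the convolutional blocks in the paper, but it captures why very deep stacks of such units are easier to optimize than plain stacks.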
PreAct-ResNet
2016
Identity Mappings in Deep Residual Networks
- To construct an identity mapping f(y) = y, the authors rearrange the activation functions (BN and ReLU) into a pre-activation form, so that in both the forward and backward passes the signal can propagate directly from one unit to any other unit.
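The ordering change can be illustrated with a minimal numpy sketch (BN is omitted and a single linear layer stands in for the conv layers, purely for brevity): in the original post-activation form the ReLU sits after the addition, so the shortcut path is not a pure identity; in the pre-activation form the addition output passes through untouched.

```python
import numpy as np

def post_act_unit(x, w):
    # Original ResNet ordering: the ReLU is applied AFTER the addition,
    # so even with F == 0 the shortcut path is not a clean identity.
    return np.maximum(0.0, x @ w + x)

def pre_act_unit(x, w):
    # PreAct-ResNet ordering: the ReLU (and BN) move BEFORE the weight
    # layer, so the addition result feeds the next unit directly and
    # the shortcut is an exact identity in forward and backward passes.
    return np.maximum(0.0, x) @ w + x

x = np.array([[-1.0, 2.0]])
w = np.zeros((2, 2))          # residual branch contributes nothing
print(post_act_unit(x, w))    # negative entries clipped by the final ReLU
print(pre_act_unit(x, w))     # x passes through unchanged
```

With the zero-weight branch, the post-activation unit outputs [[0., 2.]] while the pre-activation unit returns the input [[-1., 2.]] exactly, which is the identity-mapping property the paper is after.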
GoogLeNet
Inception V1
2014
Going deeper with convolutions
- Uses 1x1 convolutions to reduce the channel dimension and avoid the explosion in computation and parameters.
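A 1x1 convolution is just a per-pixel linear map across channels, so it can be sketched as a matrix multiply (the 192-to-64 channel counts below are illustrative, loosely inspired by Inception's bottlenecks, not taken from the paper's exact configuration):

```python
import numpy as np

def conv1x1(feat, w):
    """1x1 convolution as a per-pixel matrix multiply.

    feat: (H, W, C_in) feature map; w: (C_in, C_out) weights.
    Shrinking C_out below C_in reduces channels before an expensive
    3x3/5x5 convolution, cutting the multiply-add count.
    """
    h, width, c_in = feat.shape
    out = feat.reshape(-1, c_in) @ w        # (H*W, C_out)
    return out.reshape(h, width, -1)

feat = np.random.rand(28, 28, 192)
w = np.random.rand(192, 64)                 # bottleneck: 192 -> 64 channels
out = conv1x1(feat, w)
print(out.shape)                            # (28, 28, 64)
# multiply-adds per output value for a 5x5 conv, before vs after reduction:
print(5 * 5 * 192, "vs", 5 * 5 * 64)        # 4800 vs 1600, a 3x saving
```

The spatial resolution is untouched; only the channel dimension changes, which is why these layers are so cheap relative to the savings they buy.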
Inception V2
2015
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
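The core training-mode operation of Batch Normalization fits in a few lines: normalize each feature over the mini-batch, then apply a learnable scale and shift. This numpy sketch covers only the per-feature (fully connected) case and leaves out the running statistics used at inference time.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch Normalization, training mode, over the batch axis.

    x: (batch, features). Each feature is normalized to zero mean and
    unit variance across the mini-batch, then rescaled by the learnable
    parameters gamma (scale) and beta (shift).
    """
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalized activations
    return gamma * x_hat + beta

x = np.random.randn(32, 8) * 10 + 5         # badly scaled activations
y = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
print(np.allclose(y.mean(axis=0), 0, atol=1e-6))  # per-feature mean ~ 0
print(np.allclose(y.std(axis=0), 1, atol=1e-2))   # per-feature std ~ 1
```

With gamma = 1 and beta = 0 the output is simply whitened per feature; learning gamma and beta lets the network recover the original scale when that is what the next layer needs.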