Paper Notes 3: GoogLeNet--Going deeper with convolutions

Architecture

Inception module

[Figure: Inception module with dimension reductions]

  1. parallel convolutions with different kernel sizes (a sketch of the module is given after this list)
    visual information should be processed at various scales and then aggregated
    →the next stage can abstract features from different scales simultaneously
  2. max pooling
    pooling operations have been essential for the success of current state-of-the-art convolutional networks
  3. 1×1 conv.
    ① dimension reduction→removes computational bottlenecks
    ② increases the representational power
    refer to Network in Network
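A minimal PyTorch sketch of the dimension-reduced Inception module described above; the class and argument names are my own, only the four-branch structure and the ReLU after every convolution follow the paper:

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Inception module with dimension reductions (every conv is followed by ReLU)."""
    def __init__(self, in_ch, c1x1, c3x3_reduce, c3x3, c5x5_reduce, c5x5, pool_proj):
        super().__init__()
        # branch 1: 1x1 conv
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1x1, 1), nn.ReLU(inplace=True))
        # branch 2: 1x1 reduction, then 3x3 conv
        self.b2 = nn.Sequential(
            nn.Conv2d(in_ch, c3x3_reduce, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c3x3_reduce, c3x3, 3, padding=1), nn.ReLU(inplace=True))
        # branch 3: 1x1 reduction, then 5x5 conv
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, c5x5_reduce, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c5x5_reduce, c5x5, 5, padding=2), nn.ReLU(inplace=True))
        # branch 4: 3x3 max pooling, then 1x1 projection
        self.b4 = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        # concatenate the four branches along the channel dimension
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)
```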

GoogLeNet

[Figure: GoogLeNet architecture table]
i.e. inception(3a)+(3b):
[Figure: detail of inception (3a) and (3b)]
all convolutions use ReLU
input→224×224 RGB image with mean subtraction
#3×3 reduce→the number of 1×1 filters in the reduction layer used before the 3×3 conv.
linear→FC layer
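As a usage example of the sketch above, inception (3a) and (3b) can be instantiated with the filter counts listed in the paper's table (#1×1, #3×3 reduce, #3×3, #5×5 reduce, #5×5, pool proj); the values below are my reading of that table:

```python
# Inception (3a) and (3b) with the filter counts from the architecture table;
# the output channel count is the sum of the four branches.
inc_3a = InceptionModule(192, 64, 96, 128, 16, 32, 32)    # 64+128+32+32 = 256 channels
inc_3b = InceptionModule(256, 128, 128, 192, 32, 96, 64)  # 128+192+96+64 = 480 channels

x = torch.randn(1, 192, 28, 28)     # feature map entering inception (3a)
print(inc_3b(inc_3a(x)).shape)      # torch.Size([1, 480, 28, 28])
```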
add auxiliary classifiers connected to intermediate layers
→encourage discrimination in the lower stages of the classifier, increase the gradient signal that gets propagated back, and provide additional regularization
put on top of the outputs of the Inception (4a) and (4d) modules
during training, their loss gets added to the total loss of the network with a discount weight (0.3)
the structure:
[Figure: auxiliary classifier structure]
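A minimal sketch of how the auxiliary losses would be combined during training, assuming standard cross-entropy losses; the 0.3 discount weight is from the paper, while the function and variable names are hypothetical:

```python
import torch.nn.functional as F

def googlenet_loss(main_logits, aux1_logits, aux2_logits, targets):
    """Total training loss: main loss plus the two auxiliary losses discounted by 0.3."""
    main_loss = F.cross_entropy(main_logits, targets)
    aux_loss = F.cross_entropy(aux1_logits, targets) + F.cross_entropy(aux2_logits, targets)
    return main_loss + 0.3 * aux_loss  # auxiliary classifiers are discarded at inference time
```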

Training

asynchronous stochastic gradient descent with 0.9 momentum and a fixed learning rate schedule (decreasing the learning rate by 4% every 8 epochs)
their image sampling method works well:
sampling of various-sized patches of the image whose area is distributed evenly between 8% and 100% of the image area and whose aspect ratio is chosen randomly between 3/4 and 4/3
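This is essentially the patch-sampling scheme that torchvision's RandomResizedCrop implements (area scale 8%–100%, aspect ratio 3/4–4/3); a sketch of an equivalent training-time augmentation, with the horizontal flip as my own addition rather than something stated in this section:

```python
from torchvision import transforms

# Patch sampling as described: area between 8% and 100% of the image,
# aspect ratio drawn from [3/4, 4/3], then resized to the 224x224 network input.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.08, 1.0), ratio=(3/4, 4/3)),
    transforms.RandomHorizontalFlip(),   # my addition, not stated in this part of the paper
    transforms.ToTensor(),
])
```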

Testing

resize the image to 4 scales where the shorter dimension is 256, 288, 320 and 352 respectively
take the left, center and right squares of the resized image (for portrait images: top, center and bottom)
for each square, take the 4 corner and the center 224×224 crops, plus the square itself resized to 224×224, plus the mirrored versions of all of these
→4×3×6×2=144 crops per image
note: such aggressive cropping may not be necessary in real applications
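The crop bookkeeping can be spelled out explicitly; a small sketch that just recomputes the 144 figure:

```python
scales = [256, 288, 320, 352]   # shorter side of the resized image
squares_per_scale = 3           # left/center/right (or top/center/bottom for portrait)
crops_per_square = 6            # 4 corners + center crop + the square resized to 224x224
mirrors = 2                     # original + horizontally flipped version

print(len(scales) * squares_per_scale * crops_per_square * mirrors)  # 144 crops per image
```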

Appendix

Network In Network

a convolution filter in a CNN is a generalized linear model (GLM) for the underlying data patch
replacing the GLM with a more potent nonlinear function approximator (in that paper, a multilayer perceptron) can enhance the abstraction ability of the local model
[Figure: linear convolution layer vs. mlpconv layer]
The mlpconv maps the input local patch to the output feature vector with a multilayer perceptron (MLP) consisting of multiple fully connected layers with nonlinear activation functions
the calculation performed by the mlpconv layer:
$$f_{i,j,k_1}^1=\max\left({w_{k_1}^1}^T x_{i,j}+b_{k_1},\ 0\right)$$
$$\vdots$$
$$f_{i,j,k_n}^n=\max\left({w_{k_n}^n}^T f_{i,j}^{n-1}+b_{k_n},\ 0\right)$$
n→the number of layers in the multilayer perceptron
$x_{i,j}$→the input patch centered at location (i,j)
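Reading the first mlpconv layer as an ordinary n×n convolution and each later layer as a 1×1 convolution (the "cross channel parametric pooling" quoted in the notes below), a minimal sketch of one mlpconv block; the channel counts and helper name are made up:

```python
import torch.nn as nn

def mlpconv_block(in_ch, mid_ch, out_ch, kernel_size):
    """One NIN mlpconv block: an ordinary convolution followed by two 1x1 convolutions,
    i.e. the same small MLP applied at every spatial location (kernel_size assumed odd)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, kernel_size, padding=kernel_size // 2), nn.ReLU(inplace=True),
        nn.Conv2d(mid_ch, mid_ch, 1), nn.ReLU(inplace=True),  # 1x1 conv = per-pixel FC layer
        nn.Conv2d(mid_ch, out_ch, 1), nn.ReLU(inplace=True),
    )
```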
Notes:
different locations (i,j) in the first layer share the same $w_{k_1}$, so it is different from a normal FC layer
I think it is exactly a conv. layer, where each neuron can be regarded as a channel
"The cross channel parametric pooling layer is also equivalent to a convolution layer with 1x1 convolution kernel"→I think this is a similar idea
on the internet, I found that the NIN structure is implemented as a stack of conv. layers
the first uses an n×n kernel and the later ones use 1×1 kernels (as in the sketch above)
so why do the authors of this paper say "multiple fully connected layers"? I am confused about it…
more confusing→in GoogLeNet there is no conv. layer before the 1×1 conv. layer, which differs from the structure of NIN; how can a 1×1 conv. layer increase the representational power?

More Confusion

In addition to the question I mentioned above, I have plenty of confusion about this paper, especially the section "3 Motivation and High Level Considerations".
crying cat.jpg
The Inception architecture→approximates a sparse structure implied by Provable Bounds for Learning Some Deep Representations
Their main result states that if the probability distribution of the data-set is representable by a large, very sparse deep neural network, then the optimal network topology can be constructed layer by layer by analyzing the correlation statistics of the activations of the last layer and clustering neurons with highly correlated outputs.
However, I have not found "the optimal network topology" in that paper. What's more, I cannot see the relation between the Inception module and sparsity.

References

Going deeper with convolutions
Network in Network
Provable Bounds for Learning Some Deep Representations
