[Deep Learning Paper Notes][Visualizing] A Guided Reading of Network-Visualization Papers

There are several ways to understand and visualize CNNs.


1 Visualizing Activations

Show the activations of the network during the forward pass. It turns out that the activations usually start out looking relatively blobby and dense, but as training progresses they become more sparse and localized.


If some activations are all zero for many different inputs, this can be a symptom of a learning rate that is too high (dead ReLUs).
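The two checks above can be sketched with a forward hook. This is a minimal sketch using a tiny random CNN as a stand-in for a trained model; the layer names and sizes are illustrative, not from the source.

```python
# Capture activations with a forward hook, then flag channels that are
# zero across the whole batch -- a possible symptom of dead ReLUs.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model[1].register_forward_hook(save_activation("relu1"))

x = torch.randn(4, 3, 32, 32)       # a batch of 4 random "images"
model(x)

act = activations["relu1"]          # shape: (4, 8, 32, 32)
# A channel is "dead" if its activation is zero for every input in the batch.
dead = (act.amax(dim=(0, 2, 3)) == 0)
print("dead channels:", dead.sum().item(), "of", act.shape[1])
```

In practice you would feed many real images through a trained network and inspect the stored activation maps (e.g., with `plt.imshow`) rather than counting dead channels on random data.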


2 Visualize Patches that Maximally Activate Neurons

See [Girshick et al.].


3 Visualize the Raw Weights

This is only interpretable for the first layer's weights (Gabor-like features), since the first layer operates directly on the raw image. You can still do it for higher layers, but it is less interesting: those filters look at the previous layer's activation values, not the raw image.


Noisy patterns can be an indicator of a network that hasn’t been trained for long enough, or possibly a very low regularization strength that may have led to overfitting.
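A minimal sketch of visualizing first-layer weights, assuming an untrained layer as a stand-in for a real `conv1`: normalize each filter to [0, 1] and tile the filters into one grid image ready for plotting.

```python
# Tile first-layer conv filters into a single displayable grid.
import torch
import torch.nn as nn

conv1 = nn.Conv2d(3, 16, kernel_size=7)   # stand-in for a trained first layer
w = conv1.weight.detach()                 # (16, 3, 7, 7)

# Normalize each filter independently so colors are comparable across filters.
w_min = w.amin(dim=(1, 2, 3), keepdim=True)
w_max = w.amax(dim=(1, 2, 3), keepdim=True)
w = (w - w_min) / (w_max - w_min + 1e-8)

# Arrange the 16 filters into a 4x4 grid of 7x7 RGB patches: (28, 28, 3).
grid = w.reshape(4, 4, 3, 7, 7).permute(0, 3, 1, 4, 2).reshape(4 * 7, 4 * 7, 3)
print(grid.shape)
# plt.imshow(grid) would now display the filter bank.
```

On a well-trained network the grid shows smooth, oriented edge and color-blob filters; the noisy patterns mentioned above show up here as random-looking patches.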

4 Visualize the Representation Space
For example, apply t-SNE to fc7 features (crucially, the features after the ReLU non-linearity). t-SNE embeds high-dimensional points in a low-dimensional space so that, locally, pairwise distances are preserved.


ConvNets can be interpreted as gradually transforming the images into a representation in which the classes are separable by a linear classifier. The t-SNE plot shows that the similarities are more often class-based and semantic rather than pixel- and color-based.
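A minimal sketch of the embedding step, assuming scikit-learn is available; random vectors stand in for real fc7 features here.

```python
# Embed high-dimensional "fc7" features into 2-D with t-SNE so that
# nearby points in feature space stay nearby in the plot.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 64)).astype(np.float32)  # 100 images x 64-dim features

emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(feats)
print(emb.shape)
# Scatter-plot `emb`, coloring each point by its class label (or pasting
# image thumbnails at each point), to inspect whether classes cluster.
```

Note that `perplexity` must be smaller than the number of samples; with real fc7 features (4096-dim) a PCA pre-reduction is a common first step.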

5 Occlusion Experiments

See [Zeiler and Fergus, 2013].
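The idea in that paper can be sketched as follows: slide a gray occluder over the image and record how the score of the class of interest changes. This is a toy sketch with a tiny random model standing in for a trained network, and a coarse stride for brevity.

```python
# Occlusion experiment: low class probability under the occluder means
# that region of the image was important for the class score.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

img = torch.randn(1, 3, 32, 32)
target_class, patch, stride = 0, 8, 8
heatmap = torch.zeros(32 // stride, 32 // stride)

with torch.no_grad():
    for i in range(0, 32, stride):
        for j in range(0, 32, stride):
            occluded = img.clone()
            occluded[:, :, i:i + patch, j:j + patch] = 0.5  # gray square
            prob = model(occluded).softmax(dim=1)[0, target_class]
            heatmap[i // stride, j // stride] = prob

print(heatmap.shape)  # visualize `heatmap` as a probability map over positions
```

In the original setup the occluder is slid densely (stride 1 or a few pixels), which is slower but gives a smooth per-pixel importance map.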

6 Deconv Approaches
This topic is covered in [Simonyan et al. 2014], [Dosovitskiy et al. 2015], and [Zeiler and Fergus, 2013]. These approaches are image-specific and require only a single backward pass.

6.1 Goal
Compute the gradient of any arbitrary neuron in the network with respect to the image.


This shows the input pattern that most strongly activates the neuron (and hence the part of the image that is most discriminative for it). That is, the gradient tells us that taking a small step from the image in the gradient's direction will have a locally positive influence on the neuron's activation. Equivalently, the gradient indicates which pixels need to change the least to affect the class score the most.


6.2 Method
Pick a layer and set the gradient there to all zeros except for a 1 at the neuron of interest, then backpropagate to the image. The backward pass of the ReLU layers is modified (this is where the deconvnet and guided-backpropagation variants differ); the rest of the backward pass to the image is unchanged.
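A minimal sketch of this method, assuming the guided-backpropagation variant of the modified ReLU backward pass (pass back only positive gradients at positions whose forward input was positive). The network here is a single random conv layer, purely for illustration.

```python
# Gradient of one neuron w.r.t. the image, with a guided-backprop ReLU.
import torch
import torch.nn as nn

class GuidedReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Keep only positive gradients flowing through positive inputs.
        return grad_out * (x > 0) * (grad_out > 0)

conv = nn.Conv2d(3, 8, 3, padding=1)
img = torch.randn(1, 3, 16, 16, requires_grad=True)

act = GuidedReLU.apply(conv(img))        # forward to the layer of interest
# Gradient at this layer: all zeros except a 1 at the neuron of interest.
seed = torch.zeros_like(act)
seed[0, 0, 8, 8] = 1.0
act.backward(seed)

saliency = img.grad.abs().amax(dim=1)[0]  # per-pixel importance map
print(saliency.shape)
```

Replacing `GuidedReLU.backward` with `grad_out * (x > 0)` gives plain backprop; using `grad_out * (grad_out > 0)` alone gives the deconvnet rule.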


7 Optimization Approaches
See [Simonyan et al. 2014]. This approach is not specific to any particular image.
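The core loop can be sketched as gradient ascent on the image itself: freeze the weights, and repeatedly nudge the pixels to increase one class score, with a small L2 penalty to keep the image well-behaved. A tiny random model stands in for a trained network; the hyperparameters are illustrative.

```python
# Class visualization by gradient ascent on the input image.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
for p in model.parameters():
    p.requires_grad_(False)              # only the image is optimized

img = torch.zeros(1, 3, 32, 32, requires_grad=True)
target_class, lr, l2 = 3, 0.5, 1e-3

for _ in range(50):
    # Maximize the class score, minus an L2 penalty on the image.
    score = model(img)[0, target_class] - l2 * img.pow(2).sum()
    score.backward()
    with torch.no_grad():
        img += lr * img.grad             # ascend the class score
        img.grad.zero_()

print(img.shape)                          # the synthesized image is in `img`
```

Starting from zeros (or the mean image) rather than a specific photo is what makes this approach image-independent; periodic blurring or clipping is often added as an extra regularizer.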

8 References
[1] F.-F. Li, A. Karpathy, and J. Johnson. http://cs231n.stanford.edu/slides/winter1516_lecture9.pdf
