How transferable are features in deep neural networks? — Personal Notes

1 Title:

        How transferable are features in deep neural networks? (Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson) (Advances in Neural Information Processing Systems (NIPS))

2 Conclusion

        Although the paper does not introduce a novel method, its experiments yield several conclusions that offer significant guidance for future research in deep learning and deep transfer learning.

  1. The first three layers of a neural network learn mostly general features, so transferring them works well.

  2. Introducing fine-tuning into deep transfer networks results in a substantial improvement in performance, potentially surpassing the performance of the original network.

  3. Fine-tuning proves effective in mitigating the dissimilarities between datasets.

  4. Initializing a network with transferred weights outperforms random weight initialization.

  5. Transferring network layers can speed up learning and optimization of the network.
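The paper's experimental protocol behind these conclusions can be sketched in code. Below is a minimal, framework-free illustration (not the authors' implementation) of the AnB / AnB+ setup: copy the first n layers from a network trained on task A, keep the rest randomly initialized for task B, and either freeze the copied layers (AnB) or allow fine-tuning (AnB+). The layer shapes and the 4-layer depth are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-layer "base A" network, represented only by its weight
# matrices (shapes are arbitrary for this sketch).
base_a = [rng.standard_normal((8, 8)) for _ in range(4)]

def make_transfer_net(base, n_transfer, fine_tune):
    """Build an AnB-style network: copy the first n_transfer layers
    from `base`; the remaining layers are freshly initialized.
    If fine_tune is False, the copied layers are marked frozen (AnB);
    if True, they may also be updated during training (AnB+)."""
    net = []
    for i in range(len(base)):
        if i < n_transfer:
            layer = {"w": base[i].copy(), "frozen": not fine_tune}
        else:
            layer = {"w": rng.standard_normal(base[i].shape), "frozen": False}
        net.append(layer)
    return net

# A3B: transfer the first 3 layers and freeze them.
a3b = make_transfer_net(base_a, 3, fine_tune=False)

# A3B+: same transfer, but the copied layers are fine-tuned as well.
a3b_plus = make_transfer_net(base_a, 3, fine_tune=True)
```

During training, an optimizer would simply skip any layer with `frozen=True`; the paper's conclusion 2 corresponds to the observation that the AnB+ variant (all layers trainable) can surpass the original base-B network.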

3 Good Sentences

        1. Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. (Background)

        2. However, in this study we aim not to maximize absolute performance, but rather to study transfer results on a well-known architecture. (Research methods)

        3. The results yield many different conclusions. In each of the following interpretations, we compare the performance to the base case (white circles and dotted line in Figure 2). (Results and introduction to the discussion)

Overall summary:

Fine-tuning works very well for deep transfer learning.

For AnB: the performance drop when splitting at layers 3-5 is caused by co-adaptation. Features in layers 3-5 are jointly learned with their neighbors, so splitting the network there breaks these co-adapted features and reduces generalization, hence the drop. The drop at layers 6-7 has a different cause: these layers are close to the final layer and are specific to the task and dataset.
