Paper
jerry173985
If you look into the original ResNet paper (http://openaccess.thecvf.com/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf), they use strided convolutions to downsample the image. The main path is downsampled automatically by these… Original · 2020-12-07 21:12:23 · 144 reads · 0 comments
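The downsampling the excerpt describes can be sketched as a minimal NumPy convolution with stride 2 (an illustrative toy, not the paper's implementation; `strided_conv2d` and the averaging kernel are assumptions for the example):

```python
import numpy as np

def strided_conv2d(x, kernel, stride=2):
    """Valid 2D convolution with a stride; striding shrinks the spatial dims."""
    kh, kw = kernel.shape
    h = (x.shape[0] - kh) // stride + 1
    w = (x.shape[1] - kw) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

x = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 "image"
k = np.ones((3, 3)) / 9.0                      # simple averaging kernel
y = strided_conv2d(x, k, stride=2)
print(y.shape)  # (3, 3) -- the 8x8 input is downsampled by the stride
```

With stride 1 the same kernel would give a 6x6 output; the stride alone does the downsampling, with no separate pooling layer.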
Paper -- DenseNet: Densely Connected Convolutional Networks
Abstract: DenseNet breaks away from the fixed approaches of deepening the network (ResNet) or widening it (Inception) to improve performance. From the perspective of features, it improves the network through feature reuse and bypass (By… Original · 2020-12-07 18:45:52 · 284 reads · 0 comments
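The feature reuse the abstract mentions comes from each layer receiving the concatenation of all earlier feature maps. A minimal NumPy sketch of a dense block (dense-layer stand-ins for convolutions; `dense_block` and the growth-rate value are assumptions for the example):

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """Each layer sees the channel-wise concat of ALL earlier outputs."""
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)          # feature reuse
        w = rng.standard_normal((inp.shape[-1], growth_rate)) * 0.01
        features.append(np.maximum(inp @ w, 0.0))        # linear map + ReLU
    return np.concatenate(features, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))       # 4 positions, 16 input channels
y = dense_block(x, num_layers=3, growth_rate=12, rng=rng)
print(y.shape)  # (4, 16 + 3*12) = (4, 52)
```

Each layer only adds `growth_rate` new channels, which is why DenseNet stays parameter-efficient despite the dense connectivity.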
Paper -- Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
An article on batch normalization, detailing the specific advantages of using BN. 1. The relationship between weight initialization and preprocessing methods in neural networks: if you have run DNN experiments, you may have found that preprocessing the data, such… Original · 2020-12-07 17:33:19 · 203 reads · 0 comments
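The core BN operation from the paper is to normalize each feature over the mini-batch, then scale and shift with learnable parameters. A minimal NumPy sketch of the training-time forward pass (inference-time running statistics are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then scale by gamma, shift by beta."""
    mean = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                      # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)  # normalized activations
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 4)) * 5 + 10    # batch of 32, 4 features, shifted/scaled
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(np.allclose(y.mean(axis=0), 0, atol=1e-6))  # True -- zero mean per feature
```

With `gamma=1, beta=0` the output has zero mean and unit variance per feature regardless of the input's scale, which is what reduces the sensitivity to weight initialization the excerpt alludes to.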
Paper -- Deep Residual Learning for Image Recognition
Now, residual connections are everywhere: not only in image recognition, but also in transformers. Problem: people knew that if you could increase the depth of a neural network, you could make it perform better, generalize better, and reach a lower training… Original · 2020-12-07 12:22:44 · 128 reads · 0 comments
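The residual connection itself is just `y = F(x) + x`: the block learns a correction on top of an identity shortcut. A minimal NumPy sketch with dense-layer stand-ins for the paper's convolutions (`residual_block` and the weight shapes are assumptions for the example):

```python
import numpy as np

def residual_block(x, w1, w2):
    """y = ReLU(F(x) + x): the identity shortcut carries x past the learned F."""
    h = np.maximum(x @ w1, 0.0)   # first layer + ReLU
    fx = h @ w2                    # second layer, pre-addition
    return np.maximum(fx + x, 0.0) # add the shortcut, then the final ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))
w1 = rng.standard_normal((8, 8)) * 0.01
w2 = rng.standard_normal((8, 8)) * 0.01
y = residual_block(x, w1, w2)
print(y.shape)  # (2, 8) -- same shape as the input, so blocks stack freely
```

Because the shortcut passes `x` through unchanged, gradients reach earlier layers directly, which is what let ResNet train at depths where plain stacks degraded.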