Paper Reading Notes
Annie-qu
Paper reading: "U-Net: Convolutional Networks for Biomedical Image Segmentation"
Note: these paper translations are for study only; in case of any infringement, please contact the author to remove the post, thank you!
1. Abstract: In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path…
Translated · 2020-10-30 14:51:17
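The abstract's reliance on heavy data augmentation can be sketched with a toy NumPy augmenter (flips and 90° rotations only; the paper itself also uses elastic deformations, and `augment` and its interface here are illustrative, not the authors' code):

```python
import numpy as np

def augment(image, rng):
    """Return a randomly flipped/rotated copy of a training image.

    A minimal stand-in for the heavy data augmentation the abstract
    relies on to use few annotated samples efficiently.
    """
    if rng.random() < 0.5:
        image = np.fliplr(image)        # random horizontal flip
    k = int(rng.integers(0, 4))         # rotate by a random multiple of 90 degrees
    return np.rot90(image, k)

rng = np.random.default_rng(0)
img = np.arange(16).reshape(4, 4)
aug = augment(img, rng)
# shape and pixel multiset are preserved; only orientation changes
```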
Paper reading: "Curriculum Learning"
Original paper: https://dl.acm.org/doi/10.1145/1553374.1553380
1. Abstract: Humans and animals learn much better when the examples are not randomly presented but organized in a meaningful order which illustrates gradually more concepts, and gradually more complex ones.
Translated · 2020-08-03 14:56:24
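The core idea, presenting examples in a meaningful easy-to-hard order rather than randomly, reduces to sorting by some difficulty measure. A minimal sketch (the per-sample `difficulty` scores are hypothetical; in practice they come from a heuristic or a teacher model):

```python
import numpy as np

def curriculum_order(difficulty):
    """Return sample indices sorted from easiest to hardest.

    `difficulty` holds one score per training sample (lower = easier);
    feeding samples in this order, instead of a random shuffle, is the
    curriculum idea from the abstract.
    """
    return np.argsort(difficulty)

# toy example: three samples with made-up difficulty scores
scores = np.array([0.9, 0.1, 0.5])
order = curriculum_order(scores)   # → [1, 2, 0]: easiest sample first
```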
Paper reading: "Very Deep Convolutional Networks for Large-Scale Image Recognition"
1. Abstract: Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3×3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth…
Translated · 2020-07-29 15:36:47
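The argument for very small (3×3) filters can be checked by counting weights: a stack of two 3×3 layers covers the same 5×5 receptive field as one 5×5 layer, and three 3×3 layers cover 7×7, but each stack uses fewer parameters. A quick sketch (assuming the same channel count C for every input and output, as the paper's comparison does):

```python
def stacked_3x3_params(num_layers, channels):
    """Weights in a stack of 3x3 conv layers with C channels throughout."""
    return num_layers * (3 * 3 * channels * channels)

def single_kxk_params(k, channels):
    """Weights in a single kxk conv layer with C channels in and out."""
    return k * k * channels * channels

C = 64
# two 3x3 layers see a 5x5 receptive field with fewer weights than one 5x5:
two_vs_five = (stacked_3x3_params(2, C), single_kxk_params(5, C))
# three 3x3 layers see 7x7 with fewer weights than one 7x7:
three_vs_seven = (stacked_3x3_params(3, C), single_kxk_params(7, C))
```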
Paper reading: "ImageNet Classification with Deep Convolutional Neural Networks"
1. Introduction: Machine learning has become indispensable to current object recognition methods. Image datasets on the order of tens of thousands of samples used to serve well, but million-image datasets such as LabelMe and ImageNet have now emerged, so object recognition research at that scale has begun. However, the immense complexity of the object recognition task means that this problem cannot be specified even by a dataset as large as ImageNet…
Translated · 2020-07-23 11:18:40
Paper reading: "Learning representations by back-propagating errors"
Back-propagation: the procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. Internal 'hidden' units are not part of…
Original post · 2020-07-16 10:40:08
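The repeated weight adjustment the abstract describes, minimizing the squared difference between the actual and desired output vectors, can be sketched for a tiny one-hidden-layer network in NumPy (all sizes, the learning rate, and the step count are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # 4 samples, 3 inputs
y = rng.normal(size=(4, 1))          # desired output vectors
W1 = rng.normal(size=(3, 5)) * 0.1   # input -> hidden weights
W2 = rng.normal(size=(5, 1)) * 0.1   # hidden -> output weights

losses, lr = [], 0.1
for _ in range(50):
    h = np.tanh(x @ W1)              # internal 'hidden' units
    out = h @ W2                     # actual output vector
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # back-propagate the error signal to both weight matrices
    d_out = 2 * err / len(x)
    gW2 = h.T @ d_out
    gW1 = x.T @ (d_out @ W2.T * (1 - h ** 2))   # tanh' = 1 - tanh^2
    W2 -= lr * gW2                   # repeated weight adjustment
    W1 -= lr * gW1
# `losses` decreases as the weights are repeatedly adjusted
```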
Paper reading: "Deep Residual Learning for Image Recognition"
1. Abstract: We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. (The paper proposes a residual learning structure for training networks deeper than those used in the past.) We show that these residual networks are easier to optimize, and can gain accuracy from…
Original post · 2020-07-15 10:38:55
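The residual formulation y = F(x) + x can be sketched in a few lines of NumPy (the two-layer F and its shapes are illustrative). When the residual branch outputs zero, the block degenerates to the identity mapping, which is part of why very deep stacks become easier to optimize:

```python
import numpy as np

def residual_block(x, W1, W2):
    """y = F(x, {W_i}) + x, with a hypothetical two-layer F.

    F is weight -> ReLU -> weight; the shortcut adds the input back
    unchanged, so the block only has to learn the residual.
    """
    f = np.maximum(0, x @ W1) @ W2   # residual branch F(x)
    return f + x                     # identity shortcut

x = np.ones((2, 4))
zeros = np.zeros((4, 4))
# with zero weights the residual branch vanishes and the block is the identity
y = residual_block(x, zeros, zeros)
```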
Paper reading: "Going deeper with convolutions"
I actually read this paper before the Batch Normalization one; after finishing it, I searched around as usual and saw someone recommend four related papers, which led me to read BN and ResNet. Having just finished ResNet, I am reviewing this earlier read before sorting ResNet out. First, this is GoogLeNet, Google's new convolutional neural network from ImageNet 2014; the name makes its close relationship to LeNet apparent. The paper also proposes a new deep convolutional network structure called Inception, which is the key to GoogLeNet. The Hebbian…
Original post · 2020-07-08 20:56:49
Paper reading: "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift"
Before reading the paper myself, I first got a sense of its main ideas from an expert's online walkthrough; my English is not good enough, embarrassingly... That post is at https://blog.csdn.net/happynear/article/details/44238541. Mixing a little of my own understanding with that author's summary, I am recording my study here; everyone is welcome to discuss. First, citing that author's outline, my rough understanding of the paper is as follows: the paper mainly...
Translated · 2020-06-29 17:28:13
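The transform at the heart of the paper, normalizing each activation over the mini-batch and then scaling and shifting with learned parameters γ and β, can be sketched as follows (training-mode statistics only; the running averages used at inference time are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift.

    x has shape (batch, features); gamma and beta are the learned
    scale/shift parameters from the paper, and eps guards against
    division by zero.
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# toy mini-batch: two features on very different scales
x = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
out = batch_norm(x, gamma=1.0, beta=0.0)
# each feature now has (approximately) zero mean and unit variance
```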