
base_model
leo_whz
Life never stops; the struggle never ends.
Deep Learning Base Models - NIN
Network in Network, proposed by Min Lin et al., is a highly creative paper on network architecture. Earlier deep learning architectures were typically CNN+FC combinations that keep the two stages separate: the CNN extracts features and the FC layers classify them. NIN instead uses mlpconv layers and global average pooling to merge the two stages organically, which also makes the network more interpretable. The paper's contributions: proposes… (Original post, 2017-09-24)
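As a rough illustration of the two ideas above, here is a minimal PyTorch sketch (the channel sizes and layer counts are illustrative, not the paper's exact configuration): mlpconv stacks 1x1 convolutions after a spatial convolution, acting as a small MLP at every pixel, and global average pooling replaces the FC classifier.

```python
import torch
import torch.nn as nn

# mlpconv: a spatial conv followed by 1x1 convs, which act as a
# small MLP applied at every spatial position.
def mlpconv(in_ch, out_ch, kernel_size, stride, padding):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 1), nn.ReLU(),   # 1x1 conv = per-pixel FC
        nn.Conv2d(out_ch, out_ch, 1), nn.ReLU(),
    )

class TinyNIN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            mlpconv(3, 96, 5, 1, 2), nn.MaxPool2d(3, 2),
            mlpconv(96, num_classes, 3, 1, 1),  # last block emits one map per class
        )
        # global average pooling replaces the FC classifier:
        # each class score is the mean of its feature map.
        self.gap = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        return self.gap(self.features(x)).flatten(1)

logits = TinyNIN()(torch.randn(2, 3, 32, 32))  # -> shape (2, 10)
```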
Deep Learning Base Models - SEP-Nets
Albeit there are already intensive studies on compressing the size of CNNs, the considerable drop of performance is still a key concern in many designs. This paper addresses this concern with several n… (Original post, 2017-11-15)
Deep Learning Base Models - PolyNet
On one hand, the pursuit for very deep networks is met with a diminishing return and increased training difficulty; on the other hand, widening a network would result in a quadratic growth in both comp… (Original post, 2017-11-14)
Deep Learning Base Models - DenseNet
In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. DenseNets have several c… (Original post, 2017-11-05)
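A minimal PyTorch sketch of the dense connectivity pattern described above; the growth rate and layer count are illustrative, and the bottleneck and transition layers of the full DenseNet are omitted.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all previous feature maps
    and contributes growth_rate new channels."""
    def __init__(self, in_ch, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            ch = in_ch + i * growth_rate
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(),
                nn.Conv2d(ch, growth_rate, 3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # feed each layer the concatenation of everything produced so far
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

out = DenseBlock(16, growth_rate=12, num_layers=4)(torch.randn(1, 16, 32, 32))
print(out.shape)  # (1, 16 + 4*12, 32, 32) = (1, 64, 32, 32)
```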
Deep Learning Base Models - ShuffleNet
The new architecture utilizes two proposed operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. The paper uses pointwise group convolutio… (Original post, 2017-11-04)
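The channel shuffle operation is simple enough to show in a few lines; this is a sketch of the reshape-transpose-flatten trick, assuming PyTorch's NCHW tensor layout.

```python
import torch

def channel_shuffle(x, groups):
    """Permute channels so information mixes across groups after a grouped
    1x1 convolution: reshape to (N, g, C/g, H, W), swap the two channel
    axes, then flatten back to (N, C, H, W)."""
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

x = torch.arange(8).float().view(1, 8, 1, 1)
print(channel_shuffle(x, groups=2).flatten().tolist())
# [0, 4, 1, 5, 2, 6, 3, 7]: channels from the two groups are interleaved
```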
Deep Learning Base Models - Inception-V2 (BN)
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. The authors point out that training deep networks has always been difficult, mainly because during training… (Original post, 2017-10-17)
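A sketch of what batch normalization computes at training time (the running statistics used at inference and the parameter plumbing are omitted): each channel is normalized over the batch and spatial dimensions, then rescaled by learnable gamma and beta.

```python
import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each channel to zero mean and unit variance over the
    batch and spatial dims, then rescale, so every layer sees inputs
    with a stable distribution as training proceeds."""
    mean = x.mean(dim=(0, 2, 3), keepdim=True)
    var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
    x_hat = (x - mean) / torch.sqrt(var + eps)
    return gamma * x_hat + beta

x = torch.randn(8, 4, 16, 16) * 5 + 3          # shifted, scaled activations
y = batch_norm(x, gamma=torch.ones(1, 4, 1, 1), beta=torch.zeros(1, 4, 1, 1))
print(y.mean().item(), y.std().item())          # ~0 and ~1
```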
Deep Learning Base Models - MobileNet
MobileNets are based on a streamlined architecture that uses depthwise separable convolutions to build lightweight deep neural networks. The idea builds on depthwise separable convolutions (originating in FactorizedNet) to implement depth… (Original post, 2017-10-24)
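A sketch of a MobileNet-style depthwise separable block (the paper's width and resolution multipliers are omitted), with a parameter count against a standard 3x3 convolution of the same shape.

```python
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, stride=1):
    """Factor a standard conv into a per-channel 3x3 depthwise conv
    followed by a 1x1 pointwise conv that mixes channels."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False),  # depthwise
        nn.BatchNorm2d(in_ch), nn.ReLU(),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),                          # pointwise
        nn.BatchNorm2d(out_ch), nn.ReLU(),
    )

std = nn.Conv2d(64, 128, 3, padding=1, bias=False)
sep = depthwise_separable(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), count(sep))  # 73728 vs. 9152: roughly 8x fewer parameters
```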
Deep Learning Base Models - Inception-V1 (GoogLeNet)
In the same year as VGG, Google independently pushed deep networks further in the depth direction, devising a 22-layer deep neural network named GoogLeNet (not GoogleNet) in homage to LeNet. The architecture makes full use of the computation of the sub-networks inside it, allowing depth to increase further. The design was inspired by multi-scale processing and guided by the Hebbian principle, bringing the network to an optimal state. Overview: GoogLeNet's parameter count, compared with AlexN… (Original post, 2017-10-05)
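A PyTorch sketch of the Inception module GoogLeNet is built from, using the channel sizes of the paper's inception(3a) stage; the stem and the auxiliary classifiers are omitted.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Four parallel branches at different scales (1x1, 3x3, 5x5, pool),
    with 1x1 convs reducing channels before the expensive paths; branch
    outputs are concatenated along the channel axis."""
    def __init__(self, in_ch, c1, c3r, c3, c5r, c5, cp):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU())
        self.b2 = nn.Sequential(nn.Conv2d(in_ch, c3r, 1), nn.ReLU(),
                                nn.Conv2d(c3r, c3, 3, padding=1), nn.ReLU())
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c5r, 1), nn.ReLU(),
                                nn.Conv2d(c5r, c5, 5, padding=2), nn.ReLU())
        self.b4 = nn.Sequential(nn.MaxPool2d(3, 1, 1),
                                nn.Conv2d(in_ch, cp, 1), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

m = InceptionModule(192, 64, 96, 128, 16, 32, 32)   # inception(3a) sizes
print(m(torch.randn(1, 192, 28, 28)).shape)          # (1, 256, 28, 28)
```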
Deep Learning Base Models - ResNet
The depth of representations is of central importance for many visual recognition tasks. Following VGG and GoogLeNet, further attempts at increasing network depth made considerable progress (the deeper the network, the better the results), but also hit a problem: the deeper the network, the more prone it is to vanishing gradients, which makes training harder and produces the "degradation" phenomenon. Note: this can also be sensed from the VGG results… (Original post, 2017-10-13)
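A sketch of the basic residual block, for the identity-shortcut case only (the projection shortcut used when dimensions change is omitted):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """y = F(x) + x: the block only has to learn the residual F, and the
    identity shortcut gives gradients an unobstructed path backwards."""
    def __init__(self, ch):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.BatchNorm2d(ch),
            nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.BatchNorm2d(ch),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.f(x) + x)  # shortcut: add the input back

x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64)(x).shape)  # (1, 64, 56, 56), same shape as input
```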
Deep Learning Base Models - FractalNet
We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are preci… (Original post, 2017-11-01)
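A sketch of that expansion rule, under the assumption that the join operation is an elementwise mean; the drop-path regularization of the paper is omitted.

```python
import torch
import torch.nn as nn

def conv_block(ch):
    return nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                         nn.BatchNorm2d(ch), nn.ReLU())

class Fractal(nn.Module):
    """Expansion rule: f_1 = conv; f_{k+1}(x) = mean(conv(x), f_k(f_k(x))).
    Applying it k times intertwines paths of many different depths."""
    def __init__(self, ch, k):
        super().__init__()
        self.shallow = conv_block(ch)
        self.deep = (nn.Sequential(Fractal(ch, k - 1), Fractal(ch, k - 1))
                     if k > 1 else None)

    def forward(self, x):
        if self.deep is None:
            return self.shallow(x)
        # join the short path and the twice-as-deep path by averaging
        return (self.shallow(x) + self.deep(x)) / 2

print(Fractal(16, k=3)(torch.randn(1, 16, 8, 8)).shape)  # (1, 16, 8, 8)
```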
Deep Learning Base Networks - SqueezeNet
For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. Idea: research on deep convolutional models currently has two main starting points: given a dataset, improve accuracy; given an accuracy level, reduce the number of model parameters. The benefits of fewer parameters: more effective in distributed training: more effic… (Original post, 2017-10-23)
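The building block the paper uses toward the second goal is the Fire module; a sketch with the channel sizes of the paper's fire2 stage:

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """SqueezeNet's building block: a 1x1 'squeeze' layer cuts channels,
    then parallel 1x1 and 3x3 'expand' layers restore them, so few inputs
    ever reach the expensive 3x3 filters."""
    def __init__(self, in_ch, squeeze, expand1x1, expand3x3):
        super().__init__()
        self.squeeze = nn.Sequential(nn.Conv2d(in_ch, squeeze, 1), nn.ReLU())
        self.e1 = nn.Sequential(nn.Conv2d(squeeze, expand1x1, 1), nn.ReLU())
        self.e3 = nn.Sequential(nn.Conv2d(squeeze, expand3x3, 3, padding=1),
                                nn.ReLU())

    def forward(self, x):
        s = self.squeeze(x)
        return torch.cat([self.e1(s), self.e3(s)], dim=1)

# fire2 from the paper: 96 in -> squeeze 16 -> expand 64 + 64 = 128 channels
print(Fire(96, 16, 64, 64)(torch.randn(1, 96, 55, 55)).shape)  # (1, 128, 55, 55)
```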
Deep Learning Base Models - Xception
We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a dept… (Original post, 2017-10-30)
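A sketch of the "extreme Inception" block the paper arrives at; the two details the paper highlights against the usual depthwise separable convolution are the operation order (pointwise first) and the absence of an intermediate nonlinearity.

```python
import torch
import torch.nn as nn

class XceptionSeparable(nn.Module):
    """Inception taken to the extreme: a 1x1 conv maps cross-channel
    correlations, then a 3x3 depthwise conv maps spatial correlations
    for every output channel independently. No nonlinearity sits
    between the two steps."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.depthwise = nn.Conv2d(out_ch, out_ch, 3, padding=1,
                                   groups=out_ch, bias=False)

    def forward(self, x):
        return self.depthwise(self.pointwise(x))

print(XceptionSeparable(32, 64)(torch.randn(1, 32, 28, 28)).shape)
# (1, 64, 28, 28)
```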
Deep Learning Base Networks - Inception-V4
One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction wi… (Original post, 2017-10-21)
Deep Learning Base Models - Inception-V3
Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and… (Original post, 2017-10-19)
Deep Learning Base Models - AlexNet
The paper was published in 2012 by the father of deep learning, Hinton, and his students: Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Hinton's standing in the deep learning field needs no elaboration; a god-tier figure I can only bow to. Abstract: the paper trains a very large, deep convolutional neural network to classify 1.2 million high-resolution images (from ImageNet LSVRC-2010) into 1000 classes. On the test set, the top-1 and top-5… (Original post, 2017-09-13)
Deep Learning Base Models - VGG
Contents: network architecture; model framework; results analysis (single-scale, multi-scale, multi-scale cropping, model fusion, comparison); summary; references. The VGG paper gives a very encouraging conclusion: increasing the depth of a convolutional neural network and using small convolution kernels have a large effect on the final classification performance. The AlexNet paper likewise pointed out at the end that network depth matters greatly for the final classification results; this paper argues that conclusion far more directly. Authors: Karen Simonyan & Andre… (Original post, 2017-09-27)
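The small-kernel argument is easy to check numerically: two stacked 3x3 convolutions cover the same 5x5 receptive field as one 5x5 convolution, with fewer weights and an extra nonlinearity (the channel count here is arbitrary).

```python
import torch.nn as nn

# Two stacked 3x3 convs see a 5x5 receptive field but use fewer weights
# and insert an extra ReLU; three stacked 3x3 convs would cover 7x7.
C = 64
one_5x5 = nn.Conv2d(C, C, 5, padding=2, bias=False)
two_3x3 = nn.Sequential(
    nn.Conv2d(C, C, 3, padding=1, bias=False), nn.ReLU(),
    nn.Conv2d(C, C, 3, padding=1, bias=False), nn.ReLU(),
)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(one_5x5), count(two_3x3))  # 102400 vs. 73728: ~28% fewer weights
```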
Deep Learning Base Models - Summary
Looking back, many contributions to neural networks (especially the core ones) concern the gradient flow (see the comparison sketch after this list):
- Sigmoid saturates, causing vanishing gradients; hence ReLU.
- ReLU's negative half-axis is a dead zone where gradients become 0; hence LeakyReLU and PReLU.
- Emphasizing the stability of gradients and weight distributions led to ELU, and more recently SELU.
- When networks get too deep, gradients fail to propagate down; hence highway networks.
- Drop even the highway's extra parameters and go straight to residuals; hence… (Original post, 2017-11-16)
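For reference, the activations named above are all one-liners in PyTorch; a quick sketch comparing how each treats the negative half-axis:

```python
import torch
import torch.nn.functional as F

x = torch.linspace(-3, 3, 7)
acts = {
    "relu":  torch.relu(x),          # dead zone: output and gradient 0 for x < 0
    "leaky": F.leaky_relu(x, 0.1),   # small negative slope keeps gradients alive
    "elu":   F.elu(x),               # smooth negative saturation, mean nearer 0
    "selu":  torch.selu(x),          # scaled ELU, self-normalizing
}
for name, y in acts.items():
    print(f"{name:5s}", [round(v, 2) for v in y.tolist()])
```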