deep-learning
fourye007
Work for fun, Live for love
What is Explaining Away? (A Personal Understanding)
There are three events A, B, and C, where A ⊥ B (A and B are independent) and C = A ∪ B. If event C occurs, then by explaining away the probabilities that A and B occurred both increase; if both C and A occur, then by explaining away the probability that B occurred decreases. In the figure, red marks the occurrence of C, which is jointly influenced by A and B, so the explanation above is easy to see. Image source: https:/… (Original, 2017-05-09)
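A quick numerical check of the effect (a minimal sketch; the priors P(A) = P(B) = 0.1 are illustrative assumptions, not values from the post):

```python
# Explaining away with C = A or B, where A and B are independent events.
p_a, p_b = 0.1, 0.1                 # hypothetical priors

p_c = 1 - (1 - p_a) * (1 - p_b)     # P(C) = 0.19

# Observing C raises the belief in B (B implies C, so P(B and C) = P(B)):
p_b_given_c = p_b / p_c             # ~0.526

# Also observing A fully explains C, so B drops back to its prior:
# given A, C is certain, and B is independent of A.
p_b_given_c_and_a = p_b             # 0.1

print(p_b_given_c, p_b_given_c_and_a)
```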
Network in Network
Key problems: CNN implicitly makes the assumption that the latent concepts are linearly separable, but the data for the same concept often live on a nonlinear manifold, therefore the representations that cap… (Original, 2017-11-03)
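NIN addresses this by sliding a small MLP over the feature map, which is equivalent to stacking 1×1 convolutions. A minimal PyTorch sketch of one mlpconv block, with illustrative channel and kernel sizes:

```python
import torch.nn as nn

# One "mlpconv" block: a conv followed by two 1x1 convs, which act as a
# per-pixel MLP across channels (channel sizes are illustrative).
mlpconv = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=5, padding=2), nn.ReLU(inplace=True),
    nn.Conv2d(96, 96, kernel_size=1), nn.ReLU(inplace=True),
    nn.Conv2d(96, 96, kernel_size=1), nn.ReLU(inplace=True),
)
```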
Maxout Networks
Motivation: in multiple dimensions a maxout unit can approximate arbitrary convex functions. Contributions: maxout is cross-channel pooling; maxout enhances dropout's abilities as a model averaging techniq… (Original, 2017-11-03)
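A minimal PyTorch sketch of a maxout unit as cross-channel pooling (the piece count k and tensor shapes are assumptions for illustration):

```python
import torch

def maxout(x, k):
    """Maxout as cross-channel pooling: take the max over k consecutive
    feature maps (x has shape [batch, channels, ...], channels % k == 0)."""
    b, c = x.shape[0], x.shape[1]
    return x.view(b, c // k, k, *x.shape[2:]).max(dim=2).values

x = torch.randn(8, 12, 32, 32)
print(maxout(x, 4).shape)   # torch.Size([8, 3, 32, 32])
```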
VGG--VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION
Key points: we fix other parameters of the architecture and steadily increase the depth of the network by adding more convolutional layers, which is feasible due to the use of very small (3 × 3) convol… (Original, 2017-11-03)
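A quick parameter count showing why stacking small kernels is cheap: two 3 × 3 layers cover the same 5 × 5 receptive field as one 5 × 5 layer with roughly 28% fewer weights (the channel count is an illustrative assumption, biases ignored):

```python
C = 64                            # illustrative in/out channel count
one_5x5 = 5 * 5 * C * C           # 102400 weights
two_3x3 = 2 * (3 * 3 * C * C)     # 73728 weights, plus an extra nonlinearity
print(one_5x5, two_3x3)
```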
ResNet--Deep Residual Learning for Image Recognition
Key questions: vanishing/exploding gradients hamper convergence from the beginning as the network becomes deeper; with increasing network depth, accuracy gets saturated (which might be unsurpri… (Original, 2017-11-03)
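A minimal sketch of the paper's core idea, a residual block computing relu(F(x) + x) with an identity shortcut (layer sizes are illustrative; projection shortcuts for changing dimensions are omitted):

```python
import torch.nn as nn

class BasicBlock(nn.Module):
    """A minimal residual block: output = relu(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(out + x)   # identity shortcut
```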
DenseNet--Densely Connected Convolutional Networks
Compelling advantages: alleviates the vanishing-gradient problem; strengthens feature propagation; encourages feature reuse; substantially reduces the number of parameters. Requires fewer parameters and no need to… (Original, 2017-11-03)
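A minimal sketch of the dense connectivity pattern, where each layer receives the concatenation of all preceding feature maps (class and parameter names are illustrative; bottleneck and transition layers are omitted):

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """Each layer sees the concatenation of all preceding feature maps
    and adds growth_rate new maps to the running collection."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, growth_rate, 3, padding=1)

    def forward(self, features):            # features: list of earlier maps
        x = torch.cat(features, dim=1)      # dense connectivity
        return torch.relu(self.conv(x))
```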
SQUEEZENET
Compelling advantages: smaller CNNs require less communication across servers during distributed training; smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car… (Original, 2017-11-05)
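The building block behind the small model size is the Fire module; a minimal PyTorch sketch (channel sizes follow the paper's fire2 configuration, but treat them as illustrative):

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """SqueezeNet Fire module: a 1x1 'squeeze' layer feeding parallel
    1x1 and 3x3 'expand' layers whose outputs are concatenated."""
    def __init__(self, in_ch=96, squeeze=16, expand=64):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze, 1)
        self.expand1x1 = nn.Conv2d(squeeze, expand, 1)
        self.expand3x3 = nn.Conv2d(squeeze, expand, 3, padding=1)

    def forward(self, x):
        s = torch.relu(self.squeeze(x))
        return torch.cat([torch.relu(self.expand1x1(s)),
                          torch.relu(self.expand3x3(s))], dim=1)
```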
Classification: A Global View
Fine-grained recognition: requires recognition of highly localized attributes of objects while being invariant to their pose and location in the image. Part-based models construct representations by loca… (Original, 2017-11-20)
Highly Efficient Forward and Backward Propagation of Convolutional Neural Networks for Pixelwise Classification
Contributions: eliminates all the redundant computation in convolution and pooling on images by introducing novel d-regularly sparse kernels; it generates exactly the same results as those by patch-by-pa… (Original, 2017-11-06)
AlexNet--ImageNet Classification with Deep Convolutional Neural Networks
LRN (Local Response Normalization): applied after the ReLU nonlinearity in certain layers. Overlapping pooling: we generally observe during training that models with overlapping… (Original, 2017-11-03)
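A minimal sketch of the two tricks together, using PyTorch's built-in LRN with the paper's hyperparameters and an overlapping 3×3/stride-2 max pool:

```python
import torch.nn as nn

# First AlexNet stage: conv -> ReLU -> LRN -> overlapping max pooling.
# A 3x3 pooling window with stride 2 means adjacent windows overlap.
block = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4),
    nn.ReLU(inplace=True),
    nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
    nn.MaxPool2d(kernel_size=3, stride=2),
)
```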
DSOD: Learning Deeply Supervised Object Detectors from Scratch
Key problems: 1. Limited structure design space. 2. Learning bias: as both the loss functions and the category distributions of the classification and detection tasks differ, we argue that this will l… (Original, 2017-10-29)
The Role of the RBF Kernel in SVMs: A Not-So-Rigorous Explanation
http://discussions.youdaxue.com/t/svm-rbf-kernel/6088 While studying machine learning, many of us, myself included, have wondered what the RBF kernel function actually does and what its different parameters mean. Last week I attended a summer camp in Shanghai, and this post summarizes our discussion there. First, we need to know what a Support Vector Machine actually is. Anyone who has watched the course videos knows… (Reposted, 2017-05-10)
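At its core the RBF kernel just measures a similarity that decays with squared distance; a minimal NumPy sketch (the gamma value is an illustrative assumption):

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    """RBF (Gaussian) kernel: k(x, y) = exp(-gamma * ||x - y||^2).
    Larger gamma -> narrower similarity bumps -> a more flexible,
    more overfitting-prone decision boundary."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

print(rbf_kernel([0.0, 0.0], [1.0, 1.0]))   # exp(-1) ~ 0.368
```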
SPP(Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition)
Introduction: In a typical CNN architecture, the convolutional layers are followed by fully connected layers. Because a fully connected layer has a fixed number of features, the network input must have a fixed size. In practice, input images rarely match the required size, and the usual workarounds are cropping (crop) and stretching (warp). Both are problematic: the aspect ratio and the size of the input image are changed, which distorts the original ima… (Original, 2017-07-16)
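SPP removes the fixed-size constraint by pooling any H × W feature map into a fixed-length vector; a minimal sketch using adaptive pooling (the 4×4/2×2/1×1 pyramid levels follow the paper, while the function name is my own):

```python
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(x, levels=(4, 2, 1)):
    """Pool a [batch, C, H, W] feature map into a fixed-length vector
    regardless of H and W: max-pool on a 4x4, 2x2 and 1x1 grid and
    concatenate the results."""
    b, c = x.shape[:2]
    feats = [F.adaptive_max_pool2d(x, n).reshape(b, c * n * n)
             for n in levels]
    return torch.cat(feats, dim=1)   # length = C * (16 + 4 + 1)

print(spatial_pyramid_pool(torch.randn(2, 256, 13, 13)).shape)  # [2, 5376]
```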
fast-rcnn 详解
Introduction: Image detection faces two major challenges: 1. a large number of candidate objects (proposals) must be processed; 2. the locations of these candidates are usually coarse and need refinement. A key feature of this paper: in a single stage, the model classifies each proposal and regresses its spatial location at the same time. RCNN drawbacks: training is a multi-stage pip… (Original, 2017-08-06)
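A minimal sketch of that single-stage idea as a multi-task loss, classification plus smooth-L1 box regression on foreground RoIs (function and parameter names are assumptions, and the paper's class-specific box selection is omitted):

```python
import torch
import torch.nn.functional as F

def fast_rcnn_loss(cls_scores, bbox_preds, labels, bbox_targets, lam=1.0):
    """Multi-task loss L = L_cls + lambda * L_loc: log loss over classes
    plus smooth-L1 regression, applied only to foreground RoIs (labels > 0)."""
    loss_cls = F.cross_entropy(cls_scores, labels)
    fg = labels > 0
    if fg.any():
        loss_loc = F.smooth_l1_loss(bbox_preds[fg], bbox_targets[fg])
    else:
        loss_loc = bbox_preds.sum() * 0.0   # keep the graph, zero loss
    return loss_cls + lam * loss_loc
```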
Transposed Convolution, Fractionally Strided Convolution or Deconvolution
The concept of deconvolution first appeared in Zeiler's 2010 paper Deconvolutional Networks, though that paper did not assign it this name; the term deconvolution was formally used in his later work (Adaptive deconvolutional networks for mid a… (Reposted, 2017-10-08)
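A minimal PyTorch sketch of a transposed (fractionally strided) convolution used for 2× upsampling (channel sizes and shapes are illustrative):

```python
import torch
import torch.nn as nn

# A transposed convolution that doubles spatial resolution; it is the
# shape-inverse counterpart of Conv2d(16, 8, 4, stride=2, padding=1).
up = nn.ConvTranspose2d(16, 8, kernel_size=4, stride=2, padding=1)
x = torch.randn(1, 16, 14, 14)
print(up(x).shape)   # torch.Size([1, 8, 28, 28])
```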
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Key problems: the distribution of each layer's inputs changes during training as the parameters of the previous layers change. This slows down training by requiring lower learning rates and carefu… (Original, 2017-10-29)
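A minimal sketch of the normalization step itself, training-mode statistics only (the learnable scale and shift gamma/beta are passed in; running statistics for inference are omitted):

```python
import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    """Training-mode batch normalization over the batch dimension:
    x_hat = (x - mean) / sqrt(var + eps), then scale and shift."""
    mean = x.mean(dim=0, keepdim=True)
    var = x.var(dim=0, unbiased=False, keepdim=True)
    return gamma * (x - mean) / torch.sqrt(var + eps) + beta
```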
Going deeper with convolutions
Key problems: a bigger size has two disadvantages: prone to over-fitting with less labeled data, and computation-consuming. Method: the fundamental way of solving both issues would be by ultimately moving from… (Original, 2017-10-30)
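A minimal sketch of the resulting Inception idea: parallel filters of several sizes whose outputs are concatenated along channels (this is the naive version; the 1×1 reduction layers of the full module are omitted and channel sizes are illustrative):

```python
import torch
import torch.nn as nn

class Inception(nn.Module):
    """Naive Inception module: parallel 1x1, 3x3, 5x5 convs and a 3x3
    max pool, concatenated along the channel dimension."""
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 64, 1)
        self.b3 = nn.Conv2d(in_ch, 64, 3, padding=1)
        self.b5 = nn.Conv2d(in_ch, 32, 5, padding=2)
        self.pool = nn.MaxPool2d(3, stride=1, padding=1)

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x),
                          self.b5(x), self.pool(x)], dim=1)
```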
Training Region-based Object Detectors with Online Hard Example Mining
OHEM: Training Region-based Object Detectors with Online Hard Example Mining (Original, 2017-10-14)
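A minimal sketch of the OHEM selection step, keeping only the highest-loss examples for the backward pass (the function name and batch size are assumptions; the paper's NMS-based deduplication and read-only forward pass are omitted):

```python
import torch

def ohem_select(losses, batch_size=128):
    """Online hard example mining: given per-RoI losses, keep only the
    top-k hardest (highest-loss) examples and average their loss, so
    gradients flow only through hard examples."""
    k = min(batch_size, losses.numel())
    _, hard_idx = torch.topk(losses, k)
    return losses[hard_idx].mean()
```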
Focal Loss for Dense Object Detection
Focal Loss for Dense Object Detection (Original, 2017-10-14)
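A minimal sketch of the focal loss for binary targets, FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), with the paper's default alpha = 0.25 and gamma = 2:

```python
import torch

def focal_loss(p, targets, alpha=0.25, gamma=2.0):
    """Focal loss for binary targets in {0, 1}: the (1 - p_t)^gamma factor
    down-weights easy, well-classified examples so training focuses on
    the hard ones (p holds predicted probabilities)."""
    p_t = torch.where(targets == 1, p, 1 - p)
    alpha_t = torch.where(targets == 1,
                          torch.full_like(p, alpha),
                          torch.full_like(p, 1 - alpha))
    return (-alpha_t * (1 - p_t) ** gamma
            * torch.log(p_t.clamp(min=1e-8))).mean()
```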
YOLOv3
Differences from previous versions: Bounding boxes: object scores are predicted with logistic regression, and the bounding-box center offsets go through a sigmoid. Class prediction: binary cross-entropy loss, with a sigmoid giving each class probability. Proposal overlap problem: each scale predicts several boxes. Featur… (Original, 2018-04-16)
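A minimal sketch of how one raw prediction is decoded under that scheme, with sigmoids for the center offsets, objectness, and independent (multi-label) class probabilities (function and argument names are assumptions):

```python
import torch

def decode_yolo_v3(raw, cell_xy, anchor_wh):
    """Decode one raw YOLOv3 prediction [tx, ty, tw, th, t_obj, t_cls...]:
    sigmoid for the box center (offset within the grid cell), exp for the
    size relative to the anchor, and sigmoid for objectness and each
    class score instead of a softmax."""
    xy = torch.sigmoid(raw[:2]) + cell_xy     # box center in grid units
    wh = anchor_wh * torch.exp(raw[2:4])      # box width/height
    obj = torch.sigmoid(raw[4])               # objectness score
    cls = torch.sigmoid(raw[5:])              # independent class probs
    return xy, wh, obj, cls
```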