Daily notes -- continuously updated

2018-05-16

2018-05-15

Paper references

【Convolutional Networks Can Learn to Generate Affinity Graphs for Image Segmentation】

【Network In Network】

  • Convolutional neural networks (CNNs) [1] consist of alternating convolutional layers and pooling layers.
  • Convolution layers take inner product of the linear filter and the underlying receptive field followed by a nonlinear activation function at every local portion of the input.
  • The resulting outputs are called feature maps.
  • The convolution filter in CNN is a generalized linear model (GLM) for the underlying data patch, and we argue that the level of abstraction is low with GLM.
  • By abstraction we mean that the feature is invariant to the variants of the same concept [2].
  • Replacing the GLM with a more potent nonlinear function approximator can enhance the abstraction ability of the local model.
  • GLM can achieve a good extent of abstraction when the samples of the latent concepts are linearly separable, i.e. the variants of the concepts all live on one side of the separation plane defined by the GLM.
  • Thus conventional CNN implicitly makes the assumption that the latent concepts are linearly separable.
  • However, the data for the same concept often live on a nonlinear manifold, therefore the representations that capture these concepts are generally highly nonlinear functions of the input.
  • In NIN, the GLM is replaced with a micro network structure which is a general nonlinear function approximator (see the sketch after this list).
  • In this work, we choose multilayer perceptron [3] as the instantiation of the micro network, which is a universal function approximator and a neural network trainable by back-propagation.
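A minimal sketch of the NIN "mlpconv" idea, assuming PyTorch; the channel counts and kernel size below are illustrative, not taken from the paper. The micro network (MLP) slid over the input is equivalent to an ordinary convolution followed by 1x1 convolutions, which apply a small fully connected network across channels at every spatial position.

```python
import torch
import torch.nn as nn

# One NIN-style "mlpconv" block: a conventional convolution whose output is passed
# through two 1x1 convolutions. The 1x1 convolutions act as a small MLP shared
# across all spatial locations, replacing the single linear filter (GLM).
def mlpconv(in_ch, mid_ch, out_ch, kernel_size, padding):
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, kernel_size, padding=padding),  # linear filter over the patch
        nn.ReLU(inplace=True),
        nn.Conv2d(mid_ch, mid_ch, kernel_size=1),                # 1x1 conv = per-pixel FC layer
        nn.ReLU(inplace=True),
        nn.Conv2d(mid_ch, out_ch, kernel_size=1),
        nn.ReLU(inplace=True),
    )

block = mlpconv(in_ch=3, mid_ch=192, out_ch=160, kernel_size=5, padding=2)
x = torch.randn(1, 3, 32, 32)   # e.g. a CIFAR-sized input
print(block(x).shape)           # torch.Size([1, 160, 32, 32])
```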

1、Convolutional Neural Network UFLDL Tutorial

2、Decision trees and random forests (cnblogs)
https://www.cnblogs.com/fionacai/p/5894142.html
机器之心 WeChat official account: From decision trees to random forests
Decision forests and convolutional neural networks: two roads converging

3、Implementation of convolutional layers
CNN explained in detail (convolutional layers and downsampling layers)

Deep learning: fully convolutional networks replacing fully connected layers, for image segmentation

Sliding windows over convolutional layers (converting fully connected layers into convolutional layers); see the sketch after this list

4、ResNet
http://kaiminghe.com/

Group Normalization
Yuxin Wu and Kaiming He
Tech report, arXiv, Mar. 2018


Non-local Neural Networks
Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018

5、CS231n Convolutional Neural Networks for Visual Recognition

6、Convolutional Neural Networks (LeNet)

7、A 2017 Guide to Semantic Segmentation with Deep Learning

【8、An introduction to CNNs: what is a fully connected layer (Fully Connected Layer)?】
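For item 3 above (converting fully connected layers into convolutional layers), here is a minimal sketch assuming PyTorch; the layer sizes (512 x 7 x 7 input, 4096 outputs) are illustrative. A fully connected layer over a fixed-size feature map can be rewritten as a convolution whose kernel covers the whole map, so the same weights then slide like a window over larger inputs.

```python
import torch
import torch.nn as nn

# A fully connected layer applied to a 512 x 7 x 7 feature map...
fc = nn.Linear(512 * 7 * 7, 4096)

# ...is equivalent to a 7x7 convolution with 4096 output channels.
conv = nn.Conv2d(512, 4096, kernel_size=7)
conv.weight.data.copy_(fc.weight.data.view(4096, 512, 7, 7))  # reuse the FC weights
conv.bias.data.copy_(fc.bias.data)

x = torch.randn(1, 512, 7, 7)
out_fc = fc(x.flatten(1))      # shape (1, 4096)
out_conv = conv(x)             # shape (1, 4096, 1, 1)
print(torch.allclose(out_fc, out_conv.flatten(1), atol=1e-5))  # True

# On a larger input, the "FC-as-conv" slides like a window and yields a spatial
# map of predictions, which is the basis of fully convolutional segmentation.
bigger = torch.randn(1, 512, 14, 14)
print(conv(bigger).shape)      # torch.Size([1, 4096, 8, 8])
```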


2018-05-04

Semantic segmentation datasets for autonomous driving
https://blog.csdn.net/sparkapi/article/details/79652571

2018-04-21

https://github.com/torrvision/crfasrnn
PSPNet was set up successfully, but most of the code is in MATLAB, and running the cityscapes dataset has GPU requirements.

2018-04-19


http://www.6d-vision.com/lostandfounddataset

LostAndFoundDataset
6D-Vision



https://hszhao.github.io/projects/pspnet/

Pyramid Scene Parsing Network


https://github.com/hszhao/PSPNet


https://www.cityscapes-dataset.com/downloads/

cityscapes dataset


http://synthia-dataset.net/download-2/#downloads

Download SYNTHIA-CVPR’16



Markus Enzweiler
Datasets
https://www.markus-enzweiler.de/docs/datasets.html


opencv

Erosion and dilation
http://www.opencv.org.cn/opencvdoc/2.3.2/html/doc/tutorials/imgproc/erosion_dilatation/erosion_dilatation.html

https://blog.csdn.net/poem_qianmo/article/details/23710721
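A minimal sketch of erosion and dilation with OpenCV's Python bindings; the input path "mask.png" and the 5x5 kernel are illustrative assumptions, not from the linked tutorials.

```python
import cv2

# Read a (hypothetical) binary mask as grayscale.
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Structuring element: a 5x5 rectangle.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))

eroded = cv2.erode(mask, kernel, iterations=1)    # shrinks white regions, removes small speckle noise
dilated = cv2.dilate(mask, kernel, iterations=1)  # grows white regions, fills small holes

cv2.imwrite("mask_eroded.png", eroded)
cv2.imwrite("mask_dilated.png", dilated)
```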


Deep Learning paper notes (4): CNN derivation and implementation
https://blog.csdn.net/zouxy09/article/details/9993371/


caffe-segnet issues
https://github.com/alexgkendall/caffe-segnet/issues/21
https://github.com/alexgkendall/SegNet-Tutorial/issues/37

https://github.com/alexgkendall/SegNet-Tutorial/blob/master/Example_Models/segnet_model_zoo.md
segnet-tutorial SegNet Model Zoo
