You Should Know About Sub-sampling Layers in Deep Learning


Explanation

Convolutional Neural Networks (CNNs) have characteristics that enable invariance to affine transformations of the images that are fed through the network. This provides the ability to recognize patterns that are shifted, tilted, or slightly warped within images.

These characteristics of affine invariance arise from three main properties of the CNN architecture.

  1. Local Receptive Fields

  2. Shared Weights (parameter sharing)

  3. Spatial Sub-sampling

In this article, we’ll explore spatial sub-sampling and understand its purpose and the advantages it provides within CNN architectures.

This article is aimed at practitioners of machine learning, and deep learning in particular, at all levels.

Introduction

Sub-sampling is a technique devised to reduce the reliance on the precise positioning of features within the feature maps produced by the convolutional layers of a CNN.
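To make this concrete, here is a minimal numpy sketch of the most common form of spatial sub-sampling, 2×2 max pooling: each non-overlapping 2×2 window of the feature map is collapsed to its maximum value, halving the spatial resolution. (The function name and example values are illustrative, not from the original article.)

```python
import numpy as np

def max_pool_2x2(fmap):
    """Down-sample a feature map by taking the max over non-overlapping 2x2 windows."""
    h, w = fmap.shape
    # Crop to even dimensions, split into 2x2 blocks, reduce each block to its max.
    return fmap[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 0, 5, 6],
    [1, 2, 7, 8],
])
pooled = max_pool_2x2(fmap)
# pooled is [[4, 2], [2, 8]]: a 4x4 map reduced to 2x2
```

Each output cell records only that a strong activation occurred somewhere within its window, not exactly where, which is the positional information being deliberately discarded.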

A CNN contains kernels/filters of fixed dimensions, referred to as feature detectors. Once a feature has been detected in an image, the information about its exact position within the image can actually be disregarded, and there are benefits to this.
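As a sketch of what a feature detector does, the snippet below slides a hand-made 3×3 vertical-edge kernel over a small image (valid cross-correlation, no padding). The kernel values here are a hypothetical fixed choice for illustration; in a trained CNN these weights are learned.

```python
import numpy as np

def correlate2d_valid(image, kernel):
    """Slide a kernel over the image (valid cross-correlation) to produce a feature map."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A hand-made vertical-edge detector (illustrative, not learned weights).
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]])

# Image whose left two columns are bright: the edge sits between columns 1 and 2.
image = np.zeros((5, 5))
image[:, :2] = 1.0
feature_map = correlate2d_valid(image, kernel)
# feature_map responds strongly (value 3.0) wherever the kernel straddles the edge
```

The feature map's activations mark where the pattern was found; it is exactly this per-pixel location detail that the subsequent sub-sampling layer then discards.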

It turns out that reliance on specific feature positions is a disadvantage when building and developing a network that should perform relatively well on input data that has undergone some form of affine transformation. We generally don’t want the weights within the network to learn patterns that are too specific to the training data.

So, the information that matters in terms of feature positioning is the relative position of a feature with respect to other features within the feature map, as opposed to its exact location.
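The point above can be demonstrated directly: if a detected feature shifts by a pixel but stays within the same pooling window, the sub-sampled map is unchanged. A minimal sketch (illustrative helper and values, assuming 2×2 max pooling):

```python
import numpy as np

def max_pool_2x2(fmap):
    """Max over non-overlapping 2x2 windows of an even-sized feature map."""
    h, w = fmap.shape
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

a = np.zeros((4, 4)); a[0, 0] = 1.0   # feature detected at (0, 0)
b = np.zeros((4, 4)); b[1, 1] = 1.0   # same feature shifted to (1, 1)

# Both positions fall inside the same 2x2 pooling window, so the
# down-sampled maps are identical: the exact position is discarded,
# while the coarse (relative) position of the activation is preserved.
assert (max_pool_2x2(a) == max_pool_2x2(b)).all()
```

This is the small-shift invariance that makes the downstream layers robust to slightly translated inputs.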
