Groundbreaking Work: Gradient-Based Learning Applied to Document Recognition

This paper was written by Yann LeCun, who received the Turing Award in 2018. Today I'd like to share one of his most significant papers with you, the one that earned him the title "the father of CNNs".

Background

In 1986, David Rumelhart, Geoffrey Hinton, and Ronald Williams published a famous paper named "Learning representations by back-propagating errors" (the BP algorithm). Then, in 1998, Yann LeCun and his colleagues successfully applied convolutional neural networks to handwritten digit recognition and got very good results.

Introduction

In the early days of pattern recognition, researchers already knew the importance of learning from natural data, but it was almost impossible to build an accurate recognition system that way, so most pattern recognition systems were built from hand-designed features and algorithms. The author argued that we must use machine learning and rely on automatic learning to deal with so much data, and in this paper LeCun wanted to prove it on character recognition.
The approximate steps are as follows:
[Figure: the two-module recognition pipeline, a feature extraction module feeding a trainable classifier module]

The first module is called the feature extraction module; its task is to transform the input into a low-dimensional representation (such as a one-dimensional feature vector). The second module is the trainable classifier module; its task is to learn from those features and adjust its parameters to produce more accurate results.
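To make this two-module pipeline concrete, here is a minimal sketch in Python. The flattened-pixel features and the `LinearClassifier` below are my own illustration of the idea, not the paper's design:

```python
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Feature extraction module: map a 2-D image to a low-dimensional
    1-D vector. Here we just flatten and scale the pixels; a real system
    would use hand-designed or learned features."""
    return image.reshape(-1) / 255.0

class LinearClassifier:
    """Trainable classifier module: a linear softmax model whose
    weights are trained by gradient descent."""
    def __init__(self, n_features: int, n_classes: int, lr: float = 0.01):
        self.W = np.zeros((n_classes, n_features))
        self.b = np.zeros(n_classes)
        self.lr = lr

    def predict(self, x: np.ndarray) -> int:
        return int(np.argmax(self.W @ x + self.b))

    def train_step(self, x: np.ndarray, label: int) -> None:
        # One gradient-descent step on the softmax cross-entropy loss.
        logits = self.W @ x + self.b
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        grad = probs.copy()
        grad[label] -= 1.0          # d(loss)/d(logits)
        self.W -= self.lr * np.outer(grad, x)
        self.b -= self.lr * grad

# Example: one training step on a fake 28x28 image labeled "3".
clf = LinearClassifier(n_features=28 * 28, n_classes=10)
x = extract_features(np.random.randint(0, 256, size=(28, 28)))
clf.train_step(x, label=3)
```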

Algorithm

In general, CNNs use three main methods:

  1. Local Receptive Fields
  2. Shared Weights
  3. Spatial or Temporal Subsampling
    I won't explain all three methods in detail. The second is easy to understand: it means that all the neural units in a feature map share the same weights. The third method means reducing the resolution of the feature maps so that the computer can run the algorithm in a reasonable time (and so the representation becomes less sensitive to small shifts and distortions).
    I think the first is the key to CNNs, so I will explain what it means in an easy way (see the code sketch after this list).
    [Figure: a local receptive field, where each hidden unit connects only to a small neighborhood of the input]
    To put it simply, each neural unit reads only a small patch of the input, which obviously makes each unit much cheaper to compute. Other advantages are:
  • The network needs fewer parameters and less training data.
  • It is harder to overfit.
    I'm not going to go deeper into convolutional neural networks here. In fact, if you look at the latest architectures, many of them still use convolution to solve problems.
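As promised above, here is a minimal LeNet-5-style network in PyTorch that shows all three methods at once: each convolution unit sees only a 5x5 local receptive field, the same kernel weights are shared across the whole image, and average pooling performs the spatial subsampling. The layer sizes follow the commonly cited LeNet-5 layout; treat this as an illustrative sketch, not the paper's exact model:

```python
import torch
from torch import nn

class LeNet5(nn.Module):
    """A LeNet-5-style CNN for 32x32 grayscale character images."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # Local receptive fields + shared weights: every output unit
            # looks at a 5x5 patch, and one 5x5 kernel is reused everywhere.
            nn.Conv2d(1, 6, kernel_size=5),   # 1x32x32 -> 6x28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # subsampling: 6x28x28 -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),  # 6x14x14 -> 16x10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # subsampling: 16x10x10 -> 16x5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Quick shape check on a dummy batch.
model = LeNet5()
print(model(torch.zeros(1, 1, 32, 32)).shape)  # torch.Size([1, 10])
```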
