Deep Belief Nets: A Recent Lecture at Peking University by Hinton, a Father of Deep Learning (with slides)

This is a remote lecture that Hinton delivered at Peking University on May 14, 2019.

Abstract

In 2006, there was a resurgence of interest in deep neural networks. This was triggered by the discovery that there was a simple and effective way to pre-train deep networks as generative models of unlabeled data. The pre-trained networks could then be fine-tuned discriminatively to give excellent performance on labeled data. In this lecture, I will describe the pre-training procedure used for Deep Belief Nets and show how it evolved from an earlier training procedure for Boltzmann machines that was theoretically elegant but too inefficient to be practical. I will also show how the pre-training procedure overcame a major practical problem in training densely connected belief nets.
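The pre-training procedure the abstract refers to trains each layer greedily as a Restricted Boltzmann Machine using contrastive divergence. The following is a minimal NumPy sketch of one RBM trained with CD-1, purely for illustration; it is not code from the lecture, and the toy data and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann Machine trained with one step of
    contrastive divergence (CD-1), the fast approximation used
    for layer-wise pre-training of Deep Belief Nets."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        p_h0 = self.hidden_probs(v0)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # Negative phase: one reconstruction of the visible units.
        p_v1 = self.visible_probs(h0)
        p_h1 = self.hidden_probs(p_v1)
        # Update = (data statistics) - (one-step model statistics).
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
        self.b_v += self.lr * (v0 - p_v1).mean(axis=0)
        self.b_h += self.lr * (p_h0 - p_h1).mean(axis=0)
        return float(np.mean((v0 - p_v1) ** 2))  # reconstruction error

# Toy unlabeled binary data with a correlated block the RBM can model.
data = (rng.random((200, 12)) < 0.5).astype(float)
data[:, :6] = data[:, :1]  # copy one column across the first six

rbm = RBM(n_visible=12, n_hidden=8)
errors = [rbm.cd1_step(data) for _ in range(100)]
# Reconstruction error should shrink as the model captures the structure.
```

In a full Deep Belief Net, the hidden activations of this trained RBM would become the "data" for the next RBM in the stack, and the resulting weights would initialize a feed-forward network for discriminative fine-tuning.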

Biography

Geoffrey Hinton received his PhD in Artificial Intelligence from Edinburgh in 1978. After five years as a faculty member at Carnegie-Mellon he became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto where he is now an Emeritus Distinguished Professor. He is also a Vice President & Engineering Fellow at Google and Chief Scientific Adviser of t
