【读书1】【2017】MATLAB与深度学习——Dropout(4)

小结(Summary)

本章涵盖以下主题:

This chapter covered the following topics:

深度学习可以简单地定义为采用深度神经网络的机器学习技术。

Deep Learning can be simply defined as a Machine Learning technique that employs the deep neural network.

以前的神经网络存在一个问题,即隐藏层越多,就越难训练,并且会降低性能。

The previous neural networks had a problem where the deeper (more) hidden layers were harder to train and degraded the performance.

深度学习解决了这个问题。

Deep Learning solved this problem.

深度学习的巨大成就不是由一个关键技术来完成的,而是来源于许多细微的改进。

The outstanding achievements of Deep Learning were not made by a single critical technique but rather are due to many minor improvements.

深度神经网络的性能不佳是由于缺乏适当的训练。

The poor performance of the deep neural network is due to the failure of proper training.

与之相关的因素主要有三个:梯度消失、过度拟合和计算负荷。

There are three major showstoppers: the vanishing gradient, overfitting, and computational load.

梯度消失问题可以采用ReLU激活函数和交叉熵驱动的学习规则来获得极大提升。

The vanishing gradient problem is greatly improved by employing the ReLU activation function and the cross entropy-driven learning rule.
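
To make these two remedies concrete, here is a minimal MATLAB sketch, not a listing from the book; the variable names and the sample numbers are illustrative assumptions.

```matlab
% Minimal sketch of the two remedies; all values here are examples.
relu   = @(v) max(0, v);        % ReLU keeps positive inputs, zeroes the rest
d_relu = @(v) double(v > 0);    % its derivative is 1 for active nodes, so
                                % back-propagated deltas are not shrunk
v      = [-1.5; 0.3; 2.0];      % example weighted sums of a hidden layer
h      = relu(v);               % hidden-layer output
slope  = d_relu(v);             % used when the delta is propagated back

% With a sigmoid output layer, the cross entropy cost makes the output delta
% equal to the error itself, while the squared-error cost multiplies it by
% the sigmoid slope, which vanishes as the output saturates:
e         = 0.4;                % example output error, d - y
y         = 0.9;                % example sigmoid output
delta_ce  = e;                  % cross entropy-driven learning rule
delta_sse = y * (1 - y) * e;    % much smaller; this is what vanishes
```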

使用先进的梯度下降法也是有益的。

Use of the advanced gradient descent method is also beneficial.
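
One widely used example of such a method is gradient descent with momentum. The fragment below is a hedged sketch of a single momentum update; the layer sizes, the learning rate alpha, and the momentum coefficient beta are assumptions made for illustration.

```matlab
% Sketch of a momentum update; W, x, delta, and the constants are examples.
W     = randn(3, 4);          % weight matrix of one layer
x     = rand(4, 1);           % input to that layer
delta = rand(3, 1);           % back-propagated delta of that layer
mmt   = zeros(size(W));       % momentum accumulator

alpha = 0.01;                 % learning rate
beta  = 0.9;                  % momentum coefficient

dW  = alpha * delta * x.';    % plain gradient-descent step
mmt = dW + beta * mmt;        % add a decaying memory of previous steps
W   = W + mmt;                % the update follows the smoothed direction
```

Because the accumulated term mmt keeps a decaying memory of earlier gradients, the update direction is smoothed and training tends to move through flat or noisy regions faster than plain gradient descent.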

深度神经网络更容易被过度拟合。

The deep neural network is more vulnerable to overfitting.

深度学习利用dropout或正则化解决了这个问题。

Deep Learning solves this problem using dropout or regularization.
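
As a rough illustration of the dropout part of that remedy, the sketch below builds a random mask for one hidden-layer output; the 20% drop ratio, the layer size, and the inverted scaling of the surviving nodes are illustrative choices, not the book's listing.

```matlab
% Sketch of dropout applied to a hidden-layer output y1 during training.
y1    = rand(20, 1);                  % example hidden-layer activations
ratio = 0.2;                          % fraction of nodes to drop (assumed)

mask = (rand(size(y1)) > ratio) / (1 - ratio);  % zero out about 20% of the
                                                % nodes and scale the rest so
                                                % the expected output is kept
y1 = y1 .* mask;                      % dropped nodes do not contribute to
                                      % this pass; no mask is used at test time
```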

由于计算量很大,因此需要大量的训练时间。

Significant training time is required due to the heavy calculations.

GPU和各种算法在很大程度上缓解了这一问题。

This is relieved to a large extent by the GPU and various algorithms.

第六章 卷积神经网络(CHAPTER 6 Convolutional Neural Network)

第5章指出不完全训练是导致深度神经网络性能较差的原因,并介绍了深度学习如何解决这一问题。

Chapter 5 showed that incomplete training is the cause of the poor performance of the deep neural network and introduced how Deep Learning solved the problem.

深度神经网络的重要性在于它为知识的分层处理打开了复杂非线性模型和系统方法的大门。

The importance of the deep neural network lies in the fact that it opened the door to complicated non-linear models and a systematic approach for the hierarchical processing of knowledge.

本章介绍卷积神经网络(ConvNet),它是一种专门用于图像识别的深度神经网络。

This chapter introduces the convolutional neural network (ConvNet), which is a deep neural network specialized for image recognition.

该技术展示了深层网络改进对于信息(图像)处理的重要性。

This technique exemplifies how significant the improvement of the deep layers is for information (image) processing.

事实上,ConvNet是一种较老的技术,它是在20世纪80年代至90年代之间发展起来的。(历史事实说明,很多旧文献、老技术并不一定过时了,是金子总会发光的,但需要能手去挖掘!)

Actually, ConvNet is an old technique, which was developed in the 1980s and 1990s.

然而,它被遗忘了很长的一段时间,因为在当时那个计算机还很落后的年代,它只是一种针对复杂图像的不可实现的技术。(计算能力的不断提高是人工智能高速发展的重要基石!)

However, it had been forgotten for a while, as it was impractical for real-world applications with complicated images.

自2012年戏剧性地复苏以来,ConvNet已经征服了大部分计算机视觉领域,并开始进入快速增长期。

Since 2012 when it was dramatically revived, ConvNet has conquered most computer vision fields and is growing at a rapid pace.

ConvNet结构(Architecture of ConvNet)

ConvNet不仅仅是一个具有许多隐藏层的深度神经网络。

ConvNet is not just a deep neural network that has many hidden layers.

它是一个模仿大脑视觉皮层如何处理、识别图像的深度网络。

It is a deep network that imitates how the visual cortex of the brain processes and recognizes images.
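
To hint at what that image processing looks like in code, the following hedged MATLAB fragment performs one convolution step; the toy image and the vertical-edge filter are invented for illustration.

```matlab
% One convolution step: a small filter slides over the image and yields a
% feature map, which is then passed through ReLU. All values are examples.
img  = rand(8, 8);                      % toy grayscale image
filt = [1 0 -1; 1 0 -1; 1 0 -1];        % a simple vertical-edge filter

featureMap = conv2(img, filt, 'valid'); % filter response at every position
featureMap = max(0, featureMap);        % ReLU keeps the strong responses
```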

——本文译自Phil Kim所著的《Matlab Deep Learning》
