Rectified Linear Unit (ReLU)

Reposted on November 18, 2015, 15:57:21

The Rectified Linear Unit (ReLU) computes the function f(x) = max(0, x), which simply thresholds the activation at zero.
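As a concrete illustration, here is a minimal NumPy sketch of this thresholding (the function name is mine, not from any particular library):

```python
import numpy as np

def relu(x):
    """Element-wise ReLU: f(x) = max(0, x)."""
    return np.maximum(0.0, x)

# Thresholding a matrix of activations at zero.
activations = np.array([[-2.0, 0.5],
                        [ 3.0, -0.1]])
print(relu(activations))  # -> [[0.0, 0.5], [3.0, 0.0]]
```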

There are several pros and cons to using the ReLUs:

  1. (Pros) Compared to sigmoid/tanh neurons, which involve expensive operations (exponentials, etc.), the ReLU can be implemented by simply thresholding a matrix of activations at zero. Moreover, ReLUs do not saturate.
  2. (Pros) It was found to greatly accelerate the convergence of stochastic gradient descent compared to the sigmoid/tanh functions. It is argued that this is due to its linear, non-saturating form.
  3. (Cons) Unfortunately, ReLU units can be fragile during training and can “die”. For example, a large gradient flowing through a ReLU neuron could cause the weights to update in such a way that the neuron will never activate on any datapoint again. If this happens, then the gradient flowing through the unit will forever be zero from that point on. That is, the ReLU units can irreversibly die during training since they can get knocked off the data manifold. For example, you may find that as much as 40% of your network can be “dead” (i.e., neurons that never activate across the entire training dataset) if the learning rate is set too high. With a proper setting of the learning rate this is less frequently an issue.
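To make the "dying" mechanism in item 3 concrete, here is a small NumPy sketch of the ReLU backward pass (variable names are my own): wherever the pre-activation is not positive, the upstream gradient is blocked, so a unit whose pre-activation is negative on every datapoint receives no weight updates at all.

```python
import numpy as np

def relu_backward(grad_out, pre_activation):
    """ReLU gradient: pass the upstream gradient only where pre_activation > 0."""
    return grad_out * (pre_activation > 0)

# A "dead" unit: its pre-activation is negative on every datapoint in the batch,
# so the gradient reaching its incoming weights is zero and they never change again.
pre_act  = np.array([-1.3, -0.2, -4.0, -0.7])
grad_out = np.array([ 0.5, -1.0,  0.3,  0.8])
print(relu_backward(grad_out, pre_act))  # all zeros -> no weight update
```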

Leaky ReLU

Leaky ReLUs are one attempt to fix the “dying ReLU” problem. Instead of the function being zero when x < 0, a leaky ReLU has a small slope (of 0.01, or so) in the negative region. That is, the function computes f(x) = ax if x < 0 and f(x) = x if x ≥ 0, where a is a small constant. Some people report success with this form of activation function, but the results are not always consistent.
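For illustration, a minimal NumPy sketch of this piecewise form, assuming the commonly quoted slope a = 0.01 (the helper name is mine):

```python
import numpy as np

def leaky_relu(x, a=0.01):
    """f(x) = x for x >= 0 and f(x) = a*x for x < 0, with a small fixed a."""
    return np.where(x >= 0, x, a * x)

x = np.array([-3.0, -0.5, 0.0, 2.0])
print(leaky_relu(x))  # -> [-0.03, -0.005, 0.0, 2.0]
```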

Parametric ReLU

(Figure: the rectified unit family.)
The first of these variants is the parametric rectified linear unit (PReLU). In PReLU, the slope of the negative part is learned from the data rather than pre-defined.
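A rough NumPy sketch of the idea, assuming a single shared slope for simplicity (He et al. learn one slope per channel and initialize it at 0.25; the function names are mine):

```python
import numpy as np

def prelu_forward(x, a):
    """f(x) = x for x >= 0 and f(x) = a*x for x < 0, where a is learned."""
    return np.where(x >= 0, x, a * x)

def prelu_grad_a(grad_out, x):
    """Gradient of the loss w.r.t. the slope a: df/da = x for x < 0, else 0."""
    return np.sum(grad_out * np.where(x < 0, x, 0.0))

a = 0.25                                    # common initial value for the slope
x = np.array([-2.0, -0.5, 1.0, 3.0])
grad_out = np.array([0.1, -0.2, 0.3, 0.4])  # upstream gradients from the loss
print(prelu_forward(x, a))                  # -> [-0.5, -0.125, 1.0, 3.0]
a -= 0.01 * prelu_grad_a(grad_out, x)       # one SGD step on the slope itself
print(a)                                    # -> ~0.251
```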

Randomized ReLU

In RReLU, the slope of the negative part is randomized within a given range during training and then fixed during testing. As reported in [B. Xu, N. Wang, T. Chen, and M. Li. Empirical Evaluation of Rectified Activations in Convolutional Network. ICML Deep Learning Workshop, 2015], RReLU was found to reduce overfitting in a recent Kaggle National Data Science Bowl (NDSB) competition, thanks to its randomized nature. Moreover, as suggested by the NDSB competition winner, the random a_i is sampled during training from U(3, 8) (so the negative slope is 1/a_i), and at test time the slope is fixed to the reciprocal of the expectation of a_i, i.e., 2/(l + u) = 2/11.
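A hedged NumPy sketch of this train/test behaviour, following the NDSB setting quoted above (training slope 1/a with a ~ U(l, u), l = 3, u = 8; test slope 2/(l + u) = 2/11); the function name is mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def rrelu(x, l=3.0, u=8.0, training=True):
    """Randomized leaky ReLU: random negative slope 1/a with a ~ U(l, u) during
    training, fixed slope 2/(l + u) at test time."""
    if training:
        slope = 1.0 / rng.uniform(l, u, size=x.shape)  # fresh random slopes each pass
    else:
        slope = 2.0 / (l + u)                          # 2/11 ≈ 0.18 for l=3, u=8
    return np.where(x >= 0, x, slope * x)

x = np.array([-1.0, -0.5, 0.5, 2.0])
print(rrelu(x, training=True))   # negative entries scaled by random slopes in (1/8, 1/3)
print(rrelu(x, training=False))  # negative entries scaled by 2/11
```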

In conclusion, all three of these ReLU variants consistently outperform the original ReLU on the three data sets evaluated in that paper, and PReLU and RReLU seem to be the better choices.
