[Reading Notes 1] [2017] MATLAB and Deep Learning: Dropout (1)

Dropout

This section presents the code that implements dropout.

We use the sigmoid activation function for the hidden nodes.
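For reference, a minimal sketch of what the Sigmoid helper used in the listing below might look like; the actual definition is carried over from earlier chapters, so this one-liner is only an assumption included for completeness:

function y = Sigmoid(x)
  % Element-wise logistic sigmoid: maps each entry of x into (0, 1)
  y = 1 ./ (1 + exp(-x));
end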

This code is mainly intended to show how dropout is coded; the training data may be too simple for us to perceive any substantial improvement in overfitting.

The function DeepDropout trains the example neural network using the back-propagation algorithm.

It takes the neural network's weights and training data and returns the trained weights.

[W1, W2, W3, W4] = DeepDropout(W1, W2, W3, W4, X, D)

where the notation of the variables is the same as that of the function DeepReLU from the previous section.
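As a rough illustration of how the function could be driven (this script is not part of the section; the weight shapes and epoch count are assumptions chosen to match the four-layer network and the five 5x5 training images used in the previous sections):

% Hypothetical driver script; all dimensions here are assumptions.
W1 = 2*rand(20, 25) - 1;   % input (25 pixels) -> hidden layer 1 (20 nodes)
W2 = 2*rand(20, 20) - 1;   % hidden layer 1 -> hidden layer 2
W3 = 2*rand(20, 20) - 1;   % hidden layer 2 -> hidden layer 3
W4 = 2*rand( 5, 20) - 1;   % hidden layer 3 -> output (5 classes)

for epoch = 1:10000        % repeat the single-pass training many times
  [W1, W2, W3, W4] = DeepDropout(W1, W2, W3, W4, X, D);
end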

The following listing shows the DeepDropout.m file, which implements the DeepDropout function.

function [W1, W2, W3, W4] = DeepDropout(W1, W2, W3, W4, X, D)
  alpha = 0.01;                      % learning rate

  N = 5;                             % number of training samples
  for k = 1:N
    x = reshape(X(:, :, k), 25, 1);  % k-th 5x5 image as a 25x1 column vector

    % Forward pass: dropout is applied to the output of every hidden layer
    v1 = W1*x;
    y1 = Sigmoid(v1);
    y1 = y1 .* Dropout(y1, 0.2);

    v2 = W2*y1;
    y2 = Sigmoid(v2);
    y2 = y2 .* Dropout(y2, 0.2);

    v3 = W3*y2;
    y3 = Sigmoid(v3);
    y3 = y3 .* Dropout(y3, 0.2);

    v = W4*y3;
    y = Softmax(v);

    % Output error and back-propagated deltas
    d = D(k, :)';
    e = d - y;
    delta = e;

    e3     = W4'*delta;
    delta3 = y3.*(1-y3).*e3;

    e2     = W3'*delta3;
    delta2 = y2.*(1-y2).*e2;

    e1     = W2'*delta2;
    delta1 = y1.*(1-y1).*e1;

    % Delta-rule weight updates
    dW4 = alpha*delta*y3';
    W4  = W4 + dW4;

    dW3 = alpha*delta3*y2';
    W3  = W3 + dW3;

    dW2 = alpha*delta2*y1';
    W2  = W2 + dW2;

    dW1 = alpha*delta1*x';
    W1  = W1 + dW1;
  end
end

This code imports the training data, calculates the weight updates (dW1, dW2, dW3, and dW4) using the delta rule, and adjusts the weights of the neural network.
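In equation form, each update in the listing is the familiar delta rule; the following is only a restatement of what the code computes, with $\alpha$ the learning rate:

\[
\delta^{(i)} = y^{(i)} \circ \bigl(1 - y^{(i)}\bigr) \circ e^{(i)}, \qquad
e^{(i)} = \bigl(W^{(i+1)}\bigr)^{\mathsf T} \delta^{(i+1)}, \qquad
W^{(i)} \leftarrow W^{(i)} + \alpha\, \delta^{(i)} \bigl(y^{(i-1)}\bigr)^{\mathsf T}
\]

where $y^{(i-1)}$ is the input to layer $i$ (the original input $x$ for the first hidden layer), and at the output layer the delta is simply the error $d - y$, since the softmax output is paired with the cross-entropy cost.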

This process is identical to that of the previous training codes.

It differs from the previous ones in that, once the output of a hidden node's sigmoid activation function has been calculated, the Dropout function modifies the final output of that node.

For example, the output of the first hidden layer is calculated as:

y1 = Sigmoid(v1);

y1 = y1 .* Dropout(y1, 0.2);

Executing these lines switches the outputs of 20% of the first hidden layer's nodes to zero; that is, it drops out 20% of the first hidden nodes.
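The Dropout function itself is not listed in this section. A minimal sketch of how it could be implemented, assuming the common convention of zeroing a fraction ratio of the elements and scaling the survivors by 1/(1-ratio) so that the expected activation is preserved, looks like this:

function ym = Dropout(y, ratio)
  % Build a mask the same size as y: a fraction 'ratio' of the entries
  % are 0 and the rest are 1/(1-ratio).  Multiplying y element-wise by
  % this mask drops out 'ratio' of the nodes and rescales the survivors.
  [m, n] = size(y);
  ym = zeros(m, n);

  num     = round(m*n*(1-ratio));   % number of surviving elements
  idx     = randperm(m*n, num);     % indices of the survivors
  ym(idx) = 1 / (1-ratio);
end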

This article is translated from Matlab Deep Learning by Phil Kim.
