How a BP Neural Network Works

The whole BP neural network computation repeats the following procedure:

  1. Forward Propagation
  2. Compute cost
  3. Backward Propagation
  4. Update Parameters

Today I want to summarize these steps from my own understanding, since I learned them a long time ago and sometimes cannot remember the details of each step.

First, let's get an overview of the whole FP and BP computation procedure from the picture below:

[Figure: computation graph of forward propagation (black) and backward propagation (red) across the layers]

Some diagram notations here:

  • Each rectangle represents a single hidden layer in the NN
  • Black rectangles represent the forward propagation computation sequence
  • Red rectangles represent the backward propagation computation sequence
  • The formulas inside each rectangle are the computations performed by each layer in FP or BP; we will discuss them in the sections below

See Denotation for all the notation used in the above picture.


Forward Propagation

Forward propagation is the sequence that computes from the left of the NN (the input $X$) to the right (the output $\hat{Y}$).

For an NN of depth $L$, it repeats the computation below for the first $L-1$ layers:

  1. Input: the output of the previous layer, $A^{[l-1]}$
  2. Compute the linear output $Z^{[l]}$:
    $$Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]} \tag{1}$$
  3. Output: the activation of the current layer, $A^{[l]}$. Here $g$ represents the activation function (ReLU, tanh, sigmoid); we use ReLU here for illustration:

    $$A^{[l]} = g\left(Z^{[l]}\right) \tag{2}$$

The activation output of each layer is the input of the next layer; it's like a chain.

In layer $L$ we need to compute the probability that the $i$-th training example belongs to label $Y$, so we use the sigmoid function in the last layer. The sigmoid function maps an unbounded input into the range $[0, 1]$, which is exactly what we need. So in our program we can map this probability to a class, say $\geq 0.5$ means the image is a cat image, as in the sketch below.
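As a minimal sketch of that thresholding step (the helper name predict_labels and the 0.5 default are my own choices for illustration, not part of the original code):

import numpy as np

def predict_labels(AL, threshold=0.5):
    """Map sigmoid outputs in [0, 1] to hard 0/1 labels (e.g. cat / non-cat)."""
    # AL has shape (1, m): one probability per training example
    return (AL >= threshold).astype(int)

# Example: probabilities 0.9, 0.3, 0.6 -> labels 1, 0, 1
print(predict_labels(np.array([[0.9, 0.3, 0.6]])))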

In practice, we also store the intermediate outputs as well as the parameters of each layer in a cache, as they are needed when doing BP, so as you can see in the above picture:

$$\text{linear\_cache}^{[l]} = \left(A^{[l-1]}, W^{[l]}, b^{[l]}\right) \qquad \text{activation\_cache}^{[l]} = \left(Z^{[l]}\right)$$

python numpy implementation

formula (1):

import numpy as np

def linear_forward(A, W, b):
    """Linear part of a layer's forward propagation, formula (1)."""
    Z = np.dot(W, A) + b
    return Z
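Formula (2) and the caches described above can be sketched in the same style; the helper name linear_activation_forward and the use of np.maximum for ReLU are assumptions made here for illustration:

def linear_activation_forward(A_prev, W, b, activation="relu"):
    """One forward step: linear part (formula (1)) followed by the activation (formula (2))."""
    Z = np.dot(W, A_prev) + b          # formula (1)
    linear_cache = (A_prev, W, b)      # needed later for formulas (5)-(7)
    if activation == "relu":
        A = np.maximum(0, Z)           # formula (2) with g = ReLU
    else:                              # "sigmoid", used in the last layer
        A = 1 / (1 + np.exp(-Z))
    activation_cache = Z               # needed later for formula (4)
    return A, (linear_cache, activation_cache)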

Compute Cost (Loss/Error)

Here the cost function measures how well our algorithm performs when our prediction is $\hat{Y}$ while the actual class is $Y$. The lower the cost, the better our algorithm works.

From another angle, you can think of it as the error between our prediction and the actual value: a lower cost means our predictions are more accurate, so the model works better.

The cross-entropy cost J can be computed using the following formula:

$$J = -\frac{1}{m} \sum_{i=1}^{m} \left( y^{(i)} \log\left(a^{[L](i)}\right) + \left(1 - y^{(i)}\right) \log\left(1 - a^{[L](i)}\right) \right) \tag{3}$$

python numpy implementation

formula (3):

def compute_cost(AL, Y):
    """Cross-entropy cost, formula (3)."""
    m = Y.shape[1]
    cost = -np.sum(np.multiply(Y, np.log(AL)) + np.multiply((1-Y), np.log(1-AL)))/m

    cost = np.squeeze(cost)
    return cost
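A quick sanity check of formula (3) with made-up numbers (the values below are just an illustration):

AL = np.array([[0.8, 0.9, 0.4]])   # predicted probabilities for 3 examples
Y  = np.array([[1,   1,   0  ]])   # true labels
print(compute_cost(AL, Y))         # prints the cross-entropy cost, roughly 0.28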

Backward Propagation

BP is critical in the whole NN algorithm. It computes the partial derivatives of the cost function $J$ with respect to the parameters. These derivatives are used to update all the parameters $(W, b)$ in all layers, which will be introduced in a later section. The updated parameters are then used in the next iteration to compute the cost again, which should be lower than the cost of the previous iteration.

The BP algorithm computes from the rightmost layer $L$ to the leftmost layer (the first layer) - see the picture at the beginning.

Within each layer (a red rectangle in the picture), the BP algorithm computes two kinds of derivatives: the activation derivative and the linear derivatives.

Activation derivative

$$dZ^{[l]} = dA^{[l]} * g'\left(Z^{[l]}\right) \tag{4}$$

$g'$ is the derivative of the activation function used in the current layer (sigmoid / ReLU / tanh).
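A sketch of formula (4) for the two activations mentioned above (the function names relu_backward and sigmoid_backward are assumed here, mirroring the forward helpers):

def relu_backward(dA, activation_cache):
    """dZ = dA * g'(Z) where g is ReLU: g'(Z) is 1 where Z > 0 and 0 elsewhere."""
    Z = activation_cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ

def sigmoid_backward(dA, activation_cache):
    """dZ = dA * g'(Z) where g is sigmoid: g'(Z) = s * (1 - s)."""
    Z = activation_cache
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)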

Linear derivative

$dZ^{[l]}$ is then used to compute the three outputs below:

$$dW^{[l]} = \frac{\partial \mathcal{L}}{\partial W^{[l]}} = \frac{1}{m}\, dZ^{[l]} A^{[l-1]\,T} \tag{5}$$

$$db^{[l]} = \frac{\partial \mathcal{L}}{\partial b^{[l]}} = \frac{1}{m} \sum_{i=1}^{m} dZ^{[l](i)} \tag{6}$$

$$dA^{[l-1]} = \frac{\partial \mathcal{L}}{\partial A^{[l-1]}} = W^{[l]\,T} dZ^{[l]} \tag{7}$$

One tricky point here is the initial derivative $dA^{[L]}$ in the BP computation graph of layer $L$: it is not computed by formula (7). The formula for this initial derivative is

$$dA^{[L]} = -\left(\frac{Y}{A^{[L]}} - \frac{1-Y}{1-A^{[L]}}\right) \tag{8}$$

python numpy implementation

formula (5/6/7):

def linear_backward(dZ, cache):
    """
    Implement the linear portion of backward propagation for a single layer (layer l)

    Arguments:
    dZ -- Gradient of the cost with respect to the linear output (of current layer l)
    cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    A_prev, W, b = cache
    m = A_prev.shape[1]

    dW = np.dot(dZ, A_prev.T)/m
    db = np.sum(dZ, axis=1, keepdims=True)/m
    dA_prev = np.dot(W.T, dZ)

    return (dA_prev, dW, db)
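Formula (8), the starting point of the backward pass, can be written directly with numpy (a minimal sketch; AL and Y are the prediction and label arrays of shape (1, m)):

# formula (8): derivative of the cross-entropy cost with respect to A[L]
dAL = -(np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))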

Update parameters

Since all parameter derivatives in all layers are now available, we can update the parameters with the formulas below using gradient descent:

$$W^{[l]} = W^{[l]} - \alpha \, dW^{[l]}$$

$$b^{[l]} = b^{[l]} - \alpha \, db^{[l]}$$
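A sketch of this update step, assuming the parameters and gradients are kept in dictionaries keyed by layer index (this dictionary layout is an assumption, not something defined earlier):

def update_parameters(parameters, grads, learning_rate):
    """One gradient descent step: W[l] -= alpha * dW[l] and b[l] -= alpha * db[l] for every layer."""
    L = len(parameters) // 2                 # number of layers (each layer has a W and a b)
    for l in range(1, L + 1):
        parameters["W" + str(l)] -= learning_rate * grads["dW" + str(l)]
        parameters["b" + str(l)] -= learning_rate * grads["db" + str(l)]
    return parameters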


Summary

By now the whole NN algorithm should be clearer: it repeats the steps below, and after each iteration the cost should be reduced.


Forward Propagation → $A^{[L]}$ → Compute Cost $J$ → Backward Propagation → $dW^{[l]}/db^{[l]}$ → Update Parameters → $W^{[l]}/b^{[l]}$ → Forward Propagation → ...

The number of iterations is a hyperparameter of the gradient descent algorithm in open-source ML frameworks such as TensorFlow. The more iterations we run, the lower the cost may get, with better-fitting parameters and higher prediction accuracy on the training set, but too many iterations may also cause overfitting.
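Putting the four steps together, a training loop might look like the sketch below; model_forward and model_backward are assumed wrappers around the per-layer helpers shown earlier, and num_iterations is the hyperparameter discussed above:

def train(X, Y, parameters, learning_rate=0.01, num_iterations=1000):
    """Repeat FP -> cost -> BP -> update; the cost should go down over the iterations."""
    for i in range(num_iterations):
        AL, caches = model_forward(X, parameters)                         # 1. forward propagation
        cost = compute_cost(AL, Y)                                        # 2. compute cost
        grads = model_backward(AL, Y, caches)                             # 3. backward propagation
        parameters = update_parameters(parameters, grads, learning_rate)  # 4. update parameters
        if i % 100 == 0:
            print("cost after iteration %i: %f" % (i, cost))
    return parameters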
