Neural Networks (5)

    # the output layer error is calculated as follows
    output_layer_error = self.cost_derivative(activations[-1], y) * sigmoid_prime_vec(zs[-1])
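    # assuming the book's quadratic cost, cost_derivative returns (activations[-1] - y),
    # so this line computes delta^L = (a^L - y) * sigma'(z^L), applied elementwise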
    # backward pass
    # in the backward pass we calculate the error of each layer; note that layer l's error can be derived from layer l+1's error
    nabla_b[-1] = output_layer_error
    # although output_layer_error and activations[-2] are both column vectors, their outer product is a matrix
    # whose number of rows equals the size of the output_layer_error vector
    # and whose number of columns equals the size of the activation vector
    nabla_w[-1] = np.dot(output_layer_error, activations[-2].transpose())
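    # entry (j, k) of this outer product is delta^L_j * a^{L-1}_k, the partial derivative
    # of the cost with respect to the weight from neuron k in the previous layer to
    # output neuron j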
    # Note that the variable l in the loop below is used a little
    # differently to the notation in Chapter 2 of the book.  Here,
    # l = 1 means the last layer of neurons, l = 2 is the
    # second-last layer, and so on.  It's a renumbering of the
    # scheme in the book, used here to take advantage of the fact
    # that Python can use negative indices in lists.
    for l in xrange(2, self.num_layers):
        z = zs[-l]
        spv = sigmoid_prime_vec(z)
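        # the error of layer -l is obtained from the error of layer -l+1: the transposed
        # weight matrix pushes the error backward through the connections, and the
        # elementwise factor sigma'(z) accounts for this layer's sigmoid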
        output_layer_error = np.dot(self.weights[-l+1].transpose(), output_layer_error) * spv
        nabla_b[-l] = output_layer_error
        nabla_w[-l] = np.dot(output_layer_error, activations[-l-1].transpose())
    return (nabla_b, nabla_w)
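
The excerpt above relies on names defined elsewhere in the book's network.py (self.weights, self.cost_derivative, sigmoid_prime_vec, and the zs/activations lists filled during the forward pass). Below is a minimal, self-contained sketch of the same idea, not the book's exact code: it assumes sigmoid activations and the quadratic cost, uses Python 3 and NumPy, and makes up a tiny 2-3-1 network just to show the gradient shapes.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_prime(z):
        return sigmoid(z) * (1.0 - sigmoid(z))

    def backprop(weights, biases, x, y):
        num_layers = len(weights) + 1
        nabla_b = [np.zeros(b.shape) for b in biases]
        nabla_w = [np.zeros(w.shape) for w in weights]
        # forward pass: remember every weighted input z and every activation
        activation = x
        activations = [x]
        zs = []
        for b, w in zip(biases, weights):
            z = np.dot(w, activation) + b
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        # backward pass, same structure as the excerpt above
        # (quadratic cost assumed, so the cost derivative is simply a^L - y)
        delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
        nabla_b[-1] = delta
        nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        for l in range(2, num_layers):
            delta = np.dot(weights[-l + 1].transpose(), delta) * sigmoid_prime(zs[-l])
            nabla_b[-l] = delta
            nabla_w[-l] = np.dot(delta, activations[-l - 1].transpose())
        return nabla_b, nabla_w

    # toy usage: a 2-3-1 network and a single (x, y) training pair
    rng = np.random.default_rng(0)
    sizes = [2, 3, 1]
    biases = [rng.standard_normal((n, 1)) for n in sizes[1:]]
    weights = [rng.standard_normal((n, m)) for m, n in zip(sizes[:-1], sizes[1:])]
    x = np.array([[0.5], [0.1]])
    y = np.array([[1.0]])
    nabla_b, nabla_w = backprop(weights, biases, x, y)
    print([nb.shape for nb in nabla_b])   # [(3, 1), (1, 1)]
    print([nw.shape for nw in nabla_w])   # [(3, 2), (1, 3)]

The printed shapes match the corresponding bias and weight matrices layer by layer, which is a quick sanity check that the outer products stored in nabla_w are oriented correctly.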
