Neural Networks (2)

"""
Update the network's weights and biases by applying gradient descent using backpropagation to a single mini batch.
The update equations used
clarification: sigma_x(f) means sum of all f(x)
w = w - eta / m * sigma_x(partial derivation of weight corresponding to cost function)
b = b - eta / m * sigma_x(partial derivation of bias corresponding to cost function)
:param mini_batch: a list of tuples "(x, y)
:param eta: learning rate
:return:
"""

# initialize the gradient accumulators for the biases and weights
nabla_b = [np.zeros(b.shape) for b in self.biases]
nabla_w = [np.zeros(w.shape) for w in self.weights]
# for every training example (x, y) in the mini-batch
for x, y in mini_batch:
    # delta_nabla_b / delta_nabla_w are the partial derivatives of the cost function
    # with respect to the biases / weights for this single example;
    # we need to sum them over all examples in the mini-batch
    delta_nabla_b, delta_nabla_w = self.backprop(x, y)
    # so we accumulate the bias gradients each time an example is processed
    nabla_b = [nb + dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
    # and do the same for the weight gradients
    nabla_w = [nw + dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
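Once the loop finishes, nabla_b and nabla_w hold the gradients summed over the whole mini-batch, and the method applies the update rule stated in the docstring. The lines below are a minimal sketch of that final step, assuming self.weights and self.biases are lists of NumPy arrays (as in the rest of this Network class); m is just len(mini_batch).

# apply the gradient-descent step from the docstring:
#     w -> w - (eta/m) * sum_x dC/dw,   b -> b - (eta/m) * sum_x dC/db
# (sketch; assumes self.weights / self.biases are lists of NumPy arrays)
self.weights = [w - (eta / len(mini_batch)) * nw
                for w, nw in zip(self.weights, nabla_w)]
self.biases = [b - (eta / len(mini_batch)) * nb
               for b, nb in zip(self.biases, nabla_b)]

In the full implementation this method would typically be called once per mini-batch from the outer stochastic-gradient-descent training loop, so per-example gradients never need to be stored beyond a single batch.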
