Chapter 5: Error Backpropagation

I. The Chain Rule

1. What is the chain rule? (Backpropagation is based on the chain rule.)

Explanation:

If z = f(t) and t = g(x, y), the chain rule states that the derivative of a composite function is the product of the local derivatives: ∂z/∂x = ∂z/∂t · ∂t/∂x. For example, for z = t² with t = x + y, we get ∂z/∂x = 2t · 1 = 2(x + y). Backpropagation applies this rule node by node on the computational graph: each node multiplies the derivative arriving from upstream by its own local derivative and passes the product downstream.

II. Backpropagation

For addition, the backward pass simply passes the upstream value through to the downstream nodes and does not need the forward-pass input signals: for z = x + y, ∂z/∂x = ∂z/∂y = 1. For multiplication, however, the backward pass does need the input values from the forward pass: for z = x · y, ∂z/∂x = y and ∂z/∂y = x, so each input's gradient is the upstream derivative multiplied by the other input. This is why the multiplication layer below stores its forward inputs while the addition layer does not.
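As a quick sanity check (a minimal sketch, not from the book), central differences confirm the multiplication rule and the chain-rule result ∂z/∂x = 2(x + y) for the composite function z = (x + y)²:

# Verify the local-derivative rules numerically with central differences
h = 1e-4
x, y = 2.0, 3.0

d_mul = ((x + h) * y - (x - h) * y) / (2 * h)
print(d_mul)    # ~3.0: the gradient of x*y w.r.t. x is the other input, y

d_chain = (((x + h) + y) ** 2 - ((x - h) + y) ** 2) / (2 * h)
print(d_chain)  # ~10.0 = 2 * (x + y); the add node passes the derivative through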

III. Implementing Simple Layers

1. The multiplication layer

# Multiplication layer
class MulLayer:
    def __init__(self):
        # keep the forward-pass inputs; backward() needs them
        self.x = None
        self.y = None

    def forward(self, x, y):
        self.x = x
        self.y = y
        out = x * y

        return out

    def backward(self, dout):
        # "swap" the inputs: the gradient w.r.t. x uses y, and vice versa
        dx = dout * self.y
        dy = dout * self.x

        return dx, dy

2. The addition layer

class AddLayer:
    def __init__(self):
        # addition needs no state from the forward pass
        pass

    def forward(self, x, y):
        out = x + y

        return out

    def backward(self, dout):
        # the local derivative of addition is 1, so pass dout straight through
        dx = dout * 1
        dy = dout * 1

        return dx, dy

3. A worked example

# Note: backward() is called in the reverse order of the forward() calls.
apple = 100
apple_num = 2
orange = 150
orange_num = 3
tax = 1.1
# layer
mul_apple_layer = MulLayer()
mul_orange_layer = MulLayer()
add_apple_orange_layer = AddLayer()
mul_tax_layer = MulLayer()
# forward
apple_price = mul_apple_layer.forward(apple, apple_num) #(1)
orange_price = mul_orange_layer.forward(orange, orange_num) #(2)
all_price = add_apple_orange_layer.forward(apple_price, orange_price) #(3)
price = mul_tax_layer.forward(all_price, tax) #(4)
# backward
dprice = 1
dall_price, dtax = mul_tax_layer.backward(dprice) #(4)
dapple_price, dorange_price = add_apple_orange_layer.backward(dall_price) #(3)
dorange, dorange_num = mul_orange_layer.backward(dorange_price) #(2)
dapple, dapple_num = mul_apple_layer.backward(dapple_price) #(1)
print(price) # 715 (printed as 715.0000000000001 due to floating-point rounding)
print(dapple_num, dapple, dorange, dorange_num, dtax) # 110 2.2 3.3 165 650

IV. Implementing Activation Function Layers

1. The ReLU layer

class Relu:
    def __init__(self):
        # mask: boolean array, True where the forward input x was <= 0,
        # False where it was > 0
        self.mask = None
    def forward(self, x):
        # x is a NumPy array
        self.mask = (x <= 0)
        out = x.copy()
        out[self.mask] = 0
        return out
    def backward(self, dout):
        # zero the upstream gradient wherever the forward input was <= 0
        # (note: modifies dout in place)
        dout[self.mask] = 0
        dx = dout
        return dx
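
A small usage sketch:

import numpy as np

x = np.array([[1.0, -0.5], [-2.0, 3.0]])
relu = Relu()
print(relu.forward(x))
# [[1. 0.]
#  [0. 3.]]
print(relu.backward(np.ones_like(x)))
# [[1. 0.]
#  [0. 1.]]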

2. The Sigmoid layer

Working through the derivation on the computational graph, the backward pass simplifies to

∂L/∂x = ∂L/∂y · y(1 − y)

where y = sigmoid(x) is the forward-pass output, so only y needs to be stored.

import numpy as np

class Sigmoid:
    def __init__(self):
        # keep the forward output y; it is all backward() needs
        self.out = None
    def forward(self, x):
        out = 1 / (1 + np.exp(-x))
        self.out = out
        return out
    def backward(self, dout):
        # dL/dx = dL/dy * y * (1 - y)
        dx = dout * (1.0 - self.out) * self.out
        return dx
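
A quick check of the formula at x = 0, where the sigmoid's derivative is exactly 0.25:

s = Sigmoid()
s.forward(np.array([0.0]))           # sigmoid(0) = 0.5
print(s.backward(np.array([1.0])))   # [0.25] = 0.5 * (1 - 0.5)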

V. Implementing the Affine/Softmax Layers

1. The Affine layer

The matrix product computed in a neural network's forward pass is known in geometry as an affine transformation (a linear map followed by a translation: np.dot(x, W) + b). The processing that performs this affine transformation is therefore implemented here as the Affine layer.

2. The batch version of the Affine layer

In the forward pass, the bias is added to every sample in the batch (the 1st, the 2nd, ...) by broadcasting. Consequently, in the backward pass, the gradients flowing back from all samples must be summed over the batch axis to obtain the bias gradient, as the short demo below shows.
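A short NumPy demonstration of both directions:

import numpy as np

# forward: the bias is broadcast and added to every row (sample)
X_dot_W = np.array([[0, 0, 0], [10, 10, 10]])
B = np.array([1, 2, 3])
print(X_dot_W + B)
# [[ 1  2  3]
#  [11 12 13]]

# backward: the bias gradient sums dout over the batch axis
dY = np.array([[1, 2, 3], [4, 5, 6]])
print(np.sum(dY, axis=0))  # [5 7 9]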

class Affine:
    def __init__(self, W, b):
        self.W = W
        self.b = b
        self.x = None
        self.dW = None
        self.db = None
    def forward(self, x):
        self.x = x
        out = np.dot(x, self.W) + self.b   # the bias is broadcast to every row
        return out
    def backward(self, dout):
        dx = np.dot(dout, self.W.T)        # gradient w.r.t. the input
        self.dW = np.dot(self.x.T, dout)   # gradient w.r.t. the weights
        self.db = np.sum(dout, axis=0)     # sum over the batch axis
        return dx
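
A usage sketch with explicit shapes (hypothetical sizes: batch of 2, 4 inputs, 3 outputs; NumPy imported as np, as in the demo above):

W = np.random.randn(4, 3)
b = np.zeros(3)
affine = Affine(W, b)

x = np.random.randn(2, 4)               # a batch of 2 samples
out = affine.forward(x)                 # shape (2, 3)
dx = affine.backward(np.ones((2, 3)))   # pretend dL/dout is all ones
print(out.shape, dx.shape, affine.dW.shape, affine.db.shape)
# (2, 3) (2, 4) (4, 3) (3,)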

3. The Softmax-with-Loss layer

(Figure: the backward pass of the Cross Entropy Error layer)

(Figure: the computational graph of the Softmax-with-Loss layer)

Backpropagation passes this difference (y − t) between the network's output and the teacher label back to the preceding layers; this is an important property of neural network learning.

When the cross-entropy error is used as the loss function for the softmax function, backpropagation yields the strikingly "clean" result (y1 − t1, y2 − t2, y3 − t3). This cleanness is no accident: the cross-entropy error function was designed precisely so that the gradient takes this form. Likewise, in regression problems the output layer uses the identity function and the loss function is the sum-of-squares error for the same reason: with the sum-of-squares error as the loss for the identity function, backpropagation again yields the "clean" result (y1 − t1, y2 − t2, y3 − t3).

class SoftmaxWithLoss:
    def __init__(self):
        self.loss = None # the loss
        self.y = None # the output of softmax
        self.t = None # teacher labels (one-hot vectors)
    def forward(self, x, t):
        # softmax and cross_entropy_error are assumed to be imported
        # (common/functions.py in the book's source code)
        self.t = t
        self.y = softmax(x)
        self.loss = cross_entropy_error(self.y, self.t)
        return self.loss
    def backward(self, dout=1):
        # propagate (y - t), divided by batch_size so that each sample
        # passes back its per-sample gradient
        batch_size = self.t.shape[0]
        dx = (self.y - self.t) / batch_size
        return dx
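
A self-contained usage sketch; the softmax and cross_entropy_error below are simplified stand-ins for the book's common/functions.py versions (batch inputs, one-hot labels):

import numpy as np

def softmax(x):
    x = x - np.max(x, axis=1, keepdims=True)  # subtract the max for numerical stability
    return np.exp(x) / np.sum(np.exp(x), axis=1, keepdims=True)

def cross_entropy_error(y, t):
    return -np.sum(t * np.log(y + 1e-7)) / y.shape[0]

layer = SoftmaxWithLoss()
x = np.array([[0.3, 2.9, 4.0]])   # scores for one sample, three classes
t = np.array([[0, 0, 1]])         # the correct class is index 2
print(layer.forward(x, t))        # ~0.31: small loss, class 2 already scores highest
print(layer.backward())           # [[ 0.018  0.245 -0.263]] = (y - t) / batch_size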

VI. Implementing Error Backpropagation

So far, two ways of computing gradients have been introduced: numerical differentiation, and solving the equations analytically. The latter, implemented as error backpropagation, computes the gradient efficiently even when there are very many parameters.

What is numerical differentiation still good for, then? It is needed to confirm that the backpropagation implementation is correct. The operation of checking that the gradient obtained by numerical differentiation matches (strictly speaking, comes very close to) the result of error backpropagation is called a gradient check.

# Gradient by numerical differentiation (central difference)
def numerical_gradient(f, x):
    h = 1e-4 # 0.0001
    grad = np.zeros_like(x)
    
    # np.nditer iterates over every element of a multidimensional array;
    # flags=['multi_index'] exposes each element's index, and
    # op_flags=['readwrite'] allows the values to be modified during iteration
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        idx = it.multi_index # index of the current element, e.g. (0, 3) for a 2-D array
        tmp_val = x[idx]
        x[idx] = tmp_val + h
        # note: x here may be e.g. self.params['W1'] itself; because it is
        # modified in place, the perturbed weight changes the value of f
        fxh1 = f(x) # f(x+h)
        
        x[idx] = tmp_val - h 
        fxh2 = f(x) # f(x-h)
        grad[idx] = (fxh1 - fxh2) / (2*h)
        
        x[idx] = tmp_val # restore the original value
        it.iternext() # advance the iterator to the next element
        
    return grad
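A quick usage check (assuming NumPy is imported as np): the gradient of f(x) = x0² + x1² at (3.0, 4.0) should be (6, 8):

grad = numerical_gradient(lambda x: np.sum(x ** 2), np.array([3.0, 4.0]))
print(grad)  # [6. 8.]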
# Define the two-layer neural network
# coding: utf-8
import sys, os
sys.path.append(os.pardir)  # make the parent directory importable (for common/ and dataset/)
import numpy as np
from common.layers import *
from common.gradient import numerical_gradient
from collections import OrderedDict

# Two-layer neural network
class TwoLayerNet:

    def __init__(self, input_size, hidden_size, output_size, weight_init_std = 0.01):
        # initialize the parameters: Gaussian-initialized weights, zero biases
        self.params = {}
        self.params['W1'] = weight_init_std * np.random.randn(input_size, hidden_size)
        self.params['b1'] = np.zeros(hidden_size)
        self.params['W2'] = weight_init_std * np.random.randn(hidden_size, output_size) 
        self.params['b2'] = np.zeros(output_size)

        # Store the layers in an OrderedDict, a dictionary that remembers the
        # order in which entries were added. The forward pass can then simply
        # call each layer's forward() in insertion order, and the backward pass
        # calls the layers in the reverse order.
        # (Before Python 3.7 a plain dict did not guarantee insertion order;
        # since 3.7 it does, but OrderedDict makes the intent explicit.)
        self.layers = OrderedDict()
        self.layers['Affine1'] = Affine(self.params['W1'], self.params['b1'])
        self.layers['Relu1'] = Relu()
        self.layers['Affine2'] = Affine(self.params['W2'], self.params['b2'])

        self.lastLayer = SoftmaxWithLoss()
        
    def predict(self, x):
        for layer in self.layers.values():
            x = layer.forward(x)
        
        return x
        
    # x: input data, t: teacher labels
    def loss(self, x, t):
        y = self.predict(x)
        return self.lastLayer.forward(y, t)
    
    def accuracy(self, x, t):
        y = self.predict(x)
        y = np.argmax(y, axis=1)
        if t.ndim != 1 : t = np.argmax(t, axis=1)
        
        accuracy = np.sum(y == t) / float(x.shape[0])
        return accuracy
        
    # x: input data, t: teacher labels
    def numerical_gradient(self, x, t):
        loss_W = lambda W: self.loss(x, t)
        
        grads = {}
        grads['W1'] = numerical_gradient(loss_W, self.params['W1'])
        grads['b1'] = numerical_gradient(loss_W, self.params['b1'])
        grads['W2'] = numerical_gradient(loss_W, self.params['W2'])
        grads['b2'] = numerical_gradient(loss_W, self.params['b2'])
        
        return grads
        
    def gradient(self, x, t):
        # forward
        self.loss(x, t)

        # backward
        dout = 1
        dout = self.lastLayer.backward(dout)
        
        layers = list(self.layers.values())
        layers.reverse()
        for layer in layers:
            dout = layer.backward(dout)

        # collect the gradients stored by each Affine layer during backward()
        grads = {}
        grads['W1'], grads['b1'] = self.layers['Affine1'].dW, self.layers['Affine1'].db
        grads['W2'], grads['b2'] = self.layers['Affine2'].dW, self.layers['Affine2'].db

        return grads
# Gradient check
# coding: utf-8
import sys, os
sys.path.append(os.pardir)  # make the parent directory importable
import numpy as np
from dataset.mnist import load_mnist
from two_layer_net import TwoLayerNet

# load the MNIST data (normalized, one-hot labels)
(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, one_hot_label=True)

network = TwoLayerNet(input_size=784, hidden_size=50, output_size=10)

x_batch = x_train[:3]
t_batch = t_train[:3]

grad_numerical = network.numerical_gradient(x_batch, t_batch)
grad_backprop = network.gradient(x_batch, t_batch)

for key in grad_numerical.keys():
    # average absolute difference between the two gradients; if backprop is
    # implemented correctly, this should be a very small value
    diff = np.average( np.abs(grad_backprop[key] - grad_numerical[key]) )
    print(key + ":" + str(diff))
# Training with error backpropagation
# coding: utf-8
import sys, os
sys.path.append(os.pardir)

import numpy as np
from dataset.mnist import load_mnist
from two_layer_net import TwoLayerNet

# load the data
(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, one_hot_label=True)

network = TwoLayerNet(input_size=784, hidden_size=50, output_size=10)

iters_num = 10000
train_size = x_train.shape[0]
batch_size = 100
learning_rate = 0.1

train_loss_list = []
train_acc_list = []
test_acc_list = []

iter_per_epoch = max(train_size / batch_size, 1)

for i in range(iters_num):
    batch_mask = np.random.choice(train_size, batch_size)
    x_batch = x_train[batch_mask]
    t_batch = t_train[batch_mask]
    
    # compute the gradient (one forward pass and one backward pass)
    #grad = network.numerical_gradient(x_batch, t_batch)  # numerical version (very slow)
    grad = network.gradient(x_batch, t_batch)
    
    # update the parameters (SGD)
    for key in ('W1', 'b1', 'W2', 'b2'):
        network.params[key] -= learning_rate * grad[key]
    
    loss = network.loss(x_batch, t_batch)
    train_loss_list.append(loss)
    
    if i % iter_per_epoch == 0:
        train_acc = network.accuracy(x_train, t_train)
        test_acc = network.accuracy(x_test, t_test)
        train_acc_list.append(train_acc)
        test_acc_list.append(test_acc)
        print(train_acc, test_acc)

Overall structure of the code:

# Skeleton shared by each layer type (Relu / Sigmoid / Affine / SoftmaxWithLoss)
class Layer:
    def __init__(self):
        # variables that backward() will need later
        pass
    def forward(self, x):
        pass
    def backward(self, dout):
        pass

# Structure of the network
class TwoLayerNet:
    def __init__(self, input_size, hidden_size, output_size, weight_init_std = 0.01):
        # initialize the parameters (W, b) in a dict
        # register the layers in an OrderedDict
        self.lastLayer = SoftmaxWithLoss() # the last layer is kept separate
        pass
    def predict(self, x):
        # run forward() through the layers in insertion order
        pass
    def accuracy(self, x, t):
        y = self.predict(x)
        y = np.argmax(y, axis=1)
        if t.ndim != 1 : t = np.argmax(t, axis=1)
        pass
    def loss(self, x, t):
        y = self.predict(x)
        return self.lastLayer.forward(y, t)
    def gradient(self, x, t):
        # forward pass: compute the loss
        self.loss(x, t)

        # backward pass: last layer first, then the rest in reverse order
        dout = 1
        dout = self.lastLayer.backward(dout)
        for layer in reversed(list(self.layers.values())):
            dout = layer.backward(dout)

        grads = {}
        grads['W1'], grads['b1'] = self.layers['Affine1'].dW, self.layers['Affine1'].db
        grads['W2'], grads['b2'] = self.layers['Affine2'].dW, self.layers['Affine2'].db

        return grads
