Chapter 6: Techniques for Network Training, Part 1 (Optimization Paths of Gradient Methods)

[toc]

The goal of neural network training is to find the parameters that make the value of the loss function as small as possible. This is the problem of finding the optimal parameters, and the process of solving it is called optimization.

In the implementations so far, the gradients (derivatives) of the parameters were used as the clue for finding the optimal parameters. The parameters are updated along the gradient direction, and repeating this step many times gradually brings them closer to the optimal values. This procedure is called stochastic gradient descent, or SGD for short.

When someone wants to get from the top of a mountain to the bottom, they look for the fastest route. How to choose that route is exactly the question the methods in this chapter try to answer.

The optimization paths of the four gradient-based methods are shown in the figure below:
[Figure: optimization paths of SGD, Momentum, AdaGrad, and Adam]

6.1 SGD (Stochastic Gradient Descent)

Updating the gradient is in fact updating the weights, which gives the SGD update rule:

$$W \leftarrow W - \eta \frac{\partial L}{\partial W}$$

[Note] W denotes the weight parameter to be updated, and $\frac{\partial L}{\partial W}$ denotes the gradient of the loss function with respect to W.

η is the learning rate; in practice a value decided in advance, such as 0.01 or 0.001, is used.

The code implementation is as follows:

class SGD:
    def __init__(self, lr=0.01):
        self.lr = lr
        # demo parameters (scalar weights, just for the example below)
        self.params = {}
        self.params['W1'] = 1
        self.params['b1'] = 2

    def update(self, params, grads):
        # W <- W - lr * dL/dW for every parameter
        for key in params.keys():
            params[key] -= self.lr * grads[key]

        return params


if __name__ == "__main__":

    sgd = SGD()
    grads = {'W1': 0.1, 'b1': 0.3}
    r = sgd.update(sgd.params, grads)  # {'W1': 0.999, 'b1': 1.997}
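
As a quick check of the values in the comment above, one SGD step with lr = 0.01 gives $W1 = 1 - 0.01 \times 0.1 = 0.999$ and $b1 = 2 - 0.01 \times 0.3 = 1.997$, which matches the returned dictionary.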

6.1.1 Drawbacks of SGD

SGD is simple and easy to implement, but for some problems it can be inefficient. Consider the following function:

$$f(x, y) = \frac{1}{20}x^2 + y^2$$

Its graph (surface and contour lines) looks like this:

[Figure: surface and contour lines of f(x, y) = x²/20 + y²]

Its gradient field looks like this:
[Figure: gradient vectors of f(x, y)]

[Note] Although the minimum of f(x, y) lies at (x, y) = (0, 0), most of the gradients in the figure above do not point toward (0, 0).

Example: start the search from the initial value (x, y) = (−7.0, 2.0). The result is shown in the figure below.

[Figure: optimization path of SGD on f(x, y)]

[Note] SGD moves in a zigzag pattern, which is a rather inefficient path. In other words, the drawback of SGD is that if the shape of the function is anisotropic, i.e., stretched out in one direction, the search path becomes very inefficient. The root cause of this inefficiency is that the gradient direction does not point toward the minimum.
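
This can be checked numerically. The short sketch below (not part of the book's code) evaluates the gradient of f(x, y) = x²/20 + y² at the starting point (−7.0, 2.0) and compares the steepest-descent direction with the direction toward the minimum (0, 0):

import numpy as np

def grad_f(x, y):
    # analytic gradient of f(x, y) = x**2 / 20 + y**2
    return np.array([x / 10.0, 2.0 * y])

g = grad_f(-7.0, 2.0)                                   # gradient at the start point
to_min = np.array([0.0, 0.0]) - np.array([-7.0, 2.0])   # direction toward the minimum

# cosine between the descent direction (-g) and the direction toward (0, 0)
cos = np.dot(-g, to_min) / (np.linalg.norm(g) * np.linalg.norm(to_min))
print(g)    # [-0.7  4. ]
print(cos)  # about 0.44 -- far from 1, so the step is not aimed at the minimum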

6.2 Momentum

Momentum means "momentum" in the physical sense. Its update rule is:

$$v \leftarrow \alpha v - \eta \frac{\partial L}{\partial W}, \qquad W \leftarrow W + v$$

[Note] As with SGD, W is the weight parameter to be updated, $\frac{\partial L}{\partial W}$ is the gradient of the loss function with respect to W, and η is the learning rate. The new variable v corresponds to velocity in physics. The first equation expresses the physical law that the object receives a force in the gradient direction and its velocity increases under that force. Momentum therefore behaves like a ball rolling on the ground, as shown below:
[Figure: a ball rolling on the ground under a force along the gradient]

[Note] The update rule contains the term αv. When no force acts on the object, this term gradually decelerates it (α is set to a value such as 0.9); it plays the role of friction with the ground or air resistance in physics.

The resulting optimization path is shown below:
[Figure: optimization path of Momentum on f(x, y)]

[Note] The update path looks like a ball rolling around in a bowl. Compared with SGD, the "degree" of zigzagging is reduced. Although the force in the x direction is very small, it always acts in the same direction, so the ball keeps accelerating along x. In the y direction the force is large, but it alternates between the positive and the negative direction and the contributions tend to cancel, so the velocity along y is unstable. Therefore, compared with SGD, the path approaches the minimum faster along the x axis and the zigzag motion is weakened.

The code implementation is as follows:

import numpy as np

class Momentum:
    def __init__(self, lr=0.01, momentum=0.9):
        self.lr = lr
        self.momentum = momentum
        self.v = None  # velocity for each parameter

    def update(self, params, grads):
        if self.v is None:
            # initialize the velocities to zero on the first call
            self.v = {}
            for key, val in params.items():
                self.v[key] = np.zeros_like(val)

        for key in params.keys():
            # v <- momentum * v - lr * dL/dW,  W <- W + v
            self.v[key] = self.momentum * self.v[key] - self.lr * grads[key]
            params[key] += self.v[key]
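
A small usage sketch of the Momentum class above, run on the toy function f(x, y) = x²/20 + y² from section 6.1.1 (the gradients are computed analytically; the snippet assumes the class definition above is available). Printing the coordinates shows x steadily approaching 0 while y overshoots the minimum and swings back:

params = {'x': -7.0, 'y': 2.0}
optimizer = Momentum(lr=0.1, momentum=0.9)

for step in range(10):
    grads = {'x': params['x'] / 10.0, 'y': 2.0 * params['y']}  # gradient of f at the current point
    optimizer.update(params, grads)
    print(step, params['x'], params['y'])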
 

6.3 AdaGrad

In neural network training, the value of the learning rate (written η in the equations) is important. If it is too small, learning takes too much time; conversely, if it is too large, learning diverges and does not proceed correctly.

Among the useful techniques concerning the learning rate is a method called learning rate decay: as learning progresses, the learning rate is gradually reduced. The approach of learning "a lot" at first and then gradually learning "less" is frequently used when training neural networks.
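
As a minimal illustration of the idea (an assumed generic 1/t-style schedule, not the per-element adaptation that AdaGrad below performs), the learning rate can simply be divided by a factor that grows with the epoch count:

# a hypothetical decay schedule, just to illustrate "learn a lot first, then less"
initial_lr = 0.1
decay_rate = 0.05

for epoch in range(0, 101, 20):
    lr = initial_lr / (1.0 + decay_rate * epoch)
    print(epoch, round(lr, 4))  # 0.1, 0.05, 0.0333, 0.025, 0.02, 0.0167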

The AdaGrad update rule is as follows:

$$h \leftarrow h + \frac{\partial L}{\partial W} \odot \frac{\partial L}{\partial W}, \qquad W \leftarrow W - \eta \frac{1}{\sqrt{h}} \frac{\partial L}{\partial W}$$

[Note] As with SGD, W is the weight parameter to be updated, $\frac{\partial L}{\partial W}$ is the gradient of the loss function with respect to W, and η is the learning rate. The new variable h, as the first equation shows, accumulates the sum of squares of all previous gradient values ($\odot$ denotes element-wise multiplication). When the parameters are updated, multiplying by $1/\sqrt{h}$ rescales the learning step for each element.

This means that for elements of the parameters that have fluctuated a lot (been updated by large amounts), the learning rate becomes smaller. In other words, the learning rate decays per parameter element: the learning rate of the elements that fluctuate the most is gradually reduced.

The code implementation is as follows:

# As learning progresses, the learning rate for each parameter element is gradually reduced.
import numpy as np

class AdaGrad:
    def __init__(self, lr=0.01):
        self.lr = lr
        self.h = None  # accumulated sum of squared gradients

    def update(self, params, grads):
        if self.h is None:
            self.h = {}
            for key, val in params.items():
                self.h[key] = np.zeros_like(val)

        for key in params.keys():
            self.h[key] += grads[key] * grads[key]
            # 1e-7 avoids division by zero while h is still 0
            params[key] -= self.lr * grads[key]/(np.sqrt(self.h[key]) + 1e-7)
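
A small usage sketch of the AdaGrad class above with a single parameter and a constant gradient (hypothetical values, assuming the class defined above): because h only grows, the effective step size shrinks on every update:

params = {'w': 1.0}
optimizer = AdaGrad(lr=1.0)

prev = params['w']
for step in range(5):
    optimizer.update(params, {'w': 2.0})       # feed the same gradient every time
    print(step, round(prev - params['w'], 4))  # step sizes: 1.0, 0.7071, 0.5774, 0.5, 0.4472
    prev = params['w']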

The resulting optimization path is shown below:

[Figure: optimization path of AdaGrad on f(x, y)]

[Note] The figure shows that the function value moves efficiently toward the minimum. Because the gradient along the y axis is large, the early updates are large, but the step size is then scaled down in proportion to that large accumulated change. The updates along y are therefore weakened, and the zigzag motion is attenuated.

6.4 Adam

Adam combines Momentum and AdaGrad to achieve an efficient search; performing "bias correction" of its hyperparameters is another of its features.

Its update rule is as follows (standard Adam; in the implementation below the bias correction is folded into the step size lr_t):

$$m \leftarrow \beta_1 m + (1-\beta_1)\frac{\partial L}{\partial W}, \qquad v \leftarrow \beta_2 v + (1-\beta_2)\left(\frac{\partial L}{\partial W}\right)^2$$

$$\hat{m} = \frac{m}{1-\beta_1^{\,t}}, \qquad \hat{v} = \frac{v}{1-\beta_2^{\,t}}, \qquad W \leftarrow W - \eta\,\frac{\hat{m}}{\sqrt{\hat{v}} + \varepsilon}$$

[Note] Adam has three hyperparameters. One is the learning rate (written α in the paper); the other two are the first-moment (momentum) coefficient β1 and the second-moment coefficient β2. According to the paper, the standard settings are β1 = 0.9 and β2 = 0.999, and with these values learning proceeds well in most cases.

The resulting optimization path is shown below:

[Figure: optimization path of Adam on f(x, y)]

[Note] The update path of Adam also looks like a ball rolling around in a bowl. Momentum moves in a similar way, but by comparison the left-right wobble of Adam's ball is reduced. This is because the size of the update steps is adjusted appropriately during learning.

import numpy as np

class Adam:

    def __init__(self, lr=0.001, beta1=0.9, beta2=0.999):
        self.lr = lr
        self.beta1 = beta1
        self.beta2 = beta2
        self.iter = 0
        self.m = None  # first-moment estimate
        self.v = None  # second-moment estimate

    def update(self, params, grads):
        if self.m is None:
            self.m, self.v = {}, {}
            for key, val in params.items():
                self.m[key] = np.zeros_like(val)
                self.v[key] = np.zeros_like(val)

        self.iter += 1
        # bias-corrected step size
        lr_t = self.lr * np.sqrt(1.0 - self.beta2 ** self.iter) / (1.0 - self.beta1 ** self.iter)

        for key in params.keys():
            # equivalent to m = beta1*m + (1-beta1)*grad, v = beta2*v + (1-beta2)*grad**2
            self.m[key] += (1 - self.beta1) * (grads[key] - self.m[key])
            self.v[key] += (1 - self.beta2) * (grads[key] ** 2 - self.v[key])

            params[key] -= lr_t * self.m[key] / (np.sqrt(self.v[key]) + 1e-7)
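
A small usage sketch of the Adam class above on a single scalar parameter with a constant gradient (hypothetical values). Because lr_t folds the bias correction of m and v into the step size, the very first updates already have a magnitude close to lr instead of starting near zero:

params = {'w': 1.0}
optimizer = Adam(lr=0.001)

prev = params['w']
for step in range(3):
    optimizer.update(params, {'w': 0.5})
    print(step, round(prev - params['w'], 6))  # each step is close to lr = 0.001
    prev = params['w']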
 
          

6.5 Case Study

Case study: comparing the parameter-update methods on the MNIST dataset.

The directory layout is as follows:
[Figure: project directory structure (common/ and ch06/ folders)]

6.5.1 The common folder

6.5.1.1 common/functions.py
import numpy as np
 
 
def sigmoid(x):
    return 1/(1 + np.exp(-x))
 
 
def sigmoid_grad(x):
    return (1.0 - sigmoid(x)) * sigmoid(x)
 
 
# activation function for the output layer
def softmax(x):
    if x.ndim == 2:
        x = x.T
        x = x - np.max(x, axis=0)
        y = np.exp(x)/np.sum(np.exp(x), axis=0)
        return y.T
 
    x = x - np.max(x, axis=0)
    return np.exp(x)/np.sum(np.exp(x))
 
 
# cross-entropy error
def cross_entropy_error(y, t):
    if y.ndim == 1:
        t = t.reshape(1, t.size)
        y = y.reshape(1, y.size)
 
    if y.size == t.size:
        t = t.argmax(axis=1)
 
    batch_size = y.shape[0]
    return -np.sum(np.log(y[np.arange(batch_size), t] + 1e-7))/batch_size
6.5.1.2 common/gradient.py
import numpy as np
 
 
def numerical_gradient(f, x):
    h = 1e-4
    grad = np.zeros_like(x)  # array of the same shape as x
    # By default, nditer treats the iterated array as read-only.
    # To modify the elements while iterating, op_flags=['readwrite'] must be specified.
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    # iterate over every element of x
    while not it.finished:
        idx = it.multi_index  # index of the current element
        tmp_val = x[idx]
        x[idx] = tmp_val + h
        fxh1 = f(x)

        x[idx] = tmp_val - h
        fxh2 = f(x)

        grad[idx] = (fxh1 - fxh2) / (2*h)  # central difference

        x[idx] = tmp_val
        it.iternext()
    return grad
 
 
6.5.1.3 common/layers.py
# coding: utf-8
import numpy as np
from common.functions import *
from common.util import im2col, col2im
 
 
class Relu:
    def __init__(self):
        self.mask = None
 
    def forward(self, x):
        self.mask = (x <= 0)
        out = x.copy()
        out[self.mask] = 0
 
        return out
 
    def backward(self, dout):
        dout[self.mask] = 0
        dx = dout
 
        return dx
 
 
class Sigmoid:
    def __init__(self):
        self.out = None
 
    def forward(self, x):
        out = sigmoid(x)
        self.out = out
        return out
 
    def backward(self, dout):
        dx = dout * (1.0 - self.out) * self.out
 
        return dx
 
 
class Affine:
    def __init__(self, W, b):
        self.W =W
        self.b = b
        
        self.x = None
        self.original_x_shape = None
        # gradients of the weights and biases
        self.dW = None
        self.db = None
 
    def forward(self, x):
        # flatten the input to 2D (handles tensor inputs such as conv feature maps)
        self.original_x_shape = x.shape
        x = x.reshape(x.shape[0], -1)
        self.x = x
 
        out = np.dot(self.x, self.W) + self.b
 
        return out
 
    def backward(self, dout):
        dx = np.dot(dout, self.W.T)
        self.dW = np.dot(self.x.T, dout)
        self.db = np.sum(dout, axis=0)
        
        dx = dx.reshape(*self.original_x_shape)  # restore the original input shape
        return dx
 
 
class SoftmaxWithLoss:
    def __init__(self):
        self.loss = None
        self.y = None  # output of softmax
        self.t = None  # teacher labels (one-hot or label indices)
 
    def forward(self, x, t):
        self.t = t
        self.y = softmax(x)
        self.loss = cross_entropy_error(self.y, self.t)
        
        return self.loss
 
    def backward(self, dout=1):
        batch_size = self.t.shape[0]
        if self.t.size == self.y.size: 
            dx = (self.y - self.t) / batch_size
        else:
            dx = self.y.copy()
            dx[np.arange(batch_size), self.t] -= 1
            dx = dx / batch_size
        
        return dx
 
 
class Dropout:
    """
    http://arxiv.org/abs/1207.0580
    """
    def __init__(self, dropout_ratio=0.5):
        self.dropout_ratio = dropout_ratio
        self.mask = None
 
    def forward(self, x, train_flg=True):
        if train_flg:
            self.mask = np.random.rand(*x.shape) > self.dropout_ratio
            return x * self.mask
        else:
            return x * (1.0 - self.dropout_ratio)
 
    def backward(self, dout):
        return dout * self.mask
 
 
class BatchNormalization:
    """
    http://arxiv.org/abs/1502.03167
    """
    def __init__(self, gamma, beta, momentum=0.9, running_mean=None, running_var=None):
        self.gamma = gamma
        self.beta = beta
        self.momentum = momentum
        self.input_shape = None  # saved input shape (4D for conv layers, 2D for fully connected layers)
 
        # running statistics used at inference time
        self.running_mean = running_mean
        self.running_var = running_var  
        
        # intermediate values used in backward
        self.batch_size = None
        self.xc = None
        self.std = None
        self.dgamma = None
        self.dbeta = None
 
    def forward(self, x, train_flg=True):
        self.input_shape = x.shape
        if x.ndim != 2:
            N, C, H, W = x.shape
            x = x.reshape(N, -1)
 
        out = self.__forward(x, train_flg)
        
        return out.reshape(*self.input_shape)
            
    def __forward(self, x, train_flg):
        if self.running_mean is None:
            N, D = x.shape
            self.running_mean = np.zeros(D)
            self.running_var = np.zeros(D)
                        
        if train_flg:
            mu = x.mean(axis=0)
            xc = x - mu
            var = np.mean(xc**2, axis=0)
            std = np.sqrt(var + 10e-7)
            xn = xc / std
            
            self.batch_size = x.shape[0]
            self.xc = xc
            self.xn = xn
            self.std = std
            self.running_mean = self.momentum * self.running_mean + (1-self.momentum) * mu
            self.running_var = self.momentum * self.running_var + (1-self.momentum) * var            
        else:
            xc = x - self.running_mean
            xn = xc / ((np.sqrt(self.running_var + 10e-7)))
            
        out = self.gamma * xn + self.beta 
        return out
 
    def backward(self, dout):
        if dout.ndim != 2:
            N, C, H, W = dout.shape
            dout = dout.reshape(N, -1)
 
        dx = self.__backward(dout)
 
        dx = dx.reshape(*self.input_shape)
        return dx
 
    def __backward(self, dout):
        dbeta = dout.sum(axis=0)
        dgamma = np.sum(self.xn * dout, axis=0)
        dxn = self.gamma * dout
        dxc = dxn / self.std
        dstd = -np.sum((dxn * self.xc) / (self.std * self.std), axis=0)
        dvar = 0.5 * dstd / self.std
        dxc += (2.0 / self.batch_size) * self.xc * dvar
        dmu = np.sum(dxc, axis=0)
        dx = dxc - dmu / self.batch_size
        
        self.dgamma = dgamma
        self.dbeta = dbeta
        
        return dx
 
 
class Convolution:
    def __init__(self, W, b, stride=1, pad=0):
        self.W = W
        self.b = b
        self.stride = stride
        self.pad = pad
        
        # intermediate data used in backward
        self.x = None   
        self.col = None
        self.col_W = None
        
        # gradients of the weights and biases
        self.dW = None
        self.db = None
 
    def forward(self, x):
        FN, C, FH, FW = self.W.shape
        N, C, H, W = x.shape
        out_h = 1 + int((H + 2*self.pad - FH) / self.stride)
        out_w = 1 + int((W + 2*self.pad - FW) / self.stride)
 
        col = im2col(x, FH, FW, self.stride, self.pad)
        col_W = self.W.reshape(FN, -1).T
 
        out = np.dot(col, col_W) + self.b
        out = out.reshape(N, out_h, out_w, -1).transpose(0, 3, 1, 2)
 
        self.x = x
        self.col = col
        self.col_W = col_W
 
        return out
 
    def backward(self, dout):
        FN, C, FH, FW = self.W.shape
        dout = dout.transpose(0,2,3,1).reshape(-1, FN)
 
        self.db = np.sum(dout, axis=0)
        self.dW = np.dot(self.col.T, dout)
        self.dW = self.dW.transpose(1, 0).reshape(FN, C, FH, FW)
 
        dcol = np.dot(dout, self.col_W.T)
        dx = col2im(dcol, self.x.shape, FH, FW, self.stride, self.pad)
 
        return dx
 
 
class Pooling:
    def __init__(self, pool_h, pool_w, stride=1, pad=0):
        self.pool_h = pool_h
        self.pool_w = pool_w
        self.stride = stride
        self.pad = pad
        
        self.x = None
        self.arg_max = None
 
    def forward(self, x):
        N, C, H, W = x.shape
        out_h = int(1 + (H - self.pool_h) / self.stride)
        out_w = int(1 + (W - self.pool_w) / self.stride)
 
        col = im2col(x, self.pool_h, self.pool_w, self.stride, self.pad)
        col = col.reshape(-1, self.pool_h*self.pool_w)
 
        arg_max = np.argmax(col, axis=1)
        out = np.max(col, axis=1)
        out = out.reshape(N, out_h, out_w, C).transpose(0, 3, 1, 2)
 
        self.x = x
        self.arg_max = arg_max
 
        return out
 
    def backward(self, dout):
        dout = dout.transpose(0, 2, 3, 1)
        
        pool_size = self.pool_h * self.pool_w
        dmax = np.zeros((dout.size, pool_size))
        dmax[np.arange(self.arg_max.size), self.arg_max.flatten()] = dout.flatten()
        dmax = dmax.reshape(dout.shape + (pool_size,)) 
        
        dcol = dmax.reshape(dmax.shape[0] * dmax.shape[1] * dmax.shape[2], -1)
        dx = col2im(dcol, self.x.shape, self.pool_h, self.pool_w, self.stride, self.pad)
        
        return dx
6.5.1.4 common/util.py
# coding: utf-8
import numpy as np
 
 
def smooth_curve(x):
    # smooth a loss curve for plotting (weighted moving average with a Kaiser window)
    window_len = 11
    s = np.r_[x[window_len-1:0:-1], x, x[-1:-window_len:-1]]
    w = np.kaiser(window_len, 2)
    y = np.convolve(w/w.sum(), s, mode='valid')
    return y[5:len(y)-5]
 
 
def shuffle_dataset(x, t):
    permutation = np.random.permutation(x.shape[0])
    x = x[permutation,:] if x.ndim == 2 else x[permutation,:,:,:]
    t = t[permutation]
 
    return x, t
 
def conv_output_size(input_size, filter_size, stride=1, pad=0):
    return (input_size + 2*pad - filter_size) / stride + 1
 
 
def im2col(input_data, filter_h, filter_w, stride=1, pad=0):
    N, C, H, W = input_data.shape
    out_h = (H + 2*pad - filter_h)//stride + 1
    out_w = (W + 2*pad - filter_w)//stride + 1
 
    img = np.pad(input_data, [(0,0), (0,0), (pad, pad), (pad, pad)], 'constant')
    col = np.zeros((N, C, filter_h, filter_w, out_h, out_w))
 
    for y in range(filter_h):
        y_max = y + stride*out_h
        for x in range(filter_w):
            x_max = x + stride*out_w
            col[:, :, y, x, :, :] = img[:, :, y:y_max:stride, x:x_max:stride]
 
    col = col.transpose(0, 4, 5, 1, 2, 3).reshape(N*out_h*out_w, -1)
    return col
 
 
def col2im(col, input_shape, filter_h, filter_w, stride=1, pad=0):
    N, C, H, W = input_shape
    out_h = (H + 2*pad - filter_h)//stride + 1
    out_w = (W + 2*pad - filter_w)//stride + 1
    col = col.reshape(N, out_h, out_w, C, filter_h, filter_w).transpose(0, 3, 4, 5, 1, 2)
 
    img = np.zeros((N, C, H + 2*pad + stride - 1, W + 2*pad + stride - 1))
    for y in range(filter_h):
        y_max = y + stride*out_h
        for x in range(filter_w):
            x_max = x + stride*out_w
            img[:, :, y:y_max:stride, x:x_max:stride] += col[:, :, y, x, :, :]
 
    return img[:, :, pad:H + pad, pad:W + pad]
6.5.1.5 common/optimizer.py
import numpy as np
 
 
class SGD:
    def __init__(self, lr=0.01):
        self.lr = lr

    def update(self, params, grads):
        for key in params.keys():
            params[key] -= self.lr * grads[key]

        return params
 
 
 
class Momentum:
    def __init__(self, lr=0.01, momentum=0.9):
        self.lr = lr
        self.momentum = momentum
        self.v = None
 
    def update(self, params, grads):
        if self.v is None:
            self.v = {}
            for key, val in params.items():
                self.v[key] = np.zeros_like(val)
 
        for key in params.keys():
            self.v[key] = self.momentum * self.v[key] - self.lr * grads[key]
            params[key] += self.v[key]
 
 
# As learning progresses, the learning rate for each parameter element is gradually reduced.
class AdaGrad:
    def __init__(self, lr=0.01):
        self.lr = lr
        self.h = None
 
    def update(self, params, grads):
        if self.h is None:
            self.h = {}
            for key, val in params.items():
                self.h[key] = np.zeros_like(val)
 
        for key in params.keys():
            self.h[key] += grads[key] * grads[key]
            params[key] -= self.lr * grads[key]/(np.sqrt(self.h[key]) + 1e-7)
 
 
class Adam:
 
    def __init__(self, lr=0.001, beta1=0.9, beta2=0.999):
        self.lr = lr
        self.beta1 = beta1
        self.beta2 = beta2
        self.iter = 0
        self.m = None
        self.v = None
 
    def update(self, params, grads):
        if self.m is None:
            self.m, self.v = {}, {}
            for key, val in params.items():
                self.m[key] = np.zeros_like(val)
                self.v[key] = np.zeros_like(val)
 
        self.iter += 1
        lr_t = self.lr * np.sqrt(1.0 - self.beta2 ** self.iter) / (1.0 - self.beta1 ** self.iter)
 
        for key in params.keys():
            self.m[key] += (1 - self.beta1) * (grads[key] - self.m[key])
            self.v[key] += (1 - self.beta2) * (grads[key] ** 2 - self.v[key])
 
            params[key] -= lr_t * self.m[key] / (np.sqrt(self.v[key]) + 1e-7)
 
           
6.5.1.6 common/multi_layer_net.py
# coding: utf-8
import sys, os
sys.path.append(os.pardir)  # add the parent directory to the module search path
import numpy as np
from collections import OrderedDict
from common.layers import *
from common.gradient import numerical_gradient
 
 
class MultiLayerNet:
 
    def __init__(self, input_size, hidden_size_list, output_size,
                 activation='relu', weight_init_std='relu', weight_decay_lambda=0):
        self.input_size = input_size
        self.output_size = output_size
        self.hidden_size_list = hidden_size_list
        self.hidden_layer_num = len(hidden_size_list)
        self.weight_decay_lambda = weight_decay_lambda
        self.params = {}
 
        # initialize the weights
        self.__init_weight(weight_init_std)
 
        # create the layers
        activation_layer = {'sigmoid': Sigmoid, 'relu': Relu}
        self.layers = OrderedDict()
        for idx in range(1, self.hidden_layer_num+1):
            self.layers['Affine' + str(idx)] = Affine(self.params['W' + str(idx)],
                                                      self.params['b' + str(idx)])
            self.layers['Activation_function' + str(idx)] = activation_layer[activation]()
 
        idx = self.hidden_layer_num + 1
        self.layers['Affine' + str(idx)] = Affine(self.params['W' + str(idx)],
            self.params['b' + str(idx)])
 
        self.last_layer = SoftmaxWithLoss()
 
    def __init_weight(self, weight_init_std):
        # weight_init_std: a numeric scale, or 'relu'/'he' / 'sigmoid'/'xavier'
        all_size_list = [self.input_size] + self.hidden_size_list + [self.output_size]
        for idx in range(1, len(all_size_list)):
            scale = weight_init_std
            if str(weight_init_std).lower() in ('relu', 'he'):
                scale = np.sqrt(2.0 / all_size_list[idx - 1])  # recommended scale when using ReLU (He initialization)
            elif str(weight_init_std).lower() in ('sigmoid', 'xavier'):
                scale = np.sqrt(1.0 / all_size_list[idx - 1])  # recommended scale when using sigmoid (Xavier initialization)
 
            self.params['W' + str(idx)] = scale * np.random.randn(all_size_list[idx-1], all_size_list[idx])
            self.params['b' + str(idx)] = np.zeros(all_size_list[idx])
 
    def predict(self, x):
        for layer in self.layers.values():
            x = layer.forward(x)
 
        return x
 
    def loss(self, x, t):
        y = self.predict(x)
 
        weight_decay = 0
        for idx in range(1, self.hidden_layer_num + 2):
            W = self.params['W' + str(idx)]
            weight_decay += 0.5 * self.weight_decay_lambda * np.sum(W ** 2)
 
        return self.last_layer.forward(y, t) + weight_decay
 
    def accuracy(self, x, t):
        y = self.predict(x)
        y = np.argmax(y, axis=1)
        if t.ndim != 1 : t = np.argmax(t, axis=1)
 
        accuracy = np.sum(y == t) / float(x.shape[0])
        return accuracy
 
    def numerical_gradient(self, x, t):
        loss_W = lambda W: self.loss(x, t)
 
        grads = {}
        for idx in range(1, self.hidden_layer_num+2):
            grads['W' + str(idx)] = numerical_gradient(loss_W, self.params['W' + str(idx)])
            grads['b' + str(idx)] = numerical_gradient(loss_W, self.params['b' + str(idx)])
 
        return grads
 
    def gradient(self, x, t):
        # forward
        self.loss(x, t)
 
        # backward
        dout = 1
        dout = self.last_layer.backward(dout)
 
        layers = list(self.layers.values())
        layers.reverse()
        for layer in layers:
            dout = layer.backward(dout)
 
        # collect the gradients of each Affine layer (with the weight decay term added to dW)
        grads = {}
        for idx in range(1, self.hidden_layer_num+2):
            grads['W' + str(idx)] = self.layers['Affine' + str(idx)].dW + self.weight_decay_lambda * self.layers['Affine' + str(idx)].W
            grads['b' + str(idx)] = self.layers['Affine' + str(idx)].db
 
        return grads

6.5.2 The ch06 folder

6.5.2.1 ch06/optimizer_compare_mnist
# coding: utf-8
import os
import sys
sys.path.append(os.pardir)  # add the parent directory to the module search path
import numpy as np
import matplotlib.pyplot as plt
from dataset.mnist import load_mnist
from common.util import smooth_curve
from common.multi_layer_net import MultiLayerNet
from common.optimizer import *
 
 
# 0: load the MNIST dataset
(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True)
 
train_size = x_train.shape[0]
batch_size = 128
max_iterations = 2000
 
 
# 1: experiment setup (optimizers and networks to compare)
optimizers = {}
optimizers['SGD'] = SGD()
optimizers['Momentum'] = Momentum()
optimizers['AdaGrad'] = AdaGrad()
optimizers['Adam'] = Adam()
# optimizers['RMSprop'] = RMSprop()
 
networks = {}
train_loss = {}
for key in optimizers.keys():
    networks[key] = MultiLayerNet(
        input_size=784, hidden_size_list=[100, 100, 100, 100],
        output_size=10)
    train_loss[key] = []    
 
 
# 2: training loop
for i in range(max_iterations):
    batch_mask = np.random.choice(train_size, batch_size)
    x_batch = x_train[batch_mask]
    t_batch = t_train[batch_mask]
    
    for key in optimizers.keys():
        grads = networks[key].gradient(x_batch, t_batch)
        optimizers[key].update(networks[key].params, grads)
    
        loss = networks[key].loss(x_batch, t_batch)
        train_loss[key].append(loss)
    
    if i % 100 == 0:
        print( "===========" + "iteration:" + str(i) + "===========")
        for key in optimizers.keys():
            loss = networks[key].loss(x_batch, t_batch)
            print(key + ":" + str(loss))
 
 
# 3: plot the training loss curves
markers = {"SGD": "o", "Momentum": "x", "AdaGrad": "s", "Adam": "D"}
x = np.arange(max_iterations)
for key in optimizers.keys():
    plt.plot(x, smooth_curve(train_loss[key]), marker=markers[key], markevery=100, label=key)
plt.xlabel("iterations")
plt.ylabel("loss")
plt.ylim(0, 1)
plt.legend()
plt.show()

6.5.3 Results

The output obtained during training is as follows:
[Figure: console log of the loss for each optimizer, printed every 100 iterations]

The comparison of the results is shown below:

[Figure: training loss curves of SGD, Momentum, AdaGrad, and Adam on MNIST]

6.5.4 Implementation of the four optimization paths

# coding: utf-8
import sys, os
sys.path.append(os.pardir) 
import numpy as np
import matplotlib.pyplot as plt
from collections import OrderedDict
from common.optimizer import *
 
 
def f(x, y):
    return x**2 / 20.0 + y**2
 
 
def df(x, y):
    return x / 10.0, 2.0*y
 
# initial parameter values
init_pos = (-7.0, 2.0)
params = {}
params['x'], params['y'] = init_pos[0], init_pos[1]
grads = {}
grads['x'], grads['y'] = 0, 0
 
 
# OrderedDict keeps the optimizers in insertion order of (key, value)
optimizers = OrderedDict()
optimizers["SGD"] = SGD(lr=0.95)
optimizers["Momentum"] = Momentum(lr=0.1)
optimizers["AdaGrad"] = AdaGrad(lr=1.5)
optimizers["Adam"] = Adam(lr=0.3)
 
idx = 1
 
for key in optimizers:
    optimizer = optimizers[key]
    x_history = []
    y_history = []
    params['x'], params['y'] = init_pos[0], init_pos[1]
    
    for i in range(30):
        x_history.append(params['x'])
        y_history.append(params['y'])
        
        grads['x'], grads['y'] = df(params['x'], params['y'])
        optimizer.update(params, grads)
 
    x = np.arange(-10, 10, 0.01)
    y = np.arange(-5, 5, 0.01)
    
    # build coordinate matrices (a grid) from the two vectors
    X, Y = np.meshgrid(x, y) 
    Z = f(X, Y)
    
    # for simple contour line  
    mask = Z > 7
    Z[mask] = 0
    
    # plot 
    plt.subplot(2, 2, idx)
    idx += 1
    plt.plot(x_history, y_history, 'o-', color="red")
    plt.contour(X, Y, Z)
    plt.ylim(-10, 10)
    plt.xlim(-10, 10)
    plt.plot(0, 0, '+')
    #colorbar()
    #spring()
    plt.title(key)
    plt.xlabel("x")
    plt.ylabel("y")
    
plt.show()

The result of running the script is as follows:
[Figure: optimization paths of SGD, Momentum, AdaGrad, and Adam on f(x, y)]
