L2W1 Assignment 1: Initialization

Initialization

Welcome to the first assignment of "Improving Deep Neural Networks".

Training a neural network requires specifying initial values for the weights, and a well-chosen initialization method helps the network learn.

If you completed the previous course of this specialization, you probably followed our instructions for weight initialization and it has worked so far. But how do you choose the initialization for a new neural network? In this notebook you will see how different initializations lead to different results.

A well-chosen initialization can:

  • Speed up the convergence of gradient descent
  • Increase the odds of gradient descent converging to a lower training (and generalization) error

To get started, run the following cell to load the packages and the 2D dataset you will classify.

In [1]:

cd ../input/deeplearning34288
/home/kesci/input/deeplearning34288

In [2]:

import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec

%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
Matplotlib is building the font cache using fc-list. This may take a moment.

(Figure: the training data, blue and red dots arranged in circles)

You would like a classifier to separate the blue dots from the red dots.

1 Neural Network Model

You will use a 3-layer neural network (already implemented for you). These are the initialization methods you will experiment with:

  • Zeros initialization: set initialization = "zeros" in the input argument.
  • Random initialization: set initialization = "random" in the input argument. This initializes the weights to large random values.
  • He initialization: set initialization = "he" in the input argument. This initializes the weights to random values scaled according to a paper by He et al. (2015).

Instructions: Please quickly read over the code below and run it. In the next part you will implement the three initialization methods that this model() calls.

In [6]:

def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
    """
    Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
    
    Arguments:
    X -- input data, of shape (2, number of examples)
    Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
    learning_rate -- learning rate for gradient descent 
    num_iterations -- number of iterations to run gradient descent
    print_cost -- if True, print the cost every 1000 iterations
    initialization -- flag to choose which initialization to use ("zeros","random" or "he")
    
    Returns:
    parameters -- parameters learnt by the model
    """
        
    grads = {}
    costs = [] # to keep track of the loss
    m = X.shape[1] # number of examples
    layers_dims = [X.shape[0], 10, 5, 1]
    
    # Initialize parameters dictionary.
    if initialization == "zeros":
        parameters = initialize_parameters_zeros(layers_dims)
    elif initialization == "random":
        parameters = initialize_parameters_random(layers_dims)
    elif initialization == "he":
        parameters = initialize_parameters_he(layers_dims)

    # Loop (gradient descent)

    for i in range(0, num_iterations):

        # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
        a3, cache = forward_propagation(X, parameters)
        
        # Loss
        cost = compute_loss(a3, Y)

        # Backward propagation.
        grads = backward_propagation(X, Y, cache)
        
        # Update parameters.
        parameters = update_parameters(parameters, grads, learning_rate)
        
        # Print the loss every 1000 iterations
        if print_cost and i % 1000 == 0:
            print("Cost after iteration {}: {}".format(i, cost))
            costs.append(cost)
            
    # plot the loss
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('iterations (per thousands)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()
    
    return parameters

2 Zeros Initialization

There are two types of parameters to initialize in a neural network:

  • the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$
  • the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$

Exercise: Implement the following function to initialize all parameters to zeros. You will see later that this does not work well because it fails to "break symmetry", but try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.

In [6]:

# GRADED FUNCTION: initialize_parameters_zeros 

def initialize_parameters_zeros(layers_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the size of each layer.
    
    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                    b1 -- bias vector of shape (layers_dims[1], 1)
                    ...
                    WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                    bL -- bias vector of shape (layers_dims[L], 1)
    """
    
    parameters = {}
    L = len(layers_dims)            # number of layers in the network
    
    for l in range(1, L):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.zeros((layers_dims[l],layers_dims[l-1]))
        parameters['b' + str(l)] = np.zeros((layers_dims[l],1))
        ### END CODE HERE ###
    return parameters

In [7]:

parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
W1 = [[0. 0. 0.]
 [0. 0. 0.]]
b1 = [[0.]
 [0.]]
W2 = [[0. 0.]]
b2 = [[0.]]

Expected output:
W1 = [[0. 0. 0.]
[0. 0. 0.]]
b1 = [[0.]
[0.]]
W2 = [[0. 0.]]
b2 = [[0.]]

Run the following code to train your model on 15,000 iterations using zeros initialization.

In [8]:

parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
Cost after iteration 0: 0.6931471805599453
Cost after iteration 1000: 0.6931471805599453
Cost after iteration 2000: 0.6931471805599453
Cost after iteration 3000: 0.6931471805599453
Cost after iteration 4000: 0.6931471805599453
Cost after iteration 5000: 0.6931471805599453
Cost after iteration 6000: 0.6931471805599453
Cost after iteration 7000: 0.6931471805599453
Cost after iteration 8000: 0.6931471805599453
Cost after iteration 9000: 0.6931471805599453
Cost after iteration 10000: 0.6931471805599455
Cost after iteration 11000: 0.6931471805599453
Cost after iteration 12000: 0.6931471805599453
Cost after iteration 13000: 0.6931471805599453
Cost after iteration 14000: 0.6931471805599453

(Figure: cost vs. iterations with zeros initialization; the cost stays flat at about 0.69)

On the train set:
Accuracy: 0.5
On the test set:
Accuracy: 0.5

The performance is terrible, the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:

In [9]:

print ("predictions_train = " + str(predictions_train))
print ("predictions_test = " + str(predictions_test))
predictions_train = [[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0 0 0]]
predictions_test = [[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]

In [11]:

plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)

(Figure: decision boundary of the model with zeros initialization)

The model predicts 0 for every example.

In general, initializing all the weights to zero fails to break symmetry. This means that every neuron in each layer learns the same thing, so you might as well be training a network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression.
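
To make the symmetry concrete, here is a minimal sketch (an illustration only, not part of the assignment's init_utils code) that runs one forward/backward pass by hand on a tiny zero-initialized network: every hidden unit computes the same activation and receives the same (here, zero) gradient, so the units can never differentiate from one another.

import numpy as np

# Tiny demo of "failure to break symmetry" with all-zero weights.
np.random.seed(0)
X = np.random.randn(2, 5)                     # 2 features, 5 examples
Y = (np.random.rand(1, 5) > 0.5).astype(int)  # binary labels

W1, b1 = np.zeros((3, 2)), np.zeros((3, 1))   # hidden layer with 3 units
W2, b2 = np.zeros((1, 3)), np.zeros((1, 1))   # output layer

# Forward pass: LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = W1 @ X + b1
A1 = np.maximum(0, Z1)             # every hidden unit outputs the same value (0)
Z2 = W2 @ A1 + b2
A2 = 1 / (1 + np.exp(-Z2))         # constant 0.5 for every example

# Backward pass (binary cross-entropy)
m = X.shape[1]
dZ2 = A2 - Y
dW2 = dZ2 @ A1.T / m               # all zeros, so the columns of W2 stay equal
dZ1 = (W2.T @ dZ2) * (Z1 > 0)
dW1 = dZ1 @ X.T / m                # identical (zero) rows, so hidden units stay equal

print("rows of dW1 identical:", np.allclose(dW1, dW1[0]))  # True
print("dW2:", dW2)                                          # all zeros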

What you should remember:

  • The weights $W^{[l]}$ should be initialized randomly to break symmetry.
  • It is okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken as long as $W^{[l]}$ is initialized randomly.

3 Random Initialization

To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can go on to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly but to very large values.

Exercise: Implement the following function to initialize your weights to large random values (scaled by *10) and your biases to zeros. Use np.random.randn(..,..) * 10 for the weights and np.zeros((..,..)) for the biases. We use a fixed np.random.seed(..) to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.

In [3]:

# GRADED FUNCTION: initialize_parameters_random

def initialize_parameters_random(layers_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the size of each layer.
    
    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                    b1 -- bias vector of shape (layers_dims[1], 1)
                    ...
                    WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                    bL -- bias vector of shape (layers_dims[L], 1)
    """
    
    np.random.seed(3)               # This seed makes sure your "random" numbers will be the same as ours
    parameters = {}
    L = len(layers_dims)            # integer representing the number of layers
    
    for l in range(1, L):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.random.randn(layers_dims[l],layers_dims[l-1])*10
        parameters['b' + str(l)] = np.zeros((layers_dims[l],1))
        ### END CODE HERE ###

    return parameters

In [4]:

parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
W1 = [[ 17.88628473   4.36509851   0.96497468]
 [-18.63492703  -2.77388203  -3.54758979]]
b1 = [[0.]
 [0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[0.]]

Expected output:
W1 = [[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
b1 = [[0.]
[0.]]
W2 = [[-0.82741481 -6.27000677]]
b2 = [[0.]]

Run the following code to train your model on 15,000 iterations using random initialization.

In [7]:

parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
/home/kesci/input/deeplearning34288/init_utils.py:145: RuntimeWarning: divide by zero encountered in log
  logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
/home/kesci/input/deeplearning34288/init_utils.py:145: RuntimeWarning: invalid value encountered in multiply
  logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
Cost after iteration 0: inf
Cost after iteration 1000: 0.6247924745506072
Cost after iteration 2000: 0.5980258056061102
Cost after iteration 3000: 0.5637539062842213
Cost after iteration 4000: 0.5501256393526495
Cost after iteration 5000: 0.5443826306793814
Cost after iteration 6000: 0.5373895855049121
Cost after iteration 7000: 0.47157999220550006
Cost after iteration 8000: 0.39770475516243037
Cost after iteration 9000: 0.3934560146692851
Cost after iteration 10000: 0.3920227137490125
Cost after iteration 11000: 0.38913700035966736
Cost after iteration 12000: 0.3861358766546214
Cost after iteration 13000: 0.38497629552893475
Cost after iteration 14000: 0.38276694641706693

(Figure: cost vs. iterations with large random initialization)

On the train set:
Accuracy: 0.83
On the test set:
Accuracy: 0.86

You may see the cost printed as "inf" after iteration 0. This is due to numerical rounding; a more numerically sophisticated implementation would fix it, but it isn't worth worrying about for our purposes.
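
As an aside, here is where the "inf" comes from and one common way to sidestep it (an illustrative sketch, not the fix used in init_utils): with weights scaled by 10, the final sigmoid activation can round to exactly 0.0 or 1.0 in float64, so the cross-entropy evaluates log(0). Clipping the activation away from 0 and 1 before taking the log keeps the cost finite.

import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

a3 = sigmoid(40.0)                 # rounds to exactly 1.0 in float64
print(a3 == 1.0)                   # True
print(-np.log(1.0 - a3))           # inf (RuntimeWarning: divide by zero)

# Illustrative fix: clip the activation away from 0 and 1 before the log
eps = 1e-12
a3_safe = np.clip(a3, eps, 1.0 - eps)
print(-np.log(1.0 - a3_safe))      # large but finite (about 27.6)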

In any case, it looks like you have broken symmetry, and this gives better results than before: the model is no longer outputting all 0s.

In [13]:

print (predictions_train)
print (predictions_test)
[[1 0 1 1 0 0 1 1 1 1 1 0 1 0 0 1 0 1 1 0 0 0 1 0 1 1 1 1 1 1 0 1 1 0 0 1
  1 1 1 1 1 1 1 0 1 1 1 1 0 1 0 1 1 1 1 0 0 1 1 1 1 0 1 1 0 1 0 1 1 1 1 0
  0 0 0 0 1 0 1 0 1 1 1 0 0 1 1 1 1 1 1 0 0 1 1 1 0 1 1 0 1 0 1 1 0 1 1 0
  1 0 1 1 0 0 1 0 0 1 1 0 1 1 1 0 1 0 0 1 0 1 1 1 1 1 1 1 0 1 1 0 0 1 1 0
  0 0 1 0 1 0 1 0 1 1 1 0 0 1 1 1 1 0 1 1 0 1 0 1 1 0 1 0 1 1 1 1 0 1 1 1
  1 0 1 0 1 0 1 1 1 1 0 1 1 0 1 1 0 1 1 0 1 0 1 1 1 0 1 1 1 0 1 0 1 0 0 1
  0 1 1 0 1 1 0 1 1 0 1 1 1 0 1 1 1 1 0 1 0 0 1 1 0 1 1 1 0 0 0 1 1 0 1 1
  1 1 0 1 1 0 1 1 1 0 0 1 0 0 0 1 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 1 1 1
  1 1 1 1 0 0 0 1 1 1 1 0]]
[[1 1 1 1 0 1 0 1 1 0 1 1 1 0 0 0 0 1 0 1 0 0 1 0 1 0 1 1 1 1 1 0 0 0 0 1
  0 1 1 0 0 1 1 1 1 1 0 1 1 1 0 1 0 1 1 0 1 0 1 0 1 1 1 1 1 1 1 1 1 0 1 0
  1 1 1 1 1 0 1 0 0 1 0 0 0 1 1 0 1 1 0 0 0 1 1 0 1 1 0 0]]

In [8]:

plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)

(Figure: decision boundary of the model with large random initialization)

Observations:

  • The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.
  • Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm.
  • If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.

In summary:

  • Initializing the weights to very large random values does not work well.
  • Initializing them to small random values should do better. The important question is: how small should these random values be? Let's find out in the next part!

4 He Initialization

Finally, let's try "He initialization", named for the first author of He et al. (2015). It is similar to "Xavier initialization", except that Xavier initialization uses a scaling factor of sqrt(1./layers_dims[l-1]) for the weights $W^{[l]}$, whereas He initialization uses sqrt(2./layers_dims[l-1]).
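
As a side-by-side illustration (not part of the exercise), the only difference between the two schemes is the constant under the square root; both scale standard Gaussian samples by a factor that depends on the size of the previous layer. The layer sizes below are made up purely for the demo.

import numpy as np

np.random.seed(3)
n_prev, n_curr = 10, 5   # hypothetical layer sizes, chosen only for illustration

# Xavier initialization: weights have variance 1/n_prev
W_xavier = np.random.randn(n_curr, n_prev) * np.sqrt(1. / n_prev)

# He initialization: weights have variance 2/n_prev (suited to ReLU layers)
W_he = np.random.randn(n_curr, n_prev) * np.sqrt(2. / n_prev)

print("Xavier std:", W_xavier.std())   # roughly sqrt(1/10) ~ 0.32
print("He std    :", W_he.std())       # roughly sqrt(2/10) ~ 0.45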

Exercise: Implement the following function to initialize your parameters with He initialization.

Hint: This function is similar to the previous initialize_parameters_random(...). The only difference is that instead of multiplying np.random.randn(..,..) by 10, you multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.

In [9]:

# GRADED FUNCTION: initialize_parameters_he

def initialize_parameters_he(layers_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the size of each layer.
    
    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                    b1 -- bias vector of shape (layers_dims[1], 1)
                    ...
                    WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                    bL -- bias vector of shape (layers_dims[L], 1)
    """
    
    np.random.seed(3)
    parameters = {}
    L = len(layers_dims) - 1 # integer representing the number of layers
     
    for l in range(1, L + 1):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.random.randn(layers_dims[l],layers_dims[l-1])*np.sqrt(2./layers_dims[l-1])
        parameters['b' + str(l)] = np.zeros((layers_dims[l],1))
        ### END CODE HERE ###
        
    return parameters

In [10]:

parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
W1 = [[ 1.78862847  0.43650985]
 [ 0.09649747 -1.8634927 ]
 [-0.2773882  -0.35475898]
 [-0.08274148 -0.62700068]]
b1 = [[0.]
 [0.]
 [0.]
 [0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268  0.62552248]]
b2 = [[0.]]

Expected output:
W1 = [[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
b1 = [[0.]
[0.]
[0.]
[0.]]
W2 = [[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
b2 = [[0.]]

Run the following code to train your model on 15,000 iterations using He initialization.

In [11]:

parameters = model(train_X, train_Y, initialization = "he")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
Cost after iteration 0: 0.8830537463419761
Cost after iteration 1000: 0.6879825919728063
Cost after iteration 2000: 0.6751286264523371
Cost after iteration 3000: 0.6526117768893805
Cost after iteration 4000: 0.6082958970572938
Cost after iteration 5000: 0.5304944491717495
Cost after iteration 6000: 0.4138645817071794
Cost after iteration 7000: 0.3117803464844441
Cost after iteration 8000: 0.23696215330322562
Cost after iteration 9000: 0.1859728720920684
Cost after iteration 10000: 0.15015556280371808
Cost after iteration 11000: 0.12325079292273551
Cost after iteration 12000: 0.09917746546525937
Cost after iteration 13000: 0.08457055954024283
Cost after iteration 14000: 0.07357895962677366

(Figure: cost vs. iterations with He initialization; the cost decreases steadily)

On the train set:
Accuracy: 0.9933333333333333
On the test set:
Accuracy: 0.96

In [20]:

plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)

(Figure: decision boundary of the model with He initialization)

Observations:

  • The model with He initialization separates the blue and the red dots very well in a small number of iterations.

5 Conclusions

You have seen three different types of initialization. For the same number of iterations and the same hyperparameters, the results compare as follows:

Model | Train accuracy | Comment
3-layer NN with zeros initialization | 50% | Fails to break symmetry
3-layer NN with large random initialization | 83% | Weights are too large
3-layer NN with He initialization | 99% | Recommended method

What you should remember from this assignment:

  • Different initializations lead to different results.
  • Random initialization is used to break symmetry and make sure different hidden units can learn different things.
  • Don't initialize to values that are too large.
  • He initialization works well for networks with ReLU activations (see the variance sketch right after this list).
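
To close, here is a small self-contained experiment (an illustration, not part of the graded assignment) that pushes random data through a stack of ReLU layers and reports the spread of the activations under the three scalings used above. With He scaling the signal neither dies out nor blows up, which is the property that makes it work so well with ReLU.

import numpy as np

def activation_std_after_layers(scale_fn, n_layers=20, width=100, seed=1):
    """Propagate data through n_layers LINEAR->RELU layers and return the final std.

    scale_fn(fan_in) is the multiplier applied to np.random.randn weights.
    """
    rng = np.random.RandomState(seed)
    a = rng.randn(width, 1000)                # width features, 1000 examples
    for _ in range(n_layers):
        W = rng.randn(width, width) * scale_fn(width)
        a = np.maximum(0, W @ a)              # LINEAR -> RELU
    return a.std()

print("zeros    :", activation_std_after_layers(lambda n: 0.0))             # 0.0, the signal is dead
print("randn*10 :", activation_std_after_layers(lambda n: 10.0))            # astronomically large
print("he       :", activation_std_after_layers(lambda n: np.sqrt(2. / n))) # stays of order 1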