Stanford ML Course in Python (Week 5: Programming Exercise ex4)

This post completes programming exercise ex4 in Python. The exercise's Introduction is as follows:

In this exercise, you will implement the backpropagation algorithm for neural networks and apply it to the task of hand-written digit recognition. 

The next post will summarize some notes and personal thoughts on completing ex4: https://blog.csdn.net/wangzhihao_2015/article/details/103338448

The code is as follows (the overall structure is a bit convoluted and readability is poor; please bear with me):

# -*- coding: utf-8 -*-
"""
Created on Sat Nov 30 11:22:30 2019

@author: Lonely_hanhan
"""
import scipy.io as sio
import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize as op

## Setup the parameters you will use for this exercise
input_layer_size = 400
hidden_layer_size = 25
num_labels = 10
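# Network architecture: 400 input units (20x20 pixel images), one hidden layer of
# 25 units, and 10 output units (digits 1-9 plus '0', which is stored as label 10).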

''' =========== Part 1: Loading and Visualizing Data ============='''
# We start the exercise by first loading and visualizing the dataset.
# You will be working with a dataset that contains handwritten digits.
#
# Load Training Data

def displayData(X):
    
    #Compute rows, cols
    [m, n] = X.shape
    example_width = int(round(np.sqrt(n)))
    example_height = int(n / example_width)
    #Compute number of items to display
    display_rows = np.floor(np.sqrt(m)).astype(int)
    display_cols = np.ceil(m / display_rows).astype(int)
    
    #Between images padding
    pad = 1
    
    #Setup blank display
    display_array = - np.ones((display_rows * (example_height + pad), \
                            display_cols * (example_width + pad)))
    
    # Copy each example into a patch on the display array
    curr_ex = 0
    for j in range(display_rows):
        for i in range(display_cols):
            if curr_ex > m-1:
                break
            #Copy the patch
            #Get the max value of the patch
            max_val = np.max(np.abs(X[curr_ex]))
            display_array[j * (example_height + pad) + np.arange(example_height),\
                          i * (example_width + pad) + np.arange(example_width)[:, np.newaxis]] = \
                          X[curr_ex].reshape((example_height, example_width)) / max_val
            curr_ex += 1
        if curr_ex > m-1:
            break
    plt.figure()
    plt.imshow(display_array, cmap='gray', extent=[-1, 1, -1, 1])
    plt.axis('off')
    plt.show()
    return

print('Loading and Visualizing Data ...\n')

data = sio.loadmat(r'D:\exercise\machine-learning-ex4\machine-learning-ex4\ex4\ex4data1.mat')
X = data['X']
y = data['y']
m = X.shape[0]

rand_indices = np.random.permutation(range(m))  # a random permutation of the 5000 indices 0-4999
selected = X[rand_indices[0:100], :]  # input features of the examples at the first 100 random indices

displayData(selected)  # call the visualization function
 
input('Program paused. Press ENTER to continue')

''' ================ Part 2: Loading Parameters ================'''
# In this part of the exercise, we load some pre-initialized
# neural network parameters.

weight = sio.loadmat(r'D:\exercise\machine-learning-ex4\machine-learning-ex4\ex4\ex4weights.mat')
Theta1 = weight['Theta1']  # weights from the input layer to the hidden layer (25 x 401)
Theta2 = weight['Theta2']  # weights from the hidden layer to the output layer (10 x 26)

'''================ Part 3: Compute Cost (Feedforward) ================
%  To the neural network, you should first start by implementing the
%  feedforward part of the neural network that returns the cost only. You
%  should complete the code in nnCostFunction.m to return cost. After
%  implementing the feedforward to compute the cost, you can verify that
%  your implementation is correct by verifying that you get the same cost
%  as us for the fixed debugging parameters.
%
%  We suggest implementing the feedforward cost *without* regularization
%  first so that it will be easier for you to debug. Later, in part 4, you
%  will get to implement the regularized cost.
'''

print('\nFeedforward Using Neural Network ...\n')

# Weight regularization parameter (we set this to 0 here)

lambda_nn = 0

def sigmoid(z):
    return 1 / (1 + np.exp(-z))
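# Note: for large negative z, np.exp(-z) can overflow and numpy will emit a
# RuntimeWarning; the result still saturates to 0, so it is harmless here.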

def nnCostFunction(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_nn):
    
    ''' %NNCOSTFUNCTION Implements the neural network cost function for a two layer
%neural network which performs classification
%   [J grad] = NNCOSTFUNCTON(nn_params, hidden_layer_size, num_labels, ...
%   X, y, lambda) computes the cost and gradient of the neural network. The
%   parameters for the neural network are "unrolled" into the vector
%   nn_params and need to be converted back into the weight matrices. 
% 
%   The returned parameter grad should be a "unrolled" vector of the
%   partial derivatives of the neural network.
%

% Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
% for our 2 layer neural network '''

    Theta1_nn = nn_params[:hidden_layer_size * (input_layer_size + 1)].reshape(hidden_layer_size, input_layer_size + 1)
    Theta2_nn = nn_params[hidden_layer_size * (input_layer_size + 1):].reshape(num_labels, hidden_layer_size + 1)
    # Setup some useful variables
    m = X.shape[0]
    X = np.c_[np.ones(m), X]
    # You need to return the following variables correctly
    J = 0
    Theta1_grad = np.zeros(Theta1_nn.shape)
    Theta2_grad = np.zeros(Theta2_nn.shape)
    
    '''% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the code by working through the
%               following parts.
%
% Part 1: Feedforward the neural network and return the cost in the
%         variable J. After implementing Part 1, you can verify that your
%         cost function computation is correct by verifying the cost
%         computed in ex4.m
%
% Part 2: Implement the backpropagation algorithm to compute the gradients
%         Theta1_grad and Theta2_grad. You should return the partial derivatives of
%         the cost function with respect to Theta1 and Theta2 in Theta1_grad and
%         Theta2_grad, respectively. After implementing Part 2, you can check
%         that your implementation is correct by running checkNNGradients
%
%         Note: The vector y passed into the function is a vector of labels
%               containing values from 1..K. You need to map this vector into a 
%               binary vector of 1's and 0's to be used with the neural network
%               cost function.
%
%         Hint: We recommend implementing backpropagation using a for-loop
%               over the training examples if you are implementing it for the 
%               first time.
%
% Part 3: Implement regularization with the cost function and gradients.
%
%         Hint: You can implement this around the code for
%               backpropagation. That is, you can compute the gradients for
%               the regularization separately and then add them to Theta1_grad
%               and Theta2_grad from Part 2.
%'''
    z2 = np.dot(X, (Theta1_nn.T))
    a2 = sigmoid(z2)
    n2 = a2.shape[0]
    a2 = np.c_[np.ones(n2), a2]
    # Output layer
    z3 = np.dot(a2, Theta2_nn.T)  # 5000 x 10
    layer3 = sigmoid(z3)
    # For regularization, drop the bias column of each Theta
    reg_theta1 = Theta1_nn[:, 1:]  # 25 x 400
    reg_theta2 = Theta2_nn[:, 1:]  # 10 x 25
    # One-hot encode y: for each example, set its true class to 1 and the rest to 0
    Y = np.zeros((m, num_labels))  # 5000 x 10
    for i in range(m):
        Y[i, int(y[i]) - 1] = 1
    # Compute the cost (element-wise products, not matrix multiplication)
    J = np.sum(-np.log(layer3) * Y - np.log(1 - layer3) * (1 - Y)) / m \
        + lambda_nn / (2 * m) * (np.sum(reg_theta1 * reg_theta1) + np.sum(reg_theta2 * reg_theta2))
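    # The regularized cost computed above is
    #   J = (1/m) * sum_i sum_k [ -Y_ik*log(h_ik) - (1 - Y_ik)*log(1 - h_ik) ]
    #       + (lambda/(2m)) * (sum(reg_theta1**2) + sum(reg_theta2**2))
    # where h = layer3 is the output of the forward pass and the bias columns
    # are excluded from the regularization term.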
    
    
    # Backpropagation: loop over the training examples
    for i in range(m):
        x = X[i,:].reshape(1,X.shape[1]) 
        x_z2 = np.dot(x, (Theta1_nn.T))
        x_a2 = sigmoid(x_z2)
        x_n2 = x_a2.shape[0]
        x_a2 = np.c_[np.ones(x_n2), x_a2] # 1*26
        x_z3 = np.dot(x_a2, Theta2_nn.T) # 1*10
        x_layer3 = sigmoid(x_z3) # 1*10
        y_back = Y[i,:]
        dlt3 = x_layer3 - y_back
        theta2 = Theta2_nn[:,1:] # 10*25
        g_z2 = x_a2 * (1 - x_a2) #1*26
        dlt2 = np.dot(dlt3, theta2)*(g_z2[:,1:]) # 1*25
        
        Theta1_grad = Theta1_grad + dlt2.T @ x
        Theta2_grad = Theta2_grad + dlt3.T @ x_a2
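    # Backpropagation recap: dlt3 = a3 - y is the output-layer error,
    # dlt2 = (dlt3 @ Theta2[:, 1:]) * sigmoid'(z2) is the hidden-layer error
    # (computed here via a2 * (1 - a2)), and the accumulators sum
    # dlt(l+1).T @ a(l) over all training examples.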
    
    # Average the accumulated gradients (unregularized)
    Theta1_grad = Theta1_grad / m
    Theta2_grad = Theta2_grad / m
    
    # Regularization: the bias column is not regularized, so prepend a zero column
    p1 = np.c_[np.zeros(hidden_layer_size), reg_theta1]
    p2 = np.c_[np.zeros(num_labels), reg_theta2]
    
    Theta1_grad = Theta1_grad + (lambda_nn / m) * p1
    Theta2_grad = Theta2_grad + (lambda_nn / m) * p2
    

    # ====================================================================================
    # Unroll the gradients: concatenate Theta1_grad and Theta2_grad into a single vector
    grad = np.concatenate([Theta1_grad.flatten(), Theta2_grad.flatten()])

    return J , grad

 

print('\nFeedforward Using Neural Network ...\n')
#% Weight regularization parameter (we set this to 0 here).
lambda_1 = 0
nn_params = np.concatenate([Theta1.flatten(), Theta2.flatten()])
J, _ = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_1)
print('Cost at parameters (loaded from ex4weights): %f '
      '\n(this value should be about 0.287629)\n' % J)

''' =============== Part 4: Implement Regularization ===============
%  Once your cost function implementation is correct, you should now
%  continue to implement the regularization with the cost.
%'''
print('\nChecking Cost Function (w/ Regularization) ... \n')
#% Weight regularization parameter (we set this to 1 here).
lambda_2 = 1
nn_params = np.concatenate([Theta1.flatten(), Theta2.flatten()])
J_reg, _ = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda_2)

print('Cost at parameters (loaded from ex4weights): %f '
      '\n(this value should be about 0.383770)\n' % J_reg)
     
'''================ Part 5: Sigmoid Gradient  ================
%  Before you start implementing the neural network, you will first
%  implement the gradient for the sigmoid function. You should complete the
%  code in the sigmoidGradient.m file.
%'''
def sigmoidGradient(z):
    '''%SIGMOIDGRADIENT returns the gradient of the sigmoid function
%   evaluated at z
%   g = SIGMOIDGRADIENT(z) computes the gradient of the sigmoid function
%   evaluated at z. This should work regardless if z is a matrix or a
%   vector. In particular, if z is a vector or matrix, you should return
%   the gradient for each element.
    '''
    g = np.zeros(z.shape)
    g = sigmoid(z) * (1 - sigmoid(z))
    return g
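# Sanity check: since g'(z) = g(z) * (1 - g(z)), the gradient peaks at 0.25 at
# z = 0 and approaches 0 for large |z|.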

print('\nEvaluating sigmoid gradient...\n')

g = sigmoidGradient(np.array([-1,-0.5,0,0.5,1]))
print('Sigmoid gradient evaluated at [-1 -0.5 0 0.5 1]:\n  ')
print(g)
print('\n\n')

'''%% ================ Part 6: Initializing Parameters ================
%  In this part of the exercise, you will be starting to implement a two
%  layer neural network that classifies digits. You will start by
%  implementing a function to initialize the weights of the neural network
%  (randInitializeWeights.m)
'''
def randInitializeWeights(L_in, L_out):
    '''
    %RANDINITIALIZEWEIGHTS Randomly initialize the weights of a layer with L_in
%incoming connections and L_out outgoing connections
%   W = RANDINITIALIZEWEIGHTS(L_in, L_out) randomly initializes the weights 
%   of a layer with L_in incoming connections and L_out outgoing 
%   connections. 
%
%   Note that W should be set to a matrix of size(L_out, 1 + L_in) as
%   the first column of W handles the "bias" terms
%
    '''
    #You need to return the following variables correctly
    W = np.zeros((L_out, 1 + L_in))
    epsilon_init = 0.12
    W = np.random.rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init
    return W
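# For reference, the ex4 handout also mentions the heuristic
# epsilon_init = sqrt(6) / sqrt(L_in + L_out); for the 400-25 layer this gives
# roughly 0.12, and the exercise simply fixes epsilon_init = 0.12 for all layers.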

print('\nInitializing Neural Network Parameters ...\n')

initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size)
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels)

'''=============== Part 7: Implement Backpropagation ===============
%  Once your cost matches up with ours, you should proceed to implement the
%  backpropagation algorithm for the neural network. You should add to the
%  code you've written in nnCostFunction.m to return the partial
%  derivatives of the parameters.
%
'''
#print('\nChecking Backpropagation... \n');

#%  Check gradients by running checkNNGradients
def debugInitializeWeights(fan_out, fan_in):
    '''
    %DEBUGINITIALIZEWEIGHTS Initialize the weights of a layer with fan_in
%incoming connections and fan_out outgoing connections using a fixed
%strategy, this will help you later in debugging
%   W = DEBUGINITIALIZEWEIGHTS(fan_in, fan_out) initializes the weights 
%   of a layer with fan_in incoming connections and fan_out outgoing 
%   connections using a fix set of values
%
%   Note that W should be set to a matrix of size(1 + fan_in, fan_out) as
%   the first row of W handles the "bias" terms
%
    '''
    #Set W to zeros
    W = np.zeros((fan_out, 1 + fan_in))
    #Initialize W using "sin", this ensures that W is always of the same
    #values and will be useful for debugging
    W = np.sin(np.arange(np.size(W))).reshape(W.shape) / 10
    return W
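# Note: the MATLAB version uses sin(1:numel(W)) with a column-major reshape,
# while this port starts np.arange at 0 and reshapes in row-major order. The
# exact values differ, but any fixed, non-trivial initialization works for
# gradient checking.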

def computeNumericalGradient(J, nn_params):
    '''
    %COMPUTENUMERICALGRADIENT Computes the gradient using "finite differences"
%and gives us a numerical estimate of the gradient.
%   numgrad = COMPUTENUMERICALGRADIENT(J, theta) computes the numerical
%   gradient of the function J around theta. Calling y = J(theta) should
%   return the function value at theta.

% Notes: The following code implements numerical gradient checking, and 
%        returns the numerical gradient.It sets numgrad(i) to (a numerical 
%        approximation of) the partial derivative of J with respect to the 
%        i-th input argument, evaluated at theta. (i.e., numgrad(i) should 
%        be the (approximately) the partial derivative of J with respect 
%        to theta(i).)
%    
    '''
    numgrad = np.zeros(nn_params.shape)
    perturb = np.zeros(nn_params.shape)
    e = 1e-4
    for p in range(np.size(nn_params)):
        #Set perturbation vector
        perturb[p] = e
        loss1, grad1 = J(nn_params - perturb)
        loss2, grad2 = J(nn_params + perturb)
        #Compute Numerical Gradient
        numgrad[p] = (loss2 - loss1) / (2*e)
        perturb[p] = 0
    return numgrad
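# Note: only the cost values are used here (grad1/grad2 are discarded), and each
# parameter needs two cost evaluations, so this check costs 2 * len(nn_params)
# forward passes. That is why it is only run on the small debug network below.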

def checkNNGradients(lambdach):
    '''
    %CHECKNNGRADIENTS Creates a small neural network to check the
%backpropagation gradients
%   CHECKNNGRADIENTS(lambda) Creates a small neural network to check the
%   backpropagation gradients, it will output the analytical gradients
%   produced by your backprop code and the numerical gradients (computed
%   using computeNumericalGradient). These two gradient computations should
%   result in very similar values.
%
    '''
    input_layer_size = 3
    hidden_layer_size = 5
    num_labels = 3
    m = 5
    #We generate some 'random' test data
    Theta1_ch = debugInitializeWeights(hidden_layer_size, input_layer_size)
    Theta2_ch = debugInitializeWeights(num_labels, hidden_layer_size)
    #Reusing debugInitializeWeights to generate X
    X = debugInitializeWeights(m, input_layer_size - 1)
    y = 1 + np.mod(np.arange(1, m + 1), num_labels)
    
    nn_params = np.concatenate([Theta1_ch.flatten(), Theta2_ch.flatten()])
    
    #Short hand for cost function
    def cost_func(p):
        return nnCostFunction(p, input_layer_size, hidden_layer_size,
                              num_labels, X, y, lambdach)
    
    
    cost, grad = cost_func(nn_params)
    
    numgrad = computeNumericalGradient(cost_func, nn_params)
#% Visually examine the two gradient computations.  The two columns
#you get should be very similar. 
    print(np.c_[numgrad, grad,  grad - numgrad])

    print('The above two columns you get should be very similar.\n' \
         '(Left-Your Numerical Gradient, Right-Analytical Gradient)\n\n')


    '''
    % Evaluate the norm of the difference between two solutions.  
    % If you have a correct implementation, and assuming you used EPSILON = 0.0001 
    % in computeNumericalGradient.m, then diff below should be less than 1e-9
    '''
    diff = np.linalg.norm(numgrad-grad) / np.linalg.norm(numgrad+grad)
    print('If your backpropagation implementation is correct, then \n'
          'the relative difference will be small (less than 1e-9). \n'
          '\nRelative Difference: %g\n' % diff)
    return

print('\nChecking Backpropagation... \n');

#Check gradients by running checkNNGradients
lambda_b1 = 0
checkNNGradients(lambda_b1)

#Check gradients again, this time with regularization (lambda = 3)

lambda_b2 = 3
checkNNGradients(lambda_b2)

debug_J, _ = nnCostFunction(nn_params, input_layer_size,
                            hidden_layer_size, num_labels, X, y, lambda_b2)

print('\n\nCost at (fixed) debugging parameters (w/ lambda = %f): %f '
      '\n(for lambda = 3, this value should be about 0.576051)\n\n' % (lambda_b2, debug_J))

''' =================== Part 8: Training NN ===================
%  You have now implemented all the code necessary to train a neural 
%  network. To train your neural network, we will now use "fmincg", which
%  is a function which works similarly to "fminunc". Recall that these
%  advanced optimizers are able to train our cost functions efficiently as
%  long as we provide them with the gradient computations.
%
'''
print('\nTraining Neural Network... \n')
#  After you have completed the assignment, change maxiter to a larger
#  value to see how more training helps.

lmd = 1
 
 
def cost_func(p):
    return nnCostFunction(p, input_layer_size, hidden_layer_size, num_labels, X, y, lmd)[0]
 
 
def grad_func(p):
    return nnCostFunction(p, input_layer_size, hidden_layer_size, num_labels, X, y, lmd)[1]
 
initial_nn_params = np.concatenate([initial_Theta1.flatten(), initial_Theta2.flatten()])
nn_params, *unused = op.fmin_cg(cost_func, fprime=grad_func, x0=initial_nn_params, maxiter=400, disp=True, full_output=True)
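# Note: cost_func and grad_func above each call nnCostFunction, so every
# optimizer iteration runs the forward/backward pass twice. A rough sketch of
# an alternative (not used here) that reuses a single call, via
# scipy.optimize.minimize with jac=True so the objective returns (cost, grad):
#
#   res = op.minimize(fun=nnCostFunction, x0=initial_nn_params,
#                     args=(input_layer_size, hidden_layer_size, num_labels, X, y, lmd),
#                     method='CG', jac=True, options={'maxiter': 400})
#   nn_params = res.x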
 
# Obtain theta1 and theta2 back from nn_params
theta1 = nn_params[:hidden_layer_size * (input_layer_size + 1)].reshape(hidden_layer_size, input_layer_size + 1)
theta2 = nn_params[hidden_layer_size * (input_layer_size + 1):].reshape(num_labels, hidden_layer_size + 1)


 
input('Program paused. Press ENTER to continue')

'''================= Part 9: Visualize Weights =================
 You can now "visualize" what the neural network is learning by 
%  displaying the hidden units to see what features they are capturing in 
%  the data.
'''
print('Visualizing Neural Network...')
 
displayData(theta1[:, 1:])


'''================= Part 10: Implement Predict =================
%  After training the neural network, we would like to use it to predict
%  the labels. You will now implement the "predict" function to use the
%  neural network to predict the labels of the training set. This lets
%  you compute the training set accuracy.
'''
def predict(Theta1, Theta2, X):
    # Useful values
    m = X.shape[0]
    #num_labels = Theta2.shape[0]
    X = np.c_[np.ones(m), X]
    
    # You need to return the following variables correctly
    p = np.zeros((m, 1))
# ====================== YOUR CODE HERE ======================
# Instructions: Complete the following code to make predictions using
#               your learned neural network. You should set p to a 
#               vector containing labels between 1 to num_labels.
#
# Hint: The max function might come in useful. In particular, the max
#       function can also return the index of the max element, for more
#       information see 'help max'. If your examples are in rows, then, you
#       can use max(A, [], 2) to obtain the max for each row.
    # Hidden layer
    z2 = np.dot(X, Theta1.T)
    a2 = sigmoid(z2)
    n2 = a2.shape[0]
    a2 = np.c_[np.ones(n2), a2]  # add the bias column
    # Output layer
    z3 = np.dot(a2, Theta2.T)
    out = sigmoid(z3)
    p = np.argmax(out, axis=1)

    return p + 1
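# Note: np.argmax returns 0-based indices, so predict() adds 1 to map back to
# the 1..10 labels used by the dataset (the digit '0' is stored as label 10).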

pred = predict(theta1, theta2, X)
accuracy = np.mean(pred == y.flatten()) * 100
print('Training set accuracy: %.2f %%' % accuracy)
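# For reference, the ex4 handout quotes a training accuracy of about 95.3% with
# 50 optimizer iterations and lambda = 1; with maxiter=400 the accuracy here is
# typically higher (exact numbers vary with the random initialization).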

The output is as follows (screenshots omitted): one run with lambda = 0 and one with lambda = 3.
