Notes on Andrew Ng's Deep Learning Course Assignments: 1.2 Logistic Regression with a Neural Network Mindset

All the related code and data have been uploaded to GitHub: 1.2 Logistic Regression with a Neural Network

1.2.1 Required Packages

  • numpy is the fundamental package for scientific computing with Python
  • h5py is a common package for interacting with datasets stored in H5 files
  • matplotlib is a library for plotting graphs in Python
  • PIL is used here to test the model with your own pictures
# 1-import.py
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset

# If you are running in a Jupyter notebook, add "%matplotlib inline"; otherwise this line is not needed.

1.2.2 Problem Overview

Problem statement: we are given a dataset ("data.h5") containing:

  • a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
  • a test set of m_test images labeled as cat or non-cat
  • each image has shape (num_px, num_px, 3), where 3 stands for the three RGB channels; every image is square

Our task is to build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.

# 2-load_data.py
# Loading the data (cat/non-cat). An example picture from the dataset is displayed in the next snippet.
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()

We add the suffix "_orig" to the datasets that still need preprocessing (train and test); after preprocessing we will obtain train_set_x and test_set_x. The labels train_set_y and test_set_y do not need any preprocessing.

Each row of train_set_x_orig and test_set_x_orig is an array representing one image. You can display one of the images with the following code:

# 3-show_an_image.py
# Example of a picture

index = 25
plt.imshow(train_set_x_orig[index])
plt.show()	# add this line when not running in a Jupyter notebook
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") +  "' picture.")

Some useful values:

  • m_train: number of training examples
  • m_test: number of test examples
  • num_px: height and width of each training image (the height equals the width)
  • train_set_x_orig is an array of shape (m_train, num_px, num_px, 3)
    These values can be obtained with the following code:
# 4-get_some_values.py

m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
print ("Number of training examples: m_train = " + str(m_train))
# Output: Number of training examples: m_train = 209

print ("Number of testing examples: m_test = " + str(m_test))
# Output: Number of testing examples: m_test = 50

print ("Height/Width of each image: num_px = " + str(num_px))
# Output: Height/Width of each image: num_px = 64

print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
# Output: Each image is of size: (64, 64, 3)

print ("train_set_x shape: " + str(train_set_x_orig.shape))
# Output: train_set_x shape: (209, 64, 64, 3)

print ("train_set_y shape: " + str(train_set_y.shape))
# Output: train_set_y shape: (1, 209)

print ("test_set_x shape: " + str(test_set_x_orig.shape))
# Output: test_set_x shape: (50, 64, 64, 3)

print ("test_set_y shape: " + str(test_set_y.shape))
# Output: test_set_y shape: (1, 50)

Next, we reshape each image of shape (num_px, num_px, 3) into a numpy array of shape (num_px * num_px * 3, 1). After this, the training (and test) dataset is a numpy array in which each column represents one flattened image; there should be m_train (respectively m_test) columns.

A trick: to flatten a matrix X of shape (a, b, c, d) into a matrix X_flatten of shape (a, b*c*d), use the following code:

X_flatten = X.reshape(X.shape[0], -1)
# 5-reshape.py
# Reshape the training and test examples

train_set_x_flatten = train_set_x_orig.reshape(m_train, -1).T
test_set_x_flatten = test_set_x_orig.reshape(m_test, -1).T
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
# Output: train_set_x_flatten shape: (12288, 209)

print ("train_set_y shape: " + str(train_set_y.shape))
# Output: train_set_y shape: (1, 209)

print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
# Output: test_set_x_flatten shape: (12288, 50)

print ("test_set_y shape: " + str(test_set_y.shape))
# Output: test_set_y shape: (1, 50)

print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
# Output: sanity check after reshaping: [17 31 56 22 33]

A slightly more detailed explanation of this step:
Take train_set_x_orig as an example. Its shape is (209, 64, 64, 3) and m_train equals 209, so train_set_x_orig.reshape(m_train, -1) has shape (209, 64 * 64 * 3), i.e. (209, 12288); after transposing, train_set_x_flatten has shape (12288, 209), i.e. (num_px * num_px * 3, number of examples). Each column is one picture, and there are 209 pictures in total.
Similarly, test_set_x_flatten has shape (12288, 50); the only difference is that the test set contains just 50 pictures.

To represent a color image, the red, green, and blue channels (RGB) must be specified for every pixel, so a pixel value is actually a vector of three numbers, each ranging from 0 to 255.

One common preprocessing step in machine learning is to center and standardize the dataset, which means subtracting the mean of the whole numpy array from every example and then dividing every example by the standard deviation of the whole numpy array. For image datasets, however, it is simpler and sufficient to just divide every row of the dataset by 255 (the maximum value of a pixel channel).

# 6-standardize.py
train_set_x = train_set_x_flatten/255
test_set_x = test_set_x_flatten/255 

To summarize, the common steps for preprocessing a new dataset are:

  • Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
  • Reshape the dataset so that each example becomes a vector of shape (num_px * num_px * 3, 1)
  • "Standardize" the data

1.2.3 General Architecture of the Learning Algorithm

In this section we design a simple learning algorithm to distinguish cat images from non-cat images. The figure below explains why logistic regression is really a very simple neural network.
[Figure: logistic regression expressed as a simple neural network]

The mathematical expression of the algorithm, for a single example $x^{(i)}$:

$$z^{(i)} = w^{T} x^{(i)} + b$$
$$\hat{y}^{(i)} = a^{(i)} = \mathrm{sigmoid}(z^{(i)})$$
$$\mathcal{L}(a^{(i)}, y^{(i)}) = -y^{(i)}\log(a^{(i)}) - (1 - y^{(i)})\log(1 - a^{(i)})$$

The cost is then computed by averaging the losses over all training examples:

$$J = \frac{1}{m}\sum_{i=1}^{m}\mathcal{L}(a^{(i)}, y^{(i)})$$

Key steps:

  • Initialize the parameters of the model
  • Learn the parameters by minimizing the cost
  • Use the learned parameters to make predictions on the test set
  • Analyse the results and draw conclusions

1.2.4 Building the Parts of the Algorithm

The main steps for building a neural network are:

  • Define the model structure (such as the number of input features)
  • Initialize the model's parameters
  • Loop:
    - Calculate the current loss (forward propagation)
    - Calculate the current gradient (backward propagation)
    - Update the parameters (gradient descent)

1.2.4.1 Helper Function

Define the sigmoid function; we already implemented it in code in an earlier exercise.
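For completeness, a minimal sketch of that helper is shown below (the earlier exercise's version may differ slightly in wording, but it has to behave like this because the functions in the following sections call sigmoid directly):

# Minimal sketch of the sigmoid helper (not one of the numbered scripts above)
import numpy as np

def sigmoid(z):
    """
    Compute the sigmoid of z.

    Arguments:
    z -- a scalar or numpy array of any size

    Return:
    s -- sigmoid(z)
    """
    s = 1.0 / (1.0 + np.exp(-z))
    return s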

1.2.4.2 Initializing Parameters

Initialize w as a vector of zeros (and b to 0):

# 7-initialize_with_zeros.py

def initialize_with_zeros(dim):
    """
    This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.

    Argument:
    dim -- size of the w vector we want (or number of parameters in this case)

    Returns:
    w -- initialized vector of shape (dim, 1)
    b -- initialized scalar (corresponds to the bias)
    """

    w = np.zeros((dim, 1))
    b = 0

    assert(w.shape == (dim, 1))
    assert(isinstance(b, float) or isinstance(b, int))

    return w, b
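A quick, hypothetical sanity check (not part of the original snippets) shows what the function returns:

# Hypothetical quick check of initialize_with_zeros
w, b = initialize_with_zeros(2)
print("w = " + str(w))   # a (2, 1) array of zeros
print("b = " + str(b))   # 0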

1.2.4.3 Forward and Backward Propagation

Forward propagation, explained:

  • You are given the input X
  • You compute $A = \sigma(w^{T}X + b) = (a^{(0)}, a^{(1)}, \dots, a^{(m-1)}, a^{(m)})$
  • You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log(a^{(i)}) + (1-y^{(i)})\log(1-a^{(i)})\right]$

Backward propagation then uses the following two formulas for the gradients:

$$\frac{\partial J}{\partial w} = \frac{1}{m} X (A - Y)^{T}$$
$$\frac{\partial J}{\partial b} = \frac{1}{m}\sum_{i=1}^{m}(a^{(i)} - y^{(i)})$$
# 8-propagate.py
# This function computes the cost function and its gradient

def propagate(w, b, X, Y):
    """
    Implement the cost function and its gradient for the propagation explained above

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)

    Return:
    cost -- negative log-likelihood cost for logistic regression
    dw -- gradient of the loss with respect to w, thus same shape as w
    db -- gradient of the loss with respect to b, thus same shape as b

    Tips:
    - Write your code step by step for the propagation. np.log(), np.dot()
    """

    m = X.shape[1]    # m is the number of examples

    # FORWARD PROPAGATION (FROM X TO COST)
    A = sigmoid(np.dot(w.T, X)+b)                         		   # compute activation
    cost = -(1.0/m)*np.sum(Y*np.log(A)+(1-Y)*np.log(1-A))          # compute cost

    # BACKWARD PROPAGATION (TO FIND GRAD)
    dw = (1.0/m)*np.dot(X,(A-Y).T)
    db = (1.0/m)*np.sum(A-Y)

    assert(dw.shape == w.shape)
    assert(db.dtype == float)
    cost = np.squeeze(cost)
    assert(cost.shape == ())

    grads = {"dw": dw,
             "db": db}

    return grads, cost
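A hypothetical toy check (arbitrary values, not part of the original snippets) confirms the shapes that come back from propagate():

# Hypothetical toy check of propagate() with a 2-feature, 3-example dataset
w = np.array([[1.], [2.]])
b = 2.
X = np.array([[1., 2., -1.], [3., 4., -3.2]])
Y = np.array([[1, 0, 1]])
grads, cost = propagate(w, b, X, Y)
print("dw shape = " + str(grads["dw"].shape))   # (2, 1), same shape as w
print("db = " + str(grads["db"]))               # a scalar
print("cost = " + str(cost))                    # a scalar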

1.2.4.4 Optimization

So far we have initialized the parameters and we have the cost function and its gradient. Now we need to update the parameters using gradient descent.

# 9-optimize.py

def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
    """
    This function optimizes w and b by running a gradient descent algorithm
    It learns w and b by minimizing the cost function. For a parameter θ, the update rule is θ = θ - α dθ, where α is the learning rate.

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- True to print the loss every 100 steps

    Returns:
    params -- dictionary containing the weights w and bias b
    grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
    costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.

    Tips:
    You basically need to write down two steps and iterate through them:
        1) Calculate the cost and the gradient for the current parameters. Use propagate().
        2) Update the parameters using gradient descent rule for w and b.
    """

    costs = []

    for i in range(num_iterations):
        # Cost and gradient calculation
        grads, cost = propagate(w, b, X, Y)

        # Retrieve derivatives from grads
        dw = grads["dw"]
        db = grads["db"]

        # update rule 
        w = w - learning_rate*dw
        b = b - learning_rate*db

        # Record the costs
        if i % 100 == 0:
            costs.append(cost)

        # Print the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))

    params = {"w": w,
              "b": b}

    grads = {"dw": dw,
             "db": db}

    return params, grads, costs

We now have the learned w and b, and we use them to predict the labels for a dataset X. Computing predictions takes two steps:

  • Calculate $\hat{Y} = A = \sigma(w^{T}X + b)$
  • Convert the entries of A to 0 (if the activation is <= 0.5) or 1 (if the activation is > 0.5), and store the predictions in the vector Y_prediction
# 10-predict.py

def predict(w, b, X):
    '''
    Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)

    Returns:
    Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
    '''

    m = X.shape[1]
    Y_prediction = np.zeros((1,m))
    w = w.reshape(X.shape[0], 1)

    # Compute vector "A" predicting the probabilities of a cat being present in the picture
    A = sigmoid(np.dot(w.T, X) + b)    # A is a 2-D numpy array of shape (1, m)
    
    for i in range(A.shape[1]):

        # Convert probabilities A[0,i] to actual predictions p[0,i]
        if A[0,i] > 0.5:
            Y_prediction[0,i] = 1
        else:
            Y_prediction[0,i] = 0

    assert(Y_prediction.shape == (1, m))

    return Y_prediction

1.2.5 Merging All the Functions into a Model

A couple of naming conventions:

  • Y_prediction_test holds the predictions on the test set
  • Y_prediction_train holds the predictions on the training set
# 11-model.py

def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
    """
    Builds the logistic regression model by calling the function you've implemented previously

    Arguments:
    X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
    Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
    X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
    Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
    num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
    learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
    print_cost -- Set to true to print the cost every 100 iterations

    Returns:
    d -- dictionary containing information about the model.
    """


    # initialize parameters with zeros
    w, b = initialize_with_zeros(X_train.shape[0])

    # Gradient descent 
    parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)

    # Retrieve parameters w and b from dictionary "parameters"
    w = parameters["w"]
    b = parameters["b"]

    # Predict test/train set examples
    Y_prediction_test = predict(w, b, X_test)
    Y_prediction_train = predict(w, b, X_train)

    # Print train/test Errors
    print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
    print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))

    d = {"costs": costs,
         "Y_prediction_test": Y_prediction_test, 
         "Y_prediction_train" : Y_prediction_train, 
         "w" : w, 
         "b" : b,
         "learning_rate" : learning_rate,
         "num_iterations": num_iterations}

    return d 
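The plotting code below uses the dictionary d returned by model(), so the model has to be trained first. A call like the following does that (2000 iterations with a learning rate of 0.005, the values suggested in the course assignment):

# Train the model so that the dictionary d exists for the plots below
d = model(train_set_x, train_set_y, test_set_x, test_set_y,
          num_iterations = 2000, learning_rate = 0.005, print_cost = True)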

The following code plots the learning curve (the cost):

# 12-plot.py

# Plot learning curve (with costs)

costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()

If you increase the number of iterations in the code above and rerun it, you may see the training-set accuracy go up while the test-set accuracy goes down. This is called overfitting.

1.2.6 Further Analysis

Choice of learning rate
The learning rate α determines how quickly we update the parameters. If the learning rate is too large, we may "overshoot" the optimal value. Similarly, if it is too small, we will need too many iterations to converge to the best values.

# 13-learning_rates.py

learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
    print ("learning rate is: " + str(i))
    models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
    print ('\n' + "-------------------------------------------------------" + '\n')

for i in learning_rates:
    plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))

plt.ylabel('cost')
plt.xlabel('iterations')

legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()

[Figure: cost curves for learning rates 0.01, 0.001, and 0.0001]

Interpretation:

  • Different learning rates give different costs and therefore different prediction results
  • If the learning rate is too large (0.01), the cost may oscillate up and down; it may even diverge (although in this example 0.01 still ends up with a fairly low cost)
  • A lower cost does not necessarily mean a better model. You have to check whether there is overfitting; it typically happens when the training accuracy is much higher than the test accuracy
  • In deep learning, we usually recommend choosing the learning rate that better minimizes the cost function

1.2.7 Test with Your Own Image

You can test the model with your own picture:

# 14-test_with_own_image.py

my_image = "my_image.jpeg"   # change this to the name of your image file 

# We preprocess the image to fit your algorithm.
fname = "image/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)

plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") +  "\" picture.")