[Deep Learning Basics 1] Neural Network Fundamentals -- Logistic Regression

Please credit the source when reposting. Thank you.

This post is organized from Andrew Ng's Deep Learning course on Coursera and serves as a foundation for understanding neural networks.

I. Key Concepts

Deep learning is, at its core, a way of fitting data: a family of nonlinear functions is used as the model, and its parameters are adjusted to minimize the loss over the sample pairs. To understand how a single neuron works, the simplest starting point is logistic regression.

1) First, some notation:

Sample pair: (x, y); there are m training samples in total, where x is the input sample and y is its class label.

Here x \in \mathbb{R}^{n_x}, i.e. x has n_x features; since this is a binary classification problem, y \in \{0,1\}.

The training data can therefore be written as \{(x^1,y^1), (x^2,y^2), \ldots, (x^m,y^m)\}.
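
To make this layout concrete, here is a minimal sketch of how X and Y are typically stored in NumPy (the sizes below are made up for illustration): each example is one column of X, so X has shape (n_x, m) and Y has shape (1, m).

import numpy as np

n_x, m = 4, 3                      # hypothetical sizes: 4 features, 3 examples
X = np.random.randn(n_x, m)        # each column X[:, i] is one example x^i
Y = np.array([[0, 1, 1]])          # labels, shape (1, m)
print(X.shape, Y.shape)            # (4, 3) (1, 3)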

 

2) Properties of the sigmoid function:

Sigmoid function: \sigma (z)= \frac{e^{z}}{1+e^{z}} = \frac{1}{1+e^{-z}}

Derivative: \frac{\partial \sigma }{\partial z} = \sigma(z)(1-\sigma(z))

Function plot: as z \to +\infty the value approaches 1, as z \to -\infty it approaches 0, and in both tails the gradient approaches 0.
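
As a quick numerical check of the derivative formula (a minimal, self-contained sketch; the test points are arbitrary), a central finite difference should agree with \sigma(z)(1-\sigma(z)) up to rounding error:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
analytic = sigmoid(z) * (1 - sigmoid(z))
eps = 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)   # central difference
print(np.max(np.abs(analytic - numeric)))                     # tiny, roughly 1e-10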

3) Logistic regression loss function:

L(\widehat{y},y) = -\left(y\log\widehat{y}+(1-y)\log(1-\widehat{y})\right)

Averaging over all m training samples gives the cost function:

J(\widehat{y},y) = -\frac{1}{m}\sum_{i=1}^{m}\left(y^i\log\widehat{y}^i+(1-y^i)\log(1-\widehat{y}^i)\right)
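
A quick sanity check of the cost formula in NumPy (the predictions and labels below are arbitrary illustrative values):

import numpy as np

Y_hat = np.array([[0.9, 0.2, 0.7]])   # predicted probabilities (illustrative)
Y = np.array([[1, 0, 1]])             # true labels
m = Y.shape[1]
J = -(1.0 / m) * np.sum(Y * np.log(Y_hat) + (1 - Y) * np.log(1 - Y_hat))
print(J)                              # about 0.228; small because the predictions match the labels well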

 

II. Training Process

Forward propagation:

z=w^Tx+b

\widehat{y}=a=\sigma(z)

Z = np.dot(w.T, X) + b   # vectorized over all m examples; X has shape (n_x, m)
A = sigmoid(Z)

Backward propagation:

da = \frac{\partial L}{\partial a}=-\frac{y}{\widehat{y}}+\frac{1-y}{1-\widehat{y}} = -\frac{y}{a}+\frac{1-y}{1-a}

dz = \frac{\partial L}{\partial a}\cdot \frac{\partial a}{\partial z}=(-\frac{y}{a}+\frac{1-y}{1-a})(a(1-a))=a-y

dw_1 = \frac{\partial L}{\partial a}\cdot \frac{\partial a}{\partial z}\cdot \frac{\partial z}{\partial w_1}=x_1(a-y)

dw_2 = \frac{\partial L}{\partial a}\cdot \frac{\partial a}{\partial z}\cdot \frac{\partial z}{\partial w_2}=x_2(a-y)

db = \frac{\partial L}{\partial a}\cdot \frac{\partial a}{\partial z}\cdot \frac{\partial z}{\partial b}=1\cdot (a-y)=a-y

Vectorized over m examples:

dZ = A-Y

dw=\frac{1}{m}X\cdot dZ^T

db=\frac{1}{m}\sum_{i=1}^{m}(a^i-y^i)

Gradient update:

w:=w-\alpha dw

b:=b-\alpha db

dZ = A - Y
dw = 1.0/m * np.dot(X, dZ.T)
db = 1.0/m * np.sum(dZ)
w = w - alpha * dw
b = b - alpha * db
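
Putting the vectorized forward pass, backward pass, and update together, here is a minimal sketch of the whole loop on randomly generated toy data (the data, seed, learning rate, and iteration count are arbitrary choices for illustration):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

np.random.seed(0)
n_x, m, alpha = 2, 100, 0.1
X = np.random.randn(n_x, m)
Y = (X[0:1, :] + X[1:2, :] > 0).astype(float)   # toy, linearly separable labels
w, b = np.zeros((n_x, 1)), 0.0

for i in range(1000):
    A = sigmoid(np.dot(w.T, X) + b)   # forward pass
    dZ = A - Y                        # backward pass
    dw = 1.0 / m * np.dot(X, dZ.T)
    db = 1.0 / m * np.sum(dZ)
    w = w - alpha * dw                # gradient update
    b = b - alpha * db

print(np.mean((sigmoid(np.dot(w.T, X) + b) > 0.5) == Y))   # training accuracy, should approach 1.0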

III. Example (Logistic Regression with a Neural Network Mindset), cleaned-up version

1) Import the necessary packages and files

#coding=utf-8
import matplotlib.pyplot as plt
import h5py
import numpy as np
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
import pylab

2) Pre-processing

Common steps for pre-processing a new dataset are: 
1. Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...) 
2. Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1) 
3. "Standardize" the data

The main goal here is to become familiar with the data (loading and displaying it; a small display snippet follows the code below) and to apply the corresponding processing, namely reshaping and standardization.

    # Load the training and test data
    train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()

    # m_* is the number of examples; num_px is the height/width of each (square) image
    m_train = train_set_x_orig.shape[0]
    m_test = test_set_x_orig.shape[0]
    num_px = train_set_x_orig.shape[1]

    # Flatten each image into a column vector: the result has shape (num_px * num_px * 3, m)
    train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
    test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T

    # Standardize: pixel values lie in [0, 255], so divide by 255
    train_set_x = train_set_x_flatten / 255.0
    test_set_x = test_set_x_flatten / 255.0
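
To actually display one of the loaded images as mentioned above (the index is chosen arbitrarily; train_set_y has shape (1, m_train)):

    # Show one training example
    index = 0
    plt.imshow(train_set_x_orig[index])
    plt.title("label y = " + str(np.squeeze(train_set_y[:, index])))
    plt.show()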

3) Training

3.1 Define the necessary functions

def sigmoid(x):
    """Compute the sigmoid of x (elementwise on NumPy arrays)."""
    return 1.0 / (1.0 + np.exp(-x))  # standard form 1/(1+e^{-x}); avoids overflow of e^{x} for large positive x

3.2 Initialization

def initialize_with_zeros(dim):
    """Initialize the weights as a (dim, 1) zero vector and the bias as 0."""
    w, b = np.zeros((dim, 1)), 0
    assert (w.shape == (dim, 1))
    assert (isinstance(b, float) or isinstance(b, int))
    return w, b

3.3 Define the network

def propagate(w,b,X,Y):
    """ 前向传播与后向传播,单次计算 """
    m = X.shape[1]
    # FORWARD PROPAGATION (FROM X TO COST)
    A = sigmoid(np.dot(w.T,X)+b)
    cost = -(1.0/m) * np.sum(Y*np.log(A)+(1-Y)*np.log(1-A)) # compute cost

    # BACKWARD PROPAGATION (TO FIND GRAD)
    dw = (1.0/m) * np.dot(X,(A-Y).T)
    db = (1.0/m) * np.sum(A-Y)

    assert (dw.shape == w.shape)
    assert (db.dtype == float)
    cost = np.squeeze(cost)
    assert (cost.shape == ())

    grads = {"dw": dw,
             "db": db}

    return grads, cost
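
A quick way to exercise propagate() is to call it on small hand-made inputs (the numbers below are arbitrary, chosen only so the shapes line up: two features, three examples):

w = np.array([[1.0], [2.0]])
b = 1.5
X = np.array([[1.0, 2.0, -1.0],
              [3.0, 4.0, -3.2]])
Y = np.array([[1, 0, 1]])
grads, cost = propagate(w, b, X, Y)
print(grads["dw"].shape, grads["db"], cost)   # dw has the same shape as w: (2, 1)
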
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost=False):
    """
    Run the full optimization loop (batch gradient descent) to learn w and b.

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- True to print the loss every 100 steps

    Returns:
    params -- dictionary containing the weights w and bias b
    grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
    costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.

    Tips:
    You basically need to write down two steps and iterate through them:
        1) Calculate the cost and the gradient for the current parameters. Use propagate().
        2) Update the parameters using gradient descent rule for w and b.
    """

    costs = []

    for i in range(num_iterations):

        # Cost and gradient calculation 
        grads, cost = propagate(w, b, X, Y)

        # Retrieve derivatives from grads
        dw = grads["dw"]
        db = grads["db"]

        # Gradient descent update rule
        w = w - learning_rate * dw
        b = b - learning_rate * db
        

        # Record the costs
        if i % 100 == 0:
            costs.append(cost)

        # Print the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" % (i, cost))

    params = {"w": w,
              "b": b}

    grads = {"dw": dw,
             "db": db}

    return params, grads, costs
def predict(w, b, X):
    '''
    Predict the label (0 or 1) for each example in X using the learned parameters (w, b).

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)

    Returns:
    Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
    '''

    m = X.shape[1]
    Y_prediction = np.zeros((1, m))
    w = w.reshape(X.shape[0], 1)

    # Compute vector "A" predicting the probabilities of a cat being present in the picture  
    A= sigmoid(np.dot(w.T,X)+b)

    for i in range(A.shape[1]):
        # Convert probabilities A[0,i] to actual predictions p[0,i]
        if A[0,i] <= 0.5:
            Y_prediction[0,i] = 0
        else:
            Y_prediction[0,i] = 1
        
    assert (Y_prediction.shape == (1, m))
    return Y_prediction
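
predict() can be checked the same way on hand-made parameters and inputs (again, arbitrary illustrative numbers):

w = np.array([[0.5], [-0.6]])
b = -0.3
X = np.array([[1.0, -1.2, 2.0],
              [2.5, 1.0, 0.1]])
print(predict(w, b, X))   # [[0. 0. 1.]] -- only the last column has sigma(w^T x + b) > 0.5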

3.4 Putting the model together

def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):
    """
    Build the logistic regression model by combining the functions implemented above.
    Arguments:
    X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
    Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
    X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
    Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
    num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
    learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
    print_cost -- Set to true to print the cost every 100 iterations

    Returns:
    d -- dictionary containing information about the model.
    """

    # initialize parameters with zeros
    w, b = initialize_with_zeros(X_train.shape[0])

    # Gradient descent 
    parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)

    # Retrieve parameters w and b from dictionary "parameters"
    w = parameters["w"]
    b = parameters["b"]

    # Predict test/train set examples
    Y_prediction_test = predict(w, b, X_test)
    Y_prediction_train = predict(w, b, X_train)
    
    # Print train/test Errors
    print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
    print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))

    d = {"costs": costs,
         "Y_prediction_test": Y_prediction_test,
         "Y_prediction_train": Y_prediction_train,
         "w": w,
         "b": b,
         "learning_rate": learning_rate,
         "num_iterations": num_iterations}

    return d

Training and results:

d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
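
To visualize the learning curve, the costs recorded every 100 iterations by optimize() can be plotted (a minimal sketch):

costs = np.squeeze(d["costs"])
plt.plot(costs)
plt.ylabel("cost")
plt.xlabel("iterations (per hundreds)")
plt.title("Learning rate = " + str(d["learning_rate"]))
plt.show()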

 

IV. Summary

Working through the full derivation and then implementing it gives a basic understanding of forward propagation, gradient descent, and backward propagation in logistic regression, which is essential groundwork for understanding deeper networks later on.
