Recognizing Cats with Logistic Regression in Python (work in progress)


Logistic Regression

In short, logistic regression is a machine learning method for binary (0 or 1) classification problems, used to estimate the likelihood of something: the likelihood that a user buys a certain product, that a patient has a certain disease, that an ad gets clicked, and so on. Note that the word here is "likelihood" rather than the mathematical term "probability": the output of logistic regression is not a probability in the strict mathematical sense and should not be used directly as one; in practice it is usually combined with other feature values through a weighted sum rather than multiplied directly.
This time we use logistic regression to tackle a classification problem: deciding whether a picture shows a cat.
We will write the logistic regression model in Python.

Cat Recognition with Logistic Regression

The example below follows Andrew Ng's deep learning course assignment. The model's accuracy may not be very high, and feedback is welcome.

Setting Up the Python Environment

When building the cat classifier with logistic regression, it is best to create a Jupyter notebook and lay out each step.
The first step is to import the required packages and data; I will attach a link to the data and files at the end of the post.

import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset  # a helper function from a Python script in the current working directory
# display figures inline in the Jupyter notebook
%matplotlib inline

The second step is to load the data (already split into training and test sets), check the size of the data, and define a few variables.

# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
print(train_set_x_orig.shape)  # shape of the training images: 64*64 pixels, 3 channels
print(train_set_y.shape)       # shape of the training labels
print(test_set_x_orig.shape)
print(test_set_y.shape)
print(classes)                 # the class names
######### Output
(209, 64, 64, 3)
(1, 209)
(50, 64, 64, 3)
(1, 50)
[b'non-cat' b'cat']

If you are interested, you can try displaying the imported pictures on your own machine.

# Example of a picture
index = 0  # each picture has an index; change it freely but stay in range (the bound depends on the training/test set size)
plt.imshow(train_set_x_orig[index])  # display the picture
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") +  "' picture.")  # print the picture's label (whether or not it is a cat)

This is the result I got after running it; feel free to test it on your own machine, and change index to view other pictures.
If you want to understand the data's attributes in more detail, the following operations will help.

m_train = train_set_x_orig.shape[0]  # number of training images
m_test = test_set_x_orig.shape[0]    # number of test images
num_px = train_set_x_orig.shape[2]   # height/width of each image in pixels

print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")#每一张图片三个通道对应RGB三种颜色,图片的矩阵表示,对应三个颜色强度矩阵
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
################# Output
Number of training examples: m_train = 209
Number of testing examples: m_test = 50
Height/Width of each image: num_px = 64
Each image is of size: (64, 64, 3)
train_set_x shape: (209, 64, 64, 3)
train_set_y shape: (1, 209)
test_set_x shape: (50, 64, 64, 3)
test_set_y shape: (1, 50)

The next step is the one that deserves close attention: converting the image data into vectors. A quick primer on image formats: images can be stored in RGB and other color spaces, and our data here is in RGB format. Put simply, each of the three primary colors has its own matrix; every element of a matrix is a pixel intensity, and stacking the three color matrices together forms the picture. We then flatten the elements of the three matrices into a single column vector per image.

# Reshape the training and test examples
#将图片转成向量
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0],-1).T  # each image flattens to 64*64*3 = 12288 values; 209 training images
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0],-1).T     # the same reshape for the 50 test images
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))#像素检查
#########
train_set_x_flatten shape: (12288, 209)
train_set_y shape: (1, 209)
test_set_x_flatten shape: (12288, 50)
test_set_y shape: (1, 50)
sanity check after reshaping: [17 31 56 22 33]

This processing alone is not enough: pixel values range from 0 to 255, and such large numbers are inconvenient for later computation, so we rescale the data so that every pixel value lies between 0 and 1.

train_set_x = train_set_x_flatten/255.  # pixel values lie in 0-255; rescale them to lie between 0 and 1
test_set_x = test_set_x_flatten/255.

Next we build the model. Below are the model and the formulas involved, excerpted from Andrew Ng's course.

Mathematical expression of the algorithm:

For one example $x^{(i)}$:
$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$
$$\hat{y}^{(i)} = a^{(i)} = \mathrm{sigmoid}(z^{(i)}) \tag{2}$$
$$\mathcal{L}(a^{(i)}, y^{(i)}) = -y^{(i)} \log(a^{(i)}) - (1-y^{(i)}) \log(1-a^{(i)}) \tag{3}$$

The cost is then computed by summing over all training examples:
$$J = \frac{1}{m} \sum_{i=1}^{m} \mathcal{L}(a^{(i)}, y^{(i)}) \tag{6}$$
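To get a feel for the loss in equation (3), consider a cat image ($y^{(i)} = 1$): if the model scores it at $a^{(i)} = 0.9$, the loss is $-\log(0.9) \approx 0.105$, but a score of $a^{(i)} = 0.1$ would give $-\log(0.1) \approx 2.303$, so confident mistakes are penalized far more heavily (the two scores here are just illustrative numbers).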

The model's overall workflow is as follows:

Define the model structure (such as the number of input features)
Initialize the model's parameters
Compute the current loss (forward propagation)
Compute the current gradients (backward propagation)
Update the parameters (gradient descent)

I will explain the logistic regression algorithm in detail in the next post. Below we start building the logistic regression model in Python.
1. First we build a key function of the model: the activation function.

# GRADED FUNCTION: sigmoid
def sigmoid(x):
    """
    Compute the sigmoid of x.
    :param x: a scalar or a numpy array of any size
    :return: sigmoid(x)
    """
    s = 1 / (1 + np.exp(-x))
    return s

If you want to check that the activation function meets your expectations, you can call it directly.
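For instance, a minimal sanity check (the inputs here are arbitrary toy values): sigmoid(0) should be exactly 0.5, and large positive inputs should approach 1.

print("sigmoid([0, 2]) = " + str(sigmoid(np.array([0, 2]))))
######### Output
sigmoid([0, 2]) = [0.5        0.88079708]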

Initializing the Model's Parameters

Initialize the weights and the bias: the weights are a vector, the bias a scalar.

# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
    """
    This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
    
    Argument:
    dim -- size of the w vector we want (or number of parameters in this case)
    
    Returns:
    w -- initialized vector of shape (dim, 1)
    b -- initialized scalar (corresponds to the bias)
    """
    w = np.zeros((dim, 1))  # initialize the weight vector
    b = 0
    assert(w.shape == (dim, 1))  # check that the weight vector has the expected shape
    assert(isinstance(b, float) or isinstance(b, int))
    
    return w, b
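
A quick check with a toy dimension (dim = 2 here is arbitrary):

w, b = initialize_with_zeros(2)
print("w = " + str(w))
print("b = " + str(b))
######### Output
w = [[0.]
 [0.]]
b = 0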

Defining the Cost Function

Compute the cost through "forward" and "backward" propagation.
Forward propagation:
Get X
Compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, \ldots, a^{(m)})$
Compute the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}\big[y^{(i)}\log(a^{(i)}) + (1-y^{(i)})\log(1-a^{(i)})\big]$
The two formulas used to compute dw and db:
$$\frac{\partial J}{\partial w} = \frac{1}{m} X (A-Y)^T$$
$$\frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^{m} (a^{(i)} - y^{(i)})$$

Implement the cost function and its gradients for the propagation described above.

# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
    """
    Implement the cost function and its gradient for the propagation explained above

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
    
    Return:
    cost -- negative log-likelihood cost for logistic regression
    dw -- gradient of the loss with respect to w, thus same shape as w
    db -- gradient of the loss with respect to b, thus same shape as b
    
    Tips:
    - Write your code step by step for the propagation. np.log(), np.dot()
    """
    
    m = X.shape[1]
    
    # FORWARD PROPAGATION (FROM X TO COST)
    A = sigmoid(np.dot(w.T, X) + b)                         # activations
    cost = -1/m * np.sum(Y*np.log(A) + (1-Y)*np.log(1-A))   # cross-entropy cost
    
    # BACKWARD PROPAGATION (TO FIND GRAD)
    dw = 1/m * np.dot(X, (A-Y).T)
    db = 1/m * np.sum(A-Y)
    assert(dw.shape == w.shape)
    assert(db.dtype == float)
    cost = np.squeeze(cost)
    assert(cost.shape == ())
    
    grads = {"dw": dw,
             "db": db}
    
    return grads, cost  # return the gradients and the cost

You can test the function above to check whether it behaves as expected.
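For example, you can call propagate() on small hand-made arrays (the values below are toy inputs of my own, with 2 features and 3 examples, not data from the dataset):

w = np.array([[1.], [2.]])
b = 2.
X = np.array([[1., 2., -1.], [3., 4., -3.2]])
Y = np.array([[1, 0, 1]])
grads, cost = propagate(w, b, X, Y)
print("dw = " + str(grads["dw"]))
print("db = " + str(grads["db"]))
print("cost = " + str(cost))

On my run this gives roughly dw = [[0.99845601], [2.39507239]], db ≈ 0.00145558, and cost ≈ 5.8015.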
At this point the skeleton of the logistic regression model is mostly in place; what remains is the optimization function.

# GRADED FUNCTION: optimize

def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
    """
    This function optimizes w and b by running a gradient descent algorithm
    
    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- True to print the loss every 100 steps
    
    Returns:
    params -- dictionary containing the weights w and bias b
    grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
    costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
    
    Tips:
    You basically need to write down two steps and iterate through them:
        1) Calculate the cost and the gradient for the current parameters. Use propagate().
        2) Update the parameters using gradient descent rule for w and b.
    """
    
    costs = []
    
    for i in range(num_iterations):
        
        
        # Cost and gradient calculation (≈ 1-4 lines of code)
        ### START CODE HERE ### 
        grads, cost = propagate(w,b,X,Y)
        ### END CODE HERE ###
        
        # Retrieve derivatives from grads
        dw = grads["dw"]
        db = grads["db"]
        
        # update rule (≈ 2 lines of code)
        ### START CODE HERE ###
        w = w - learning_rate*dw  # gradient descent parameter update
        b = b - learning_rate*db
        ### END CODE HERE ###
        
        # Record the costs
        if i % 100 == 0:
            costs.append(cost)
        
        # Print the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))
    
    params = {"w": w,
              "b": b}
    
    grads = {"dw": dw,
             "db": db}
    
    return params, grads, costs

Here is how you might test the optimize function.
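Continuing with the same toy inputs from the propagate() check above (the iteration count and learning rate below are arbitrary choices for the test); if everything is wired up correctly, the recorded costs should decrease:

params, grads, costs = optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False)
print("w = " + str(params["w"]))
print("b = " + str(params["b"]))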
At this point the main functions of the logistic regression model are built, but the ultimate goal of training is to make predictions, not merely to perform well on the training set. So next we write our own prediction function; this one is borrowed from Andrew Ng's course.

# GRADED FUNCTION: predict

def predict(w, b, X):
    '''
    Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
    
    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    
    Returns:
    Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
    '''
    
    m = X.shape[1]
    Y_prediction = np.zeros((1,m))
    w = w.reshape(X.shape[0], 1)
    
    # Compute vector "A" predicting the probabilities of a cat being present in the picture
    ### START CODE HERE ### (≈ 1 line of code)
    A = sigmoid(np.dot(w.T, X) + b)  # predicted probability of a cat for each example
    ### END CODE HERE ###

    for i in range(A.shape[1]):
        
        # Convert probabilities A[0,i] to actual predictions p[0,i]
        ### START CODE HERE ### (≈ 4 lines of code)
        if A[0,i]<=0.5:
            Y_prediction[0,i]=0
        else:
            Y_prediction[0,i]=1 
        ### END CODE HERE ###
    
    assert(Y_prediction.shape == (1, m))
    
    return Y_prediction

Here is a test example.
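A minimal check, reusing the toy parameters returned by the optimize() call above; the result should be a (1, 3) array of 0/1 labels:

print("predictions = " + str(predict(params["w"], params["b"], X)))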
Above, the whole logistic regression model was split into individual functions to make it easier to understand and study; now we need to merge all the functions together to obtain the complete model.

# GRADED FUNCTION: model

def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
    """
    Builds the logistic regression model by calling the function you've implemented previously
    
    Arguments:
    X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
    Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
    X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
    Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
    num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
    learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
    print_cost -- Set to true to print the cost every 100 iterations
    
    Returns:
    d -- dictionary containing information about the model.
    """

    
    # initialize parameters with zeros 
    w, b = initialize_with_zeros(X_train.shape[0])

    # Gradient descent
    parameters, grads, costs = optimize(w,b,X_train,Y_train,num_iterations,learning_rate,print_cost)
    
    # Retrieve parameters w and b from dictionary "parameters"
    w = parameters["w"]
    b = parameters["b"]
    
    # Predict test/train set examples 
    Y_prediction_test = predict(w,b,X_test)
    Y_prediction_train = predict(w,b,X_train)
    
    print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
    print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))

    
    d = {"costs": costs,
         "Y_prediction_test": Y_prediction_test, 
         "Y_prediction_train" : Y_prediction_train, 
         "w" : w, 
         "b" : b,
         "learning_rate" : learning_rate,
         "num_iterations": num_iterations}
    
    return d

With this, the whole model is complete. To find out how accurate it is at recognizing cats, we can now use the data prepared above to run the model.
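A sketch of the full run (learning_rate = 0.005 and num_iterations = 2000 are the hyperparameters I chose; yours may differ), followed by plotting the learning curve from the returned costs:

d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations=2000, learning_rate=0.005, print_cost=True)

# Plot the learning curve: one cost value was recorded every 100 iterations
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate = " + str(d["learning_rate"]))
plt.show()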
Looking at the raw numbers may not feel very intuitive, so we can also index into the pictures and inspect them one by one.

# Example of a picture that was wrongly classified.
index = 7  # change the index to inspect different test pictures
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[int(d["Y_prediction_test"][0,index])].decode("utf-8") +  "\" picture.")

Here is the result of that test.
When stepping through the pictures you will find that some are misclassified by the model. One reason is that our dataset is not very large; another is parameter tuning, which I will explain later. I will attach the code and files at the end of this post for reference; my code builds on Andrew Ng's assignment code.
Reference code link: https://pan.baidu.com/s/1ZqXWD5rDSl-Y52W4dA0k9A
Extraction code: xmxl
