[Andrew Ng 02-1] Logistic Regression

A review and summary of the principles behind logistic regression and how to implement it:

Simply put, logistic regression is mainly used for binary classification problems, to estimate how likely something is.

For example, the likelihood that a user buys a certain product, or that an ad gets clicked by a user. Note, however, that this "likelihood" is not a probability in the strict mathematical sense: the score produced by logistic regression is not a calibrated probability and should not be used directly as one.

In practice, such a score is usually combined with other feature values through a weighted sum, rather than multiplied with them directly.

The main idea of logistic regression is: for a linearly separable problem, fit a regression formula for the classification boundary from the given sample data; when a new sample comes in, use this formula to predict its class.

First, understand the sigmoid function. The logistic regression model has the form:

$$h_\theta(x) = g(\theta^T x)$$

where $g(\cdot)$ is the commonly used logistic function, the sigmoid, defined as:

$$g(z) = \frac{1}{1 + e^{-z}}$$

The logistic regression hypothesis is therefore:

$$h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}$$

The cost function is then:

$$J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left[ -y^{(i)} \log\left(h_\theta(x^{(i)})\right) - \left(1 - y^{(i)}\right) \log\left(1 - h_\theta(x^{(i)})\right) \right]$$

and the gradient is:

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$

Remember to substitute the hypothesis $h_\theta(x)$ into these expressions before computing.
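As a quick numerical check of these formulas, here is a minimal sketch on a small made-up array (not the course dataset): with $\theta = 0$ every hypothesis value is 0.5, so the cost collapses to $\ln 2 \approx 0.693$, which is also what the notebook below should print for the initial cost.

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

X_toy = np.array([[1.0, 2.0], [1.0, -1.0], [1.0, 0.5]])   # first column is the intercept term
y_toy = np.array([1.0, 0.0, 1.0])
theta0 = np.zeros(2)

h = sigmoid(X_toy @ theta0)                                # every entry is 0.5 when theta is all zeros
J = np.mean(-y_toy * np.log(h) - (1 - y_toy) * np.log(1 - h))
print(sigmoid(0.0), J)                                     # 0.5  0.6931471805599453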

Code implementation (notebook version):

# Visualize the data
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

path = '~/condaProject/WUENDA/work2/ex2data1.txt'
data = pd.read_csv(path, header=None, names=['Exam 1', 'Exam 2', 'Admitted'])
data.head()

positive = data[data['Admitted'].isin([1])]    # pandas' isin() filters rows: pass a list, and rows whose value appears in that list are kept.
# It is handy when a column should match one of several values/strings. data[data['Admitted'].isin([1])] selects every row with Admitted == 1, equivalent to data[data['Admitted'] == 1].
negative = data[data['Admitted'].isin([0])]

fig, ax = plt.subplots(figsize=(12, 8))
ax.scatter(positive['Exam 1'], positive['Exam 2'], s=50, c='b', marker='o', label='Admitted')
ax.scatter(negative['Exam 1'], negative['Exam 2'], s=50, c='r', marker='x', label='Not Admitted')
ax.legend()    # by default the legend is placed in the best free area
ax.set_xlabel('Exam 1 score')
ax.set_ylabel('Exam 2 score')
plt.show()

# Implement logistic regression
# First, recall that the hypothesis is built on the sigmoid function:
def sigmoid(z):
    return 1 / (1 + np.exp(-z))
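One caveat with this plain implementation: for very large negative z, np.exp(-z) overflows and NumPy emits a RuntimeWarning, even though the returned value is still the correct limit of 0. If that matters, a numerically robust drop-in (an optional alternative, not used in the rest of this notebook) is SciPy's built-in sigmoid:

from scipy.special import expit   # expit(z) is equivalent to 1 / (1 + np.exp(-z)), without the overflow warning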

# Next, compute the cost function
def cost(theta, X, y):
    theta = np.matrix(theta)    # np.matrix turns the argument into a matrix, which makes the algebra convenient; here theta is 1*3
    X = np.matrix(X)            # X is 100*3
    y = np.matrix(y)            # y is 100*1
    first = np.multiply(-y, np.log(sigmoid(X * theta.T)))    # np.multiply is element-wise multiplication (for np.matrix objects, * is the matrix product), so this gives the element-wise product
    second = np.multiply((1-y), np.log(1 - sigmoid(X * theta.T)))
#     print(theta.T.shape)
#     print(X.shape,y.shape,first.shape,second.shape)
    return np.sum(first - second) / len(X)
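For reference, the same cost can be written with plain ndarrays and no np.matrix at all (a sketch, assuming X has shape (m, n) and y holds the m labels); NumPy itself no longer recommends np.matrix for new code:

def cost_array(theta, X, y):
    theta = np.asarray(theta).ravel()            # shape (n,)
    y = np.asarray(y).ravel()                    # shape (m,)
    h = sigmoid(X @ theta)                       # shape (m,)
    return np.mean(-y * np.log(h) - (1 - y) * np.log(1 - h))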

''' Next: prepare the data and compute the cost -- Begin '''
# Insert a column of ones (the intercept term) at the front
data.insert(0, 'Ones', 1)

# Initialize X, y, theta
cols = data.shape[1]    # shape gives (rows, columns): [0] is the row count, [1] is the column count
X = data.iloc[ : , 0 : cols-1]
y = data.iloc[ : , cols-1 : cols]
theta = np.zeros(3)    # a plain 1-D array

# Convert X and y to ndarrays
X = np.array(X.values)
y = np.array(y.values)

# Check the dimensions
X.shape, y.shape, theta.shape
cost(theta, X, y)    # cost before any optimization; with theta = 0 this should come out to about 0.693 (= ln 2)
''' Next: prepare the data and compute the cost -- End '''



# Implement the gradient computation (only the gradient is computed here; theta is not updated, so no gradient descent step happens)
def gradient(theta, X, y):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    
    parameters = int(theta.ravel().shape[1])    # ravel() flattens theta into a single row, so shape[1] is the number of parameters; it normally returns a view, not a copy
#     print(theta.ravel().shape)
    grad = np.zeros(parameters)
    
    error = sigmoid(X * theta.T) - y    # hypothesis minus label for each of the 100 samples; with theta = 0 each entry is 0.5 or -0.5
    
    for i in range(parameters):   # multiply the error element-wise with the i-th feature column, i.e. each sample's x_i times its error
        term = np.multiply(error, X[:,i])
        grad[i] = np.sum(term) / len(X)
    
    return grad
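The loop above makes the per-feature sum explicit; the same gradient can also be computed in one vectorized step (an equivalent sketch, using plain ndarrays as in the cost sketch above):

def gradient_vec(theta, X, y):
    theta = np.asarray(theta).ravel()            # shape (n,)
    y = np.asarray(y).ravel()                    # shape (m,)
    error = sigmoid(X @ theta) - y               # shape (m,)
    return X.T @ error / len(X)                  # shape (n,), one entry per parameter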

# Here we hand the optimization over to a library routine; SciPy can find the optimal theta for us
import scipy.optimize as opt

result = opt.fmin_tnc(func = cost, x0 = theta, fprime = gradient, args=(X, y))
# fmin_tnc is an optimizer based on the truncated Newton algorithm
# Inputs: func is the objective function to minimize,
#         x0 is the initial guess,
#         fprime supplies the gradient of func; without it, func must return both the value and the gradient, or approx_grad=True must be set,
#         approx_grad: if set to True, the gradient is approximated numerically
# Output: a tuple of three items: the optimized parameters, the number of function evaluations, and an integer return code
print(result)
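The same fit can also be run through the more general scipy.optimize.minimize interface (an equivalent sketch; method='TNC' is the same truncated Newton algorithm):

res = opt.minimize(fun=cost, x0=theta, jac=gradient, args=(X, y), method='TNC')
print(res.x, res.fun)   # optimized theta and the cost at that theta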

# Then plug the computed theta back into the cost function
cost(result[0], X, y)

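Before plotting, it helps to spell out where the boundary formula below comes from: the model predicts class 1 exactly when the hypothesis is at least 0.5, and the boundary is where it equals 0.5:

$$h_\theta(x) = 0.5 \;\Longleftrightarrow\; \theta^T x = 0 \;\Longleftrightarrow\; \theta_0 + \theta_1 x_1 + \theta_2 x_2 = 0 \;\Longleftrightarrow\; x_2 = \frac{-\theta_0 - \theta_1 x_1}{\theta_2}$$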
# Next, plot the decision boundary
plotting_x1 = np.linspace(30, 100, 100)    # 100 evenly spaced points between 30 and 100
plotting_h1 = ( -result[0][0] - result[0][1] * plotting_x1) / result[0][2]    # x2 = -(theta0 + theta1*x1) / theta2; plotted on the y-axis, this line is the decision boundary
# derivation of the formula: https://www.freesion.com/article/73081126582/

fig, ax = plt.subplots(figsize=(12,8))
ax.plot(plotting_x1, plotting_h1, 'y', label = "Predict")
ax.scatter(positive['Exam 1'], positive['Exam 2'], s=50, c='b', marker='o', label='Admitted')
ax.scatter(negative['Exam 1'], negative['Exam 2'], s=50, c='r', marker='x', label='Not Admitted')
ax.legend()
ax.set_xlabel('Exam 1 Score')
ax.set_ylabel('Exam 2 Score')
plt.show()

 

# With the parameters fixed, we can use the model to predict whether a student will be admitted. For a student scoring 45 on exam 1 and 85 on exam 2, the predicted admission probability should be about 0.776
# Use the hypothesis function to predict the probability of admission
def luFun(theta, X):
    return sigmoid(np.dot(theta.T, X))

luFun(result[0], [1, 45, 85])

# Another way to evaluate theta is to compute the accuracy it achieves on the training set
# Define the prediction function
def predict(theta, X):
    probability = sigmoid(X * theta.T)
    print(len(probability))
    return [1 if x >= 0.5 else 0 for x in probability]

# Compute the accuracy
theta_min = np.matrix(result[0])       # result[0] holds the three optimized theta values
print(theta_min)
predictions = predict(theta_min, X)    # X here is the original data; a probability >= 0.5 is mapped to 1, otherwise 0
correct = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0 for (a, b) in zip(predictions, y)]
# zip the predicted 0/1 values with the true 0/1 labels; output 1 when they agree, 0 otherwise
accuracy = sum(map(int, correct)) / len(correct) * 100
# map(function, one or more sequences) returns an iterator in Python 3, so wrap it in list() to print it, but it can be fed directly to sum() and similar functions
# def square(x):
#     return x ** 2
# print(sum(map(square, [1,2,3])))    
# print(sum(map(int, [1,2,3])))    
print('accuracy = {0} %'.format(accuracy))
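The same number can be obtained more directly with a NumPy one-liner (an equivalent sketch, reusing predictions and y from above):

accuracy_alt = np.mean(np.array(predictions) == y.ravel()) * 100
print('accuracy = {0} %'.format(accuracy_alt))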

The dataset: just save it as a txt file. It has 100 rows and three columns, giving the exam 1 score, the exam 2 score, and whether the student was admitted; the goal of the logistic regression is to predict admission from the two exam scores:

34.62365962451697,78.0246928153624,0
30.28671076822607,43.89499752400101,0
35.84740876993872,72.90219802708364,0
60.18259938620976,86.30855209546826,1
79.0327360507101,75.3443764369103,1
45.08327747668339,56.3163717815305,0
61.10666453684766,96.51142588489624,1
75.02474556738889,46.55401354116538,1
76.09878670226257,87.42056971926803,1
84.43281996120035,43.53339331072109,1
95.86155507093572,38.22527805795094,0
75.01365838958247,30.60326323428011,0
82.30705337399482,76.48196330235604,1
69.36458875970939,97.71869196188608,1
39.53833914367223,76.03681085115882,0
53.9710521485623,89.20735013750205,1
69.07014406283025,52.74046973016765,1
67.94685547711617,46.67857410673128,0
70.66150955499435,92.92713789364831,1
76.97878372747498,47.57596364975532,1
67.37202754570876,42.83843832029179,0
89.67677575072079,65.79936592745237,1
50.534788289883,48.85581152764205,0
34.21206097786789,44.20952859866288,0
77.9240914545704,68.9723599933059,1
62.27101367004632,69.95445795447587,1
80.1901807509566,44.82162893218353,1
93.114388797442,38.80067033713209,0
61.83020602312595,50.25610789244621,0
38.78580379679423,64.99568095539578,0
61.379289447425,72.80788731317097,1
85.40451939411645,57.05198397627122,1
52.10797973193984,63.12762376881715,0
52.04540476831827,69.43286012045222,1
40.23689373545111,71.16774802184875,0
54.63510555424817,52.21388588061123,0
33.91550010906887,98.86943574220611,0
64.17698887494485,80.90806058670817,1
74.78925295941542,41.57341522824434,0
34.1836400264419,75.2377203360134,0
83.90239366249155,56.30804621605327,1
51.54772026906181,46.85629026349976,0
94.44336776917852,65.56892160559052,1
82.36875375713919,40.61825515970618,0
51.04775177128865,45.82270145776001,0
62.22267576120188,52.06099194836679,0
77.19303492601364,70.45820000180959,1
97.77159928000232,86.7278223300282,1
62.07306379667647,96.76882412413983,1
91.56497449807442,88.69629254546599,1
79.94481794066932,74.16311935043758,1
99.2725269292572,60.99903099844988,1
90.54671411399852,43.39060180650027,1
34.52451385320009,60.39634245837173,0
50.2864961189907,49.80453881323059,0
49.58667721632031,59.80895099453265,0
97.64563396007767,68.86157272420604,1
32.57720016809309,95.59854761387875,0
74.24869136721598,69.82457122657193,1
71.79646205863379,78.45356224515052,1
75.3956114656803,85.75993667331619,1
35.28611281526193,47.02051394723416,0
56.25381749711624,39.26147251058019,0
30.05882244669796,49.59297386723685,0
44.66826172480893,66.45008614558913,0
66.56089447242954,41.09209807936973,0
40.45755098375164,97.53518548909936,1
49.07256321908844,51.88321182073966,0
80.27957401466998,92.11606081344084,1
66.74671856944039,60.99139402740988,1
32.72283304060323,43.30717306430063,0
64.0393204150601,78.03168802018232,1
72.34649422579923,96.22759296761404,1
60.45788573918959,73.09499809758037,1
58.84095621726802,75.85844831279042,1
99.82785779692128,72.36925193383885,1
47.26426910848174,88.47586499559782,1
50.45815980285988,75.80985952982456,1
60.45555629271532,42.50840943572217,0
82.22666157785568,42.71987853716458,0
88.9138964166533,69.80378889835472,1
94.83450672430196,45.69430680250754,1
67.31925746917527,66.58935317747915,1
57.23870631569862,59.51428198012956,1
80.36675600171273,90.96014789746954,1
68.46852178591112,85.59430710452014,1
42.0754545384731,78.84478600148043,0
75.47770200533905,90.42453899753964,1
78.63542434898018,96.64742716885644,1
52.34800398794107,60.76950525602592,0
94.09433112516793,77.15910509073893,1
90.44855097096364,87.50879176484702,1
55.48216114069585,35.57070347228866,0
74.49269241843041,84.84513684930135,1
89.84580670720979,45.35828361091658,1
83.48916274498238,48.38028579728175,1
42.2617008099817,87.10385094025457,1
99.31500880510394,68.77540947206617,1
55.34001756003703,64.9319380069486,1
74.77589300092767,89.52981289513276,1

 
