Implementing Logistic Regression and Gradient Descent Strategies in Python

We will build a logistic regression model to predict whether a student is admitted to a university. Suppose you are a university administrator who wants to estimate each applicant's chance of admission from the results of two exams. You have historical data from previous applicants that can serve as a training set: for each training example, you have the applicant's scores on the two exams and the admission decision. To do this, we will build a classification model that estimates the probability of admission from the exam scores.

import time

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

plt.rcParams['font.sans-serif'] = ['SimHei']  # enable a CJK-capable font for plot text
plt.rcParams['axes.unicode_minus'] = False    # render minus signs correctly with that font

data = pd.read_csv("grade.csv")

Pass = data[data["Admitted"] == 1]    # admitted applicants
noPass = data[data["Admitted"] == 0]  # rejected applicants

fig, ax = plt.subplots()
ax.scatter(Pass["EXAM 1"], Pass["EXAM 2"], s=30, c='b', marker='o', label='PASS')
ax.scatter(noPass["EXAM 1"], noPass["EXAM 2"], s=30, c='r', marker='x', label='noPASS')
ax.legend(loc=2)
ax.set_xlabel('EXAM 1 score')
ax.set_ylabel('EXAM 2 score')
ax.set_title('Logistic Regression Example')
plt.show()

Next comes the implementation of the algorithm.

Goal: build a classifier, i.e. solve for the three parameters θ0, θ1, θ2 (the intercept plus one weight per exam score).

Then set a threshold and use it to turn the predicted probability into an admission decision.

Modules to implement:

1. sigmoid: maps scores to probabilities

2. model: returns the predicted value

3. cost: computes the loss for the current parameters

4. gradient: computes the gradient direction for each parameter

5. descent: performs the parameter updates

6. accuracy: computes the classification accuracy (a sketch appears at the end of this post)

# sigmoid: maps any real value to a probability in (0, 1)
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# model: returns the predicted probability sigmoid(X . theta^T)
def model(X, theta):
    return sigmoid(np.dot(X, theta.T))
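In formulas, these two functions compute the sigmoid and the hypothesis; with the 0.5 threshold mentioned in the goal, the predicted label follows directly:

$$ g(z) = \frac{1}{1 + e^{-z}}, \qquad h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}}, \qquad \hat{y} = \begin{cases} 1 & h_\theta(x) \ge 0.5 \\ 0 & h_\theta(x) < 0.5 \end{cases} $$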

data.insert(0, 'Ones', 1)   # add a column of 1s for the intercept term

orig_data = data.values     # as_matrix() was removed from newer pandas; .values is the replacement
cols = orig_data.shape[1]
X = orig_data[:, 0:cols-1]        # features: intercept column plus the two exam scores
y = orig_data[:, cols-1:cols]     # label: the admission decision
theta = np.zeros([1, 3])          # initialize the three parameters to zero
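A quick check on the shapes confirms the split (the 100 rows assume the classic two-exam dataset this exercise uses):

print(X.shape, y.shape, theta.shape)   # expected: (100, 3) (100, 1) (1, 3)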

# cost: computes the cross-entropy loss for the current parameters
def cost(X, y, theta):
    left = np.multiply(-y, np.log(model(X, theta)))
    right = np.multiply(1 - y, np.log(1 - model(X, theta)))
    return np.sum(left - right) / (len(X))
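This is the average negative log-likelihood of the training labels:

$$ J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log\bigl(1 - h_\theta(x^{(i)})\bigr) \right] $$

A handy sanity check: with theta still at zero, the model predicts 0.5 for every example, so the initial loss should be ln 2:

print(cost(X, y, theta))   # expect about 0.6931 (= ln 2) with theta at zero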

# gradient: computes the gradient of the loss with respect to each parameter
def gradient(X, y, theta):
    grad = np.zeros(theta.shape)
    error = (model(X, theta) - y).ravel()
    for j in range(len(theta.ravel())):
        term = np.multiply(error, X[:, j])
        grad[0, j] = np.sum(term) / len(X)
    return grad
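The loop implements the standard logistic-regression gradient, one component per parameter:

$$ \frac{\partial J}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \bigl( h_\theta(x^{(i)}) - y^{(i)} \bigr) x_j^{(i)} $$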

STOP_ITER = 0   # stop after a fixed number of iterations
STOP_COST = 1   # stop when the change in loss becomes small
STOP_GRAD = 2   # stop when the gradient norm becomes small

# Three different stopping strategies
def stopCriterion(stopType, value, threshold):
    if stopType == STOP_ITER:
        return value > threshold
    elif stopType == STOP_COST:
        return abs(value[-1] - value[-2]) < threshold
    elif stopType == STOP_GRAD:
        return np.linalg.norm(value) < threshold
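A few illustrative calls (the numbers here are made up for demonstration, not taken from a real run):

stopCriterion(STOP_ITER, 5000, 5000)                          # False: iteration count not yet past the threshold
stopCriterion(STOP_COST, [0.6931, 0.69309999], 1e-6)          # True: the last two losses differ by under 1e-6
stopCriterion(STOP_GRAD, np.array([0.01, 0.02, 0.01]), 0.05)  # True: the gradient norm is about 0.024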

# Shuffle the data so batches are drawn in random order
def shuffleData(data1):
    np.random.shuffle(data1)   # shuffle the rows in place
    cols = data1.shape[1]
    X = data1[:, 0:cols-1]
    y = data1[:, cols-1:]
    return X, y

# Gradient descent solver
def descent(data, theta, batchSize, stopType, thresh, alpha):
    init_time = time.time()
    i = 0   # iteration counter
    k = 0   # batch start index
    X, y = shuffleData(data)
    grad = np.zeros(theta.shape)    # current gradient
    costs = [cost(X, y, theta)]     # loss history

    while True:
        grad = gradient(X[k:k+batchSize], y[k:k+batchSize], theta)
        k += batchSize                      # advance by one batch
        if k >= n:                          # n is the dataset size, set in __main__
            k = 0
            X, y = shuffleData(data)        # reshuffle after each full pass
        theta = theta - alpha * grad        # update the parameters
        costs.append(cost(X, y, theta))     # record the new loss
        i += 1

        if stopType == STOP_ITER:
            value = i
        elif stopType == STOP_COST:
            value = costs
        elif stopType == STOP_GRAD:
            value = grad
        if stopCriterion(stopType, value, thresh):
            break

    return theta, i-1, costs, grad, time.time() - init_time
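Each iteration applies the standard update rule with learning rate alpha:

$$ \theta := \theta - \alpha \, \nabla_\theta J(\theta) $$

The batchSize argument selects the flavor of descent: batchSize equal to the dataset size n gives full-batch gradient descent, batchSize = 1 gives stochastic gradient descent, and anything in between gives mini-batch descent.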

def runExpe(data, theta, batchSize, stopType, thresh, alpha):
    theta, iterNum, costs, grad, dur = descent(data, theta, batchSize, stopType, thresh, alpha)
    # Raw exam scores exceed 2; standardized features mostly do not
    name = "Original" if (data[:, 1] > 2).sum() > 1 else "Scaled"
    name += " data - learning rate: {} - ".format(alpha)
    if batchSize == n:
        strDescType = "Gradient"
    elif batchSize == 1:
        strDescType = "Stochastic"
    else:
        strDescType = "Mini-batch({})".format(batchSize)
    name += strDescType + " descent - Stop: "
    if stopType == STOP_ITER:
        strStop = "{} iterations".format(thresh)
    elif stopType == STOP_COST:
        strStop = "costs change < {}".format(thresh)
    else:
        strStop = "gradient norm < {}".format(thresh)
    name += strStop
    print("***{}\nTheta: {} - Iter: {} - Last cost: {:03.2f} - Duration: {:03.2f}s".format(
        name, theta, iterNum, costs[-1], dur))

    fig, ax = plt.subplots()
    ax.plot(np.arange(len(costs)), costs, 'r')
    ax.set_xlabel('Iterations')
    ax.set_ylabel('Cost')
    ax.set_title(name.upper() + ' - Error vs. Iteration')
    plt.show()
    return theta
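The "Original"/"Scaled" naming works because raw exam scores lie in the 0 to 100 range, so (data[:, 1] > 2).sum() is large, while standardized features rarely exceed 2. The scaling step itself is not shown in this post; a minimal sketch, assuming scikit-learn is available and leaving the intercept column untouched:

from sklearn import preprocessing as pp

scaled_data = orig_data.copy()
scaled_data[:, 1:3] = pp.scale(orig_data[:, 1:3])   # standardize the two exam-score columns
# runExpe(scaled_data, theta, n, STOP_ITER, thresh=5000, alpha=0.001)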

if __name__ == '__main__':
    n = 100   # number of training examples; batchSize == n gives full-batch descent

    # Stop after a fixed number of iterations
    # runExpe(orig_data, theta, n, STOP_ITER, thresh=5000, alpha=0.0001)

    # Stop when the change in loss is tiny
    # runExpe(orig_data, theta, n, STOP_COST, thresh=0.000001, alpha=0.001)

    # Stop when the gradient norm is small
    runExpe(orig_data, theta, n, STOP_GRAD, thresh=0.05, alpha=0.001)
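The accuracy module from the checklist is not implemented above; here is a minimal sketch. The predict helper and the theta_fit name are illustrative, not from the original code; it applies the 0.5 threshold stated in the goal and reuses the module-level X and y:

# predict: apply the 0.5 threshold to the model's probabilities (hypothetical helper)
def predict(X, theta):
    return [1 if p >= 0.5 else 0 for p in model(X, theta).ravel()]

theta_fit = runExpe(orig_data, theta, n, STOP_GRAD, thresh=0.05, alpha=0.001)
predictions = predict(X, theta_fit)
correct = sum(1 for p, label in zip(predictions, y.ravel()) if p == label)
print('accuracy = {:.0%}'.format(correct / len(predictions)))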
