Python Machine Learning: Linear Regression
1. Linear Regression and Gradient Descent
When I see linear regression, the first thing that comes to mind is the linear regression exercises from high school, where a whole class period produced a single slope k, and the data set was only a handful of points.
But now, as the saying goes, "sir, times have changed."
As for what machine learning is, I won't take notes on that here; the introductions online explain it far better than I could. I am reading Zhou Zhihua's "watermelon book" (Machine Learning). Because of the pandemic I cannot return to campus, so I have not been able to chew over some of the mathematics carefully, which is a real pity.
Now to the main topic, linear regression:
Simply put, linear regression fits a straight line to the observed data and uses that line to predict unknown values.
The most basic way to obtain theta_0 and theta_1 is the least-squares method: minimize the squared (Euclidean) distance between the predictions and the labels, which has a closed-form solution.
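Written out, the model and its closed-form least-squares solution (the same formulas that ParameterSolve implements in the code below) are:

f(x) = \theta_1 x + \theta_0

\theta_1 = \frac{\sum_{i=1}^{m} y_i\,(x_i - \bar{x})}{\sum_{i=1}^{m} x_i^2 - \frac{1}{m}\left(\sum_{i=1}^{m} x_i\right)^2},
\qquad
\theta_0 = \frac{1}{m}\sum_{i=1}^{m}\left(y_i - \theta_1 x_i\right)

where m is the number of samples and \bar{x} is the mean of the x values.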
Gradient descent optimization:
To obtain better values of theta_1 and theta_0, we collect the parameters into a vector w = (theta_1, theta_0) and define a loss function J(w) that measures the prediction error.
Taking the partial derivatives of J with respect to theta_1 and theta_0 gives the gradient, which drives the gradient descent update;
where α is the learning rate, i.e., the step size of each gradient descent update.
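Concretely, the loss and the update rule (what LossFormula and PartialTheta compute below, followed by the descent step in GradientDescent) are:

J(\theta_1, \theta_0) = \frac{1}{2m}\sum_{i=1}^{m}\left(\theta_1 x_i + \theta_0 - y_i\right)^2

\frac{\partial J}{\partial \theta_1} = \frac{1}{m}\sum_{i=1}^{m}\left(\theta_1 x_i + \theta_0 - y_i\right)x_i,
\qquad
\frac{\partial J}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\left(\theta_1 x_i + \theta_0 - y_i\right)

\theta_1 \leftarrow \theta_1 - \alpha\,\frac{\partial J}{\partial \theta_1},
\qquad
\theta_0 \leftarrow \theta_0 - \alpha\,\frac{\partial J}{\partial \theta_0}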
2. Code Demonstration
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
a simple linear regression model for machine learning
f(x) = theta_1 * x + theta_0
"""
def ReadingDataSets():
    Datainputstream = np.array(pd.read_csv(r"D:\桌面\data.csv"))  # read the data file
    DataX = Datainputstream[:, 0:-1].ravel()  # all columns but the last are the features
    DataY = Datainputstream[:, -1]  # the last column is the label
    DataSetShape = Datainputstream.shape  # record the size of the data set
    return DataX, DataY, DataSetShape

def average(sets):  # compute the mean of a sequence
    aver = sum(sets) / np.array(sets).shape[0]
    return aver

def ParameterSolve(x, y, m):  # solve y = theta_1 * x + theta_0 in closed form
    # Minimize the squared (Euclidean) error by setting the partial derivatives
    # to zero, which gives the closed-form optimum for each theta.
    theta_1, theta_0 = 0, 0  # initial values
    parameter_1, parameter_2, parameter_3, parameter_4 = 0, 0, 0, 0
    for i in range(m):
        parameter_1 += y[i] * (x[i] - average(x))
        parameter_2 += x[i] ** 2
        parameter_3 += x[i]
    theta_1 = parameter_1 / (parameter_2 - (1 / m) * (parameter_3 ** 2))  # closed form for theta_1
    for i in range(m):
        parameter_4 += y[i] - theta_1 * x[i]
    theta_0 = (1 / m) * parameter_4  # closed form for theta_0
    return theta_1, theta_0

def LossFormula(x, y, m, theta_1, theta_0):  # compute the loss J
    J = 0
    for i in range(m):
        h = theta_1 * x[i] + theta_0  # prediction for sample i
        J += (h - y[i]) ** 2
    J /= (2 * m)
    return J

def PartialTheta(x, y, m, theta_1, theta_0):  # partial derivatives of the loss
    theta_1Partial = 0
    theta_0Partial = 0
    for i in range(m):
        theta_1Partial += (theta_1 * x[i] + theta_0 - y[i]) * x[i]
    theta_1Partial /= m  # average over the m samples (the original "/= (1/m)" multiplied by m by mistake)
    for i in range(m):
        theta_0Partial += theta_1 * x[i] + theta_0 - y[i]
    theta_0Partial /= m
    return [theta_1Partial, theta_0Partial]

def GradientDescent(x, y, m, alpha=0.01, theta_1=0, theta_0=0):  # optimize the parameters by gradient descent
    MaxIteration = 1000  # maximum number of iterations
    counter = 0  # iteration counter
    Mindiffer = 0.0000000000001  # minimum threshold for the change in loss between two iterations
    c = LossFormula(x, y, m, theta_1, theta_0)
    differ = c + 10  # initialize so the loop runs at least once
    theta_1sets = [theta_1]
    theta_0sets = [theta_0]
    Loss = [c]
    # Keep iterating while the change in loss between two iterations is still
    # larger than the threshold: take one gradient step from the current
    # parameters, then compare the new loss with the previous one to decide
    # whether to stop.
    while np.abs(differ - c) > Mindiffer and counter < MaxIteration:
        differ = c  # remember the previous loss
        upgradetheta_1 = alpha * PartialTheta(x, y, m, theta_1, theta_0)[0]  # step for theta_1
        upgradetheta_0 = alpha * PartialTheta(x, y, m, theta_1, theta_0)[1]  # step for theta_0
        theta_1 -= upgradetheta_1
        theta_0 -= upgradetheta_0  # gradient descent update
        theta_1sets.append(theta_1)
        theta_0sets.append(theta_0)
        Loss.append(LossFormula(x, y, m, theta_1, theta_0))
        c = Loss[-1]  # loss after this step (the value just appended)
        counter += 1
    return {"theta_1": theta_1, "theta_1sets": theta_1sets,
            "theta_0": theta_0, "theta_0sets": theta_0sets, "losssets": Loss}

def DrawScatterandPredictionModel(x, y, theta_1, theta_0, newtheta):
    plt.figure("linear regression")
    plt.scatter(x, y)  # the raw data points
    plt.plot(x, theta_1 * x + theta_0, lw=2, label="initial linear regression")
    plt.plot(x, newtheta["theta_1"] * x + newtheta["theta_0"], ls="--", lw=0.5, label="optimized linear regression")
    plt.legend()
    plt.show()

if __name__ == '__main__':
    x, y, shape = ReadingDataSets()
    th1, th0 = ParameterSolve(x, y, shape[0])  # closed-form starting point
    result = GradientDescent(x, y, shape[0], alpha=0.01, theta_1=th1, theta_0=th0)
    print(result)  # the dictionary shown below
    DrawScatterandPredictionModel(x, y, th1, th0, result)
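If you do not have my data.csv, here is a minimal sketch for trying the same pipeline on synthetic data. The values and names (x_demo, y_demo, and so on) are purely illustrative, and the snippet assumes the functions above are already defined in the same script or session:

# Hypothetical smoke test: generate noisy points on a known line and check that
# ParameterSolve / GradientDescent recover something close to it.
import numpy as np  # already imported above; repeated so the snippet stands alone

rng = np.random.default_rng(0)
x_demo = np.linspace(0, 10, 50)
y_demo = 1.3 * x_demo + 10 + rng.normal(0, 1, size=x_demo.shape)  # true line: y = 1.3x + 10, plus noise

th1_demo, th0_demo = ParameterSolve(x_demo, y_demo, x_demo.shape[0])
result_demo = GradientDescent(x_demo, y_demo, x_demo.shape[0],
                              alpha=0.01, theta_1=th1_demo, theta_0=th0_demo)
print(th1_demo, th0_demo, result_demo["losssets"][-1])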
This is genuinely fun, hahaha.
The output is as follows:
{'theta_1': 1.2873573697963243,
'theta_1sets': [1.287357370010957, 1.2873573697963243],
'theta_0': 9.908606190325537,
'theta_0sets': [9.908606190325276, 9.908606190325537],
'losssets': [53.73521850475449, 53.73521850475453]}
The unoptimized fit:
The fit after gradient descent optimization:
The difference is not large: the loss only changes around the 13th decimal place (|53.73521850475453 - 53.73521850475449| is roughly 4e-14, already below the stopping threshold of 1e-13, so the loop stops after a single iteration). This is because the initial theta values were already taken from the closed-form solution obtained by setting the partial derivatives to zero, i.e., essentially the least-squares optimum, so gradient descent has almost nothing left to improve and the optimized line is visually indistinguishable from the initial one.