Machine Learning: Multivariate Linear Regression via Gradient Descent in Python (a Generic Loop Algorithm and a Matrix Algorithm)

Most of the multivariate linear regression gradient descent implementations found online hard-code the number of variables, e.g. the linear function $y=\theta_1x_1+\theta_2x_2+\theta_3$ or $y=\theta_1x_1+\theta_2x_2+\theta_3x_3+\theta_4$. So I decided to hand-write a multivariate linear regression gradient descent in Python that is agnostic to the dimensionality: the dataset can have three variables, four variables, five variables, and so on.

Two solutions (and the thinking behind them) are given here. The first grinds through the sums with for loops; the second uses matrix operations.

  1. For loop:
    The example dataset is generated from the equation $y = 2x + 4z + 7$.
    Each element of the dataset P has the form $[x, z, 1, y]$, where the 1 in the middle stands for the constant term.
    a is the learning rate (it needs tuning) and step is the number of iterations. The update rule the loop implements is recapped just below, followed by the code.
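    For reference, this is just the standard batch gradient-descent update restated in the post's notation (my restatement, not text from the original): with $m$ data points, $n-1$ parameters, and the constant 1 treated as the last feature before $y$,

$$h_\theta(x) = \sum_{i=1}^{n-1}\theta_i x_i, \qquad \theta_i \leftarrow \theta_i - \frac{a}{m}\sum_{j=1}^{m}\left(h_\theta\big(x^{(j)}\big) - y^{(j)}\right)x_i^{(j)}$$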
# Multivariate linear regression via gradient descent (loop version)

import matplotlib.pyplot as plt
import numpy as np
# P = np.loadtxt("PV.csv", delimiter=",")


# y = 3x - 2k + 7z - 3
# P = [[1,1,1,1,5],[2,1,2,1,15],[3,0,1,1,13],[0,1,2,1,9]]


# y = 2x + 4k + 7

P = [[1,1,1,13],[2,3,1,23],[4,2,1,23],[3,3,1,25],[2,2,1,19]]


# y = -13 x + 9
# P = [[1,1,-4],[0,1,9],[-1,1,22],[2,1,-17]]


m = len(P)                      # number of data points
n = len(P[0])                   # row width: features + the constant term + the target y

a = 0.1                         # learning rate (needs tuning)
step = 1000                     # maximum number of iterations
thre = 0.0009                   # convergence threshold on each parameter update

para = []                       # current parameter values (n-1 of them)
para2 = []                      # accumulated gradient for each parameter

for i in range(0, n-1):
    para.append(1)              # initialize all parameters to 1
    para2.append(0)

def Cost(P, para, n, j):        # gradient contribution of a single data point
    S = 0
    J = []
    for i in range(0, n-1):     # hypothesis value: sum of parameter * feature
        S += para[i] * P[j][i]
    for i in range(0, n-1):     # partial derivative w.r.t. each parameter
        J.append((S - P[j][n-1]) * P[j][i])
    return J

def Jump(t, thre):              # 1 if this parameter's update is below the threshold
    if abs(t) <= thre:
        return 1
    return 0

for i in range(0, step):        # repeat until converged or out of iterations
    p = 1                       # stays 1 only if every parameter's update is small enough
    for j in range(0, n-1):     # reset the accumulated gradients each pass
        para2[j] = 0
    for j in range(0, m):       # add up every point's gradient contribution
        c = Cost(P, para, n, j)
        for idx in range(0, n-1):
            para2[idx] += c[idx]
    for j in range(0, n-1):     # adjust the parameters
        t = a * para2[j] / m
        para[j] = para[j] - t
        if Jump(t, thre) == 0:
            p = 0
    if p == 1:
        print("\n$$$$$$$$$$$$$$$$$$$$$$$$$$$\nFunction is converged\n$$$$$$$$$$$$$$$$$$$$$$$$$$$")
        print("\nFunction is:  \n h(x) = ", end="")
        for j in range(0, n-1):
            if j == n-2:
                print("(", para[j], ")", "\n")
            else:
                print("(", para[j], ")x", j+1, "+", end="")
        break
    print(i, para)
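
A quick way to exercise the "any number of variables" claim is to generate test datasets of arbitrary width in the same [x1, ..., xk, 1, y] layout. The helper below is a sketch of mine; the name make_dataset, the value ranges, and the use of the random module are my choices, not part of the original post.

# Sketch (my addition): generate m points in the [x1, ..., xk, 1, y] layout from
# chosen true parameters (coefficients first, constant term last).
import random

def make_dataset(true_params, m):
    k = len(true_params) - 1                      # number of real features
    data = []
    for _ in range(m):
        xs = [random.uniform(-5, 5) for _ in range(k)]
        row = xs + [1]                            # append the constant-term column
        y = sum(p * v for p, v in zip(true_params, row))
        data.append(row + [y])
    return data

# Example: 20 points drawn from y = 2x + 4z + 7
# P = make_dataset([2, 4, 7], 20)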
 
  2. Matrix operations: the same gradient step written as a single matrix expression (recapped just below), applied to all parameters at once.
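    In matrix form (my restatement of the same update), with $P$ holding one row of features per data point (constant-1 column included, targets kept separately in $Y$) and the parameters $\theta$ stored as a $1\times(n-1)$ row vector, one step of gradient descent averaged over the $m$ points is

$$\theta \leftarrow \theta - \frac{a}{m}\left(\theta P^{\mathsf T} - Y^{\mathsf T}\right)P$$

    which is what the one-line loop in the code below computes.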
# Multivariate linear regression via gradient descent (matrix version)

import matplotlib.pyplot as plt
import numpy as np
# P = np.loadtxt("PV.csv", delimiter=",")


# y = 3x - 2k + 7z - 3
P = np.array([[1,1,1,1],[2,1,2,1],[3,0,1,1],[0,1,2,1]])
Y = np.array([[5],[15],[13],[9]])

# y = 2x + 4k + 7

# P = np.array([[1,1,1],[2,3,1],[4,2,1],[3,3,1],[2,2,1]])
# Y = np.array([[13],[23],[23],[25],[19]])


# y = -13 x + 9
# P = np.array([[1,1],[0,1],[-1,1],[2,1]])
# Y = np.array([[-4],[9],[22],[-17]])

para = np.ones((1, len(P[0])))      # one parameter per column of P, all initialized to 1

a = 0.01                            # learning rate (needs tuning per dataset)
step = 500                          # number of iterations
thre = 0.0009                       # convergence threshold (kept for symmetry with the loop version)

for i in range(0, step):
    # one vectorized gradient step, averaged over the data points as in the loop version
    para = para - a * (para @ P.T - Y.T) @ P / len(P)
    print(para)
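
As a sanity check on the gradient-descent result (my addition, not part of the original post), NumPy can solve the same least-squares problem directly; the para vector should approach this solution as the iterations proceed.

# Exact least-squares solution for comparison (my addition). For the active
# dataset (y = 3x - 2k + 7z - 3) this prints values close to [3, -2, 7, -3].
theta_exact, residuals, rank, sv = np.linalg.lstsq(P, Y, rcond=None)
print("exact least-squares parameters:", theta_exact.T)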

That's all.
