2021-11-12

Stochastic gradient descent updates the parameters using only a single sample per iteration, so training is fast. (Note that the code below actually accumulates the gradient over all samples before each update, i.e. it implements batch gradient descent.)

import numpy as np

# Living area (sq ft) and number of bedrooms; price in $1000s
X = np.array([[2104, 3], [1600, 3], [2400, 3], [1416, 2], [3000, 4]])
Y = np.array([400, 330, 369, 232, 540])

theta0 = np.random.random()
theta1 = np.random.random()
theta2 = np.random.random()
epochs = 1000     # number of passes over the data (must be an integer)
alpha = 1e-8      # small step size: the features are unscaled and large, so a big step diverges

def cost(X, Y, theta0, theta1, theta2):
    loss = 0
    m = len(Y)
    for i in range(m):
        loss += (theta0 + theta1 * X[i, 0] + theta2 * X[i, 1] - Y[i]) ** 2
    loss = loss / (2 * m)
    return loss

def grad_des(X, Y, theta0, theta1, theta2, alpha, epochs):
    m = len(Y)
    for z in range(epochs):
        theta0_grad = 0
        theta1_grad = 0
        theta2_grad = 0
        for i in range(m):
            # accumulate (+=) the gradient over all m samples, don't overwrite it
            error = theta0 + theta1 * X[i, 0] + theta2 * X[i, 1] - Y[i]
            theta0_grad += error
            theta1_grad += error * X[i, 0]
            theta2_grad += error * X[i, 1]
        theta0_grad = theta0_grad / m
        theta1_grad = theta1_grad / m
        theta2_grad = theta2_grad / m
        theta0 -= alpha * theta0_grad
        theta1 -= alpha * theta1_grad
        theta2 -= alpha * theta2_grad
    return theta0, theta1, theta2

theta0, theta1, theta2 = grad_des(X, Y, theta0, theta1, theta2, alpha, epochs)
print(theta0, theta1, theta2)
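For contrast with the full-batch loop above, here is a minimal sketch of true stochastic gradient descent on the same data, updating the parameters immediately after each individual sample. The function name `sgd` and the hyperparameter values are my own choices for illustration, not from the original post:

```python
import numpy as np

X = np.array([[2104, 3], [1600, 3], [2400, 3], [1416, 2], [3000, 4]])
Y = np.array([400, 330, 369, 232, 540])

def sgd(X, Y, alpha=1e-8, epochs=1000, seed=0):
    rng = np.random.default_rng(seed)
    theta0 = theta1 = theta2 = 0.0
    m = len(Y)
    for _ in range(epochs):
        for i in rng.permutation(m):  # visit the samples in a random order each epoch
            # gradient of the squared error for this ONE sample, applied immediately
            error = theta0 + theta1 * X[i, 0] + theta2 * X[i, 1] - Y[i]
            theta0 -= alpha * error
            theta1 -= alpha * error * X[i, 0]
            theta2 -= alpha * error * X[i, 1]
    return theta0, theta1, theta2

print(sgd(X, Y))
```

Each parameter update here costs one sample's worth of computation instead of a full pass over the data, which is where the speed claim comes from; the trade-off is a noisier descent path.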
