PyTorch Deep Learning: Implementing Linear Regression with Gradient Descent
1. Loss function: the example below uses the mean squared error (MSE), i.e. the average of (y_pred - y)² over all training samples, which measures how far the model's predictions are from the true values.
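As a quick illustration of this loss (a minimal sketch; the helper name mse is mine, not part of the example code below):

# Minimal sketch of the MSE loss; the helper name `mse` is illustrative
def mse(xs, ys, w):
    # average squared prediction error for the model y = w * x
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

print(mse([1.0, 2.0, 3.0], [2.0, 4.0, 6.0], 1.0))  # 14/3 ≈ 4.667 at w = 1.0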
2. Types of gradient descent: batch gradient descent (the gradient is averaged over the whole dataset, as in the example below), stochastic gradient descent (SGD, one update per sample), and mini-batch gradient descent (one update per small batch); see the SGD sketch below for contrast.
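For contrast with the batch version used in section 4, a minimal SGD sketch (my own addition; the original only implements the batch variant):

# Minimal SGD sketch: one weight update per sample
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]
w = 1.0
for epoch in range(100):
    for x, y in zip(x_data, y_data):
        grad = 2 * x * (x * w - y)  # gradient of (x * w - y)**2 for one sample
        w -= 0.01 * grad
print(w)  # converges toward 2.0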
3. Using gradient descent: take the partial derivative of the loss function with respect to the weight (W) to get the gradient (∂loss/∂W). Multiply the gradient by a learning rate we choose (a = 0.01 in the code below), then subtract that product from the current weight (W); the result is the weight after the first update step.
*Model expression: (y = Wx + b); the example code below drops the bias and uses y = Wx.
Weight update rule: subtract the product of the learning rate (a) and the gradient (∂loss/∂W) from the current weight (W): [ W = W - a(∂loss/∂W) ]. A hand-checked single step follows.
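To make one update step concrete (values hand-checked against the data used below, with w = 1.0 and a = 0.01):

# One hand-checked update step for w = 1.0 on the data below
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]
w = 1.0
# grad = (2*1*(1-2) + 2*2*(2-4) + 2*3*(3-6)) / 3 = (-2 - 8 - 18) / 3 = -28/3
grad = sum(2 * x * (x * w - y) for x, y in zip(x_data, y_data)) / len(x_data)
w = w - 0.01 * grad  # 1.0 - 0.01 * (-9.333...) ≈ 1.0933
print(grad, w)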
4. Example code
# Linear model: y = w * x
# Loss function: loss = (y_pred - y) ** 2
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

# Initialize the weight w to 1.0
w = 1.0

# Define the model: y = x * w
def forward(x):
    return x * w

# Define the loss function (MSE over the dataset)
def loss(xs, ys):
    loss = 0
    for x, y in zip(xs, ys):
        y_pred = forward(x)
        loss += (y_pred - y) ** 2
    return loss / len(xs)

# Define the gradient: d(loss)/dw = mean of 2 * x * (x * w - y)
def gradient(xs, ys):
    grad = 0
    for x, y in zip(xs, ys):
        grad += 2 * x * (x * w - y)
    return grad / len(xs)

print('Predict (before training)', 4, forward(4))
for epoch in range(100):
    loss_val = loss(x_data, y_data)
    grad_val = gradient(x_data, y_data)
    # Gradient descent: if y_pred < y, the gradient is negative, so subtracting
    # the learning-rate * gradient product increases w, which raises the
    # prediction toward the true value; the case y_pred > y is symmetric.
    w -= 0.01 * grad_val
    print('Epoch:', epoch, 'w =', w, 'loss =', loss_val)
print('Predict (after training)', 4, forward(4))
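Since the title mentions PyTorch, here is a minimal sketch (my own addition, not part of the original example) of the same training loop using torch autograd in place of the hand-derived gradient:

# Minimal PyTorch sketch of the same model (assumes torch is installed)
import torch

x_data = torch.tensor([1.0, 2.0, 3.0])
y_data = torch.tensor([2.0, 4.0, 6.0])
w = torch.tensor(1.0, requires_grad=True)  # same initial weight as above

for epoch in range(100):
    y_pred = x_data * w                     # forward pass: y = w * x
    loss = ((y_pred - y_data) ** 2).mean()  # MSE loss, as in section 1
    loss.backward()                         # autograd computes d(loss)/dw
    with torch.no_grad():
        w -= 0.01 * w.grad                  # update rule: w = w - a * grad
        w.grad.zero_()                      # clear the gradient for the next step

print('Predict (after training)', 4, (4 * w).item())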