PyTorch Deep Learning Practice - 刘二大人 - 02: Gradient Descent and Stochastic Gradient Descent

Gradient descent:
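Batch gradient descent fits the linear model $\hat{y} = w x$ by updating $w$ once per epoch, using the gradient of the mean-squared-error cost over the whole training set:

$$\text{cost}(w) = \frac{1}{N}\sum_{n=1}^{N}\left(w x_n - y_n\right)^2, \qquad \frac{\partial\,\text{cost}}{\partial w} = \frac{1}{N}\sum_{n=1}^{N} 2\,x_n\,(w x_n - y_n)$$

The update rule is $w \leftarrow w - \alpha \cdot \partial\,\text{cost}/\partial w$, with learning rate $\alpha = 0.1$ in the code below.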

import matplotlib.pyplot as plt

x_data = [1.0, 2.0, 3.0, 4.0]
y_data = [2.0, 4.0, 6.0, 8.0]
loss_list = []
w = 1.0  # initial guess for the weight


def forward(x):
    # Linear model: y_hat = w * x
    return w * x


def cost(xs, ys):
    # Mean squared error over the whole training set
    total = 0  # renamed from `sum` to avoid shadowing the builtin
    for x, y in zip(xs, ys):
        y_pred = forward(x)
        total += (y - y_pred) ** 2
    return total / len(xs)


def gradient(xs, ys):
    # Gradient of the MSE cost w.r.t. w, averaged over all samples
    grad = 0
    for x, y in zip(xs, ys):
        grad += 2 * x * (x * w - y)
    return grad / len(xs)


for epoch in range(100):
    loss = cost(x_data, y_data)
    loss_list.append(loss)
    w = w - 0.1 * gradient(x_data, y_data)  # one full-batch update per epoch
    print(epoch, 'w=', w, 'loss=', loss)

plt.plot(loss_list)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()

Experimental results: [loss-vs-epoch curve from the code above]
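After training, the fitted model can be queried through forward. A minimal usage sketch (the test input x = 5.0 is an illustrative choice, not from the original post):

print('predict (x=5.0):', forward(5.0))  # ~10.0, since w converges to 2.0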

Stochastic gradient descent:
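Stochastic gradient descent instead updates $w$ once per training sample, using the single-sample loss and its gradient:

$$\text{loss}(w; x, y) = (w x - y)^2, \qquad \frac{\partial\,\text{loss}}{\partial w} = 2\,x\,(w x - y)$$

so one epoch performs $N$ weight updates instead of one. Each update is noisier but cheaper, and the noise can help escape saddle points and shallow local minima.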

import matplotlib.pyplot as plt

x_data = [1.0, 2.0, 3.0, 4.0]
y_data = [2.0, 4.0, 6.0, 8.0]
loss_list = []
w = 1.0  # initial guess for the weight
lr = 0.1  # learning rate (renamed from `a` for clarity)


def forward(x):
    # Linear model: y_hat = w * x
    return w * x


def loss(x, y):
    # Squared error of a single sample
    y_pred = forward(x)
    return (y - y_pred) ** 2


def gradient(x, y):
    # Gradient of the single-sample loss w.r.t. w
    return 2 * x * (x * w - y)


for epoch in range(100):
    for x, y in zip(x_data, y_data):
        loss_val = loss(x, y)
        loss_list.append(loss_val)
        w = w - lr * gradient(x, y)  # one update per sample
    # loss_val here is the loss of the last sample seen in this epoch
    print(epoch, 'w=', w, 'loss=', loss_val)

plt.plot(loss_list)
plt.xlabel('update step')
plt.ylabel('loss')
plt.show()

Experimental results: [loss-vs-update-step curve from the code above]
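Mini-batch gradient descent sits between the two extremes above: each update averages the single-sample gradients over a small batch, trading SGD's noise against the cost of a full-batch pass. A minimal sketch under the same linear model (the batch size of 2, the shuffling scheme, and the variable names are my own choices, not from the original post):

import random

x_data = [1.0, 2.0, 3.0, 4.0]
y_data = [2.0, 4.0, 6.0, 8.0]
w = 1.0
lr = 0.1
batch_size = 2  # assumed batch size, for illustration only

data = list(zip(x_data, y_data))
for epoch in range(100):
    random.shuffle(data)  # reshuffle so batches differ across epochs
    for i in range(0, len(data), batch_size):
        batch = data[i:i + batch_size]
        # average the single-sample gradients 2*x*(w*x - y) over the batch
        grad = sum(2 * x * (x * w - y) for x, y in batch) / len(batch)
        w = w - lr * grad

print('w after mini-batch training:', w)  # converges toward 2.0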
