Preface
Notes (2) only gave a rough overview and took fairly large steps. From here on, I will record and share the learning process step by step, on solid ground. These posts mainly record code; for the theory, see the Baidu Netdisk share below. Everyone is welcome to learn PyTorch and deep learning together with me; please discuss and leave comments!
Theory
https://pan.baidu.com/s/18fxfRcKlZAYXLdXcDtfaYg
Full code
import torch
import numpy as np
import matplotlib.pyplot as plt
# A simple linear regression example
# Training data points
x_train = np.array([[3.3], [4.4], [5.5], [6.71], [6.93], [4.168],
                    [9.779], [6.182], [7.59], [2.167], [7.042],
                    [10.791], [5.313], [7.997], [3.1]], dtype=np.float32)
y_train = np.array([[1.7], [2.76], [2.09], [3.19], [1.694], [1.573],
                    [3.366], [2.596], [2.53], [1.221], [2.827],
                    [3.465], [1.65], [2.904], [1.3]], dtype=np.float32)
# Convert the NumPy arrays to Tensors
x_train = torch.from_numpy(x_train)
y_train = torch.from_numpy(y_train)
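# Note: torch.from_numpy shares memory with the underlying NumPy array,
# so changes to one are reflected in the other.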
# Linear model y = wx + b: define w and b as trainable parameters
w = torch.randn(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
# Define the linear model
def linear_model(x):
    return x * w + b
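# Note: x is (15, 1) while w and b have shape (1), so broadcasting
# yields a (15, 1) output, one prediction per sample.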
# Loss function: mean squared error, loss = mean((y_pred - y)^2)
loss_func = torch.nn.MSELoss()
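# Quick sanity check of the mean reduction:
# torch.nn.MSELoss()(torch.tensor([1., 2.]), torch.tensor([0., 4.]))
# -> ((1-0)^2 + (2-4)^2) / 2 = 2.5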
# Start training
for i in range(10):
    y_ = linear_model(x_train)  # predictions, shape (15, 1)
    loss = loss_func(y_, y_train)
    # Compute gradients (backpropagation)
    loss.backward()
    # Update w and b with learning rate 1e-2 (plain gradient descent)
    w.data = w.data - 1e-2 * w.grad.data
    b.data = b.data - 1e-2 * b.grad.data
    # Reset gradients to zero for the next iteration
    w.grad.zero_()
    b.grad.zero_()
    print('epoch: {}, loss: {}'.format(i, loss.detach().numpy()))
y_ = linear_model(x_train)
# Plot the real data and the model's predictions
plt.plot(x_train.data.numpy(), y_train.data.numpy(), 'bo', label='real')
plt.plot(x_train.data.numpy(), y_.data.numpy(), 'ro', label='estimated')
plt.legend()
plt.show()
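For comparison, here is a minimal sketch of the same fit written with torch.nn.Linear and torch.optim.SGD instead of the hand-written updates above; it assumes the x_train, y_train, and loss_func already defined and should behave the same up to the random initialization.

# Equivalent training loop with a built-in layer and optimizer
model = torch.nn.Linear(1, 1)  # holds its own w and b internally
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
for i in range(10):
    y_pred = model(x_train)
    loss = loss_func(y_pred, y_train)
    optimizer.zero_grad()  # clear gradients from the previous step
    loss.backward()        # backpropagate
    optimizer.step()       # update the parameters
    print('epoch: {}, loss: {}'.format(i, loss.item()))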
Results
epoch: 0, loss: 69.31951141357422
epoch: 1, loss: 1.4899113178253174
epoch: 2, loss: 0.23439694941043854
epoch: 3, loss: 0.21095100045204163
epoch: 4, loss: 0.21030764281749725
epoch: 5, loss: 0.21008740365505219
epoch: 6, loss: 0.20987607538700104
epoch: 7, loss: 0.2096659392118454
epoch: 8, loss: 0.2094568908214569
epoch: 9, loss: 0.20924891531467438
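The loss flattens out near 0.209 within a few epochs; it is still decreasing slowly, so more epochs (or a tuned learning rate) would bring it closer to the least-squares optimum. To read out the fitted line itself, you can print the trained parameters:

# Read out the learned slope and intercept after training
print('w = {:.3f}, b = {:.3f}'.format(w.item(), b.item()))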