When implementing linear regression in PyTorch, we define a model containing a single linear layer and train it with a gradient-descent optimizer. Below is a simple example:
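For reference, the model being fit is y = w * x + b, and training chooses the weight w and bias b to minimize the mean squared error (1/N) * Σ(ŷᵢ − yᵢ)² over the training set.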
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
# Create the training data
x_train = np.array([[3.3], [4.4], [5.5], [6.71], [6.93], [4.168],
                    [9.779], [6.182], [7.59], [2.167], [7.042],
                    [10.791], [5.313], [7.997], [3.1]], dtype=np.float32)
y_train = np.array([[1.7], [2.76], [2.09], [3.19], [1.694], [1.573],
                    [3.366], [2.596], [2.53], [1.221], [2.827],
                    [3.465], [1.65], [2.904], [1.3]], dtype=np.float32)
# Convert the numpy arrays to PyTorch tensors
x_train = torch.from_numpy(x_train)
y_train = torch.from_numpy(y_train)
# Define the linear regression model
class LinearRegression(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(LinearRegression, self).__init__()
        self.linear = nn.Linear(input_dim, output_dim)

    def forward(self, x):
        out = self.linear(x)
        return out
# Define the model dimensions: one input feature, one output
input_dim = 1
output_dim = 1
# Instantiate the model
model = LinearRegression(input_dim, output_dim)
# Define the loss function and optimizer
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)
# Train the model
num_epochs = 2000
for epoch in range(num_epochs):
    # Forward pass
    outputs = model(x_train)
    loss = criterion(outputs, y_train)
    # Backward pass and parameter update
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Print the loss every 100 epochs
    if (epoch+1) % 100 == 0:
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, loss.item()))
# Print the trained model parameters
print('The trained parameters:')
for name, param in model.named_parameters():
    if param.requires_grad:
        print(name, param.data)
# Predict on new data
x_test = torch.tensor([[4.5], [7.8]])
predicted = model(x_test)
print('Predictions for new data:')
print(predicted.data)
Output
------------------------------------
Epoch [100/2000], Loss: 0.1696
Epoch [200/2000], Loss: 0.1693
Epoch [300/2000], Loss: 0.1691
Epoch [400/2000], Loss: 0.1691
Epoch [500/2000], Loss: 0.1690
Epoch [600/2000], Loss: 0.1690
Epoch [700/2000], Loss: 0.1689
Epoch [800/2000], Loss: 0.1689
Epoch [900/2000], Loss: 0.1689
Epoch [1000/2000], Loss: 0.1689
Epoch [1100/2000], Loss: 0.1689
Epoch [1200/2000], Loss: 0.1689
Epoch [1300/2000], Loss: 0.1689
Epoch [1400/2000], Loss: 0.1689
Epoch [1500/2000], Loss: 0.1689
Epoch [1600/2000], Loss: 0.1689
Epoch [1700/2000], Loss: 0.1689
Epoch [1800/2000], Loss: 0.1689
Epoch [1900/2000], Loss: 0.1689
Epoch [2000/2000], Loss: 0.1689
------------------------------------
The trained parameters:
linear.weight tensor([[0.2599]])
linear.bias tensor([0.7482])
Predictions for new data:
tensor([[1.9178],
[2.7755]])
------------------------------------
In this example, we build a simple linear regression model consisting of a single linear layer. We use mean squared error (MSE) as the loss function and the stochastic gradient descent (SGD) optimizer to update the model's parameters (note that each step here uses the full 15-sample batch, so the training is effectively batch gradient descent). During training, each iteration computes gradients via backpropagation and then lets the optimizer apply the parameter update. Finally, we use the trained model to make predictions on new data.
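To make the optimizer's role concrete: for plain SGD, optimizer.step() updates every parameter p in place as p ← p − lr * p.grad, and optimizer.zero_grad() resets the accumulated gradients. A minimal hand-rolled equivalent of that pair, shown only as an illustrative sketch, not as what the code above must do internally:

# Manual update equivalent to plain SGD with lr=0.01 (illustration only)
with torch.no_grad():                   # parameter updates must not be tracked by autograd
    for param in model.parameters():
        param -= 0.01 * param.grad      # p <- p - lr * dL/dp
        param.grad.zero_()              # reset gradients, like optimizer.zero_grad()

One further note: the prediction step above runs the model with gradient tracking still enabled. For pure inference this is unnecessary overhead, so it is common to wrap the forward pass in torch.no_grad():

# Inference without building the autograd graph
with torch.no_grad():
    predicted = model(x_test)
print(predicted)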