Multivariable linear regression with PyTorch

Summary: while attempting multivariable linear regression with PyTorch, a TypeError is raised because the input and target passed to the loss have mismatched types. The code defines a linear regression model with 3 inputs and 1 output and trains it with MSELoss; after enough iterations the model is expected to fit the data points and plot a fitted line.

I am doing linear regression with PyTorch.

It worked with a single variable, but when I try multivariable linear regression in PyTorch I get the error below. How should I do linear regression with multiple variables?

TypeError                                 Traceback (most recent call last)
in ()
      9 optimizer.zero_grad()               #gradient
     10 outputs = model(inputs)             #output
---> 11 loss = criterion(outputs, targets)  #loss function
     12 loss.backward()                     #backward propagation
     13 optimizer.step()                    #1-step optimization (gradient descent)

/anaconda/envs/tensorflow/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    204
    205     def __call__(self, *input, **kwargs):
--> 206         result = self.forward(*input, **kwargs)
    207         for hook in self._forward_hooks.values():
    208             hook_result = hook(self, input, result)

/anaconda/envs/tensorflow/lib/python3.6/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
     22         _assert_no_grad(target)
     23         backend_fn = getattr(self._backend, type(self).__name__)
---> 24         return backend_fn(self.size_average)(input, target)
     25
     26

/anaconda/envs/tensorflow/lib/python3.6/site-packages/torch/nn/_functions/thnn/auto.py in forward(self, input, target)
     39         output = input.new(1)
     40         getattr(self._backend, update_output.name)(self._backend.library_state, input, target,
---> 41                    output, *self.additional_args)
     42         return output
     43

TypeError: FloatMSECriterion_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.DoubleTensor, torch.FloatTensor, bool), but expected (int state, torch.FloatTensor input, torch.FloatTensor target, torch.FloatTensor output, bool sizeAverage)
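Reading the last line of the traceback, the input to MSELoss is a torch.FloatTensor while the target is a torch.DoubleTensor, and the loss backend refuses the mixed pair. A minimal sketch of how such a mismatch can arise, using a couple of throwaway arrays and assuming only NumPy's default float64 dtype and the fact that torch.from_numpy keeps the source dtype:

import numpy as np
import torch

# without an explicit dtype, NumPy builds a float64 array,
# and torch.from_numpy turns it into a DoubleTensor
y = np.array([[152.], [185.]])
print(y.dtype)                       # float64
print(torch.from_numpy(y).type())    # torch.DoubleTensor

# with dtype=np.float32 the resulting tensor is a FloatTensor,
# matching what nn.Linear and nn.MSELoss work with by default
y32 = np.array([[152.], [185.]], dtype=np.float32)
print(torch.from_numpy(y32).type())  # torch.FloatTensor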

Here is the code:

# imports
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
from torch.autograd import Variable

# input_size = 1
input_size = 3
output_size = 1
num_epochs = 300
learning_rate = 0.002

# data set
# x_train = np.array([[1.564],[2.11],[3.3],[5.4]], dtype=np.float32)
x_train = np.array([[73., 80., 75.],
                    [93., 88., 93.],
                    [89., 91., 90.],
                    [96., 98., 100.],
                    [73., 63., 70.]], dtype=np.float32)

# y_train = np.array([[8.0],[19.0],[25.0],[34.45]], dtype=np.float32)
y_train = np.array([[152.], [185.], [180.], [196.], [142.]])

print('x_train:\n', x_train)
print('y_train:\n', y_train)

class LinearRegression(nn.Module):
    def __init__(self, input_size, output_size):
        super(LinearRegression, self).__init__()
        self.linear = nn.Linear(input_size, output_size)

    def forward(self, x):
        out = self.linear(x)  # forward propagation
        return out

model = LinearRegression(input_size, output_size)

# loss and optimizer
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

# train the model
for epoch in range(num_epochs):
    # convert numpy arrays to torch Variables
    inputs = Variable(torch.from_numpy(x_train))
    # inputs = Variable(torch.Tensor(x_train))
    targets = Variable(torch.from_numpy(y_train))

    # forward + backward + optimize
    optimizer.zero_grad()               # reset gradients
    outputs = model(inputs)             # forward pass
    loss = criterion(outputs, targets)  # loss function
    loss.backward()                     # backward propagation
    optimizer.step()                    # 1-step optimization (gradient descent)

    if (epoch + 1) % 5 == 0:
        print('epoch [%d/%d], Loss: %.4f' % (epoch + 1, num_epochs, loss.data[0]))

predicted = model(Variable(torch.from_numpy(x_train))).data.numpy()

plt.plot(x_train, y_train, 'ro', label='Original Data')
plt.plot(x_train, predicted, label='Fitted Line')
plt.legend()
plt.show()
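For completeness, here is a minimal sketch of the likely fix, assuming the failure comes from y_train being created without dtype=np.float32 (so torch.from_numpy yields a DoubleTensor while the model outputs FloatTensors). This is not the original poster's code, just one way to make the dtypes line up; the learning rate is also lowered, since with the unscaled 60-100 range features a step of 0.002 makes plain SGD diverge:

import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Variable

x_train = np.array([[73., 80., 75.],
                    [93., 88., 93.],
                    [89., 91., 90.],
                    [96., 98., 100.],
                    [73., 63., 70.]], dtype=np.float32)
# declare the targets as float32 too, so torch.from_numpy gives a FloatTensor
y_train = np.array([[152.], [185.], [180.], [196.], [142.]], dtype=np.float32)

model = nn.Linear(3, 1)   # same 3-input, 1-output shape as the question's module
criterion = nn.MSELoss()
# smaller learning rate than in the question: the raw features are large,
# so 0.002 makes the loss blow up with plain SGD
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)

for epoch in range(300):
    inputs = Variable(torch.from_numpy(x_train))
    targets = Variable(torch.from_numpy(y_train))
    # alternative fix: keep y_train as float64 and cast the tensor instead,
    # e.g. Variable(torch.from_numpy(y_train).float())

    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, targets)  # FloatTensor vs FloatTensor: no TypeError
    loss.backward()
    optimizer.step()

print('predictions:\n', model(Variable(torch.from_numpy(x_train))).data)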
