PyTorch for Deep Learning: AutoGrad and Simple Linear Regression


PyTorch's AutoGrad


PyTorch's AutoGrad is a very powerful feature with which we can easily find the derivative of one variable with respect to another. This comes in handy while calculating gradients for the gradient descent algorithm.

How to use this feature

First and foremost, let's import the necessary libraries:

#importing the libraries
import torch
import numpy as np
import matplotlib.pyplot as plt
import random

x = torch.tensor(5.)                     #some data
w = torch.tensor(4., requires_grad=True) #weight (slope)
b = torch.tensor(2., requires_grad=True) #bias (intercept)

y = x*w + b  #equation of a line
y.backward() #letting PyTorch know that y is the variable that needs to be differentiated

print(w.grad, b.grad) #prints the derivative of y with respect to w and b

output:
tensor(5.) tensor(1.)

This is the basic idea behind PyTorch's AutoGrad: calling backward() specifies the variable to be differentiated, and each input's .grad attribute then holds the derivative of that variable with respect to the input.

Note: the requires_grad parameter must be True for every variable whose gradient is to be computed.
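
A quick way to see this in action is the minimal sketch below (my own example, not from the original post), showing that a tensor created without requires_grad=True ends up with no gradient:

import torch

x = torch.tensor(5.)                      #requires_grad defaults to False
w = torch.tensor(4., requires_grad=True)  #gradient will be tracked

y = x * w
y.backward()

print(w.grad)  #tensor(5.) -- dy/dw = x
print(x.grad)  #None -- no gradient was requested for x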

Simple Linear Regression with PyTorch AutoGrad

Now that we have a basic understanding of AutoGrad, let's use it to perform linear regression with gradient descent for a better understanding.
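
Before running the full example, it helps to check AutoGrad against gradients derived by hand. A minimal sketch on toy data (my own example, not from the original post): for the mean squared error L = mean((w*x + b - y)^2), the hand-derived gradients are dL/dw = mean(2*(y_pred - y)*x) and dL/db = mean(2*(y_pred - y)), and they match what .grad reports:

import torch

x = torch.tensor([1., 2., 3.])
y = torch.tensor([25., 45., 65.])  #points on the line y = 20*x + 5
w = torch.tensor(0., requires_grad=True)
b = torch.tensor(0., requires_grad=True)

y_pred = x * w + b
loss = torch.square(y_pred - y).mean()  #MSE
loss.backward()

err = (y_pred - y).detach()
print(w.grad, (2 * err * x).mean())  #both tensor(-206.6667)
print(b.grad, (2 * err).mean())      #both tensor(-90.)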

I am creating a custom dataset on my own using some numpy functions:

#creating a dataset
x = torch.tensor(np.arange(1,100,1))
y = (x*20 + 5 + random.randint(-2,3)).reshape(-1) #randint adds one constant offset to every point

print("shape of x: ", x.shape)
print("shape of y: ", y.shape)

output:
shape of x: torch.Size([99])
shape of y: torch.Size([99])

Check out the numpy documentation if you don't know what a numpy function does.
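
For instance, np.arange used above generates evenly spaced values (a quick sketch):

import numpy as np

a = np.arange(1, 100, 1)     #start=1, stop=100 (exclusive), step=1
print(a[:5], a[-1], a.shape) #[1 2 3 4 5] 99 (99,)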

#initialising weight and bias term
w = torch.tensor(0.,requires_grad=True)
b = torch.tensor(0.,requires_grad=True)

Specifying the number of iterations:

epochs = 100

Defining the forward loop:

losses = []
for i in range(epochs):
    #making predictions
    y_pred = (x*w) + b

    #calculating loss
    loss = torch.square(y_pred - y).mean()
    losses.append(loss.item()) #.item() stores a plain float instead of a tensor

    #gradient descent
    loss.backward()
    with torch.no_grad():  #parameter updates must not be tracked by autograd
        w -= w.grad*0.0001
        b -= b.grad*0.0001
    w.grad.zero_()         #reset gradients so they don't accumulate across epochs
    b.grad.zero_()

    #printing loss
    if i%15==0:
        print(loss)

output:
tensor(1338702.5000, grad_fn=<MeanBackward0>)
tensor(7.9803, grad_fn=<MeanBackward0>)
tensor(7.9686, grad_fn=<MeanBackward0>)
tensor(7.9568, grad_fn=<MeanBackward0>)
tensor(7.9450, grad_fn=<MeanBackward0>)
tensor(7.9333, grad_fn=<MeanBackward0>)
tensor(7.9216, grad_fn=<MeanBackward0>)

Printing the final values of the weight and bias:

print(w.item(), b.item())

output:
20.115468978881836 0.34108221530914307

The value of w is close to 20, while the value of b (about 0.34) has not yet reached 5: with such a small learning rate, the bias converges far more slowly than the slope and would need many more epochs. Since the dominant parameter w was learned accurately, we can conclude that our program learned well using the PyTorch AutoGrad function.
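
As a quick sanity check (my own addition, not in the original post), we can compare predictions from the learned parameters against the true line y = 20*x + 5:

#compare predictions against the true line for a few inputs
with torch.no_grad():
    for xi in (1., 50., 99.):
        print(xi, (xi * w + b).item(), 20 * xi + 5)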

Analyse the Loss

If you want, you could plot the loss over time using the losses list and see how the loss decreased.
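
A minimal plotting sketch, assuming the losses list of floats recorded in the loop above:

import matplotlib.pyplot as plt

plt.plot(losses) #one loss value per epoch
plt.xlabel("epoch")
plt.ylabel("MSE loss")
plt.title("Loss over time")
plt.show()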

Overview of what we did

  1. We first created a dataset x and y, where we assigned the value of y to be equal to 20*x + 5.

  2. Initialised the weight and bias to 0 and set the requires_grad parameter to True (because we are going to differentiate the loss with respect to w and b to get the gradients).

  3. Defined the forward loop where we make the predictions, calculate the loss, differentiate the loss with respect to w and b to get the gradients, update w and b, and repeat this process 100 times.

  4. Finally, printed w and b to check whether they are close to 20 and 5, respectively (because our y = 20*x + 5, and those are the parameters we wanted our model to learn).

  5. The learned slope was very close to the true value, which shows that our linear regression model worked well. (An equivalent version using torch.optim is sketched below.)
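
For comparison (my own addition, not part of the original post), the same model can be trained with torch.optim.SGD, which performs the update and gradient-zeroing steps for us:

import numpy as np
import torch

x = torch.tensor(np.arange(1, 100, 1), dtype=torch.float32)
y = x * 20 + 5  #noise-free version of the dataset above

w = torch.tensor(0., requires_grad=True)
b = torch.tensor(0., requires_grad=True)
opt = torch.optim.SGD([w, b], lr=0.0001)

for i in range(100):
    loss = torch.square(x * w + b - y).mean()
    opt.zero_grad()  #clear old gradients
    loss.backward()  #compute new gradients
    opt.step()       #w -= lr*w.grad; b -= lr*b.grad

print(w.item(), b.item())  #w approaches 20; b moves slowly toward 5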

Thank You

Translated from: https://medium.com/analytics-vidhya/pytorch-for-deep-learning-autograd-and-simple-linear-regression-954f999316b1
