How to Implement Linear Regression with PyTorch

Implementing linear regression with PyTorch is probably overkill. This library was made for more complicated things, like neural networks and complex deep learning architectures. Nevertheless, I think that using it to implement a simpler machine learning method, like linear regression, is a good exercise for those who want to start learning PyTorch.

At its core, PyTorch is just a math library similar to NumPy, but with 2 important improvements:

  • It can use a GPU to make its operations a lot faster. If you have a compatible GPU properly configured, you can make the code run on the GPU with just a few changes.

  • It is capable of automatic differentiation; this means that for gradient-based methods you don't need to compute the gradient manually, PyTorch will do it for you.
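To see automatic differentiation in action, here is a minimal example (my own illustration, not from the article): we compute y = x² + 3x and let autograd find dy/dx = 2x + 3.

```python
import torch

# Mark x as a tensor whose gradient we want tracked.
x = torch.tensor(2.0, requires_grad=True)

y = x ** 2 + 3 * x
y.backward()  # populates x.grad with dy/dx evaluated at x = 2

print(x.grad)  # tensor(7.)
```

No manual calculus needed: `backward()` walks the computation graph and fills in the gradient for us.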

You can think of PyTorch as NumPy on steroids.

These two features may not seem like big improvements for what we want to do here (linear regression), since it is not very computationally expensive and the gradient is quite simple to compute manually. But they make a big difference in deep learning, where we need a lot of computing power and the gradient is quite nasty to calculate by hand.

Before working on the implementation, let’s first briefly recall what linear regression is:

Linear regression estimates an unknown variable as a linear function of some other known variables. Visually, we fit a line (or a hyperplane in higher dimensions) through our data points.

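In symbols (a standard formulation, not spelled out in the article), the model predicts the unknown variable from the known ones as

```latex
\hat{y} = w_1 x_1 + w_2 x_2 + \dots + w_n x_n + b
```

and fitting means choosing the weights $w_1, \dots, w_n$ and bias $b$ that minimize the mean squared error over the $m$ data points:

```latex
\text{MSE} = \frac{1}{m} \sum_{i=1}^{m} \left( \hat{y}_i - y_i \right)^2
```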
If you're not comfortable with this concept, or want to better understand the math behind it, you can read my previous article about linear regression:

Now, let’s jump to the coding part.

Firstly, we obviously need to import some libraries. We import torch, since it is the main thing we use for the implementation; matplotlib for visualizing our results; the make_regression function from sklearn, which we will use to generate an example regression dataset; and Python's built-in math module.

import torch
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
import math

Then we will create a LinearRegression class with the following methods:

  • .fit() — this method will do the actual learning of our linear regression model; here we will find the optimal weights

  • .predict() — this one will be used for prediction; it will return the output of our linear model
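The article is cut off at this point, so the author's exact implementation is not shown. As a hedged sketch of what such a class could look like, here is one standard approach: gradient descent on the mean squared error, with the gradient supplied by PyTorch's autograd (the learning rate, epoch count, and bias-as-extra-weight trick are my own choices, not the article's):

```python
import torch

class LinearRegression:
    """Linear regression trained by gradient descent, using PyTorch autograd."""

    def fit(self, x, y, lr=0.1, epochs=500):
        # Prepend a column of ones so the bias is learned as an extra weight.
        x = torch.cat([torch.ones(x.shape[0], 1), x], dim=1)
        y = y.reshape(-1, 1)
        self.weights = torch.zeros(x.shape[1], 1, requires_grad=True)
        for _ in range(epochs):
            loss = torch.mean((x @ self.weights - y) ** 2)  # MSE
            loss.backward()  # autograd fills self.weights.grad
            with torch.no_grad():
                self.weights -= lr * self.weights.grad
                self.weights.grad.zero_()  # reset for the next epoch
        return self

    def predict(self, x):
        x = torch.cat([torch.ones(x.shape[0], 1), x], dim=1)
        with torch.no_grad():
            return x @ self.weights

# Quick check on synthetic data following y = 3x + 2.
x = torch.linspace(0.0, 1.0, 50).reshape(-1, 1)
y = 3 * x.squeeze() + 2
model = LinearRegression().fit(x, y, lr=0.5)
print(model.predict(torch.tensor([[0.5]])))
```

With the sklearn import from earlier, example data could instead come from `make_regression(n_samples=100, n_features=1, noise=10)`, converted to tensors with `torch.tensor(..., dtype=torch.float32)` before calling `.fit()`.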
