Let's Talk Logistic Regression

Moving on from Linear Regression, we will now address the next important algorithm on our ML journey: Logistic Regression.

In everyday English, ‘logistics’ means ‘planning’. Now, how do we interpret that in the context of Machine Learning?

Actually, Logistic Regression derives its name from the logistic function that is used at the core of the algorithm, and that is all there is to it. (It has nothing to do with the everyday meaning of ‘logistics’, per se.)

Let’s now break down the aspects of Logistic Regression.

The algorithm is usually used to sort data into one of two possibilities, e.g. pass/fail, alive/dead, this/that (binary). While Linear Regression predicts a continuous value, Logistic Regression is termed a classification algorithm.

How does the algorithm classify, you may ask? To answer that, we will have to probe deeper, into the very function that is the core of the method.

The logistic function (or sigmoid function) is an S-shaped curve that rises pretty quickly and then saturates at a certain level. It takes any real-valued number and maps it to a value between 0 and 1.

It’s mathematically given by: 1 / (1 + e^(-value)), where e is the base of the natural logarithm.

Now, how does the logistic curve help? Here you go.

We take the values of x (the training data), plug them into an equation, and model the values of y. Exactly like Linear Regression, except that y here is either 0 or 1. (We are sorting the data into possibilities, as I mentioned before.)

Here’s an example of a Logistic Regression equation:

y = e^(b0 + b1*x) / (1 + e^(b0 + b1*x))

This formula is essentially a revised version of the logistic function. We take a perfectly linear function, b0 + b1*x, and plug it into our logistic function to get a mapped output between 0 and 1. (You could say it’s Linear Regression with a twist!)

Here, b0 and b1 are coefficients that need to be learned from the training data.
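As a quick sketch of how a prediction would be computed with the equation above (the coefficient values here are made up purely for illustration):

```python
import math

def predict(x, b0, b1):
    # y = e^(b0 + b1*x) / (1 + e^(b0 + b1*x)),
    # written in the algebraically identical form 1 / (1 + e^-(b0 + b1*x)).
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

# Made-up coefficients, just to show the call:
print(predict(2.0, b0=-4.0, b1=1.5))  # ~0.27, i.e. leaning towards class 0
```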

Which brings us to two important questions.

How do we determine whether our y is 0 or 1?

The answer is simple: determine a decision boundary.

If our decision boundary is 0.5, computed values of y below 0.5 are classified as 0, and those above 0.5 are classified as 1.
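A minimal sketch of that rule (the function name classify is mine, and I have arbitrarily assigned values exactly on the boundary to class 1):

```python
def classify(y, boundary=0.5):
    # Values below the boundary become class 0, the rest class 1.
    return 1 if y >= boundary else 0

print(classify(0.27))  # 0
print(classify(0.82))  # 1
```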

How are the coefficients learned?

While there are several methods for this, as covered in ‘Let’s Talk Linear Regression’, an important one is stochastic gradient descent.

Given a training example:

1. We initialise our coefficients to 0, and calculate our prediction.

2. We calculate the new coefficient values based on the error in prediction.

3. We repeat until our error drops to a desirable level. (We want our model to be accurate, after all.) A short sketch of this loop follows the list.
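Here is a minimal sketch of that loop, assuming one input variable and a simple error-driven update rule; the learning rate, epoch count, and toy data are made up, and a real implementation would monitor the error rather than run a fixed number of passes:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_sgd(data, learning_rate=0.3, epochs=100):
    b0, b1 = 0.0, 0.0                    # 1. initialise coefficients to 0
    for _ in range(epochs):              # 3. repeat (here: a fixed number of passes)
        for x, y in data:
            yhat = sigmoid(b0 + b1 * x)  # 1. calculate the prediction
            error = y - yhat             # 2. update based on the prediction error
            b0 += learning_rate * error * yhat * (1 - yhat)
            b1 += learning_rate * error * yhat * (1 - yhat) * x
    return b0, b1

# Toy data: (x, y) pairs where y is the 0/1 class label.
data = [(1.0, 0), (2.0, 0), (3.0, 0), (6.0, 1), (7.0, 1), (8.0, 1)]
b0, b1 = train_sgd(data)
print(b0, b1)
```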

I will be skimming over the rigorous math in this article to keep it beginner-friendly.

This article is by no means an exhaustive study of Logistic Regression and is an attempt at simplifying and breaking down the concepts.

For a deeper mathematical understanding, I would highly recommend checking out the scikit-learn documentation and the Python implementation of the algorithm.
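As a starting point, here is a minimal example using scikit-learn's LogisticRegression (the toy data is made up for illustration):

```python
from sklearn.linear_model import LogisticRegression

# Toy dataset: one feature, two classes.
X = [[1.0], [2.0], [3.0], [6.0], [7.0], [8.0]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)

print(model.predict([[4.5]]))        # the predicted class label
print(model.predict_proba([[4.5]]))  # class probabilities from the logistic function
```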

The equation examples and the images in this article are adapted from: https://machinelearningmastery.com/logistic-regression-for-machine-learning/

Translated from: https://medium.com/swlh/lets-talk-logistic-regression-4b2072ad7b4e
