Machine Learning Basics: Logistic Regression


In the previous stories, I explained the programs for implementing various Regression models. As we move on to Classification, it may seem surprising that this algorithm still carries the name Regression. Let us understand the mechanism of Logistic Regression and learn to build a classification model with an example.


Overview of Logistic Regression

Logistic Regression is a classification model used when the dependent variable (output) is binary, such as 0 (False) or 1 (True). Examples include predicting whether a tumor is present (1) or not (0), and whether an email is spam (1) or not (0).


The logistic function, also called the sigmoid function, was initially used by statisticians to describe properties of population growth in ecology. It is a mathematical function used to map predicted values to probabilities. It has an S-shaped curve and can take values between 0 and 1, but never exactly at those limits. Its formula is 1 / (1 + e^-value).

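The sigmoid can be sketched in a few lines of plain Python (a minimal illustration, not part of the original tutorial's code):

```python
import math

def sigmoid(value):
    """The logistic (sigmoid) function: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-value))

# The curve passes through 0.5 at the origin and flattens out toward
# the limits 0 and 1 without ever reaching them.
print(sigmoid(0))    # 0.5 exactly
print(sigmoid(5))    # close to 1
print(sigmoid(-5))   # close to 0
```

Note the symmetry: sigmoid(-x) = 1 - sigmoid(x), which is why the curve is centered on 0.5.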

Logistic Regression is an extension of the Linear Regression model. Let us understand this with a simple example. If we want to classify whether an email is spam, a Linear Regression model would give us only continuous values such as 0.4 or 0.7 rather than a class label. Logistic Regression extends this by passing the output through the sigmoid and setting a threshold at 0.5: a data point is classified as spam if the output value is greater than 0.5 and as not spam if it is less than 0.5.

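The thresholding step itself is simple. As a sketch, with made-up probability values standing in for the model's outputs:

```python
def classify(probability, threshold=0.5):
    """Turn a predicted probability into a class label:
    1 (spam) if the probability exceeds the threshold, else 0 (not spam)."""
    return 1 if probability > threshold else 0

# Hypothetical model outputs for three emails.
for p in (0.4, 0.7, 0.9):
    print(p, '->', classify(p))
```

This is the same rule scikit-learn's `predict()` applies on top of the probabilities that `predict_proba()` would return.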

In this way, we can apply Logistic Regression to classification problems and get accurate predictions.


Problem Analysis

To apply the Logistic Regression model in practice, let us consider a DMV Test dataset which consists of three columns. The first two columns contain the two DMV written test scores (DMV_Test_1 and DMV_Test_2), which are the independent variables, and the last column contains the dependent variable, Results, which denotes whether the driver got the license (1) or not (0).


We have to build a Logistic Regression model on this data to predict whether a driver who has taken the two DMV written tests will get the license, using the marks obtained in those tests, and classify the results.


Step 1: Importing the Libraries

As always, the first step includes importing the libraries: NumPy, Pandas and Matplotlib.


import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

Step 2: Importing the dataset

In this step, we get the dataset from my GitHub repository as "DMVWrittenTests.csv". The variable X stores the two DMV test scores and the variable y stores the final output, Results. dataset.head(5) is used to visualize the first 5 rows of the data.


dataset = pd.read_csv('https://raw.githubusercontent.com/mk-gurucharan/Classification/master/DMVWrittenTests.csv')
X = dataset.iloc[:, [0, 1]].values
y = dataset.iloc[:, 2].values

dataset.head(5)
>>
DMV_Test_1 DMV_Test_2 Results
34.623660 78.024693 0
30.286711 43.894998 0
35.847409 72.902198 0
60.182599 86.308552 1
79.032736 75.344376 1

Step 3: Splitting the dataset into the Training set and Test set

In this step, we split the dataset into the Training set, on which the Logistic Regression model will be trained, and the Test set, on which the trained model is applied to classify the results. Here, test_size=0.25 denotes that 25% of the data is kept as the Test set and the remaining 75% is used for training.


from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)

Step 4: Feature Scaling

This is an additional step used to normalize the data within a particular range. It also helps speed up the calculations. As the data varies widely, we use this function to limit the range of the data to a small interval (roughly -2 to 2). For example, the score 62.0730638 is normalized to -0.21231162 and the score 96.51142588 is normalized to 1.55187648. In this way, the scores of X_train and X_test are normalized to a smaller range. Note that the scaler is fitted on the training set only and then applied to the test set.


from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
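Under the hood, StandardScaler subtracts the column mean and divides by the column standard deviation. A minimal sketch of that formula, with made-up numbers rather than the actual DMV statistics:

```python
def standardize(values, mean, std):
    """z-score normalization, as StandardScaler applies per column:
    z = (x - mean) / std."""
    return [(x - mean) / std for x in values]

# With a hypothetical column mean of 65 and standard deviation of 19,
# raw test scores are mapped into a small range around zero.
print(standardize([34.6, 65.0, 96.5], 65, 19))
```

This is why fit_transform is used on the training set (it computes the mean and std) while transform alone is used on the test set (it reuses them).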

Step 5: Training the Logistic Regression model on the Training Set

In this step, the class LogisticRegression is imported and assigned to the variable "classifier". The classifier.fit() function is fitted with X_train and y_train, on which the model is trained.


from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X_train, y_train)

Step 6: Predicting the Test set results

In this step, the classifier.predict() function is used to predict the values for the Test set, and the values are stored in the variable y_pred.


y_pred = classifier.predict(X_test) 
y_pred

Step 7: Confusion Matrix and Accuracy

This step is commonly used in classification techniques. Here, we compute the accuracy of the trained model and plot the confusion matrix.


The confusion matrix is a table used to show the numbers of correct and incorrect predictions on a classification problem when the real values of the Test set are known. It has the following format:


[[True Negatives (TN)    False Positives (FP)]
 [False Negatives (FN)   True Positives (TP)]]

(rows are the actual values, columns are the predicted values)

The True values (the diagonal entries, TN and TP) are the numbers of correct predictions made.


from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score

cm = confusion_matrix(y_test, y_pred)
print("Accuracy : ", accuracy_score(y_test, y_pred))
cm
>>Accuracy : 0.88
>>array([[11, 0],
        [ 3, 11]])

From the above confusion matrix, we infer that, out of the 25 test-set data points, 22 were correctly classified and 3 were incorrectly classified. Pretty good for a start, isn't it?

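The 88% figure can be verified directly from the confusion matrix, since the correct predictions sit on the diagonal. A quick check:

```python
# Confusion matrix from Step 7: rows are actual values, columns predicted.
cm = [[11, 0],   # actual 0: 11 true negatives, 0 false positives
      [3, 11]]   # actual 1: 3 false negatives, 11 true positives

correct = cm[0][0] + cm[1][1]          # diagonal entries: 11 + 11 = 22
total = sum(sum(row) for row in cm)    # all 25 test points
print(correct, '/', total, '=', correct / total)  # 22 / 25 = 0.88
```

This is exactly what accuracy_score computes from y_test and y_pred.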

Step 8: Comparing the Real Values with the Predicted Values

In this step, a Pandas DataFrame is created to compare the classified values of both the original Test set (y_test) and the predicted results (y_pred).


df = pd.DataFrame({'Real Values':y_test, 'Predicted Values':y_pred})
df
>>
Real Values Predicted Values
1 1
0 0
0 0
0 0
1 1
1 1
1 0
1 1
0 0
1 1
0 0
0 0
0 0
1 1
1 0
1 1
0 0
1 1
1 0
1 1
0 0
0 0
1 1
1 1
0 0

Though this visualization may not be as useful as it was with Regression, it shows that the model classifies the test-set values with a decent accuracy of 88%, as calculated above.


Step 9: Visualising the Results

In this last step, we visualize the results of the Logistic Regression model on a graph, plotted along with the two classification regions.


from matplotlib.colors import ListedColormap

X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
                     np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Logistic Regression')
plt.xlabel('DMV_Test_1')
plt.ylabel('DMV_Test_2')
plt.legend()
plt.show()
(Figure: Logistic Regression decision regions with the test-set points)

In this graph, the value 0 (i.e., No) is plotted in red and the value 1 (i.e., Yes) is plotted in green, following the colormap used in the code above. The Logistic Regression line separates the two regions. Thus, any data point with the two given scores (DMV_Test_1 and DMV_Test_2) can be plotted on the graph and, depending on which region it falls in, the result (getting the driver's license) can be classified as Yes or No.

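The straight line separating the two regions comes from the linear score inside the sigmoid: the boundary is where w1*x1 + w2*x2 + b = 0, i.e., where the predicted probability is exactly 0.5. A sketch with hypothetical coefficients (the real ones would be read from classifier.coef_ and classifier.intercept_ after fitting):

```python
import math

# Hypothetical fitted parameters, for illustration only.
w1, w2, b = 3.0, 2.5, 0.5

def predict_label(x1, x2):
    """Classify a (scaled) pair of test scores: label 1 when the sigmoid
    of the linear score exceeds 0.5, which happens exactly when the
    score itself is positive."""
    z = w1 * x1 + w2 * x2 + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) > 0.5 else 0

def boundary_x2(x1):
    """The decision line: solve w1*x1 + w2*x2 + b = 0 for x2."""
    return -(w1 * x1 + b) / w2

# Points just above the line classify as 1, just below as 0.
x1 = 0.4
print(predict_label(x1, boundary_x2(x1) + 0.1))  # 1
print(predict_label(x1, boundary_x2(x1) - 0.1))  # 0
```

Because the sigmoid is monotonic, the probability-0.5 contour of a logistic model in two features is always a straight line, which is exactly what the plot shows.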

As calculated above, we can see that there are three values in the test set that are wrongly classified as “No” as they are on the other side of the line.



Conclusion

Thus, in this story, we successfully built a Logistic Regression model that can predict whether a person gets the driving license based on their written examination scores, and visualized the results.


I am also attaching the link to my GitHub repository where you can download this Google Colab notebook and the data files for your reference.


You can also find the explanation of the program for other Classification models below:


  • Logistic Regression
  • K-Nearest Neighbors (KNN) Classification (Coming Soon)
  • Support Vector Machine (SVM) Classification (Coming Soon)
  • Naive Bayes Classification (Coming Soon)
  • Random Forest Classification (Coming Soon)

We will come across more complex models of Regression, Classification and Clustering in the upcoming articles. Till then, Happy Machine Learning!


Translated from: https://towardsdatascience.com/machine-learning-basics-logistic-regression-890ef5e3a272
