Day 3 of machine learning. Here is a summary of today's topic, logistic regression: an algorithm with "regression" in its name that is actually used to solve classification problems.
1. Datasets
The two datasets provided by the teacher.
2. Code framework
- Reading the data
data = pd.read_csv("data1.data",header=None,index_col=False)
columns = np.array(['a','b','category'])
data.rename(columns=dict(list(zip(np.arange(3),columns))),inplace=True)
x = data[columns[:-1]]  # the two feature columns
y = data[columns[-1]]   # the class label
- Splitting the dataset
x, x_test, y, y_test = train_test_split(x, y, train_size=0.7)
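As written, the split is random on every run, so the reported accuracies change between runs. A minimal sketch on made-up toy data (not the teacher's datasets) of how `random_state` makes the split reproducible and `stratify` keeps the class ratio balanced:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy data: 10 samples with 2 features, labels split 5/5 between two classes
X_demo = np.arange(20).reshape(10, 2)
y_demo = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# random_state fixes the shuffle; stratify=y_demo preserves the 50/50
# class ratio in both subsets as closely as the sizes allow
Xtr, Xte, ytr, yte = train_test_split(
    X_demo, y_demo, train_size=0.7, random_state=42, stratify=y_demo)
print(len(Xtr), len(Xte))  # 7 3
```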
3. Fitting the model
regr = Pipeline([('poly', PolynomialFeatures(degree=2, interaction_only=False)),
                 ('clf', LogisticRegression())])
regr.fit(x,y)
y_hat = regr.predict(x)
print('Training accuracy:', metrics.accuracy_score(y, y_hat))
y_test_hat = regr.predict(x_test)
print('Test accuracy:', metrics.accuracy_score(y_test, y_test_hat))
Note that `degree` in PolynomialFeatures is the degree of the polynomial feature expansion, which determines the degree of the decision boundary the logistic regression can learn. It can be adjusted to fit the data; this is revisited below.
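To make the expansion concrete, here is a minimal sketch on a single made-up sample (a=2, b=3): with degree=2 the two features become six columns, 1, a, b, a², a·b, b², and the logistic regression then fits a linear model over these, giving a quadratic boundary in the original (a, b) plane.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# One toy sample with two features: a=2, b=3
X = np.array([[2.0, 3.0]])

poly = PolynomialFeatures(degree=2, interaction_only=False)
X_poly = poly.fit_transform(X)

# Columns in order: 1, a, b, a^2, a*b, b^2
print(X_poly)  # [[1. 2. 3. 4. 6. 9.]]
```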
4. Plotting
N, M = 500, 500 # number of sample points along each axis
x1_min = x['a'].min()
x1_max = x['a'].max()
x2_min = x['b'].min()
x2_max = x['b'].max()
cm_light = mpl.colors.ListedColormap(['#77E0A0', '#FF8080'])
cm_dark = mpl.colors.ListedColormap(['g', 'r'])
t1 = np.linspace(x1_min, x1_max, N)
t2 = np.linspace(x2_min, x2_max, M)
x1, x2 = np.meshgrid(t1, t2) # build the grid of sample points
x_show = np.stack((x1.flat, x2.flat), axis=1) # grid points as (N*M, 2) test inputs
y_hat = regr.predict(x_show) # predicted class for every grid point
y_hat = y_hat.reshape(x1.shape) # reshape back to the grid's shape
plt.figure(facecolor='w')
plt.pcolormesh(x1, x2, y_hat, cmap=cm_light, shading='auto') # show the predicted regions
plt.scatter(x['a'], x['b'], s=30, c=y, edgecolors='k', cmap=cm_dark) # show the samples
x1_label, x2_label = 'a', 'b'
plt.xlabel(x1_label, fontsize=12)
plt.ylabel(x2_label, fontsize=12)
plt.xlim(x1_min, x1_max)
plt.ylim(x2_min, x2_max)
plt.grid(True, ls=':', color='k')  # pass visibility positionally; the b= keyword was removed in newer Matplotlib
patches = [mpatches.Patch(color='#77E0A0', label='unknown_a'),
           mpatches.Patch(color='#FF8080', label='unknown_b')]
plt.legend(handles=patches, fancybox=True, framealpha=0.8, loc='lower right')
plt.title('test', fontsize=15)
plt.show()
Full code
#!/usr/bin/env python
#-*- coding:utf-8 -*-
# author:caipeng
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures,StandardScaler
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import warnings
warnings.filterwarnings("ignore")
data = pd.read_csv("data1.data",header=None,index_col=False)
columns = np.array(['a','b','category'])
data.rename(columns=dict(list(zip(np.arange(3),columns))),inplace=True)
x = data[columns[:-1]]  # the two feature columns
y = data[columns[-1]]   # the class label
#print(x)
x, x_test, y, y_test = train_test_split(x, y, train_size=0.7)
regr = Pipeline([('poly', PolynomialFeatures(degree=2, interaction_only=False)),
                 ('clf', LogisticRegression())])
regr.fit(x,y)
y_hat = regr.predict(x)
print('Training accuracy:', metrics.accuracy_score(y, y_hat))
y_test_hat = regr.predict(x_test)
print('Test accuracy:', metrics.accuracy_score(y_test, y_test_hat))
N, M = 500, 500 # number of sample points along each axis
x1_min = x['a'].min()
x1_max = x['a'].max()
x2_min = x['b'].min()
x2_max = x['b'].max()
cm_light = mpl.colors.ListedColormap(['#77E0A0', '#FF8080'])
cm_dark = mpl.colors.ListedColormap(['g', 'r'])
t1 = np.linspace(x1_min, x1_max, N)
t2 = np.linspace(x2_min, x2_max, M)
x1, x2 = np.meshgrid(t1, t2) # build the grid of sample points
x_show = np.stack((x1.flat, x2.flat), axis=1) # grid points as (N*M, 2) test inputs
y_hat = regr.predict(x_show) # predicted class for every grid point
y_hat = y_hat.reshape(x1.shape) # reshape back to the grid's shape
plt.figure(facecolor='w')
plt.pcolormesh(x1, x2, y_hat, cmap=cm_light, shading='auto') # show the predicted regions
plt.scatter(x['a'], x['b'], s=30, c=y, edgecolors='k', cmap=cm_dark) # show the samples
x1_label, x2_label = 'a', 'b'
plt.xlabel(x1_label, fontsize=12)
plt.ylabel(x2_label, fontsize=12)
plt.xlim(x1_min, x1_max)
plt.ylim(x2_min, x2_max)
plt.grid(True, ls=':', color='k')  # pass visibility positionally; the b= keyword was removed in newer Matplotlib
patches = [mpatches.Patch(color='#77E0A0', label='unknown_a'),
           mpatches.Patch(color='#FF8080', label='unknown_b')]
plt.legend(handles=patches, fancybox=True, framealpha=0.8, loc='lower right')
plt.title('test', fontsize=15)
plt.show()
5. Training results
- Dataset 1
With degree set to 2, the accuracy reaches 100%.
The classification plot is shown below.
- Dataset 2
Dataset 2 needs to be standardized first: add a StandardScaler() step to the pipeline and raise degree to 4.
regr = Pipeline([('sc', StandardScaler()),
                 ('poly', PolynomialFeatures(degree=4, interaction_only=False)),
                 ('clf', LogisticRegression())])
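What the extra StandardScaler step does can be sketched on made-up data (not dataset 2): each feature column is rescaled to zero mean and unit variance, so features on very different scales contribute comparably before the polynomial expansion.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two toy features on very different scales
X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])

scaler = StandardScaler()
X_std = scaler.fit_transform(X)

# Each column now has mean ~0 and unit standard deviation
print(X_std.mean(axis=0))  # ~[0. 0.]
print(X_std.std(axis=0))   # [1. 1.]
```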
The classification plot:
If you need the datasets used in this post, leave your email address in the comments.