Data source: https://www.kaggle.com/uciml/iris
Code sources:
https://www.kaggle.com/jchen2186/machine-learning-with-iris-dataset#Modeling-with-scikit-learn
https://www.kaggle.com/adityabhat24/iris-data-analysis-and-machine-learning-python
The Iris dataset is well suited to newcomers to machine learning. This article walks through the relevant code and offers some thoughts on the machine learning methods involved.
Dataset description: the Iris dataset is a classic dataset that is frequently used as an example in statistical learning and machine learning. It contains 150 records in 3 classes, 50 per class, and each record has 4 features: sepal length, sepal width, petal length, and petal width. From these 4 features we can predict which of the three species (Iris-setosa, Iris-versicolor, Iris-virginica) a flower belongs to.
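Not part of the original notebooks: the same data also ships with scikit-learn, so a minimal sketch for loading it without the Kaggle CSV (assuming scikit-learn >= 0.23 for the as_frame option) looks like this. Note that the column names differ from the CSV used below.
# Minimal sketch: load the same Iris data from scikit-learn instead of the Kaggle CSV
from sklearn.datasets import load_iris
import pandas as pd

iris = load_iris(as_frame=True)                   # Bunch containing a DataFrame when as_frame=True
df = iris.frame                                   # 150 rows, 4 feature columns plus the numeric target
df['Species'] = iris.target_names[iris.target]    # map 0/1/2 back to the species names
print(df.head())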
About seaborn:
- https://blog.csdn.net/qq_34264472/article/details/53814653
- https://blog.csdn.net/wuzlun/article/details/80319394
Getting Data
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(color_codes=True)
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('C:/Users/admin/Desktop/kaggle/kaggle/3.Iris鸢尾花分类/数据/Iris.csv')
dataset.head()
| | Id | SepalLengthCm | SepalWidthCm | PetalLengthCm | PetalWidthCm | Species |
|---|---|---|---|---|---|---|
| 0 | 1 | 5.1 | 3.5 | 1.4 | 0.2 | Iris-setosa |
| 1 | 2 | 4.9 | 3.0 | 1.4 | 0.2 | Iris-setosa |
| 2 | 3 | 4.7 | 3.2 | 1.3 | 0.2 | Iris-setosa |
| 3 | 4 | 4.6 | 3.1 | 1.5 | 0.2 | Iris-setosa |
| 4 | 5 | 5.0 | 3.6 | 1.4 | 0.2 | Iris-setosa |
#drop Id column
dataset = dataset.drop('Id',axis=1)
dataset.head()
| | SepalLengthCm | SepalWidthCm | PetalLengthCm | PetalWidthCm | Species |
|---|---|---|---|---|---|
| 0 | 5.1 | 3.5 | 1.4 | 0.2 | Iris-setosa |
| 1 | 4.9 | 3.0 | 1.4 | 0.2 | Iris-setosa |
| 2 | 4.7 | 3.2 | 1.3 | 0.2 | Iris-setosa |
| 3 | 4.6 | 3.1 | 1.5 | 0.2 | Iris-setosa |
| 4 | 5.0 | 3.6 | 1.4 | 0.2 | Iris-setosa |
Summary of the Dataset
# shape
print(dataset.shape)
(150, 5)
# more info on the data
print(dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 150 entries, 0 to 149
Data columns (total 5 columns):
SepalLengthCm 150 non-null float64
SepalWidthCm 150 non-null float64
PetalLengthCm 150 non-null float64
PetalWidthCm 150 non-null float64
Species 150 non-null object
dtypes: float64(4), object(1)
memory usage: 5.3+ KB
None
# descriptions
print(dataset.describe())
SepalLengthCm SepalWidthCm PetalLengthCm PetalWidthCm
count 150.000000 150.000000 150.000000 150.000000
mean 5.843333 3.054000 3.758667 1.198667
std 0.828066 0.433594 1.764420 0.763161
min 4.300000 2.000000 1.000000 0.100000
25% 5.100000 2.800000 1.600000 0.300000
50% 5.800000 3.000000 4.350000 1.300000
75% 6.400000 3.300000 5.100000 1.800000
max 7.900000 4.400000 6.900000 2.500000
# class distribution
print(dataset.groupby('Species').size())
Species
Iris-setosa 50
Iris-versicolor 50
Iris-virginica 50
dtype: int64
Visualizations
sharex, sharey: control whether subplots share axis scales. When set to True or 'all', all subplots share their tick scales; when set to False or 'none', each subplot keeps its own independent scale; when set to 'row' or 'col', subplots share the x/y scales along each row or column. With sharex='col', for example, x tick labels are only drawn on the bottom axes of each column.
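As a quick, hedged illustration of the sharex='col' behaviour described above (this sketch is not from the original notebooks):
# Minimal sketch: each column of subplots shares one x scale
import numpy as np
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 2, sharex='col')     # columns share x; rows stay independent
x = np.linspace(0, 10, 100)
axes[0, 0].plot(x, np.sin(x))
axes[1, 0].plot(x, np.cos(x))      # only this bottom axes shows x tick labels for column 0
axes[0, 1].plot(x * 10, np.sin(x))
axes[1, 1].plot(x * 10, np.cos(x))
plt.show()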
# box and whisker plots
dataset.plot(kind='box', sharex=False, sharey=False)
# histograms
dataset.hist(edgecolor='black', linewidth=1.2)
# boxplot on each feature split out by species
dataset.boxplot(by="Species",figsize=(10,10))
# violinplots on petal-length for each species
sns.violinplot(data=dataset,x="Species", y="PetalLengthCm")
g = sns.violinplot(y='Species', x='SepalLengthCm', data=dataset, inner='quartile')
plt.show()
g = sns.violinplot(y='Species', x='SepalWidthCm', data=dataset, inner='quartile')
plt.show()
g = sns.violinplot(y='Species', x='PetalLengthCm', data=dataset, inner='quartile')
plt.show()
g = sns.violinplot(y='Species', x='PetalWidthCm', data=dataset, inner='quartile')
plt.show()
Without grouping: a scatter-matrix plot of the variables against each other.
from pandas.plotting import scatter_matrix
# scatter plot matrix
scatter_matrix(dataset,figsize=(10,10))
plt.show()
Grouped by species: the same scatter matrix as a seaborn pairplot. diag_kind="kde" draws KDE density curves on the diagonal; without it, the diagonal defaults to histograms.
# updating the diagonal elements in a pairplot to show a kde
sns.pairplot(dataset, hue="Species",diag_kind="kde",markers='+')
Applying different Classification models
# Importing metrics for evaluation
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
The data can be passed either as DataFrames or as NumPy arrays.
# Separating the data into independent and dependent variables
# X = dataset.iloc[:, :-1].values  # first four columns, as a NumPy array
# y = dataset.iloc[:, -1].values   # last column, as a NumPy array
X = dataset[['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm']]
y = dataset['Species']
The train_test_split function randomly splits the data into a training subset and a test subset, returning the training/test samples together with the corresponding training/test labels.
Usage:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
Parameters:
test_size: a float between 0 and 1 gives the proportion of samples placed in the test set; an integer gives the absolute number of test samples.
random_state: the seed of the random number generator.
A random seed is essentially an identifier for a particular random sequence; fixing it guarantees the same random numbers whenever the experiment needs to be repeated.
For example, passing 1 every time (with the other parameters unchanged) yields the same split on every run, whereas leaving random_state as None (the default) gives a different split each time.
The random numbers are determined by the seed, following two rules:
different seeds produce different random numbers; the same seed produces the same random numbers, even across different instances.
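A small sketch (not from the original notebooks) illustrating that a fixed random_state reproduces the same split while the default does not:
# Minimal sketch: fixed random_state gives reproducible splits
from sklearn.model_selection import train_test_split
import numpy as np

data = np.arange(20).reshape(10, 2)
labels = np.arange(10)

a, _, _, _ = train_test_split(data, labels, test_size=0.3, random_state=1)
b, _, _, _ = train_test_split(data, labels, test_size=0.3, random_state=1)
print((a == b).all())   # True: identical splits with the same seed

c, _, _, _ = train_test_split(data, labels, test_size=0.3, random_state=None)
d, _, _, _ = train_test_split(data, labels, test_size=0.3, random_state=None)
print((c == d).all())   # usually False: splits differ without a fixed seed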
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 1)
Logistic Regression
# LogisticRegression
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
from sklearn.metrics import accuracy_score
print('accuracy is', accuracy_score(y_test, y_pred))
precision recall f1-score support
Iris-setosa 1.00 1.00 1.00 11
Iris-versicolor 1.00 0.62 0.76 13
Iris-virginica 0.55 1.00 0.71 6
accuracy 0.83 30
macro avg 0.85 0.87 0.82 30
weighted avg 0.91 0.83 0.84 30
[[11 0 0]
[ 0 8 5]
[ 0 0 6]]
accuracy is 0.8333333333333334
ROC curve reference links:
1 https://blog.csdn.net/qq_36523839/article/details/83758002
2 https://blog.csdn.net/cymy001/article/details/79613787
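Neither reference's code is reproduced here; as a rough, hedged sketch of how one-vs-rest ROC curves could be drawn for this three-class problem (the code below is an illustrative addition, not from the original notebooks):
# Minimal sketch: one-vs-rest ROC curves for the three Iris classes
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc

classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
y_test_bin = label_binarize(y_test, classes=classes)   # shape (n_samples, 3)
y_score = classifier.predict_proba(X_test)             # LogisticRegression exposes predict_proba

plt.figure()
for i, name in enumerate(classes):
    fpr, tpr, _ = roc_curve(y_test_bin[:, i], y_score[:, i])
    plt.plot(fpr, tpr, label='%s (AUC = %.2f)' % (name, auc(fpr, tpr)))
plt.plot([0, 1], [0, 1], linestyle='--')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend()
plt.show()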
Naive Bayes
# Naive Bayes
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
from sklearn.metrics import accuracy_score
print('accuracy is', accuracy_score(y_test, y_pred))
precision recall f1-score support
Iris-setosa 1.00 1.00 1.00 11
Iris-versicolor 1.00 0.92 0.96 13
Iris-virginica 0.86 1.00 0.92 6
accuracy 0.97 30
macro avg 0.95 0.97 0.96 30
weighted avg 0.97 0.97 0.97 30
[[11 0 0]
[ 0 12 1]
[ 0 0 6]]
accuracy is 0.9666666666666667
Support Vector Machine
# Support Vector Machines
from sklearn.svm import SVC
classifier = SVC()
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
from sklearn.metrics import accuracy_score
print('accuracy is', accuracy_score(y_test, y_pred))
precision recall f1-score support
Iris-setosa 1.00 1.00 1.00 11
Iris-versicolor 1.00 0.92 0.96 13
Iris-virginica 0.86 1.00 0.92 6
accuracy 0.97 30
macro avg 0.95 0.97 0.96 30
weighted avg 0.97 0.97 0.97 30
[[11 0 0]
[ 0 12 1]
[ 0 0 6]]
accuracy is 0.9666666666666667
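SVC is used above with its default hyperparameters. A hedged sketch of how C, gamma and the kernel could be tuned with a cross-validated grid search (the parameter grid is an illustrative assumption, not from the original notebooks):
# Minimal sketch: tune SVC hyperparameters with GridSearchCV
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    'C': [0.1, 1, 10],
    'gamma': ['scale', 0.1, 1],
    'kernel': ['rbf', 'linear'],
}
grid = GridSearchCV(SVC(), param_grid, cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_)
print('test accuracy:', grid.score(X_test, y_test))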
K-Nearest Neighbors
Using the train/test split, with k set to 8.
# K-Nearest Neighbours
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=8)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
from sklearn.metrics import accuracy_score
print('accuracy is', accuracy_score(y_test, y_pred))
precision recall f1-score support
Iris-setosa 1.00 1.00 1.00 11
Iris-versicolor 1.00 1.00 1.00 13
Iris-virginica 1.00 1.00 1.00 6
accuracy 1.00 30
macro avg 1.00 1.00 1.00 30
weighted avg 1.00 1.00 1.00 30
[[11 0 0]
[ 0 13 0]
[ 0 0 6]]
accuracy is 1.0
Without splitting the dataset: compare classification accuracy for different values of k. Note that the loop below fits and scores on the same data, so it measures training accuracy (k=1 trivially reaches 1.0); a cross-validated version is sketched after the plot code.
# experimenting with different n values
k_range = list(range(1,26))
scores = []
for k in k_range:
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X, y)
    y_pred = knn.predict(X)
    scores.append(accuracy_score(y, y_pred))
plt.plot(k_range, scores)
plt.xlabel('Value of k for KNN')
plt.ylabel('Accuracy Score')
plt.title('Accuracy Scores for Values of k of k-Nearest-Neighbors')
plt.show()
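Because the loop above scores on the data it was fitted on, the curve shows training accuracy. A hedged sketch of the same comparison with 10-fold cross-validation (an addition to the original code) gives a less optimistic picture:
# Minimal sketch: compare k values with cross-validated accuracy instead of training accuracy
from sklearn.model_selection import cross_val_score

cv_scores = []
for k in k_range:
    knn = KNeighborsClassifier(n_neighbors=k)
    cv_scores.append(cross_val_score(knn, X, y, cv=10, scoring='accuracy').mean())

plt.plot(k_range, cv_scores)
plt.xlabel('Value of k for KNN')
plt.ylabel('Cross-validated accuracy')
plt.show()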
Decision Tree
# Decision Trees
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier()
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
# Summary of the predictions made by the classifier
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Accuracy score
from sklearn.metrics import accuracy_score
print('accuracy is', accuracy_score(y_test, y_pred))
precision recall f1-score support
Iris-setosa 1.00 1.00 1.00 11
Iris-versicolor 1.00 0.92 0.96 13
Iris-virginica 0.86 1.00 0.92 6
accuracy 0.97 30
macro avg 0.95 0.97 0.96 30
weighted avg 0.97 0.97 0.97 30
[[11 0 0]
[ 0 12 1]
[ 0 0 6]]
accuracy is 0.9666666666666667
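Decision trees are easy to inspect directly. A hedged sketch of visualizing the fitted tree with scikit-learn's plot_tree (available in scikit-learn 0.21+; not part of the original notebooks):
# Minimal sketch: visualize the fitted decision tree
from sklearn.tree import plot_tree

plt.figure(figsize=(12, 8))
plot_tree(classifier,
          feature_names=['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'],
          class_names=list(classifier.classes_),
          filled=True)
plt.show()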