Machine Learning: Majority Voting with MajorityVotingClassifier

Section I: Code Bundle and Result Analyses

Part 1: Performance Comparison of Three Classification Algorithms (Pipeline)

Code

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")

plt.rcParams['figure.dpi']=200
plt.rcParams['savefig.dpi']=200
font = {
   'family': 'Times New Roman',
        'weight': 'light'}
plt.rc("font", **font)

#Section 1: Load data and split data into train/test datasets
iris=datasets.load_iris()
X,y=iris.data[50:,[1,2]],iris.target[50:]
le=LabelEncoder()
y=le.fit_transform(y)
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.5,random_state=1,stratify=y)

#Section 2: Model performance among different classifiers
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline,Pipeline
import numpy as np

clf1=LogisticRegression(penalty='l2',
                        C=0.001,
                        random_state=1)
clf2=DecisionTreeClassifier(max_depth=1,
                            criterion='entropy',
                            random_state=1)
clf3=KNeighborsClassifier(n_neighbors=1,
                          p=2,
                          metric='minkowski')

pipe1=Pipeline([("sc",StandardScaler()),("clf",clf1)])
pipe3=Pipeline([("sc",StandardScaler()),("clf",clf3)])
clf_labels=["Logistic Regression","Decision Tree","KNN"]

print("10-fold Cross Validation:")
for clf,label in zip([pipe1,clf2,pipe3],clf_labels):
    scores=cross_val_score(estimator=clf,
                           X=X_train,
                           y=y_train,
                           cv=10,
                           scoring="roc_auc")
    print("ROC AUC: %.2f (+/- %.2f) [%s]" % (scores.mean(),scores.std(),label))

Results

10-fold Cross Validation:
ROC AUC: 0.87 (+/- 0.17) [Logistic Regression]
ROC AUC: 0.89 (+/- 0.16) [Decision Tree]
ROC AUC: 0.88 (+/- 0.15) [KNN]
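The Pipeline wrapper matters here: when cross_val_score clones pipe1 or pipe3, StandardScaler is refit on each training fold only, so the held-out fold never leaks into the scaling statistics. A minimal sketch of the same idea using make_pipeline (which is imported above but not used; it generates step names automatically, and the hyperparameters mirror clf1):

```python
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder, StandardScaler

iris = datasets.load_iris()
X = iris.data[50:, [1, 2]]
y = LabelEncoder().fit_transform(iris.target[50:])  # classes 1/2 -> 0/1

# make_pipeline auto-names the steps ('standardscaler', 'logisticregression')
pipe = make_pipeline(StandardScaler(),
                     LogisticRegression(penalty='l2', C=0.001, random_state=1))

# The scaler is refit inside every CV split, on that split's training part only
scores = cross_val_score(pipe, X, y, cv=10, scoring="roc_auc")
print("ROC AUC: %.2f (+/- %.2f)" % (scores.mean(), scores.std()))
```

This is why pipe1 and pipe3 (and not bare clf1/clf3) are passed to cross_val_score, while the tree classifier clf2 is scale-invariant and needs no scaler.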

Part 2: Majority Voting

Code

#Section 3: Combine individual classifiers via majority voting
from sklearn.ensemble import VotingClassifier

"""Return class labels or probabilities for X for each estimator.
probabilities_or_labels
    If `voting='soft'` and `flatten_transform=True`:
        returns array-like of shape (n_classifiers, n_samples *
        n_classes), being class probabilities calculated by each
        classifier.
    If `voting='soft'` and `flatten_transform=False`:
        array-like of shape (n_classifiers, n_samples, n_classes)
    If `voting='hard'`:
        array-like of shape (n_samples, n_classifiers), being
        class labels predicted by each classifier.
"""
mv_clf=VotingClassifier(estimators=[('pipe1',pipe1),('clf2',clf2),('pipe3',pipe3)],
                        voting='soft')

clf_labels+=['Majority Voting']
all_clf=[pipe1,clf2,pipe3,mv_clf]

for clf,label in zip(all_clf,clf_labels):
    scores=cross_val_score(estimator=clf,
                           X=X_train,
                           y=y_train,
                           cv=10,
                           scoring="roc_auc")
    print("ROC AUC: %.2f (+/- %.2f) [%s]" % (scores.mean(),scores.std(),label))

Results

ROC AUC: 0.87 (+/- 0.17) [Logistic Regression]
ROC AUC: 0.89 (+/- 0.16) [Decision Tree]
ROC AUC: 0.88 (+/- 0.15) [KNN]
ROC AUC: 0.94 (+/- 0.13) [Majority Voting]

Comparing these results shows that the majority-voting classifier beats every individual classifier, with a higher mean ROC AUC and a smaller spread; in other words, it is more robust.
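The transform shapes quoted in the docstring above can be checked directly on this dataset. A small sketch (fit on the full 100-sample subset, purely to inspect shapes; estimator names 'lr'/'dt' are illustrative):

```python
from sklearn import datasets
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

iris = datasets.load_iris()
X, y = iris.data[50:, [1, 2]], iris.target[50:]  # 100 samples, 2 classes

estimators = [('lr', LogisticRegression(random_state=1)),
              ('dt', DecisionTreeClassifier(max_depth=1, random_state=1))]

soft = VotingClassifier(estimators=estimators, voting='soft',
                        flatten_transform=False).fit(X, y)
hard = VotingClassifier(estimators=estimators, voting='hard').fit(X, y)

# Soft voting: per-classifier class probabilities
print(soft.transform(X).shape)  # (n_classifiers, n_samples, n_classes) = (2, 100, 2)
# Hard voting: per-classifier predicted labels
print(hard.transform(X).shape)  # (n_samples, n_classifiers) = (100, 2)
```

With voting='soft', the ensemble averages these per-classifier probabilities and predicts the argmax, which is why all base estimators must implement predict_proba.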

Part 3: ROC Curves

Building on the code from the previous parts, add the following.

Code

#Section 4: Evaluate and tune the ensemble classifier
from sklearn.metrics import roc_curve
from sklearn.metrics import auc

colors=['black','orange','blue','green']
linestyle=[':','--','-.','-']
#Plot a test-set ROC curve for each classifier in all_clf
for clf,label,clr,ls in zip(all_clf,clf_labels,colors,linestyle):
    y_pred=clf.fit(X_train,y_train).predict_proba(X_test)[:,1]
    fpr,tpr,thresholds=roc_curve(y_true=y_test,y_score=y_pred)
    roc_auc=auc(x=fpr,y=tpr)
    plt.plot(fpr,tpr,color=clr,linestyle=ls,label="%s (auc=%.2f)"%(label,roc_auc))
plt.plot([0,1],[0,1],linestyle='--',color='gray',linewidth=2)
plt.legend(loc='lower right')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.show()
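The area plotted for each curve can be cross-checked with sklearn.metrics.roc_auc_score, which computes the same quantity directly from the scores. A sketch for the logistic-regression pipeline alone, recreated self-contained with the same split as above:

```python
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelEncoder, StandardScaler

iris = datasets.load_iris()
X = iris.data[50:, [1, 2]]
y = LabelEncoder().fit_transform(iris.target[50:])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=1, stratify=y)

pipe = Pipeline([("sc", StandardScaler()),
                 ("clf", LogisticRegression(penalty='l2', C=0.001, random_state=1))])
y_score = pipe.fit(X_train, y_train).predict_proba(X_test)[:, 1]

# Trapezoidal area under the plotted curve equals the direct computation
fpr, tpr, _ = roc_curve(y_test, y_score)
assert np.isclose(auc(fpr, tpr), roc_auc_score(y_test, y_score))
print("test ROC AUC: %.3f" % roc_auc_score(y_test, y_score))
```

Unlike the 10-fold scores in Parts 1 and 2, this is a single held-out-set estimate, so it will not match the cross-validated means exactly.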