Task 5: Model Fusion

Ways to fuse models:

  1. Averaging:
    a. simple averaging
    b. weighted averaging
  2. Voting:
    a. simple voting
    b. weighted voting
  3. Combination (see the sketch after the averaging code below):
    a. rank fusion
    b. log fusion
  4. Stacking:
    build a multi-layer model: a second-level model is fitted on the first-level models' predictions.
  5. Blending:
    train the base models on part of the data and use their predictions on the held-out part as new features for a second-level model. Blending has only one layer, whereas stacking can have several.
  6. Boosting/bagging (see the sketch at the end of this post)

1 Averaging

# simple average
pre = (pre1 + pre2 + pre3) / 3
# weighted average (the weights should sum to 1)
pre = 0.1*pre1 + 0.3*pre2 + 0.6*pre3
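The rank fusion and log fusion mentioned in item 3 of the overview never get their own section in this post. Here is a minimal sketch, assuming pre1, pre2, pre3 hold probability scores from three models; the arrays below are made-up illustration data:

import numpy as np
from scipy.stats import rankdata

# made-up probability outputs of three models on five test samples
pre1 = np.array([0.2, 0.8, 0.6, 0.4, 0.9])
pre2 = np.array([0.3, 0.7, 0.5, 0.2, 0.8])
pre3 = np.array([0.1, 0.9, 0.7, 0.3, 0.6])

# rank fusion: average the ranks instead of the raw scores, which makes the
# result insensitive to how differently the models are calibrated
pre_rank = np.mean([rankdata(p) for p in (pre1, pre2, pre3)], axis=0) / len(pre1)

# log fusion: average in log space (a geometric mean), which damps
# overconfident predictions near 0 or 1
pre_log = np.exp(np.mean(np.log([pre1, pre2, pre3]), axis=0))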

2 Voting

# simple voting
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

# x_train, y_train, x_test are assumed to be your existing train/test split
clf1 = LogisticRegression(random_state=1)
clf2 = RandomForestClassifier(random_state=1)
clf3 = XGBClassifier(learning_rate=0.1, n_estimators=150, max_depth=4,
                     min_child_weight=2, subsample=0.7, objective='binary:logistic')

vclf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('xgb', clf3)])
vclf = vclf.fit(x_train, y_train)
print(vclf.predict(x_test))
# weighted voting
# same base models as above; pass voting='soft' and weights=[2, 1, 1] to the
# VotingClassifier, where weights scales each base model's contribution
vclf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('xgb', clf3)],
                        voting='soft', weights=[2, 1, 1])
vclf = vclf.fit(x_train, y_train)
print(vclf.predict(x_test))
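Note that VotingClassifier defaults to voting='hard', which takes a majority vote over the predicted class labels; voting='soft' instead averages the predicted class probabilities (weighted by weights), so it requires every base model to implement predict_proba.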

3 Stacking

import warnings
warnings.filterwarnings('ignore')
import itertools
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from mlxtend.classifier import StackingClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from mlxtend.plotting import plot_decision_regions
# use the iris dataset that ships with scikit-learn as the example
iris = datasets.load_iris() 
X, y = iris.data[:, 1:3], iris.target

clf1 = KNeighborsClassifier(n_neighbors=1) 
clf2 = RandomForestClassifier(random_state=1) 
clf3 = GaussianNB() 
lr = LogisticRegression() 
sclf = StackingClassifier(classifiers=[clf1, clf2, clf3],
                          meta_classifier=lr)
labels = ['KNN', 'Random Forest', 'Naive Bayes', 'Stacking Classifier']
clf_list = [clf1, clf2, clf3, sclf]
fig = plt.figure(figsize=(10,8)) 
gs = gridspec.GridSpec(2, 2) 
grid = itertools.product([0,1],repeat=2)
clf_cv_mean = [] 
clf_cv_std = [] 
for clf, label, grd in zip(clf_list, labels, grid):
    scores = cross_val_score(clf, X, y, cv=5, scoring='accuracy')
    print("Accuracy: %.2f (+/- %.2f) [%s]" % (scores.mean(), scores.std(), label))
    clf_cv_mean.append(scores.mean())
    clf_cv_std.append(scores.std())
    clf.fit(X, y)
    plt.subplot(gs[grd[0], grd[1]])
    plot_decision_regions(X=X, y=y, clf=clf)
    plt.title(label)
plt.show()
Accuracy: 0.91 (+/- 0.07) [KNN]
Accuracy: 0.94 (+/- 0.04) [Random Forest]
Accuracy: 0.91 (+/- 0.04) [Naive Bayes]
Accuracy: 0.94 (+/- 0.04) [Stacking Classifier]

[Figure: decision-region plots for KNN, Random Forest, Naive Bayes, and the Stacking Classifier]
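For comparison, scikit-learn (0.22+) ships its own StackingClassifier. Unlike the mlxtend StackingClassifier used above, which fits its base models on the full training set (mlxtend's StackingCVClassifier is the cross-validated variant), the scikit-learn version builds the meta-features from out-of-fold predictions, which reduces leakage into the meta-model. A minimal sketch on the same two iris features, reusing the X, y and base models defined above:

# aliased to avoid clashing with the mlxtend class imported earlier
from sklearn.ensemble import StackingClassifier as SkStacking

# estimators are (name, model) pairs; final_estimator is the meta-model;
# cv=5 means the meta-features are 5-fold out-of-fold predictions
sk_stack = SkStacking(
    estimators=[('knn', KNeighborsClassifier(n_neighbors=1)),
                ('rf', RandomForestClassifier(random_state=1)),
                ('gnb', GaussianNB())],
    final_estimator=LogisticRegression(),
    cv=5)
scores = cross_val_score(sk_stack, X, y, cv=5, scoring='accuracy')
print("Accuracy: %.2f (+/- %.2f) [sklearn Stacking]" % (scores.mean(), scores.std()))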

4 Blending

from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
# reuse the iris dataset loaded above; keep only the first 100 samples
# (classes 0 and 1) so the task is binary
data_0 = iris.data
data = data_0[:100, :]
target_0 = iris.target
target = target_0[:100]
 
# base learners for the fusion
clfs = [LogisticRegression(),
        RandomForestClassifier(),
        ExtraTreesClassifier(),
        GradientBoostingClassifier()]
 
# hold out part of the data as the final test set
X, X_predict, y, y_predict = train_test_split(data, target, test_size=0.3, random_state=914)
# split the remaining training data into two parts, d1 and d2
X_d1, X_d2, y_d1, y_d2 = train_test_split(X, y, test_size=0.5, random_state=914)
dataset_d1 = np.zeros((X_d2.shape[0], len(clfs)))
dataset_d2 = np.zeros((X_predict.shape[0], len(clfs)))
 
for j, clf in enumerate(clfs):
    # train each single model on d1
    clf.fit(X_d1, y_d1)
    y_submission = clf.predict_proba(X_d2)[:, 1]
    dataset_d1[:, j] = y_submission
    # for the test set, use the k models' predicted probabilities directly as new features
    dataset_d2[:, j] = clf.predict_proba(X_predict)[:, 1]
    print("val auc Score: %f" % roc_auc_score(y_predict, dataset_d2[:, j]))
    
# second-level (meta) model that performs the fusion
clf = GradientBoostingClassifier()
clf.fit(dataset_d1, y_d2)
y_submission = clf.predict_proba(dataset_d2)[:, 1]
print("Val auc Score of Blending: %f" % (roc_auc_score(y_predict, y_submission)))
val auc Score: 1.000000
val auc Score: 1.000000
val auc Score: 1.000000
val auc Score: 1.000000
Val auc Score of Blending: 1.000000
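The perfect 1.000000 scores are an artifact of the toy data: the first 100 iris samples are just setosa vs. versicolor, which are linearly separable, so every base model already solves the task. The example shows the mechanics of blending, not a realistic score.

Finally, item 6 of the overview, boosting/bagging, fuses many copies of one base learner inside a single estimator rather than combining heterogeneous models. A minimal sketch with scikit-learn's built-in implementations (the model and parameter choices here are illustrative):

from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# bagging: fit the same base learner on bootstrap resamples, then vote
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=1)
# boosting: fit base learners sequentially, up-weighting the samples the
# previous learners misclassified
boost = AdaBoostClassifier(n_estimators=50, random_state=1)

for name, model in [('Bagging', bag), ('AdaBoost', boost)]:
    scores = cross_val_score(model, X, y, cv=5, scoring='accuracy')
    print("Accuracy: %.2f (+/- %.2f) [%s]" % (scores.mean(), scores.std(), name))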