Stacking is a strategy for ensembling models.
We first discuss a simplified version of stacking, also known as Blending.
The Blending ensemble learning algorithm
- (1) Split the data into a training set, a validation set, and a test set.
- (2) Build multiple models for the first layer (homogeneous or heterogeneous).
- (3) Fit the models from (2) on the training set, then use the fitted models to predict the validation and test sets, obtaining val_predict and test_predict.
- (4) Train the second-layer model on val_predict, using the validation labels as targets.
- (5) Feed test_predict to the trained second-layer model to obtain the final predictions for the whole test set.
Pros: simple to implement.
Cons: only a hold-out subset of the data is used to train the second layer, so part of the data is effectively wasted.
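The five steps above can be condensed into a single helper function. This is a minimal sketch, not the tutorial's code: the function name `blending_fit_predict`, the toy dataset, and the particular base/meta models are all made up for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

def blending_fit_predict(base_models, meta_model,
                         X_train, y_train, X_val, y_val, X_test):
    # Step (3): fit each first-layer model on the training split and
    # collect its class-1 probabilities on the validation and test sets.
    val_meta = np.column_stack(
        [m.fit(X_train, y_train).predict_proba(X_val)[:, 1] for m in base_models])
    test_meta = np.column_stack(
        [m.predict_proba(X_test)[:, 1] for m in base_models])
    # Step (4): train the second-layer model on the validation meta-features.
    meta_model.fit(val_meta, y_val)
    # Step (5): predict the test set from its meta-features.
    return meta_model.predict(test_meta)

# Steps (1)-(2): split the data and pick the first-layer models.
X, y = make_classification(n_samples=500, random_state=0)
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.3, random_state=0)
y_pred = blending_fit_predict(
    [DecisionTreeClassifier(random_state=0), KNeighborsClassifier()],
    LogisticRegression(), X_train, y_train, X_val, y_val, X_test)
```

Note that the test meta-features are built with the same fitted base models, so each test column lines up with the corresponding validation column.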
Case study
Load the required packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use("ggplot")
%matplotlib inline
import seaborn as sns
Create the dataset
from sklearn import datasets
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
data, target = make_blobs(n_samples=10000, centers=2, random_state=1, cluster_std=1.0 )
## Split off the test set
X_train1,X_test,y_train1,y_test = train_test_split(data, target, test_size=0.2, random_state=1)
## Split the remainder into training and validation sets
X_train,X_val,y_train,y_val = train_test_split(X_train1, y_train1, test_size=0.3, random_state=1)
print("The shape of training X:",X_train.shape)
print("The shape of training y:",y_train.shape)
print("The shape of test X:",X_test.shape)
print("The shape of test y:",y_test.shape)
print("The shape of validation X:",X_val.shape)
print("The shape of validation y:",y_val.shape)
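With these proportions, the 10000 samples break down as 5600 / 2400 / 2000. A quick sanity check of the arithmetic (plain Python, not part of the tutorial's code):

```python
n = 10000
n_test = int(n * 0.2)      # test split: 2000
n_rest = n - n_test        # left for training + validation: 8000
n_val = int(n_rest * 0.3)  # validation split: 2400
n_train = n_rest - n_val   # final training split: 5600
print(n_train, n_val, n_test)
```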
Set up the first-layer classifiers
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
clfs = [SVC(probability=True),
        RandomForestClassifier(n_estimators=5, n_jobs=-1, criterion='gini'),
        KNeighborsClassifier()]
Set up the second-layer classifier
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
First layer
# One column of meta-features per first-layer model.
val_features = np.zeros((X_val.shape[0], len(clfs)))
test_features = np.zeros((X_test.shape[0], len(clfs)))

for i, clf in enumerate(clfs):
    clf.fit(X_train, y_train)
    # Use the predicted probability of class 1 as the meta-feature.
    val_feature = clf.predict_proba(X_val)[:, 1]
    test_feature = clf.predict_proba(X_test)[:, 1]
    val_features[:, i] = val_feature
    test_features[:, i] = test_feature
Second layer
lr.fit(val_features,y_val)
Evaluate the predictions
from sklearn.model_selection import cross_val_score
cross_val_score(lr, test_features, y_test, cv=5)
The cross-validation scores are high, which is expected: make_blobs with two well-separated centers produces an easy classification problem.
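One caveat: cross_val_score refits the model inside each fold, and because the second layer is a regressor its default score is R^2, not accuracy. To read the fitted linear model's continuous output as class labels, one option is to threshold at 0.5 (the threshold and the toy meta-features below are our assumptions, not from the tutorial):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Toy stand-ins for val_features / y_val: three probability-like columns
# that roughly track the binary label.
y_val_toy = rng.integers(0, 2, size=200)
val_features_toy = y_val_toy[:, None] * 0.6 + rng.random((200, 3)) * 0.4

lr = LinearRegression().fit(val_features_toy, y_val_toy)
# The regressor outputs continuous scores; threshold at 0.5 for class labels.
labels = (lr.predict(val_features_toy) > 0.5).astype(int)
accuracy = (labels == y_val_toy).mean()
```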
References:
1. DataWhale open-source tutorial materials