The Blending Ensemble Learning Algorithm
The Blending ensemble learning process
(1) Split the data into a training set and a test set (test_set); the training set is then split again into a training set (train_set) and a validation set (val_set);
(2) Create multiple first-layer models, which can be homogeneous or heterogeneous;
(3) Train the models from step (2) on train_set, then use the trained models to predict val_set and test_set, producing val_predict and test_predict1;
(4) Create a second-layer model and train it using val_predict as its training set;
(5) Use the trained second-layer model to predict on test_predict1; this result is the prediction for the whole test set.
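The five steps above can be sketched as a short script. This is a minimal illustration on synthetic data; the dataset, the decision-tree/KNN first layer, and the logistic-regression second layer are illustrative assumptions, not the models used in the example below.

```python
# A minimal sketch of steps (1)-(5), assuming synthetic binary data and an
# illustrative choice of first- and second-layer models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)
# (1) split off test_set, then split the rest into train_set and val_set
X_tr1, X_te, y_tr1, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X_tr1, y_tr1, test_size=0.3, random_state=0)
# (2) first-layer models (heterogeneous here)
layer1 = [DecisionTreeClassifier(random_state=0), KNeighborsClassifier()]
# (3) fit on train_set; predict val_set and test_set -> val_predict, test_predict1
val_predict = np.column_stack([m.fit(X_tr, y_tr).predict_proba(X_va)[:, 1] for m in layer1])
test_predict1 = np.column_stack([m.predict_proba(X_te)[:, 1] for m in layer1])
# (4) second-layer model trained on val_predict
layer2 = LogisticRegression().fit(val_predict, y_va)
# (5) predict on test_predict1 -> final result for the whole test set
acc = layer2.score(test_predict1, y_te)
print("Blending test accuracy:", acc)
```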
Example
# Load the required packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use("ggplot")
%matplotlib inline
import seaborn as sns
# Create the data
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
data,target=make_blobs(n_samples=10000,centers=2,random_state=1,cluster_std=1.0)
data
array([[-3.08389358, 5.70067218],
[-8.80258525, -5.07389013],
[-1.68452735, 5.22511143],
...,
[-8.65168502, -5.58805662],
[-1.41968841, 3.76555241],
[-9.9077506 , -3.42556702]])
# Create the training and test sets
X_train1,X_test,y_train1,y_test = train_test_split(data, target,test_size=0.2,random_state=1)
# Create the training and validation sets
X_train,X_val,y_train,y_val = train_test_split(X_train1, y_train1, test_size=0.3,random_state=1)
print("The shape of training X:",X_train.shape)
print("The shape of training y:",y_train.shape)
print("The shape of test X:",X_test.shape)
print("The shape of test y:",y_test.shape)
print("The shape of validation X:",X_val.shape)
print("The shape of validation y:",y_val.shape)
The shape of training X: (5600, 2)
The shape of training y: (5600,)
The shape of test X: (2000, 2)
The shape of test y: (2000,)
The shape of validation X: (2400, 2)
The shape of validation y: (2400,)
Set up the first-layer classifiers
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
clfs = [SVC(probability=True),
        RandomForestClassifier(n_estimators=100, n_jobs=-1, criterion='gini'),
        KNeighborsClassifier()]
Set up the second-layer classifier
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
### Collect the first layer's predictions on the validation set and test set
val_features=np.zeros((X_val.shape[0],len(clfs)))
test_features=np.zeros((X_test.shape[0],len(clfs)))
for i, clf in enumerate(clfs):
    clf.fit(X_train, y_train)
    val_feature = clf.predict_proba(X_val)[:, 1]
    test_feature = clf.predict_proba(X_test)[:, 1]
    val_features[:, i] = val_feature
    test_features[:, i] = test_feature
Feed the first layer's validation-set predictions into the second layer to train the second-layer classifier
lr.fit(val_features,y_val)
LinearRegression()
Output the results
from sklearn.model_selection import cross_val_score
cross_val_score(lr,test_features,y_test,cv=5)
array([1., 1., 1., 1., 1.])
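For context on this output: since lr is a LinearRegression, cross_val_score falls back to the estimator's default R² scoring, so an array of ones means the meta-features predict the labels (almost) perfectly linearly. A toy sketch with a single assumed meta-feature (not the example's actual data) shows the same effect:

```python
# Sketch: a meta-feature that linearly separates two classes yields R^2 near 1
# under cross_val_score's default scoring for a regressor.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# One meta-feature: predicted probabilities clustered near 0 for class 0
# and near 1 for class 1, mimicking a confident first-layer model.
X = np.vstack([rng.normal(0.1, 0.05, (50, 1)), rng.normal(0.9, 0.05, (50, 1))])
y = np.array([0] * 50 + [1] * 50)
idx = rng.permutation(len(y))  # shuffle so every CV fold mixes both classes
X, y = X[idx], y[idx]
scores = cross_val_score(LinearRegression(), X, y, cv=5)
print(scores)  # each fold's R^2, close to 1
```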
Verification on the Iris dataset
from sklearn.datasets import load_iris
data=load_iris().data
target=load_iris().target
data.shape
(150, 4)
target.shape
(150,)
# Create the training and test sets
X_train1,X_test,y_train1,y_test = train_test_split(data, target,test_size=0.1,random_state=1)
# Create the training and validation sets
X_train,X_val,y_train,y_val = train_test_split(X_train1, y_train1, test_size=0.1,random_state=1)
print("The shape of training X:",X_train.shape)
print("The shape of training y:",y_train.shape)
print("The shape of test X:",X_test.shape)
print("The shape of test y:",y_test.shape)
print("The shape of validation X:",X_val.shape)
print("The shape of validation y:",y_val.shape)
The shape of training X: (121, 4)
The shape of training y: (121,)
The shape of test X: (15, 4)
The shape of test y: (15,)
The shape of validation X: (14, 4)
The shape of validation y: (14,)
Set up the first-layer classifiers
clfs = [SVC(probability=True),
        RandomForestClassifier(n_estimators=100, n_jobs=-1),
        KNeighborsClassifier()]
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
### Collect the first layer's predictions on the validation set and test set
val_features=np.zeros((X_val.shape[0],len(clfs)))
test_features=np.zeros((X_test.shape[0],len(clfs)))
for i, clf in enumerate(clfs):
    clf.fit(X_train, y_train)
    # note: only the probability of class 1 is kept, as in the binary example,
    # even though Iris has three classes
    val_feature = clf.predict_proba(X_val)[:, 1]
    test_feature = clf.predict_proba(X_test)[:, 1]
    val_features[:, i] = val_feature
    test_features[:, i] = test_feature
lr.fit(val_features,y_val)
LinearRegression()
cross_val_score(lr,test_features,y_test,cv=5)
array([-11.99947378, -9.65028823, 0.02669007, 0.08171008,
-1.06964251])
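The poor scores here are partly structural: Iris has three classes, so keeping only predict_proba[:, 1] discards the probabilities of the other two classes, and LinearRegression treats the labels 0/1/2 as a continuous target. A multiclass-friendly variant, sketched below, keeps all class probabilities and swaps the second-layer model to LogisticRegression; both changes are departures from the code above, not the original method.

```python
# Sketch of a multiclass blending variant: stack all class probabilities as
# meta-features and use LogisticRegression (a swap) as the second layer.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

data, target = load_iris(return_X_y=True)
X_train1, X_test, y_train1, y_test = train_test_split(data, target, test_size=0.1, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_train1, y_train1, test_size=0.1, random_state=1)

clfs = [SVC(probability=True, random_state=1),
        RandomForestClassifier(n_estimators=100, random_state=1),
        KNeighborsClassifier()]

# Each base model now contributes all 3 class probabilities (9 meta-features).
val_features = np.hstack([clf.fit(X_train, y_train).predict_proba(X_val) for clf in clfs])
test_features = np.hstack([clf.predict_proba(X_test) for clf in clfs])

meta = LogisticRegression(max_iter=1000).fit(val_features, y_val)
acc = meta.score(test_features, y_test)
print("Blending test accuracy on Iris:", acc)
```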
Summary
There is not much material on blending; it is simpler than the boosting content, but it seems not to work as well as boosting?