Stacking: Basic Idea and a Simple Implementation

The basic idea of the stacking model

Suppose we have 1,000 training samples and 100 test samples. Split the training set into 5 folds (5 is a common choice), 200 samples each. Train a model on four of the folds (800 samples), then predict the remaining fold of 200 samples, and also predict the 100 test samples. After 5 rounds of training, the training set yields exactly 200 × 5 predictions, the same size as the original training set; stacked into one column this is a 1000 × 1 matrix. The test set yields 100 × 5 predictions; averaging the 5 columns gives a 100 × 1 matrix, and the first layer is done. Repeat the same procedure with other models and concatenate the per-model results column-wise: with 3 base models we get a 1000 × 3 matrix and a 100 × 3 matrix. These become the training set and test set of the second-layer model, with the original training labels as the second-layer training labels; train the second layer on them and predict.
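As a quick sanity check of the shape bookkeeping above, here is a minimal numpy sketch (the sizes 1000/100, the 5 folds, and the 3 models are just the numbers from the example; the predictions are stand-in zeros):

import numpy as np

num_train, num_test, n_folds = 1000, 100, 5

# first layer, one base model
oof_train = np.zeros(num_train)                    # out-of-fold predictions, 200 rows filled per fold
oof_test_all_fold = np.zeros((num_test, n_folds))  # one column of test predictions per fold
oof_test = oof_test_all_fold.mean(axis=1)          # average the 5 columns -> (100,)

# second layer: column-stack the per-model results (3 base models assumed)
stacked_train = np.column_stack([oof_train] * 3)   # (1000, 3)
stacked_test = np.column_stack([oof_test] * 3)     # (100, 3)
print(stacked_train.shape, stacked_test.shape)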

A simple implementation

import numpy as np
from sklearn.model_selection import KFold
import pandas as pd
import warnings
warnings.filterwarnings('ignore')

# Parent class implementing the cross-validated (out-of-fold) training scheme
class BasicModel(object):
    def train(self, x_train, y_train, x_val, y_val):
        # to be overridden: fit a model and return (model, validation score)
        pass

    def predict(self, model, x_test):
        # to be overridden: return the trained model's predictions on x_test
        pass

    def mode(self, nums):
        # return the most frequent value in nums (majority vote)
        num_dict = {}
        for i in nums:
            if i in num_dict:
                num_dict[i] += 1
            else:
                num_dict[i] = 1
        return max(num_dict.items(), key=lambda x: x[1])[0]

    def get_oof(self, x_train, y_train, x_test, n_folds=5):
        num_train, num_test = x_train.shape[0], x_test.shape[0]  # sizes of the two sets
        oof_train = np.zeros((num_train,))
        oof_test = []

        oof_test_all_fold = np.zeros((num_test, n_folds))
        scores = []
        # shuffle=True is required when a random_state is passed to KFold
        KF = KFold(n_splits=n_folds, shuffle=True, random_state=0)

        for i, (train_index, val_index) in enumerate(KF.split(x_train)):
            # split the original training set: 4/5 for training, 1/5 for validation
            print('{0} fold, train {1}, val {2}'.format(i, len(train_index), len(val_index)))
            x_tra, y_tra = x_train[train_index], y_train[train_index]
            x_val, y_val = x_train[val_index], y_train[val_index]
            model, score = self.train(x_tra, y_tra, x_val, y_val)  # the subclass's train method
            scores.append(score)
            oof_train[val_index] = self.predict(model, x_val)  # out-of-fold rows: the second-layer training set
            oof_test_all_fold[:, i] = self.predict(model, x_test)

        # For text classification the predicted labels are integers; averaging the
        # fold predictions would distort the second-layer input, so take the mode
        # (majority vote) across the folds as the second-layer test input instead.
        print('oof_test_all_fold')
        print(oof_test_all_fold)
        for item in oof_test_all_fold:
            oof_test.append(self.mode(item))
        print(oof_test)

        print('all validation scores {0}, average {1}'.format(scores, np.mean(scores)))
        return oof_train, np.array(oof_test)
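
# A quick check of the mode helper (hypothetical toy input, for illustration only):
# the label predicted by the most folds wins the majority vote.
demo = BasicModel()
print(demo.mode([1, 0, 1, 1, 2]))  # prints 1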


# Multinomial naive Bayes
from sklearn.naive_bayes import MultinomialNB as mnb
class MNBClassifier(BasicModel):
    def __init__(self):
        self.params = {
            'alpha': 1.0
        }

    def train(self, x_train, y_train, x_val, y_val):
        print('train with mnb model')
        model = mnb(**self.params)  # use the alpha defined in __init__
        model.fit(x_train,y_train)
        score = model.score(x_val, y_val)
        return model, score

    def predict(self, model, x_test):
        print('test with mnb model')
        # print(model.predict(x_test))
        return model.predict(x_test)


# Logistic regression
from sklearn.linear_model import LogisticRegression as lgr
class LGRClassifier(BasicModel):
    def __init__(self):
        self.params = {
            'max_iter': 1000
        }

    def train(self, x_train, y_train, x_val, y_val):
        print('train with lgr model')
        model = lgr(**self.params)
        model.fit(x_train, y_train)
        score = model.score(x_val, y_val)
        return model, score

    def predict(self, model, x_test):
        print('test with lgr model')
        # print(model.predict(x_test))
        return model.predict(x_test)

# Support vector machine
from sklearn.svm import SVC
class SVCClassifier(BasicModel):
    def train(self, x_train, y_train, x_val, y_val):
        print('train with svc model')
        model = SVC()
        model.fit(x_train, y_train)
        score = model.score(x_val, y_val)
        return model, score

    def predict(self, model, x_test):
        print('test with svc model')
        # print(model.predict(x_test))
        return model.predict(x_test)

def doJob(x_train, y_train, x_test, testLabel):
    # First layer: out-of-fold predictions from each base model
    mnb_classifier = MNBClassifier()
    mnb_oof_train, mnb_oof_test = mnb_classifier.get_oof(x_train, y_train, x_test)
    
    lgr_classifier = LGRClassifier()
    lgr_oof_train, lgr_oof_test = lgr_classifier.get_oof(x_train, y_train, x_test)
    
    svc_classifier = SVCClassifier()
    svc_oof_train, svc_oof_test = svc_classifier.get_oof(x_train, y_train, x_test)
    
    # Column-stack the per-model results to build the second-layer inputs
    input_train = [mnb_oof_train, lgr_oof_train, svc_oof_train]
    input_test = [mnb_oof_test, lgr_oof_test, svc_oof_test]

    stacked_train = np.concatenate([f.reshape(-1, 1) for f in input_train], axis=1)
    stacked_test = np.concatenate([f.reshape(-1, 1) for f in input_test], axis=1)
    
    # Second-layer (meta) model
    from sklearn import metrics
    import lightgbm as lgb
    # the labels are integer classes, so a classifier (not a regressor) is used here
    final_model = lgb.LGBMClassifier(num_leaves=31, learning_rate=0.05, n_estimators=20)

    final_model.fit(stacked_train, y_train)
    test_prediction = final_model.predict(stacked_test)
    print('test_prediction', '\n', test_prediction)
    print(metrics.f1_score(testLabel, test_prediction, average='macro'))
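
The call below is a hypothetical usage sketch, not part of the original post: the 20 newsgroups dataset, the two categories, and the TfidfVectorizer settings are all assumptions, shown only to illustrate the inputs doJob expects (row-indexable feature matrices plus integer label arrays).

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer

# assumed example data: two newsgroup categories as a small text classification task
train = fetch_20newsgroups(subset='train', categories=['sci.space', 'rec.autos'])
test = fetch_20newsgroups(subset='test', categories=['sci.space', 'rec.autos'])

vectorizer = TfidfVectorizer(max_features=2000)
x_train = vectorizer.fit_transform(train.data)  # sparse matrix; KFold row indexing works on it
x_test = vectorizer.transform(test.data)

doJob(x_train, train.target, x_test, test.target)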

Reference: stacking基本思想与代码实现
