[Machine Learning Competition + Notes] Industrial Steam Volume Prediction: Model Fusion (Part 7)


Related reading:

Competition page: Industrial Steam Volume Prediction (Learning Competition, Tianchi)

7 Model Fusion

7.1 Model Optimization

Optimization can generally proceed along the following lines:

  • Study the model's learning curves to judge whether it is overfitting or underfitting, and adjust accordingly.
  • Analyze the model's weight parameters: features with notably high or low absolute weights can be engineered in finer detail or combined into new features.
  • Run bad-case analysis: inspect the mispredicted examples to find what can still be corrected or mined.
  • Apply model fusion.

7.1.1 Model Learning Curves
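
A minimal sketch of inspecting a learning curve with scikit-learn (X_train and y_train are assumed to hold the prepared features and target from the earlier chapters; the Ridge model is an illustrative choice):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.model_selection import learning_curve

# Train/validation error at increasing training-set sizes (5-fold CV)
train_sizes, train_scores, valid_scores = learning_curve(
    Ridge(alpha=1.0), X_train, y_train,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
    scoring='neg_mean_squared_error')

# A large, persistent gap between the curves suggests overfitting;
# two high, converging error curves suggest underfitting.
plt.plot(train_sizes, -train_scores.mean(axis=1), 'o-', label='train')
plt.plot(train_sizes, -valid_scores.mean(axis=1), 'o-', label='validation')
plt.xlabel('training set size')
plt.ylabel('MSE')
plt.legend()
plt.show()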

7.1.2 Model Fusion Techniques

First train a set of individual learners, then combine them with some strategy to strengthen the overall model. Two families exist:

  • Parallel methods, where the individual learners have no strong dependencies and can be generated simultaneously; representatives are Bagging and random forests.
  • Sequential methods, where strong dependencies between the individual learners force them to be generated one after another; the representative is Boosting.
1. Bagging and random forests

Bagging samples a sub-training set from the training data for each base model, then aggregates the predictions of all base models into the final prediction.

Bagging uses bootstrap sampling: given an original dataset of m samples, one sample is drawn at random and added to the sampling set, the sample is then put back, and the draw repeats, i.e. sampling with replacement.
Random forests improve on Bagging in two ways, as the sketch after this list shows:

  • The base learner is restricted to a decision tree.
  • Perturbation is added not only to the samples, as in Bagging, but also to the features.
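
A minimal sketch of both with scikit-learn (X_train/y_train assumed as above):

from sklearn.ensemble import BaggingRegressor, RandomForestRegressor

# Bagging: bootstrap-sampled training sets, decision-tree base learner by default
bag = BaggingRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Random forest: adds random feature subsets at each split on top of bootstrap sampling
rf = RandomForestRegressor(n_estimators=100, max_features='sqrt', random_state=0).fit(X_train, y_train)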
2. Boosting

Boosting trains its base models one after another in sequence: each base model's training set is transformed by some strategy at every round, and all of the predictions are finally combined linearly.

(1) AdaBoost

AdaBoost is the binary classification algorithm obtained when the model is additive, the loss function is exponential, and the learning algorithm is the forward stagewise algorithm.

(2) Boosting tree

A boosting tree (Boosting Tree) is an additive model learned by the forward stagewise algorithm, with the base learner restricted to a decision tree.
For binary classification the loss function is the exponential loss; for regression it is the squared error.

(3) Gradient boosting tree

The gradient boosting tree (Gradient Boosting Tree) improves on the boosting tree: the boosting tree is only convenient when the loss is the exponential or squared-error loss, while for a general loss function the negative gradient of the loss evaluated at the current model is used as an approximation of the residual.
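
A minimal scikit-learn sketch (the hyperparameters here are illustrative):

from sklearn.ensemble import GradientBoostingRegressor

# With a general loss such as 'huber', each tree is fit to the negative
# gradient of the loss at the current model, used as an approximate residual
gbdt = GradientBoostingRegressor(loss='huber', learning_rate=0.03,
                                 n_estimators=300, max_depth=3, random_state=0)
gbdt.fit(X_train, y_train)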

7.1.3 Strategies for Fusing Predictions

1. Voting

Voting comes in two flavors, hard and soft; it follows the majority-rule idea and is used for classification problems.

  • Hard voting: the models vote directly, and the class with the most votes is the final prediction.
  • Soft voting: different models receive different weights to reflect their importance, and the weighted class probabilities decide the prediction.
2. Soft voting example
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from mlxtend.classifier import EnsembleVoteClassifier

clf1 = LogisticRegression(random_state=0, solver='lbfgs', multi_class='auto')
clf2 = RandomForestClassifier(random_state=0, n_estimators=100)
clf3 = SVC(random_state=0, probability=True, gamma='auto')  # probability=True is required for soft voting
# logistic regression gets twice the weight of the other two models
eclf = EnsembleVoteClassifier(clfs=[clf1, clf2, clf3], weights=[2, 1, 1], voting='soft')
eclf.fit(X, y)  # X, y: a classification dataset
3. Averaging and Ranking
  • Averaging takes the (weighted) mean of the model outputs as the final prediction. One caveat: different regression methods can produce predictions with very different spreads, and the low-variance results then contribute little to the fused prediction.
  • Ranking averages the ranks instead: with weights, the weighted sum of the n models' ranks gives the final result. Both strategies are sketched below.
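
A minimal sketch of both (pred1/pred2/pred3 are assumed to be numpy arrays of predictions from three models on the same samples):

import numpy as np
import pandas as pd

preds = np.vstack([pred1, pred2, pred3])
weights = np.array([0.5, 0.3, 0.2])

# Weighted averaging of the raw predictions
avg_pred = weights @ preds

# Rank averaging: replace each model's predictions by their ranks first,
# which removes the effect of different prediction scales and spreads
ranks = np.vstack([pd.Series(p).rank().values for p in preds])
rank_pred = weights @ ranks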
4. Blending

Split the original training set into two parts, say 70% and 30%.
In the first layer, train several models on the 70% part and predict the labels of the 30% part. In the second layer, train a new model on the 30% part, using the first-layer predictions as new features (see the sketch after the pros and cons).

  • Pros: simpler than Stacking, since no k-fold cross-validation is needed to build the stacker features, which sidesteps some information-leakage problems.
  • Cons: very little data is used; the second-stage blender sees only a small fraction of the training set (the 30% part in the split above) and may overfit.
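
A minimal blending sketch under the 70%/30% split described above (the base models are illustrative choices, and X_test is assumed to be the competition test set):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge, LinearRegression
from sklearn.ensemble import RandomForestRegressor

X_a, X_b, y_a, y_b = train_test_split(X_train, y_train, test_size=0.3, random_state=0)

# Layer 1: train base models on the 70% part, predict the held-out 30%
base_models = [Ridge(alpha=1.0), RandomForestRegressor(n_estimators=100, random_state=0)]
blend_feats = np.column_stack([m.fit(X_a, y_a).predict(X_b) for m in base_models])

# Layer 2: the blender trains on those held-out predictions as new features
blender = LinearRegression().fit(blend_feats, y_b)

# Prediction: pass the test set through the base models the same way
test_feats = np.column_stack([m.predict(X_test) for m in base_models])
y_pred = blender.predict(test_feats)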
5. Stacking

Stacking uses all of the trained base models to predict on the training set: the prediction of the j-th base model for the i-th training sample becomes the j-th feature of the i-th sample in a new training set, and a second-layer model is then trained on that new training set. Prediction works the same way: the test set is first run through all base models to form a new test set, on which the second layer predicts.
Done carefully, this improves the model's performance and can also help guard against overfitting.
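Section 7.2.8 builds stacking by hand; scikit-learn also ships a ready-made StackingRegressor, sketched here with illustrative base models:

from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge, LinearRegression

stack = StackingRegressor(
    estimators=[('ridge', Ridge(alpha=1.0)),
                ('rf', RandomForestRegressor(n_estimators=100, random_state=0))],
    final_estimator=LinearRegression(),
    cv=5)  # out-of-fold base-model predictions become the second-layer features
stack.fit(X_train, y_train)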

7.1.4 Other Ways to Improve

  • Analyzing weights or feature importances pinpoints the important data, fields, and feature directions; you can then keep refining in those directions, look for more data along them, and build related feature combinations to improve model performance.
  • Bad-case analysis surfaces the samples that are predicted poorly; tracing them back through the data and looking for causes points to ways of raising the model's accuracy.

7.2 Model Fusion for the Competition

Data preparation repeats the steps from the earlier chapters (two of them are sketched after this list):

  • Load the data
  • Merge the datasets
  • Drop the selected features
  • Normalize the data
  • Plot the relationships between features and the target
  • Apply the Box-Cox transform
  • Compute and plot quantiles
  • Log-transform the target so the data better approximates a normal distribution
  • Plot the distribution of the outliers
  • Remove the outliers
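
A sketch of the normalization, Box-Cox, and log-transform steps (data, target, and the V0 column follow the competition's naming conventions and are assumptions here):

import numpy as np
from scipy import stats
from sklearn.preprocessing import MinMaxScaler

# Normalize the features to [0, 1]
data_scaled = MinMaxScaler().fit_transform(data)

# Box-Cox transform of one feature (requires strictly positive values)
v0_boxcox, lmbda = stats.boxcox(data['V0'] - data['V0'].min() + 1)

# Log-transform the target so it better approximates a normal distribution
target_log = np.log1p(target)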

7.2.4 Training Models with Grid Search

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import RepeatedKFold, GridSearchCV, cross_val_score

def train_model(model, param_grid=[], X=[], y=[], splits=5, repeats=5):
    # get unmodified training data, unless data to use already specified
    if len(y) == 0:
        X, y = get_trainning_data_omitoutliers()  # helper defined in an earlier chapter
        # poly_trans = PolynomialFeatures(degree=2)
        # X = poly_trans.fit_transform(X)
        # X = MinMaxScaler().fit_transform(X)

    # create cross-validation method
    rkfold = RepeatedKFold(n_splits=splits, n_repeats=repeats)

    # perform a grid search if param_grid given
    if len(param_grid) > 0:
        # setup grid search parameters
        gsearch = GridSearchCV(model, param_grid, cv=rkfold,
                               scoring='neg_mean_squared_error',
                               verbose=1, return_train_score=True)
        # search the grid
        gsearch.fit(X, y)
        # extract best model from the grid
        model = gsearch.best_estimator_
        best_idx = gsearch.best_index_
        # get cv-scores for best model
        grid_results = pd.DataFrame(gsearch.cv_results_)
        cv_mean = abs(grid_results.loc[best_idx, 'mean_test_score'])
        cv_std = grid_results.loc[best_idx, 'std_test_score']
    else:
        # no grid given: cross-validate and fit the model as-is
        grid_results = pd.DataFrame()
        scores = -cross_val_score(model, X, y, cv=rkfold, scoring='neg_mean_squared_error')
        model.fit(X, y)
        cv_mean = scores.mean()
        cv_std = scores.std()

    # combine mean and std cv-score into a pandas series
    cv_score = pd.Series({'mean': cv_mean, 'std': cv_std})

    # predict y using the fitted model
    y_pred = model.predict(X)

    # print stats on model performance
    print('----------------------')
    print(model)
    print('----------------------')
    print('score=', model.score(X, y))
    print('rmse=', rmse(y, y_pred))  # rmse/mse are metric helpers defined in an earlier chapter
    print('mse=', mse(y, y_pred))
    print('cross_val: mean=', cv_mean, ', std=', cv_std)

    # residual plots
    y_pred = pd.Series(y_pred, index=y.index)
    resid = y - y_pred
    mean_resid = resid.mean()
    std_resid = resid.std()
    z = (resid - mean_resid) / std_resid
    n_outliers = sum(abs(z) > 3)

    plt.figure(figsize=(15, 5))
    ax_131 = plt.subplot(1, 3, 1)
    plt.plot(y, y_pred, '.')
    plt.xlabel('y')
    plt.ylabel('y_pred')
    plt.title('corr = {:.3f}'.format(np.corrcoef(y, y_pred)[0][1]))

    ax_132 = plt.subplot(1, 3, 2)
    plt.plot(y, y - y_pred, '.')
    plt.xlabel('y')
    plt.ylabel('y - y_pred')
    plt.title('std resid = {:.3f}'.format(std_resid))

    ax_133 = plt.subplot(1, 3, 3)
    z.plot.hist(bins=50, ax=ax_133)
    plt.xlabel('z')
    plt.title('{:.0f} samples with |z|>3'.format(n_outliers))

    return model, cv_score, grid_results

# places to store optimal models and scores
opt_models = dict()
score_models = pd.DataFrame(columns=['mean','std'])

# no. k-fold splits
splits=5
# no. k-fold iterations
repeats=5
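
An example of how the helper is used (a sketch; Ridge and its alpha grid are illustrative choices):

from sklearn.linear_model import Ridge

model = 'Ridge'
opt_models[model] = Ridge()
param_grid = {'alpha': np.arange(0.25, 6, 0.25)}

opt_models[model], cv_score, grid_results = train_model(
    opt_models[model], param_grid=param_grid, splits=splits, repeats=repeats)

# record this model's CV score alongside the others
cv_score.name = model
score_models = pd.concat([score_models, cv_score.to_frame().T])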

7.2.7 Bagging-Style Multi-Model Prediction

import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error

def model_predict(test_data, test_y=[], stack=False):
    i = 0
    y_predict_total = np.zeros((test_data.shape[0],))
    for model in opt_models.keys():
        # skip the weaker models and average the predictions of the rest
        if model != 'LinearSVR' and model != 'KNeighbors':
            y_predict = opt_models[model].predict(test_data)
            y_predict_total += y_predict
            i += 1
            if len(test_y) > 0:
                print('{}_mse:'.format(model), mean_squared_error(y_predict, test_y))
    y_predict_mean = np.round(y_predict_total / i, 3)
    if len(test_y) > 0:
        print('mean_mse:', mean_squared_error(y_predict_mean, test_y))
    else:
        y_predict_mean = pd.Series(y_predict_mean)
        return y_predict_mean

model_predict(X_valid, y_valid)

7.2.8 Multi-Model Fusion with Stacking

1. Base code
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
from scipy import sparse
import xgboost
import lightgbm

from sklearn.ensemble import RandomForestRegressor,AdaBoostRegressor,GradientBoostingRegressor,ExtraTreesRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def stacking_reg(clf,train_x,train_y,test_x,clf_name,kf,label_split=None):
    train=np.zeros((train_x.shape[0],1))
    test=np.zeros((test_x.shape[0],1))
    test_pre=np.empty((kf.get_n_splits(),test_x.shape[0],1))  # one slice of test predictions per fold
    cv_scores=[]
    for i,(train_index,test_index) in enumerate(kf.split(train_x,label_split)):       
        tr_x=train_x[train_index]
        tr_y=train_y[train_index]
        te_x=train_x[test_index]
        te_y=train_y[test_index]
        if clf_name in ["rf","ada","gb","et","lr","lsvc","knn"]:
            clf.fit(tr_x,tr_y)
            pre=clf.predict(te_x).reshape(-1,1)
            train[test_index]=pre
            test_pre[i,:]=clf.predict(test_x).reshape(-1,1)
            cv_scores.append(mean_squared_error(te_y, pre))
        elif clf_name in ["xgb"]:
            train_matrix = clf.DMatrix(tr_x, label=tr_y, missing=-1)
            test_matrix = clf.DMatrix(te_x, label=te_y, missing=-1)
            z = clf.DMatrix(test_x, missing=-1)  # final test set: no labels available here
            params = {'booster': 'gbtree',
                      'eval_metric': 'rmse',
                      'gamma': 1,
                      'min_child_weight': 1.5,
                      'max_depth': 5,
                      'lambda': 10,
                      'subsample': 0.7,
                      'colsample_bytree': 0.7,
                      'colsample_bylevel': 0.7,
                      'eta': 0.03,
                      'tree_method': 'exact',
                      'seed': 2017,
                      'nthread': 12
                      }
            num_round = 10000
            early_stopping_rounds = 100
            watchlist = [(train_matrix, 'train'),
                         (test_matrix, 'eval')
                         ]
            if test_matrix:
                model = clf.train(params, train_matrix, num_boost_round=num_round,evals=watchlist,
                                  early_stopping_rounds=early_stopping_rounds
                                  )
                pre= model.predict(test_matrix,ntree_limit=model.best_ntree_limit).reshape(-1,1)
                train[test_index]=pre
                test_pre[i, :]= model.predict(z, ntree_limit=model.best_ntree_limit).reshape(-1,1)
                cv_scores.append(mean_squared_error(te_y, pre))
        elif clf_name in ["lgb"]:
            train_matrix = clf.Dataset(tr_x, label=tr_y)
            test_matrix = clf.Dataset(te_x, label=te_y)
            #z = clf.Dataset(test_x, label=te_y)
            #z=test_x
            params = {
                      'boosting_type': 'gbdt',
                      'objective': 'regression_l2',
                      'metric': 'mse',
                      'min_child_weight': 1.5,
                      'num_leaves': 2**5,
                      'lambda_l2': 10,
                      'subsample': 0.7,
                      'colsample_bytree': 0.7,
                      'learning_rate': 0.03,
                      'seed': 2017,
                      'nthread': 12,
                      'verbose': -1,  # 'colsample_bylevel'/'tree_method'/'silent' are not LightGBM parameters
                      }
            num_round = 10000
            early_stopping_rounds = 100
            if test_matrix:
                model = clf.train(params, train_matrix,num_round,valid_sets=test_matrix,
                                  early_stopping_rounds=early_stopping_rounds
                                  )
                pre= model.predict(te_x,num_iteration=model.best_iteration).reshape(-1,1)
                train[test_index]=pre
                test_pre[i, :]= model.predict(test_x, num_iteration=model.best_iteration).reshape(-1,1)
                cv_scores.append(mean_squared_error(te_y, pre))
        else:
            raise ValueError("Please add new clf.")
        print("%s now score is:"%clf_name,cv_scores)
    test[:]=test_pre.mean(axis=0)
    print("%s_score_list:"%clf_name,cv_scores)
    print("%s_score_mean:"%clf_name,np.mean(cv_scores))
    return train.reshape(-1,1),test.reshape(-1,1)
2. Stacking base learners
def rf_reg(x_train, y_train, x_valid, kf, label_split=None):
    randomforest = RandomForestRegressor(n_estimators=600, max_depth=20, n_jobs=-1, random_state=2017, max_features=1.0, verbose=1)  # max_features="auto" was removed in newer scikit-learn; 1.0 is equivalent
    rf_train, rf_test = stacking_reg(randomforest, x_train, y_train, x_valid, "rf", kf, label_split=label_split)
    return rf_train, rf_test,"rf_reg"

def ada_reg(x_train, y_train, x_valid, kf, label_split=None):
    adaboost = AdaBoostRegressor(n_estimators=30, random_state=2017, learning_rate=0.01)
    ada_train, ada_test = stacking_reg(adaboost, x_train, y_train, x_valid, "ada", kf, label_split=label_split)
    return ada_train, ada_test,"ada_reg"

def gb_reg(x_train, y_train, x_valid, kf, label_split=None):
    gbdt = GradientBoostingRegressor(learning_rate=0.04, n_estimators=100, subsample=0.8, random_state=2017,max_depth=5,verbose=1)
    gbdt_train, gbdt_test = stacking_reg(gbdt, x_train, y_train, x_valid, "gb", kf, label_split=label_split)
    return gbdt_train, gbdt_test,"gb_reg"

def et_reg(x_train, y_train, x_valid, kf, label_split=None):
    extratree = ExtraTreesRegressor(n_estimators=600, max_depth=35, max_features=1.0, n_jobs=-1, random_state=2017, verbose=1)
    et_train, et_test = stacking_reg(extratree, x_train, y_train, x_valid, "et", kf, label_split=label_split)
    return et_train, et_test,"et_reg"

def lr_reg(x_train, y_train, x_valid, kf, label_split=None):
    lr_reg=LinearRegression(n_jobs=-1)
    lr_train, lr_test = stacking_reg(lr_reg, x_train, y_train, x_valid, "lr", kf, label_split=label_split)
    return lr_train, lr_test, "lr_reg"

def xgb_reg(x_train, y_train, x_valid, kf, label_split=None):
    xgb_train, xgb_test = stacking_reg(xgboost, x_train, y_train, x_valid, "xgb", kf, label_split=label_split)
    return xgb_train, xgb_test,"xgb_reg"

def lgb_reg(x_train, y_train, x_valid, kf, label_split=None):
    lgb_train, lgb_test = stacking_reg(lightgbm, x_train, y_train, x_valid, "lgb", kf, label_split=label_split)
    return lgb_train, lgb_test,"lgb_reg"
3. Stacking prediction function
def stacking_pred(x_train, y_train, x_valid, kf, clf_list, label_split=None, clf_fin="lgb", if_concat_origin=True):
    # collect the out-of-fold train features and fold-averaged test features of every base learner
    train_data_list = []
    test_data_list = []
    column_list = []
    for clf in clf_list:
        train_data, test_data, clf_name = clf(x_train, y_train, x_valid, kf, label_split=label_split)
        train_data_list.append(train_data)
        test_data_list.append(test_data)
        column_list.append("clf_%s" % (clf_name))
    train = np.concatenate(train_data_list, axis=1)
    test = np.concatenate(test_data_list, axis=1)
    
    if if_concat_origin:
        train = np.concatenate([x_train, train], axis=1)
        test = np.concatenate([x_valid, test], axis=1)
    print(x_train.shape)
    print(train.shape)
    print(clf_name)
    print(clf_name in ["lgb"])
    if clf_fin in ["rf","ada","gb","et","lr","lsvc","knn"]:
        if clf_fin in ["rf"]:
            clf = RandomForestRegressor(n_estimators=600, max_depth=20, n_jobs=-1, random_state=2017, max_features=1.0, verbose=1)
        elif clf_fin in ["ada"]:
            clf = AdaBoostRegressor(n_estimators=30, random_state=2017, learning_rate=0.01)
        elif clf_fin in ["gb"]:
            clf = GradientBoostingRegressor(learning_rate=0.04, n_estimators=100, subsample=0.8, random_state=2017,max_depth=5,verbose=1)
        elif clf_fin in ["et"]:
            clf = ExtraTreesRegressor(n_estimators=600, max_depth=35, max_features=1.0, n_jobs=-1, random_state=2017, verbose=1)
        elif clf_fin in ["lr"]:
            clf = LinearRegression(n_jobs=-1)
        clf.fit(train, y_train)
        pre = clf.predict(test).reshape(-1,1)
        return pre
    elif clf_fin in ["xgb"]:
        clf = xgboost
        train_matrix = clf.DMatrix(train, label=y_train, missing=-1)
        test_matrix = clf.DMatrix(test, missing=-1)  # final test features: no labels available
        params = {'booster': 'gbtree',
                  'eval_metric': 'rmse',
                  'gamma': 1,
                  'min_child_weight': 1.5,
                  'max_depth': 5,
                  'lambda': 10,
                  'subsample': 0.7,
                  'colsample_bytree': 0.7,
                  'colsample_bylevel': 0.7,
                  'eta': 0.03,
                  'tree_method': 'exact',
                  'seed': 2017,
                  'nthread': 12
                  }
        num_round = 10000
        early_stopping_rounds = 100
        watchlist = [(train_matrix, 'train')]  # the test set has no labels, so only the train score is watched
        model = clf.train(params, train_matrix, num_boost_round=num_round, evals=watchlist,
                          early_stopping_rounds=early_stopping_rounds
                          )
        pre = model.predict(test_matrix, ntree_limit=model.best_ntree_limit).reshape(-1, 1)
        return pre
    elif clf_fin in ["lgb"]:
        print(clf_name)
        clf = lightgbm
        train_matrix = clf.Dataset(train, label=y_train)
        test_matrix = clf.Dataset(train, label=y_train)  # no test labels, so validation reuses the training data
        params = {
                  'boosting_type': 'gbdt',
                  'objective': 'regression_l2',
                  'metric': 'mse',
                  'min_child_weight': 1.5,
                  'num_leaves': 2**5,
                  'lambda_l2': 10,
                  'subsample': 0.7,
                  'colsample_bytree': 0.7,
                  'learning_rate': 0.03,
                  'seed': 2017,
                  'nthread': 12,
                  'verbose': -1,  # 'colsample_bylevel'/'tree_method'/'silent' are not LightGBM parameters
                  }
        num_round = 10000
        early_stopping_rounds = 100
        model = clf.train(params, train_matrix,num_round,valid_sets=test_matrix,
                          early_stopping_rounds=early_stopping_rounds
                          )
        print('pred')
        pre = model.predict(test,num_iteration=model.best_iteration).reshape(-1,1)
        print(pre)
        return pre

# kf is the K-fold splitter defined earlier, e.g.:
# from sklearn.model_selection import KFold
# kf = KFold(n_splits=5, shuffle=True, random_state=2017)

clf_list = [lr_reg, lgb_reg]
#clf_list = [lr_reg, rf_reg]

## stacking overfits easily
pred = stacking_pred(x_train, y_train, x_valid, kf, clf_list, label_split=None, clf_fin="lgb", if_concat_origin=True)
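
A quick sanity check of the stacked prediction on the held-out set (assuming y_valid holds its labels):

from sklearn.metrics import mean_squared_error

print('stacking mse:', mean_squared_error(y_valid, pred))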