MachineLearning—集成学习(Ensemble Learning)

集成学习是提高模型鲁棒性的重要方法,在数据、特征处理之后的阶段,如果在算法方面没有提升,可以尝试在模型集成方面发力,可以收到意想不到的结果。但并不是使用集成学习方法就一定会提高结果。例如stacking方法,理论讲其结果渐进等价于第一层最优子模型结果,使用stacking至少不会大幅度降低模型效果。

一、投票方法

常用的有软投票和硬投票两种,例如,支持向量机可以输出各个样本属于某一类的概率,将多个模型的这种结果进行加权,便得到软投票的集成结果。硬投票更简单,直接多数服从少数即可。

二、Bagging

最常用的莫过于随机森林方法,其思想在随机森林原理这篇文章中有详细介绍。在构建模型的过程中,随机有放回的采样部分样本训练基学习器,最后将基学习器的结果进行融合就是bagging的最终结果。

三、Boosting

提升方法在boosting原理这篇文章中有较为详细的介绍。Boosting算法可以并行处理,而Boosting的思想是一种迭代的方法,每一次训练的时候都更加关心分类错误的样例,给这些分类错误的样例增大权重,下一次迭代的目标就是能够更容易辨别出上一轮分类错误的样例。最终将这些弱分类器进行加权。

四、Stacking

接下来重点介绍一下Stacking方法。

这里写图片描述

 

df = pd.read_csv('C:/Users/Titanic/train.csv')
test = df.sample(frac=0.1)
test.to_csv('C:/Users/Titanic/test.csv')

train = df[~df.PassengerId.isin(test.PassengerId)]
train.to_csv('C:/Users/Titanic/train.csv')

In [43]:

import pandas as pd
usedColumnFeature = ['PassengerId','Pclass','Sex','Age','SibSp','Parch','Fare','Embarked','Survived']
train = pd.read_csv('C:/Users/Titanic/train.csv', usecols=usedColumnFeature)
train.dropna(subset=['Age'],how='any',axis=0,inplace=True)
test = pd.read_csv('C:/Users/Titanic/test.csv', usecols=usedColumnFeature)
test.dropna(subset=['Age'],how='any',axis=0,inplace=True)
train = train.set_index('PassengerId')
test = test.set_index('PassengerId')
train.head()

Out[43]:

 SurvivedPclassSexAgeSibSpParchFareEmbarked
PassengerId        
103male22.0107.2500S
211female38.01071.2833C
313female26.0007.9250S
411female35.01053.1000S
503male35.0008.0500S

In [44]:

y_train = train['Survived']
train.drop('Survived',axis=1,inplace=True)
y_test = test['Survived']
test.drop('Survived',axis=1,inplace=True)

In [45]:

typeNames = ['Pclass','Sex','Embarked']
for item in typeNames:
    train = pd.concat([train,pd.get_dummies(train[item],prefix=item+'_')], axis=1)
    test = pd.concat([test,pd.get_dummies(test[item],prefix=item+'_')], axis=1)
train.drop(typeNames, axis=1, inplace=True)
test.drop(typeNames, axis=1, inplace=True)
test.head()

Out[45]:

 AgeSibSpParchFarePclass__1Pclass__2Pclass__3Sex__femaleSex__maleEmbarked__CEmbarked__QEmbarked__S
PassengerId            
37422.000135.633310001100
51734.00010.500001010001
31030.00056.929210010100
1651.04139.687500101001
3871.05246.900000101001

In [46]:

#rf = RandomForestClassifier(oob_score=True, random_state=9)
#gbm = GradientBoostingClassifier(random_state=9)

from sklearn.linear_model.logistic import LogisticRegression
lr_model=LogisticRegression()
lr_model.fit(train,y_train)
pred = lr_model.predict(test)

In [47]:

import numpy as np
result = pd.DataFrame(pred, columns=['pred'], index=y_test.index)
result['y_test'] = y_test
print(len(result[result.pred == result.y_test]))
result.head()
55

Out[47]:

 predy_test
PassengerId  
37410
51711
31011
16500
38700

In [48]:

print(train.shape)
print(test.shape)
(644, 12)
(70, 12)

In [49]:

train = train.reset_index(drop=True)
test = test.reset_index(drop=True)
y_train = y_train.to_frame().reset_index(drop=True)

In [50]:

# Out-of-Fold Predictions
from sklearn.model_selection import KFold
ntrain = train.shape[0]
ntest = test.shape[0]
kf = KFold(n_splits=5, random_state=2019)
clf = LogisticRegression()

def get_oof(clf, train, y_train, test):
    oof_train = np.zeros((ntrain,))  # 1 * 644
    oof_test = np.zeros((ntest,))  # 1 * 70
    oof_test_skf = np.empty((5, ntest))  # 5 * 70
    
    for i, (train_index,test_index) in enumerate(kf.split(train)):   # train: 644 * 12
        kf_X_train = train.iloc[list(train_index),:]  # 515 * 12
        kf_y_train = y_train.iloc[list(train_index),:]  # 515 * 1
        kf_X_test = train.iloc[list(test_index),:]    # 129 * 12

        clf.fit(kf_X_train, kf_y_train)
        
        oof_train[test_index] = clf.predict(kf_X_test)  # 1 * 129  ==>  1 * 644
        oof_test_skf[i, :] = clf.predict(test)        # oof_test_skf[i,:]    1 * 70  ==>  5 * 70
        
    oof_test[:] = oof_test_skf.mean(axis=0)  # oof_test[:]  1 * 70
    return oof_train.reshape(-1,1), oof_test.reshape(-1,1)
    # 891 * 1      418 * 1
new_train_lr, new_test_lr = get_oof(clf, train, y_train, test)

In [51]:

from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=3)
new_train_knn, new_test_knn = get_oof(neigh, train, y_train, test)

In [52]:

new_train = y_train
new_train['lr_feature'] = new_train_lr
new_train['knn_feature'] = new_train_knn
new_train.head()

Out[52]:

 Survivedlr_featureknn_feature
000.00.0
111.01.0
211.01.0
311.01.0
400.00.0

In [53]:

from sklearn.tree import DecisionTreeClassifier
dt_model = DecisionTreeClassifier()
dt_model.fit(new_train[['lr_feature','knn_feature']].values, new_train['Survived'])

new_test = pd.DataFrame(new_test_lr,columns=['lr_feature'])
new_test['knn_feature'] = new_test_knn
pred = dt_model.predict(new_test.values)

In [54]:

new_result = pd.DataFrame(y_test)
new_result['pred'] = pred
len(new_result[new_result.Survived == new_result.pred])

Out[54]:

55
  • 0
    点赞
  • 1
    收藏
    觉得还不错? 一键收藏
  • 2
    评论
评论 2
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包
实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值