Python XGBoost Classifier and Tree-Based Feature Selection (the Decision-Tree Method)

Introduction

Why do feature selection [1]? The point of feature selection is to reduce model complexity by shrinking the number of input features; it can also help prevent overfitting and improve generalization [2]. Given two models with the same score, the one with fewer features is usually preferred: it is simpler, and the surviving features are easier to interpret.
Filter methods
Filter methods exploit intrinsic properties of the features themselves, for example ranking features by distance or correlation measures to judge their importance. They are easy to understand; common examples are the Pearson correlation coefficient, information gain, and mutual information.
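As a quick illustration (not used in the rest of this post), scikit-learn's SelectKBest does filter-style selection; the sketch below scores toy features by mutual information and keeps the top five. The data and all variable names here are made up for the example.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Toy data purely for illustration.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Score every feature against the label independently (mutual information)
# and keep the 5 highest-scoring ones.
selector = SelectKBest(mutual_info_classif, k=5)
X_new = selector.fit_transform(X, y)
print(X_new.shape)                          # (200, 5)
print(selector.get_support(indices=True))   # indices of the kept features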
Wrapper methods
Wrapper methods first fix an evaluation criterion for classification performance, then score candidate feature subsets against it; the higher the score, the better the subset. They do not scale well to high-dimensional data because the subset search is expensive. Common choices include particle swarm optimization and genetic algorithms.
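PSO and GA wrappers need extra libraries, but scikit-learn ships a simple wrapper of its own, SequentialFeatureSelector (available from scikit-learn 0.24), which greedily adds the feature that most improves the cross-validated score. A minimal sketch on the same toy data:

from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Wrapper: repeatedly refit the classifier, adding the feature that
# raises cross-validated accuracy the most, until 5 features remain.
sfs = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                n_features_to_select=5,
                                direction='forward',
                                cv=3)
sfs.fit(X, y)
print(sfs.get_support(indices=True))   # indices of the selected features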
Embedded methods
Embedded methods build the feature selection into the classifier itself: training the model already produces an importance score for each feature, so you just sort the scores and keep the top-ranked features. The main representatives are Lasso-style regularization, which controls overfitting through a penalty term, and decision-tree methods. Below I implement the latter in Python; I'm honestly not sure whether it should be called "decision-tree feature selection" or "tree-based feature selection".
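For the Lasso side of embedded selection, here is a minimal sketch with an L1-penalized logistic regression (the tree-based variant, which is the topic of this post, follows in the sections below). Again, the toy data is made up for the example.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Embedded: the L1 penalty drives some coefficients to exactly zero,
# so fitting the model performs the selection by itself.
lasso = LogisticRegression(penalty='l1', solver='liblinear', C=0.1)
selector = SelectFromModel(lasso).fit(X, y)
print(selector.get_support(indices=True))   # features with nonzero weight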

1. Import packages and data

import numpy as np
import pandas as pd
from numpy import sort
import matplotlib.pyplot as plt
from xgboost import XGBClassifier
from xgboost import plot_importance
from sklearn import metrics
from sklearn.metrics import accuracy_score   # used in the threshold loop below
from sklearn.feature_selection import SelectFromModel
import warnings
warnings.filterwarnings('ignore')

df_train = pd.read_csv("../data/train.csv")
df_test = pd.read_csv("../data/test.csv")

2. Split into training and test sets

# Column 0 is an ID, columns 1-87 are the features, and column 88 is the label.
y_train = df_train.iloc[:, 88:89]
y_test = df_test.iloc[:, 88:89]
x_train = df_train.iloc[:, 1:88]
x_test = df_test.iloc[:, 1:88]
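Those slice positions assume the layout described in the comment; the expected column names depend on your own CSV, so a quick sanity check never hurts:

# Verify the assumed layout: column 0 = ID, columns 1..87 = features, column 88 = label.
print(df_train.columns[0], df_train.columns[88])
print(x_train.shape, y_train.shape)
# Note: y_train is a one-column DataFrame; y_train.values.ravel() gives a
# flat array if a later fit complains about its shape.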

3. XGBoost classification without feature selection

3.1. Fit XGBoost directly

model = XGBClassifier(learning_rate=0.1,
                      n_estimators=1000,           # build up to 1000 trees
                      max_depth=6,                 # depth of each tree
                      min_child_weight=1,          # minimum leaf weight
                      gamma=0.,                    # complexity penalty per leaf (minimum split loss)
                      subsample=0.8,               # sample 80% of rows per tree
                      colsample_bytree=0.8,        # sample 80% of features per tree
                      objective='binary:logistic', # loss function
                      scale_pos_weight=1,          # rebalance classes if needed
                      random_state=27)             # random seed
model.fit(x_train,
          y_train,
          eval_set=[(x_test, y_test)],
          eval_metric="error",
          early_stopping_rounds=10,   # stop if eval error stalls for 10 rounds
          verbose=True)

(Figure: per-round eval-set error log printed during training.)
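With early stopping enabled, the fitted model remembers its best round; a quick way to inspect it (these attributes are set by xgboost when early_stopping_rounds is passed to fit):

print(model.best_iteration)   # round with the lowest eval-set error
print(model.best_score)       # the "error" value at that round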

3.2. Plot the feature importances

fig, ax = plt.subplots(figsize=(15, 15))
plot_importance(model,
                height=0.5,
                ax=ax,
                max_num_features=64)   # show at most the top 64 features
plt.show()
pred = model.predict(x_test)

(Figure: feature importance ranking produced by plot_importance.)
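If you want the scores as numbers rather than a plot, they are also available on the model object; a small sketch (be aware that plot_importance and feature_importances_ may use different importance types, e.g. 'weight' vs 'gain', depending on the xgboost version):

# The same information as the plot, as a sorted table.
importances = pd.Series(model.feature_importances_, index=x_train.columns)
print(importances.sort_values(ascending=False).head(10))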

3.3. Model evaluation

print('F1-score:%.4f' % metrics.f1_score(y_test, pred))
print('AUC:%.4f' % metrics.roc_auc_score(y_test, pred))
print('ACC:%.4f' % metrics.accuracy_score(y_test, pred))
print('Recall:%.4f' % metrics.recall_score(y_test, pred))
print('Precision:%.4f' % metrics.precision_score(y_test, pred))

F1-score:0.7059
AUC:0.6044
ACC:0.6259
Recall:0.8148
Precision:0.6226
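One caveat about the numbers above: roc_auc_score is being fed hard 0/1 predictions, which typically understates AUC. The usual fix is to pass the positive-class probability instead:

# AUC is meant to be computed from scores, not hard labels.
proba = model.predict_proba(x_test)[:, 1]   # P(class = 1)
print('AUC (from probabilities): %.4f' % metrics.roc_auc_score(y_test, proba))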

4. Classification after feature selection

thresholds = sort(model.feature_importances_)
for thresh in thresholds:
    # Keep only the features whose importance is >= thresh.
    selection = SelectFromModel(model, threshold=thresh, prefit=True)
    select_x_train = selection.transform(x_train)

    # Refit a fresh classifier on the reduced feature set.
    selection_model = XGBClassifier()
    selection_model.fit(select_x_train, y_train)

    # Reduce the test set the same way and evaluate.
    select_x_test = selection.transform(x_test)
    y_pred = selection_model.predict(select_x_test)
    predictions = [round(value) for value in y_pred]
    accuracy = accuracy_score(y_test, predictions)
    print("Thresh=%.3f, n=%d, Accuracy: %.2f%%" % (thresh, select_x_train.shape[1], accuracy * 100.0))

Thresh=0.000, n=87, Accuracy: 58.50%
Thresh=0.000, n=87, Accuracy: 58.50%
Thresh=0.000, n=87, Accuracy: 58.50%
Thresh=0.000, n=87, Accuracy: 58.50%
Thresh=0.003, n=83, Accuracy: 59.86%
Thresh=0.004, n=82, Accuracy: 57.14%
Thresh=0.005, n=81, Accuracy: 56.46%
Thresh=0.006, n=80, Accuracy: 57.82%
Thresh=0.007, n=79, Accuracy: 60.54%
Thresh=0.007, n=78, Accuracy: 61.90%
Thresh=0.007, n=77, Accuracy: 61.22%
Thresh=0.008, n=76, Accuracy: 58.50%
Thresh=0.008, n=75, Accuracy: 56.46%
Thresh=0.008, n=74, Accuracy: 58.50%
Thresh=0.009, n=73, Accuracy: 57.14%
Thresh=0.009, n=72, Accuracy: 57.14%
Thresh=0.009, n=71, Accuracy: 59.86%
Thresh=0.009, n=70, Accuracy: 57.82%
Thresh=0.009, n=69, Accuracy: 60.54%
Thresh=0.009, n=68, Accuracy: 55.10%
Thresh=0.009, n=67, Accuracy: 57.82%
Thresh=0.009, n=66, Accuracy: 56.46%
Thresh=0.009, n=65, Accuracy: 57.82%
Thresh=0.010, n=64, Accuracy: 61.90%
Thresh=0.010, n=63, Accuracy: 59.18%
Thresh=0.010, n=62, Accuracy: 58.50%
Thresh=0.010, n=61, Accuracy: 59.18%
Thresh=0.010, n=60, Accuracy: 62.59%
Thresh=0.010, n=59, Accuracy: 60.54%
Thresh=0.010, n=58, Accuracy: 59.18%
Thresh=0.010, n=57, Accuracy: 56.46%
Thresh=0.010, n=56, Accuracy: 57.82%
Thresh=0.010, n=55, Accuracy: 58.50%
Thresh=0.011, n=54, Accuracy: 58.50%
Thresh=0.011, n=53, Accuracy: 59.18%
Thresh=0.011, n=52, Accuracy: 56.46%
Thresh=0.011, n=51, Accuracy: 62.59%
Thresh=0.011, n=50, Accuracy: 59.86%
Thresh=0.011, n=49, Accuracy: 59.86%
Thresh=0.011, n=48, Accuracy: 57.14%
Thresh=0.011, n=47, Accuracy: 59.18%
Thresh=0.011, n=46, Accuracy: 62.59%
Thresh=0.011, n=45, Accuracy: 58.50%
Thresh=0.011, n=44, Accuracy: 59.18%
Thresh=0.011, n=43, Accuracy: 59.18%
Thresh=0.011, n=42, Accuracy: 58.50%
Thresh=0.011, n=41, Accuracy: 51.70%
Thresh=0.011, n=40, Accuracy: 57.14%
Thresh=0.012, n=39, Accuracy: 57.82%
Thresh=0.012, n=38, Accuracy: 59.86%
Thresh=0.012, n=37, Accuracy: 60.54%
Thresh=0.012, n=36, Accuracy: 60.54%
Thresh=0.012, n=35, Accuracy: 58.50%
Thresh=0.012, n=34, Accuracy: 57.14%
Thresh=0.012, n=33, Accuracy: 58.50%
Thresh=0.012, n=32, Accuracy: 59.86%
Thresh=0.013, n=31, Accuracy: 59.86%
Thresh=0.013, n=30, Accuracy: 59.86%
Thresh=0.013, n=29, Accuracy: 57.82%
Thresh=0.013, n=28, Accuracy: 59.18%
Thresh=0.013, n=27, Accuracy: 59.18%
Thresh=0.013, n=26, Accuracy: 60.54%
Thresh=0.014, n=25, Accuracy: 57.14%
Thresh=0.014, n=24, Accuracy: 55.10%
Thresh=0.014, n=23, Accuracy: 57.14%
Thresh=0.014, n=22, Accuracy: 61.90%
Thresh=0.014, n=21, Accuracy: 57.82%
Thresh=0.014, n=20, Accuracy: 57.82%
Thresh=0.014, n=19, Accuracy: 58.50%
Thresh=0.014, n=18, Accuracy: 60.54%
Thresh=0.014, n=17, Accuracy: 61.22%
Thresh=0.014, n=16, Accuracy: 58.50%
Thresh=0.015, n=15, Accuracy: 61.22%
Thresh=0.015, n=14, Accuracy: 57.14%
Thresh=0.016, n=13, Accuracy: 62.59%
Thresh=0.016, n=12, Accuracy: 63.95%
Thresh=0.016, n=11, Accuracy: 64.63%
Thresh=0.016, n=10, Accuracy: 59.86%
Thresh=0.017, n=9, Accuracy: 59.18%
Thresh=0.018, n=8, Accuracy: 58.50%
Thresh=0.018, n=7, Accuracy: 57.14%
Thresh=0.019, n=6, Accuracy: 61.90%
Thresh=0.019, n=5, Accuracy: 57.14%
Thresh=0.020, n=4, Accuracy: 55.78%
Thresh=0.022, n=3, Accuracy: 52.38%
Thresh=0.023, n=2, Accuracy: 55.10%
Thresh=0.023, n=1, Accuracy: 53.06%
In my run, accuracy peaks at Thresh=0.016, so I just rerun with that threshold below. The selected features turn out to be 'f4', 'f10', 'f15', 'f34', 'f40', 'f45', 'f53', 'f59', 'f62', 'f73', and 'f83'.

thresh = 0.016
selection = SelectFromModel(model, threshold=thresh, prefit=True)
select_x_train = selection.transform(x_train)   # reduce x_train to the kept features

selection_model = XGBClassifier()
selection_model.fit(select_x_train, y_train)

select_x_test = selection.transform(x_test)     # reduce x_test the same way
pred = selection_model.predict(select_x_test)
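To confirm which columns survived the threshold (the 'f4', 'f10', ... list mentioned above), the fitted selector can report them directly:

kept = selection.get_support(indices=True)   # indices into x_train's columns
print(x_train.columns[kept].tolist())        # names of the selected features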

Let's see how it does, and whether there is any improvement:

print('F1-score:%.4f' % metrics.f1_score(y_test, pred))
print('AUC:%.4f' % metrics.roc_auc_score(y_test, pred))
print('ACC:%.4f' % metrics.accuracy_score(y_test, pred))
print('Recall:%.4f' % metrics.recall_score(y_test, pred))
print('Precision:%.4f' % metrics.precision_score(y_test, pred))

F1-score:0.7079
AUC:0.6313
ACC:0.6463
Recall:0.7778
Precision:0.6495

5. Leaving myself an out

The improvement looks rather slight. I'm using stock data, and recall, the metric that matters most to me here, actually dropped a bit??

So doesn't that mean I can't conclude that feature selection is sometimes necessary??

Well, consider this me leaving myself an out. Just skim the code for now, and study it properly when you actually need it.

References

[1] Zhang Jing. Research on Feature Selection Algorithms for the Classification of High-Dimensional Small-Sample Data [D]. Hefei: Hefei University of Technology, 2014.
[2] Jiang Shengli. Research on Feature Selection and Feature Extraction for High-Dimensional Data [D]. Xi'an: Xidian University, 2011.

Keep learning, folks. Peace.