Data Mining: Financial Data (Part 5)

Task 5: Model Tuning
Use grid search to tune the five models (with 5-fold cross-validation during the parameter search), evaluate each model, and show the output of the code.

GridSearchCV

Automated hyperparameter tuning: feed in a parameter grid, get back the best score and the parameters that produced it (practical only for small search spaces).

Parameter reference:

(1) estimator: the classifier to use, constructed with every parameter except the ones being searched. The estimator needs a scoring parameter or a score method, e.g. estimator=RandomForestClassifier(min_samples_split=100, min_samples_leaf=20, max_depth=8, max_features='sqrt', random_state=10)

(2) param_grid: the parameter values to search over, as a dict or a list of dicts, e.g.:

param_test1 = {'n_estimators': range(10, 71, 10)}; param_grid = param_test1

(3) scoring=None: the model evaluation metric. With the default None, the estimator's own score method is used. Otherwise pass a string (metric name) such as scoring='roc_auc' (which metric is appropriate depends on the model), or a callable with the signature scorer(estimator, X, y).

(4) n_jobs=1: number of parallel jobs; an int sets the count, -1 uses as many jobs as CPU cores, and 1 is the default.

(5) iid=True: when True (the default), the samples are assumed to be identically distributed across folds, and the loss is the total over all samples rather than the average over folds. (This parameter has since been deprecated and removed in newer scikit-learn versions.)

(6) refit=True: with the default True, once the search finishes, the best parameters found by cross-validation are used to refit a model on the full training data, and that refitted model is what the GridSearchCV object uses for prediction and final evaluation.

(7) cv=None: the cross-validation setting. An int specifies the number of folds (None meant 3-fold in the scikit-learn version used here; newer versions default to 5-fold); it can also be a generator yielding train/test splits.

(8) verbose=0: log verbosity; 0 prints nothing during training, 1 prints occasionally, >1 prints for every sub-model.

(9) pre_dispatch='2*n_jobs': caps the total number of jobs dispatched at once. When n_jobs > 1, the data is copied for each dispatched job, which can cause out-of-memory errors; setting pre_dispatch limits the number of pre-dispatched jobs so the data is copied at most pre_dispatch times.
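Putting the parameters above together, a minimal sketch (run on a synthetic dataset rather than this post's data; the particular estimator and grid values are only for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic binary-classification data as a stand-in for the real dataset
X, y = make_classification(n_samples=300, n_features=10, random_state=10)

# estimator: fixed hyperparameters; param_grid: the values to search;
# scoring / cv / n_jobs / refit as described above
grid = GridSearchCV(
    estimator=RandomForestClassifier(min_samples_leaf=5, random_state=10),
    param_grid={'n_estimators': range(10, 31, 10)},
    scoring='roc_auc',
    cv=5,
    n_jobs=1,
    refit=True,
)
grid.fit(X, y)
print(grid.best_params_)   # e.g. {'n_estimators': 30}
print(grid.best_score_)    # mean cross-validated AUC of the best setting
```

Because refit=True, `grid` itself can then be used as the final model (`grid.predict`, `grid.predict_proba`), which is exactly how the tuned models are used later in this post.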

# Import libraries
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

from sklearn.model_selection import RandomizedSearchCV,cross_val_predict
from scipy.stats import uniform
from sklearn import metrics
from sklearn.metrics import accuracy_score,precision_score,recall_score,f1_score,roc_auc_score,roc_curve
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.model_selection import GridSearchCV
import numpy as np
# Load the feature-selected dataset
import pandas as pd
data = pd.read_csv('task2_proc.csv')
x = data.iloc[:,:-1]
y = data.iloc[:,-1]
print('feature shape:{}, label shape:{}'.format(x.shape,y.shape))
feature shape:(4455, 50), label shape:(4455,)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

x_train,x_test,y_train,y_test = train_test_split(x,y,test_size = 0.3,random_state=2018)

# Standardize the features: fit the scaler on the training set only,
# then apply the same transform to the test set
scaler = StandardScaler()
x_train_standard = scaler.fit_transform(x_train)
x_test_standard = scaler.transform(x_test)
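Scaling statistics should be learned from the training set only. A pattern that also keeps this guarantee inside each cross-validation fold is to put the scaler in a Pipeline; the sketch below is my addition (on synthetic data), not part of the original post:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=10, random_state=2018)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2018)

# The scaler lives inside the pipeline, so every CV fold fits its own scaler
pipe = Pipeline([('scale', StandardScaler()),
                 ('lr', LogisticRegression(solver='liblinear'))])
# Grid keys are prefixed with the pipeline step name
param = {'lr__C': [0.01, 0.1, 1.0]}
search = GridSearchCV(pipe, param, cv=5)
search.fit(X_tr, y_tr)
print(search.score(X_te, y_te))
```

With this setup there is no way for test-fold statistics to leak into the scaler during the grid search.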
def get_scores(label, y_train, y_test, y_train_predict, y_test_predict, y_train_proba, y_test_proba):
    train_accuracy = metrics.accuracy_score(y_train, y_train_predict)
    test_accuracy = metrics.accuracy_score(y_test, y_test_predict)
    # Precision
    train_precision = metrics.precision_score(y_train, y_train_predict)
    test_precision = metrics.precision_score(y_test, y_test_predict)
    # Recall
    train_recall = metrics.recall_score(y_train, y_train_predict)
    test_recall = metrics.recall_score(y_test, y_test_predict)
    # F1-score
    train_f1_score = metrics.f1_score(y_train, y_train_predict)
    test_f1_score = metrics.f1_score(y_test, y_test_predict)
    # AUC
    train_auc = metrics.roc_auc_score(y_train, y_train_proba)
    test_auc = metrics.roc_auc_score(y_test, y_test_proba)
    # ROC
    train_fprs, train_tprs, train_thresholds = metrics.roc_curve(y_train, y_train_proba)
    test_fprs, test_tprs, test_thresholds = metrics.roc_curve(y_test, y_test_proba)
    
    plt.plot(train_fprs, train_tprs, label=label+' train ROC', linewidth=2)
    plt.plot(test_fprs, test_tprs, label=label+' test ROC', linewidth=2)
    plt.title("ROC Curve")
    plt.xlabel("FPR")
    plt.ylabel("TPR")
    plt.legend()
    plt.show()
    # Print the scores
    print("Train accuracy:", train_accuracy)
    print("Test accuracy:", test_accuracy)
    print("Train precision:", train_precision)
    print("Test precision:", test_precision)
    print("Train recall:", train_recall)
    print("Test recall:", test_recall)
    print("Train F1-score:", train_f1_score)
    print("Test F1-score:", test_f1_score)
    print("Train AUC:", train_auc)
    print("Test AUC:", test_auc)
    
    train = [train_accuracy, train_precision, train_recall, train_f1_score, train_auc]
    test = [test_accuracy, test_precision, test_recall, test_f1_score, test_auc]
    return train, test
# Logistic Regression
param_lr = {'penalty': ['l1', 'l2'],
            'C': [0.0001, 0.001, 0.01, 0.1, 1.0]}
# liblinear supports both the 'l1' and 'l2' penalties
lr = GridSearchCV(LogisticRegression(solver='liblinear'), param_lr, cv=5, n_jobs=-1)
lr.fit(x_train_standard, y_train)
y_train_predict = lr.predict(x_train_standard)
y_test_predict = lr.predict(x_test_standard)
y_train_proba = lr.predict_proba(x_train_standard)[:, 1]
y_test_proba = lr.predict_proba(x_test_standard)[:, 1]
Logistic_train, Logistic_test = get_scores('Logistic', y_train, y_test, y_train_predict, y_test_predict, y_train_proba, y_test_proba)

(figure: Logistic Regression ROC curves for the train and test sets)

Train accuracy: 0.799550994227
Test accuracy: 0.794315632012
Train precision: 0.727513227513
Test precision: 0.643274853801
Train recall: 0.34504391468
Test recall: 0.33950617284
Train F1-score: 0.468085106383
Test F1-score: 0.444444444444
Train AUC: 0.80259936416
Test AUC: 0.806774280039
# SVM
clf = svm.SVC(C=0.6, kernel='rbf', gamma=20, decision_function_shape='ovr')
param_svm = {'C': [0.3, 0.5, 0.6, 0.7],
             'kernel': ['rbf', 'linear'],
             'gamma': [18, 20, 22]}
SVM = GridSearchCV(clf, param_svm, cv=5, n_jobs=-1)
SVM.fit(x_train_standard, y_train)
y_train_predict = SVM.predict(x_train_standard)
y_test_predict = SVM.predict(x_test_standard)
y_train_proba = SVM.decision_function(x_train_standard)
y_test_proba = SVM.decision_function(x_test_standard)
SVM_train, SVM_test = get_scores('SVM', y_train, y_test, y_train_predict, y_test_predict, y_train_proba, y_test_proba)

(figure: SVM ROC curves for the train and test sets)

Train accuracy: 0.788646568313
Test accuracy: 0.792071802543
Train precision: 0.776
Test precision: 0.705357142857
Train recall: 0.243412797992
Test recall: 0.243827160494
Train F1-score: 0.370582617001
Test F1-score: 0.362385321101
Train AUC: 0.80488983624
Test AUC: 0.804135741533
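The SVM code above passes decision_function values where the other models pass predict_proba. That works because roc_auc_score only needs a ranking score, not calibrated probabilities, while SVC exposes predict_proba only when constructed with probability=True. A small self-contained sketch (synthetic data, not this post's dataset):

```python
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = SVC(kernel='rbf', gamma='scale').fit(X, y)

# decision_function returns signed distances to the separating hyperplane;
# AUC only cares about the ordering of the scores, not their scale
scores = clf.decision_function(X)
print(roc_auc_score(y, scores))
```

The same scores could also be fed to roc_curve, as get_scores does.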
# Decision Tree
param_DT = {'max_depth':range(1,10)}
tree = GridSearchCV(DecisionTreeClassifier(), param_DT, cv=5, n_jobs=-1)
tree.fit(x_train_standard, y_train)
y_train_predict = tree.predict(x_train_standard)
y_test_predict = tree.predict(x_test_standard)
y_train_proba = tree.predict_proba(x_train_standard)[:, 1]
y_test_proba = tree.predict_proba(x_test_standard)[:, 1]
DT_train, DT_test = get_scores('DT', y_train, y_test, y_train_predict, y_test_predict, y_train_proba, y_test_proba)

(figure: Decision Tree ROC curves for the train and test sets)

Train accuracy: 0.791853752405
Test accuracy: 0.777860882573
Train precision: 0.697860962567
Test precision: 0.594405594406
Train recall: 0.32747804266
Test recall: 0.262345679012
Train F1-score: 0.445772843723
Test F1-score: 0.364025695931
Train AUC: 0.773804664952
Test AUC: 0.745953834717
# Random Forest
# rf = RandomForestClassifier(n_estimators=1000,criterion='gini',oob_score=True,
#                                random_state=2018,verbose=0,n_jobs=-1)

param_rf = {'n_estimators': range(10, 71, 10)}
rf = GridSearchCV(estimator=RandomForestClassifier(min_samples_split=100, min_samples_leaf=20,
                                                   max_depth=8, max_features='sqrt', random_state=10),
                  param_grid=param_rf, scoring='roc_auc', cv=5)

rf.fit(x_train_standard, y_train)
y_train_predict = rf.predict(x_train_standard)
y_test_predict = rf.predict(x_test_standard)
y_train_proba = rf.predict_proba(x_train_standard)[:, 1]
y_test_proba = rf.predict_proba(x_test_standard)[:, 1]
RM_train, RM_test = get_scores('RM', y_train, y_test, y_train_predict, y_test_predict, y_train_proba, y_test_proba)

(figure: Random Forest ROC curves for the train and test sets)

Train accuracy: 0.80660679923
Test accuracy: 0.795811518325
Train precision: 0.843971631206
Test precision: 0.757575757576
Train recall: 0.298619824341
Test recall: 0.231481481481
Train F1-score: 0.441149212234
Test F1-score: 0.354609929078
Train AUC: 0.858928651551
Test AUC: 0.802271093074
# XGBoost
param_xgb = dict(
    max_depth = [4, 5, 6, 7],
    learning_rate = np.linspace(0.03, 0.3, 10),
    n_estimators = [100, 200]
)
xgb = GridSearchCV(XGBClassifier(), param_xgb, cv=5, n_jobs=-1)
xgb.fit(x_train_standard, y_train)
y_train_predict = xgb.predict(x_train_standard)
y_test_predict = xgb.predict(x_test_standard)
y_train_proba = xgb.predict_proba(x_train_standard)[:, 1]
y_test_proba = xgb.predict_proba(x_test_standard)[:, 1]
XGBoost_train, XGBoost_test = get_scores('XGBoost', y_train, y_test, y_train_predict, y_test_predict, y_train_proba, y_test_proba)

(figure: XGBoost ROC curves for the train and test sets)

Train accuracy: 0.859846055164
Test accuracy: 0.795811518325
Train precision: 0.882978723404
Test precision: 0.660377358491
Train recall: 0.520702634881
Test recall: 0.324074074074
Train F1-score: 0.655090765588
Test F1-score: 0.434782608696
Train AUC: 0.927930947429
Test AUC: 0.809117277857
# Compare scores across the models
model_name = ['Logistic','SVM','DecisionTree','RandomForest','xgboost']
columns = ['accuracy','precision','recall','f1','roc_auc']
ttype = ['train','test']
pd_list = []
model_score_train = [Logistic_train, SVM_train, DT_train, RM_train, XGBoost_train]
model_score_test = [Logistic_test, SVM_test, DT_test, RM_test, XGBoost_test]
for train,test in zip(model_score_train,model_score_test):
    pd_list.append(pd.DataFrame([train,test],index=ttype,columns=columns))
    
pd.concat(pd_list,axis=0,keys=model_name)

(figure: combined train/test score table for all five models)
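pd.concat with keys= builds a two-level row index of (model, split), which is what makes the combined table readable. A toy illustration of the resulting shape, with made-up scores:

```python
import pandas as pd

a = pd.DataFrame([[0.80, 0.81], [0.79, 0.80]],
                 index=['train', 'test'], columns=['accuracy', 'roc_auc'])
b = pd.DataFrame([[0.86, 0.93], [0.80, 0.81]],
                 index=['train', 'test'], columns=['accuracy', 'roc_auc'])

# keys= labels each frame, producing a MultiIndex of (model, split)
table = pd.concat([a, b], axis=0, keys=['Logistic', 'xgboost'])
print(table)

# Individual rows are addressed by tuple, e.g.:
print(table.loc[('xgboost', 'test')])
```

This makes it easy to slice out, say, all test-set rows with `table.xs('test', level=1)`.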

For comparison, the scores before tuning:
(figure: score table before parameter tuning)
