CDA Level 2 Modeling Case (11th Exam): A Python Implementation

I. Preface

The case question on this exam was harder than the mock exam: only two hours, no internet access, and a somewhat specialized industry (banking). The answer below took me another two hours after the exam, so about four hours in total including the code written during the exam itself. It uses several data-cleaning techniques, including .loc indexing, splitting a string column into multiple columns, and oversampling. For reference only.

II. Case Description

Build the following classification model from the given background material and data; the final deliverable is a prediction file for the test data.

  1. Task: identify small and medium-sized enterprise (SME) borrowers with financing needs so the bank can market loan products to them.

  2. Background: what SMEs need most in order to grow quickly is capital. If a bank can identify SME account holders with financing needs and market loan products to them, it can generate considerable revenue while improving those SMEs' operations. This exam therefore provides data on a bank's SME customers; candidates must build a classification model that identifies SME borrowers with financing needs.

  3. Procedure: candidates receive a training set and a test set. The training set contains 26,144 customer records, each with 26 fields: 1 customer ID field, 24 input fields, and 1 target field, VV (whether the customer is an SME account with financing needs: 1 = yes, 0 = no). Field definitions are in Description.xlsx. The test set contains 6,537 records with the same fields, except that every target value is "Withheld". Note that both sets contain a fair amount of erroneous and missing values; handling this noise and missingness well will improve the model's predictive performance. Using the training data, candidates build a classification model to identify SME borrowers with financing needs and output a results file (<candidate name>_results.csv) with exactly two fields: the customer ID and the predicted label. The file looks like this:

ID,Predicted_Results
1,1
2,0
8,0

Finally, <candidate name>_results.csv must be zipped together with the code and related modeling files (submissions missing these files lose points) and copied to the proctor. Make sure the format of <candidate name>_results.csv is correct, or points will also be deducted.

  4. Scoring: predictions are evaluated by the F-Measure on the positive class (SME accounts with financing needs). Candidates are ranked by F-Measure; higher is better.
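For reference, the F-Measure used here is the F1 score on the positive class, i.e. the harmonic mean of precision and recall: F1 = 2PR / (P + R). A toy check with scikit-learn:

from sklearn.metrics import f1_score

# 3 predicted positives, 2 correct: precision = 2/3, recall = 2/2 = 1
y_true = [1, 0, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0]
print(f1_score(y_true, y_pred, pos_label=1))  # 2*(2/3*1)/(2/3+1) = 0.8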

III. Code and Analysis

1. Data import
import pandas as pd
import numpy as np
import warnings
from imblearn.over_sampling import SMOTE
warnings.filterwarnings("ignore")
pd.options.display.max_columns = None  # show all columns
pd.set_option('display.float_format', lambda x: '%.2f' % x)  # disable scientific notation in output

train_data = pd.read_csv('Training.csv')
test_data = pd.read_csv('Test.csv')
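Before cleaning, a quick check that the loaded shapes match the problem statement (26,144 training rows and 6,537 test rows, 26 columns each):

print(train_data.shape)  # expected (26144, 26)
print(test_data.shape)   # expected (6537, 26)
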
2. Data cleaning
# combine train and test so the same cleaning applies to both
total_data = pd.concat([train_data, test_data])

# replace the '.' and '?' placeholders with NaN
total_data = total_data.replace('.', np.nan)
total_data = total_data.replace('?', np.nan)

# set invalid values in area (shorter than the 3-character region code) to NaN
# print(total_data[['area']].info())
total_data['area'] = total_data['area'].where(total_data['area'].str.len() >= 3)
# print(total_data[['area']].info())
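# the categorical features area1/area2/area3 used later are the hierarchical
# prefixes of the 3-character region code (1st char = major category, first
# 2 chars = middle category, and so on); the original excerpt omits this
# step, so here is a minimal reconstruction of the string split:
total_data['area1'] = total_data['area'].str[:1]
total_data['area2'] = total_data['area'].str[:2]
total_data['area3'] = total_data['area'].str[:3]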

# normalize column names, then convert object columns that should be numeric
total_data.rename(columns={'depsaveavg':'dep-saveavg', 'depdrawavg': 'dep-drawavg'}, inplace=True)
num_features = list(set(total_data.columns) - set(['ID', 'area', 'ck', 'comp', 'VV', 'area1', 'area2', 'area3']))
for col in num_features:
    total_data[col] = pd.to_numeric(total_data[col])
# total_data.info()

# fill missing ck-*/dep-* values using the identity: all = time * avg
for way in ['ck-save', 'ck-draw', 'dep-save', 'dep-draw']:
    total_data['new_{}all'.format(way)] = total_data['{}time'.format(way)] * total_data['{}avg'.format(way)]
    total_data['new_{}time'.format(way)] = total_data['{}all'.format(way)] / total_data['{}avg'.format(way)]
    total_data['new_{}avg'.format(way)] = total_data['{}all'.format(way)] / total_data['{}time'.format(way)]
    total_data.loc[total_data['{}all'.format(way)].isnull(),'{}all'.format(way)] = total_data[total_data['{}all'.format(way)].isnull()]['new_{}all'.format(way)]
    total_data.loc[total_data['{}time'.format(way)].isnull(),'{}time'.format(way)] = total_data[total_data['{}time'.format(way)].isnull()]['new_{}time'.format(way)]
    total_data.loc[total_data['{}avg'.format(way)].isnull(),'{}avg'.format(way)] = total_data[total_data['{}avg'.format(way)].isnull()]['new_{}avg'.format(way)]
# print(total_data.info())

# fill 'ck': if any ck-related field is positive, the account must have checking activity, so ck = 1
# print(total_data['ck'].value_counts())
total_data.loc[(total_data['ck-saveall']>0)|(total_data['ck-drawall']>0)|(total_data['ck-drawtime']>0)|(total_data['ck-saveavg']>0)
               |(total_data['ck-drawavg']>0)|(total_data['ck-savetime']>0)|(total_data['ck-changame']>0)|(total_data['ck-changtime']>0)
               |(total_data['ck-avg']>0), 'ck'] = '1'
# print(total_data['ck'].value_counts())

(Figure: data dictionary from Description.xlsx; the related fields are highlighted in yellow)
A few things can be read off the data dictionary:
1) area is a string: the first character of the region code is the major category, the first two characters the middle category, and so on. A valid code should therefore have 3 characters, so shorter values are set to NaN and the three prefixes are extracted as the separate columns area1/area2/area3 (see the cleaning code above).
2) The fields highlighted in yellow are arithmetically related, e.g. ck-saveall = ck-savetime × ck-saveavg, so missing values can be filled by computing them from the other two (a quick consistency check is sketched right after this list).
3) The ck field is tied to all the ck-* fields: in principle, if any of them is > 0, then ck should be 1.
4) The remaining fields probably have internal relationships as well, but without banking domain knowledge I didn't dare touch them.
5) I don't recommend imputing missing values with the mean. Since I use LightGBM later, missing values can simply be left unfilled (LightGBM handles NaN natively); with RandomForest, filling them with -1 is advisable instead.
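To verify that rule 2) actually holds before relying on it for imputation, one can count fully observed rows that violate the identity. A minimal sketch (the 1e-6 tolerance is my assumption, and the ck-save* triple stands in for the other field triples):

# count fully observed rows where all = time * avg fails
cols = ['ck-saveall', 'ck-savetime', 'ck-saveavg']
full = total_data[cols].dropna()
bad = (full['ck-saveall'] - full['ck-savetime'] * full['ck-saveavg']).abs() > 1e-6
print(bad.sum(), 'of', len(full), 'rows violate the identity')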

3. Data integration and oversampling
# keep only the useful columns
cate_features = ['area1', 'area2', 'area3', 'ck', 'comp']
predictors = num_features + cate_features
all_columns = predictors + ['ID', 'VV']
total_data = total_data[all_columns]
total_data = total_data.fillna(-1)  # SMOTE below cannot handle NaN, so remaining gaps are filled with -1

for col in cate_features:
    total_data[col] = pd.to_numeric(total_data[col])
    
new_train_data = total_data[total_data['VV'] != 'Withheld'].copy()  # .copy() avoids SettingWithCopyWarning below
new_test_data = total_data[total_data['VV'] == 'Withheld'].copy()

# oversample the minority class
smo = SMOTE(random_state=42)
new_train_data['VV'] = new_train_data['VV'].astype(int)
X_smo, y_smo = smo.fit_resample(new_train_data[predictors], new_train_data['VV'])  # fit_sample() in old imbalanced-learn versions
last_train_data = pd.concat([X_smo, y_smo], axis=1)
last_train_data  # inspect the resampled training set

Note: scoring is based on the F-Measure of the positive class (SME accounts with financing needs), and the classes are highly imbalanced, so oversampling is used to rebalance the training distribution. I used SMOTE here (in the actual exam there was no time, so I simply replicated the positive samples 40x; see the sketch below). One caveat: because oversampling happens before the train/validation split in the next section, the validation set contains synthetic, balanced data, so validation scores will look better than true test performance.
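For completeness, the exam-time fallback mentioned above (replicating the positive samples about 40x) would look roughly like this sketch, assuming VV has already been cast to int as in the code above:

# naive oversampling: the original data plus 39 extra copies of the positives
pos = new_train_data[new_train_data['VV'] == 1]
last_train_data = pd.concat([new_train_data] + [pos] * 39, ignore_index=True)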

4. Modeling

1) A simple LightGBM prediction

# simple prediction with a train/validation split
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.metrics import classification_report
import lightgbm as lgb

params = {'num_leaves': 30,  # strongly affects the result; larger fits more complexity but risks overfitting
          'min_data_in_leaf': 30,
          'objective': 'binary',  # objective function
          'max_depth': -1,
          'learning_rate': 0.01,
          "min_sum_hessian_in_leaf": 6,
          "boosting": "gbdt",
          "feature_fraction": 0.8,  # fraction of features sampled per tree
          "bagging_freq": 1,
          "bagging_fraction": 0.8,
          "bagging_seed": 11,
          "lambda_l1": 0.1,  # L1 regularization
          # 'lambda_l2': 0.001,  # L2 regularization
          "verbosity": -1,
          "nthread": -1,  # number of threads; -1 uses all of them
          'metric': {'binary_logloss'},  # evaluation metric
          "random_state": 2019,  # seed, so repeated runs give the same result
          # 'device': 'gpu'  # speeds up training if the GPU build of LightGBM is installed
          }

X_train, X_val, y_train, y_val = train_test_split(last_train_data[predictors], last_train_data["VV"], 
                test_size=0.2, random_state=2019)
training_data = lgb.Dataset(X_train, label=y_train)
val_data = lgb.Dataset(X_val, label=y_val, reference=training_data)

evals_result = {}  # stores the evaluation history during training

# note: in LightGBM >= 4.0, early_stopping_rounds / evals_result /
# verbose_eval must be passed via callbacks instead of keyword arguments
model = lgb.train(params, 
                  training_data, 
                  num_boost_round=10000, 
                  valid_sets=val_data, 
                  early_stopping_rounds=100, 
                  categorical_feature = cate_features,
                  evals_result = evals_result,
                  verbose_eval=500)
val_pred = model.predict(X_val)
val_pred = np.where(val_pred>=0.5, 1, 0)
val_true = y_val.values  # .as_matrix() was removed in pandas 1.0
print(classification_report(val_true,val_pred))
test_pred = model.predict(new_test_data[predictors])
test_pred = np.where(test_pred>=0.5, 1, 0)
print(sum(test_pred))
print(len(test_pred))
answer = new_test_data.copy()
answer['Predicted_Results'] = test_pred  # the original assigned the undefined name `pred` here
answer[['ID', 'Predicted_Results']].to_csv('张磊_results.csv', index=False)  # header must match the required format
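One optional refinement that is not in the original code: since the score is F1 on the positive class, the fixed 0.5 cutoff is not necessarily optimal. Sweeping the threshold on the validation set is a cheap check (a sketch reusing model, X_val and val_true from above; the 0.30-0.70 grid is an arbitrary choice):

# pick the validation-F1-maximizing threshold from a small grid
val_prob = model.predict(X_val)
best_t = max(np.arange(0.30, 0.71, 0.01),
             key=lambda t: f1_score(val_true, (val_prob >= t).astype(int)))
print('best threshold: %.2f, F1: %.4f'
      % (best_t, f1_score(val_true, (val_prob >= best_t).astype(int))))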

2) LightGBM with 5-fold cross-validation

# 5-fold cross-validation
import lightgbm as lgb
from sklearn.model_selection import KFold
from sklearn.metrics import f1_score

train_x = last_train_data[predictors]
train_y = last_train_data['VV']
test_x = new_test_data[predictors]

X, y, X_test = train_x.values, train_y.values, test_x.values  # convert to numpy arrays

# custom F1 metric for lgb.train: feval receives (preds, dataset) and must
# return (name, value, is_higher_better); the original version had the
# arguments swapped and scored the raw probabilities instead of the labels
def self_metric(preds, train_data):
    labels = train_data.get_label()
    pred = np.where(preds >= 0.5, 1, 0)
    return 'f1', f1_score(labels, pred), True

param = {'num_leaves': 30,  # strongly affects the result; larger fits more complexity but risks overfitting
          'min_data_in_leaf': 30,
          'objective': 'binary',  # objective function
          'max_depth': -1,
          'learning_rate': 0.01,
          "min_sum_hessian_in_leaf": 6,
          "boosting": "gbdt",
          "feature_fraction": 0.8,  # fraction of features sampled per tree
          "bagging_freq": 1,
          "bagging_fraction": 0.8,
          "bagging_seed": 11,
          "lambda_l1": 0.1,  # L1 regularization
          # 'lambda_l2': 0.001,  # L2 regularization
          "verbosity": -1,
          "nthread": -1,  # number of threads; -1 uses all of them
          'metric': {'binary_logloss'},  # evaluation metric
          "random_state": 2019,  # seed, so repeated runs give the same result
          # 'device': 'gpu'  # speeds up training if the GPU build of LightGBM is installed
          }

# 5-fold cross-validation
folds = KFold(n_splits=5, shuffle=True, random_state=36)
predictions = []  # per-fold probability predictions on the test set

for fold_, (train_index, test_index) in enumerate(folds.split(X, y)):
    print("Fold {}:".format(fold_ + 1))
    X_train, X_valid, y_train, y_valid = X[train_index], X[test_index], y[train_index], y[test_index]
    training_data = lgb.Dataset(X_train, label=y_train)     # training fold
    validation_data = lgb.Dataset(X_valid, label=y_valid)   # validation fold
    clf = lgb.train(param, 
                    training_data, 
                    num_boost_round=10000, 
                    valid_sets=[validation_data], 
                    verbose_eval=1000, 
                    early_stopping_rounds=100,
#                     feval = self_metric
                    )
    x_pred = clf.predict(X_valid, num_iteration=clf.best_iteration)
    x_pred = np.where(x_pred>0.5,1,0)
    print(f1_score(y_valid, x_pred))
    y_test = clf.predict(X_test, num_iteration=clf.best_iteration)  # predict probabilities on the test set
#     print(y_test[:10])
    predictions.append(y_test)

# average the five folds' predicted probabilities, then binarize
pred1 = np.mean(predictions, axis=0)
pred = np.where(pred1>=0.5, 1, 0)
print(sum(pred))
print(len(pred))
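The original excerpt of this variant stops at the prediction counts; to actually produce the submission file, the same two-column output as in variant 1 would be written, e.g.:

# write the cross-validated predictions in the required format
answer = new_test_data.copy()
answer['Predicted_Results'] = pred
answer[['ID', 'Predicted_Results']].to_csv('张磊_results.csv', index=False)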

3) 5-fold cross-validation with LGBMClassifier

from sklearn.model_selection import KFold
from sklearn.metrics import f1_score
from lightgbm import LGBMClassifier
import lightgbm as lgb
from scipy import stats

X, y, X_test = last_train_data[predictors].values, last_train_data['VV'].values, new_test_data[predictors].values  # convert to numpy arrays

folds = KFold(n_splits=5, shuffle=True, random_state=36)
predictions = [] # per-fold hard predictions on the test set

for k, (train_index, test_index) in enumerate(folds.split(X, y)):
    print("Fold {}:".format(k + 1))
    X_train, X_valid, y_train, y_valid = X[train_index], X[test_index], y[train_index], y[test_index]
    clg = LGBMClassifier(
        boosting_type="gbdt",
        learning_rate=0.1,
        colsample_bytree=0.8,
#         max_depth=5,
#         n_estimators=100,
        num_leaves=31,
        reg_alpha=0.1,    # L1 regularization (lambda_l1)
        reg_lambda=0.1,   # L2 regularization (lambda_l2)
        random_state=0
    )
    clg.fit(X_train,y_train,eval_set=[(X_valid, y_valid)],verbose=-1)
    train_pred = clg.predict(X_train)
    valid_pred = clg.predict(X_valid)
    print("本轮训练集得分:%.2f%%"%(f1_score(y_train,train_pred)*100))
    print("本轮验证集得分:%.2f%%"%(f1_score(y_valid,valid_pred)*100))
    pred = clg.predict(X_test)
    predictions.append(pred)
    
last_pred = stats.mode(predictions, axis=0, keepdims=True)[0][0]  # hard majority vote across folds (keepdims=True needed on SciPy >= 1.11)
new_test_data['VV'] = last_pred
new_test_data[['ID', 'VV']].to_csv('up_answer.csv', index=False)
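A side note on the aggregation: stats.mode takes a hard majority vote over the five fold models. An alternative, consistent with variant 2, is to average the class-1 probabilities and threshold once, which keeps the confidence information. A sketch (prob_predictions is hypothetical; it would be filled with clg.predict_proba(X_test)[:, 1] inside the loop above):

# hypothetical: prob_predictions holds each fold's predict_proba(X_test)[:, 1]
prob_mean = np.mean(prob_predictions, axis=0)  # average class-1 probability
last_pred = np.where(prob_mean >= 0.5, 1, 0)   # single thresholding step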