Notes on a Failed Kaggle Competition (1): Problem Overview and First Attempt



Problem description: https://www.kaggle.com/c/santander-customer-satisfaction

Quick summary: a pile of anonymized features; the label is 0/1; the goal is to maximize AUC.


First attempt:


Features:

Since there was plenty of time, I simply used a greedy brute-force search to pick a good feature subset:

#(Python 2, old sklearn API)
from sklearn.linear_model import LogisticRegression
from sklearn.externals import joblib

#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! better to use an RFC or GBC as the clf here,
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! because the final prediction models are those two:
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! we should select features that suit the RFC/GBC, not LR
clf = LogisticRegression(class_weight='balanced', penalty='l2', n_jobs=-1)
selectedFeaInds=GreedyFeatureAdd(clf, trainX, trainY, scoreType="auc", goodFeatures=[], maxFeaNum=150)
joblib.dump(selectedFeaInds, 'modelPersistence/selectedFeaInds.pkl')
#selectedFeaInds=joblib.load('modelPersistence/selectedFeaInds.pkl')
trainX=trainX[:,selectedFeaInds]
testX=testX[:,selectedFeaInds]
print trainX.shape
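
GreedyFeatureAdd is a custom helper whose body is not shown in this post. As a rough illustration of the idea only, forward selection by cross-validated AUC, a minimal sketch might look like the following; the cv=3 and the stopping rule are my assumptions, not the original code:

#Hypothetical sketch of GreedyFeatureAdd: repeatedly add the single feature
#that most improves cross-validated AUC, until maxFeaNum or no improvement.
from sklearn.cross_validation import cross_val_score

def GreedyFeatureAdd(clf, X, y, scoreType="auc", goodFeatures=[], maxFeaNum=150):
    #scoreType kept for signature compatibility; only AUC is implemented here
    goodFeatures=list(goodFeatures)
    bestScore=0.0
    while len(goodFeatures)<maxFeaNum:
        candidates=[]
        for fea in range(X.shape[1]):
            if fea in goodFeatures:
                continue
            cols=goodFeatures+[fea]
            auc=cross_val_score(clf, X[:,cols], y, scoring="roc_auc", cv=3).mean()
            candidates.append((auc, fea))
        auc, fea=max(candidates)
        if auc<=bestScore:   #stop once no candidate improves the CV AUC
            break
        bestScore=auc
        goodFeatures.append(fea)
        print "added feature %d, cv auc=%f" % (fea, auc)
    return goodFeatures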


Models:

Directly using the three most common sklearn model classes (five instances in total):

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, GradientBoostingClassifier
from sklearn.cross_validation import StratifiedKFold
from sklearn import metrics

SEED=0    #any fixed seed works; the actual value is not given in the post
nFold=5   #number of CV folds (assumed; not given in the post)

trainN=len(trainY)

print "Creating train and test sets for blending..."
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! always use a seed for randomized procedures
models=[
    RandomForestClassifier(n_estimators=1999, criterion='gini', n_jobs=-1, random_state=SEED),
    RandomForestClassifier(n_estimators=1999, criterion='entropy', n_jobs=-1, random_state=SEED),
    ExtraTreesClassifier(n_estimators=1999, criterion='gini', n_jobs=-1, random_state=SEED),
    ExtraTreesClassifier(n_estimators=1999, criterion='entropy', n_jobs=-1, random_state=SEED),
    GradientBoostingClassifier(learning_rate=0.1, n_estimators=101, subsample=0.6, max_depth=8, random_state=SEED)
]
#StratifiedKFold is a variation of k-fold which returns stratified folds: each set contains approximately the same percentage of samples of each target class as the complete set.
#kfcv=KFold(n=trainN, n_folds=nFold, shuffle=True, random_state=SEED)
kfcv=StratifiedKFold(y=trainY, n_folds=nFold, shuffle=True, random_state=SEED)
dataset_trainBlend=np.zeros( ( trainN, len(models) ) )
dataset_testBlend=np.zeros( ( len(testX), len(models) ) )
meanAUC=0.0
for i, model in enumerate(models):
    print "model ", i, "=="*20
    dataset_testBlend_j=np.zeros( ( len(testX), nFold ) )
    for j, (trainI, testI) in enumerate(kfcv):
        print "Fold ", j, "^"*20
        #fit on this fold's training part and predict the held-out rows out-of-fold
        model.fit(trainX[trainI], trainY[trainI])
        dataset_trainBlend[testI, i]=model.predict_proba(trainX[testI])[:,1]
        #predict the real test set with the model trained on this fold
        dataset_testBlend_j[:, j]=model.predict_proba(testX)[:,1]
    #average the nFold test-set predictions into a single column per model
    dataset_testBlend[:, i]=dataset_testBlend_j.mean(axis=1)



Final results:

Blending the predictions of all the models:

print "Blending models..."
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! if predicting real values instead of classes, use RidgeCV
model=LogisticRegression(n_jobs=-1)
C=np.linspace(0.001,1.0,1000)
trainAucList=[]
for c in C:
    model.C=c
    model.fit(dataset_trainBlend,trainY)
    trainProba=model.predict_proba(dataset_trainBlend)[:,1]
    trainAuc=metrics.roc_auc_score(trainY, trainProba)
    trainAucList.append((trainAuc, c))
sortedtrainAucList=sorted(trainAucList)
for trainAuc, c in sortedtrainAucList:
    print "c=%f => trainAuc=%f" % (c, trainAuc)

model.C=sortedtrainAucList[-1][1] #0.05
model.fit(dataset_trainBlend,trainY)
trainProba=model.predict_proba(dataset_trainBlend)[:,1]
print "train auc: %f" % metrics.roc_auc_score(trainY, trainProba)  #0.821439
print "model.coef_: ", model.coef_

print "Predict and saving results..."
submitProba=model.predict_proba(dataset_testBlend)[:,1]
df=pd.DataFrame(submitProba)
print df.describe()
SaveFile(submitID, submitProba, fileName="1submit.csv")
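
One caveat about the snippet above: C is tuned by AUC on the very rows the blender was fit on, which tends to pick an over-optimistic value. A possible alternative, sketched here with the same variables and the old cross_val_score API (not what the original post did), scores each C out of fold instead:

#Sketch: pick C by cross-validated AUC on the blend features instead of
#training AUC; less likely to overfit the second-level model.
from sklearn.cross_validation import cross_val_score

cvAucList=[]
for c in C:
    model.C=c
    cvAuc=cross_val_score(model, dataset_trainBlend, trainY, scoring="roc_auc", cv=5).mean()
    cvAucList.append((cvAuc, c))
print "best (cvAuc, c): ", sorted(cvAucList)[-1]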

Normalizing the predictions:

print "MinMaxScaler predictions to [0,1]..."
mms=preprocessing.MinMaxScaler(feature_range=(0, 1))
submitProba=mms.fit_transform(submitProba)
df=pd.DataFrame(submitProba)
print df.describe()
SaveFile(submitID, submitProba, fileName="1submitScale.csv")




Lessons learned from these results:

First: brute-force (greedy) feature search does not scale when there are many features, since every round fits one cross-validated model per remaining candidate feature; it is only worth considering with a small feature count (<200).

Second: among these sklearn models, ExtraTreesClassifier performed the worst; RandomForestClassifier performed well and is reasonably fast; GradientBoostingClassifier gave the best results but is very slow (boosting builds trees sequentially, so it cannot be parallelized).

Third: when one model (here GradientBoostingClassifier) is much better than all the others, do not blend, especially when the feature space is identical and the classifiers are similar, as here, where all five classifiers are tree-based and built on the same feature set; blending then tends to drag the overall score below that of the best single model.

Fourth: AUC only cares about the relative ranking of the samples, not the absolute values of the scores, so there is no need to normalize the predictions (look up the definition of AUC if this is unclear); a quick numerical check follows below.
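
To see the fourth point concretely: min-max scaling is a strictly monotonic transform, and AUC is invariant under any such transform. A toy example with made-up labels and scores:

#AUC depends only on the ranking of the scores, so min-max scaling
#(a strictly monotonic transform) cannot change it.
import numpy as np
from sklearn import metrics

y=np.array([0, 0, 1, 1, 0, 1])
p=np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])
print metrics.roc_auc_score(y, p)                              #0.888889
print metrics.roc_auc_score(y, (p-p.min())/(p.max()-p.min()))  #same: 0.888889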



Follow-up lessons will be posted as they come; stay tuned ^_^




