Machine Learning — Multi-Feature Classification Exercise: Titanic Survival Classification (Gradient-Boosted Decision Trees, GradientBoostingClassifier, 16 features, 87.5%)

一、Reference links:

https://blog.csdn.net/ztf312/article/details/98596968

https://blog.csdn.net/wydyttxs/article/details/76695205

https://zhuanlan.zhihu.com/p/51886442

 

Dataset download link: https://www.kaggle.com/c/titanic

 

二、Data preprocessing overview

 

三、Feature selection and screening: lessons learned

(Figure: the original data)

(Figure: how each feature affects survival)

(Figure: correlations between the features)

四、Model selection

 

五、Model performance evaluation

 

六、Python implementation

 

The implementation is split into two files: (1) data processing and feature selection; (2) model selection, training, and evaluation. Run file 1 first; it writes deal_train.csv, deal_test.csv, and Analyse.csv, which file 2 then reads.

 

File 1: FeatureAnalyse.py

"""
Created on Thu Apr 28 10:00 2021

@author: 杜文涛

"""

# (Analyse.csv, deal_train.csv and deal_test.csv are blank files created in advance)


import os
from sklearn.feature_selection import SelectKBest, f_classif  # feature scoring; f_classif -> one-way ANOVA F-test
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import warnings
warnings.filterwarnings('ignore')

# show all columns
pd.set_option('display.max_columns', None)
# show all rows
pd.set_option('display.max_rows', None)
# set the displayed column width to 100 (default 50)
pd.set_option('display.max_colwidth', 100)

def main():
    """
        1. Load the data
    """
    Train_Date = pd.read_csv(os.path.abspath('train.csv'))
    Test__Date = pd.read_csv(os.path.abspath('test.csv'))
    #print("Raw feature columns:\n", Train_Date.columns.values)  # Pclass (ticket class), Survived, Sex, Embarked (port of embarkation),
                                                                 # Age, Fare, SibSp (siblings/spouses aboard), Parch (parents/children aboard),
                                                                 # Name, Ticket (ticket number), Cabin (cabin number)
    """
        2. 训练数据 特征表格分析
    """
    Analyse = Train_Date.describe()
    #AnalyseSave = pd.DataFrame(Analyse)
    #AnalyseSave.to_csv(os.path.abspath('Analyse.csv')) # 分析结果保存
    #print("\n",Train_Date.head(3)) #查看前3行数据
    #print("\n",Analyse) #对每个特征进行基本参数统计分析

    """
        3. 创建新的整体列表(训练数据和测试数据统一预处理)
    """
    AllData_Sheet = pd.concat([Train_Date, Test__Date], ignore_index=True)
    """
        4. 补齐原始数据 ( Age(年龄)  Embarked(出发地)   Cabin(船舱号) )
    """
    #print("\n", Train_Date.info())
#<1> Age
    #AllData_Sheet["Age"] = AllData_Sheet["Age"].fillna(AllData_Sheet["Age"].mode()[0])  # fill Age with its mode
    AllData_Sheet["Age"] = AllData_Sheet["Age"].fillna(AllData_Sheet["Age"].median())  # fill Age with its median
    #sns.catplot('Age', hue='Survived', data=AllData_Sheet, kind='count', aspect=6.0)
    #print("\n", AllData_Sheet.describe())
    AllData_Sheet["Age_Sore"] = pd.cut(AllData_Sheet["Age"],  # split Age into 3 bins with integer labels;
                                       bins=[-1, 15, 54, 100], labels=[1, 2, 3])  # the edges were chosen after plotting the distribution
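    # A quick illustration of what pd.cut does here (the values are made up):
    #   pd.cut(pd.Series([5, 30, 80]), bins=[-1, 15, 54, 100], labels=[1, 2, 3])
    #   -> [1, 2, 3]  (child / adult / elderly buckets; right edges are inclusive)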

#<2> Embarked
    AllData_Sheet.Embarked.fillna(AllData_Sheet.Embarked.mode()[0], inplace=True)  # fill with the mode
#<3> Cabin
    AllData_Sheet['Cabin'] = (AllData_Sheet.Cabin.notnull()).astype(float)  # encode as missing-or-not (0.0 / 1.0)
    #print("\n", AllData_Sheet.info())
    #print("\n", AllData_Sheet.head(6))

    """
         5. 将字符串数据,处理成数字数据 ( Name(姓名) Sex(性别) Ticket(船票号) Embarked(出发地) )
    """
#<一> Sex(性别)
    #print(AllData_Sheet["Sex"].unique()) # 查看特征所拥有的类别数
    AllData_Sheet.loc[AllData_Sheet["Sex"] == "male", "Sex"] = 0   #少数类别下,字符串标签数字化
    AllData_Sheet.loc[AllData_Sheet["Sex"] == "female", "Sex"] = 1
#<二> Embarked(出发地)
    #print(AllData_Sheet["Embarked"].unique()) # 查看特征所拥有的类别数
    AllData_Sheet.loc[AllData_Sheet["Embarked"] == "S", "Embarked"] = 0 #少数类别下,字符串标签数字化
    AllData_Sheet.loc[AllData_Sheet["Embarked"] == "C", "Embarked"] = 1
    AllData_Sheet.loc[AllData_Sheet["Embarked"] == "Q", "Embarked"] = 2
    # 对于数据中含有几个特征的影响力远远大于剩下类别特征时,经单独拆开自成一个特征
    AllData_Sheet["Embarked_S"] = np.where(((AllData_Sheet.Embarked == 0)), 1, 0)
    AllData_Sheet["Embarked_C"] = np.where(((AllData_Sheet.Embarked == 1)), 1, 0)
    #AllData_Sheet["Embarked_Q"] = np.where(((AllData_Sheet.Embarked == 2)), 1, 0) #特征存活关联弱,去掉
    #sns.catplot('Embarked', hue='Survived', data=AllData_Sheet, kind='count')
#<3> Name (features used: name length, title, bracketed maiden name)
    AllData_Sheet['NameTitle'] = AllData_Sheet.Name.str.extract('([A-Za-z]+)\.')  # extract the title from the name
                                                                                  # when a string feature has many distinct values, the string itself
                                                                                  # carries information: process such features before digitizing them.
                                                                                  # This is where implementations differ most in skill and performance.
    mapping_title = np.array(AllData_Sheet["NameTitle"].unique())
    #mapping_len = len(mapping_title)
    mapping_title = mapping_title.tolist()
    mapping_title_Score = dict(zip(mapping_title, [1, 2, 3, 0, 4, 5, 0, 0, 6, 0, 0, 7, 7, 0, 0, 0, 0]))  # map the titles found in NameTitle to numeric scores
    # when a few categories matter far more than the others of the same feature, split each out into its own binary feature
    mapping_title_Mr = dict(zip(mapping_title,  [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]))  # one-hot dict for the Mr title
    mapping_title_Mrs = dict(zip(mapping_title, [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]))  # one-hot dict for the Mrs title
    mapping_title_Miss = dict(zip(mapping_title, [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]))  # one-hot dict for the Miss title
    #print(mapping_title)
    AllData_Sheet['NameTitle_Sore'] = AllData_Sheet.NameTitle. \
        map(lambda x: mapping_title_Score.get(x, x))  # digitize NameTitle via the score dict
    AllData_Sheet['NameTitle_Mr'] = AllData_Sheet.NameTitle. \
        map(lambda x: mapping_title_Mr.get(x, x))  # digitize via the Mr dict
    AllData_Sheet['NameTitle_Mrs'] = AllData_Sheet.NameTitle. \
        map(lambda x: mapping_title_Mrs.get(x, x))  # digitize via the Mrs dict
    AllData_Sheet['NameTitle_Miss'] = AllData_Sheet.NameTitle. \
        map(lambda x: mapping_title_Miss.get(x, x))  # digitize via the Miss dict
    AllData_Sheet['NameBracket'] = AllData_Sheet.Name.str.contains('\([A-Za-z]+.+[A-Za-z]\)').astype(int)  # flag names containing a bracketed part
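    # e.g. 'Cumings, Mrs. John Bradley (Florence Briggs Thayer)' matches the
    # bracket pattern -> NameBracket = 1; names without a '(...)' part get 0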
    AllData_Sheet['NameLen'] = AllData_Sheet.Name.str.len()  # name length
    #print(AllData_Sheet.describe())
    #sns.catplot('NameLen', hue='Survived', data=AllData_Sheet, kind='count')  # inspect the distribution before choosing the bins
    AllData_Sheet["NameLen_Section"] = pd.cut(AllData_Sheet["NameLen"],  # split the name length into 10 bins with integer labels;
                                              10, labels=[i for i in range(10)])  # tune the bin count against the feature score
    #sns.catplot('NameTitle', hue='Survived', data=AllData_Sheet, kind='count')
    #sns.catplot('NameBracket', hue='Survived', data=AllData_Sheet, kind='count')
    #sns.catplot('NameLen_Section', hue='Survived', data=AllData_Sheet, kind='count')
#<4> Ticket (feature used: digit-length of the ticket number)
    #AllData_Sheet['TicketWithletter'] = AllData_Sheet.Ticket.str.contains('^[A-Za-z]').astype(int)  # checked whether a leading letter in the ticket
                                                                                                     # affects survival; it made no difference, so dropped
    AllData_Sheet['Ticket_NumberLen'] = AllData_Sheet.Ticket.str.extract('([0-9]+$)')  # extract the trailing digits of the ticket
    AllData_Sheet['Ticket_NumberLen'] = AllData_Sheet.Ticket_NumberLen.str.len()  # length of the digit part
    AllData_Sheet["Ticket_NumberLen"] = AllData_Sheet["Ticket_NumberLen"]. \
        fillna(AllData_Sheet["Ticket_NumberLen"].median())  # fill the missing digit lengths with the median
    #print(AllData_Sheet.describe())
    AllData_Sheet["Ticket_NumberLen"] = pd.cut(AllData_Sheet["Ticket_NumberLen"],  # split the digit length into 3 bins with integer labels
                                               3, labels=[i for i in range(3)])

    """
        6. 对其他数字特征之间相关性进行分析处理 
                    (特征使用:SibSp(船上兄弟姐妹数量) Parch(船上父母孩子数量) Fare(票价))
    """
    AllData_Sheet['FamilyPeople'] = AllData_Sheet['SibSp'] + AllData_Sheet['Parch']
    AllData_Sheet["FamilyPeople"] = pd.cut(AllData_Sheet["FamilyPeople"],  # 划分3个人数区间,并做数字标记
                                        3, labels=[i for i in range(3)])
    #print(AllData_Sheet.describe())
    AllData_Sheet["Fare_Sore"] = pd.qcut(AllData_Sheet["Fare"],  # 数据出现频率百分比划分4个船费区间,并做数字标记
                                        6, labels=[i for i in range(6)])
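    # pd.qcut bins by sample quantiles (equal-frequency bins), whereas pd.cut above
    # uses equal-width value ranges; qcut suits the heavily skewed Fare distribution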
    #sns.catplot('Fare_Sore', hue='Survived', data=AllData_Sheet, kind='count')
    """
        7. 数据预处理完毕,将整体列表,进行训练数据集和测试数据集拆分
    """
    Train_Date = AllData_Sheet[AllData_Sheet.PassengerId < len(Train_Date) + 1]
    Test_Date = AllData_Sheet[AllData_Sheet.PassengerId > len(Train_Date)]

    """
        8. 特征选择评分 (模型训练特征筛选)
    """
    #print("\n", Train_Date.info())
    Score_Feature = ["Pclass", "NameTitle_Sore",'NameTitle_Mr', 'NameTitle_Mrs', "NameBracket",
                     'NameTitle_Miss',"NameLen_Section", "Sex", "Age_Sore", "FamilyPeople",
                     "Embarked_S","Embarked_C","Cabin","Ticket_NumberLen","Fare_Sore"]
    Selector = SelectKBest(f_classif, k=5) # 训练集分成 5 份
    Selector.fit(AllData_Sheet[Score_Feature], AllData_Sheet["Survived"])
    Scores = -np.log10(Selector.pvalues_) # 获得特征值的转移p值
    plt.figure(figsize=(12, 15), dpi=60) # 图形化显示各个特征的影响力大小
    plt.bar(range(len(Score_Feature)),Scores)
    plt.xticks(range(len(Score_Feature)), Score_Feature, rotation='vertical')
    for a, b in zip(range(len(Score_Feature)), Scores):
        plt.text(a, b - 0.3, '%.3f' % b, ha='center', va='bottom', fontsize=15)

    """
        9. 特征选择后数据保存
    """
    #Score_Feature = ["Pclass", "Sex", "Age_Sore", "Fare_Sore", "NameTitle_Sore",
    #                  "NameBracket", "NameLen_Section", "FamilyPeople",
    #                 "Embarked_S", "Embarked_C", "Ticket_NumberLen", "Cabin"]
    AllData_Sheet_Features=AllData_Sheet.head(0).columns.tolist() #获取AllData_Sheet_里的第一行所有特征类型
    DeleWhat_Diff = set(AllData_Sheet_Features) ^ set(Score_Feature) # 没选中的余姚删除特征
    Train_Date.drop(DeleWhat_Diff, inplace=True, axis=1)  # 删除后覆盖 对列进行操作
    Test_Date.drop(DeleWhat_Diff, inplace=True, axis=1)  # 删除后覆盖 对列进行操作
    Test_Date.to_csv(os.path.abspath('deal_test.csv'))
    Train_Date.to_csv(os.path.abspath('deal_train.csv'))
    AllData_Sheet.to_csv(os.path.abspath('Analyse.csv'))

    plt.show()



if __name__ == '__main__':
    main()
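
For intuition about step 8: f_classif runs a one-way ANOVA F-test of each feature against the class label, so a small p-value (a large -log10(p)) marks a feature whose mean differs strongly between survivors and non-survivors. A minimal, self-contained sketch on made-up toy data:

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

# toy data: feature 0 separates the two classes cleanly, feature 1 is pure noise
X = np.array([[0.1, 5.0], [0.2, 1.0], [0.9, 4.0], [1.1, 2.0]] * 25)
y = np.array([0, 0, 1, 1] * 25)

selector = SelectKBest(f_classif, k=1).fit(X, y)
print(selector.scores_)              # ANOVA F-statistic per feature
print(-np.log10(selector.pvalues_))  # the quantity plotted in FeatureAnalyse.py
print(selector.get_support())        # -> [ True False ]: only feature 0 is kept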

 

File 2: GradientBoostingClassifier.py

# -*- coding: utf-8 -*-
"""
Created on Thu Apr 28 10:00 2021

@author: 杜文涛

"""

import os
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import warnings
warnings.filterwarnings('ignore')


# model-selection and validation utilities
from sklearn.feature_selection import RFECV
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV


# regression models
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor


# classification models
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LogisticRegressionCV
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.linear_model import PassiveAggressiveClassifier

from sklearn.neighbors import KNeighborsClassifier

from sklearn.svm import SVC, LinearSVC

from sklearn.gaussian_process import GaussianProcessClassifier

from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import BernoulliNB

from sklearn.tree import DecisionTreeClassifier

from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier

from sklearn.metrics import roc_curve, auc

### Baseline function: once feature engineering is done, it makes comparing the results of all candidate models easy; it can be extended with training scores, timing, and other computed results and visualizations.
def Baseline(X,y):
    MLA=[AdaBoostClassifier(),BaggingClassifier(),ExtraTreesClassifier(),\
    GradientBoostingClassifier(),RandomForestClassifier(),\
    GaussianProcessClassifier(),LogisticRegressionCV(),\
    PassiveAggressiveClassifier(),SGDClassifier(),\
    Perceptron(),BernoulliNB(),GaussianNB(),KNeighborsClassifier(),\
    SVC(probability=True),LinearSVC(),DecisionTreeClassifier()]
    MLA_compare={}
    # fix the random seed so the results are reproducible
    for alg in MLA:
        alg.random_state=0
        MLA_name=alg.__class__.__name__
        score=cross_val_score(alg,X,y,cv=6) # 6-fold cross-validation
        score_mean=round(score.mean(),4)
        MLA_compare[MLA_name]=score_mean
    scores=pd.Series(MLA_compare)
    return scores.sort_values(ascending=False)
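
# A minimal usage note: Baseline(X, y) returns a pandas Series of mean 6-fold
# CV accuracies (cross_val_score's default scoring for classifiers), one entry
# per model, sorted best-first; see the commented print(Baseline(...)) in main().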

def floatrange(start, stop, steps):
    # evenly spaced floats from start to stop inclusive (helper, currently unused)
    return [start + float(i) * (stop - start) / (float(steps) - 1) for i in range(steps)]

def plot_roc_curve(fpr, tpr, label=None):
    # draw the ROC curve
    plt.plot(fpr, tpr, linewidth=2, label=label)
    # dashed diagonal = random-guess baseline
    plt.plot([0, 1], [0, 1], 'k--')
    plt.axis([0, 1, 0, 1])
    plt.xlabel('False Positive Rate (Fall-Out)', fontsize=16)
    plt.ylabel('True Positive Rate (Recall)', fontsize=16)
    plt.grid(True)

def RocShow(test_df, train_df, survived1, survived2, clf):
    TP = 0
    FP = 0
    TN = 0
    FN = 0
    score_tr = clf.score(train_df, survived1)
    score_te = clf.score(test_df, survived2)

    test_df = np.array(test_df)
    survived2 = np.array(survived2)
    for x, y in zip(test_df, survived2):
        x = x.reshape(1, -1)
        s = clf.predict(x)
        if s == 1 and y == 1:
            TP += 1
        elif s == 1 and y == 0:
            FP += 1
        elif s == 0 and y == 1:
            FN += 1
        else:
            TN += 1
    print('TP is %d, FP is %d, TN is %d, FN is %d' % (TP, FP, TN, FN))

    print("Training-set score: %f, test-set score: %f" % (score_tr, score_te))

    accuracy = (TP + TN) / len(survived2)  # fraction of all predictions that are correct
    P = TP / (TP + FP)  # precision
    R = TP / (TP + FN)  # recall

    TPR = TP / (TP + FN)
    FPR = FP / (FP + TN)
    print('the accuracy is ', accuracy)
    print('the precision is ', P)
    print('the recall is ', R)
    fpr, tpr, thresholds = roc_curve(survived2, clf.predict_proba(test_df)[:, 1])
    roc_auc = auc(fpr, tpr)
    plt.figure(figsize=(8, 6))
    plot_roc_curve(fpr, tpr, label='AUC = %.3f' % roc_auc)
    plt.legend(loc='lower right')


class SubestimatorOfStacking():
    '''Helper class that tunes a single model, explores the feature count, saves
    predictions, and builds the sub-features needed for stacking.
    Parameters
    ---------------
    alg: the estimator to grid-search
    param_grid: the hyperparameter grid; tuning two parameters at a time is recommended
    random_state: random seed, default 0; fixing it keeps the results stable
    n_jobs: number of CPU cores to use, default None; -1 uses every core on the machine
    ----------------
    Methods
    ----------------
    fit(self, X, y, rfecv=False): grid-search the training set over param_grid, then print
        the best test score and parameters and plot a parameter heatmap. If rfecv is True,
        run recursive feature elimination (RFECV) afterwards to iteratively screen features.
    stacking(self, X, y, X_test, NFolds=5): prepare the stacking inputs for this model:
        returns a column of out-of-fold predictions for the training set and a column of
        NFolds-averaged predictions for the test set.
    fit_predict(self, X, y, test): fit the best model found by fit() on a new data set and
        return its predictions.
    fit_predict_submit(self, X, y, test, test_index, reduce_features=0, from_importancest=False):
        screen the features, fit the model chosen by fit(), and save the predictions to the
        current directory. test_index is the test-set key column; reduce_features is how many
        features to drop for this run; from_importancest picks the direction: True drops from
        the most-important end (suited to data sets whose features matter about equally),
        False drops from the least-important end, similar to what RFECV does.
    '''

    def __init__(self,alg,param_grid,random_state=0,n_jobs=-1):
        self.alg=alg
        self.param_grid=param_grid
        self.random_state=random_state
        self.alg.random_state=self.random_state
        self.n_jobs=n_jobs

    def fit(self,X,y,rfecv=False):
        self.X_train=X
        self.y_train=y
        self.rfecv=rfecv
        self.grid=GridSearchCV(self.alg,self.param_grid,cv=5)
        self.grid.n_jobs=self.n_jobs
        self.grid.fit(self.X_train,self.y_train)
        self.best_estimator_=self.grid.best_estimator_
        self.best_params_=self.grid.best_params_
        print('The best score after GridSearchCV is '+str(self.grid.best_score_)+'.')
        if self.rfecv:
            self.rfecv=RFECV(self.best_estimator_,min_features_to_select=int(self.X_train.shape[1]/2),cv=5)
            self.rfecv.fit(self.X_train,self.y_train)
            self.best_features_=self.X_train.columns[self.rfecv.get_support()]
            print('The best score after RFECV is '+str(self.rfecv.grid_scores_.max())+'.')
            print('The number of selected features is '+str(self.rfecv.n_features_)+'.')
            print('If you want get the top features,please use self.best_features_.')
        self.cv_results_=pd.DataFrame(self.grid.cv_results_)
        self.cv_results_heatmap_=self.cv_results_.pivot_table(values='mean_test_score',columns=self.cv_results_.columns[4],index=self.cv_results_.columns[5])
        sns.heatmap(self.cv_results_heatmap_,annot=True)
        print('The best params is {}'.format(self.grid.best_params_))
        return self

    def stacking(self,X,y,X_test,NFolds=5):
        self.X_train=X.values
        self.y_train=y
        self.X_test=X_test.values
        self.NFolds=NFolds
        ntrain=self.X_train.shape[0]
        ntest=self.X_test.shape[0]
        self.oof_train=np.zeros((ntrain,))
        self.oof_test=np.zeros((ntest,))
        oof_test_df=np.empty((self.NFolds,ntest))
        kf=KFold(n_splits=self.NFolds,shuffle=True,random_state=self.random_state)  # shuffle so random_state takes effect

        for i,(train_index,test_index) in enumerate(kf.split(self.X_train)):
            X_tr=self.X_train[train_index]
            y_tr=self.y_train[train_index]
            X_te=self.X_train[test_index]

            self.best_estimator_.fit(X_tr,y_tr)
            y_te=self.best_estimator_.predict(X_te)
            self.oof_train[test_index]=y_te
            oof_test_df[i,:]=self.best_estimator_.predict(X_test)
        self.oof_test=oof_test_df.mean(axis=0)
        self.oof_train=self.oof_train.reshape(-1,1)
        self.oof_test=self.oof_test.reshape(-1,1)
        return self.oof_train,self.oof_test

    def fit_predict(self,X,y,test):
        self.best_estimator_.fit(X,y)
        return self.best_estimator_.predict(test)

    def fit_predict_submit(self,X,y,test,test_index,reduce_features=0,from_importancest=False):
        self.from_importancest=from_importancest
        self.features_=pd.Series(self.rfecv.estimator_.feature_importances_,index=self.best_features_).sort_values(ascending=self.from_importancest)
        self.reduce_features=reduce_features
        self.submit_features_len_=len(self.features_.index)-self.reduce_features
        self.columns=self.features_.index[:self.submit_features_len_]
        self.best_estimator_.fit(X[self.columns],y)
        self.prediction=self.best_estimator_.predict(test[self.columns])
        self.submit=pd.DataFrame({'PassengerId':test_index,'Survived':self.prediction})
        return self.submit.to_csv('sub.csv',index=False)
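
# A minimal usage sketch of SubestimatorOfStacking (the grid values are hypothetical):
#   sub = SubestimatorOfStacking(GradientBoostingClassifier(),
#                                {'n_estimators': [100, 300], 'max_depth': [2, 3]})
#   sub.fit(train_df, y_train, rfecv=True)
#   oof_train, oof_test = sub.stacking(train_df, y_train, test_df)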

def Param_Chooce(train_df, y_train):
    ### Choose the parameter ranges: tune two parameters at a time, starting with a coarse
    ### scan and then narrowing the ranges around the lightest cells of the heatmap
    ### (their X and Y coordinates).
    # Gradient-boosted decision trees: https://blog.csdn.net/ztf312/article/details/98596968
  # Method 1: (wrapper class; graphical search over parameter ranges)
    # random_state_grid = [0]
    # param_grid_RandomForestClassifier = {'n_estimators': [500, 600, 700], 'max_depth': range(2, 5, 4),
    #                                         }  # estimators on Y, depth on X
    # rfc = SubestimatorOfStacking(GradientBoostingClassifier(),
    #                              param_grid_RandomForestClassifier).fit(train_df, y_train, rfecv=True)
    # rfc_train, rfc_test = rfc.stacking(train_df, y_train, test_df)

  # Method 2:
    param_test1 = {'max_depth': [1, 2, 3], 'subsample': [0.85, 0.9]}
    gsearch1 = GridSearchCV(estimator=GradientBoostingClassifier(learning_rate=0.005, random_state=60,
                                                                 n_estimators=471, subsample=0.9, max_depth=3),
                            param_grid=param_test1, scoring='roc_auc', cv=5)
    gsearch1.fit(train_df, y_train)
    print(gsearch1.best_params_)
    print(gsearch1.param_grid)
    print(gsearch1.best_score_)

def main():
### <1> Load the preprocessed data and build the training set, validation set, and labels
    train_df = pd.read_csv(os.path.abspath('deal_train.csv'), index_col=0)
    test_df = pd.read_csv(os.path.abspath('deal_test.csv'), index_col=0)  # unlabeled Kaggle test set
    train_raw = pd.read_csv(os.path.abspath('train.csv'))
    AllData_Sheet = pd.read_csv(os.path.abspath('Analyse.csv'), index_col=0)
    AllData_Sheet.drop(["PassengerId"], inplace=True, axis=1)
    # The Kaggle test set carries no labels, so hold out the last labeled rows
    # (index 699-890) of the training data as a validation set for scoring and ROC
    survived1 = AllData_Sheet["Survived"].iloc[0:699]
    survived2 = AllData_Sheet["Survived"].iloc[699:891]
    valid_df = train_df.iloc[699:891]
    fit_df = train_df.iloc[0:699]

    columns_train_test = train_df.head(0).columns.tolist()
    train = np.array(train_df[columns_train_test])
    y_train = np.array(train_raw.Survived)
    #print(train.shape, y_train.shape)
### <2> Compare the baseline scores of the candidate models, then pick one to refine
    #print(Baseline(train, y_train), '\n')

### <3> Cross-validated hyperparameter search
    #Param_Chooce(train_df, y_train)

### <4> With the parameters settled, train the final model
    clf = GradientBoostingClassifier(learning_rate=0.006, random_state=0, n_estimators=471, subsample=1.0,
                                     max_depth=5)
    Resule = clf.fit(fit_df, survived1)

### <5> Predict on the held-out rows and display the ROC performance curve
    RocShow(valid_df, fit_df, survived1, survived2, Resule)
    plt.show()


if __name__ == '__main__':
    main()
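
To turn the trained model into a Kaggle submission file, a minimal sketch (assuming FeatureAnalyse.py has already been run, that the rows of deal_test.csv align with test.csv, and filling the one Fare bin left missing in the test rows upstream):

import os
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

train_df = pd.read_csv(os.path.abspath('deal_train.csv'), index_col=0)
test_df = pd.read_csv(os.path.abspath('deal_test.csv'), index_col=0)
y_train = pd.read_csv(os.path.abspath('train.csv'))['Survived']
test_ids = pd.read_csv(os.path.abspath('test.csv'))['PassengerId']

# one test passenger has a missing Fare, so Fare_Sore can be NaN here
test_df = test_df.fillna(test_df.median())

clf = GradientBoostingClassifier(learning_rate=0.006, random_state=0,
                                 n_estimators=471, subsample=1.0, max_depth=5)
clf.fit(train_df, y_train)  # this time train on all 891 labeled rows

submission = pd.DataFrame({'PassengerId': test_ids,
                           'Survived': clf.predict(test_df).astype(int)})
submission.to_csv('submission.csv', index=False)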

 
