Titanic Survival Prediction

Data Analysis

On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg; of the 2,224 passengers and crew aboard, 1,502 died. This sensational tragedy shocked the international community. One reason for the heavy loss of life was that there were not enough lifeboats for the passengers and crew. Although survival involved an element of luck, some groups were more likely to survive than others, such as women, children, and the upper class.

Basic Data Overview

Import libraries

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

Read the data

Titanic = pd.read_csv(r'C:\Users\lenovo\Desktop\titanic\train.csv')

Take a look at the form of the data

Titanic.head()


Survived = 1 means the passenger survived and 0 means they died; SibSp is the number of siblings/spouses aboard; Parch is the number of parents/children aboard; Cabin is the cabin record; Embarked is the port of embarkation; Pclass is the passenger class.

Basic information about the data

Titanic.info()


We can see that the Age, Cabin and Embarked columns contain missing values.
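The per-column missing-value counts can also be checked directly, for example:

# Count missing values per column; Age, Cabin and Embarked are the only columns with NaN
Titanic.isnull().sum()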


Dropping irrelevant variables

PassengerId is just an index and is unrelated to the target variable, so it is dropped.

Titanic.drop('PassengerId', axis=1, inplace=True)

Relationships between features and the target variable

First, look at the overall survival proportions.

# Note: this uses the pyecharts 0.x API (pyecharts >= 1.0 changed the interface)
from pyecharts import Bar
attr = ["Died", "Survived"]
v1 = [pd.value_counts(Titanic['Survived']).values[0], 0]
v2 = [0, pd.value_counts(Titanic['Survived']).values[1]]
bar = Bar("Survival outcome")
bar.add("Died", attr, v1)
bar.add("Survived", attr, v2)
bar


It can be seen that more passengers died than survived.

Next, examine the relationship between each feature and the target variable:
(1) Passenger class (Pclass) vs. survival

Titanic[['Pclass','Survived']].groupby(['Pclass']).mean().plot.bar()

Survival rate by passenger class

Titanic[['Pclass','Survived']].corr(method = 'spearman')

Passenger class is correlated with survival, with a Spearman correlation of about -0.34: the higher the class (the smaller the Pclass value), the more likely a passenger was to be rescued.

(2) Sex vs. survival

Titanic.groupby(['Sex','Survived'])['Survived'].count()

Titanic[['Sex','Survived']].groupby(['Sex']).mean().plot.bar()

Survival rate by sex

Sex is correlated with survival: women had a much higher survival rate and were rescued preferentially.

(3) Age vs. survival

Titanic.Age[Titanic.Survived == 1].plot(kind = 'kde')
Titanic.Age[Titanic.Survived == 0].plot(kind = 'kde')
plt.xlabel('Age')
plt.legend(('1','0'),loc = 'best')
plt.show()

Age distribution of survivors and non-survivors
Young and middle-aged passengers had a relatively higher survival rate.
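To back up this reading of the density plot, Age can also be binned and the survival rate compared per bin (a sketch; the bin edges here are chosen purely for illustration):

# Survival rate per age group (illustrative bin edges)
age_bins = pd.cut(Titanic['Age'], bins=[0, 12, 30, 50, 80])
Titanic.groupby(age_bins)['Survived'].mean()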

(4) Number of relatives vs. survival

f,ax=plt.subplots(1,2,figsize=(18,8))
Titanic[['Parch','Survived']].groupby(['Parch']).mean().plot.bar(ax=ax[0])
ax[0].set_title('Parch and Survived')
Titanic[['SibSp','Survived']].groupby(['SibSp']).mean().plot.bar(ax=ax[1])
ax[1].set_title('SibSp and Survived')

Survival rate by number of relatives

When the number of siblings/spouses (SibSp) is greater than 3, the survival rate drops; the number of parents/children (Parch) does not appear to be strongly related to survival.

(5) Port of embarkation vs. survival

survived_0 = Titanic.Embarked[Titanic.Survived == 0].value_counts()
survived_1 = Titanic.Embarked[Titanic.Survived == 1].value_counts()
df=pd.DataFrame({'1':survived_1, '0':survived_0})
df.plot(kind='bar', stacked=True)
plt.xlabel('Embarked') 
plt.ylabel('number') 
plt.show()
Titanic[['Embarked','Survived']].groupby(['Embarked']).mean().plot.bar()

Survival counts by port of embarkation

Survival rate by port of embarkation
Passengers who embarked at S had the lowest survival rate, while those who embarked at C had the highest.

(6) Cabin presence vs. survival
Cabin has a large number of missing values, so whether a Cabin record exists is treated as a feature and its relationship with survival is examined.

survived_cabin = Titanic.Survived[pd.notnull(Titanic.Cabin)].value_counts()
survived_nocabin = Titanic.Survived[pd.isnull(Titanic.Cabin)].value_counts()
df = pd.DataFrame({'1':survived_cabin,'0':survived_nocabin}).transpose()
df.plot(kind = 'bar',stacked = True)
plt.xlabel('cabin')
plt.ylabel('number')
plt.show()

Cabin presence vs. survival

Passengers with a Cabin record were more likely to be rescued.

(7) Fare vs. survival

print(Titanic[Titanic.Survived==0].Fare.mean())
print(Titanic[Titanic.Survived==1].Fare.mean())

The average fare of passengers who did not survive was about 22.12, while the average fare of survivors was about 48.40, suggesting that passengers who paid higher fares had higher status and were more likely to be rescued.
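The same numbers can be obtained in a single call, for example:

# Mean fare by survival outcome (0 = died, 1 = survived)
Titanic.groupby('Survived')['Fare'].mean()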

(8) Other factors vs. survival
The Name column contains each passenger's title, such as Mr or Miss. Titles encode age and sex, and may also indicate social status (e.g. Dr, Lady, Major, Master). They are handled in the feature engineering section below.

Handling missing values

The exploration above showed that Age, Cabin and Embarked contain missing values; we fill each of them in turn.
(1) Age can be filled with mean values: missing ages of male passengers are filled with the mean age of the other male passengers, and likewise for female passengers.

# Fill missing ages with the mean age of passengers of the same sex
male_mean_age = Titanic.loc[Titanic.Sex == 'male', 'Age'].mean()
female_mean_age = Titanic.loc[Titanic.Sex == 'female', 'Age'].mean()
for i in range(len(Titanic['Age'])):
    if np.isnan(Titanic.loc[i, 'Age']):
        if Titanic.loc[i, 'Sex'] == 'male':
            Titanic.loc[i, 'Age'] = male_mean_age
        else:
            Titanic.loc[i, 'Age'] = female_mean_age
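An equivalent, more concise alternative (a sketch, not the original code) fills Age with the per-sex mean in one step:

# Fill missing ages with the mean age of the same sex, without an explicit loop
Titanic['Age'] = Titanic.groupby('Sex')['Age'].transform(lambda s: s.fillna(s.mean()))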

(2) Cabin has too many missing values to impute directly, so passengers with a Cabin record are marked Yes and the rest No.

def set_cabin(df):
    df.loc[(df.Cabin.notnull()),'Cabin'] = 'Yes'
    df.loc[(df.Cabin.isnull()),'Cabin'] = 'No'
    return df
Titanic = set_cabin(Titanic)

(3) Embarked has only two missing values, which are filled with the most frequent value (the mode).

Titanic.Embarked = Titanic.Embarked.fillna('S')
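'S' is the most common port here; the mode can also be looked up programmatically instead of being hard-coded (sketch):

# Fill missing Embarked values with the most frequent port
Titanic.Embarked = Titanic.Embarked.fillna(Titanic.Embarked.mode()[0])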

Checking the basic information of the data again, there are no more missing values.
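A quick re-check, for example:

Titanic.info()  # every column should now report 891 non-null values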

Data type conversion

To make the next steps easier, Cabin, Embarked, Sex and Pclass are converted into one-hot (dummy) variables, as follows.

Titanic.Pclass = Titanic.Pclass.astype('category')
dummy = pd.get_dummies(Titanic[['Pclass','Sex','Embarked','Cabin']])
Titanic = pd.concat([Titanic,dummy],axis = 1)
Titanic.drop(['Pclass','Sex','Embarked','Cabin'],axis = 1,inplace = True)
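After this step the original categorical columns are replaced by indicator columns whose names use the source column as a prefix (e.g. Pclass_1, Sex_male, Embarked_S, Cabin_Yes), which can be checked with:

# List the dummy columns created by get_dummies
print([c for c in Titanic.columns if c.startswith(('Pclass_', 'Sex_', 'Embarked_', 'Cabin_'))])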

Feature Engineering

Feature derivation

(1) Processing titles

Each Name contains a title for the passenger, such as Mr, Miss or Mrs. Titles encode age and sex, and some also indicate social status, e.g. Dr, Lady, Major, Master. Below we extract them.

Titanic['Title'] = Titanic['Name'].str.extract(r' ([A-Za-z]+)\.', expand=False)
title_Dict = {}
title_Dict.update(dict.fromkeys(['Capt', 'Col', 'Major', 'Dr', 'Rev'], 'Officer'))
title_Dict.update(dict.fromkeys(['Jonkheer', 'Don', 'Sir', 'the Countess', 'Dona', 'Lady'], 'Royalty'))
title_Dict.update(dict.fromkeys(['Mme', 'Ms', 'Mrs'], 'Mrs'))
title_Dict.update(dict.fromkeys(['Mlle', 'Miss'], 'Miss'))
title_Dict.update(dict.fromkeys(['Mr'], 'Mr'))
title_Dict.update(dict.fromkeys(['Master'], 'Master'))
Titanic['Title'] = Titanic['Title'].map(title_Dict)
title_dummies_df = pd.get_dummies(Titanic['Title'], prefix=Titanic[['Title']].columns[0])
Titanic = pd.concat([Titanic,title_dummies_df],axis=1)


(2) Processing Ticket

Titanic['Ticket_Letter'] = Titanic['Ticket'].str.split().str[0]
Titanic['Ticket_Letter'] = Titanic['Ticket_Letter'].apply(lambda x:np.nan if x.isnumeric() else x)
Titanic['Ticket_Number'] = Titanic['Ticket'].apply(lambda x: pd.to_numeric(x,errors='coerce'))
Titanic['Ticket_Number'].fillna(0,inplace=True)
Titanic = pd.get_dummies(Titanic,columns=['Ticket','Ticket_Letter'])

(3) Standardize Age and Fare, which helps gradient-based models converge faster.

from sklearn import preprocessing
scaler = preprocessing.StandardScaler()
Titanic['Age_scaled'] = scaler.fit_transform(Titanic['Age'].values.reshape(-1,1))
Titanic['Fare_scaled'] = scaler.fit_transform(Titanic['Fare'].values.reshape(-1,1))

(4) Process SibSp and Parch: combine them into a single FamilySize feature and bucket it by family size, while keeping SibSp and Parch.

from sklearn.preprocessing import LabelEncoder
def family_size_category(family_size):
    if family_size <= 1:
        return 'Single'
    elif family_size <= 4:
        return 'Small_Family'
    else:
        return 'Large_Family'

Titanic['Family_Size'] = Titanic['Parch'] + Titanic['SibSp'] + 1
Titanic['Family_Size_Category'] = Titanic['Family_Size'].map(family_size_category)

le_family = LabelEncoder()
le_family.fit(np.array(['Single', 'Small_Family', 'Large_Family']))
Titanic['Family_Size_Category'] = le_family.transform(Titanic['Family_Size_Category'])

family_size_dummies_df = pd.get_dummies(Titanic['Family_Size_Category'],prefix=Titanic[['Family_Size_Category']].columns[0])
Titanic = pd.concat([Titanic, family_size_dummies_df], axis=1)

Feature selection

Following an article on Zhihu [1], features are screened with a model-fusion approach built on three base models: random forest, AdaBoost and ExtraTrees.

import sklearn.ensemble
import sklearn.model_selection
from sklearn.ensemble import RandomForestClassifier

def get_top_n_features(titanic_train_data_X, titanic_train_data_Y, top_n_features):
    # Random forest
    rf_est = RandomForestClassifier(random_state=42)
    rf_param_grid = {'n_estimators': [500], 'min_samples_split': [2, 3], 'max_depth': [20]}
    rf_grid = sklearn.model_selection.GridSearchCV(rf_est, rf_param_grid, n_jobs=25, cv=10, verbose=1)
    rf_grid.fit(titanic_train_data_X, titanic_train_data_Y)
    # Sort features by importance
    feature_imp_sorted_rf = pd.DataFrame({'feature': list(titanic_train_data_X), 'importance': rf_grid.best_estimator_.feature_importances_}).sort_values('importance', ascending=False)
    features_top_n_rf = feature_imp_sorted_rf.head(top_n_features)['feature']
    print(str(features_top_n_rf[:25]))
    plt.figure()
    imp = feature_imp_sorted_rf[:5]['importance']
    imp.plot(kind='barh')
    plt.show()

    # AdaBoost
    ada_est = sklearn.ensemble.AdaBoostClassifier(random_state=42)
    ada_param_grid = {'n_estimators': [500], 'learning_rate': [0.5, 0.6]}
    ada_grid = sklearn.model_selection.GridSearchCV(ada_est, ada_param_grid, n_jobs=25, cv=10, verbose=1)
    ada_grid.fit(titanic_train_data_X, titanic_train_data_Y)
    # Sort features by importance
    feature_imp_sorted_ada = pd.DataFrame({'feature': list(titanic_train_data_X), 'importance': ada_grid.best_estimator_.feature_importances_}).sort_values('importance', ascending=False)
    features_top_n_ada = feature_imp_sorted_ada.head(top_n_features)['feature']
    print(str(features_top_n_ada[:25]))
    plt.figure()
    imp1 = feature_imp_sorted_ada[:5]['importance']
    imp1.plot(kind='barh')
    plt.show()

    # ExtraTrees
    et_est = sklearn.ensemble.ExtraTreesClassifier(random_state=42)
    et_param_grid = {'n_estimators': [500], 'min_samples_split': [3, 4], 'max_depth': [15]}
    et_grid = sklearn.model_selection.GridSearchCV(et_est, et_param_grid, n_jobs=25, cv=10, verbose=1)
    et_grid.fit(titanic_train_data_X, titanic_train_data_Y)
    # Sort features by importance
    feature_imp_sorted_et = pd.DataFrame({'feature': list(titanic_train_data_X), 'importance': et_grid.best_estimator_.feature_importances_}).sort_values('importance', ascending=False)
    features_top_n_et = feature_imp_sorted_et.head(top_n_features)['feature']
    print(str(features_top_n_et[:25]))
    plt.figure()
    imp2 = feature_imp_sorted_et[:5]['importance']
    imp2.plot(kind='barh')
    plt.show()

    # Merge the top features selected by the three models
    feature_imp_sorted = pd.concat([feature_imp_sorted_rf, feature_imp_sorted_ada, feature_imp_sorted_et], ignore_index=True).drop_duplicates()
    features_top_n = pd.concat([features_top_n_rf, features_top_n_ada, features_top_n_et], ignore_index=True).drop_duplicates()
    plt.figure()
    imp3 = feature_imp_sorted[:5]['importance']
    imp3.plot(kind='barh')
    plt.show()

    return features_top_n

(1) Random forest selection results
Feature importances (random forest)
(2) AdaBoost selection results
Feature importances (AdaBoost)
(3) ExtraTrees selection results
Feature importances (ExtraTrees)
(4) Selection results after merging

Model Building

In the previous steps the features needed for modelling were selected. Next, each model is trained, and its prediction accuracy and confusion matrix are reported.
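The helper functions called below (RF_Classsifier, DT_Classifier, LR_regression, etc.) are not defined in this post. A minimal sketch of what such a wrapper might look like, following the same pattern as the XG_classifier shown in the ensemble section later:

# Sketch of a classifier wrapper (not the author's original definition):
# split the data, fit the model, and return test accuracy plus confusion matrix
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.ensemble import RandomForestClassifier

def RF_Classsifier(X, Y, test_rate=0.15):
    x_train, x_test, y_train, y_test = train_test_split(X, Y.values, test_size=test_rate, random_state=1234)
    model = RandomForestClassifier(random_state=42)
    model.fit(x_train, y_train)
    y_pre = model.predict(x_test)
    return [accuracy_score(y_test, y_pre), confusion_matrix(y_test, y_pre)]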
(1) Random forest

RF_Classsifier(X = Titanic16.iloc[:,1:],Y = Titanic16.Survived,test_rate = 0.15)

(2) Decision tree

DT_Classifier(X = Titanic16.iloc[:,1:],Y = Titanic16.Survived,test_rate = 0.15)


(3) Logistic regression

LR_regression(X = Titanic16.iloc[:,1:],Y = Titanic16.Survived,test_rate = 0.15)


(4) Naive Bayes

NaiveBayes(X = Titanic16.iloc[:,1:],Y = Titanic16.Survived,test_rate = 0.15)


(5) Support vector machine (SVM)

svm(X = Titanic16.iloc[:,1:],Y = Titanic16.Survived,test_rate = 0.15)

(6) Linear discriminant analysis (LDA)

lda(X = Titanic16.iloc[:,1:],Y = Titanic16.Survived,test_rate = 0.15)


(7) Neural network

nnet_classifier(X = Titanic16.iloc[:,1:],Y = Titanic16.Survived,test_rate = 0.15)

(8) LightGBM

gbm_classifier(X = Titanic16.iloc[:,1:],Y = Titanic16.Survived,test_rate = 0.15)


(9) KNN

KNN_classifier(X = Titanic16.iloc[:,1:],Y = Titanic16.Survived,test_rate = 0.15)

Comparing the models, the naive Bayes classifier achieves the highest accuracy, 0.87313.

Ensemble models

In this part, random forest and XGBoost models are built (without parameter tuning for now). The random forest algorithm was already covered above, so this part adds the XGBoost classifier together with its full code; the earlier algorithms are handled in the same way and are not repeated.

from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix, roc_curve, auc

def XG_classifier(X, Y, test_rate=0.15):
    x_train, x_test, y_train, y_test = train_test_split(X, Y.values, test_size=test_rate, random_state=1234)
    model = XGBClassifier()
    model.fit(x_train, y_train)
    y_pre = model.predict(x_test)
    res = []
    res.append(accuracy_score(y_test, y_pre))
    res.append(confusion_matrix(y_test, y_pre))
    plt.figure(figsize=(20, 10))
    plt.subplot(2, 4, 1)
    probas_ = model.predict_proba(x_test)
    fpr, tpr, thresholds = roc_curve(y_test, probas_[:, 1])
    roc_auc = auc(fpr, tpr)
    plt.plot(fpr, tpr, lw=1, label='ROC(auc = %0.2f)' % (roc_auc))
    # Plot the diagonal (random-guess) line
    plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Luck')
    plt.xlim([-0.05, 1.05])
    plt.ylim([-0.05, 1.05])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.legend(loc="lower right")
    plt.show()
    return res

XG_classifier(X = Titanic16.iloc[:,2:], Y = Titanic16.Survived, test_rate = 0.15)

Model Evaluation

In this part, logistic regression, SVM, decision tree, random forest and XGBoost are each evaluated on accuracy, precision, recall, F1-score and AUC, and the corresponding ROC curves are plotted. Some of these metrics were already computed in the code above, so only the newly added ones are shown here.

from sklearn.metrics import precision_score, recall_score, f1_score
# y_pre is the prediction on the test split, so the metrics are computed against y_test
res.append(precision_score(y_test, y_pre, average='binary'))
res.append(recall_score(y_test, y_pre, average='binary'))
res.append(f1_score(y_test, y_pre, average='binary'))
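For reference, a hedged sketch (not the author's exact code) of computing all five metrics for one fitted model on the held-out test split; the model is assumed to expose predict_proba (e.g. SVC needs probability=True):

from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate(model, x_test, y_test):
    # Returns accuracy, precision, recall, F1 and AUC for a fitted binary classifier
    y_pre = model.predict(x_test)
    return {
        'accuracy': accuracy_score(y_test, y_pre),
        'precision': precision_score(y_test, y_pre),
        'recall': recall_score(y_test, y_pre),
        'f1': f1_score(y_test, y_pre),
        'auc': roc_auc_score(y_test, model.predict_proba(x_test)[:, 1]),
    }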

(1) Logistic regression

(2) SVM

(3) Decision tree

(4) Random forest

(5) XGBoost

Comparing the results, on the test set logistic regression and XGBoost perform best overall.
The decision tree and random forest fit the training set almost perfectly (possibly because the features were selected with a random-forest-based procedure), yet they perform noticeably worse on the test set, which suggests overfitting during training.
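A quick way to see this gap (a sketch; model, x_train, x_test, y_train, y_test are assumed to be a fitted model and the split used above):

from sklearn.metrics import accuracy_score
# Compare training vs. test accuracy to spot overfitting
print('train accuracy:', accuracy_score(y_train, model.predict(x_train)))
print('test accuracy: ', accuracy_score(y_test, model.predict(x_test)))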

Model Tuning

(1) Logistic regression

# Logistic regression
# Note: penalty='l1' requires a solver that supports it (e.g. liblinear)
parameters = {'penalty': ['l1', 'l2'], 'C': [0.01, 0.1, 0.5, 1, 10]}
grid_logistic = GridSearchCV(estimator=LogisticRegression(solver='liblinear'), param_grid=parameters, cv=5)
x_train, x_test, y_train, y_test = train_test_split(Titanic16.iloc[:, 2:], Titanic16.Survived, test_size=0.15, random_state=1234)
grid_logistic.fit(x_train, y_train)
print('Best CV score: %0.3f' % grid_logistic.best_score_)
best_parameters = grid_logistic.best_estimator_.get_params()
print(best_parameters)
for param_name in sorted(parameters.keys()):
    print('\t%s: %r' % (param_name, best_parameters[param_name]))
pre = grid_logistic.predict(x_test)
print('Accuracy:', accuracy_score(y_test, pre))
print('Precision:', precision_score(y_test, pre))
print('Recall:', recall_score(y_test, pre))


(2) Support vector machine

parameters = {'C': [0.01, 0.1, 0.5, 1, 10]}
grid_svm = GridSearchCV(estimator=SVC(), param_grid=parameters, cv=5)
x_train, x_test, y_train, y_test = train_test_split(Titanic16.iloc[:, 2:], Titanic16.Survived, test_size=0.15, random_state=1234)
grid_svm.fit(x_train, y_train)
print('Best CV score: %0.3f' % grid_svm.best_score_)
best_parameters = grid_svm.best_estimator_.get_params()
print(best_parameters)
for param_name in sorted(parameters.keys()):
    print('\t%s: %r' % (param_name, best_parameters[param_name]))
pre = grid_svm.predict(x_test)
print('Accuracy:', accuracy_score(y_test, pre))
print('Precision:', precision_score(y_test, pre))
print('Recall:', recall_score(y_test, pre))

(3) Decision tree

parameters = {'max_depth': range(1, 21), 'criterion': np.array(['entropy', 'gini'])}
grid_DT = GridSearchCV(estimator=DecisionTreeClassifier(), param_grid=parameters, cv=5)
x_train, x_test, y_train, y_test = train_test_split(Titanic16.iloc[:, 2:], Titanic16.Survived, test_size=0.15, random_state=1234)
grid_DT.fit(x_train, y_train)
print('Best CV score: %0.3f' % grid_DT.best_score_)
best_parameters = grid_DT.best_estimator_.get_params()
print(best_parameters)
for param_name in sorted(parameters.keys()):
    print('\t%s: %r' % (param_name, best_parameters[param_name]))
pre = grid_DT.predict(x_test)
print('Accuracy:', accuracy_score(y_test, pre))
print('Precision:', precision_score(y_test, pre))
print('Recall:', recall_score(y_test, pre))


(4) Random forest

parameters = {'max_depth': range(1, 100), 'criterion': np.array(['entropy', 'gini'])}
grid_RF = GridSearchCV(estimator=RandomForestClassifier(), param_grid=parameters, cv=5)
x_train, x_test, y_train, y_test = train_test_split(Titanic16.iloc[:, 2:], Titanic16.Survived, test_size=0.15, random_state=1234)
grid_RF.fit(x_train, y_train)
print('Best CV score: %0.3f' % grid_RF.best_score_)
best_parameters = grid_RF.best_estimator_.get_params()
print(best_parameters)
for param_name in sorted(parameters.keys()):
    print('\t%s: %r' % (param_name, best_parameters[param_name]))
pre = grid_RF.predict(x_test)
print('Accuracy:', accuracy_score(y_test, pre))
print('Precision:', precision_score(y_test, pre))
print('Recall:', recall_score(y_test, pre))


(5) XGBoost

parameters = {'n_estimators': [1, 5, 10, 20, 40, 60], 'learning_rate': [0.01, 0.05, 0.1], 'max_depth': [1, 2, 5, 8, 10]}
grid_xg = GridSearchCV(estimator=XGBClassifier(), param_grid=parameters, cv=5)
x_train, x_test, y_train, y_test = train_test_split(Titanic16.iloc[:, 2:], Titanic16.Survived, test_size=0.15, random_state=1234)
grid_xg.fit(x_train, y_train)
print('Best CV score: %0.3f' % grid_xg.best_score_)
best_parameters = grid_xg.best_estimator_.get_params()
print(best_parameters)
for param_name in sorted(parameters.keys()):
    print('\t%s: %r' % (param_name, best_parameters[param_name]))
pre = grid_xg.predict(x_test)
print('Accuracy:', accuracy_score(y_test, pre))
print('Precision:', precision_score(y_test, pre))
print('Recall:', recall_score(y_test, pre))

Model Fusion

(1) VotingClassifier

from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from lightgbm import LGBMClassifier

x_train, y_train = (Titanic16.iloc[:, 2:], Titanic16.Survived)
# Random forest
RF = RandomForestClassifier(random_state=42)
RF_result = RF.fit(x_train, y_train)
# Decision tree
DT = DecisionTreeClassifier()
DT_result = DT.fit(x_train, y_train)
# Logistic regression
LR = LogisticRegression()
LR_result = LR.fit(x_train, y_train)
# Naive Bayes
NB = GaussianNB()
NB_result = NB.fit(x_train, y_train)
# Support vector machine
svc = SVC(kernel='rbf', probability=True)
svc_result = svc.fit(x_train, y_train)
# Linear discriminant analysis
lda = LinearDiscriminantAnalysis(n_components=1)
lda_result = lda.fit(x_train, y_train)
# Neural network
mlp = MLPClassifier(solver='lbfgs', activation='logistic', max_iter=10000, hidden_layer_sizes=(30, 2))
mlp_result = mlp.fit(x_train, y_train)
# LightGBM
LGB = LGBMClassifier()
LGB_result = LGB.fit(x_train, y_train)
# K-nearest neighbours
knn = KNeighborsClassifier()
knn_result = knn.fit(x_train, y_train)
clf_vc = VotingClassifier(estimators=[('LR', LR), ('DT', DT), ('RF', RF), ('NB', NB), ('svc', svc), ('lda', lda), ('KNN', knn), ('mlp', mlp), ('LGB', LGB)])
clf_vc.fit(x_train, y_train)
print(clf_vc.score(x_train, y_train))

Output:
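Note that clf_vc.score(x_train, y_train) measures accuracy on the same data the ensemble was fitted on; a sketch of a less optimistic estimate using cross-validation:

from sklearn.model_selection import cross_val_score
# 5-fold cross-validated accuracy of the voting ensemble
cv_scores = cross_val_score(clf_vc, x_train, y_train, cv=5)
print(cv_scores.mean())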

(2) Stacking

from mlxtend.classifier import StackingCVClassifier
xgb = XGBClassifier()
sv = SVC(probability=True)
rfc = RandomForestClassifier(n_estimators=10, criterion="entropy")
lr = LogisticRegression() 
sclf = StackingCVClassifier(classifiers=[lr, rfc,sv], meta_classifier= xgb, use_probas=True)
sclf.fit(x_train.values,y_train.values)
print(sclf.score(x_train.values, y_train.values))

Output:

References

[1] https://zhuanlan.zhihu.com/p/30538352
[2] https://blog.csdn.net/weixin_40300458/article/details/79996764
