Titanic Prediction Results: Analysis Report

Table of Contents

  1. Business Understanding
  2. Data Understanding
    • Acquiring the data
    • Importing the data
    • Inspecting the dataset
  3. Data Preparation
    • Data preprocessing
    • Feature Engineering
  4. Modeling
  5. Evaluation
  6. Deployment
    • Submitting the results to Kaggle
    • Writing the report

1. Business Understanding

What kinds of people were likely to survive the sinking of the Titanic?

2. Data Understanding

2.1 Acquiring the data

Download the dataset from the Kaggle Titanic competition page.

2.2 Importing the data

Read the data with pd.read_csv(), then merge the training and test sets into one DataFrame so both can be cleaned together.
# Suppress warnings
import warnings
warnings.filterwarnings('ignore')

# Import the data-handling packages
import numpy as np
import pandas as pd
train=pd.read_csv('E:\\titanic\\train.csv')
test=pd.read_csv('E:\\titanic\\test.csv')
print('Training set:',train.shape,'Test set:',test.shape)
Training set: (891, 12) Test set: (418, 11)
# Merge the two datasets so they can be cleaned together
# (DataFrame.append was removed in pandas 2.0; pd.concat is the current equivalent)
full = pd.concat([train, test], ignore_index=True)

print ('Merged dataset:',full.shape)
Merged dataset: (1309, 12)

2.3 Inspecting the dataset

# Take a first look at the data
full.head()
   PassengerId  Survived  Pclass                                               Name     Sex   Age  SibSp  Parch            Ticket     Fare Cabin Embarked
0            1       0.0       3                            Braund, Mr. Owen Harris    male  22.0      1      0         A/5 21171   7.2500   NaN        S
1            2       1.0       1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0      1      0          PC 17599  71.2833   C85        C
2            3       1.0       3                             Heikkinen, Miss. Laina  female  26.0      0      0  STON/O2. 3101282   7.9250   NaN        S
3            4       1.0       1       Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0      1      0            113803  53.1000  C123        S
4            5       0.0       3                           Allen, Mr. William Henry    male  35.0      0      0            373450   8.0500   NaN        S

All fields in the dataset are named in English; to clarify what each field means, I consulted the project description on the Kaggle site. (The summary table was an image that failed to transfer and is omitted here.)

'''
describe() reports descriptive statistics only for numeric columns; other column
types are skipped, e.g. the string columns Name and Cabin. This makes sense:
descriptive statistics are computed from numbers, so the column must be numeric.
'''
# Descriptive statistics for the numeric columns
full.describe()
       PassengerId    Survived       Pclass          Age        SibSp        Parch         Fare
count  1309.000000  891.000000  1309.000000  1046.000000  1309.000000  1309.000000  1308.000000
mean    655.000000    0.383838     2.294882    29.881138     0.498854     0.385027    33.295479
std     378.020061    0.486592     0.837836    14.413493     1.041658     0.865560    51.758668
min       1.000000    0.000000     1.000000     0.170000     0.000000     0.000000     0.000000
25%     328.000000    0.000000     2.000000    21.000000     0.000000     0.000000     7.895800
50%     655.000000    0.000000     3.000000    28.000000     0.000000     0.000000    14.454200
75%     982.000000    1.000000     3.000000    39.000000     1.000000     0.000000    31.275000
max    1309.000000    1.000000     3.000000    80.000000     8.000000     9.000000   512.329200
# Check each column's dtype and non-null count
full.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1309 entries, 0 to 1308
Data columns (total 12 columns):
 #   Column       Non-Null Count  Dtype  
---  ------       --------------  -----  
 0   PassengerId  1309 non-null   int64  
 1   Survived     891 non-null    float64
 2   Pclass       1309 non-null   int64  
 3   Name         1309 non-null   object 
 4   Sex          1309 non-null   object 
 5   Age          1046 non-null   float64
 6   SibSp        1309 non-null   int64  
 7   Parch        1309 non-null   int64  
 8   Ticket       1309 non-null   object 
 9   Fare         1308 non-null   float64
 10  Cabin        295 non-null    object 
 11  Embarked     1307 non-null   object 
dtypes: float64(3), int64(4), object(5)
memory usage: 97.2+ KB

The output above shows that the dataset has 1309 rows in total.

Among the numeric columns, Age and Fare contain missing data:

Age has 1046 non-null values, so 1309-1046=263 are missing, a missing rate of 263/1309≈20%
Fare has 1308 non-null values, so 1 value is missing

Among the string columns:

Embarked has 1307 non-null values, so only 2 are missing, a small gap
Cabin has 295 non-null values, so 1309-295=1014 are missing, a missing rate of 1014/1309≈77.5%, a large gap

Next we clean the data, handling the missing values in these columns.
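The missing counts and rates above were computed by hand; pandas can also produce them directly. A minimal sketch on a toy frame standing in for `full` (only the missingness pattern matters here):

```python
import numpy as np
import pandas as pd

# Toy frame standing in for `full`; only the missingness pattern matters
df = pd.DataFrame({
    'Age':  [22.0, np.nan, 26.0, np.nan],
    'Fare': [7.25, 71.28, np.nan, 8.05],
})

missing_count = df.isnull().sum()    # number of missing values per column
missing_rate  = df.isnull().mean()   # fraction of missing values per column

print(missing_count['Age'], missing_rate['Age'])    # 2 0.5
print(missing_count['Fare'], missing_rate['Fare'])  # 1 0.25
```

On the real dataset, `full.isnull().sum()` reproduces the 263 missing Age values and 1014 missing Cabin values counted above.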

3. Data Preparation

3.1 Data preprocessing

3.1.1 Handling missing values

Many machine learning algorithms require that the input features contain no null values, so the missing values must be handled first. For the numeric columns (Age and Fare), the simplest approach is to replace missing values with the column mean.

print('Before imputation:')
full.info()
full['Age']=full['Age'].fillna(full['Age'].mean())
full['Fare']=full['Fare'].fillna(full['Fare'].mean())
print('After imputation:')
full.info()
Before imputation:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1309 entries, 0 to 1308
Data columns (total 12 columns):
 #   Column       Non-Null Count  Dtype  
---  ------       --------------  -----  
 0   PassengerId  1309 non-null   int64  
 1   Survived     891 non-null    float64
 2   Pclass       1309 non-null   int64  
 3   Name         1309 non-null   object 
 4   Sex          1309 non-null   object 
 5   Age          1046 non-null   float64
 6   SibSp        1309 non-null   int64  
 7   Parch        1309 non-null   int64  
 8   Ticket       1309 non-null   object 
 9   Fare         1308 non-null   float64
 10  Cabin        295 non-null    object 
 11  Embarked     1307 non-null   object 
dtypes: float64(3), int64(4), object(5)
memory usage: 97.2+ KB
After imputation:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1309 entries, 0 to 1308
Data columns (total 12 columns):
 #   Column       Non-Null Count  Dtype  
---  ------       --------------  -----  
 0   PassengerId  1309 non-null   int64  
 1   Survived     891 non-null    float64
 2   Pclass       1309 non-null   int64  
 3   Name         1309 non-null   object 
 4   Sex          1309 non-null   object 
 5   Age          1309 non-null   float64
 6   SibSp        1309 non-null   int64  
 7   Parch        1309 non-null   int64  
 8   Ticket       1309 non-null   object 
 9   Fare         1309 non-null   float64
 10  Cabin        295 non-null    object 
 11  Embarked     1307 non-null   object 
dtypes: float64(3), int64(4), object(5)
memory usage: 97.2+ KB
# Check that the imputation worked as expected
full.head()
   PassengerId  Survived  Pclass                                               Name     Sex   Age  SibSp  Parch            Ticket     Fare Cabin Embarked
0            1       0.0       3                            Braund, Mr. Owen Harris    male  22.0      1      0         A/5 21171   7.2500   NaN        S
1            2       1.0       1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0      1      0          PC 17599  71.2833   C85        C
2            3       1.0       3                             Heikkinen, Miss. Laina  female  26.0      0      0  STON/O2. 3101282   7.9250   NaN        S
3            4       1.0       1       Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0      1      0            113803  53.1000  C123        S
4            5       0.0       3                           Allen, Mr. William Henry    male  35.0      0      0            373450   8.0500   NaN        S

For the string columns, Embarked and Cabin, the approach is: first inspect each column's values; since Embarked is missing only two values, fill them with the most frequent value; Cabin is missing many values, so fill them with 'U' for Unknown.

# Embarked: take a look at the values
'''
Port of departure: S = Southampton, England
First stop:        C = Cherbourg, France
Second stop:       Q = Queenstown, Ireland
'''
full['Embarked'].head()
0    S
1    C
2    S
3    S
4    S
Name: Embarked, dtype: object
full['Embarked'].value_counts()
S    914
C    270
Q    123
Name: Embarked, dtype: int64
'''
Only two values are missing, so fill them with the most frequent value:
S = Southampton, England
'''
full['Embarked'] = full['Embarked'].fillna( 'S' )
# Cabin: take a look at the values
full['Cabin'].head()
0     NaN
1     C85
2     NaN
3    C123
4     NaN
Name: Cabin, dtype: object
# Cabin has many missing values; fill them with 'U' for Unknown
full['Cabin'] = full['Cabin'].fillna( 'U' )
# Check that the imputation worked as expected
full.head()
   PassengerId  Survived  Pclass                                               Name     Sex   Age  SibSp  Parch            Ticket     Fare Cabin Embarked
0            1       0.0       3                            Braund, Mr. Owen Harris    male  22.0      1      0         A/5 21171   7.2500     U        S
1            2       1.0       1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0      1      0          PC 17599  71.2833   C85        C
2            3       1.0       3                             Heikkinen, Miss. Laina  female  26.0      0      0  STON/O2. 3101282   7.9250     U        S
3            4       1.0       1       Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0      1      0            113803  53.1000  C123        S
4            5       0.0       3                           Allen, Mr. William Henry    male  35.0      0      0            373450   8.0500     U        S
# Final check of the missing-value handling. Note that Survived is our label, the prediction target, so its missing values (the test-set rows) are left untouched.
full.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1309 entries, 0 to 1308
Data columns (total 12 columns):
 #   Column       Non-Null Count  Dtype  
---  ------       --------------  -----  
 0   PassengerId  1309 non-null   int64  
 1   Survived     891 non-null    float64
 2   Pclass       1309 non-null   int64  
 3   Name         1309 non-null   object 
 4   Sex          1309 non-null   object 
 5   Age          1309 non-null   float64
 6   SibSp        1309 non-null   int64  
 7   Parch        1309 non-null   int64  
 8   Ticket       1309 non-null   object 
 9   Fare         1309 non-null   float64
 10  Cabin        1309 non-null   object 
 11  Embarked     1309 non-null   object 
dtypes: float64(3), int64(4), object(5)
memory usage: 97.2+ KB

3.2 Feature extraction

How features are extracted depends on the data type:

① Numeric data: use directly
② Time series: split into separate year, month, day
③ Categorical data: replace categories with numbers via one-hot encoding
'''
1. Numeric:
PassengerId, Age, Fare, SibSp (siblings/spouses aboard), Parch (parents/children aboard)
2. Time series: none
3. Categorical:
1) With explicit categories:
Sex: male, female
Embarked: port of departure S = Southampton, England; stop 1 C = Cherbourg, France; stop 2 Q = Queenstown, Ireland
Pclass: 1 = 1st class, 2 = 2nd class, 3 = 3rd class
2) Strings, from which features may be extracted; these also count as categorical:
Name
Cabin
Ticket
'''
full.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1309 entries, 0 to 1308
Data columns (total 12 columns):
 #   Column       Non-Null Count  Dtype  
---  ------       --------------  -----  
 0   PassengerId  1309 non-null   int64  
 1   Survived     891 non-null    float64
 2   Pclass       1309 non-null   int64  
 3   Name         1309 non-null   object 
 4   Sex          1309 non-null   object 
 5   Age          1309 non-null   float64
 6   SibSp        1309 non-null   int64  
 7   Parch        1309 non-null   int64  
 8   Ticket       1309 non-null   object 
 9   Fare         1309 non-null   float64
 10  Cabin        1309 non-null   object 
 11  Embarked     1309 non-null   object 
dtypes: float64(3), int64(4), object(5)
memory usage: 97.2+ KB

3.2.1 Categorical data with explicit categories

① Sex: male, female
② Embarked: port of departure S = Southampton, England; stop 1 C = Cherbourg, France; stop 2 Q = Queenstown, Ireland
③ Pclass: 1 = 1st class, 2 = 2nd class, 3 = 3rd class
3.2.1.1 Sex
# View the Sex column
full['Sex'].head()
0      male
1    female
2    female
3    female
4      male
Name: Sex, dtype: object
'''
Map the sex values to numbers:
male -> 1, female -> 0
'''
sex_mapDict={'male':1,
            'female':0}
# map applies the lookup to every element of the Series
full['Sex']=full['Sex'].map(sex_mapDict)
full.head()
   PassengerId  Survived  Pclass                                               Name  Sex   Age  SibSp  Parch            Ticket     Fare Cabin Embarked
0            1       0.0       3                            Braund, Mr. Owen Harris    1  22.0      1      0         A/5 21171   7.2500     U        S
1            2       1.0       1  Cumings, Mrs. John Bradley (Florence Briggs Th...    0  38.0      1      0          PC 17599  71.2833   C85        C
2            3       1.0       3                             Heikkinen, Miss. Laina    0  26.0      0      0  STON/O2. 3101282   7.9250     U        S
3            4       1.0       1       Futrelle, Mrs. Jacques Heath (Lily May Peel)    0  35.0      1      0            113803  53.1000  C123        S
4            5       0.0       3                           Allen, Mr. William Henry    1  35.0      0      0            373450   8.0500     U        S
3.2.1.2 Embarked
# View the Embarked column
full['Embarked'].head()
0    S
1    C
2    S
3    S
4    S
Name: Embarked, dtype: object
# Holds the extracted features
embarkedDf = pd.DataFrame()

'''
Use get_dummies for one-hot encoding, producing dummy variables with column prefix 'Embarked'
'''
embarkedDf = pd.get_dummies( full['Embarked'] , prefix='Embarked' )
embarkedDf.head()
   Embarked_C  Embarked_Q  Embarked_S
0           0           0           1
1           1           0           0
2           0           0           1
3           0           0           1
4           0           0           1
# Append the one-hot dummy variables to the full Titanic dataset
full = pd.concat([full,embarkedDf],axis=1)

'''
Embarked has now been one-hot encoded into dummy variables,
so drop the original Embarked column
'''
full.drop('Embarked',axis=1,inplace=True)
full.head()
   PassengerId  Survived  Pclass                                               Name  Sex   Age  SibSp  Parch            Ticket     Fare Cabin  Embarked_C  Embarked_Q  Embarked_S
0            1       0.0       3                            Braund, Mr. Owen Harris    1  22.0      1      0         A/5 21171   7.2500     U           0           0           1
1            2       1.0       1  Cumings, Mrs. John Bradley (Florence Briggs Th...    0  38.0      1      0          PC 17599  71.2833   C85           1           0           0
2            3       1.0       3                             Heikkinen, Miss. Laina    0  26.0      0      0  STON/O2. 3101282   7.9250     U           0           0           1
3            4       1.0       1       Futrelle, Mrs. Jacques Heath (Lily May Peel)    0  35.0      1      0            113803  53.1000  C123           0           0           1
4            5       0.0       3                           Allen, Mr. William Henry    1  35.0      0      0            373450   8.0500     U           0           0           1
3.2.1.3 Pclass (cabin class)
'''
Pclass:
1 = 1st class, 2 = 2nd class, 3 = 3rd class
'''
# Holds the extracted features
pclassDf = pd.DataFrame()

# Use get_dummies for one-hot encoding, with column prefix 'Pclass'
pclassDf = pd.get_dummies( full['Pclass'] , prefix='Pclass' )
pclassDf.head()
   Pclass_1  Pclass_2  Pclass_3
0         0         0         1
1         1         0         0
2         0         0         1
3         1         0         0
4         0         0         1
# Append the one-hot dummy variables to the full Titanic dataset
full = pd.concat([full,pclassDf],axis=1)

# Drop the original Pclass column
full.drop('Pclass',axis=1,inplace=True)
full.head()
   PassengerId  Survived                                               Name  Sex   Age  SibSp  Parch            Ticket     Fare Cabin  Embarked_C  Embarked_Q  Embarked_S  Pclass_1  Pclass_2  Pclass_3
0            1       0.0                            Braund, Mr. Owen Harris    1  22.0      1      0         A/5 21171   7.2500     U           0           0           1         0         0         1
1            2       1.0  Cumings, Mrs. John Bradley (Florence Briggs Th...    0  38.0      1      0          PC 17599  71.2833   C85           1           0           0         1         0         0
2            3       1.0                             Heikkinen, Miss. Laina    0  26.0      0      0  STON/O2. 3101282   7.9250     U           0           0           1         0         0         1
3            4       1.0       Futrelle, Mrs. Jacques Heath (Lily May Peel)    0  35.0      1      0            113803  53.1000  C123           0           0           1         1         0         0
4            5       0.0                           Allen, Mr. William Henry    1  35.0      0      0            373450   8.0500     U           0           0           1         0         0         1

3.2.2 Categorical data: strings

Strings from which features may be extracted; these also count as categorical. The string columns here are:

① Name
② Cabin
③ Ticket
3.2.2.1 Extracting titles from names
'''
Look at the Name column.
Notice a striking pattern in the passenger names:
every name contains a specific form of address, i.e. a title. Extracted, this
makes a very useful new variable that can help with prediction.
For example:
Braund, Mr. Owen Harris
Heikkinen, Miss. Laina
Oliva y Ocana, Dona. Fermina
Peter, Master. Michael J
'''
full[ 'Name' ].head()
0                              Braund, Mr. Owen Harris
1    Cumings, Mrs. John Bradley (Florence Briggs Th...
2                               Heikkinen, Miss. Laina
3         Futrelle, Mrs. Jacques Heath (Lily May Peel)
4                             Allen, Mr. William Henry
Name: Name, dtype: object
'''
Define a function to extract the title from a name
'''
def getTitle(name):
    str1=name.split( ',' )[1]  # 'Mr. Owen Harris'
    str2=str1.split( '.' )[0]  # ' Mr'
    # strip() removes leading/trailing characters (whitespace by default)
    str3=str2.strip()
    return str3
# Holds the extracted features
titleDf = pd.DataFrame()
# map applies the function to every element of the Series
titleDf['Title'] = full['Name'].map(getTitle)
titleDf.head()
  Title
0    Mr
1   Mrs
2  Miss
3   Mrs
4    Mr
'''
Define the following title categories:
Officer: government/ship officials
Royalty: royalty and nobility
Mr: married or adult men
Mrs: married women
Miss: young unmarried women
Master: boys and young males
'''
# Mapping from the title strings found in names to the defined categories
title_mapDict = {
                    "Capt":       "Officer",
                    "Col":        "Officer",
                    "Major":      "Officer",
                    "Jonkheer":   "Royalty",
                    "Don":        "Royalty",
                    "Sir" :       "Royalty",
                    "Dr":         "Officer",
                    "Rev":        "Officer",
                    "the Countess":"Royalty",
                    "Dona":       "Royalty",
                    "Mme":        "Mrs",
                    "Mlle":       "Miss",
                    "Ms":         "Mrs",
                    "Mr" :        "Mr",
                    "Mrs" :       "Mrs",
                    "Miss" :      "Miss",
                    "Master" :    "Master",
                    "Lady" :      "Royalty"
                    }

# map applies the lookup to every element of the Series
titleDf['Title'] = titleDf['Title'].map(title_mapDict)

# Use get_dummies for one-hot encoding
titleDf = pd.get_dummies(titleDf['Title'])
titleDf.head()
   Master  Miss  Mr  Mrs  Officer  Royalty
0       0     0   1    0        0        0
1       0     0   0    1        0        0
2       0     1   0    0        0        0
3       0     0   0    1        0        0
4       0     0   1    0        0        0
# Append the one-hot dummy variables to the full Titanic dataset
full = pd.concat([full,titleDf],axis=1)

# Drop the Name column
full.drop('Name',axis=1,inplace=True)
full.head()
(full.head() output truncated: the six Title dummy columns are appended and Name is dropped)

5 rows × 21 columns

3.2.2.2 Extracting the cabin category from the cabin number
'''
The first letter of the cabin number is the cabin category
'''
# View the Cabin column
full['Cabin'].head()
0       U
1     C85
2       U
3    C123
4       U
Name: Cabin, dtype: object
# Holds the cabin features
cabinDf = pd.DataFrame()

'''
The cabin category is the first letter of the cabin number, e.g.
C85 maps to category C
'''
full[ 'Cabin' ] = full[ 'Cabin' ].map( lambda c : c[0] )

# Use get_dummies for one-hot encoding, with column prefix 'Cabin'
cabinDf = pd.get_dummies( full['Cabin'] , prefix = 'Cabin' )

cabinDf.head()
   Cabin_A  Cabin_B  Cabin_C  Cabin_D  Cabin_E  Cabin_F  Cabin_G  Cabin_T  Cabin_U
0        0        0        0        0        0        0        0        0        1
1        0        0        1        0        0        0        0        0        0
2        0        0        0        0        0        0        0        0        1
3        0        0        1        0        0        0        0        0        0
4        0        0        0        0        0        0        0        0        1
# Append the one-hot dummy variables to the full Titanic dataset
full = pd.concat([full,cabinDf],axis=1)

# Drop the Cabin column
full.drop('Cabin',axis=1,inplace=True)
full.head()
(full.head() output truncated: the nine Cabin dummy columns are appended and Cabin is dropped)

5 rows × 29 columns

3.2.3 Building family size and family category

# Holds the family features
familyDf = pd.DataFrame()

'''
Family size = Parch (parents/children aboard) + SibSp (siblings/spouses aboard) + 1
(the +1 counts the passenger themselves, who is also a member of the family)
'''
familyDf[ 'FamilySize' ] = full[ 'Parch' ] + full[ 'SibSp' ] + 1

'''
Family categories:
Family_Single: family size = 1
Family_Small:  2 <= family size <= 4
Family_Large:  family size >= 5
'''
# The conditional expression returns 1 when the condition holds, otherwise 0
familyDf[ 'Family_Single' ] = familyDf[ 'FamilySize' ].map( lambda s : 1 if s == 1 else 0 )
familyDf[ 'Family_Small' ]  = familyDf[ 'FamilySize' ].map( lambda s : 1 if 2 <= s <= 4 else 0 )
familyDf[ 'Family_Large' ]  = familyDf[ 'FamilySize' ].map( lambda s : 1 if 5 <= s else 0 )

familyDf.head()
   FamilySize  Family_Single  Family_Small  Family_Large
0           2              0             1             0
1           2              0             1             0
2           1              1             0             0
3           2              0             1             0
4           1              1             0             0
# Append the family-feature columns to the full Titanic dataset
full = pd.concat([full,familyDf],axis=1)
full.head()
(full.head() output truncated: FamilySize and the three family-category columns are appended)

5 rows × 33 columns

# By now we have accumulated quite a few features
full.shape
(1309, 33)

3.3 Feature selection

3.3.1 Correlation method: compute each feature's correlation coefficient with the label

# Correlation matrix
# (numeric_only=True is required in pandas >= 2.0 to skip the remaining object column, Ticket)
corrDf = full.corr(numeric_only=True)
corrDf
(correlation-matrix output omitted: 32 rows × 32 columns; the column of interest, corrDf['Survived'], is printed next)

'''
View each feature's correlation coefficient with Survived;
ascending=False sorts in descending order
'''
corrDf['Survived'].sort_values(ascending =False)
Survived         1.000000
Mrs              0.344935
Miss             0.332795
Pclass_1         0.285904
Family_Small     0.279855
Fare             0.257307
Cabin_B          0.175095
Embarked_C       0.168240
Cabin_D          0.150716
Cabin_E          0.145321
Cabin_C          0.114652
Pclass_2         0.093349
Master           0.085221
Parch            0.081629
Cabin_F          0.057935
Royalty          0.033391
Cabin_A          0.022287
FamilySize       0.016639
Cabin_G          0.016040
Embarked_Q       0.003650
PassengerId     -0.005007
Cabin_T         -0.026456
Officer         -0.031316
SibSp           -0.035322
Age             -0.070323
Family_Large    -0.125147
Embarked_S      -0.149683
Family_Single   -0.203367
Cabin_U         -0.316912
Pclass_3        -0.322308
Sex             -0.543351
Mr              -0.549199
Name: Survived, dtype: float64
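The shortlist in the next section is read off this ranking by eye; the same kind of shortlist can also be produced programmatically by filtering on absolute correlation. A sketch using a few of the printed values (the 0.1 threshold is an arbitrary choice for illustration, not part of the original analysis):

```python
import pandas as pd

# A few of the Survived-correlation values printed above
surv_corr = pd.Series({
    'Mrs': 0.344935, 'Miss': 0.332795, 'Fare': 0.257307,
    'Embarked_Q': 0.003650, 'Sex': -0.543351, 'Mr': -0.549199,
})

# Keep features whose |correlation with Survived| exceeds the threshold
selected = surv_corr[surv_corr.abs() > 0.1].index.tolist()
print(selected)  # ['Mrs', 'Miss', 'Fare', 'Sex', 'Mr']
```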

3.3.2 Selecting features

Based on the magnitude of each feature's correlation with Survived, we choose the following features as model inputs:

title (titleDf above), cabin class (pclassDf), family size (familyDf), fare (Fare), cabin category (cabinDf), port of embarkation (embarkedDf), sex (Sex)

# Feature selection
full_X = pd.concat( [titleDf,      # title
                     pclassDf,     # cabin class
                     familyDf,     # family size
                     full['Fare'], # fare
                     cabinDf,      # cabin category
                     embarkedDf,   # port of embarkation
                     full['Sex']   # sex
                    ] , axis=1 )
full_X.head()
(full_X.head() output truncated: the 27 selected feature columns)

5 rows × 27 columns

4. Modeling

Train a model on the training data with a chosen machine learning algorithm, then evaluate it on the test data.

4.1 Building the training and test sets

* The Titanic test data from Kaggle is what we will ultimately submit; it has no Survived values, so it cannot be used to evaluate the model. We call the test data Kaggle provides the prediction set (pred, short for "predict"): the set whose survival outcomes the model will predict.
* We use the training data Kaggle provides as our source dataset (source), and split it into a training set (train, for model fitting) and a test set (test, for model evaluation).
# The source dataset has 891 rows
sourceRow=891

'''
sourceRow was known before we merged the datasets: the original training data has 891 rows.
When slicing the first 891 rows out of full_X we subtract 1, because row labels start at 0
(and .loc slicing includes both endpoints).
'''
# Source dataset: features
source_X = full_X.loc[0:sourceRow-1,:]
# Source dataset: labels
source_y = full.loc[0:sourceRow-1,'Survived']   

# Prediction dataset: features
pred_X = full_X.loc[sourceRow:,:]
'''
Make sure the source dataset really is the first 891 rows, otherwise the model will go wrong later
'''
# Number of rows in the source dataset
print('Rows in the source dataset:',source_X.shape[0])
# Size of the prediction dataset
print('Rows in the prediction dataset:',pred_X.shape[0])
Rows in the source dataset: 891
Rows in the prediction dataset: 418
'''
Split the source dataset into a training set (train, for model fitting) and a test set (test, for model evaluation).
train_test_split is a commonly used function: it randomly splits samples into train and test data by proportion.
Arguments: the feature set to split, the label set to split, and
train_size/test_size: the proportion of samples (or, if an integer, the number of samples).
'''

'''
Since scikit-learn 0.18, sklearn.cross_validation has been replaced by sklearn.model_selection,
so the course's original line
from sklearn.cross_validation import train_test_split 
is updated to the code below.
'''
from sklearn.model_selection import train_test_split

# Build the training and test sets for modeling
train_X, test_X, train_y, test_y = train_test_split(source_X ,
                                                    source_y,
                                                    train_size=.8)

# Print the dataset sizes
print ('Source features:',source_X.shape, 
       'Training features:',train_X.shape ,
      'Test features:',test_X.shape)

print ('Source labels:',source_y.shape, 
       'Training labels:',train_y.shape ,
      'Test labels:',test_y.shape)
Source features: (891, 27) Training features: (712, 27) Test features: (179, 27)
Source labels: (891,) Training labels: (712,) Test labels: (179,)
# Inspect the source labels
source_y.head()
0    0.0
1    1.0
2    1.0
3    1.0
4    0.0
Name: Survived, dtype: float64

4.2 Choosing a machine learning algorithm

# Step 1: import the algorithm
from sklearn.linear_model import LogisticRegression
# Step 2: create the model: logistic regression
model = LogisticRegression()
# Random Forests Model
#from sklearn.ensemble import RandomForestClassifier
#model = RandomForestClassifier(n_estimators=100)
# Support Vector Machines
#from sklearn.svm import SVC, LinearSVC
#model = SVC()
# Gradient Boosting Classifier
#from sklearn.ensemble import GradientBoostingClassifier
#model = GradientBoostingClassifier()
# K-nearest neighbors
#from sklearn.neighbors import KNeighborsClassifier
#model = KNeighborsClassifier(n_neighbors = 3)
# Gaussian Naive Bayes
#from sklearn.naive_bayes import GaussianNB
#model = GaussianNB()
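The commented-out candidates above can be compared on an equal footing with a small cross-validation loop. A sketch; it uses a synthetic dataset from `make_classification` so it runs standalone, whereas the real comparison would pass `source_X`, `source_y`:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for (source_X, source_y)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

candidates = {
    'LogisticRegression': LogisticRegression(max_iter=1000),
    'RandomForest':       RandomForestClassifier(n_estimators=100, random_state=0),
    'KNN':                KNeighborsClassifier(n_neighbors=3),
}

# Mean accuracy over 5 cross-validation folds for each candidate
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in candidates.items()}
for name, s in scores.items():
    print(f'{name}: {s:.3f}')
```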

4.3 Training the model

# Step 3: fit the model
model.fit( train_X , train_y )
LogisticRegression()

5. Evaluation

The model is evaluated on the test data. Since we are using a classification algorithm here, the model's score method returns the model's accuracy.

score takes the test features test_X as its first argument and the test labels test_y as its second; internally the model predicts on test_X and compares the predictions against test_y.

# For a classification task, score returns the model's accuracy
model.score(test_X , test_y )
0.8044692737430168
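Accuracy alone does not show where the model errs; a confusion matrix breaks the errors down by class. A sketch, again on synthetic data so it runs standalone (the real code would reuse `model`, `test_X`, `test_y` from above):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix

# Synthetic stand-in for the Titanic split
X, y = make_classification(n_samples=300, random_state=0)
train_X, test_X, train_y, test_y = train_test_split(X, y, train_size=0.8, random_state=0)

model = LogisticRegression(max_iter=1000).fit(train_X, train_y)
pred = model.predict(test_X)

acc = accuracy_score(test_y, pred)   # the same number model.score(test_X, test_y) returns
cm  = confusion_matrix(test_y, pred) # rows = true class, columns = predicted class
print(acc)
print(cm)
```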

6. Deployment

6.1 Generating predictions and uploading to Kaggle

Use the model to predict on the prediction dataset, save the results to a csv file, and upload it to Kaggle to see your ranking.

# Use the trained model to predict survival on the prediction dataset
pred_Y = model.predict(pred_X)

'''
The predicted values come out as floats (0.0, 1.0),
but Kaggle requires the submission to be integers (0, 1),
so convert the data type
'''
pred_Y=pred_Y.astype(int)
# Passenger ids
passenger_id = full.loc[sourceRow:,'PassengerId']
# DataFrame: passenger id and predicted survival
predDf = pd.DataFrame( 
    { 'PassengerId': passenger_id , 
     'Survived': pred_Y } )
predDf.shape
predDf.head()
# Save the results
predDf.to_csv( 'titanic_pred.csv' , index = False )
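Before uploading, it is worth sanity-checking the frame against the format Kaggle expects (one row per test passenger, integer 0/1 Survived). A minimal sketch with toy values standing in for the real `predDf`:

```python
import pandas as pd

# Toy frame standing in for the real predDf built above
predDf = pd.DataFrame({'PassengerId': [892, 893, 894],
                       'Survived':    [0, 1, 0]})

assert list(predDf.columns) == ['PassengerId', 'Survived']
assert predDf['Survived'].isin([0, 1]).all()   # only 0/1 labels
assert predDf['Survived'].dtype.kind == 'i'    # integer dtype, as Kaggle requires
print('submission format OK')
```

For the real file, `predDf.shape` should come out as (418, 2).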