Kaggle Titanic (Titanic: Machine Learning from Disaster) Data Analysis (Part 1)

This post is a data-analysis project for the Kaggle Titanic competition, covering feature engineering, visualization, base models, and model stacking. In the feature-engineering part, new variables such as name length and whether the passenger had a cabin are added, and the data is preprocessed. Correlation analysis and visualization show the relationships between variables. In the stacking part, a SklearnHelper class is used together with Random Forest, Extra Trees, and other classifiers to generate Out-of-Fold predictions. Finally, XGBoost serves as the second-level learner and produces the submission file.

This is my second introductory Kaggle project: predicting whether passengers survived the Titanic disaster. As before, I am learning from one of the most upvoted kernels; this time it is Introduction to Ensembling/Stacking in Python.

1. Preparation

Import the libraries:

# Load in our libraries
import pandas as pd
import numpy as np
import re
import sklearn
import xgboost as xgb
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.tools as tls
import warnings
warnings.filterwarnings('ignore')
#Going to use these 5 base models for the stacking
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier, 
                              GradientBoostingClassifier, ExtraTreesClassifier)
from sklearn.svm import SVC
from sklearn.model_selection import KFold

Read the datasets and display the first three rows:

#Load in the train and test datasets
train = pd.read_csv('data/train.csv')
test = pd.read_csv('data/test.csv')
#Store our passenger ID for easy access
PassengerId = test['PassengerId']
train.head(3)

[Figure: first three rows of the training set]
The meaning of each variable in the dataset:
(1) Survived: whether the passenger survived; 1 means yes.
(2) Pclass: ticket class (1 = 1st, 2 = 2nd, 3 = 3rd), a proxy for socio-economic status.
(3) SibSp: number of siblings/spouses (including step-siblings) aboard.
(4) Parch: number of parents/children (including step-children) aboard.
(5) Ticket: ticket number.
(6) Fare: ticket fare.
(7) Cabin: cabin number.
(8) Embarked: port of embarkation; C = Cherbourg, Q = Queenstown, S = Southampton.

2. Feature Engineering

Collect the training and test sets into a list so they can be processed together, then add two variables: the length of the name and whether the passenger had a cabin:

full_data = [train, test]
#Some features of my own that I have added in
#Gives the length of the name
train['Name_length'] = train['Name'].apply(len)
test['Name_length'] = test['Name'].apply(len)
#Feature that tells whether a passenger had a cabin on the Titanic
train['Has_Cabin'] = train["Cabin"].apply(lambda x: 0 if type(x) == float else 1)
test['Has_Cabin'] = test["Cabin"].apply(lambda x: 0 if type(x) == float else 1)
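As an aside, the `type(x) == float` test works because pandas represents a missing Cabin value as NaN, which is a Python float. A minimal sketch with made-up cabin values:

```python
import numpy as np
import pandas as pd

# A missing value in an object column is NaN (a float), so type(x) == float flags it
cabins = pd.Series(["C85", np.nan, "E46"])
has_cabin = cabins.apply(lambda x: 0 if type(x) == float else 1)
print(has_cabin.tolist())  # [1, 0, 1]
```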

Add two more variables: the total number of family members aboard and whether the passenger traveled alone:

#Feature engineering steps taken from Sina
#Create new feature FamilySize as a combination of SibSp and Parch
for dataset in full_data:
    dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
#Create new feature IsAlone from FamilySize
for dataset in full_data:
    dataset['IsAlone'] = 0
    dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1
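On a toy frame (SibSp/Parch values invented here), the two derived columns come out as expected:

```python
import pandas as pd

# Invented SibSp/Parch values to illustrate FamilySize and IsAlone
df = pd.DataFrame({'SibSp': [1, 0, 3], 'Parch': [0, 0, 2]})
df['FamilySize'] = df['SibSp'] + df['Parch'] + 1  # +1 counts the passenger themselves
df['IsAlone'] = 0
df.loc[df['FamilySize'] == 1, 'IsAlone'] = 1
print(df['FamilySize'].tolist())  # [2, 1, 6]
print(df['IsAlone'].tolist())     # [0, 1, 0]
```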

Fill missing Embarked values with S and missing fares with the median:

#Remove all NULLS in the Embarked column
for dataset in full_data:
    dataset['Embarked'] = dataset['Embarked'].fillna('S')
#Remove all NULLS in the Fare column and create a new feature CategoricalFare
for dataset in full_data:
    dataset['Fare'] = dataset['Fare'].fillna(train['Fare'].median())
train['CategoricalFare'] = pd.qcut(train['Fare'], 4)

Note: pd.qcut(array, k) splits the array into k quantile-based bins, so each bin contains roughly the same number of values, and returns the bin that each value falls into.
Add an age-bin variable:

# Create a New feature CategoricalAge
for dataset in full_data:
    age_avg = dataset['Age'].mean()
    age_std = dataset['Age'].std()
    age_null_count = dataset['Age'].isnull().sum()
    age_null_random_list = np.random.randint(age_avg - age_std, age_avg + age_std, size=age_null_count)
    dataset.loc[dataset['Age'].isnull(), 'Age'] = age_null_random_list
    dataset['Age'] = dataset['Age'].astype(int)
train['CategoricalAge'] = pd.cut(train['Age'], 5)

Note: numpy.random.randint(low, high=None, size=None, dtype='l') returns random integers in the half-open range [low, high), i.e. low is inclusive and high is exclusive. pd.cut chooses equal-width bins based on the values themselves, so every bin spans the same interval, whereas qcut chooses bins based on the frequency of the values, so every bin contains the same number of values.
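To make the cut/qcut difference concrete, here is a small sketch with invented, skewed data: cut splits the value range into equal-width intervals, while qcut splits at quantiles so the per-bin counts stay balanced:

```python
import pandas as pd

# Skewed toy data: one large outlier
vals = pd.Series([1, 2, 3, 4, 100])
width_bins = pd.cut(vals, 2)   # equal-width: split near the midpoint of [1, 100]
freq_bins = pd.qcut(vals, 2)   # equal-frequency: split at the median (3)
print(width_bins.value_counts(sort=False).tolist())  # [4, 1] -- the outlier gets a bin to itself
print(freq_bins.value_counts(sort=False).tolist())   # [3, 2] -- counts nearly equal
```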

Add a variable that extracts the title, i.e. the word before the "." in each name:

# Define function to extract titles from passenger names
def get_title(name):
    title_search = re.search(r' ([A-Za-z]+)\.', name)
    # If the title exists, extract and return it.
    if title_search:
        return title_search.group(1)
    return ""
# Create a new feature Title, containing the titles of passenger names
for dataset in full_data:
    dataset['Title'] = dataset['Name'].apply(get_title)

Note: re.search scans the string for the pattern and returns as soon as it finds the first match; if nothing in the string matches, it returns None. group(1) returns the part matched by the first parenthesized group.
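Running the same get_title logic on a few sample names (the regex is written as a raw string here; the last input is an invented string with no title):

```python
import re

# Same logic as the article's get_title, with a raw-string pattern
def get_title(name):
    title_search = re.search(r' ([A-Za-z]+)\.', name)
    if title_search:
        return title_search.group(1)  # first parenthesized group: the title itself
    return ""

print(get_title("Braund, Mr. Owen Harris"))  # Mr
print(get_title("Heikkinen, Miss. Laina"))   # Miss
print(get_title("no title here"))            # "" (no match)
```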

Group the uncommon titles under a single label:

#Group all non-common titles into one single grouping "Rare"
for dataset in full_data:
    dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess', 'Capt', 'Col', 'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')