Titanic Feature Engineering

Deleting the Age, Ticket, and Name fields during the earlier data analysis was somewhat hasty, so let's start by handling the missing values in the Age field.

1. Handling missing values in the Age field

# Fill missing ages with the mean age of each title group (train set).
# regex=False makes 'Mr.' a literal match; with the default regex mode the
# '.' wildcard would also match 'Mrs.' and pollute the Mr group.
Mr_age_mean = train_data[train_data.Name.str.contains('Mr.', regex=False)]['Age'].mean()
Mrs_age_mean = train_data[train_data.Name.str.contains('Mrs.', regex=False)]['Age'].mean()
Miss_age_mean = train_data[train_data.Name.str.contains('Miss.', regex=False)]['Age'].mean()
Master_age_mean = train_data[train_data.Name.str.contains('Master.', regex=False)]['Age'].mean()
train_data.loc[train_data.Name.str.contains('Mr.', regex=False) & train_data.Age.isnull(), 'Age'] = Mr_age_mean
train_data.loc[train_data.Name.str.contains('Mrs.', regex=False) & train_data.Age.isnull(), 'Age'] = Mrs_age_mean
train_data.loc[train_data.Name.str.contains('Miss.', regex=False) & train_data.Age.isnull(), 'Age'] = Miss_age_mean
train_data.loc[train_data.Name.str.contains('Master.', regex=False) & train_data.Age.isnull(), 'Age'] = Master_age_mean
train_data.loc[train_data.Name.str.contains('Dr.', regex=False) & train_data.Age.isnull(), 'Age'] = Mr_age_mean
# Fill missing ages with the mean age of each title group (test set).
Mr_age_mean = test_data[test_data.Name.str.contains('Mr.', regex=False)]['Age'].mean()
Mrs_age_mean = test_data[test_data.Name.str.contains('Mrs.', regex=False)]['Age'].mean()
Miss_age_mean = test_data[test_data.Name.str.contains('Miss.', regex=False)]['Age'].mean()
Master_age_mean = test_data[test_data.Name.str.contains('Master.', regex=False)]['Age'].mean()
test_data.loc[test_data.Name.str.contains('Mr.', regex=False) & test_data.Age.isnull(), 'Age'] = Mr_age_mean
test_data.loc[test_data.Name.str.contains('Dr.', regex=False) & test_data.Age.isnull(), 'Age'] = Mr_age_mean
test_data.loc[test_data.Name.str.contains('Mrs.', regex=False) & test_data.Age.isnull(), 'Age'] = Mrs_age_mean
test_data.loc[test_data.Name.str.contains('Miss.', regex=False) & test_data.Age.isnull(), 'Age'] = Miss_age_mean
test_data.loc[test_data.Name.str.contains('Master.', regex=False) & test_data.Age.isnull(), 'Age'] = Master_age_mean

That completes the Age imputation; the missing values were filled using the title as the grouping key.
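The same title-based fill can also be written more compactly with a groupby transform. A minimal sketch, using a toy frame with hypothetical names and ages in place of the real train_data:

```python
import pandas as pd

# Toy stand-in for train_data; names and ages are hypothetical.
df = pd.DataFrame({
    'Name': ['Smith, Mr. John', 'Doe, Mrs. Jane', 'Roe, Miss. Ann',
             'Poe, Mr. Tom', 'Moe, Mrs. Amy'],
    'Age': [30.0, None, 20.0, None, 50.0],
})

# Extract the title once, then fill each group's NaNs with that group's mean.
df['Title'] = df['Name'].str.extract(r',\s*([A-Za-z]+)\.')
df['Age'] = df['Age'].fillna(df.groupby('Title')['Age'].transform('mean'))
```

This avoids a separate mean variable per title and automatically covers any title present in the data.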

Next, mine the hidden features.

1. Adding a title feature

The Name field contains titles, which carry important information such as sex, marital status, and social standing, so extract a title feature from Name.

# Title is the first word after the comma, e.g. "Braund, Mr. Owen Harris" -> "Mr."
train_data['Title'] = train_data.Name.apply(lambda x: x.split(',')[1]).apply(lambda x: x.split()[0])
test_data['Title'] = test_data.Name.apply(lambda x: x.split(',')[1]).apply(lambda x: x.split()[0])

Analyze the relationship between the title feature and survival:

import pandas as pd
import matplotlib.pyplot as plt
survived_0_title = train_data.Title[train_data.Survived == 0].value_counts()
survived_1_title = train_data.Title[train_data.Survived == 1].value_counts()
df = pd.DataFrame({'survived-0-title':survived_0_title, 'survived-1-title':survived_1_title})
df.plot(kind='bar', stacked=True, color=['lightcoral', 'lightgreen'])
plt.xlabel('title')
plt.ylabel('sum')
plt.legend(('survived-0', 'survived-1'), loc='best')
plt.title('Title-Survived')
plt.show()

2. Adding an is-alone feature

The earlier analysis showed that passengers traveling alone had a lower chance of survival, while survival probability first rises and then falls as family size grows. So extract a feature for whether the passenger is alone.

# Add an is-alone feature (train set)
train_data['isalone'] = np.nan
train_data.loc[train_data.SibSp + train_data.Parch == 0, 'isalone'] = 1
train_data.loc[train_data.isalone.isnull(), 'isalone'] = 0
# Add an is-alone feature (test set)
test_data['isalone'] = np.nan
test_data.loc[test_data.SibSp + test_data.Parch == 0, 'isalone'] = 1
test_data.loc[test_data.isalone.isnull(), 'isalone'] = 0

Analyze the relationship between the is-alone feature and survival:

survived_0_alone = train_data.isalone[train_data.Survived == 0].value_counts()
survived_1_alone = train_data.isalone[train_data.Survived == 1].value_counts()
df = pd.DataFrame({'survived-0-alone':survived_0_alone, 'survived-1-alone':survived_1_alone})
df.plot(kind='bar', stacked=True, color=['lightcoral', 'lightgreen'])
plt.xlabel('isalone')
plt.ylabel('sum')
plt.legend(('survived-0', 'survived-1'), loc='best')
plt.title('isalone-Survived')
plt.show()

As the chart shows, passengers traveling alone were less likely to survive than those who were not alone.

3. Add a total-family-size feature, analyze its relationship with survival, and then decide whether to derive a further family-size feature.

# Add a total-family-size feature
train_data['Family'] = train_data.SibSp + train_data.Parch
test_data['Family'] = test_data.SibSp + test_data.Parch

survived_0_family = train_data.Family[train_data.Survived == 0].value_counts()
survived_1_family = train_data.Family[train_data.Survived == 1].value_counts()
df = pd.DataFrame({'survived-0-family':survived_0_family, 'survived-1-family':survived_1_family})
df.plot(kind='bar', stacked=True, color=['lightcoral', 'lightgreen'])
plt.xlabel('Family')
plt.ylabel('sum')
plt.legend(('survived-0', 'survived-1'), loc='best')
plt.title('Family-Survived')
plt.show()

Family sizes that are either too large or too small both show low survival rates; survival is highest when the family size is 1–3, so rework the feature to capture that range explicitly.

# Bucket the total family size (train set)
train_data['Family'] = np.nan
train_data.loc[train_data.SibSp + train_data.Parch == 0, 'Family'] = 0
train_data.loc[(train_data.SibSp + train_data.Parch > 0) & (train_data.SibSp + train_data.Parch <= 3), 'Family'] = 1
train_data.loc[train_data.Family.isnull(), 'Family'] = 2
# Bucket the total family size (test set)
test_data['Family'] = np.nan
test_data.loc[test_data.SibSp + test_data.Parch == 0, 'Family'] = 0
test_data.loc[(test_data.SibSp + test_data.Parch > 0) & (test_data.SibSp + test_data.Parch <= 3), 'Family'] = 1
test_data.loc[test_data.Family.isnull(), 'Family'] = 2

4. Add a mother feature. If Parch is non-zero and the passenger is female, she is very likely a mother, and mothers should in principle have a higher chance of being rescued.

# Add a mother feature (train set)
train_data['mother'] = np.nan
train_data.loc[(train_data.Parch > 0) & (train_data.Sex == 'female'), 'mother'] = 1
train_data.loc[train_data.mother.isnull(), 'mother'] = 0
# Add a mother feature (test set)
test_data['mother'] = np.nan
test_data.loc[(test_data.Parch > 0) & (test_data.Sex == 'female'), 'mother'] = 1
test_data.loc[test_data.mother.isnull(), 'mother'] = 0

We can then look at the relationship between mother and survival.
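A quick way to check that relationship is a normalized crosstab; the sketch below uses a hypothetical toy frame in place of train_data:

```python
import pandas as pd

# Toy stand-in for train_data; values are hypothetical.
df = pd.DataFrame({'mother':   [1, 1, 0, 0, 0, 1],
                   'Survived': [1, 1, 0, 1, 0, 0]})

# normalize='index' turns the counts into per-group survival rates.
rates = pd.crosstab(df['mother'], df['Survived'], normalize='index')
```

On the real data, comparing `rates.loc[1, 1]` with `rates.loc[0, 1]` shows whether mothers were indeed rescued more often.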

5. Add a person feature. Sex and age together distinguish adult men, adult women, and children, and the survival rates of these three groups differ considerably.

# Add a person feature distinguishing adult men, adult women, and children (train set)
train_data['person'] = np.nan
train_data.loc[train_data.Age <= 16, 'person'] = 'child'
train_data.loc[(train_data.Age > 16) & (train_data.Sex == 'female'), 'person'] = 'adult-women'
train_data.loc[(train_data.Age > 16) & (train_data.Sex == 'male'), 'person'] = 'adult-man'
# Add a person feature distinguishing adult men, adult women, and children (test set)
test_data['person'] = np.nan
test_data.loc[test_data.Age <= 16, 'person'] = 'child'
test_data.loc[(test_data.Age > 16) & (test_data.Sex == 'female'), 'person'] = 'adult-women'
test_data.loc[(test_data.Age > 16) & (test_data.Sex == 'male'), 'person'] = 'adult-man'

Examine the relationship between the person feature and survival.
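Since Survived is a 0/1 column, a groupby mean directly yields each group's survival rate; a sketch on hypothetical toy data:

```python
import pandas as pd

# Toy stand-in for train_data; values are hypothetical.
df = pd.DataFrame({'person': ['child', 'adult-women', 'adult-man',
                              'adult-man', 'child'],
                   'Survived': [1, 1, 0, 0, 1]})

# The mean of a 0/1 column per group is that group's survival rate.
rate = df.groupby('person')['Survived'].mean()
```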

6. Add a shared-ticket feature. Inspecting the Ticket field shows that some passengers share a ticket number, presumably friends or family, so is that group more likely to survive?

# Add a feature marking whether the ticket number is duplicated (train set)
train_data['ticket-same'] = train_data['Ticket'].duplicated().astype(int)
# Add a feature marking whether the ticket number is duplicated (test set)
test_data['ticket-same'] = test_data['Ticket'].duplicated().astype(int)

Examine the relationship between ticket-same and survival.
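The same stacked-bar comparison used for the earlier features applies here; a self-contained sketch with made-up values:

```python
import pandas as pd

# Toy stand-in for train_data; values are hypothetical.
df = pd.DataFrame({'ticket-same': [1, 0, 0, 1, 0],
                   'Survived':    [1, 0, 1, 1, 0]})

# Count each ticket-same value separately for the two survival outcomes.
s0 = df['ticket-same'][df['Survived'] == 0].value_counts()
s1 = df['ticket-same'][df['Survived'] == 1].value_counts()
counts = pd.DataFrame({'survived-0': s0, 'survived-1': s1}).fillna(0)
# counts.plot(kind='bar', stacked=True) would reproduce the earlier charts.
```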

With these features added, Name and Ticket have both been used to derive new features, so the two raw columns can be dropped.
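Dropping the two raw columns is a one-liner; a sketch on a toy frame:

```python
import pandas as pd

# Toy stand-in with the two raw columns plus one kept feature.
df = pd.DataFrame({'Name': ['Smith, Mr. John'],
                   'Ticket': ['A/5 21171'],
                   'Pclass': [3]})
# Remove the raw text columns now that Title and ticket-same are derived.
df = df.drop(columns=['Name', 'Ticket'])
```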

Next, discretize Age and Fare.

1. Discretizing Age

# Discretize Age (train set)
train_data.loc[train_data.Age <= 16, 'Age'] = 0
train_data.loc[(train_data.Age > 16) & (train_data.Age <= 32), 'Age'] = 1
train_data.loc[(train_data.Age > 32) & (train_data.Age <= 48), 'Age'] = 2
train_data.loc[(train_data.Age > 48) & (train_data.Age <= 64), 'Age'] = 3
train_data.loc[train_data.Age > 64, 'Age'] = 4
# Discretize Age (test set)
test_data.loc[test_data.Age <= 16, 'Age'] = 0
test_data.loc[(test_data.Age > 16) & (test_data.Age <= 32), 'Age'] = 1
test_data.loc[(test_data.Age > 32) & (test_data.Age <= 48), 'Age'] = 2
test_data.loc[(test_data.Age > 48) & (test_data.Age <= 64), 'Age'] = 3
test_data.loc[test_data.Age > 64, 'Age'] = 4
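The five masks above are equivalent to a single pd.cut with the same boundaries; a self-contained sketch:

```python
import pandas as pd

# Hypothetical ages, one per resulting bucket.
ages = pd.Series([5, 20, 40, 50, 70])
# Same boundaries as the masks: (-inf,16], (16,32], (32,48], (48,64], (64,inf)
binned = pd.cut(ages, bins=[-1, 16, 32, 48, 64, 200],
                labels=[0, 1, 2, 3, 4]).astype(int)
```

Using pd.cut keeps the bin edges in one place and avoids the risk of sequential masks interfering with each other.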

2. Discretizing Fare

# Discretize Fare (train set)
train_data.loc[train_data.Fare <= 7.91, 'Fare'] = 0
train_data.loc[(train_data.Fare > 7.91) & (train_data.Fare <= 14.454), 'Fare'] = 1
train_data.loc[(train_data.Fare > 14.454) & (train_data.Fare <= 31), 'Fare'] = 2
train_data.loc[train_data.Fare > 31, 'Fare'] = 3
# Discretize Fare (test set)
test_data.loc[test_data.Fare <= 7.91, 'Fare'] = 0
test_data.loc[(test_data.Fare > 7.91) & (test_data.Fare <= 14.454), 'Fare'] = 1
test_data.loc[(test_data.Fare > 14.454) & (test_data.Fare <= 31), 'Fare'] = 2
test_data.loc[test_data.Fare > 31, 'Fare'] = 3
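The cut points 7.91, 14.454, and 31 look like fare quartiles, so pd.qcut with q=4 should reproduce this binning approximately on the real data; a sketch on made-up fares:

```python
import pandas as pd

# Hypothetical fares; qcut assigns equal-sized quartile buckets.
fares = pd.Series([3.0, 8.0, 12.0, 20.0, 40.0, 100.0, 6.0, 15.0])
binned = pd.qcut(fares, q=4, labels=[0, 1, 2, 3]).astype(int)
```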

Finally, assess the importance of the features.

The full feature set is now: Pclass, Sex, SibSp, Parch, Fare, Cabin, Embarked, Age, Title, isalone, Family, mother, person, ticket-same.
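One caveat before running the selector below: chi2 only accepts non-negative numeric input, so string columns such as Sex, Title, and person must be encoded first. A minimal sketch on toy data (column values are hypothetical):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.feature_selection import SelectKBest, chi2

# Toy stand-in for train_data; values are hypothetical.
df = pd.DataFrame({'Sex': ['male', 'female', 'female', 'male'],
                   'Pclass': [3, 1, 2, 3],
                   'Survived': [0, 1, 1, 0]})
# Label-encode string columns so chi2 sees non-negative integers.
for col in ['Sex']:
    df[col] = LabelEncoder().fit_transform(df[col])

selector = SelectKBest(chi2, k=2).fit(df[['Sex', 'Pclass']], df['Survived'])
scores = selector.scores_
```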

from sklearn.feature_selection import SelectKBest, chi2
import seaborn as sns

features = ['Pclass', 'Sex', 'SibSp', 'Parch', 'Fare', 'Cabin', 'Embarked', 'Age',
       'Title', 'isalone', 'Family', 'mother', 'person', 'ticket-same']
X = train_data[features]
Y = train_data['Survived']
selector = SelectKBest(chi2, k=14).fit(X, Y)
sns.barplot(x=features, y=np.array(selector.scores_), ci=0)
plt.show()


Reference: https://bbs.huaweicloud.com/blogs/80ef17d5319c11e89fc57ca23e93a89f

The earlier treatment of Age, Name, and Fare was somewhat lacking, so make a few revisions.

1. Predicting the unknown ages with a random forest

from sklearn.ensemble import RandomForestRegressor
# Predict the unknown ages with a random forest
def set_missing_ages(df):

    # Take the numeric features to feed into the RandomForestRegressor
    age_df = df[['Age', 'Fare', 'Parch', 'SibSp', 'Pclass']]

    # Split passengers into known-age and unknown-age groups
    # (.values replaces the removed DataFrame.as_matrix())
    known_age = age_df[age_df.Age.notnull()].values
    unknown_age = age_df[age_df.Age.isnull()].values

    # y is the target age
    y = known_age[:, 0]

    # X holds the feature values
    X = known_age[:, 1:]

    # Fit the RandomForestRegressor
    rfr = RandomForestRegressor(random_state=0, n_estimators=2000)
    rfr.fit(X, y)

    # Predict the unknown ages with the fitted model
    predictedAges = rfr.predict(unknown_age[:, 1:])

    # Fill the original missing values with the predictions
    df.loc[(df.Age.isnull()), 'Age'] = predictedAges

    return df, rfr

train_data1, rfr = set_missing_ages(train_data1)
age_df = test_data1[['Age', 'Fare', 'Parch', 'SibSp', 'Pclass']]

# Split the test passengers into known-age and unknown-age groups
known_age = age_df[age_df.Age.notnull()].values
unknown_age = age_df[age_df.Age.isnull()].values
test_data_age = rfr.predict(unknown_age[:, 1:])
test_data1.loc[test_data1.Age.isnull(), 'Age'] = test_data_age

2. Extracting the title from Name and normalizing it

# Extract the title from Name
import re  # the re module provides Python's regular-expression support
def get_title(name):
    # re.search scans the string and returns the first match:
    # a word followed by a period, e.g. "Mr." or "Miss."
    title_search = re.search(r' ([A-Za-z]+)\.', name)
    if title_search:
        return title_search.group(1)  # the captured title without the period
    return ""

train_data1['title'] = train_data1['Name'].apply(get_title)
test_data1['title'] = test_data1['Name'].apply(get_title)

# Normalize the titles onto a smaller set of numeric codes
title_mapping = {'Mr': 1, 'Miss': 2, 'Mrs': 3, 'Master': 4, 'Dr': 5, 'Rev': 6,
                 'Major': 7, 'Mlle': 7, 'Col': 7, 'Jonkheer': 8, 'Capt': 8,
                 'Dona': 8, 'Don': 8, 'Mme': 8, 'Sir': 8, 'Lady': 8, 'Ms': 8,
                 'Countess': 8}
for k, v in title_mapping.items():
    train_data1.loc[train_data1.title == k, 'title'] = v
    test_data1.loc[test_data1.title == k, 'title'] = v
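The per-key loop can also be collapsed into a single vectorized Series.map; a self-contained sketch:

```python
import pandas as pd

# Hypothetical extracted titles.
titles = pd.Series(['Mr', 'Miss', 'Mrs', 'Dr', 'Capt'])
title_mapping = {'Mr': 1, 'Miss': 2, 'Mrs': 3, 'Master': 4, 'Dr': 5, 'Rev': 6,
                 'Major': 7, 'Mlle': 7, 'Col': 7, 'Jonkheer': 8, 'Capt': 8,
                 'Dona': 8, 'Don': 8, 'Mme': 8, 'Sir': 8, 'Lady': 8, 'Ms': 8,
                 'Countess': 8}
# map replaces the loop in one pass; unmapped titles become NaN,
# which fillna(0) turns into a catch-all bucket.
encoded = titles.map(title_mapping).fillna(0).astype(int)
```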

Because the Age values have changed, update the person field accordingly.

train_data.loc[train_data.Age <= 16, 'person'] = 'child'
train_data.loc[(train_data.Age > 16) & (train_data.Sex == 'male'), 'person'] = 'adult-man'
train_data.loc[(train_data.Age > 16) & (train_data.Sex == 'female'), 'person'] = 'adult-woman'
test_data.loc[test_data.Age <= 16, 'person'] = 'child'
test_data.loc[(test_data.Age > 16) & (test_data.Sex == 'male'), 'person'] = 'adult-man'
test_data.loc[(test_data.Age > 16) & (test_data.Sex == 'female'), 'person'] = 'adult-woman'

3. Discretizing Age and Fare with KMeans

from sklearn.cluster import KMeans

# Discretize Age with KMeans
kmeans = KMeans(n_clusters=4)
kmeans.fit(np.asarray(train_data.Age).reshape(-1, 1))
train_data_age = kmeans.labels_
kmeans.fit(np.asarray(test_data.Age).reshape(-1, 1))
test_data_age = kmeans.labels_
train_data['age'] = train_data_age
test_data['age'] = test_data_age

# Discretize Fare with KMeans
kmeans = KMeans(n_clusters=4)
kmeans.fit(np.asarray(test_data.Fare).reshape(-1, 1))
test_data_fare = kmeans.labels_
kmeans.fit(np.asarray(train_data.Fare).reshape(-1, 1))
train_data_fare = kmeans.labels_
train_data['fare'] = train_data_fare
test_data['fare'] = test_data_fare
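One thing to watch with KMeans here: its label ids are arbitrary, so cluster 2 is not necessarily an older group than cluster 1. If the bins are meant to be ordinal, the labels can be remapped in ascending order of cluster center; a sketch with made-up ages:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical ages forming four well-separated groups.
ages = np.array([2, 5, 25, 30, 55, 60, 75, 80], dtype=float).reshape(-1, 1)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(ages)

# Remap arbitrary label ids so that 0 is the youngest cluster, 3 the oldest.
order = np.argsort(km.cluster_centers_.ravel())
remap = {old: new for new, old in enumerate(order)}
ordered = np.array([remap[label] for label in km.labels_])
```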

Finally, compute the feature importances with a random forest.

from sklearn.ensemble import RandomForestClassifier

features = ['Pclass', 'Sex', 'SibSp', 'Parch', 'Cabin',
       'Embarked', 'title', 'isalone', 'Family', 'mother', 'person',
       'ticket-same', 'age', 'fare']
rf = RandomForestClassifier(random_state=42, n_estimators=500, max_depth=20)
result = rf.fit(train_data[features], train_data['Survived'])
importances_df = pd.DataFrame({'feature': features,
                               'importance': result.feature_importances_})
importances_df.plot(kind='bar', stacked=True)
plt.xticks(np.arange(len(features)), features)
plt.show()
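For readability, the importances can be sorted before plotting so the ranking is visible at a glance; a small sketch with hypothetical values:

```python
import pandas as pd

# Hypothetical importances standing in for result.feature_importances_.
imp = pd.DataFrame({'feature': ['Sex', 'Pclass', 'fare'],
                    'importance': [0.4, 0.2, 0.1]})
# Sort descending so the most important feature appears first on the chart.
ranked = imp.sort_values('importance', ascending=False).reset_index(drop=True)
```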

This is the feature importance after discretizing Age and Fare.

This was the feature importance before discretizing Age and Fare.

Reference: https://zhuanlan.zhihu.com/p/30538352
