14 Days of Data Analysis and Machine Learning Practice, Day 14 — Case Study: The Titanic Disaster, Code Walkthrough


  • First, read in the data and show the first five rows
import pandas  # running in a Jupyter/IPython notebook, so head() renders as a table
titanic = pandas.read_csv("titanic_train.csv")
titanic.head(5)
# print(titanic.describe())

(Figure: the first five rows of the training data)

  • Preprocess the data: fill missing Age values with the median
titanic["Age"] = titanic["Age"].fillna(titanic["Age"].median())
print(titanic.describe())

(Figure: titanic.describe() output after filling Age)
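Before filling, it is worth checking how much data is actually missing; a minimal sketch (not in the original walkthrough) that counts missing values per column:

# Count missing values in each column; in the standard training file
# Age, Cabin, and Embarked are the columns with gaps.
print(titanic.isnull().sum())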

  • Convert the string values male/female to 0 and 1
print(titanic["Sex"].unique())

# Replace all the occurrences of male with the number 0.
titanic.loc[titanic["Sex"] == "male", "Sex"] = 0
titanic.loc[titanic["Sex"] == "female", "Sex"] = 1

['male' 'female']
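The same encoding can be written as a single vectorized call; a sketch of an equivalent alternative to the two .loc lines above (use one approach or the other, not both, since map would turn already-encoded 0/1 values into NaN):

# One-line equivalent of the two .loc assignments above
titanic["Sex"] = titanic["Sex"].map({"male": 0, "female": 1})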

  • Convert the string values S/C/Q to 0, 1, and 2 (fill missing values with 'S' first)
print(titanic["Embarked"].unique())
titanic["Embarked"] = titanic["Embarked"].fillna('S')
titanic.loc[titanic["Embarked"] == "S", "Embarked"] = 0
titanic.loc[titanic["Embarked"] == "C", "Embarked"] = 1
titanic.loc[titanic["Embarked"] == "Q", "Embarked"] = 2

['S' 'C' 'Q' nan]
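The codes 0/1/2 impose an ordering that the three ports do not really have; one-hot encoding is a common alternative. A sketch using pandas.get_dummies, which would run on the raw 'S'/'C'/'Q' values before the ordinal conversion above (an alternative, not part of the original walkthrough):

# Turn Embarked into three separate 0/1 indicator columns
embarked_dummies = pandas.get_dummies(titanic["Embarked"], prefix="Embarked")
titanic = titanic.join(embarked_dummies)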

  • Fit a regression on the data with sklearn
  • Import the two sklearn helpers used below
  • Use predictors to name the feature columns "Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"
  • alg = LinearRegression() creates the linear regression model
  • kf = KFold(n_splits=3) sets up three-fold cross-validation
  • For each fold, pull out the training features and labels, fit the linear regression, and predict on the held-out fold
# Import the linear regression class
from sklearn.linear_model import LinearRegression
# Sklearn also has a helper that makes it easy to do cross-validation
# (formerly sklearn.cross_validation, now sklearn.model_selection)
from sklearn.model_selection import KFold

# The columns we'll use to predict the target
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]

# Initialize our algorithm class
alg = LinearRegression()
# Generate cross-validation folds for the titanic dataset.  Each split returns the row
# indices corresponding to train and test.
# (random_state only takes effect when shuffle=True, so it is omitted here.)
kf = KFold(n_splits=3)

predictions = []
for train, test in kf.split(titanic):
    # The predictors we're using to train the algorithm.  Note how we only take the rows in the train folds.
    train_predictors = titanic[predictors].iloc[train, :]
    # The target we're using to train the algorithm.
    train_target = titanic["Survived"].iloc[train]
    # Training the algorithm using the predictors and target.
    alg.fit(train_predictors, train_target)
    # We can now make predictions on the test fold
    test_predictions = alg.predict(titanic[predictors].iloc[test,:])
    predictions.append(test_predictions)
  • Output the classification result (accuracy)
  • Map the regression outputs to 1 and 0, then compute the accuracy
import numpy as np

# The predictions are in three separate numpy arrays.  Concatenate them into one.  
# We concatenate them on axis 0, as they only have one axis.
predictions = np.concatenate(predictions, axis=0)

# Map predictions to outcomes (only possible outcomes are 1 and 0)
predictions[predictions > .5] = 1
predictions[predictions <= .5] = 0
# Accuracy is the fraction of predictions that match the true labels.
accuracy = sum(predictions == titanic["Survived"]) / len(predictions)
print(accuracy)

0.2615039281705948 (printed by the uncorrected accuracy line, which summed the matched prediction values instead of counting matches and so only credited correctly predicted survivors; the corrected line above reports a substantially higher value, in line with the cross-validated scores below)

  • Logistic regression
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
# Initialize our algorithm
alg = LogisticRegression(random_state=1)
# Compute the accuracy score for all the cross-validation folds.  (much simpler than what we did before!)
scores = cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=3)
# Take the mean of the scores (because we have one for each fold)
print(scores.mean())

0.7878787878787877

  • Apply the same preprocessing to the test set (test.csv)
titanic_test = pandas.read_csv("test.csv")
titanic_test["Age"] = titanic_test["Age"].fillna(titanic["Age"].median())
titanic_test["Fare"] = titanic_test["Fare"].fillna(titanic_test["Fare"].median())
titanic_test.loc[titanic_test["Sex"] == "male", "Sex"] = 0 
titanic_test.loc[titanic_test["Sex"] == "female", "Sex"] = 1
titanic_test["Embarked"] = titanic_test["Embarked"].fillna("S")

titanic_test.loc[titanic_test["Embarked"] == "S", "Embarked"] = 0
titanic_test.loc[titanic_test["Embarked"] == "C", "Embarked"] = 1
titanic_test.loc[titanic_test["Embarked"] == "Q", "Embarked"] = 2
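With the test set preprocessed, a Kaggle submission file can already be produced from the logistic regression above; a minimal sketch, assuming predictors is the feature list defined earlier (the output file name is arbitrary):

from sklearn.linear_model import LogisticRegression

# Fit on the full training data, then predict on the preprocessed test set.
# astype(float) is the same dtype trick this post uses later to avoid sklearn errors.
alg = LogisticRegression(random_state=1)
alg.fit(titanic[predictors].astype(float), titanic["Survived"])
submission = pandas.DataFrame({
    "PassengerId": titanic_test["PassengerId"],
    "Survived": alg.predict(titanic_test[predictors].astype(float))
})
submission.to_csv("kaggle.csv", index=False)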
  • Random forest (RandomForestClassifier)
  • Choose the features to use and set the model parameters
  • Cross-validate and print the mean accuracy
from sklearn.model_selection import KFold, cross_val_score
from sklearn.ensemble import RandomForestClassifier

predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]

# Initialize our algorithm with the default parameters
# n_estimators is the number of trees we want to make
# min_samples_split is the minimum number of rows we need to make a split
# min_samples_leaf is the minimum number of samples we can have at the place where a tree branch ends (the bottom points of the tree)
alg = RandomForestClassifier(random_state=1, n_estimators=10, min_samples_split=2, min_samples_leaf=1)
# Compute the accuracy score for all the cross validation folds.  (much simpler than what we did before!)
kf = KFold(n_splits=3)
scores = cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=kf)

# Take the mean of the scores (because we have one for each fold)
print(scores.mean())

0.7856341189674523
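A fitted forest also reports how much each column contributed via feature_importances_; a minimal sketch (not in the original walkthrough):

# Fit once on the full training set and print per-feature importances
alg.fit(titanic[predictors].astype(float), titanic["Survived"])
for name, score in zip(predictors, alg.feature_importances_):
    print(name, round(score, 3))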

  • Change the parameters: give the random forest more trees, and larger minimum split/leaf sizes so each tree has fewer nodes
alg = RandomForestClassifier(random_state=1, n_estimators=100, min_samples_split=4, min_samples_leaf=2)
# Compute the accuracy score for all the cross validation folds.  (much simpler than what we did before!)
kf = KFold(n_splits=3)
scores = cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=kf)

# Take the mean of the scores (because we have one for each fold)
print(scores.mean())

0.8148148148148148
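Rather than adjusting these parameters by hand, sklearn's GridSearchCV can search a grid of candidate values automatically; a minimal sketch, with a grid chosen purely for illustration:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": [50, 100, 150],
    "min_samples_split": [2, 4, 8],
    "min_samples_leaf": [1, 2, 4],
}
search = GridSearchCV(RandomForestClassifier(random_state=1), param_grid, cv=3)
search.fit(titanic[predictors].astype(float), titanic["Survived"])
print(search.best_params_, search.best_score_)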

  • Add family size and the length of the name as features
# Generating a familysize column
titanic["FamilySize"] = titanic["SibSp"] + titanic["Parch"]

# The .apply method generates a new series
titanic["NameLength"] = titanic["Name"].apply(lambda x: len(x))
  • Extract each passenger's title (Mr, Miss, ...) from the Name column as a feature
import re

# A function to get the title from a name.
def get_title(name):
    # Use a regular expression to search for a title.  Titles always consist of capital and lowercase letters, and end with a period.
    title_search = re.search(r' ([A-Za-z]+)\.', name)
    # If the title exists, extract and return it.
    if title_search:
        return title_search.group(1)
    return ""

# Get all the titles and print how often each one occurs.
titles = titanic["Name"].apply(get_title)
print(titles.value_counts())

# Map each title to an integer.  Some titles are very rare, and are compressed into the same codes as other titles.
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Dr": 5, "Rev": 6, "Major": 7, "Col": 7, "Mlle": 8, "Mme": 8, "Don": 9, "Lady": 10, "Countess": 10, "Jonkheer": 10, "Sir": 9, "Capt": 7, "Ms": 2}
for k,v in title_mapping.items():
    titles[titles == k] = v

# Verify that we converted everything.
print(titles.value_counts())

# Add in the title column.
titanic["Title"] = titles

(Figure: title counts printed before and after the integer mapping)
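The same extraction can also be done without a helper function, using pandas' vectorized string methods; a sketch of an equivalent alternative:

# Vectorized equivalent of get_title: capture the word that precedes a period
titles_alt = titanic["Name"].str.extract(r' ([A-Za-z]+)\.', expand=False)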

  • Feature selection with sklearn.feature_selection
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
import matplotlib.pyplot as plt
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked", "FamilySize", "Title", "NameLength"]

# Perform feature selection
selector = SelectKBest(f_classif, k=5)
selector.fit(titanic[predictors], titanic["Survived"])

# Get the raw p-values for each feature, and transform from p-values into scores
scores = -np.log10(selector.pvalues_)

# Plot the scores.  See how "Pclass", "Sex", "Title", and "Fare" are the best?
plt.bar(range(len(predictors)), scores)
plt.xticks(range(len(predictors)), predictors, rotation='vertical')
plt.show()

# Pick only the four best features.
predictors = ["Pclass", "Sex", "Fare", "Title"]

alg = RandomForestClassifier(random_state=1, n_estimators=50, min_samples_split=8, min_samples_leaf=4)

(Figure: bar chart of feature scores; Pclass, Sex, Title, and Fare score highest)
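To compare this reduced model against the earlier runs, it can be scored the same way; a minimal sketch:

from sklearn.model_selection import cross_val_score

# Score the four-feature forest; astype(float) avoids mixed-dtype issues as before
scores = cross_val_score(alg, titanic[predictors].astype(float), titanic["Survived"], cv=3)
print(scores.mean())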

  • Ensemble: run logistic regression and a gradient boosting classifier separately, then average the two sets of predictions to decide survival
from sklearn.ensemble import GradientBoostingClassifier
import numpy as np

# The algorithms we want to ensemble.
# We're using the more linear predictors for the logistic regression, and everything with the gradient boosting classifier.
algorithms = [
    [GradientBoostingClassifier(random_state=1, n_estimators=25, max_depth=3), ["Pclass", "Sex", "Age", "Fare", "Embarked", "FamilySize", "Title",]],
    [LogisticRegression(random_state=1), ["Pclass", "Sex", "Fare", "FamilySize", "Title", "Age", "Embarked"]]
]

# Initialize the cross validation folds
kf = KFold(n_splits=3)

predictions = []
for train, test in kf.split(titanic):
    train_target = titanic["Survived"].iloc[train]
    full_test_predictions = []
    # Make predictions for each algorithm on each fold
    for alg, predictors in algorithms:
        # Fit the algorithm on the training data.
        alg.fit(titanic[predictors].iloc[train,:], train_target)
        # Select and predict on the test fold.  
        # The .astype(float) is necessary to convert the dataframe to all floats and avoid an sklearn error.
        test_predictions = alg.predict_proba(titanic[predictors].iloc[test,:].astype(float))[:,1]
        full_test_predictions.append(test_predictions)
    # Use a simple ensembling scheme -- just average the predictions to get the final classification.
    test_predictions = (full_test_predictions[0] + full_test_predictions[1]) / 2
    # Any value over .5 is assumed to be a 1 prediction, and below .5 is a 0 prediction.
    test_predictions[test_predictions <= .5] = 0
    test_predictions[test_predictions > .5] = 1
    predictions.append(test_predictions)

# Put all the predictions together into one array.
predictions = np.concatenate(predictions, axis=0)

# Compute accuracy by comparing to the training data.
accuracy = sum(predictions == titanic["Survived"]) / len(predictions)
print(accuracy)

0.27946127946127947 (again printed by the uncorrected accuracy line; the corrected expression above reports a substantially higher value, consistent with the cross-validated scores of the individual models)
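sklearn packages this averaging scheme as VotingClassifier; a minimal sketch with soft voting (note it requires one shared feature list, unlike the per-model lists used above):

from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

shared = ["Pclass", "Sex", "Age", "Fare", "Embarked", "FamilySize", "Title"]
vote = VotingClassifier(
    estimators=[
        ("gb", GradientBoostingClassifier(random_state=1, n_estimators=25, max_depth=3)),
        ("lr", LogisticRegression(random_state=1)),
    ],
    voting="soft",  # average predicted probabilities, as done manually above
)
scores = cross_val_score(vote, titanic[shared].astype(float), titanic["Survived"], cv=3)
print(scores.mean())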

titles = titanic_test["Name"].apply(get_title)
# We're adding the Dona title to the mapping, because it's in the test set, but not the training set
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Dr": 5, "Rev": 6, "Major": 7, "Col": 7, "Mlle": 8, "Mme": 8, "Don": 9, "Lady": 10, "Countess": 10, "Jonkheer": 10, "Sir": 9, "Capt": 7, "Ms": 2, "Dona": 10}
for k,v in title_mapping.items():
    titles[titles == k] = v
titanic_test["Title"] = titles
# Check the counts of each unique title.
print(titanic_test["Title"].value_counts())

# Now, we add the family size column.
titanic_test["FamilySize"] = titanic_test["SibSp"] + titanic_test["Parch"]

1 240
2 79
3 72
4 21
7 2
6 2
10 1
5 1
Name: Title, dtype: int64

predictors = ["Pclass", "Sex", "Age", "Fare", "Embarked", "FamilySize", "Title"]

algorithms = [
    [GradientBoostingClassifier(random_state=1, n_estimators=25, max_depth=3), predictors],
    [LogisticRegression(random_state=1), ["Pclass", "Sex", "Fare", "FamilySize", "Title", "Age", "Embarked"]]
]

full_predictions = []
for alg, predictors in algorithms:
    # Fit the algorithm using the full training data.
    alg.fit(titanic[predictors], titanic["Survived"])
    # Predict using the test dataset.  We have to convert all the columns to floats to avoid an error.
    predictions = alg.predict_proba(titanic_test[predictors].astype(float))[:,1]
    full_predictions.append(predictions)

# The gradient boosting classifier generates better predictions, so we weight it higher.
predictions = (full_predictions[0] * 3 + full_predictions[1]) / 4
predictions

Result:
array([0.11681943, 0.47836987, 0.12614048, 0.13097708, 0.52107272,
0.14351509, 0.64086375, 0.18002627, 0.67802231, 0.12110664,
0.12104744, 0.20901962, 0.91068965, 0.10891017, 0.89143306,
0.87714269, 0.16348534, 0.13906931, 0.54105377, 0.55662058,
0.22420124, 0.53718352, 0.90572929, 0.38889821, 0.88385196,
0.10357086, 0.90910138, 0.13745621, 0.31046113, 0.1266512 ,
0.11663426, 0.18274307, 0.55222145, 0.49649779, 0.42414476,
0.14190148, 0.5097587 , 0.52454348, 0.13270036, 0.28366139,
0.11144781, 0.46618779, 0.0999618 , 0.83421925, 0.89960308,
0.14982951, 0.31592763, 0.13788495, 0.89105091, 0.54190897,
0.35666131, 0.17717223, 0.83074005, 0.87996462, 0.17558166,
0.13738875, 0.10666907, 0.12343384, 0.12099323, 0.9128551 ,
0.13098613, 0.15341529, 0.12993431, 0.66574184, 0.66341763,
0.87273821, 0.67239645, 0.28826312, 0.35235967, 0.85566511,
0.66225137, 0.12701486, 0.55392305, 0.36739875, 0.91110735,
0.41201027, 0.13013567, 0.83673046, 0.15613989, 0.66225137,
0.68126047, 0.20605054, 0.20382454, 0.12104744, 0.18485201,
0.13129541, 0.65681561, 0.53031943, 0.65490661, 0.7987881 ,
0.53765995, 0.12103592, 0.89138279, 0.13013567, 0.28405691,
0.12344901, 0.86794385, 0.14665909, 0.58601586, 0.12260391,
0.90434217, 0.14730313, 0.13788495, 0.12261977, 0.62258308,
0.13155404, 0.14606445, 0.13788495, 0.13019897, 0.1747259 ,
0.14285637, 0.65491346, 0.89529098, 0.67147699, 0.88346925,
0.13991227, 0.11804329, 0.69614332, 0.36668206, 0.86243053,
0.87650636, 0.12608344, 0.90276979, 0.12098591, 0.13788495,
0.56973992, 0.12607685, 0.6373454 , 0.13339624, 0.13340097,
0.12723238, 0.516065 , 0.23922865, 0.10791048, 0.09896431,
0.12430648, 0.13345732, 0.16213663, 0.52031607, 0.12232514,
0.20713034, 0.90530415, 0.19746624, 0.16153256, 0.42927458,
0.10486884, 0.33642421, 0.13517918, 0.46618779, 0.34475031,
0.91431763, 0.13214259, 0.106908 , 0.48984982, 0.11273495,
0.12427392, 0.91070653, 0.57993806, 0.42927458, 0.51275443,
0.65490269, 0.5788139 , 0.82115224, 0.12096213, 0.2897616 ,
0.58588482, 0.30129764, 0.14606414, 0.90250897, 0.52259532,
0.12101447, 0.13298743, 0.12418074, 0.13206749, 0.13196412,
0.8729528 , 0.8763491 , 0.2966958 , 0.83391074, 0.85559817,
0.15613989, 0.33351255, 0.90219659, 0.13788495, 0.91719144,
0.13602615, 0.85484209, 0.12240938, 0.14217439, 0.13560305,
0.13487572, 0.25547156, 0.4994872 , 0.12728693, 0.71978347,
0.10795079, 0.855151 , 0.58992535, 0.16645233, 0.53981907,
0.64868974, 0.66326561, 0.60979494, 0.87334866, 0.16322206,
0.25696069, 0.63084589, 0.16482157, 0.88985628, 0.12345941,
0.12849223, 0.12096689, 0.24674446, 0.80201864, 0.41248946,
0.29767987, 0.65493693, 0.21859743, 0.90027904, 0.13013567,
0.81371562, 0.13610635, 0.84276502, 0.12700322, 0.87790232,
0.59808804, 0.12517601, 0.65490661, 0.11487155, 0.14412709,
0.25074609, 0.89267223, 0.11622218, 0.13790202, 0.34223771,
0.12796256, 0.19365149, 0.14018024, 0.80950131, 0.89791511,
0.87599955, 0.82599874, 0.33035454, 0.12104665, 0.33256695,
0.2871044 , 0.87904012, 0.16058594, 0.86243053, 0.59134008,
0.74587991, 0.1543381 , 0.39646483, 0.13353789, 0.12701466,
0.12101447, 0.13788495, 0.13013567, 0.83007385, 0.12700079,
0.10894619, 0.12701002, 0.85005234, 0.64931714, 0.16618664,
0.12104744, 0.21821031, 0.12101447, 0.5097587 , 0.14015932,
0.3449509 , 0.13788495, 0.91564324, 0.63329198, 0.13206702,
0.85715289, 0.15861211, 0.12499702, 0.14266702, 0.16811417,
0.52047246, 0.66229245, 0.65490661, 0.64138125, 0.71200672,
0.10600723, 0.12098591, 0.36277807, 0.13206749, 0.13013567,
0.33304406, 0.59320635, 0.13206749, 0.5058149 , 0.1208131 ,
0.12263198, 0.77904145, 0.1266512 , 0.33024405, 0.12028548,
0.11813558, 0.17546984, 0.12169028, 0.13346667, 0.65490661,
0.82135602, 0.3349679 , 0.67693366, 0.20916067, 0.42576549,
0.1391233 , 0.13798687, 0.12101686, 0.61905813, 0.90112575,
0.67394569, 0.23918442, 0.17328368, 0.12182407, 0.18522385,
0.12261977, 0.13490689, 0.16213663, 0.45541235, 0.9060203 ,
0.12509399, 0.8656554 , 0.34597795, 0.14469227, 0.17033775,
0.82149328, 0.32822924, 0.13206702, 0.64323718, 0.12182816,
0.25111353, 0.15333007, 0.09369676, 0.20950322, 0.35409118,
0.1750671 , 0.11811901, 0.14695545, 0.91556576, 0.33656009,
0.61838844, 0.16213663, 0.6246373 , 0.16542449, 0.85159855,
0.89604589, 0.16322206, 0.24472224, 0.16066254, 0.70032835,
0.15642285, 0.85674382, 0.12104585, 0.13788495, 0.57256735,
0.10418161, 0.87673302, 0.86920135, 0.13097708, 0.9191463 ,
0.15714899, 0.13129579, 0.53324137, 0.89563745, 0.17355165,
0.15319342, 0.90892039, 0.16307814, 0.13130214, 0.87656306,
0.90969631, 0.48855368, 0.17001886, 0.19866738, 0.13509335,
0.13788495, 0.14009086, 0.54135379, 0.59500647, 0.15905205,
0.83278804, 0.124298 , 0.12019101, 0.14605329, 0.18787717,
0.38579213, 0.87751493, 0.56456941, 0.128075 , 0.10317864,
0.91170132, 0.14230296, 0.8877404 , 0.1260745 , 0.12970092,
0.90754457, 0.12634745, 0.90892342, 0.35988753, 0.30441689,
0.18965844, 0.15014721, 0.26821068, 0.65489976, 0.64587182,
0.65490661, 0.90712204, 0.56935712, 0.13013567, 0.86010034,
0.10126334, 0.13013567, 0.41848035])
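To turn these blended probabilities into a submission, threshold them at 0.5 and write the two-column CSV Kaggle expects; a minimal sketch (file name arbitrary):

# Threshold the averaged probabilities and build the submission file
final = (predictions > .5).astype(int)
submission = pandas.DataFrame({
    "PassengerId": titanic_test["PassengerId"],
    "Survived": final
})
submission.to_csv("kaggle.csv", index=False)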
