Machine Learning Projects in Practice 11: Ensemble Learning for Titanic Survival Prediction

The dataset is the public Titanic dataset from the Kaggle competition.

Each of the machine learning methods covered earlier is applied to this prediction task in turn, and their mean 3-fold cross-validation accuracies are compared.

These include:

- Logistic regression: 0.7901234567901234
- Neural network (MLP): 0.7878787878787877
- KNN: 0.8125701459034792
- Decision tree: 0.8080808080808081
- Random forest: 0.7991021324354657 (10 trees), 0.8181818181818182 (100 trees)
- Bagging: 0.8282828282828283 (ensembled over the random forest)
- AdaBoost: 0.8181818181818182 (ensembled over the bagging model)
- Stacking/Voting: 0.8125701459034792 (as noted below, this score comes from the voting classifier)

Full code:

import pandas

titanic = pandas.read_csv("titanic_train.csv")

# Fill missing Age values with the median age of the whole column
titanic["Age"] = titanic["Age"].fillna(titanic["Age"].median())
print(titanic.describe())
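
Only Age is filled at this point; a quick check of which columns still contain missing values helps motivate the Embarked fill further down (this check is an addition, not part of the original code):

# Count missing values per column; after the Age fill,
# Cabin and Embarked still have gaps in this dataset
print(titanic.isnull().sum())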

print(titanic["Sex"].unique())

# Encode male as 0 and female as 1
titanic.loc[titanic["Sex"] == "male", "Sex"] = 0
titanic.loc[titanic["Sex"] == "female", "Sex"] = 1
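
The same encoding can be written as a single map call; a stylistic alternative to (not a follow-up of) the two .loc lines above:

# One-step alternative to the .loc assignments above; running it
# after them would turn the already-encoded 0/1 values into NaN
titanic["Sex"] = titanic["Sex"].map({"male": 0, "female": 1})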

print(titanic["Embarked"].unique())
# Fill missing Embarked values with 'S'
titanic["Embarked"] = titanic["Embarked"].fillna('S')
# Map the port categories to integers
titanic.loc[titanic["Embarked"] == "S", "Embarked"] = 0
titanic.loc[titanic["Embarked"] == "C", "Embarked"] = 1
titanic.loc[titanic["Embarked"] == "Q", "Embarked"] = 2
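
'S' is a reasonable fill value because Southampton is the most common embarkation port in this dataset; rather than hard-coding it, the choice can be verified on the raw column (run before the fill and encode steps above):

# Most frequent embarkation port on the raw data; prints 'S'
print(titanic["Embarked"].mode()[0])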

from sklearn.preprocessing import StandardScaler

# Select the feature columns
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]
x_data = titanic[predictors]
y_data = titanic["Survived"]

# Standardize the features to zero mean and unit variance
scaler = StandardScaler()
x_data = scaler.fit_transform(x_data)
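
One caveat with this setup: the scaler is fit on the full dataset before cross-validation, so statistics from the validation folds leak into training. A cleaner pattern, sketched here as an alternative rather than a change to the original results, is to put the scaler inside a pipeline so it is refit on each training fold:

from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn import model_selection

# The scaler is refit on each training fold, so no validation-fold
# statistics reach the model (illustrated with logistic regression)
pipe = make_pipeline(StandardScaler(), LogisticRegression())
scores = model_selection.cross_val_score(pipe, titanic[predictors], y_data, cv=3)
print(scores.mean())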


# Logistic regression
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
LR = LogisticRegression()
# Compute the cross-validated accuracy
scores = model_selection.cross_val_score(LR, x_data, y_data, cv=3)
# Average over the folds
print(scores.mean())
# 0.7901234567901234



# Neural network model (multi-layer perceptron)
from sklearn.neural_network import MLPClassifier
# Two hidden layers with 20 and 10 units
mlp = MLPClassifier(hidden_layer_sizes=(20, 10), max_iter=1000)
# Compute the cross-validated accuracy
scores = model_selection.cross_val_score(mlp, x_data, y_data, cv=3)
# Average over the folds
print(scores.mean())
# 0.7878787878787877



# KNN model
from sklearn import neighbors
knn = neighbors.KNeighborsClassifier(n_neighbors=21)
# Compute the cross-validated accuracy
scores = model_selection.cross_val_score(knn, x_data, y_data, cv=3)
# Average over the folds
print(scores.mean())
# 0.8125701459034792
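
k = 21 looks hand-tuned; one way to choose it more systematically is a small grid search (the candidate range below is an illustrative assumption, not from the original):

from sklearn.model_selection import GridSearchCV

# Try odd k from 1 to 39 and report the best 3-fold score
param_grid = {"n_neighbors": range(1, 40, 2)}
search = GridSearchCV(neighbors.KNeighborsClassifier(), param_grid, cv=3)
search.fit(x_data, y_data)
print(search.best_params_, search.best_score_)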



# Decision tree model
from sklearn import tree
dtree = tree.DecisionTreeClassifier(max_depth=5, min_samples_split=4)
# Compute the cross-validated accuracy
scores = model_selection.cross_val_score(dtree, x_data, y_data, cv=3)
# Average over the folds
print(scores.mean())
# 0.8080808080808081
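
A depth-5 tree is small enough to inspect; scikit-learn can print its learned rules as plain text (an illustrative extra, not part of the original):

# Fit on the full data and dump the decision rules
dtree.fit(x_data, y_data)
print(tree.export_text(dtree, feature_names=predictors))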



# Random forest model
from sklearn.ensemble import RandomForestClassifier
# A small forest: 10 trees
RF1 = RandomForestClassifier(random_state=1, n_estimators=10, min_samples_split=2)
# Compute the cross-validated accuracy
scores = model_selection.cross_val_score(RF1, x_data, y_data, cv=3)
# Average over the folds
print(scores.mean())
# 0.7991021324354657

# A larger forest: 100 trees
RF2 = RandomForestClassifier(n_estimators=100, min_samples_split=4)
# Compute the cross-validated accuracy
scores = model_selection.cross_val_score(RF2, x_data, y_data, cv=3)
# Average over the folds
print(scores.mean())
# 0.8181818181818182
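
Beyond the accuracy, a fitted forest reports how much each feature contributes to its splits; a minimal sketch (not in the original):

# Fit RF2 on the full data and print the per-feature importances
RF2.fit(x_data, y_data)
for name, importance in zip(predictors, RF2.feature_importances_):
    print(f"{name}: {importance:.3f}")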



# Bagging, ensembled over the random forest
from sklearn.ensemble import BaggingClassifier
# 20 bootstrap copies of the 100-tree forest above
bagging_clf = BaggingClassifier(RF2, n_estimators=20)
# Compute the cross-validated accuracy
scores = model_selection.cross_val_score(bagging_clf, x_data, y_data, cv=3)
# Average over the folds
print(scores.mean())
# 0.8282828282828283


# AdaBoost model, ensembled over the bagging classifier
from sklearn.ensemble import AdaBoostClassifier
adaboost = AdaBoostClassifier(bagging_clf, n_estimators=10)
# Compute the cross-validated accuracy
scores = model_selection.cross_val_score(adaboost, x_data, y_data, cv=3)
# Average over the folds
print(scores.mean())
# 0.8181818181818182


# Stacking and voting
from sklearn.ensemble import VotingClassifier
from mlxtend.classifier import StackingClassifier

# Stacking: a logistic regression learns how to combine the base models
sclf = StackingClassifier(classifiers=[bagging_clf, mlp, LR],
                          meta_classifier=LogisticRegression())

# Voting: majority vote over five of the models above
sclf2 = VotingClassifier([('adaboost', adaboost), ('mlp', mlp), ('LR', LR),
                          ('knn', knn), ('dtree', dtree)])

# Compute the cross-validated accuracy (note: this evaluates the voting
# classifier sclf2, not the stacking classifier sclf)
scores = model_selection.cross_val_score(sclf2, x_data, y_data, cv=3)
# Average over the folds
print(scores.mean())
# 0.8125701459034792
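
The stacking model sclf is defined above but never scored; the reported 0.8125701459034792 belongs to the voting classifier sclf2. A minimal sketch for evaluating the stacking model the same way (no score is quoted because the original post did not run this):

# Score the mlxtend stacking classifier with the same 3-fold setup
scores = model_selection.cross_val_score(sclf, x_data, y_data, cv=3)
print(scores.mean())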
