Python Data Analysis: Kaggle Titanic Survival Prediction and Tianchi Industrial Steam Volume Prediction

This post walks through two hands-on Python data-analysis cases: the Kaggle Titanic survival prediction and the Tianchi industrial steam volume prediction. It covers data preprocessing, feature selection, and model building with logistic regression, random forests, and other algorithms, evaluates the models with cross-validation, and discusses directions for further optimization.

I've been doing data analysis for a while; below are brief notes on the approach I took for two competition projects.

1. Using logistic regression, random forests, and similar models for a simple analysis of the Kaggle competition project "given information about the passengers of the Titanic, predict whether a passenger survived"; the tool used is Jupyter Notebook.

The project provides two data files: titanic_train.csv (the training set, used to build the model) and test (the test set, used to evaluate model accuracy).

Reading and inspecting the data

import numpy as np
import pandas as pd

data = pd.read_csv("data/titanic_train.csv")

# Inspect the structure and summary statistics
print(data.info())
print(data.describe())

# Look at the first two rows
data.head(2)

Inspection shows that the training set has 891 rows, with missing values in the Age, Cabin, and Embarked fields:

PassengerId -- a unique ID per passenger
Survived -- whether the passenger survived; 1 means survived, 0 means not
Pclass -- cabin class; 1 = first class, 2 = second class, 3 = third class
Name -- name
Sex -- sex; female or male
Age -- age (177 missing, 19.8% of rows)
SibSp -- number of siblings and spouses aboard
Parch -- number of parents and children aboard
Ticket -- ticket number
Fare -- ticket fare
Cabin -- cabin number (687 missing, 77% of rows)
Embarked -- port of embarkation (2 missing, 0.2% of rows)
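The missing-value counts quoted above can be reproduced straight from the DataFrame; a minimal sketch:

missing = data.isnull().sum()
ratio = (missing / len(data)).round(3)
# Show only the columns that actually have missing values
print(pd.concat([missing, ratio], axis=1, keys=["missing", "ratio"]).query("missing > 0"))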

Feature analysis

Before building a model we must identify the features that relate to survival; feature quality directly determines how reliable the model will be.

PassengerId is merely a passenger identifier and Ticket is just a ticket number; in principle neither relates to survival, so both features are dropped for now.

Cabin has too many missing values and is set aside for the moment. Next we analyze how the remaining features relate to survival.
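Before scoring features formally, a quick group-by gives a first feel for which ones matter; a minimal sketch (run before the encodings below, while Sex still holds the raw strings):

# Survival rate by sex and by passenger class
print(data.groupby("Sex")["Survived"].mean())
print(data.groupby("Pclass")["Survived"].mean())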

First, fill in the missing feature values:

# Fill missing ages with the median age
data["Age"] = data["Age"].fillna(data["Age"].median())
# Fill missing embarkation ports with "S" (the port with the most passengers)
data["Embarked"] = data["Embarked"].fillna("S")

Convert the feature values to make them easier to model:

# Encode sex as 0/1
data.loc[data["Sex"] == "male", "Sex"] = 0
data.loc[data["Sex"] == "female", "Sex"] = 1

# Encode the embarkation port as 0/1/2
data.loc[data["Embarked"] == "S", "Embarked"] = 0
data.loc[data["Embarked"] == "C", "Embarked"] = 1
data.loc[data["Embarked"] == "Q", "Embarked"] = 2
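Equivalently, the same encodings can be written with map(); this is an alternative to the .loc block above, not an addition to it (mapping already-encoded values would yield NaN):

data["Sex"] = data["Sex"].map({"male": 0, "female": 1})
data["Embarked"] = data["Embarked"].map({"S": 0, "C": 1, "Q": 2})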

First, use SelectKBest for a quick look at feature importance:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.feature_selection import SelectKBest, f_classif

predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]

# Score each candidate feature against survival with an ANOVA F-test
selector = SelectKBest(f_classif, k=5)
selector.fit(data[predictors], data["Survived"])

# Plot -log10(p-value); taller bars indicate stronger features
scores = -np.log10(selector.pvalues_)
plt.bar(range(len(predictors)), scores)
plt.xticks(range(len(predictors)), predictors, rotation='vertical')
plt.show()

From the plot, Pclass, Sex, Fare, and Embarked are the relatively important features, so we start modeling with those.
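For reference, SelectKBest can also report exactly which k features it kept, so the predictor list need not be read off the plot by eye; a minimal sketch:

# Names of the k=5 features the selector retained
selected = [p for p, keep in zip(predictors, selector.get_support()) if keep]
print(selected)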

Linear regression

from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

predictors = ["Pclass", "Sex", "Fare", "Embarked"]

# Linear regression
alg = LinearRegression()
# random_state only applies when shuffle=True; folds are left in row order
# so the concatenated predictions align with data["Survived"]
kf = KFold(n_splits=3)
predictions = []
for train, test in kf.split(data):
    train_predictors = data[predictors].iloc[train, :]
    train_target = data["Survived"].iloc[train]
    alg.fit(train_predictors, train_target)
    test_predictions = alg.predict(data[predictors].iloc[test, :])
    predictions.append(test_predictions)

predictions = np.concatenate(predictions, axis=0)
# Threshold the continuous predictions at 0.5 to get class labels
predictions[predictions > 0.5] = 1
predictions[predictions <= 0.5] = 0
# Count correct predictions and divide by the total
accuracy = sum(predictions == data["Survived"]) / len(predictions)
print(accuracy)

0.2615039281705948 (the originally reported score; it came from summing the prediction values instead of counting matches, the fix applied in the accuracy line above, so it says little about the model itself)

Logistic regression

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X = data[predictors]
y = data["Survived"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)

alg = LogisticRegression(random_state=1)
alg.fit(X_train, y_train)

# Mean accuracy over 3-fold cross-validation on the training split
scores = cross_val_score(alg, X_train, y_train, cv=3)
print(scores.mean())
# Accuracy on the held-out split
alg.score(X_test, y_test)

0.7834680134680134 (mean cross-validation score)
0.7762711864406779 (held-out test score)
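To see which way each feature pushes the prediction, the fitted model's coefficients can be printed (a quick sketch; the signs are the interesting part):

# Positive coefficients raise the predicted survival probability
for name, coef in zip(predictors, alg.coef_[0]):
    print(name, round(coef, 3))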

The results are middling; let's try a random forest.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, KFold

predictors = ["Pclass", "Sex", "Fare", "Embarked"]
alg = RandomForestClassifier(random_state=1, n_estimators=10, min_samples_split=2, min_samples_leaf=1)
alg.fit(X_train, y_train)

# shuffle=True is required when passing random_state to KFold
kf = KFold(n_splits=3, shuffle=True, random_state=1)
scores = cross_val_score(alg, X_train, y_train, cv=kf)
print(scores.mean())
alg.score(X_test, y_test)

0.7885894117049895 (mean cross-validation score)
0.7864406779661017 (held-out test score)

# The same forest with ten times as many trees
alg = RandomForestClassifier(random_state=1, n_estimators=100, min_samples_split=2, min_samples_leaf=1)
alg.fit(X_train, y_train)

kf = KFold(n_splits=3, shuffle=True, random_state=1)
scores = cross_val_score(alg, X_train, y_train, cv=kf)
print(scores.mean())
alg.score(X_test, y_test)

0.7952726595942675 (mean cross-validation score)
0.8067796610169492 (held-out test score)

The random forest parameters above were chosen arbitrarily.
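Since the parameters were arbitrary, a small grid search is the natural next step; a hedged sketch (the grid values below are illustrative, not tuned):

from sklearn.model_selection import GridSearchCV

# Search a small grid around the values tried above
param_grid = {
    "n_estimators": [50, 100, 200],
    "min_samples_split": [2, 4, 8],
    "min_samples_leaf": [1, 2, 4],
}
grid = GridSearchCV(RandomForestClassifier(random_state=1), param_grid, cv=3)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)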

This is only simple feature extraction and modeling; further feature engineering and more model experiments can follow, as sketched below.
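One common direction for that further feature extraction is combining SibSp and Parch into a family-size feature, or pulling the honorific out of Name; a sketch (FamilySize and Title are illustrative names, not features used above):

# Passenger plus all relatives aboard
data["FamilySize"] = data["SibSp"] + data["Parch"] + 1
# The honorific between the comma and the period, e.g. "Mr", "Mrs", "Miss"
data["Title"] = data["Name"].str.extract(r" ([A-Za-z]+)\.", expand=False)
print(data["Title"].value_counts().head())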

2. Using Lasso, random forests, SVM, and similar models for a simple analysis of the Tianchi competition project "industrial steam volume prediction"; the tool used is Jupyter Notebook.

import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

warnings.filterwarnings("ignore")

train = pd.read_table("data/zhengqi_train.txt")
test = pd.read_table("data/zhengqi_test.txt")

train_x = train.drop(['target'], axis=1)
train_y = train['target']
# Stack the train and test features so both can be inspected together
data = pd.concat([train_x, test])
test.head()

from sklearn.feature_selection import SelectKBest, f_classif

figsize = 15, 8
figure = plt.subplots(figsize=figsize)
# Caveat: the target here is continuous, so f_regression is arguably the
# more natural scorer than f_classif
selector = SelectKBest(f_classif, k=5)
selector.fit(train_x, train_y)
scores = -np.log10(selector.pvalues_)
plt.bar(range(38), scores)
plt.xticks(range(38), train_x.columns)
plt.show()

# Compare the train and test distributions of every feature
fig = plt.subplots(figsize=(30, 20))
j = 1
for cols in data.columns:
    plt.subplot(5, 8, j)
    sns.distplot(train[cols])
    sns.distplot(test[cols])
    j += 1
plt.show()

From the plots, the features 'V5', 'V9', 'V11', 'V17', 'V22', and 'V28' are distributed differently in the training and test sets, so features like these are removed; 'V14' also has little effect on the target, so it is dropped as well.
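That visual judgement can be cross-checked with a two-sample Kolmogorov-Smirnov test per column; a sketch (the 0.05 threshold is an assumption, not from the original analysis):

from scipy.stats import ks_2samp

# Flag features whose train/test distributions differ significantly
for col in train_x.columns:
    stat, p = ks_2samp(train_x[col], test[col])
    if p < 0.05:
        print(col, round(stat, 3))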

# Drop the weak features; drop them from train_x and test as well, so the
# split below and the final predictions actually exclude them
cols_to_drop = ['V5', 'V9', 'V11', 'V14', 'V17', 'V22', 'V28']
data.drop(cols_to_drop, axis=1, inplace=True)
train_x = train_x.drop(cols_to_drop, axis=1)
test = test.drop(cols_to_drop, axis=1)

# Train/validation split
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(train_x, train_y, test_size=0.3, random_state=0)

Modeling

from sklearn.linear_model import Lasso, LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
import lightgbm as lgb
from sklearn.model_selection import KFold, cross_val_score
from sklearn.metrics import mean_squared_error

def kfold_scores(alg, x_train, y_train):
    # random_state on KFold requires shuffle=True
    kf = KFold(n_splits=5, shuffle=True, random_state=1)
    fold_mse = []
    for kf_train, kf_test in kf.split(x_train):
        alg.fit(x_train.iloc[kf_train], y_train.iloc[kf_train])
        y_pred_train = alg.predict(x_train.iloc[kf_test])
        fold_mse.append(mean_squared_error(y_train.iloc[kf_test], y_pred_train))
    print("Mean cross-validation MSE: %s" % np.mean(fold_mse))
    # x_test and y_test come from the train_test_split above
    y_pred_test = alg.predict(x_test)
    print("Test-set MSE: %s" % mean_squared_error(y_test, y_pred_test))

alg = Lasso(alpha=0.002)
kfold_scores(alg, x_train, y_train)

Mean cross-validation MSE: 0.11694436512799282
Test-set MSE: 0.11029985071315368
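Because the L1 penalty drives weak coefficients to exactly zero, the fitted Lasso doubles as a feature-selection readout (a sketch; after kfold_scores runs, alg holds the fit from the last fold):

# Features the Lasso zeroed out entirely
zeroed = x_train.columns[alg.coef_ == 0]
print(list(zeroed))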

alg = RandomForestRegressor()
kfold_scores(alg, x_train, y_train)

Mean cross-validation MSE: 0.1404184691171128
Test-set MSE: 0.1467028448096886

Other algorithms can be plugged in the same way (an SVR example is sketched below); room for later improvement includes feature selection, target-value processing, outlier removal, and so on.
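For instance, the SVR imported above slots straight into the same helper (default parameters, purely illustrative):

alg = SVR()
kfold_scores(alg, x_train, y_train)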

alg = lgb.LGBMRegressor()
kfold_scores(alg, x_train, y_train)

# Predict on the test set and write one value per line
y = alg.predict(test)
with open('data/data.txt', 'w') as f:
    for i in y.tolist():
        f.write(str(i) + '\r\n')

This writes the test-set predictions to a txt file.
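Equivalently, pandas can write the same file in one line (an alternative sketch, not what the notebook ran):

pd.Series(y).to_csv("data/data.txt", index=False, header=False)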
