Use cases:
Feature selection for machine learning: removing useless features can improve model performance and reduce training time. In data analysis: finding the biggest drivers behind, say, a swing in revenue.
Worked example: in the Titanic sinking, which factors most influenced survival?
1. Import the required packages
import pandas as pd
import numpy as np
# SelectKBest: select the K features that most influence the target
from sklearn.feature_selection import SelectKBest
# chi-square test, used as the score function for SelectKBest
from sklearn.feature_selection import chi2
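Before applying this to the real dataset, here is a minimal sketch of the SelectKBest + chi2 API on made-up toy data (the values below are invented purely for illustration): one column separates the classes, the other is constant and therefore uninformative.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

# Toy data: column 0 differs sharply between the two classes,
# column 1 is constant, so its chi-square score is 0.
X_toy = np.array([[1, 5],
                  [2, 5],
                  [1, 5],
                  [9, 5],
                  [8, 5],
                  [9, 5]])
y_toy = np.array([0, 0, 0, 1, 1, 1])

selector = SelectKBest(score_func=chi2, k=1)
X_new = selector.fit_transform(X_toy, y_toy)
print(selector.scores_)        # chi-square score per feature
print(selector.get_support())  # boolean mask of the selected feature
```

With `k=1` the selector keeps only column 0, so `X_new` has a single column. Note that chi2 requires non-negative feature values.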
2. Load the Titanic dataset
df = pd.read_csv("./datas/titanic/titanic_train.csv")
df.head()
df = df[["PassengerId", "Survived", "Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]].copy()
df.head()
3. Data cleaning and conversion
- 3.1 Check for columns with missing values
df.info()
- 3.2 Fill missing Age values with the median
df["Age"] = df["Age"].fillna(df["Age"].median())
df.head()
- 3.3 Convert the Sex column to numbers
# Sex
df.Sex.unique()
df.loc[df["Sex"] == "male", "Sex"] = 0
df.loc[df["Sex"] == "female", "Sex"] = 1
- 3.4 Fill missing Embarked values and convert the strings to numbers
# Embarked
df.Embarked.unique()
# fill missing values
df["Embarked"] = df["Embarked"].fillna(0)
# convert strings to numbers
df.loc[df["Embarked"] == "S", "Embarked"] = 1
df.loc[df["Embarked"] == "C", "Embarked"] = 2
df.loc[df["Embarked"] == "Q", "Embarked"] = 3
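The three `df.loc[...]` assignments above can also be written more compactly with `Series.map` and an explicit dictionary. A small self-contained sketch of that equivalent encoding (the tiny DataFrame below is stand-in data; the column names match the Titanic CSV used above):

```python
import pandas as pd

df = pd.DataFrame({
    "Sex": ["male", "female", "male"],
    "Embarked": ["S", "C", None],
})

# Encode Sex exactly as in steps 3.3/3.4: male -> 0, female -> 1
df["Sex"] = df["Sex"].map({"male": 0, "female": 1})

# Fill missing Embarked with 0 first, then map the port codes to numbers
df["Embarked"] = df["Embarked"].fillna(0).map({"S": 1, "C": 2, "Q": 3, 0: 0})
print(df)
```

One dictionary per column keeps the mapping in a single place, which is easier to audit and reuse than a row of `.loc` assignments.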
4. Split the feature columns from the target column
y = df.pop("Survived")
X = df
X.head()
y.head()
5. Use the chi-square test to select the top-K features
# keep all features here, so we can see the full importance ranking
bestfeatures = SelectKBest(score_func=chi2, k=len(X.columns))
fit = bestfeatures.fit(X, y)
6. Print the features ranked by importance
df_scores = pd.DataFrame(fit.scores_)
df_scores
df_columns = pd.DataFrame(X.columns)
df_columns
# concatenate the two DataFrames side by side
df_feature_scores = pd.concat([df_columns, df_scores], axis=1)
# name the columns
df_feature_scores.columns = ["feature_name", "Score"]
# inspect the result
df_feature_scores
df_feature_scores.sort_values(by="Score", ascending=False)
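Once the ranking has been inspected, the fitted selector can also perform the selection itself via `get_support` and `transform`. A sketch with synthetic stand-in data (in the notebook above you would call these methods on the fitted `bestfeatures` object instead):

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(0)
X = pd.DataFrame({
    # perfectly separates the two classes -> very high chi-square score
    "informative": np.repeat([0, 10], 50),
    # random non-negative noise -> low chi-square score
    "noise": rng.integers(0, 5, 100),
})
y = np.repeat([0, 1], 50)

selector = SelectKBest(score_func=chi2, k=1).fit(X, y)
selected = X.columns[selector.get_support()]  # names of the top-k features
print(list(selected))
X_topk = selector.transform(X)  # reduced feature matrix with k columns
```

`get_support()` returns a boolean mask over the columns, so indexing `X.columns` with it recovers the selected feature names; `transform` drops the rest.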