f_classif and f_regression in Sklearn

This post introduces the f_classif and f_regression methods in Sklearn, which score how strongly each feature is related to the dependent variable. f_classif is based on analysis of variance and applies to classification problems: it assesses a feature's predictive power by comparing the feature's mean across the different classes. f_regression computes the sample correlation coefficient between each feature and a continuous dependent variable and converts it to an F statistic, so it applies to regression problems. The larger the F-value, the stronger the relationship between the feature and the dependent variable, which makes these scores useful for feature selection.

I have been reading through Sklearn's documentation these last couple of days, and in the feature_selection section I ran into two F-values that are used to judge how related a model's features are to the dependent variable. I was completely lost at first, because understanding them requires some background in analysis of variance from mathematical statistics, so here is a brief walkthrough of how these two methods work and how to use them.
Let's first look at how sklearn's API describes the two methods:

Compute the ANOVA F-value for the provided sample. ——sklearn.feature_selection.f_classif

f_classif computes the ANOVA F-value, but what does that have to do with feature selection???
The explanation for f_regression below is even more confusing...
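Before digging into the theory, a quick sketch helps show what f_classif actually returns: one F-value and one p-value per feature, where a larger F indicates stronger separation of that feature across the classes. Using the iris dataset purely as an illustration:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import f_classif

X, y = load_iris(return_X_y=True)

# one (F, p) pair per feature; larger F -> the feature's mean differs more across classes
F, p = f_classif(X, y)

for name, f_val, p_val in zip(load_iris().feature_names, F, p):
    print(f"{name}: F={f_val:.1f}, p={p_val:.3g}")
```

Feeding these scores to a selector such as SelectKBest then simply keeps the k features with the largest F-values.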

Univariate linear regression tests.
Linear model for testing the individual effect of each of many regressors. This is a scoring function to be used in a feature selection procedure, not a free standing feature selection procedure.
This is done in 2 steps:
The correlation between each regressor and the target is computed, that is,
((X[:, i] - mean(X[:, i])) * (y - mean_y)) / (std(X[:, i]) * std(y)).
It is converted to an F score then to a p-value.
——sklearn.feature_selection.f_regression
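The two steps in that docstring can be reproduced by hand. The sketch below (on synthetic data) computes each feature's Pearson correlation with y, converts it to an F score via F = r²/(1 − r²) · (n − 2), then to a p-value, and checks the result against f_regression. The n − 2 degrees of freedom match sklearn's default center=True:

```python
import numpy as np
from scipy import stats
from sklearn.feature_selection import f_regression

rng = np.random.RandomState(0)
X = rng.randn(50, 3)
y = X[:, 0] * 2.0 + rng.randn(50) * 0.5  # only feature 0 is truly informative

F_sk, p_sk = f_regression(X, y)

# step 1: correlation between each regressor and the target
n = len(y)
r = np.array([np.corrcoef(X[:, i], y)[0, 1] for i in range(X.shape[1])])

# step 2: convert to an F score, then to a p-value
F_manual = r ** 2 / (1 - r ** 2) * (n - 2)
p_manual = stats.f.sf(F_manual, 1, n - 2)

print(np.allclose(F_sk, F_manual))  # True
print(np.allclose(p_sk, p_manual))  # True
```

So f_regression is nothing more mysterious than a per-feature correlation test dressed up as an F statistic.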


Analysis of Variance (ANOVA)

In classical statistics the F-value is used in analysis of variance (ANOVA); interested readers can consult any statistics textbook for the detailed derivation and procedure, so I will just give a brief introduction here. Traditional analysis of variance (in other words, comparison of multiple means) goes like this. Take a classic example:
We have developed a blood-pressure-lowering drug and need to test how effective it is. We run the following experiment: five groups of patients are given five different dose levels, 0, 1, 2, 3, 4 (a dose of 0 means the patient received a placebo), and after a fixed period we measure each patient's change in blood pressure. Once we have the data, we ask: does the new drug have a significant effect? That is, are the patients' blood-pressure changes significantly different from 0?
The data look like this:

Dose | Blood-pressure differences
-----|---------------------------
 0   | x01, x02, ..., x0n0
 1   | x11, x12, ..., x1n1
 2   | x21, x22, ..., x2n2
 3   | x31, x32, ..., x3n3
 4   | x41, x42, ..., x4n4

This gives us 5 populations Xi, i = 0, 1, 2, 3, 4, with means μi. Our null hypothesis is that all the group means are equal:
H0: μ0 = μ1 = μ2 = μ3 = μ4,
against the alternative H1 that the μi are not all equal.
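To make the F statistic concrete, here is a small sketch with made-up blood-pressure numbers (only three dose groups, to keep it short). It computes the between-group and within-group sums of squares by hand, forms F as their ratio of mean squares, and checks the result against scipy.stats.f_oneway, which is the same test f_classif runs per feature:

```python
import numpy as np
from scipy import stats

# hypothetical blood-pressure changes for 3 dose groups (made-up numbers)
groups = [
    np.array([0.5, 1.0, -0.5, 0.2]),  # dose 0 (placebo)
    np.array([3.1, 2.8, 3.5, 2.9]),   # dose 1
    np.array([5.0, 4.6, 5.3, 4.9]),   # dose 2
]

all_x = np.concatenate(groups)
grand_mean = all_x.mean()
k = len(groups)   # number of groups
n = len(all_x)    # total sample size

# variation of the group means around the grand mean (explained by dose level)
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# random variation of observations inside each group
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

# F = mean square between / mean square within
F = (ss_between / (k - 1)) / (ss_within / (n - k))

F_scipy, p = stats.f_oneway(*groups)
print(np.isclose(F, F_scipy))  # True
```

When the group means are far apart relative to the scatter inside each group, the numerator dominates and F is large; under H0 it follows an F(k − 1, n − k) distribution, which is where the p-value comes from.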

The snippet below puts f_classif to work: it first fits a baseline classifier on the full wine dataset, then selects the 6 best features with SelectKBest(f_classif), reduces them with PCA, and compares several classifiers on the reduced features:

```python
import pandas as pd
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score, classification_report
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# load the dataset
data = load_wine()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = pd.Series(data.target)

# split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# baseline classifier on all 13 features (max_iter raised to avoid convergence warnings)
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# evaluate on the test set
y_pred = model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
report = classification_report(y_test, y_pred)
print('Accuracy:', accuracy)

# feature selection: keep the 6 features with the largest ANOVA F-values
selector = SelectKBest(f_classif, k=6)
X_new = selector.fit_transform(X, y)
print('Selected features:', selector.get_support())

# dimensionality reduction on the selected features
pca = PCA(n_components=2)
X_new = pca.fit_transform(X_new)

# re-split on the reduced features
X_train, X_test, y_train, y_test = train_test_split(X_new, y, test_size=0.2, random_state=0)

def Sf(model, X_train, X_test, y_train, y_test, modelname):
    # for classifiers that expose feature_importances_
    mode = model()
    mode.fit(X_train, y_train)
    y_pred = mode.predict(X_test)
    print(modelname, accuracy_score(y_test, y_pred))
    print(mode.feature_importances_)

def Sf1(model, X_train, X_test, y_train, y_test, modelname):
    mode = model()
    mode.fit(X_train, y_train)
    y_pred = mode.predict(X_test)
    print(modelname, accuracy_score(y_test, y_pred))

Sf1(SVC, X_train, X_test, y_train, y_test, 'SVM')
Sf1(LogisticRegression, X_train, X_test, y_train, y_test, 'Logistic regression')
Sf1(GaussianNB, X_train, X_test, y_train, y_test, 'Gaussian naive Bayes')
Sf1(KNeighborsClassifier, X_train, X_test, y_train, y_test, 'K-nearest neighbors')
Sf(DecisionTreeClassifier, X_train, X_test, y_train, y_test, 'Decision tree')
Sf(RandomForestClassifier, X_train, X_test, y_train, y_test, 'Random forest')
```