Feature selection algorithms: the chi-square algorithm

This article introduces the chi-squared feature selection algorithm, which evaluates how strongly a feature is associated with the target variable. By computing a chi-square value for each feature, we can judge its importance. The formula is X^2 = ∑ ((YA - YB)^2 / YB), where YA is the observed count and YB is the count we would expect if the feature and target were unrelated. A larger chi-square value indicates a stronger association between the feature and the target. Data-mining tools such as Weka provide a built-in ChiSquaredAttributeEval evaluator implementing this test.

The chi-square feature selection algorithm:

Compute the chi-square value of each feature, then rank the features by that value and select the top ones.

The steps are as follows:

1. Assume the feature is independent of the target variable.
2. Compute the chi-square value; a small value indicates a weak association, a large value a strong one.
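The two steps above can be sketched in pure Python. The toy dataset, feature names, and the helper name `chi2_score` below are made up for illustration:

```python
from collections import Counter

def chi2_score(feature_values, target_values):
    """Chi-square statistic between one categorical feature and the target."""
    n = len(feature_values)
    joint = Counter(zip(feature_values, target_values))
    f_totals = Counter(feature_values)
    t_totals = Counter(target_values)
    score = 0.0
    for fv in f_totals:
        for tv in t_totals:
            observed = joint.get((fv, tv), 0)
            # Step 1: expected count under the independence assumption
            expected = f_totals[fv] * t_totals[tv] / n
            score += (observed - expected) ** 2 / expected
    return score

# Toy data: "color" tracks the target closely, "size" does not
features = {
    "color": ["r", "r", "g", "g", "b", "b"],
    "size":  ["s", "l", "s", "l", "s", "l"],
}
target = ["Y1", "Y1", "Y1", "Y2", "Y2", "Y2"]

# Step 2: rank features by chi-square value, largest (most relevant) first
ranked = sorted(features, key=lambda f: chi2_score(features[f], target),
                reverse=True)
print(ranked)  # "color" ranks above "size"
```

Note that ranking features independently like this ignores correlations between features; it scores each feature on its own.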


The chi-square value of each feature is computed as:

             X^2 = ∑ ((YA - YB)^2 / YB)

Here YA is the observed count in each cell, and YB is the ideal (expected) count, i.e., the count we would expect if the independence assumption held.

Since the feature is assumed independent of the target, the target values should be distributed across the feature's categories in proportion to their marginal totals.

For example:

Suppose X has three categories XA, XB, XC, and Y has two categories Y1, Y2.

Then the contingency table for the chi-square computation is:

       Y1     Y2     Total
XA     a      b      a+b
XB     c      d      c+d
XC     e      f      e+f
Total  a+c+e  b+d+f  n = a+b+c+d+e+f

Since X is assumed independent of Y, the expected count for cell (XA, Y1) is (a+b) * (a+c+e) / n, where n = a+b+c+d+e+f is the grand total; every other cell follows the same pattern of row total times column total divided by n.
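As a concrete check, here is a minimal sketch that fills the 3x2 table above with made-up counts and evaluates X^2 = ∑ ((YA - YB)^2 / YB) cell by cell:

```python
# Made-up observed counts for the 3x2 table (rows XA, XB, XC; columns Y1, Y2)
observed = [
    [10, 20],  # XA: a, b
    [30, 25],  # XB: c, d
    [15, 40],  # XC: e, f
]

n = sum(sum(row) for row in observed)                            # grand total
row_totals = [sum(row) for row in observed]                      # a+b, c+d, e+f
col_totals = [sum(row[j] for row in observed) for j in range(2)] # a+c+e, b+d+f

chi2 = 0.0
for i, row in enumerate(observed):
    for j, ya in enumerate(row):
        # Expected count under independence: row total * column total / n
        yb = row_totals[i] * col_totals[j] / n
        chi2 += (ya - yb) ** 2 / yb

print(chi2)  # about 9.14 for these counts
```

The loop mirrors the table directly: each observed count YA is compared against its expected count YB from the marginals.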
