The steps that may be taken to solve a feature selection problem

Reference: the JMLR paper "An Introduction to Variable and Feature Selection" (Guyon and Elisseeff, 2003).


We summarize the steps that may be taken to solve a feature selection problem in a checklist:


1. Do you have domain knowledge? If yes, construct a better set of “ad hoc” features.


2. Are your features commensurate (i.e., measured on comparable scales)? If no, consider normalizing them.
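A minimal sketch of this step, assuming scikit-learn's StandardScaler as the normalizer (the paper does not prescribe a specific tool):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])  # features on very different scales
X_scaled = StandardScaler().fit_transform(X)  # zero mean, unit variance per column
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))  # ~[0, 0] and [1, 1]
```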


3. Do you suspect interdependence of features? If yes, expand your feature set by constructing conjunctive features (several variables combined into one feature) or products of features (higher-order features), as much as your computer resources allow (see example of use in Section 4.4).
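One way to build product features is scikit-learn's PolynomialFeatures; a hedged sketch (this is just one of several possible constructions):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[1.0, 2.0], [3.0, 4.0]])
expander = PolynomialFeatures(degree=2, include_bias=False)
X_expanded = expander.fit_transform(X)  # columns: x0, x1, x0^2, x0*x1, x1^2
print(expander.get_feature_names_out())
```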


4. Do you need to prune the input variables (e.g. for cost, speed or data understanding reasons)? If no, construct disjunctive features or weighted sums of features (i.e., merge several variables into a single feature, e.g. by clustering or matrix factorization, see Section 5).
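A sketch of the clustering route, using scikit-learn's FeatureAgglomeration on synthetic data (my choice of tool and data; matrix factorization such as PCA is the alternative shown under Section 5 below):

```python
import numpy as np
from sklearn.cluster import FeatureAgglomeration

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
X[:, 5:] = X[:, :5] + 0.01 * rng.normal(size=(100, 5))  # make half the features redundant
agglo = FeatureAgglomeration(n_clusters=5)
X_reduced = agglo.fit_transform(X)  # each new column averages one cluster of features
print(X_reduced.shape)  # (100, 5)
```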


5. Do you need to assess features individually (e.g. to understand their influence on the system or because their number is so large that you need to do a first filtering)? If yes, use a variable ranking method (Section 2 and Section 7.2); else, do it anyway to get baseline results.
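A sketch of variable ranking with a filter criterion, here mutual information on a synthetic dataset (correlation-based scores such as f_classif work the same way):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
scores = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(scores)[::-1]  # feature indices, best first
print(ranking[:5])
```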


6. Do you need a predictor? If no, stop.


7. Do you suspect your data is “dirty” (has a few meaningless input patterns and/or noisy outputs or wrong class labels)? If yes, detect the outlier examples using the top ranking variables obtained in step 5 as representation; check and/or discard them (note: “them” refers to the flagged examples, not the features).
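A hedged sketch of this step: represent each example by the top-ranked variables from step 5, then flag suspects. IsolationForest is my choice of detector here; the paper does not mandate one:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
scores = mutual_info_classif(X, y, random_state=0)
top_k = np.argsort(scores)[::-1][:5]        # top-ranked variables from step 5
flags = IsolationForest(random_state=0).fit_predict(X[:, top_k])  # -1 marks a suspect example
print("examples to check/discard:", np.where(flags == -1)[0][:10])
```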


8. Do you know what to try first? If no, use a linear predictor. Use a forward selection method (Section 4.2) with the “probe” method as a stopping criterion (Section 6) or use the ℓ0-norm embedded method (Section 4.3). For comparison, following the ranking of step 5, construct a sequence of predictors of the same nature using increasing subsets of features. Can you match or improve performance with a smaller subset? If yes, try a non-linear predictor with that subset.
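A sketch of this step under these assumptions: LinearSVC as the linear predictor and scikit-learn's SequentialFeatureSelector for forward selection; the paper's “probe” stopping criterion is not implemented here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
clf = LinearSVC(dual=False)

# Forward selection of a 5-feature subset.
sfs = SequentialFeatureSelector(clf, n_features_to_select=5, direction="forward").fit(X, y)
print("forward-selected features:", np.where(sfs.get_support())[0])

# For comparison: the same predictor on increasing subsets of the step-5 ranking.
ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]
for k in (1, 2, 5, 10, 20):
    acc = cross_val_score(clf, X[:, ranking[:k]], y, cv=5).mean()
    print(f"top-{k} ranked features: CV accuracy {acc:.3f}")
```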


9. Do you have new ideas, time, computational resources, and enough examples? If yes, compare several feature selection methods, including your new idea, correlation coefficients, backward selection and embedded methods (Section 4). Use linear and non-linear predictors. Select the best approach with model selection (Section 6).
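A sketch of the model-selection side of this step: choose the best configuration (here, how many features to keep) by cross-validated grid search; the dataset and pipeline are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
pipe = Pipeline([("select", SelectKBest(f_classif)), ("clf", LinearSVC(dual=False))])
search = GridSearchCV(pipe, {"select__k": [2, 5, 10, 20]}, cv=5)  # selection inside CV
search.fit(X, y)
print(search.best_params_, search.best_score_)
```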


10. Do you want a stable solution (to improve performance and/or understanding)? If yes, sub-sample your data and redo your analysis for several “bootstraps” (Section 7.1).
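A sketch of the bootstrap idea: repeat the ranking on resampled data and keep track of how often each feature lands in the top set (the counting scheme is my illustration, not the paper's exact procedure):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
rng = np.random.default_rng(0)
counts = np.zeros(X.shape[1])
for _ in range(30):                                # 30 bootstrap replicates
    idx = rng.integers(0, len(y), size=len(y))     # sample examples with replacement
    scores = mutual_info_classif(X[idx], y[idx], random_state=0)
    counts[np.argsort(scores)[::-1][:5]] += 1      # count top-5 appearances
print("selection frequency per feature:", counts / 30)
```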






Section 2: describes filters that select variables by ranking them with correlation coefficients (common criteria include the Pearson correlation coefficient and mutual information).
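A minimal sketch of this ranking idea: score each feature column by its absolute Pearson correlation with the target (synthetic data, plain NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = 2 * X[:, 3] + 0.1 * rng.normal(size=200)       # feature 3 drives the target

corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
ranking = np.argsort(np.abs(corr))[::-1]
print("ranking:", ranking)                         # feature 3 should come first
```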

Section 3: The limitations of such filter approaches are illustrated by a set of constructed examples. (Picking one “best” variable at a time by these criteria is limited, because combinations of variables often outperform any single variable; even a variable that looks useless on its own can provide a significant performance improvement when combined with useful variables, or when several individually useless variables are combined.)
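The classic illustration is XOR: each variable is uncorrelated with the target in isolation, yet the pair determines it exactly. A small sketch of that point:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, size=1000)
x2 = rng.integers(0, 2, size=1000)
y = x1 ^ x2                                        # XOR target

print(np.corrcoef(x1, y)[0, 1], np.corrcoef(x2, y)[0, 1])   # both near 0
print("best single variable:", max(np.mean(x1 == y), np.mean(x2 == y)))  # ~0.5
print("pair of variables:", np.mean((x1 ^ x2) == y))                     # 1.0
```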

Section 4: Subset selection methods are then introduced. These include wrapper methods that assess subsets of variables according to their usefulness to a given predictor (essentially simple stepwise forward addition or backward elimination of candidates: http://blog.csdn.net/mmc2015/article/details/47426437). We show how some embedded methods implement the same idea, but proceed more efficiently by directly optimizing a two-part objective function with a goodness-of-fit term and a penalty for a large number of variables (the so-called ℓ0-norm, ℓ1-norm, etc.).
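One concrete embedded instance is an ℓ1 penalty, which drives some weights exactly to zero; a sketch assuming LinearSVC with penalty="l1" (one of many possible choices):

```python
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
clf = LinearSVC(penalty="l1", dual=False, C=0.1).fit(X, y)  # l1 penalty = sparsity
selected = (clf.coef_ != 0).sum()                           # surviving variables
print("non-zero weights:", selected, "of", X.shape[1])
```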

Section 5: We then turn to the problem of feature construction, whose goals include increasing the predictor performance and building more compact feature subsets. All of the previous steps benefit from reliably assessing the statistical significance of the relevance of features. (Common approaches: clustering, whose core idea is to replace several similar variables with their cluster center, most often via k-means or hierarchical clustering; matrix factorization, which applies a linear transform to the input variables, e.g. PCA/SVD/LDA; and non-linear transforms such as kernel methods.)
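A sketch of the matrix-factorization route: PCA replaces the inputs with a few linear combinations of the original variables (clustering-based construction was sketched under step 4 above):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA

X, _ = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
X_new = PCA(n_components=5).fit_transform(X)  # 5 constructed features
print(X_new.shape)  # (300, 5)
```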

Section 6:We briefly review model selection methods and statistical tests used to that effect. 

Section 7:Finally, we conclude the paper with a discussion section in which we go over more advanced issues.




We recommend using a linear predictor of your choice (e.g. a linear SVM) and selecting variables in two alternate ways: (1) with a variable ranking method using a correlation coefficient or mutual information; (2) with a nested subset selection method performing forward or backward selection, or with multiplicative updates.
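A hedged sketch of this recommendation, assuming LinearSVC as the linear predictor, SelectKBest as the ranking route, and RFE (backward elimination) as the nested-subset route:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
route1 = make_pipeline(SelectKBest(mutual_info_classif, k=5), LinearSVC(dual=False))
route2 = make_pipeline(RFE(LinearSVC(dual=False), n_features_to_select=5), LinearSVC(dual=False))
for name, model in (("ranking filter", route1), ("nested subsets", route2)):
    print(name, cross_val_score(model, X, y, cv=5).mean())
```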



