- Filter methods pick up the intrinsic properties of the features (i.e., the “relevance” of the features) measured via univariate statistics instead of cross-validation performance.
- information gain
- chi-square test
- Fisher score
- correlation coefficient
- variance threshold
- Wrapper methods measure the “usefulness” of features based on the performance of a classifier trained on them.
- recursive feature elimination
- sequential feature selection algorithms
- genetic algorithms
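The two families above can be contrasted in a minimal sketch, using scikit-learn as the implementation (an assumption; the notes name no library): a variance threshold and a chi-square test score features without fitting the final model, while recursive feature elimination repeatedly refits a classifier.

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import VarianceThreshold, SelectKBest, chi2, RFE
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Filter: drop near-constant features using only their variance (no model involved).
vt = VarianceThreshold(threshold=0.2)
X_vt = vt.fit_transform(X)

# Filter: score each feature against the class labels with a univariate chi-square test.
X_chi2 = SelectKBest(chi2, k=2).fit_transform(X, y)

# Wrapper: recursive feature elimination retrains the classifier each round,
# discarding the weakest feature until the desired number remains.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=2)
X_rfe = rfe.fit_transform(X, y)

print(X_vt.shape, X_chi2.shape, X_rfe.shape)
```

Note that the filter steps never train the downstream classifier, which is exactly why they are cheaper than the wrapper loop.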
- Embedded methods are quite similar to wrapper methods in that they, too, optimize the objective function or performance of a learning algorithm or model.
- L1 (LASSO) regularization
- decision tree
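The two embedded examples above can be sketched as follows, again assuming scikit-learn: L1 (LASSO) regularization drives uninformative coefficients to exactly zero during fitting, and a decision tree exposes impurity-based importances as a by-product of training, so in both cases selection falls out of the learning step itself.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import Lasso
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# L1 (LASSO) regularization: zeroed coefficients mark discarded features.
lasso = Lasso(alpha=0.1).fit(X, y)
kept = lasso.coef_ != 0

# Decision tree: impurity-based importances rank features after a single fit.
tree = DecisionTreeClassifier(random_state=0).fit(X, y)

print(kept, tree.feature_importances_)
```

Unlike a wrapper, neither technique re-trains the model per candidate feature subset; one fit yields the selection signal.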
Wrapper methods essentially solve the “real” problem (optimizing the classifier performance), but they are computationally more expensive than filter methods because of the repeated learning steps and cross-validation.
The difference from wrapper methods is that embedded methods use an intrinsic model-building metric during learning.