Classification prediction with the Infinite Latent Feature Selection (ILFS) algorithm: perform feature selection and rank the input features of a classification data set by importance.

%% Clear the environment
clc;
clear;
warning off
close all

%% Load the data
res = xlsread('特征选择数据集.xlsx');

%% Inspect the data
num_class = length(unique(res(:, end)));  % number of classes (labels in the last column of the spreadsheet)
num_res = size(res, 1);                   % number of samples (one sample per row)
num_size = 0.7;                           % fraction of the data used for training
res = res(randperm(num_res), :);          % shuffle the samples (comment this line out to keep the original order)
flag_confusion = 1;                       % set to 1 to plot the confusion matrix (requires MATLAB R2018 or later)

%% Preallocate containers
P_train = []; P_test = [];
T_train = []; T_test = [];

%% Split into training and test sets (stratified by class)
for i = 1 : num_class
    mid_res = res((res(:, end) == i), :);     % all samples of class i
    mid_size = size(mid_res, 1);              % number of samples in class i
    mid_train = round(num_size * mid_size);   % number of training samples for class i

    P_train = [P_train; mid_res(1: mid_train, 1: end - 1)];     % training inputs
    T_train = [T_train; mid_res(1: mid_train, end)];            % training targets

    P_test = [P_test; mid_res(mid_train + 1: end, 1: end - 1)]; % test inputs
    T_test = [T_test; mid_res(mid_train + 1: end, end)];        % test targets
end

%% Transpose the data (one sample per column)
P_train = P_train'; P_test = P_test';
T_train = T_train'; T_test = T_test';

%% Numbers of training and test samples
M = size(P_train, 2);
N = size(P_test, 2);

%% Normalize the inputs to [0, 1]
[p_train, ps_input] = mapminmax(P_train, 0, 1);
p_test = mapminmax('apply', P_test, ps_input);
t_train = T_train;
t_test = T_test;

%% Transpose back (one sample per row)
p_train = p_train'; p_test = p_test';
t_train = t_train'; t_test = t_test';

%% Feature selection
k = 8;    % number of features to keep
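The ranking and evaluation step itself can proceed as in the minimal sketch below. It assumes the ILFS implementation from Giorgio Roffo's Feature Selection Library; the function name ILFS, its [rank_idx, weight] = ILFS(X, Y, T, verbose) signature, and the k-NN classifier are assumptions for illustration, not necessarily what the original script used.

%% Rank features and evaluate (sketch; FSLib-style ILFS interface assumed)
[rank_idx, weight] = ILFS(p_train, t_train, 6, 0);  % T = 6 mixture components and verbose = 0 are arbitrary choices

p_train_k = p_train(:, rank_idx(1: k));             % keep the k top-ranked features
p_test_k  = p_test(:, rank_idx(1: k));

figure                                              % visualize the importance ranking
bar(weight(rank_idx))
xlabel('Feature (sorted by ILFS rank)')
ylabel('Importance weight')
title('ILFS feature importance ranking')

mdl   = fitcknn(p_train_k, t_train, 'NumNeighbors', 5);  % illustrative stand-in classifier
t_sim = predict(mdl, p_test_k);
fprintf('Test accuracy: %.2f%%\n', 100 * sum(t_sim == t_test) / N);

if flag_confusion == 1                              % confusionchart needs R2018b or later
    figure
    confusionchart(t_test, t_sim);
end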

In many data analysis tasks, one is often confronted with very high-dimensional data. Feature selection techniques are designed to find the relevant subset of the original features, which can facilitate clustering, classification, and retrieval. The feature selection problem is essentially a combinatorial optimization problem and is computationally expensive. Traditional feature selection methods address this issue by selecting the top-ranked features based on certain scores computed independently for each feature. These approaches neglect the possible correlation between different features and thus cannot produce an optimal feature subset. Inspired by recent developments in manifold learning and L1-regularized models for subset selection, the authors propose a new approach, Multi-Cluster/Class Feature Selection (MCFS): select those features such that the multi-cluster/class structure of the data is best preserved. The corresponding optimization problem can be solved efficiently, since it only involves a sparse eigen-problem and an L1-regularized least-squares problem. Notably, MCFS can be applied in supervised, unsupervised, and semi-supervised settings. The authors ask that users of these algorithms cite the following works:

- Deng Cai, Chiyuan Zhang, Xiaofei He, "Unsupervised Feature Selection for Multi-Cluster Data", 16th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'10), July 2010.
- Xiaofei He, Deng Cai, and Partha Niyogi, "Laplacian Score for Feature Selection", Advances in Neural Information Processing Systems 18 (NIPS'05), Vancouver, Canada, 2005.
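As a concrete illustration of the two-step procedure described above (a sparse eigen-problem followed by L1-regularized least squares), here is a minimal MATLAB sketch of MCFS-style unsupervised feature selection. It is a simplified sketch, not the reference implementation: the 5-NN heat-kernel graph, the lasso penalty of 0.01, and the max-absolute-coefficient score are assumed choices, and pdist/lasso require the Statistics and Machine Learning Toolbox.

function top_idx = mcfs_sketch(X, num_clusters, k)
% MCFS-style feature selection sketch.
% X: n-by-d data matrix; num_clusters: assumed number of clusters;
% k: number of features to keep.
n = size(X, 1);

% Step 1: build a symmetric 5-NN heat-kernel affinity graph
D2 = squareform(pdist(X));                % pairwise Euclidean distances
sigma = mean(D2(:));
W = zeros(n);
for i = 1 : n
    [~, ord] = sort(D2(i, :));
    nbrs = ord(2 : 6);                    % 5 nearest neighbors, skipping self
    W(i, nbrs) = exp(-D2(i, nbrs).^2 / (2 * sigma^2));
end
W = max(W, W');                           % symmetrize
Dg = diag(sum(W, 2));
L = Dg - W;                               % graph Laplacian

% Solve the generalized eigen-problem L*y = lambda*Dg*y and keep the
% eigenvectors of the smallest nontrivial eigenvalues
[V, E] = eig(L, Dg);
[~, ord] = sort(diag(E), 'ascend');
Y = V(:, ord(2 : num_clusters + 1));      % drop the trivial constant eigenvector

% Step 2: L1-regularized least squares - regress each eigenvector on the
% features and record the sparse coefficients
d = size(X, 2);
coef = zeros(d, num_clusters);
for c = 1 : num_clusters
    coef(:, c) = lasso(X, Y(:, c), 'Lambda', 0.01);  % penalty is an arbitrary choice
end

% Score each feature by its largest absolute coefficient and keep the top k
score = max(abs(coef), [], 2);
[~, order] = sort(score, 'descend');
top_idx = order(1 : k);
end

For the data prepared above, top_idx = mcfs_sketch(p_train, num_class, k) would return the column indices of the k selected features.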