Paper code reading and partial reproduction: Deep Lasso

Paper: A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning | OpenReview

Code:

GitHub - vcherepanova/tabular-feature-selection: Repository for the feature selection with tabular models

Data: the same as in the FT-Transformer post: https://www.dropbox.com/s/o53umyg6mn3zhxy/

The paper proposes a new feature selection method: Deep Lasso.

I. Paper Overview

Real-world datasets rarely contain Gaussian-noise features that are completely useless for prediction, yet past evaluations of feature-selection algorithms often construct synthetic data containing exactly such pure-noise features drawn from a Gaussian distribution. This is not only far from reality, it also makes the feature-selection task easier than it actually is. The authors therefore build a feature-selection benchmark on top of real datasets, augmenting them with random noise features, corrupted features, and second-order features (a prototype of feature engineering, possibly redundant), and evaluate each feature-selection method by the performance of downstream models (MLP and FT-Transformer) trained on the selected features.

II. Experimental Setup

The experiments use real-world datasets and add the following kinds of "extra features" to each of them; an upstream model then performs feature selection, and the selected features are passed on to train a downstream model:

Random features: Gaussian noise features added directly.
Corrupted features: a subset of the original features corrupted with Gaussian or Laplace noise.
Second-order features: products of randomly chosen original features, simulating the redundant features that feature engineering can produce. Of course, such products may also turn out to be useful "non-noise", which is why the evaluation relies on downstream model performance.

In the actual experiments, the upstream feature-selection training and the downstream model training are wrapped into a single pipeline whose hyperparameters are tuned with Optuna; the final models are then trained with different random seeds.
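To make this concrete, here is a minimal, self-contained toy sketch of such a joint objective. It uses scikit-learn stand-ins (a random forest as the upstream selector and an MLPRegressor downstream) rather than the repository's models; in the actual code this role is played by tune_full_pipeline.py.

import numpy as np
import optuna
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Toy stand-in data; the benchmark uses real tabular datasets with injected extra features.
X, y = make_regression(n_samples=500, n_features=20, n_informative=8, noise=10.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

def objective(trial):
    # Upstream: a feature selector with its own hyperparameters.
    n_estimators = trial.suggest_int("n_estimators", 50, 300)
    selector = RandomForestRegressor(n_estimators=n_estimators, random_state=0).fit(X_tr, y_tr)
    keep = np.argsort(selector.feature_importances_)[-10:]  # keep the top 50% of features

    # Downstream: a model trained only on the selected features, with its own hyperparameters.
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    model = MLPRegressor(hidden_layer_sizes=(64,), learning_rate_init=lr,
                         max_iter=300, random_state=0).fit(X_tr[:, keep], y_tr)

    # Optuna optimizes the downstream validation score of the whole pipeline.
    return model.score(X_val[:, keep], y_val)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)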

The following feature-selection methods are compared:

1. Univariate statistical tests: for classification tasks, ANOVA is used; the F statistic is built from the between-group and within-group sums of squares and tests whether a feature differs significantly across classes. For regression tasks, an F test based on the regression and residual sums of squares tests whether the feature has a significant linear relationship with the target.
2. Lasso: an L1 penalty is added to encourage sparsity, and features are ranked by the magnitude of their coefficients.
3. 1L Lasso (First-Layer Lasso): a Group Lasso penalty is applied to the first hidden layer of the MLP, and features are ranked by the group norm of their first-layer weights.

\min_\theta \, \alpha L_\theta(X,Y) + (1-\alpha)\sum_{j=1}^{m}\|W^{(j)}\|_2

Here the first term is the training loss weighted by α, and the second term penalizes the L2 norm of each feature's group of first-layer weights, which is what drives the feature selection.
4. Adaptive Group Lasso (AGL): similar to 1L Lasso above, except that each feature's group norm is additionally reweighted by an adaptive coefficient:

\min_\theta \, \alpha L_\theta(X,Y) + (1-\alpha)\sum_{j=1}^{m}\frac{1}{\|\widehat{W}^{(j)}\|_2^{\gamma}}\|W^{(j)}\|_2

Here \widehat{W}^{(j)} is an initial estimate of W^{(j)} that supplies the adaptive weights.
5. LassoNet: a neural architecture with built-in feature selection; a "skip layer" (a linear residual connection) controls how many features may pass into the subsequent hidden layers, enforcing feature sparsity.
6. Random Forest: a bagging ensemble of decision trees; features are ranked by their contribution to the ensemble, concretely by how much impurity reduction they provide at the nodes where they are used for splitting.
7. XGBoost: the most popular GBDT implementation; feature importance is computed as the average gain over all nodes that split on the feature.
8. Attention Map: for models with attention, such as FT-Transformer, feature importance is measured from the attention map, i.e. the softmax-normalized scaled dot product of queries and keys computed during the forward pass, which can be read as an implicit form of feature weighting.
9. Deep Lasso, proposed by the authors, generalizes Lasso to arbitrary differentiable models: a lasso-style penalty is placed on the gradient of the loss with respect to each input feature, encouraging sparse input gradients so that the model becomes robust to changes in irrelevant features (a toy sketch of this penalty follows below).

\min_\theta \, \alpha L_\theta(X,Y) + (1-\alpha)\sum_{j=1}^{m}\left\|\frac{\partial L_\theta(X,Y)}{\partial X^{(j)}}\right\|_2

Compared with plain 1L Lasso, the penalty here is the L2 norm of the loss gradient with respect to each feature. Once training is finished, the same gradient norm serves directly as the importance score of each feature:

\left\|\frac{\partial L_\theta(X,Y)}{\partial X^{(j)}}\right\|_2
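As a toy illustration of this penalty (not the repository's code, which is walked through later in the post), the following sketch computes the Deep Lasso term for a small regression MLP: the per-feature L2 norm, taken over the batch, of the loss gradient with respect to the inputs.

import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(64, 10, requires_grad=True)   # 64 samples, 10 features
y = torch.randn(64, 1)

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
loss = nn.functional.mse_loss(net(X), y)

# Gradient of the loss w.r.t. the inputs; create_graph=True keeps it differentiable
# so the penalty itself can be backpropagated.
(grad_x,) = torch.autograd.grad(loss, X, create_graph=True)

# Group lasso over samples: one L2 norm per feature column, then summed over features.
penalty = grad_x.pow(2).sum(dim=0).add(1e-8).sqrt().sum()

alpha = 0.9
total_loss = alpha * loss + (1 - alpha) * penalty
total_loss.backward()   # trains the network while shrinking per-feature input gradients

After training, grad_x.pow(2).sum(dim=0).sqrt() would give one importance score per feature.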

III. Implementation Details

This section walks through selected pieces of the code; the complete code is available at the GitHub page linked at the top of this post.

1. Noise generation

if add_noise == 'random_feats':
	np.random.seed(0)
	# number of extra features chosen so that they make up noise_percent of the final feature count
	n_feats = int(numerical_array.shape[1]/(1-noise_percent)*noise_percent)
	# pure Gaussian noise, unrelated to the target
	uninformative_features = np.random.randn(numerical_array.shape[0], n_feats)
	numerical_array = np.concatenate([numerical_array, uninformative_features], axis=1)
elif add_noise == 'corrupted_feats':
	np.random.seed(0)
	n_max = int(numerical_array.shape[1] / 0.1 * 0.9)
	n_feats = int(numerical_array.shape[1]/(1-noise_percent)*noise_percent)
	# sample (with replacement) which original features to copy and corrupt
	features_idx = np.random.choice(numerical_array.shape[1], n_max, replace=True)[:n_feats]
	features_copy = numerical_array[:,features_idx]
	features_std = np.nanstd(features_copy, axis=0)
	alpha_noise = 0.5
	# mix each copied feature with zero-mean Gaussian noise of matching scale
	corrupted_features = (1-alpha_noise)*features_copy + alpha_noise*np.random.randn(numerical_array.shape[0], n_feats)*features_std
	numerical_array = np.concatenate([numerical_array, corrupted_features], axis=1)
elif add_noise == 'secondorder_feats':
	np.random.seed(0)
	n_max = int(numerical_array.shape[1]/0.1*0.9)
	n_feats = int(numerical_array.shape[1]/(1-noise_percent)*noise_percent)
	# pick two (possibly overlapping) sets of original features and multiply them pairwise
	features_1 = np.random.choice(numerical_array.shape[1], n_max, replace=True)[:n_feats]
	features_2 = np.random.choice(numerical_array.shape[1], n_max, replace=True)[:n_feats]
	second_order_features = numerical_array[:,features_1]*numerical_array[:,features_2]
	numerical_array = np.concatenate([numerical_array, second_order_features], axis=1)

random_feats (random noise) is drawn directly from a standard normal distribution; corrupted_feats (corrupted features) copies several numerical features and mixes in zero-mean noise with matching standard deviation; secondorder_feats (second-order features) randomly picks pairs of features, multiplies them, and appends the products to the dataset.

Another point worth noting: when preparing the data, the code merges the original train, val, and test npy files, adds the noise features to the merged array, and then re-splits into training, validation, and test sets with train_test_split.

For categorical features, missing values are simply treated as a new category.

For numerical features, missing values are filled with the mean, and the features are then preprocessed with either standardization or a quantile transform.

For the target y, regression targets are standardized, while classification targets are generally left unprocessed.
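A minimal sketch of these preprocessing choices with scikit-learn, on toy data (the repository's exact parameters may differ):

import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import QuantileTransformer, StandardScaler

# Toy stand-ins for the dataset's numerical features, categorical features, and regression target.
X_num = np.array([[1.0, np.nan], [2.0, 5.0], [3.0, 7.0], [np.nan, 9.0]])
X_cat = pd.DataFrame({"color": ["red", None, "blue", "red"]})
y = np.array([1.2, 3.4, 2.2, 5.0])

# Numerical features: mean imputation, then a quantile transform (standardization is the other option).
X_num = SimpleImputer(strategy="mean").fit_transform(X_num)
X_num = QuantileTransformer(output_distribution="normal", n_quantiles=4, random_state=0).fit_transform(X_num)

# Categorical features: missing values become a category of their own.
X_cat = X_cat.fillna("__missing__")

# Regression target: standardize (classification labels would be left untouched).
y = StandardScaler().fit_transform(y.reshape(-1, 1)).ravel()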

2. Extracting feature importances

Let us look at how feature importances are extracted for each type of upstream model:

if cfg.dataset.task == 'regression':
	if cfg.model.name=='xgboost':
		model = xgb.XGBRegressor(**cfg.model, seed = cfg.hyp.seed)
	elif cfg.model.name == "univariate":
		model = SelectKBest(score_func=f_regression, k="all")
	elif cfg.model.name == "lasso":
		model = Lasso(alpha=cfg.model.alpha, random_state=cfg.hyp.seed)
	elif cfg.model.name == 'forest':
		model = RandomForestRegressor(n_estimators=cfg.model.n_estimators,
									  max_depth=cfg.model.max_depth,
									  random_state=cfg.hyp.seed,
									  n_jobs=-1)
	else:
		raise NotImplementedError('Model is not implemented')
else:
	if cfg.model.name == 'xgboost':
		model = xgb.XGBClassifier(**cfg.model, seed = cfg.hyp.seed)
	elif cfg.model.name == "univariate":
		model = SelectKBest(score_func=f_classif, k="all")
	elif cfg.model.name == "lasso":
		model = LogisticRegression(penalty='l1', solver="saga",
								   C=cfg.model.alpha, random_state=cfg.hyp.seed)
	elif cfg.model.name == 'forest':
		model = RandomForestClassifier(n_estimators=cfg.model.n_estimators,
									   max_depth=cfg.model.max_depth,
									   random_state=cfg.hyp.seed,
									   n_jobs=-1)
	else:
		raise NotImplementedError('Model is not implemented')

First come the non-neural models, which the authors group under "classical_model": XGBoost, univariate statistical tests, Lasso, and random forest. Note that the univariate test is not really a "model" but scikit-learn's SelectKBest selector.

#  Feature Selection
if cfg.mode == 'feature_selection':
	if cfg.model.name == 'xgboost':
		importances = model.feature_importances_
	elif cfg.model.name == 'univariate':
		importances = np.abs(model.scores_)
	elif cfg.model.name == 'lasso':
		importances = np.abs(model.coef_)
	elif cfg.model.name == 'forest':
		importances = model.feature_importances_
	else:
		raise NotImplementedError('Model is not implemented')

Extracting importances from these models is straightforward: essentially every library exposes a corresponding attribute that can be read directly.

Note that among the "classical_model" methods, none except XGBoost can handle categorical features.

As for the neural models, there are three base architectures: FT-Transformer, MLP, and ResNet. MLP and ResNet are standard and need no further explanation. FT-Transformer, in short, prepends a Feature Tokenizer to a Transformer: numerical features go through a per-feature linear transform, categorical features go through embeddings, and a CLS token is appended to hold the final representation; the resulting token sequence is fed into an encoder-only Transformer.
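A minimal sketch of what such a feature tokenizer might look like (an illustration only, not the rtdl implementation the repository builds on):

import torch
import torch.nn as nn

class FeatureTokenizer(nn.Module):
    """Turn numerical + categorical features into a sequence of d-dimensional tokens."""
    def __init__(self, n_num, cat_cardinalities, d):
        super().__init__()
        # One (weight, bias) pair per numerical feature: token_j = x_j * w_j + b_j
        self.num_weight = nn.Parameter(torch.randn(n_num, d))
        self.num_bias = nn.Parameter(torch.zeros(n_num, d))
        # One embedding table per categorical feature
        self.cat_embeddings = nn.ModuleList([nn.Embedding(c, d) for c in cat_cardinalities])
        # Learnable [CLS] token appended to the sequence
        self.cls = nn.Parameter(torch.randn(1, 1, d))

    def forward(self, x_num, x_cat):
        num_tokens = x_num.unsqueeze(-1) * self.num_weight + self.num_bias      # (B, n_num, d)
        cat_tokens = torch.stack(
            [emb(x_cat[:, i]) for i, emb in enumerate(self.cat_embeddings)], dim=1
        )                                                                        # (B, n_cat, d)
        cls = self.cls.expand(x_num.shape[0], -1, -1)                            # (B, 1, d)
        return torch.cat([num_tokens, cat_tokens, cls], dim=1)                   # (B, n_num+n_cat+1, d)

tok = FeatureTokenizer(n_num=3, cat_cardinalities=[5, 7], d=8)
tokens = tok(torch.randn(2, 3), torch.randint(0, 5, (2, 2)))
print(tokens.shape)   # torch.Size([2, 6, 8]): 3 numerical + 2 categorical + 1 CLS token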

For the neural models, feature selection is mostly implemented in trainning.py by adding a regularization term to the loss:

def add_dimension_glasso(var, dim=0):
    return var.pow(2).sum(dim=dim).add(1e-8).pow(1/2.).sum()

if hyp.regularization=='deep_lasso':
	grad_params = autograd.grad(loss, inputs_num, create_graph=True, allow_unused=True)
	reg = add_dimension_glasso(grad_params[0], dim=0)
	loss = hyp.reg_weight*reg + (1-hyp.reg_weight)*loss
elif hyp.regularization=='lasso':
	reg = add_dimension_glasso(net.module.head.weight)
	loss = hyp.reg_weight*reg + (1-hyp.reg_weight)*loss
elif hyp.regularization=='first_lasso':
	reg = add_dimension_glasso(net.module.layers[0].weight)
	loss = hyp.reg_weight * reg + (1 - hyp.reg_weight) * loss

loss.backward()
optimizer.step()
train_loss += loss.item()
total += targets.size(0)

if hyp.regularization == 'deep_lasso':
	grad_avg += grad_params[0].detach().cpu().abs().mean(0)
	del grad_params

The add_dimension_glasso function adds a group-lasso penalty over one dimension of a weight tensor; note the 1e-8 added inside the square root. My understanding is that this is a smoothing term: without it, a group whose sum of squares is exactly zero would make the gradient of the square root undefined.
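Written out in the same notation as the formulas above, with dim=0 the function sums the squares down each column and takes one square root per column, i.e. one group per input feature when applied to a weight matrix of shape (out_features, in_features) or to an input-gradient matrix of shape (batch, n_features):

\sum_{j}\sqrt{\sum_{i} W_{ij}^{2} + 10^{-8}}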

When the regularization is 'lasso', the penalty is applied to the network's final output layer (head); when it is 'first_lasso' (1L Lasso), it is applied to the first layer (layers[0]).

As for Deep Lasso, the code first computes the gradient with autograd.grad. A brief introduction to the autograd.grad function:

The first argument, outputs, is the quantity to differentiate (here the loss), and the second, inputs, is the variable to differentiate with respect to. Setting create_graph=True allows higher-order gradients to be computed, so the penalty built from the gradient can itself be backpropagated; allow_unused=True permits some inputs to play no role in the computation of the outputs instead of raising an error. (In this code only inputs_num is passed as inputs, not the categorical features; perhaps because categorical features must first go through an embedding layer? This would need to be checked experimentally.)

The L2 norm of this gradient is then computed per feature and added to the loss as the penalty.

In addition, the loop maintains a grad_avg variable holding a running sum of the gradient magnitudes, but this variable is not actually used afterwards.
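A tiny, self-contained example of the two flags discussed above (the names a, b, and f are just illustrative):

import torch

a = torch.tensor(3.0, requires_grad=True)
b = torch.tensor(2.0, requires_grad=True)
f = a ** 2          # b is not used in f

# allow_unused=True: the gradient w.r.t. the unused input b comes back as None instead
# of raising an error. create_graph=True keeps grad_a differentiable, so it can itself
# appear in a loss term and be backpropagated, which is exactly what Deep Lasso needs.
grad_a, grad_b = torch.autograd.grad(f, (a, b), create_graph=True, allow_unused=True)
print(grad_a, grad_b)   # tensor(6., grad_fn=...)  None

penalty = grad_a ** 2   # a term built from the gradient
penalty.backward()      # second-order backprop, possible because create_graph=True
print(a.grad)           # d((2a)^2)/da = 8a = tensor(24.)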

For FT-Transformer, in order to keep the attention maps (scaled dot-product attention) computed during the forward pass, the model definition needs a slight modification:

class MultiheadAttention(nn.Module):
    def __init__(
            self, d: int, n_heads: int, dropout: float, initialization: str
    ) -> None:
        if n_heads > 1:
            assert d % n_heads == 0
        assert initialization in ["xavier", "kaiming"]

        super().__init__()
        self.W_q = nn.Linear(d, d)
        self.W_k = nn.Linear(d, d)
        self.W_v = nn.Linear(d, d)
        self.W_out = nn.Linear(d, d) if n_heads > 1 else None
        self.n_heads = n_heads
        self.dropout = nn.Dropout(dropout) if dropout else None

        for m in [self.W_q, self.W_k, self.W_v]:
            if initialization == "xavier" and (n_heads > 1 or m is not self.W_v):
                # gain is needed since W_qkv is represented with 3 separate layers
                nn_init.xavier_uniform_(m.weight, gain=1 / math.sqrt(2))
            nn_init.zeros_(m.bias)
        if self.W_out is not None:
            nn_init.zeros_(self.W_out.bias)

    def _reshape(self, x: Tensor) -> Tensor:
        batch_size, n_tokens, d = x.shape
        d_head = d // self.n_heads
        return (
            x.reshape(batch_size, n_tokens, self.n_heads, d_head)
                .transpose(1, 2)
                .reshape(batch_size * self.n_heads, n_tokens, d_head)
        )

    def forward(
            self,
            x_q: Tensor,
            x_kv: Tensor,
            key_compression: ty.Optional[nn.Linear],
            value_compression: ty.Optional[nn.Linear],
    ) -> Tensor:
        q, k, v = self.W_q(x_q), self.W_k(x_kv), self.W_v(x_kv)
        for tensor in [q, k, v]:
            assert tensor.shape[-1] % self.n_heads == 0
        if key_compression is not None:
            assert value_compression is not None
            k = key_compression(k.transpose(1, 2)).transpose(1, 2)
            v = value_compression(v.transpose(1, 2)).transpose(1, 2)
        else:
            assert value_compression is None

        batch_size = len(q)
        d_head_key = k.shape[-1] // self.n_heads
        d_head_value = v.shape[-1] // self.n_heads
        n_q_tokens = q.shape[1]

        q = self._reshape(q)
        k = self._reshape(k)
        attention_logits = q @ k.transpose(1, 2) / math.sqrt(d_head_key)
        attention_probs = F.softmax(attention_logits, dim=-1)
        if self.dropout is not None:
            attention_probs = self.dropout(attention_probs)
        x = attention_probs @ self._reshape(v)
        x = (
            x.reshape(batch_size, self.n_heads, n_q_tokens, d_head_value)
            .transpose(1, 2)
            .reshape(batch_size, n_q_tokens, self.n_heads * d_head_value)
        )
        if self.W_out is not None:
            x = self.W_out(x)
        return x, {
            'attention_logits': attention_logits,
            'attention_probs': attention_probs,
        }

This multi-head attention definition is the same as the original, except for one extra step: the attention tensors are returned alongside the output. The attention_probs variable is what will later be used as the attention-map importance measure.

class SaveAttentionMaps:
    def __init__(self):
        self.attention_maps = None
        #self.n_batches = 0

    def __call__(self, _, __, output):
        if self.attention_maps is None:
            self.attention_maps = output[1]['attention_probs'].detach().cpu().sum(0)
        else:
            self.attention_maps+=output[1]['attention_probs'].detach().cpu().sum(0)


def get_feat_importance_attention(net, testloader, device):
    net.eval()
    hook = SaveAttentionMaps()
    for block in net.layers:
        block['attention'].register_forward_hook(hook)

    for batch_idx, (inputs_num, inputs_cat, targets) in enumerate(testloader):
        inputs_num, inputs_cat, targets = inputs_num.to(device).float(), inputs_cat.to(device), targets.to(device)
        inputs_num, inputs_cat = inputs_num if inputs_num.nelement() != 0 else None, \
                                 inputs_cat if inputs_cat.nelement() != 0 else None

        net(inputs_num, inputs_cat)

    n_blocks = len(net.layers)
    n_objects = len(testloader.dataset)
    n_heads = net.layers[0]['attention'].n_heads
    n_features = inputs_num.shape[1]
    n_tokens = n_features + 1
    attention_maps = hook.attention_maps
    average_attention_map = attention_maps/(n_objects*n_blocks*n_heads)
    assert attention_maps.shape == (n_tokens, n_tokens)

    # Calculate feature importance and ranks.
    average_cls_attention_map = average_attention_map[0]  # consider only the [CLS] token
    feature_importance = average_cls_attention_map[1:]  # drop the [CLS] token importance
    assert feature_importance.shape == (n_features,)

    feature_ranks = scipy.stats.rankdata(-feature_importance.numpy())
    feature_indices_sorted_by_importance = feature_importance.argsort(descending=True).numpy()
    return average_cls_attention_map, feature_importance, feature_ranks, feature_indices_sorted_by_importance

The register_forward_hook method registers a function that is called every time the module finishes a forward pass. Here a SaveAttentionMaps class is constructed; each time the hook fires it accumulates attention_probs, and the sum is later divided by n_objects * n_blocks * n_heads to get an average. Finally, the row of the averaged attention map corresponding to the [CLS] token (with the CLS entry itself dropped) is taken as feature_importance, and sorting it gives the per-feature ranks.

(Is the average_cls_attention_map part perhaps a mistake?)
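For readers unfamiliar with forward hooks, here is a minimal illustration unrelated to the repository's model:

import torch
import torch.nn as nn

outputs_seen = []

def save_output(module, inputs, output):
    # A forward hook receives (module, inputs, output) after every forward pass of the module.
    outputs_seen.append(output.detach())

layer = nn.Linear(4, 2)
handle = layer.register_forward_hook(save_output)

layer(torch.randn(3, 4))   # triggers the hook
layer(torch.randn(5, 4))   # triggers it again
print(len(outputs_seen))   # 2
handle.remove()            # remove the hook when it is no longer needed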

def get_feat_importance_deeplasso(net, testloader, criterion, device):
    net.eval()
    grads = []
    for batch_idx, (inputs_num, inputs_cat, targets) in enumerate(testloader):
        inputs_num, inputs_cat, targets = inputs_num.to(device).float(), inputs_cat.to(device), targets.to(device)
        inputs_num, inputs_cat = inputs_num if inputs_num.nelement() != 0 else None, \
                                    inputs_cat if inputs_cat.nelement() != 0 else None
        inputs_num.requires_grad_()
        outputs = net(inputs_num, inputs_cat)
        loss = criterion(outputs, targets)
        grad_params = autograd.grad(loss, inputs_num, create_graph=True, allow_unused=True)
        grads.append(grad_params[0].detach().cpu())

    grads = torch.cat(grads)
    importances = grads.pow(2).sum(dim=0).pow(1/2.)
    return importances


def get_feat_importance_lasso(net):
    importances = net.module.head.weight.detach().cpu().pow(2).sum(dim=0).pow(1 / 2.)
    return importances

def get_feat_importance_firstlasso(net):
    importances = net.module.layers[0].weight.detach().cpu().pow(2).sum(dim=0).pow(1 / 2.)
    return importances

For Lasso and 1L Lasso, feature importance is the per-feature L2 norm of the weights in the last layer and the first layer, respectively; for Deep Lasso, importance is obtained by running a forward pass over the held-out loader and computing the L2 norm of the loss gradients with respect to the inputs. Note that inputs_cat is already constructed as a torch.empty placeholder when a dataset has no categorical features, so calling .to(device) on it does not raise an error.

3. Warm-up

I had not looked into warm-up before, so only a brief account is given here:

Early in training, the outputs produced by the randomly initialized weights are usually far from the targets, so the loss is large and the backpropagated gradients are large as well. If a large learning rate is used at this stage, optimization "overshoots".

A learning rate that is too large easily skips over the optimum and makes the loss oscillate, while a learning rate that is too small avoids oscillation and overshooting but makes training very slow; learning-rate decay alone cannot resolve this tension. The idea of warm-up is therefore to start with a small learning rate, increase it once the gradients have settled down, and only then let it decay according to the usual schedule.

""" warmup.py
    code for warmup learning rate scheduler
    borrowed from https://github.com/ArneNx/pytorch_warmup/tree/warmup_fix
    and modified July 2020
"""
import math
from torch.optim import Optimizer
class BaseWarmup:
    """Base class for all warmup schedules

    Arguments:
        optimizer (Optimizer): an instance of a subclass of Optimizer
        warmup_params (list): warmup paramters
        last_step (int): The index of last step. (Default: -1)
        warmup_period (int or list): Warmup period
    """

    def __init__(self, optimizer, warmup_params, last_step=-1, warmup_period=0):
        if not isinstance(optimizer, Optimizer):
            raise TypeError('{} is not an Optimizer'.format(
                type(optimizer).__name__))
        self.optimizer = optimizer
        self.warmup_params = warmup_params
        self.last_step = last_step
        self.base_lrs = [group['lr'] for group in self.optimizer.param_groups]
        self.warmup_period = warmup_period
        self.dampen()

    def state_dict(self):
        """Returns the state of the warmup scheduler as a :class:`dict`.

        It contains an entry for every variable in self.__dict__ which
        is not the optimizer.
        """
        return {key: value for key, value in self.__dict__.items() if key != 'optimizer'}

    def load_state_dict(self, state_dict):
        """Loads the warmup scheduler's state.

        Arguments:
            state_dict (dict): warmup scheduler state. Should be an object returned
                from a call to :meth:`state_dict`.
        """
        self.__dict__.update(state_dict)

    def dampen(self, step=None):
        """Dampen the learning rates.

        Arguments:
            step (int): The index of current step. (Default: None)
        """
        if step is None:
            step = self.last_step + 1
        self.last_step = step
        if isinstance(self.warmup_period, int) and step < self.warmup_period:
            for i, (group, params) in enumerate(zip(self.optimizer.param_groups,
                                                    self.warmup_params)):
                if isinstance(self.warmup_period, list) and step >= self.warmup_period[i]:
                    continue
                omega = self.warmup_factor(step, **params)
                group['lr'] = omega * self.base_lrs[i]

    def warmup_factor(self, step, warmup_period):
        """Place holder for objects that inherit BaseWarmup."""
        raise NotImplementedError


def get_warmup_params(warmup_period, group_count):
    if type(warmup_period) == list:
        if len(warmup_period) != group_count:
            raise ValueError(
                'size of warmup_period does not equal {}.'.format(group_count))
        for x in warmup_period:
            if type(x) != int:
                raise ValueError(
                    'An element in warmup_period, {}, is not an int.'.format(
                        type(x).__name__))
        warmup_params = [dict(warmup_period=x) for x in warmup_period]
    elif type(warmup_period) == int:
        warmup_params = [dict(warmup_period=warmup_period)
                         for _ in range(group_count)]
    else:
        raise TypeError('{} is not a list nor an int.'.format(
            type(warmup_period).__name__))
    return warmup_params


class LinearWarmup(BaseWarmup):
    """Linear warmup schedule.

    Arguments:
        optimizer (Optimizer): an instance of a subclass of Optimizer
        warmup_period (int or list): Warmup period
        last_step (int): The index of last step. (Default: -1)
    """

    def __init__(self, optimizer, warmup_period, last_step=-1):
        group_count = len(optimizer.param_groups)
        warmup_params = get_warmup_params(warmup_period, group_count)
        super().__init__(optimizer, warmup_params, last_step, warmup_period)

    def warmup_factor(self, step, warmup_period):
        return min(1.0, (step+1) / warmup_period)


class ExponentialWarmup(BaseWarmup):
    """Exponential warmup schedule.

    Arguments:
        optimizer (Optimizer): an instance of a subclass of Optimizer
        warmup_period (int or list): Effective warmup period
        last_step (int): The index of last step. (Default: -1)
    """

    def __init__(self, optimizer, warmup_period, last_step=-1):
        group_count = len(optimizer.param_groups)
        warmup_params = get_warmup_params(warmup_period, group_count)
        super().__init__(optimizer, warmup_params, last_step, warmup_period)

    def warmup_factor(self, step, warmup_period):
        if step + 1 >= warmup_period:
            return 1.0
        else:
            return 1.0 - math.exp(-(step+1) / warmup_period)

In the code, two warm-up schedulers are defined, LinearWarmup and ExponentialWarmup; after each training epoch, the dampen function is called to update the learning rate stored in the optimizer.

The warm-up classes take the following arguments:

optimizer: the optimizer whose learning rates will be updated.

warmup_params: in this project these are always produced by get_warmup_params, which simply packages warmup_period into a list of one-entry dicts (one per parameter group) that dampen later unpacks.

last_step: the index of the previous step; it is incremented by one each time dampen runs. Once it reaches warmup_period, the learning rate is no longer rescaled.

warmup_period: the length of the warm-up; once the number of training epochs exceeds this value, warm-up stops.

The core of warm-up is computing a factor omega and multiplying the base learning rate by it; the different warm-up classes differ only in how omega is computed.

LinearWarmup increases omega linearly with the training step, while ExponentialWarmup lets omega approach 1 along an exponential saturation curve. A minimal usage sketch follows.
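A minimal usage sketch, assuming the LinearWarmup class defined above is available and an epoch-based loop like the repository's (the model here is just a placeholder):

import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
warmup = LinearWarmup(optimizer, warmup_period=10)   # ramp the learning rate up over 10 epochs

for epoch in range(20):
    # ... one epoch of training with the current learning rate ...
    warmup.dampen()   # rescales each param group's lr by omega = min(1, (step+1)/warmup_period)
    print(epoch, optimizer.param_groups[0]["lr"])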

IV. Results

For random noise features, with MLP downstream, XGBoost, random forest, univariate statistical tests, and Deep Lasso perform comparably, while first-layer Lasso, Lasso regression, AGL, LassoNet, and Attention Map are weaker. With FT-Transformer downstream, feature selection by random forest and XGBoost is better than the other upstream models.

For corrupted features, Deep Lasso and XGBoost outperform the other methods: XGBoost does better when the downstream model is FT-Transformer, while Deep Lasso does better when the downstream model is MLP.

Finally, for second-order features, Deep Lasso is clearly better than the other upstream selectors, especially when 75% of the features are extra (see the Rank column in the paper's tables). The authors take this as evidence that Deep Lasso excels on the harder feature-selection problems involving a large number of spurious or redundant features.

V. Reproduction

1. Adding LassoNet

LassoNet is added as another feature-selection method. Note that the packaged LassoNet cannot actually handle categorical features (the model has no embedding step), and LassoNet adds an extra outer loop over the hyperparameter M and the lambda path, so training time grows quickly when there are many settings to try. Only numerical features are considered here.

First, add the LassoNet branches in train_classical.py:

...
elif cfg.model.name == 'lassonet':
	model = LassoNetRegressor(hidden_dims=(cfg.model.hidden_dims,),
					 M=cfg.model.M,
					 lambda_start="auto"
					 )
...

elif cfg.model.name == 'lassonet':
	model = LassoNetClassifier(hidden_dims=(cfg.model.hidden_dims,),
					 M=cfg.model.M,
					 lambda_start="auto"
					 )

...
# model fitting
elif cfg.model.name == "lassonet":
	model.path(X_train, y_train)

...
# feature importances of the model
elif cfg.model.name == "lassonet":
	importances = np.abs(model.feature_importances_.cpu().numpy())

Next, add the LassoNet hyperparameter handling in tune_full_pipeline.py and run_full_pipeline.py.

#tune_full_pipeline.py
...

if model == "lassonet":
	hidden_dims = trial.suggest_int("hidden_dims", 1, 512)
	M = trial.suggest_int("M",10,50)
	model_params = {
		'hidden_dims': hidden_dims,
		"M":M
		}
	training_params = {}
...
# run_full_pipeline.py
...

if model == "lassonet":
	model_params = {
		'hidden_dims': hypers_dict[f"hidden_dims"],
		"M":hypers_dict[f"M"]
	}
	training_params = {
		}
...

Finally, yaml config files need to be defined under the config/hyp and config/model folders.

hyp_for_lassonet.yaml:

save_period: -1
seed: 1

lassonet.yaml:

name: lassonet
hidden_dims: 20
M: 20

2. Reproduction results

python tune_full_pipeline.py model=mlp model_downstream=mlp dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5
python tune_full_pipeline.py model=mlp model_downstream=mlp dataset=california_housing name=1l_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 hyp.regularization='first_lasso' topk=0.5
python tune_full_pipeline.py model=xgboost model_downstream=mlp dataset=california_housing name=xgboost_mlp hyp=hyp_for_xgboost hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 topk=0.5



python tune_full_pipeline.py model=mlp model_downstream=mlp dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5
python tune_full_pipeline.py model=mlp model_downstream=mlp dataset=california_housing name=1l_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 hyp.regularization='first_lasso' topk=0.5
python tune_full_pipeline.py model=xgboost model_downstream=mlp dataset=california_housing name=xgboost_mlp hyp=hyp_for_xgboost hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 topk=0.5


python tune_full_pipeline.py model=mlp model_downstream=mlp dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5
python tune_full_pipeline.py model=mlp model_downstream=mlp dataset=california_housing name=1l_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 hyp.regularization='first_lasso' topk=0.5
python tune_full_pipeline.py model=xgboost model_downstream=mlp dataset=california_housing name=xgboost_mlp hyp=hyp_for_xgboost hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 topk=0.5

python tune_full_pipeline.py model=mlp model_downstream=xgboost dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_xgboost dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5
python tune_full_pipeline.py model=mlp model_downstream=xgboost dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_xgboost dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5
python tune_full_pipeline.py model=mlp model_downstream=xgboost dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_xgboost dataset.add_noise=random_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5



python tune_full_pipeline.py model=ft_transformer_attention_map model_downstream=ft_transformer dataset=california_housing name=am_fttransformer hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 topk=0.5
python tune_full_pipeline.py model=ft_transformer_attention_map model_downstream=ft_transformer dataset=california_housing name=am_fttransformer hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 topk=0.5
python tune_full_pipeline.py model=ft_transformer_attention_map model_downstream=ft_transformer dataset=california_housing name=am_fttransformer hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 topk=0.5

python tune_full_pipeline.py model=lassonet model_downstream=mlp dataset=california_housing name=lassonet hyp=hyp_for_lassonet hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 topk=0.5
python tune_full_pipeline.py model=lassonet model_downstream=mlp dataset=california_housing name=lassonet hyp=hyp_for_lassonet hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 topk=0.5
python tune_full_pipeline.py model=lassonet model_downstream=mlp dataset=california_housing name=lassonet hyp=hyp_for_lassonet hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 topk=0.5

python run_full_pipeline.py --multirun model=mlp model_downstream=mlp dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=mlp model_downstream=mlp dataset=california_housing name=1l_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 hyp.regularization='first_lasso' topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=xgboost model_downstream=mlp dataset=california_housing name=xgboost_mlp hyp=hyp_for_xgboost hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9


python run_full_pipeline.py --multirun model=mlp model_downstream=mlp dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=mlp model_downstream=mlp dataset=california_housing name=1l_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 hyp.regularization='first_lasso' topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=xgboost model_downstream=mlp dataset=california_housing name=xgboost_mlp hyp=hyp_for_xgboost hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9


python run_full_pipeline.py --multirun model=mlp model_downstream=mlp dataset=california_housing name=deep_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 hyp.regularization='deep_lasso' topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=mlp model_downstream=mlp dataset=california_housing name=1l_lasso_mlp hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 hyp.regularization='first_lasso' topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=xgboost model_downstream=mlp dataset=california_housing name=xgboost_mlp hyp=hyp_for_xgboost hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9


python run_full_pipeline.py --multirun model=ft_transformer_attention_map model_downstream=ft_transformer dataset=california_housing name=am_fttransformer hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=ft_transformer_attention_map model_downstream=ft_transformer dataset=california_housing name=am_fttransformer hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=ft_transformer_attention_map model_downstream=ft_transformer dataset=california_housing name=am_fttransformer hyp=hyp_for_neural_network hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9

python run_full_pipeline.py --multirun model=lassonet model_downstream=mlp dataset=california_housing name=lassonet hyp=hyp_for_lassonet hyp_downstream=hyp_for_neural_network dataset.add_noise=random_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=lassonet model_downstream=mlp dataset=california_housing name=lassonet hyp=hyp_for_lassonet hyp_downstream=hyp_for_neural_network dataset.add_noise=corrupted_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9
python run_full_pipeline.py --multirun model=lassonet model_downstream=mlp dataset=california_housing name=lassonet hyp=hyp_for_lassonet hyp_downstream=hyp_for_neural_network dataset.add_noise=secondorder_feats dataset.noise_percent=0.5 topk=0.5 hyp.seed=0,1,2,3,4,5,6,7,8,9


The final results are shown in the table below:

run                 noise_corrupted_feats   noise_random_feats   noise_secondorder_feats
1l_lasso_mlp        -0.443401089            -0.444775664         -0.442463761
am_fttransformer    -0.421166045            -0.425570707         -0.425962898
deep_lasso_mlp      -0.453364613            -0.448686116         -0.439467445
lassonet_mlp        -0.448687207            -0.456497796         -0.449663611
xgboost_mlp         -0.452297124            -0.449051298         -0.456485655

I ran the reproduction on the California Housing dataset. Among the runs, the one with FT-Transformer downstream appears to do better than the others, but since its upstream and downstream model types differ from the rest, the FT_Transformer row cannot really be compared directly with the other experiments.

With MLP as the downstream model, first-layer Lasso outperforms the other selection methods under corrupted-feature and random-feature noise, while for second-order features Deep Lasso performs best.
