Normalized Softmax Loss

Contents

19-BMVC-Classification is a Strong Baseline for Deep Metric Learning

Normalized Softmax Loss

Layer Normalization

Class-balanced sampling

binary embeddings


19-BMVC-Classification is a Strong Baseline for Deep Metric Learning

1) we establish that classification is a strong baseline for deep metric learning across different datasets, base feature networks and embedding dimensions,

2) we provide insights into the performance effects of binarization and subsampling classes for scalable extreme classification-based training,

3) we propose a classification-based approach to learn high-dimensional binary embeddings.

Normalized Softmax Loss

When the class weights are treated as proxies and cosine distance is used, the normalized softmax loss fits the proxy paradigm.

  • Remove the bias of the final linear layer. The weight matrix keeps nn.Linear's default initialization (see the code below).
  • Both the input x and the class weights p are L2-normalized (since the logits are cosine similarities).
  • Temperature scaling: a classic probability-calibration technique. The temperature σ sharpens the softmax, amplifying inter-class differences and improving accuracy. See the loss formula below.
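For reference, with an L2-normalized embedding x, L2-normalized class weights p_z (one per class z in the class set Z), ground-truth class y, and temperature σ, the normalized softmax loss can be written as:

$$ \mathcal{L}_{\text{norm}} = -\log \frac{\exp(x^{\top} p_y / \sigma)}{\sum_{z \in Z} \exp(x^{\top} p_z / \sigma)} $$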
 

 

import math

import torch
import torch.nn as nn
from torch.nn import Parameter


class NormSoftmaxLoss(nn.Module):
    """
    L2 normalize weights and apply temperature scaling on logits.
    """
    def __init__(self,
                 dim,
                 num_instances,
                 temperature=0.05):
        super(NormSoftmaxLoss, self).__init__()

        # Class-weight matrix only: the bias of the final linear layer is removed
        self.weight = Parameter(torch.Tensor(num_instances, dim))
        # Initialization from nn.Linear (https://github.com/pytorch/pytorch/blob/v1.0.0/torch/nn/modules/linear.py#L129)
        stdv = 1. / math.sqrt(self.weight.size(1))
        self.weight.data.uniform_(-stdv, stdv)

        self.temperature = temperature
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, embeddings, instance_targets):
        # L2-normalize the class weights (proxies); embeddings are assumed to be L2-normalized already
        norm_weight = nn.functional.normalize(self.weight, dim=1)

        # cosine similarities between embeddings and class weights
        prediction_logits = nn.functional.linear(embeddings, norm_weight)

        # divide by the temperature to sharpen the distribution before cross entropy
        loss = self.loss_fn(prediction_logits / self.temperature, instance_targets)
        return loss
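A minimal usage sketch (the shapes, class count, and batch size are illustrative; the embeddings are expected to be L2-normalized before being passed in):

criterion = NormSoftmaxLoss(dim=512, num_instances=100, temperature=0.05)

embeddings = nn.functional.normalize(torch.randn(8, 512), dim=1)  # L2-normalized embeddings
targets = torch.randint(0, 100, (8,))                             # class labels

loss = criterion(embeddings, targets)
loss.backward()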

CrossEntropyLoss = LogSoftmax + NLLLoss

Computing the softmax involves exponentials, which can overflow and produce NaNs. Cross entropy for classification also requires a log; fusing the log with the softmax (log-softmax) avoids this numerical problem.

NLLLoss: multiply the log-probabilities by the one-hot label (i.e., pick the entry at the target index), negate, and average over the batch.
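A quick numerical check of this decomposition (random values, for illustration only):

import torch
import torch.nn as nn

logits = torch.randn(4, 10)            # batch of 4, 10 classes
targets = torch.randint(0, 10, (4,))

ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)

print(torch.allclose(ce, nll))  # True: CrossEntropyLoss = LogSoftmax + NLLLoss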

The paper also mentions the Large Margin Cosine Loss (LMCL): "CosFace: Large Margin Cosine Loss for Deep Face Recognition", 2018, Hao Wang et al., Tencent AI Lab.

Layer Normalization

Layer normalization gives the embeddings a distribution centered at zero. This:

  1. makes it easy to binarize embeddings via thresholding at zero;
  2. helps the network better initialize new parameters and reach better optima.

BatchNorm: normalizes each feature across the samples in a batch.
LayerNorm: normalizes all the features within each individual sample.

Reference: "The difference between BatchNorm and LayerNorm" (DataAlgo, CSDN blog)

self.standardize = nn.LayerNorm(2048, elementwise_affine=False)  # normalized_shape = 2048 (backbone feature dim), no learnable affine parameters
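A sketch of how this fits into the embedding head, assuming the structure described in the paper (backbone features → LayerNorm → linear projection → L2 normalization); the names EmbeddingHead, backbone_dim, and embedding_dim are illustrative, not the repository's exact API:

import torch
import torch.nn as nn

class EmbeddingHead(nn.Module):
    """Illustrative embedding head: LayerNorm -> Linear -> L2 normalize."""
    def __init__(self, backbone_dim=2048, embedding_dim=512):
        super(EmbeddingHead, self).__init__()
        # zero-center the backbone features, no learnable scale/shift
        self.standardize = nn.LayerNorm(backbone_dim, elementwise_affine=False)
        # project down to the embedding dimension
        self.remap = nn.Linear(backbone_dim, embedding_dim, bias=False)

    def forward(self, features):
        x = self.standardize(features)
        x = self.remap(x)
        # L2-normalize so the classifier logits are cosine similarities
        return nn.functional.normalize(x, dim=1)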

Class-balanced sampling

Each batch samples c classes and s examples per class.

This alleviates the problem that the loss is bounded by the worst approximating example within a class (17-ICCV-No Fuss Distance Metric Learning using Proxies).

Subsampling: sample only a subset of the classes instead of using all of them.
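A minimal sketch of a class-balanced batch sampler under these rules (my own illustration, not the authors' code; the names class_balanced_batch, classes_per_batch, and samples_per_class are assumed):

import random
from collections import defaultdict

def class_balanced_batch(labels, classes_per_batch=32, samples_per_class=4):
    """Return indices for one batch: c classes, s samples per class."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)

    # subsample c classes instead of using all of them
    chosen_classes = random.sample(list(by_class), classes_per_batch)

    batch = []
    for c in chosen_classes:
        pool = by_class[c]
        # sample s examples of this class (with replacement if the class is small)
        take = random.choices(pool, k=samples_per_class) if len(pool) < samples_per_class \
            else random.sample(pool, samples_per_class)
        batch.extend(take)
    return batch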

binary embeddings

Binarized embeddings: threshold the real-valued embeddings at zero to obtain binary codes.

Hamming distance: no inner products are needed, which lowers the computational cost.

It is the popcount of the XOR of two binary codes; e.g., the Hamming distance between 1011101 and 1001001 (binary) is 2. For binary vectors it equals the squared Euclidean distance, so Euclidean kNN produces the same ranking as Hamming distance.
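A small numerical check of this equivalence (illustrative only):

import numpy as np

a = np.array([1, 0, 1, 1, 1, 0, 1])
b = np.array([1, 0, 0, 1, 0, 0, 1])

hamming = int(np.count_nonzero(a != b))   # popcount of XOR
sq_euclidean = int(np.sum((a - b) ** 2))  # squared Euclidean distance

print(hamming, sq_euclidean)  # 2 2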

# binarize by thresholding at zero
binary_query_embeddings = np.require(query_embeddings > 0, dtype='float32')
binary_db_embeddings = np.require(db_embeddings > 0, dtype='float32')
# knn retrieval from embeddings (binary embeddings + euclidean = hamming distance)
dists, retrieved_result_indices = _retrieve_knn_faiss_gpu_euclidean(binary_query_embeddings,
                                                                    binary_db_embeddings,
                                                                    k,
                                                                    gpu_id=gpu_id)

Code: GitHub - azgo14/classification_metric_learning
