A Python implementation of Complement Entropy Loss, following the ICLR 2019 paper COMPLEMENT OBJECTIVE TRAINING

import numpy as np


def zero_hot(labels_dense, num_classes):
    """Convert class labels from scalars to zero-hot vectors:
    all ones, except a zero at the ground-truth class index."""

    num_labels = labels_dense.shape[0]
    index_offset = np.arange(num_labels) * num_classes
    labels_zero_hot = np.ones((num_labels, num_classes))
    labels_zero_hot.flat[index_offset + labels_dense.ravel()] = 0
    return labels_zero_hot
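A quick check of the mask this produces, with the indexing logic inlined so the snippet runs standalone (the labels `[0, 2]` over 3 classes are made-up example values):

```python
import numpy as np

# Inlined zero_hot logic for labels [0, 2] over 3 classes
labels = np.array([0, 2])
num_classes = 3
mask = np.ones((labels.shape[0], num_classes))
mask.flat[np.arange(labels.shape[0]) * num_classes + labels.ravel()] = 0
print(mask)
# [[0. 1. 1.]
#  [1. 1. 0.]]
```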




def complement_entropy_loss(labels, predictions, num_classes):
    """Complement entropy loss (COT, ICLR 2019): pushes the predicted
    probabilities of the incorrect (complement) classes toward a flat
    distribution.

    labels:      (N,) integer class labels
    predictions: (N, num_classes) softmax probabilities
    """
    num_labels = labels.shape[0]

    # Probability assigned to the ground-truth class of each sample
    temp = []
    for i in range(num_labels):
        temp.append(np.take(predictions[i], labels[i]))

    # 1 - P(ground truth), with a small epsilon for numerical stability
    temp = (1.0 - np.array(temp)) + 1e-7

    temp2 = np.reshape(temp, (-1, 1))

    # Normalized complement distribution: Px_j = p_j / (1 - p_g)
    Px = predictions / temp2

    Px_log = np.log(Px + 1e-10)

    output = Px * Px_log

    # Mask out the ground-truth class so only complement terms remain
    y_zerohot = zero_hot(labels, num_classes)

    output = output * y_zerohot

    # Average over samples and classes
    loss = np.sum(output)
    loss /= float(num_labels)

    loss /= float(num_classes)

    return loss
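To see what the loss measures, here is the per-sample term worked out by hand for a single hypothetical prediction (the probability values are made up for illustration):

```python
import numpy as np

# One sample over 3 classes, ground-truth class g = 0
p = np.array([0.7, 0.2, 0.1])
g = 0

# Normalized complement distribution: Px_j = p_j / (1 - p_g)
Px = p / (1.0 - p[g])  # complement entries become [2/3, 1/3]

# Per-sample complement term: sum over j != g of Px_j * log(Px_j)
term = sum(Px[j] * np.log(Px[j]) for j in range(3) if j != g)
print(term)  # negative entropy of [2/3, 1/3], about -0.6365
```

The term is the negative entropy of the normalized complement distribution, so minimizing it flattens the probabilities over the incorrect classes, which is exactly the complement objective the paper proposes.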
    
    
    
    

This post is based on the ICLR 2019 paper COMPLEMENT OBJECTIVE TRAINING. Paper link: https://arxiv.org/abs/1903.01182. Torch implementation: https://github.com/henry8527/COT
