Paper Reading [TPAMI-2022] Average Top-k Aggregate Loss for Supervised Learning

Paper search (studyai.com)

Search query: Average Top-k Aggregate Loss for Supervised Learning

Search link: http://www.studyai.com/search/whole-site/?q=Average+Top-k+Aggregate+Loss+for+Supervised+Learning

Keywords

Aggregates; Training; Training data; Supervised learning; Data models; Loss measurement; Task analysis; Aggregate loss; average top-k loss; supervised learning; learning theory

Machine learning; Computer vision

Supervised learning; Image classification; SVM

Abstract

In this work, we introduce the average top-$k$ ($\mathrm{AT}_k$) loss, which is the average of the $k$ largest individual losses over the training data, as a new aggregate loss for supervised learning.
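Written out as a formula (the sorted-loss notation $\ell_{[1]} \ge \cdots \ge \ell_{[n]}$ is my shorthand for the individual losses arranged in decreasing order, not notation taken from the paper), the definition quoted above reads:

```latex
% AT_k aggregate loss: average of the k largest individual losses
\mathrm{AT}_k\bigl(\ell_1, \dots, \ell_n\bigr) \;=\; \frac{1}{k} \sum_{i=1}^{k} \ell_{[i]},
\qquad \ell_{[1]} \ge \ell_{[2]} \ge \cdots \ge \ell_{[n]}.
```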

We show that the $\mathrm{AT}_k$ loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss.
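As a quick numerical illustration of this generalization (a minimal NumPy sketch; the function name is mine): setting $k=1$ recovers the maximum loss and $k=n$ recovers the average loss.

```python
import numpy as np

def atk_loss(individual_losses, k):
    """Average top-k (AT_k) loss: mean of the k largest individual losses."""
    losses = np.asarray(individual_losses, dtype=float)
    top_k = np.sort(losses)[-k:]  # the k largest individual losses
    return top_k.mean()

losses = np.array([0.1, 2.0, 0.5, 1.2, 0.05])
print(atk_loss(losses, k=1))            # equals losses.max()  -> maximum loss
print(atk_loss(losses, k=len(losses)))  # equals losses.mean() -> average loss
print(atk_loss(losses, k=2))            # in between: mean of the two largest losses
```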

Yet, the $\mathrm{AT}_k$ loss can better adapt to different data distributions because of the extra flexibility provided by the different choices of $k$.

Furthermore, it remains a convex function over all individual losses and can be combined with different types of individual loss without a significant increase in computation.
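One standard way to see both claims is the classical identity for the sum of the $k$ largest elements (the authors' earlier work on the $\mathrm{AT}_k$ loss uses a reformulation of this kind; whether the TPAMI version states it exactly this way is my assumption):

```latex
% [x]_+ = max(x, 0); lambda is an auxiliary scalar variable
\sum_{i=1}^{k} \ell_{[i]}
  \;=\; \min_{\lambda \in \mathbb{R}} \Bigl\{ k\lambda + \sum_{i=1}^{n} \bigl[\ell_i - \lambda\bigr]_+ \Bigr\}
\quad\Longrightarrow\quad
\mathrm{AT}_k \;=\; \min_{\lambda \in \mathbb{R}} \Bigl\{ \lambda + \frac{1}{k} \sum_{i=1}^{n} \bigl[\ell_i - \lambda\bigr]_+ \Bigr\}.
```

The objective on the right is jointly convex in $(\ell_1, \dots, \ell_n, \lambda)$, and minimizing over $\lambda$ preserves convexity, which is why the $\mathrm{AT}_k$ loss stays convex in the individual losses and only adds hinge-like terms on top of whatever individual loss is used.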

We then provide interpretations of the $\mathrm{AT}_k$ loss from the perspectives of the modification of the individual loss and robustness to training data distributions.

We further study the classification calibration of the $\mathrm{AT}_k$ loss and the error bounds of the $\mathrm{AT}_k$-SVM model.

We demonstrate the applicability of minimum average top-$k$ learning for supervised learning problems, including binary/multi-class classification and regression, using experiments on both synthetic and real datasets…

Authors

Siwei Lyu, Yanbo Fan, Yiming Ying, Bao-Gang Hu
