Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics

Motivation:

In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task’s loss. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task.
【CC】This is the classic paper on optimising a multi-task objective, and it earned that status through a thorough yet simple mathematical derivation, which is why it contains so many formulas; the derivation also makes the method easy to apply in engineering. The authors observe that the performance of a multi-task network depends strongly on the relative weight of each sub-task, and this paper seeks a theoretically grounded way to solve for those weights, using what is called homoscedastic uncertainty.

Multi-task learning aims to improve learning efficiency and prediction accuracy by learning multiple objectives from a shared representation. It can be considered an approach to inductive knowledge transfer which improves generalisation by sharing the domain information between complementary tasks. It does this by using a shared representation to learn multiple tasks – what is learned from one task can help learn other tasks.
【CC】Multi-task learning boils down to one shared representation layer plus a set of task-specific heads, which is by now the standard setup; a minimal sketch follows below. Sharing knowledge across tasks this way improves generalisation, so each individual task also tends to perform better. That is only the intuitive explanation; some papers have studied mathematically where exactly this kind of sharing helps the optimisation.
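To make the shared-representation-plus-heads pattern concrete, here is a minimal PyTorch sketch; the module names, layer sizes, and the two example heads (semantic classes, depth regression) are illustrative assumptions, not taken from the paper:

```python
import torch.nn as nn

class SharedTrunkMultiTask(nn.Module):
    """One shared encoder ("trunk") feeding several task-specific heads."""

    def __init__(self, in_dim=128, hidden=256, num_classes=10):
        super().__init__()
        # Shared representation learned jointly across all tasks
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Lightweight per-task heads on top of the shared features
        self.seg_head = nn.Linear(hidden, num_classes)  # classification
        self.depth_head = nn.Linear(hidden, 1)          # regression

    def forward(self, x):
        z = self.trunk(x)  # features shared by every task
        return self.seg_head(z), self.depth_head(z)
```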

Scene understanding algorithms must understand both the geometry and semantics of the scene at the same time. This forms an interesting multi-task learning problem because scene understanding involves joint learning of various regression and classification tasks with different units and scales.
【CC】Scene understanding is used as the example because it is a textbook multi-task problem: it mixes regression and classification tasks with different units and scales. It is introduced here to set up the loss-function derivation and the network-architecture design later in the paper.

Approach:

We interpret homoscedastic uncertainty as task-dependent weighting and show how to derive a principled multi-task loss function which can learn to balance various regression and classification losses.
【CC】The thesis in one line: use the homoscedastic uncertainty of the data as the sub-task weights. Let me shuffle the order a little: set the theoretical framing aside, walk through the derivation first, and come back to this statement afterwards; it will be easier to understand.

Formal derivation:

Multi-task learning concerns the problem of optimising a model with respect to multiple objectives. The naive approach to combining multi objective losses would be to simply perform a weighted linear sum of the losses for each individual task:
$$L_{\text{total}} = \sum_i w_i L_i$$
【CC】The total loss is simply a linear combination of the per-task losses; a minimal sketch of this baseline follows below. As an aside: could the weights $w_i$ be learned by an MLP instead?
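A minimal sketch of that naive baseline, assuming hand-picked weights (the function name and values here are illustrative, not from the paper):

```python
def weighted_sum_loss(losses, weights):
    """Naive multi-task loss: a fixed linear combination of per-task losses.

    losses  -- list of scalar task losses L_i (tensors)
    weights -- list of hand-picked floats w_i; these are exactly the
               hyperparameters the paper argues are hard to tune
    """
    return sum(w * L for w, L in zip(weights, losses))

# Usage: total = weighted_sum_loss([seg_loss, depth_loss], [1.0, 0.5])
```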

In this section we derive a multi-task loss function based on maximising the Gaussian likelihood with homoscedastic uncertainty. Let $f^{W}(x)$ be the output of a neural network with weights $W$ on input $x$.
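The derivation is truncated in this excerpt, but it ends with each task weighted by a learned noise parameter: for a regression task with a Gaussian likelihood the contribution becomes $\frac{1}{2\sigma_i^2} L_i + \log \sigma_i$, so a large $\sigma_i$ down-weights a noisy task while the $\log \sigma_i$ term keeps $\sigma_i$ from growing without bound. Below is a minimal PyTorch sketch of that result using the common log-variance parameterisation $s_i = \log \sigma_i^2$; the class name and the folding of constant factors into $L_i$ are my own assumptions, not from the paper.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Homoscedastic-uncertainty task weighting (sketch, not the paper's code).

    Learns s_i = log(sigma_i^2) per task and combines losses as
        total = sum_i ( exp(-s_i) * L_i + s_i ),
    which equals twice the regression term L_i / (2*sigma_i^2) + log(sigma_i)
    and therefore has the same minimiser.
    """

    def __init__(self, num_tasks):
        super().__init__()
        # s_i = log(sigma_i^2), initialised to 0, i.e. sigma_i = 1
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        total = 0.0
        for s, L in zip(self.log_vars, losses):
            # exp(-s) plays the role of 1/sigma^2; +s penalises large sigma
            total = total + torch.exp(-s) * L + s
        return total

# Usage: the noise parameters train jointly with the network weights
# crit = UncertaintyWeightedLoss(num_tasks=2)
# total = crit([seg_loss, depth_loss]); total.backward()
```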
