论文阅读 [TPAMI-2022] Learning Semantic Correspondence Exploiting an Object-Level Prior

Paper search (studyai.com)

Search for this paper: http://www.studyai.com/search/whole-site/?q=Learning+Semantic+Correspondence+Exploiting+an+Object-Level+Prior

Keywords

Semantics; Training; Task analysis; Clutter; Feature extraction; Strain; Robustness; Semantic correspondence; object-level prior; differentiable argmax function

Computer vision; natural language processing

Semantic correspondence; semantic analysis

Abstract

We address the problem of semantic correspondence, that is, establishing a dense flow field between images depicting different instances of the same object or scene category.

We propose to use images annotated with binary foreground masks and subjected to synthetic geometric deformations to train a convolutional neural network (CNN) for this task.
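
The paper gives no code at this point; as a rough illustration of training with synthetic geometric deformations, the sketch below applies a random affine warp to an image, which yields a self-supervised training pair (the binary foreground mask would be warped identically). This is my own minimal numpy sketch under those assumptions, not the authors' implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def random_affine_warp(img, rng, max_deg=15.0, max_scale=0.1, max_shift=0.1):
    """Warp a (H, W) image with a random affine transform about its centre.

    The same transform applied to the image's foreground mask gives a
    synthetic pair with known dense correspondence (the affine flow).
    """
    h, w = img.shape
    a = np.deg2rad(rng.uniform(-max_deg, max_deg))       # rotation angle
    s = 1.0 + rng.uniform(-max_scale, max_scale)         # isotropic scale
    tx = rng.uniform(-max_shift, max_shift) * w          # translation (px)
    ty = rng.uniform(-max_shift, max_shift) * h
    A = s * np.array([[np.cos(a), -np.sin(a)],
                      [np.sin(a),  np.cos(a)]])
    c = np.array([(w - 1) / 2.0, (h - 1) / 2.0])
    t = np.array([tx, ty]) + c - A @ c                   # keep centre fixed
    # Inverse mapping: for each output pixel, look up its source pixel.
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    out_xy = np.stack([xs.ravel(), ys.ravel()]).astype(float)
    src_xy = np.linalg.inv(A) @ (out_xy - t[:, None])
    sx = np.rint(src_xy[0]).astype(int)                  # nearest neighbour
    sy = np.rint(src_xy[1]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    warped = np.zeros_like(img)
    warped.ravel()[valid] = img[sy[valid], sx[valid]]    # out-of-bounds -> 0
    return warped
```

Nearest-neighbour sampling keeps the sketch short; a real training pipeline would use bilinear interpolation and would typically mix in non-rigid warps as well.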

Using these masks as part of the supervisory signal provides an object-level prior for the semantic correspondence task and offers a good compromise between semantic flow methods, where the amount of training data is limited by the cost of manually selecting point correspondences, and semantic alignment ones, where the regression of a single global geometric transformation between images may be sensitive to image-specific details such as background clutter.

We propose a new CNN architecture, dubbed SFNet, which implements this idea.

It leverages a new and differentiable version of the argmax function for end-to-end training, with a loss that combines mask and flow consistency with smoothness terms.
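
To make the idea concrete: a standard soft-argmax replaces the non-differentiable argmax over a correlation map with a softmax-weighted expectation of coordinates, so gradients can flow through the match location. The sketch below shows this baseline form only; it is my own illustration, not the paper's refined variant.

```python
import numpy as np

def soft_argmax_2d(corr, beta=50.0):
    """Differentiable surrogate for 2-D argmax over a correlation map.

    corr: (H, W) array of matching scores for one source position.
    beta: temperature; larger values sharpen the softmax toward hard argmax.
    Returns the expected (x, y) coordinate under the softmax distribution.
    """
    h, w = corr.shape
    logits = beta * corr.ravel()
    logits -= logits.max()                      # numerical stability
    p = np.exp(logits)
    p /= p.sum()                                # softmax over all positions
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    x = float((p * xs.ravel()).sum())           # expectation of coordinates
    y = float((p * ys.ravel()).sum())
    return x, y
```

Because the output is an expectation, a multi-modal correlation map can pull the estimate toward the mean of several peaks; addressing that failure mode is what motivates a refined differentiable argmax rather than this plain version.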

Experimental results demonstrate the effectiveness of our approach, which significantly outperforms the state of the art on standard benchmarks…

Authors

Junghyup Lee, Dohyung Kim, Wonkyung Lee, Jean Ponce, Bumsub Ham
