How to Write a Related Work Section

This post works through related work spanning appearance modelling to dynamic human-object interaction tracking, covering human and object reconstruction, modelling human behaviour in static scenes, and recent progress on dynamic human-object interactions. In particular, it notes that existing methods mostly target hand-object interaction, whereas the method discussed here aims at full-body capture and can predict contact information, improving the accuracy of interaction estimates.


Read lots of papers and keep summarizing!

1 BEHAVE: Dataset and Method for Tracking Human Object Interactions (CVPR 2022)

In this section, we first briefly review work focused on object and human reconstruction in isolation from environmental context. Such methods focus on modelling appearance and do not consider interactions. Next, we cover methods focused on humans in static scenes, and finally discuss the work most closely related to ours: modelling dynamic human-object interactions.


1.1. Appearance modelling: Humans and objects without scene context

Human reconstruction and performance capture

Perceiving humans from monocular RGB data [12, 29, 31, 41, 43, 44, 58, 59, 64, 87] and in multi-view settings [37–40, 62] has been widely explored. Recent work tends to focus on reconstructing fine details such as hand gestures and facial expressions [20, 25, 85, 91], self-contact [27, 54], interactions between humans [26], and even clothing [6, 11].

These methods benefit from representing humans with parametric body models [52, 58, 81], thus motivating our use of recent implicit diffused representations [8, 10] as the backbone for our tracker.
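As an aside for anyone writing a similar section: the parametric body models cited above can be summarized in one formula, which helps readers unfamiliar with the cited work. A minimal sketch, assuming [52] is a SMPL-style model (the notation below is ours, not quoted from the BEHAVE paper): the body surface is a differentiable function of low-dimensional shape parameters $\boldsymbol{\beta}$ and pose parameters $\boldsymbol{\theta}$,

$$
M(\boldsymbol{\beta}, \boldsymbol{\theta}) = W\big(T_P(\boldsymbol{\beta}, \boldsymbol{\theta}),\ J(\boldsymbol{\beta}),\ \boldsymbol{\theta},\ \mathcal{W}\big), \qquad T_P(\boldsymbol{\beta}, \boldsymbol{\theta}) = \bar{T} + B_S(\boldsymbol{\beta}) + B_P(\boldsymbol{\theta}),
$$

where $\bar{T}$ is a template mesh, $B_S$ and $B_P$ are shape and pose blend shapes, $J$ regresses joint locations, and $W$ is linear blend skinning with weights $\mathcal{W}$. The shared $(\boldsymbol{\beta}, \boldsymbol{\theta})$ parameterization is what lets the related-work text group a whole family of methods into a single sentence.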

Following the success of pixel-aligned implicit function learning [64, 65], recent methods can capture human performance from sparse views [38, 80] or even a single RGB camera [47, 48]. However, capturing 3D humans from RGB data involves a fundamental ambiguity between depth and scale.
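To make the depth-scale ambiguity concrete, here is the standard pinhole-camera argument (our illustration, not text from the BEHAVE paper). Under perspective projection with focal length $f$, a point $(X, Y, Z)$ in camera coordinates projects to

$$
x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z},
$$

so scaling the whole scene by any $s > 0$ leaves the image unchanged:

$$
f\,\frac{sX}{sZ} = f\,\frac{X}{Z}.
$$

A person twice as large standing twice as far away produces exactly the same pixels, which is why metric scale cannot be recovered from a single RGB view.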
Therefore, recent methods use RGBD [56, 69, 73, 76, 84] or volumetric data [9, 10, 19] for reliable human capture. These insights motivate us to build novel trackers based on multi-view RGBD data.

Object reconstruction

Most existing work on reconstructing 3D objects from RGB [21, 46, 53, 75, 78] and RGBD [45, 55, 82] data does so in isolation, without human involvement or interaction. While challenging, it is arguably more interesting to reconstruct objects in a dynamic setting under severe occlusions from the human.


1.2. Interaction modelling: Humans and objects with scene context

Humans in static scenes

Modelling how humans act in a scene is both important and challenging. Tasks like placing humans into static scenes [34, 49, 90], motion prediction [15, 32], human pose reconstruction under scene constraints [16, 33, 77, 86, 89], and learning priors for human-object interactions [66] have been investigated extensively in recent years. These methods are relevant but restricted to modelling humans interacting with static objects. We address a more challenging problem: jointly tracking human-object interactions in dynamic environments where objects are manipulated.


Dynamic human-object interactions

Recently, there has been a strong push toward modelling hand-object interactions based on 3D [42, 72], 2.5D [13, 14], and 2D [22, 24, 28, 35, 83] data. Although powerful, these methods are currently restricted to modelling only hand-object interactions. In contrast, we are interested in full-body capture. Methods for dynamic full-body human-object interaction approach the problem via 2D action recognition [36, 51] or reconstruct 3D object trajectories during interactions [23]. Despite being impressive, such methods either lack full 3D reasoning [36, 51] or are limited to specific objects [23].

More recent work reconstructs and tracks human-object interactions from RGB [71] or RGBD streams [70], but does not consider contact prediction, thus missing a component necessary for accurate interaction estimates.

Very relevant to our work, PHOSA [88] reconstructs humans and objects from a single image. PHOSA relies on hand-crafted heuristics, instance-specific optimization for fitting, and pre-defined contact regions, which limits generalization to diverse human-object interactions. Our method, in contrast, learns to predict the necessary information from data, making our models more scalable. As shown in the experiments, the accuracy of our method is significantly higher than PHOSA's.
