[Paper Notes] [3DV2020 Best Paper] Grasping Field

This paper introduces a novel interaction representation called the Grasping Field, a mapping from 3D points to 2D used to learn an implicit representation of human grasps. The Grasping Field addresses limitations of prior work, such as the limited resolution of hand and object reconstructions and reliance on pre-defined contact regions. The authors propose a generative model that produces plausible hand grasps given an object point cloud, and deep neural networks that reconstruct the 3D hand and object from a single RGB image in a single pass.

Paper information

Grasping Field: Learning Implicit Representations for Human Grasps
Paper
Code
Video

Introduction

Target

Map 3D points to 2D: from (x, y, z) to (dist_to_hand, dist_to_obj), i.e. each query point gets a distance to the hand surface and a distance to the object surface

Previous work

3D hand and object reconstruction from a single RGB image

physical constraints: no interpenetration and proper contact

  • limitations
    • pre-defined regions of the hand that can be in contact with objects
    • resolution limited by the hand and object meshes
    • limited to objects of genus zero
    • interaction can only be evaluated after obtaining the body and object meshes

Contribution

  • Grasping Field: propose an interaction representation
  • parameterize the Grasping Field
  • propose a generative model to generate plausible hand grasps given an object point cloud
  • propose deep neural networks to reconstruct the 3D hand and object from an RGB input in a single pass

Method

What is Grasping Field?

Key point: it is just a mapping $f_{GF} : \mathbb{R}^3 \rightarrow \mathbb{R}^2$, taking a 3D query point to its pair of distances to the hand and object surfaces.
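The mapping above can be sketched with a toy example. This is a minimal sketch, not the paper's method: the paper learns *signed* distances to mesh surfaces with a neural network, while here we approximate the field with unsigned nearest-neighbor distances to sampled surface point clouds; the function name `grasping_field` and the toy data are hypothetical.

```python
import numpy as np

def grasping_field(query, hand_pts, obj_pts):
    """Toy grasping field f_GF: R^3 -> R^2.

    Maps a 3D query point to (dist_to_hand, dist_to_obj).
    Approximation: unsigned nearest-neighbor distances to sampled
    surface points, instead of the paper's learned signed distances.
    """
    d_hand = np.min(np.linalg.norm(hand_pts - query, axis=1))
    d_obj = np.min(np.linalg.norm(obj_pts - query, axis=1))
    return np.array([d_hand, d_obj])

# Toy data: "hand" surface points near the origin, "object" shifted along x.
hand_pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
obj_pts = np.array([[1.0, 0.0, 0.0], [1.1, 0.0, 0.0]])

# A point between them: 0.4 from the nearest hand point, 0.5 from the object.
gf = grasping_field(np.array([0.5, 0.0, 0.0]), hand_pts, obj_pts)
```

A point on the hand surface would have a first component of zero; points where both components are zero lie on the contact region, which is why this joint representation captures hand-object interaction directly.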
