Paper information
Grasping Field: Learning Implicit Representations for human grasps
Paper
Code
Video
Introduction
Target
Mapping 3D points to 2D: from (x, y, z) to (dist_to_hand, dist_to_obj)
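A minimal sketch of this mapping, using unsigned nearest-neighbor distances to two point clouds as a stand-in for the signed distances used in the paper (the function name and point-cloud inputs here are illustrative, not the authors' API):

```python
import numpy as np

def grasping_field(points, hand_pts, obj_pts):
    """Toy (x, y, z) -> (dist_to_hand, dist_to_obj) mapping.

    Unsigned nearest-neighbor distances to two point clouds stand in
    for the signed distances of the actual Grasping Field.
    """
    # Pairwise distances from each query point to each surface sample,
    # reduced to the distance of the closest sample.
    d_hand = np.linalg.norm(points[:, None, :] - hand_pts[None], axis=-1).min(axis=1)
    d_obj = np.linalg.norm(points[:, None, :] - obj_pts[None], axis=-1).min(axis=1)
    return np.stack([d_hand, d_obj], axis=-1)  # shape (N, 2)
```

A query point with both output values near zero lies on (or near) both surfaces at once, which is exactly what characterizes a contact region.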
Previous work
3D hand and object reconstruction from a single RGB image [1]
physical constraints: no interpenetration and proper contact
- limitations
  - pre-defined regions of the hand that can be in contact with objects
  - limited to objects of genus zero
  - resolution limited by the hand and object meshes
  - can only evaluate interaction after obtaining the hand and object meshes
Contribution
- Grasping Field: propose an interaction representation
- parameterize the Grasping Field
- propose a generative model to generate plausible hand grasps given an object point cloud
- propose deep neural networks to reconstruct the 3D hand and object from a single RGB image in a single pass
Method
What is Grasping Field?
key point: it is just a mapping $f_{GF} : \mathbb{R}^3 \rightarrow \mathbb{R}^2$
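A minimal numpy sketch of how such a mapping could be parameterized as a small MLP; the weights below are random and untrained, and the paper's actual network is deeper and conditioned on latent shape codes, so this only illustrates the input/output shapes of $f_{GF}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer MLP parameterizing f_GF: R^3 -> R^2.
W1, b1 = rng.normal(size=(3, 64)) * 0.1, np.zeros(64)
W2, b2 = rng.normal(size=(64, 2)) * 0.1, np.zeros(2)

def f_gf(xyz):
    """Map (N, 3) query points to (N, 2) predicted
    (dist_to_hand, dist_to_obj) values."""
    h = np.maximum(xyz @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2
```

Training such a network amounts to regressing the two distance values at sampled 3D points, so hand, object, and their contact are all represented by one continuous function.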