[rss 001] Understanding Hand-Object Manipulation with Grasp Types and Object Attributes [PDF] [project page] [notes]
Minjie Cai, Kris M. Kitani, Yoichi Sato
Action prediction using grasp and object attribute recognition
- Detect the grasped part of the object during manipulation
- Explore the spatial hand-object configuration
- Bayesian model linking object attributes and grasp types
- Infer the manipulation action with an interpretable discriminative classifier that learns high-level constraints (see the sketch below)
Tested on the GTEA Gaze dataset
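A minimal sketch of how that final classifier could look, assuming scikit-learn; the notes do not fix a model, so the logistic-regression choice and the feature layout (concatenated grasp and attribute probability vectors) are assumptions:

```python
# Hypothetical sketch of the action classifier (not the authors' code).
# Features are the concatenated grasp/attribute probability vectors; a
# linear model keeps it interpretable: each action's coefficients show
# which grasp types and object attributes it is associated with.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_action_classifier(grasp_probs, attr_probs, action_labels):
    """grasp_probs: (N, 8), attr_probs: (N, 4), action_labels: (N,)."""
    X = np.hstack([grasp_probs, attr_probs])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, action_labels)
    return clf
```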
Hand detector ==> pixel-level probability mask ==> bounding box
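A rough sketch of the mask-to-box step (the detector itself is not shown; the threshold value and the largest-connected-component heuristic are assumptions):

```python
# Sketch: turn a pixel-level hand probability mask into a bounding box
# by thresholding and taking the extent of the largest connected blob.
import numpy as np
from scipy import ndimage

def mask_to_bbox(prob_mask: np.ndarray, thresh: float = 0.5):
    """Return (x_min, y_min, x_max, y_max) of the largest hand blob, or None."""
    binary = prob_mask > thresh
    labels, n = ndimage.label(binary)
    if n == 0:
        return None
    # Keep the largest component; smaller blobs are likely noise.
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1
    ys, xs = np.where(labels == largest)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```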
Grasp detector: a linear SVM on feature vectors extracted from hand patches
Output: a probability for each grasp type and the highest-scoring grasp
Trained on 8 grasps from the Feix et al. taxonomy
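A sketch of that stage with scikit-learn. `extract_features` is a placeholder, since the notes only say "feature vectors extracted from hand patches"; `SVC` with a linear kernel stands in for the linear SVM because it exposes per-class probabilities:

```python
# Sketch of the grasp detector: linear SVM over hand-patch features,
# returning both the full grasp distribution and the top grasp.
import numpy as np
from sklearn.svm import SVC

NUM_GRASPS = 8  # 8 grasps from the Feix et al. taxonomy

def train_grasp_detector(hand_patches, labels, extract_features):
    X = np.stack([extract_features(p) for p in hand_patches])
    # probability=True enables calibrated class probabilities
    # (Platt scaling), matching the "probability of grasps" output.
    clf = SVC(kernel="linear", probability=True)
    clf.fit(X, labels)
    return clf

def detect_grasp(clf, hand_patch, extract_features):
    probs = clf.predict_proba(extract_features(hand_patch)[None, :])[0]
    return probs, int(np.argmax(probs))  # distribution + top grasp
```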
4 object attribute categories: [prismatic, round, flat, deformable]
The first 3 relate to shape/dimensions, the last to rigidity
Object position and size can be associated with hand position/opening
==> train object shape recognition from hand appearance: regress the normalized relative location and scale of the object
==> extract object patches
Output: probability of each object attribute
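A heavily simplified sketch of the localization step as these notes describe it; the regression targets (center offset and scale, normalized by hand size) and the random-forest choice are assumptions:

```python
# Sketch: predict the object's box relative to the hand box from hand
# appearance features, then denormalize to crop an object patch.
import numpy as np
from sklearn.ensemble import RandomForestRegressor  # assumed model

def object_box_from_hand(regressor, hand_feat, hand_box):
    x0, y0, x1, y1 = hand_box
    w, h = x1 - x0, y1 - y0
    # Assumed targets (dx, dy, ds): object-center offset from the hand
    # center and object scale, all normalized by hand size.
    dx, dy, ds = regressor.predict(hand_feat[None, :])[0]
    cx, cy = x0 + w / 2 + dx * w, y0 + h / 2 + dy * h
    size = ds * max(w, h)
    return (int(cx - size / 2), int(cy - size / 2),
            int(cx + size / 2), int(cy + size / 2))
```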
Joint inference then finds the most probable object attribute/grasp pair (see the sketch below)
The object attribute is restricted to the grasped part of the object
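An illustrative scoring rule for that search; the paper's Bayesian model presumably factorizes differently, so treat this only as a sketch that combines the two detectors' outputs with a grasp-attribute co-occurrence prior:

```python
# Sketch: score every (grasp, attribute) pair by combining detector
# probabilities with a co-occurrence prior estimated from training data.
import numpy as np

def most_probable_pair(grasp_probs, attr_probs, cooccurrence):
    """
    grasp_probs:  (G,) detector output for the hand patch
    attr_probs:   (A,) detector output for the object patch
    cooccurrence: (G, A) prior P(attribute | grasp) from training counts
    """
    joint = grasp_probs[:, None] * attr_probs[None, :] * cooccurrence
    g, a = np.unravel_index(np.argmax(joint), joint.shape)
    return g, a, joint[g, a]
```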
Grasp types: a set of canonical hand poses describing strategies for holding objects
Object attributes: physical attributes (rigidity, shape, ...)
Visual attributes: physical attributes that can be inferred from the image