
rss 001

Yana edited this page May 18, 2017 · 2 revisions

Robotics: Science and Systems 2016

[rss 001] Understanding Hand-Object Manipulation with Grasp Types and Object Attributes [PDF] [project page] [notes]

Minjie Cai, Kris M. Kitani, Yoichi Sato

Synthesis

Action prediction using grasp and object attribute recognition

  • Detect grasped part of the object during manipulation

  • Explore spatial hand-object configuration

  • Bayesian model to link object attributes and grasp types

  • Infer the manipulation action with an interpretable discriminative classifier (learns high-level constraints)

Tested on the GTEA Gaze dataset

Pipeline

Grasp detection

Hand detector ==> pixel-level probability mask ==> bounding box

Grasp detector: a linear SVM on feature vectors extracted from hand patches.

Output: grasp probabilities and the highest-scoring grasp

Trained on 8 grasp types from the Feix et al. taxonomy
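The scoring step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 8 one-vs-rest SVM weight vectors are random stand-ins, the feature dimension is hypothetical, and the softmax is a stand-in for whatever probability calibration the authors use.

```python
import numpy as np

N_GRASPS = 8    # grasp types from the Feix et al. taxonomy subset
FEAT_DIM = 512  # hypothetical hand-patch feature dimension (e.g. HOG)

rng = np.random.default_rng(0)
# Stand-in one-vs-rest linear SVM parameters (would be learned from hand patches)
W = rng.normal(size=(N_GRASPS, FEAT_DIM))
b = np.zeros(N_GRASPS)

def classify_grasp(hand_patch_features):
    """Score each grasp type with its linear SVM; return (probabilities, best index)."""
    scores = W @ hand_patch_features + b   # SVM decision values, one per grasp type
    probs = np.exp(scores - scores.max())  # softmax as a stand-in calibration
    probs /= probs.sum()
    return probs, int(np.argmax(scores))

feat = rng.normal(size=FEAT_DIM)  # placeholder for real hand-patch features
probs, best = classify_grasp(feat)
```

The output matches the two quantities listed above: a distribution over the 8 grasp types plus the single highest-scoring one.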

Object attribute detection

4 object attribute categories: [prismatic, round, flat, deformable]

The first 3 relate to dimensions, the last to rigidity

Object position and size can be associated with hand position/opening

==> train object shape recognition based on hand appearance: regress the normalized relative location and scale of the object

==> extract object patches

Output: probability of each object attribute
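The patch-extraction step above can be sketched like this: given a hand bounding box and a regressed location/scale normalized by the hand's size, recover an object box to crop. The parameterization (offsets of the object center in hand widths/heights, scale as a multiple of the hand box) is an assumption for illustration, not the paper's exact encoding.

```python
def object_patch_from_hand(hand_box, rel_loc, rel_scale, img_w, img_h):
    """Map regressed (rel_loc, rel_scale), normalized by hand size, to an object box.

    hand_box  : (x, y, w, h) of the detected hand
    rel_loc   : (dx, dy) object-center offset from hand center, in hand units
    rel_scale : (sw, sh) object size as a multiple of hand size
    Returns (x0, y0, x1, y1), clamped to the image bounds.
    """
    x, y, w, h = hand_box
    cx = x + w / 2 + rel_loc[0] * w   # object center, shifted from hand center
    cy = y + h / 2 + rel_loc[1] * h
    ow, oh = rel_scale[0] * w, rel_scale[1] * h
    x0 = max(0, cx - ow / 2)
    y0 = max(0, cy - oh / 2)
    x1 = min(img_w, cx + ow / 2)
    y1 = min(img_h, cy + oh / 2)
    return (x0, y0, x1, y1)

# e.g. an object half a hand-width to the right of a 50x50 hand box
box = object_patch_from_hand((100, 100, 50, 50), (0.5, 0.0), (1.2, 1.0), 640, 480)
```

The returned box is what gets cropped and fed to the attribute classifiers.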

Bayesian network

To find the most probable object attribute/grasp pair

Used in the context of action recognition or grasp recognition
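The joint inference can be sketched as a MAP query: combine the two detectors' output distributions with an attribute-given-grasp dependency table and take the argmax over all (grasp, attribute) pairs. The co-occurrence table here is random (in the paper it would be learned from training data), and the factorization is a simplified illustration of the Bayesian network, not its exact structure.

```python
import numpy as np

ATTRIBUTES = ["prismatic", "round", "flat", "deformable"]
N_GRASPS = 8

rng = np.random.default_rng(1)
# p(attribute | grasp): stand-in dependency table (learned in the actual model)
p_attr_given_grasp = rng.dirichlet(np.ones(len(ATTRIBUTES)), size=N_GRASPS)

def most_probable_pair(p_grasp, p_attr):
    """MAP over (grasp, attribute): weight each detector's probabilities by the
    grasp-attribute dependency, then argmax the joint score."""
    joint = p_grasp[:, None] * p_attr[None, :] * p_attr_given_grasp
    g, a = np.unravel_index(np.argmax(joint), joint.shape)
    return int(g), ATTRIBUTES[a]

# With uninformative detector outputs, the dependency table breaks the tie
g, a = most_probable_pair(np.full(N_GRASPS, 1 / N_GRASPS),
                          np.full(len(ATTRIBUTES), 0.25))
```

This is where the two recognition streams interact: an unlikely grasp/attribute combination (e.g. a pinch grasp on a large round object) gets suppressed even if each detector alone scores it well.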

Remarks

The detected object is restricted to the grasped part of the object

Definitions

Grasp types: set of canonical hand poses to describe strategies for holding objects

Object attributes: physical attributes (rigidity, shape, ...)

Visual attributes: physical attributes that can be inferred from the image
