Statistical Gesture Models for 3D Motion Capture from a Library of Gestures with Variants

Zhenbo LI, Patrick HORAIN, André-Marie PEZ, Catherine PELACHAUD


A challenge for 3D motion capture by monocular vision is the 3D-2D projection ambiguity, which may yield incorrect poses during tracking. In this paper, we propose improving 3D motion capture by learning human gesture models from a library of gestures with variants. This library has been created from virtual human animations. Gestures are described as Gaussian Process Dynamical Models (GPDM) and are used as constraints for motion tracking. Given the raw input poses from the tracker, the gesture model helps to correct ambiguous poses. The benefit of the proposed method is demonstrated with experimental results.
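The idea of using a learned dynamical prior to correct ambiguous tracker output can be illustrated with a minimal sketch. This is not the authors' implementation: it uses plain Gaussian-process regression over a hypothetical 1-D latent gesture trajectory (a full GPDM also learns the latent space itself), and the blending weight `w` is an assumed confidence parameter.

```python
import numpy as np

def rbf(A, B, ell=0.5):
    """Squared-exponential kernel between 1-D sample vectors A and B."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

# Hypothetical training trajectory: a smooth gesture in latent space.
t = np.linspace(0, 2 * np.pi, 50)
x = np.sin(t)                       # latent states x_1 .. x_T
X_in, X_out = x[:-1], x[1:]         # first-order dynamics pairs (x_t, x_{t+1})

# GP regression for the dynamics p(x_{t+1} | x_t).
K = rbf(X_in, X_in) + 1e-6 * np.eye(len(X_in))
alpha = np.linalg.solve(K, X_out)

def gp_predict(x_t):
    """Predicted next latent state under the learned dynamics."""
    return (rbf(np.atleast_1d(x_t), X_in) @ alpha).item()

# An ambiguous raw pose from the tracker, corrected by blending it with
# the model's prediction; w is an assumed model-confidence weight.
x_prev = x[10]
raw = x[11] + 0.3                   # tracker output corrupted by ambiguity
pred = gp_predict(x_prev)
w = 0.7
corrected = w * pred + (1 - w) * raw
```

The corrected pose lies between the raw tracker estimate and the prediction of the learned gesture dynamics, so projection ambiguities that contradict the motion model are damped.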

Full text:

Copyright 2009 Springer-Verlag. The copyright to this Contribution is transferred to Springer-Verlag GmbH Berlin Heidelberg (hereinafter called Springer-Verlag).
PDF (536 kbytes, to be published).

Publication reference:

Initial abstracts (2 pages each):