We investigate a new algorithm for real-time 3D human-motion tracking with a monocular camera. The algorithm combines body-silhouette shape matching with particle-filter-based selected-region tracking in the 2D view. The selected-region tracking, together with human-body structural data, constrains the temporal interpretation of 3D human poses to those that best match the 2D silhouette shapes observed by the camera. Compensatory algorithms are included to improve tracking performance. Experimental results demonstrate that our approach tracks human motion in real time with good quality and reasonable robustness.
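The page does not spell out the particle-filter step; as an illustration only, below is a minimal sketch of a generic particle-filter update for one tracked 2D region, assuming a binary silhouette image, a random-walk motion model, and a patch-coverage likelihood. All function and parameter names are hypothetical and do not reflect the authors' actual implementation.

```python
import numpy as np

def particle_filter_track(silhouette, particles, weights,
                          motion_std=5.0, patch=8, rng=None):
    """One particle-filter update for a tracked 2D region.

    silhouette : (H, W) binary array, 1 where the body silhouette is.
    particles  : (N, 2) array of (x, y) region centres from the last frame.
    weights    : (N,) normalized particle weights.
    Returns updated particles, weights, and the weighted-mean estimate.
    """
    rng = np.random.default_rng() if rng is None else rng
    H, W = silhouette.shape
    N = len(particles)

    # 1. Predict: diffuse particles with a simple random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=(N, 2))
    particles[:, 0] = np.clip(particles[:, 0], 0, W - 1)
    particles[:, 1] = np.clip(particles[:, 1], 0, H - 1)

    # 2. Update: weight each particle by how much silhouette mass falls
    #    inside a small patch around it (a stand-in likelihood).
    new_w = np.empty(N)
    for i, (x, y) in enumerate(particles):
        x0, x1 = int(max(x - patch, 0)), int(min(x + patch + 1, W))
        y0, y1 = int(max(y - patch, 0)), int(min(y + patch + 1, H))
        new_w[i] = silhouette[y0:y1, x0:x1].mean() + 1e-9
    weights = weights * new_w
    weights /= weights.sum()

    # 3. Resample when the effective particle count drops too low.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```

In a full system, one such filter per selected body region would be run every frame, and the per-region estimates would then be combined with the body-structure constraints and silhouette matching described above.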
Related publications
1. M. Kim, S. Joo, S. Jo, "3D Human-Pose Tracking through a Monocular Vision," Proc. of RiTA 2012. [PDF]