TY - GEN
T1 - Monocular image 3D human pose estimation under self-occlusion
AU - Radwan, Ibrahim
AU - Dhall, Abhinav
AU - Goecke, Roland
PY - 2013
Y1 - 2013
AB - In this paper, an automatic approach for 3D pose reconstruction from a single image is proposed. The presence of human body articulation, hallucinated parts and cluttered backgrounds leads to ambiguity during pose inference, which makes the problem non-trivial. Researchers have explored various methods based on motion and shading to reduce this ambiguity and reconstruct the 3D pose. The key idea of our algorithm is to impose both kinematic and orientation constraints. The former is imposed by projecting a 3D model onto the input image and pruning the parts that are incompatible with anthropomorphism. The latter is applied by creating synthetic views via regressing the input view to multiple oriented views. After applying the constraints, the 3D model is projected onto the initial and synthetic views, which further reduces the ambiguity. Finally, we borrow the direction of the unambiguous parts from the synthetic views to the initial one, which results in the 3D pose. Quantitative experiments are performed on the HumanEva-I dataset and qualitative experiments on unconstrained images from the Image Parse dataset. The results show the robustness of the proposed approach in accurately reconstructing the 3D pose from a single image.
KW - 3D pose reconstruction
KW - pose estimation
KW - self-occlusion
UR - http://www.scopus.com/inward/record.url?scp=84898826788&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2013.237
DO - 10.1109/ICCV.2013.237
M3 - Conference contribution
SN - 9781479928392
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 1888
EP - 1895
BT - Proceedings - 2013 IEEE International Conference on Computer Vision, ICCV 2013
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2013 14th IEEE International Conference on Computer Vision, ICCV 2013
Y2 - 1 December 2013 through 8 December 2013
ER -