Denis Tome'

SelfPose: 3D Egocentric Pose Estimation from a Headset Mounted Camera

IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2020

Abstract

We present a solution to egocentric 3D body pose estimation from monocular images captured by downward-looking fish-eye cameras installed on the rim of a head-mounted VR device. This unusual viewpoint leads to images with a unique visual appearance, characterized by severe self-occlusions and perspective distortions that result in drastic differences in resolution between the lower and upper body. We propose an encoder-decoder architecture with a novel multi-branch decoder designed to account for the varying uncertainty in the 2D predictions. The quantitative evaluation, on synthetic and real-world datasets, shows that our strategy leads to substantial improvements in accuracy over state-of-the-art egocentric approaches. To tackle the lack of labelled data we also introduce a large photo-realistic synthetic dataset. xR-EgoPose offers high-quality renderings of people with diverse skin tones, body shapes and clothing, performing a range of actions. Our experiments show that the high variability in our new synthetic training corpus leads to good generalization to real-world footage and to state-of-the-art results on real-world datasets with ground truth. Moreover, an evaluation on the Human3.6M benchmark shows that the performance of our method is on par with top-performing approaches on the more classic problem of 3D human pose estimation from a third-person viewpoint.
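As a rough illustration of the idea, and not the exact implementation released with the paper, the sketch below shows a minimal PyTorch-style version of a dual-branch decoder: a shared encoder compresses 2D joint heatmaps into a latent code, a first branch decodes the 3D pose, and a second branch reconstructs the heatmaps so that the latent code retains the uncertainty present in the 2D predictions. All layer sizes, joint counts, heatmap resolutions and names are assumptions made for this example.

import torch
import torch.nn as nn

class DualBranchLifter(nn.Module):
    """Minimal sketch: encode 2D joint heatmaps into a latent code,
    then decode (a) a 3D pose and (b) a heatmap reconstruction.
    The reconstruction branch is only used during training."""

    def __init__(self, num_joints=16, heatmap_size=47, latent_dim=20):
        super().__init__()
        in_features = num_joints * heatmap_size * heatmap_size
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_features, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # Branch 1: latent code -> 3D joint positions
        self.pose_decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_joints * 3),
        )
        # Branch 2: latent code -> reconstructed heatmaps (training only)
        self.heatmap_decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, in_features),
        )

    def forward(self, heatmaps, reconstruct=True):
        z = self.encoder(heatmaps)
        pose3d = self.pose_decoder(z).view(-1, heatmaps.shape[1], 3)
        if not reconstruct:          # test time: drop the second branch
            return pose3d
        recon = self.heatmap_decoder(z).view_as(heatmaps)
        return pose3d, recon

# Toy usage: a batch of 4 heatmap stacks (16 joints, 47x47 each)
model = DualBranchLifter()
hm = torch.randn(4, 16, 47, 47)
pose3d, recon = model(hm)                    # training mode
pose3d_only = model(hm, reconstruct=False)   # inference mode

Since only the pose branch is evaluated at inference, the reconstruction branch can be dropped without affecting accuracy, which is the property discussed in the Dataset section below.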

Dataset


This figure illustrates the challenges faced in egocentric human pose estimation: severe self-occlusions, extreme perspective effects and lower pixel density for the lower body. The color gradient indicates the density of image pixels for each area of the body: green is higher pixel density, whereas red is lower density.


To face these challenges we created a new large-scale dataset of 383K frames that focuses on realism, with augmentation of characters, environments, and lighting conditions. It departs from the only other existing monocular egocentric dataset captured from a head-mounted fish-eye camera in its photo-realistic quality, its different viewpoint (the images are rendered from a camera located on a VR HMD), and its high variability in characters, backgrounds and actions. A further advantage of the proposed multi-branch architecture is that the second branch is only needed at training time and can be removed at test time, guaranteeing the same performance with faster execution.
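For readers who want to experiment with the data, the snippet below is a purely hypothetical loading sketch: it assumes each rendered frame is stored as a PNG with a JSON file of the same name holding the 3D joint coordinates under a "joints_3d" key. The actual directory layout and field names of xR-EgoPose may differ, so check the dataset documentation before relying on it.

import json
from pathlib import Path
from PIL import Image

def load_frames(root):
    """Hypothetical loader: pair each rendered frame with its pose file.
    Assumes <root>/<sequence>/frame_XXXX.png + frame_XXXX.json;
    the real xR-EgoPose layout and field names may differ."""
    samples = []
    for img_path in sorted(Path(root).rglob("*.png")):
        meta_path = img_path.with_suffix(".json")
        if not meta_path.exists():
            continue
        with open(meta_path) as f:
            meta = json.load(f)
        samples.append({
            "image": Image.open(img_path).convert("RGB"),
            "joints_3d": meta.get("joints_3d"),  # assumed key name
        })
    return samples

# Example: frames = load_frames("xR-EgoPose/TrainSet")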

Videos

Materials

Project
PDF
Data
DOI

BibTeX

			
@ARTICLE{9217955,
  author={Tome, Denis and Alldieck, Thiemo and Peluse, Patrick and Pons-Moll, Gerard and Agapito, Lourdes and Badino, Hernan and De la Torre, Fernando},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
  title={SelfPose: 3D Egocentric Pose Estimation from a Headset Mounted Camera}, 
  year={2020},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TPAMI.2020.3029700}
}