LEAP is a neural network architecture for representing volumetric animatable human bodies. It follows traditional human body modeling techniques and leverages a statistical human prior to generalize to unseen humans.
LEAP models the human body implicitly in a canonical pose and is controlled through carefully designed latent codes that encode shape- and pose-dependent deformations. The whole model is end-to-end differentiable and integrates readily with optimization and deep learning pipelines.
We introduce a novel deep neural network that, given a set of bone transformations (joint locations and rotations) and a query point in 3D space, first maps the query point to a canonical space via learned inverse linear blend skinning (LBS) weights and then efficiently queries its occupancy value via an occupancy network that models accurate identity- and pose-dependent deformations in that canonical space.
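A minimal PyTorch sketch of this query pipeline is given below. The module names (`InvLBSNet`, `OccupancyNet`, `query_occupancy`), network sizes, and tensor shapes are illustrative assumptions rather than the repo's actual API, and the shape/pose conditioning codes are omitted for brevity.

```python
# Hedged sketch of the LEAP query pipeline, not the repo's actual API.
import torch
import torch.nn as nn

K = 24  # number of bones in an SMPL-like skeleton (assumed)

class InvLBSNet(nn.Module):
    """Predicts inverse LBS weights for arbitrary query points in posed space."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, K),
        )

    def forward(self, x_posed):                               # (B, N, 3)
        return torch.softmax(self.mlp(x_posed), dim=-1)       # (B, N, K) weights

class OccupancyNet(nn.Module):
    """Occupancy decoder operating on canonicalized points."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x_canonical):                           # (B, N, 3)
        return torch.sigmoid(self.mlp(x_canonical)).squeeze(-1)  # (B, N)

def query_occupancy(x_posed, bone_transforms, inv_lbs, occ_net):
    """x_posed: (B, N, 3) query points; bone_transforms: (B, K, 4, 4)."""
    w = inv_lbs(x_posed)                                      # (B, N, K)
    # Blend the inverse bone transforms and map the points to canonical space.
    inv_T = torch.inverse(bone_transforms)                    # (B, K, 4, 4)
    T = torch.einsum('bnk,bkij->bnij', w, inv_T)              # (B, N, 4, 4)
    x_h = torch.cat([x_posed, torch.ones_like(x_posed[..., :1])], dim=-1)
    x_canonical = torch.einsum('bnij,bnj->bni', T, x_h)[..., :3]
    return occ_net(x_canonical)                               # occupancy in [0, 1]
```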
Because skinning weights are undefined for points that are not on the surface of a human body, LEAP leverages the consistency between the forward and inverse LBS weights and incorporates a cycle distance into the occupancy network.
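The cycle distance can be read as a round-trip error: map a posed query point to canonical space with the inverse LBS weights, skin it back with the forward LBS weights, and measure how far it lands from where it started. It is near zero for on-surface points, where forward and inverse weights agree, and grows for off-surface points. A minimal sketch, assuming the forward weights `fwd_weights` are predicted in canonical space and reusing the tensor shapes from the sketch above:

```python
import torch

def cycle_distance(x_posed, x_canonical, fwd_weights, bone_transforms):
    """x_posed, x_canonical: (B, N, 3); fwd_weights: (B, N, K); bone_transforms: (B, K, 4, 4)."""
    # Blend the forward bone transforms and re-pose the canonicalized points.
    T = torch.einsum('bnk,bkij->bnij', fwd_weights, bone_transforms)   # (B, N, 4, 4)
    x_h = torch.cat([x_canonical, torch.ones_like(x_canonical[..., :1])], dim=-1)
    x_reposed = torch.einsum('bnij,bnj->bni', T, x_h)[..., :3]
    # Round-trip error: small on the body surface, large for off-surface points,
    # giving the occupancy network an additional geometric cue.
    return (x_reposed - x_posed).norm(dim=-1)                          # (B, N)
```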
LEAP captures accurate identity- and pose-dependent deformations through encoding schemes that incorporate prior knowledge about the kinematic structure and plausible shapes of the human body.
We release a pretrained LEAP model that generalizes to unseen human bodies. LEAP can be easily integrated into learning pipelines that require an efficient occupancy check to resolve collisions with other geometries, flexibly represented as point clouds.
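As an illustration of such an integration, the snippet below continues the sketch above (the untrained modules stand in for a pretrained LEAP model, which in practice would be loaded per the repo's instructions) and uses the occupancy query as a differentiable inside/outside test for a soft collision penalty:

```python
import torch

# Stand-ins for a pretrained LEAP model; definitions are in the sketch above.
inv_lbs, occ_net = InvLBSNet(), OccupancyNet()
points = torch.rand(1, 10_000, 3, requires_grad=True)   # scene geometry as a point cloud
bone_transforms = torch.eye(4).repeat(1, 24, 1, 1)       # (B, K, 4, 4) rest-pose bones

occupancy = query_occupancy(points, bone_transforms, inv_lbs, occ_net)  # (B, N) in [0, 1]

# Points with occupancy > 0.5 lie inside the body; penalizing them softly lets
# gradients push the colliding geometry (or the body pose) out of collision.
collision_loss = torch.relu(occupancy - 0.5).sum()
collision_loss.backward()
```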
@InProceedings{LEAP:CVPR:21,
  title     = {{LEAP}: Learning Articulated Occupancy of People},
  author    = {Mihajlovic, Marko and Zhang, Yan and Black, Michael J and Tang, Siyu},
  booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  month     = jun,
  year      = {2021},
}