Jiahao Wu, Yunfei Liu ✉, Lijian Lin, Ye Zhu, Lei Zhu, Jingyi Li, Yu Li
International Digital Economy Academy (IDEA)
[2026.02.02] Our PEAR paper is released on arXiv!
[2026.02.02] The inference code and the first version of the PEAR model have been released!
[2026.02.11] Training code released.
[TODO] Training datasets and the final version of the PEAR model.
We propose PEAR, a unified framework for real-time expressive 3D human mesh recovery. It is the first method capable of simultaneously predicting EHM parameters at 100 FPS.
Clone this repository and install the dependencies:
git clone --recursive https://github.com/Pixel-Talk/PEAR.git
cd PEAR
# The specified PyTorch, Python, and CUDA versions are not strictly required.
# Most compatible configurations should work.
conda create -n pear python=3.9.22
conda activate pear
pip install -r requirements.txt
pip install "git+https://github.com/facebookresearch/pytorch3d.git" --no-build-isolation
pip install chumpy --no-build-isolation

Download the body-model assets:
- SMPL: Download SMPL_NEUTRAL.pkl from SMPL and place it in assets/SMPL.
- SMPLX: Download SMPLX_NEUTRAL_2020.npz from SMPLX and place it in assets/SMPLX.
- FLAME: Download generic_model.pkl from FLAME2020 and save it to both assets/FLAME/FLAME2020/generic_model.pkl and assets/SMPLX/flame_generic_model.pkl.
- SMPLX2SMPL: Unzip SMPLX2SMPL.zip.

After downloading, the assets directory should look like this:
assets/
├── FLAME/
├── SMPL/
├── SMPLX/
├── SMPLX2SMPL/
├── icons2.png
├── method.png
└── teaser.png
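Before running anything, you can sanity-check that the manually downloaded files landed in the expected places. The snippet below is only a convenience sketch and is not part of the repository; the paths come from the instructions above, and the SMPLX2SMPL entry only checks the directory because the unzipped filenames are not listed here.

```python
# Sanity check (not part of the repo): verify the body-model assets are in place.
import os

# Paths taken from the asset instructions above.
EXPECTED = [
    "assets/SMPL/SMPL_NEUTRAL.pkl",
    "assets/SMPLX/SMPLX_NEUTRAL_2020.npz",
    "assets/SMPLX/flame_generic_model.pkl",
    "assets/FLAME/FLAME2020/generic_model.pkl",
    "assets/SMPLX2SMPL",  # directory only; unzipped filenames are not documented here
]

missing = [p for p in EXPECTED if not os.path.exists(p)]
if missing:
    print("Missing assets:\n  " + "\n  ".join(missing))
else:
    print("All body-model assets found.")
```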
All pretrained models will be downloaded automatically.
For video inference, run:
python app.py

For image inference, run:

python inference_images.py --input_path example/images
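If your own data is a video but you prefer the image entry point, one option is to dump its frames into a folder first and pass that folder via --input_path. This is only an illustrative sketch: the helper name, the example paths, and the frame naming are assumptions, and it needs OpenCV (opencv-python), which may not be covered by requirements.txt.

```python
# Hypothetical helper: extract frames from a video into a folder for inference_images.py.
import os
import cv2

def extract_frames(video_path: str, out_dir: str, every_n: int = 1) -> int:
    """Save every `every_n`-th frame of `video_path` as a PNG in `out_dir`."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"{saved:06d}.png"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

if __name__ == "__main__":
    n = extract_frames("example/my_video.mp4", "example/my_frames", every_n=2)
    print(f"Saved {n} frames; now run:")
    print("python inference_images.py --input_path example/my_frames")
```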
The full training datasets are currently not publicly released.
However, a sample .tar file is provided for demonstration purposes.
Download it from Google Drive and place it under:
ehms_datasets/
├── 000000.tar
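To confirm the sample shard downloaded correctly, you can list its contents with the Python standard library. The path below assumes the layout shown above; the shard's internal file layout is not documented here, so this only prints the first few members.

```python
# Peek inside the sample shard (standard library only).
import tarfile

with tarfile.open("ehms_datasets/000000.tar") as tar:
    for member in tar.getmembers()[:10]:  # show only the first ten entries
        print(member.name, member.size)
```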
Then run:
python train_ehms.py -c train -d 0,1,2,3,4,5,6,7  # Adjust according to your available GPUs
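The -d flag lists the GPU indices to train on. If you are unsure how many GPUs PyTorch can see on your machine, a tiny helper like this (assuming PyTorch is installed in the pear environment) prints a matching command:

```python
# Build the -d argument from the GPUs visible to PyTorch (assumes at least one GPU).
import torch

num_gpus = torch.cuda.device_count()
device_arg = ",".join(str(i) for i in range(num_gpus))
print(f"Found {num_gpus} GPU(s)")
print(f"python train_ehms.py -c train -d {device_arg}")
```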
If you find this repository useful for your research, please use the following BibTeX entry for citation.

@misc{wu2026pear,
title={PEAR: Pixel-aligned Expressive humAn mesh Recovery},
author={Jiahao Wu and Yunfei Liu and Lijian Lin and Ye Zhu and Lei Zhu and Jingyi Li and Yu Li},
year={2026},
eprint={2601.22693},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2601.22693},
}
We would like to thank the authors of prior works, including FLAME, SMPL-X, SMPL, MANO, SMPLest-X, Multi-HMR, and SAM3D-Body.
See the LICENSE file for details about the license under which this code is made available.