PEAR: Pixel-aligned Expressive humAn mesh Recovery

Jiahao Wu, Yunfei Liu ✉, Lijian Lin, Ye Zhu, Lei Zhu, Jingyi Li, Yu Li

International Digital Economy Academy (IDEA)

arXiv | Project Page | YouTube | Hugging Face

📰 News

[2026.02.02] The PEAR paper is released on arXiv!

[2026.02.02] The inference code and the first version of the PEAR model have been released!

[2026.02.11] Training code released.

[TODO] Training datasets and the final version of the PEAR model.

💡 Overview

We propose PEAR, a unified framework for real-time expressive 3D human mesh recovery. It is the first method capable of simultaneously predicting EHMs parameters at 100 FPS.

⚡ Quick Start

🔧 Preparation

Clone this repository and install the dependencies:

git clone --recursive https://github.com/Pixel-Talk/PEAR.git
cd PEAR

# The specified PyTorch, Python, and CUDA versions are not strictly required.
# Most compatible configurations should work.
conda create -n pear python=3.9.22
conda activate pear

pip install -r requirements.txt
pip install "git+https://github.com/facebookresearch/pytorch3d.git" --no-build-isolation
pip install chumpy --no-build-isolation
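
After installation, a quick sanity check can confirm that the key packages are importable. This is a minimal sketch, not part of the repository:

```python
import importlib.util

def check_modules(mods):
    """Map each module name to whether an import spec can be found for it."""
    return {m: importlib.util.find_spec(m) is not None for m in mods}

if __name__ == "__main__":
    # Packages installed in the steps above.
    for mod, ok in check_modules(["torch", "pytorch3d", "chumpy"]).items():
        print(f"{mod}: {'ok' if ok else 'MISSING'}")
```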
Then download the following model files:
  • SMPL: download SMPL_NEUTRAL.pkl from SMPL and place it in assets/SMPL/.
  • SMPLX: download SMPLX_NEUTRAL_2020.npz from SMPLX and place it in assets/SMPLX/.
  • FLAME: download generic_model.pkl from FLAME2020 and save it to both assets/FLAME/FLAME2020/generic_model.pkl and assets/SMPLX/flame_generic_model.pkl.
  • SMPLX2SMPL: unzip SMPLX2SMPL.zip into assets/SMPLX2SMPL/.

The assets/ directory should then look like this:
assets/
├── FLAME/
├── SMPL/
├── SMPLX/
├── SMPLX2SMPL/
├── icons2.png
├── method.png
└── teaser.png
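
The file placement above is easy to get wrong, so a small check can verify the layout before running inference. A minimal sketch (the paths mirror the steps above; the script itself is not part of the repository):

```python
from pathlib import Path

# Expected asset locations, per the preparation steps above.
REQUIRED_ASSETS = [
    "assets/SMPL/SMPL_NEUTRAL.pkl",
    "assets/SMPLX/SMPLX_NEUTRAL_2020.npz",
    "assets/FLAME/FLAME2020/generic_model.pkl",
    "assets/SMPLX/flame_generic_model.pkl",
]

def missing_assets(root="."):
    """Return the expected asset files that are not present under `root`."""
    return [p for p in REQUIRED_ASSETS if not (Path(root) / p).is_file()]

if __name__ == "__main__":
    missing = missing_assets()
    if missing:
        print("Missing model files:")
        for p in missing:
            print("  -", p)
    else:
        print("All model assets found.")
```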

⚡ Inference

All pretrained models will be downloaded automatically.

For video inference, run:

python app.py

For image inference, run:

python inference_images.py --input_path example/images
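
Which image formats inference_images.py accepts is not documented here, so it can help to pre-filter the input folder to common image extensions before running it. A hypothetical helper (the extension list is an assumption, not taken from the script):

```python
from pathlib import Path

# Assumed common image extensions; the script's actual support may differ.
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp", ".webp"}

def list_images(folder):
    """Return sorted paths in `folder` whose extension looks like an image."""
    return sorted(
        p for p in Path(folder).iterdir()
        if p.is_file() and p.suffix.lower() in IMAGE_EXTS
    )
```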

⚡ Training

The full training datasets are currently not publicly released. However, a sample .tar file is provided for demonstration purposes.

Download it from Google Drive and place it under:

ehms_datasets/
├── 000000.tar

Then run:

python train_ehms.py -c train -d 0,1,2,3,4,5,6,7  # Adjust according to your available GPUs
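
The numbered .tar shard suggests a WebDataset-style layout, where files sharing a key prefix (e.g. `<key>.jpg`, `<key>.json`) form one training sample — this is an assumption, not confirmed by the repository. A sketch for inspecting a shard with only the standard library:

```python
import tarfile
from collections import defaultdict

def shard_summary(tar_path):
    """Group member extensions in a WebDataset-style .tar shard by sample key."""
    samples = defaultdict(list)
    with tarfile.open(tar_path) as tf:
        for member in tf.getmembers():
            if not member.isfile():
                continue
            # WebDataset convention: the part before the first dot is the key.
            key, _, ext = member.name.partition(".")
            samples[key].append(ext)
    return dict(samples)

if __name__ == "__main__":
    for key, exts in shard_summary("ehms_datasets/000000.tar").items():
        print(key, exts)
```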

🤗 Citation

If you find this repository useful for your research, please cite it using the following BibTeX entry.

@misc{wu2026pear,
  title={PEAR: Pixel-aligned Expressive humAn mesh Recovery}, 
  author={Jiahao Wu and Yunfei Liu and Lijian Lin and Ye Zhu and Lei Zhu and Jingyi Li and Yu Li},
  year={2026},
  eprint={2601.22693},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2601.22693}, 
}

Acknowledgements

We would like to thank the authors of prior works, including FLAME, SMPL-X, SMPL, MANO, SMPLest-X, Multi-HMR, and SAM3D-Body.

License

See the LICENSE file for details about the license under which this code is made available.
