update readme

README.md
pose2smpl
=======

### Fitting SMPL Parameters by 3D-Pose Key Points

The repository provides a tool to fit **SMPL parameters** from **3D-pose** datasets that contain key points of the human body.

The SMPL human body layer for PyTorch is from the [smplpytorch](https://github.com/gulvarol/smplpytorch) repository.

<p align="center">
  <img src="assets/fit.gif" width="350"/>
  <img src="assets/gt.gif" width="350"/>
</p>
## Setup

### 1. The `smplpytorch` package

* **Run without installing:** You will need the dependencies listed in [environment.yml](environment.yml):

  * `conda env update -f environment.yml` in an existing environment, or
  * `conda env create -f environment.yml` for a new `smplpytorch` environment.

* **Install:** To import `SMPL_Layer` in another project with `from smplpytorch.pytorch.smpl_layer import SMPL_Layer`, do one of the following.

  * Option 1: This should automatically install the dependencies.

    ``` bash
    git clone https://github.com/gulvarol/smplpytorch.git
    cd smplpytorch
    pip install .
    ```
### 2. Download SMPL pickle files

* Download the models from the [SMPL website](http://smpl.is.tue.mpg.de/) by choosing "SMPL for Python users". Note that you need to comply with the [SMPL model license](http://smpl.is.tue.mpg.de/license_model).
* Extract and copy the `models` folder into the `smplpytorch/native/` folder (or set the `model_root` parameter accordingly).
### 3. Download Dataset

- Download the datasets you want to fit. Currently supported datasets:
  - [HumanAct12](https://ericguo5513.github.io/action-to-motion/)
  - [UTD-MHAD](https://personal.utdallas.edu/~kehtar/UTD-MHAD.html)
- Set **DATASET.PATH** in the corresponding configuration file to the location of the dataset.
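The exact schema of the files in *fit/configs* is not shown here. Purely as a hypothetical sketch (the path and epoch values below are made up; only the `DATASET.PATH` and `TRAIN.MAX_EPOCH` key names come from the repository), an entry might look like:

```
DATASET:
  PATH: /data/HumanAct12    # hypothetical location; point this at your copy
TRAIN:
  MAX_EPOCH: 500            # hypothetical value
```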
## Fitting

### 1. Executing Code

You can start the fitting procedure with the following command; the configuration file in *fit/configs* that corresponds to the dataset name will be loaded:

```
python fit/tools/main.py --dataset_name [DATASET NAME] --dataset_path [DATASET PATH]
```
### 2. Output

- **Directory:** The output SMPL parameters are stored in *fit/output*.
- **Format:** The outputs are *.pkl* files with the following data format:

```
{
    "label": [the label of the action],
    "pose_params": pose parameters of SMPL (shape = [frame_num, 72]),
    "shape_params": shape parameters of SMPL (shape = [frame_num, 10]),
    "Jtr": key-point coordinates of the SMPL model (shape = [frame_num, 24, 3])
}
```
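To illustrate this layout, here is a small round-trip sketch that writes and reads a record in the documented format (the label, file name, and frame count are made-up placeholders; real files are produced by the fitting tool under *fit/output*):

```python
import pickle
import numpy as np

frame_num = 64  # hypothetical sequence length

# Stand-in record following the documented .pkl layout.
record = {
    "label": "drink",                           # hypothetical action label
    "pose_params": np.zeros((frame_num, 72)),   # axis-angle pose per frame
    "shape_params": np.zeros((frame_num, 10)),  # SMPL betas per frame
    "Jtr": np.zeros((frame_num, 24, 3)),        # 24 SMPL joints in 3D
}
with open("example.pkl", "wb") as f:
    pickle.dump(record, f)

# Consumers can load the file back and rely on the same shapes.
with open("example.pkl", "rb") as f:
    seq = pickle.load(f)

assert seq["pose_params"].shape == (frame_num, 72)
assert seq["shape_params"].shape == (frame_num, 10)
assert seq["Jtr"].shape == (frame_num, 24, 3)
print(seq["label"])  # → drink
```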
In `fit/tools/main.py`, the imports (note that `save_pic` is now imported from `save` alongside `save_params`):

```python
from smplpytorch.pytorch.smpl_layer import SMPL_Layer
from train import train
from transform import transform
from save import save_pic, save_params
from load import load
import numpy as np

torch.backends.cudnn.benchmark = True


def parse_args():
```
At the end of `fit/tools/main.py`, the fitted parameters are saved; picture rendering is optional and left commented out:

```python
        logger, writer, device,
        args, cfg)

    # save_pic(res, smpl_layer, file, logger, args.dataset_name, target)
    save_params(res, file, logger, args.dataset_name)
```
In `save.py`, `save_pic` now also renders the ground-truth key points, so it takes the fitting `target` as an extra argument and prepares a second output directory:

```python
def save_pic(res, smpl_layer, file, logger, dataset_name, target):
    _, _, verts, Jtr = res
    file_name = re.split('[/.]', file)[-2]
    fit_path = "fit/output/{}/picture/fit/{}".format(dataset_name, file_name)
    gt_path = "fit/output/{}/picture/gt/{}".format(dataset_name, file_name)
    create_dir_not_exist(fit_path)
    create_dir_not_exist(gt_path)
    logger.info('Saving pictures at {}'.format(fit_path))
    for i in tqdm(range(Jtr.shape[0])):
        display_model(
```
Inside the per-frame loop, a second `display_model` call saves the ground-truth joints:

```python
            batch_idx=i,
            show=False,
            only_joint=False)
        display_model(
            {'verts': verts.cpu().detach(),
             'joints': target.cpu().detach()},
            model_faces=smpl_layer.th_faces,
            with_joints=True,
            kintree_table=smpl_layer.kintree_table,
            savepath=os.path.join(gt_path + "/frame_{}".format(i)),
            batch_idx=i,
            show=False,
            only_joint=True)
    logger.info('Pictures saved')
```
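The nested output paths used above (e.g. `fit/output/<dataset>/picture/fit/<file>`) require every parent directory to exist. As a sketch, a `create_dir_not_exist` variant built on `os.makedirs` (an assumed helper shape, not necessarily the repository's exact implementation) creates the whole chain and tolerates repeated calls:

```python
import os
import tempfile

def create_dir_not_exist(path):
    # os.makedirs creates missing parent directories too (unlike os.mkdir),
    # and exist_ok=True makes a second call a no-op instead of an error.
    os.makedirs(path, exist_ok=True)

with tempfile.TemporaryDirectory() as root:
    nested = os.path.join(root, "fit", "output", "demo", "picture", "fit", "clip")
    create_dir_not_exist(nested)  # creates all intermediate directories
    create_dir_not_exist(nested)  # repeated call: no error
    created = os.path.isdir(nested)

print(created)  # → True
```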
In `train.py`, the optimization loop (a leftover commented-out loop was removed):

```python
    early_stop = Early_Stop()
    for epoch in tqdm(range(cfg.TRAIN.MAX_EPOCH)):
        verts, Jtr = smpl_layer(pose_params, th_betas=shape_params)
        loss = F.smooth_l1_loss(Jtr.index_select(1, index["smpl_index"]) * 100,
                                target.index_select(1, index["dataset_index"]) * 100)
```
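The loss above is the heart of the fitting: pose (and shape) parameters are optimized so that selected SMPL joints match the dataset key points. As a self-contained illustration of the same idea, the sketch below replaces the SMPL layer with a random linear map (a toy stand-in, not the actual model) and minimizes the same scaled smooth-L1 objective by plain gradient descent:

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    # Elementwise Huber loss, matching F.smooth_l1_loss with default beta.
    a = np.abs(x)
    return np.where(a < beta, 0.5 * x ** 2 / beta, a - 0.5 * beta)

def smooth_l1_grad(x, beta=1.0):
    # Derivative of the Huber loss above.
    return np.clip(x / beta, -1.0, 1.0)

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(72, 24 * 3))  # toy stand-in for pose -> joints
target = rng.normal(size=72) @ W         # "dataset" key points, flattened

pose = np.zeros(72)  # start from the rest pose, as the fitting does
lr = 0.05
losses = []
for step in range(2000):
    joints = pose @ W
    resid = (joints - target) * 100      # same x100 scaling as the real loss
    losses.append(smooth_l1(resid).mean())
    grad = (smooth_l1_grad(resid) * 100) @ W.T / resid.size
    pose -= lr * grad

print(losses[-1] < losses[0])  # → True: the joint match improves
```

Real runs differ in that the SMPL mapping is nonlinear, Adam is typically used instead of raw gradient descent, and early stopping (`Early_Stop` above) ends the loop.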
The GIF-making script now reads the HumanAct12 fit frames and writes the result into `assets/`:

```python
import matplotlib.pyplot as plt
import imageio, os

images = []
filenames = sorted(fn for fn in os.listdir('./fit/output/HumanAct12/picture/fit/P01G01R01F0001T0064A0101'))
for filename in filenames:
    images.append(imageio.imread('./fit/output/HumanAct12/picture/fit/P01G01R01F0001T0064A0101/' + filename))
imageio.mimsave('./assets/fit.gif', images, duration=0.3)
```