add Gait3D support
@@ -13,7 +13,7 @@ Download URL: http://www.cbsr.ia.ac.cn/GaitDatasetB-silh.zip
             ......
         ......
 ```
-- Run `python misc/pretreatment.py --input_path CASIA-B --output_path CASIA-B-pkl`
+- Run `python datasets/pretreatment.py --input_path CASIA-B --output_path CASIA-B-pkl`
 - Processed
 ```
 CASIA-B-pkl
File diff suppressed because it is too large
@@ -0,0 +1,33 @@

# Gait3D

These are the pre-processing instructions for the Gait3D dataset. The original dataset is described [here](https://gait3d.github.io/); it is not publicly available, so you need to request access in order to download it. This README explains how to extract the original dataset and convert it to a format suitable for OpenGait.
## Data Preparation

Follow the [official data preparation guide](https://github.com/Gait3D/Gait3D-Benchmark#data-preparation).
## Data Pretreatment

```shell
python datasets/pretreatment.py --input_path 'Gait3D/2D_Silhouettes' --output_path 'Gait3D-sils-64-64-pkl'
python datasets/pretreatment_smpl.py --input_path 'Gait3D/3D_SMPLs' --output_path 'Gait3D-smpls-pkl'

# (optional) 128x128 silhouettes
python datasets/pretreatment.py --input_path 'Gait3D/2D_Silhouettes' --img_size 128 --output_path 'Gait3D-sils-128-128-pkl'

python datasets/Gait3D/merge_two_modality.py --sils_path 'Gait3D-sils-64-64-pkl' --smpls_path 'Gait3D-smpls-pkl' --output_path 'Gait3D-merged-pkl' --link 'hard'
```

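Each `.pkl` written by the pretreatment step holds one whole sequence, so a quick sanity check after running the commands above is to unpickle a file and inspect its shape. This is a minimal sketch, assuming (as OpenGait's pretreatment does, to my reading) that each pickle contains a single `uint8` NumPy array of shape `[frames, height, width]`; the `demo-seq.pkl` file below is synthetic, not part of the dataset:

```python
import pickle

import numpy as np


def check_sequence(pkl_path, img_size=64):
    """Unpickle one sequence and verify it looks like [T, H, W] uint8 frames."""
    with open(pkl_path, 'rb') as f:
        seq = pickle.load(f)
    assert isinstance(seq, np.ndarray), type(seq)
    assert seq.ndim == 3, seq.shape            # frames stacked on axis 0
    assert seq.shape[1:] == (img_size, img_size), seq.shape
    return seq.shape


# Example with a synthetic sequence (10 frames of 64x64 silhouettes):
with open('demo-seq.pkl', 'wb') as f:
    pickle.dump(np.zeros((10, 64, 64), dtype='uint8'), f)
print(check_sequence('demo-seq.pkl'))  # (10, 64, 64)
```
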
## Train

### Baseline model:
`CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs ./configs/baseline/baseline_Gait3D.yaml --phase train`

### SMPLGait model:
`CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs ./configs/smplgait/smplgait.yaml --phase train`

## Citation

If you use this dataset in your research, please cite the following paper:

```
@inproceedings{zheng2022gait3d,
  title={Gait Recognition in the Wild with Dense 3D Representations and A Benchmark},
  author={Zheng, Jinkai and Liu, Xinchen and Liu, Wu and He, Lingxiao and Yan, Chenggang and Mei, Tao},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}
```

## Acknowledgements

The dataset was collected by [Zheng et al.](https://gait3d.github.io/). The pre-processing instructions are based on the [Gait3D-Benchmark repository](https://github.com/Gait3D/Gait3D-Benchmark).

@@ -0,0 +1,46 @@

```python
import argparse
import os
import shutil
from pathlib import Path


def merge(sils_path, smpls_path, output_path, link):
    # Pick how each sequence file is materialized in the merged tree:
    # hard link (no extra disk space), symbolic link, or plain copy.
    if link == 'hard':
        link_method = os.link
    elif link == 'soft':
        link_method = os.symlink
    else:
        link_method = shutil.copyfile

    # Walk the silhouette tree, which is laid out as id/type/view/seq.
    for _id in os.listdir(sils_path):
        id_path = os.path.join(sils_path, _id)
        for _type in os.listdir(id_path):
            type_path = os.path.join(id_path, _type)
            for _view in os.listdir(type_path):
                view_path = os.path.join(type_path, _view)
                for _seq in os.listdir(view_path):
                    sils_seq_path = os.path.join(view_path, _seq)
                    # The SMPL tree mirrors the silhouette tree, so the
                    # matching sequence sits at the same relative path.
                    smpls_seq_path = os.path.join(
                        smpls_path, _id, _type, _view, _seq)
                    output_seq_path = os.path.join(
                        output_path, _id, _type, _view)
                    os.makedirs(output_seq_path, exist_ok=True)
                    # Prefix the names so both modalities can coexist
                    # in one directory.
                    link_method(sils_seq_path, os.path.join(
                        output_seq_path, "sils-" + _seq))
                    link_method(smpls_seq_path, os.path.join(
                        output_seq_path, "smpls-" + _seq))


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Gait3D dataset mergence.')
    parser.add_argument('--sils_path', default='', type=str,
                        help='Root path of the pickled silhouette dataset.')
    parser.add_argument('--smpls_path', default='', type=str,
                        help='Root path of the pickled SMPL dataset.')
    parser.add_argument('-o', '--output_path', default='', type=str,
                        help='Output path of the merged dataset.')
    parser.add_argument('-l', '--link', default='hard', type=str,
                        choices=['hard', 'soft', 'copy'],
                        help='Link type of output data.')
    args = parser.parse_args()

    merge(sils_path=Path(args.sils_path), smpls_path=Path(args.smpls_path),
          output_path=Path(args.output_path), link=args.link)
```

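To preview what `merge_two_modality.py` produces without touching the real dataset, the same traversal can be run on a throwaway two-file tree; `--link 'hard'` corresponds to `os.link` below. The ids and file names are made up for the demo:

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()
sils_path = os.path.join(root, 'Gait3D-sils-64-64-pkl')
smpls_path = os.path.join(root, 'Gait3D-smpls-pkl')
output_path = os.path.join(root, 'Gait3D-merged-pkl')

# Build two tiny parallel trees: <dataset>/<id>/<type>/<view>/<seq>.pkl
for base in (sils_path, smpls_path):
    seq_dir = os.path.join(base, '0001', 'camid0_videoid2', '000')
    os.makedirs(seq_dir)
    with open(os.path.join(seq_dir, 'seq0.pkl'), 'w') as f:
        f.write(os.path.basename(base))

# The same nested traversal as merge(), using hard links:
for _id in os.listdir(sils_path):
    for _type in os.listdir(os.path.join(sils_path, _id)):
        for _view in os.listdir(os.path.join(sils_path, _id, _type)):
            view_path = os.path.join(sils_path, _id, _type, _view)
            out_dir = os.path.join(output_path, _id, _type, _view)
            os.makedirs(out_dir, exist_ok=True)
            for _seq in os.listdir(view_path):
                os.link(os.path.join(view_path, _seq),
                        os.path.join(out_dir, 'sils-' + _seq))
                os.link(os.path.join(smpls_path, _id, _type, _view, _seq),
                        os.path.join(out_dir, 'smpls-' + _seq))

merged = sorted(os.listdir(
    os.path.join(output_path, '0001', 'camid0_videoid2', '000')))
print(merged)  # ['sils-seq0.pkl', 'smpls-seq0.pkl']
shutil.rmtree(root)
```

Hard links keep the merged tree free on disk but require source and output to live on the same filesystem; pass `--link 'copy'` when that is not the case.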
@@ -6,19 +6,19 @@ This is the official support for competition of [Human Identification at a Dista

Download the raw dataset from the [official link](http://hid2022.iapr-tc4.org/). You will get three compressed files, i.e. `train.tar`, `HID2022_test_gallery.zip` and `HID2022_test_probe.zip`.
After unpacking these three files, run this command:
```shell
-python misc/HID/pretreatment_HID.py --input_train_path="train" --input_gallery_path="HID2022_test_gallery" --input_probe_path="HID2022_test_probe" --output_path="HID-128-pkl"
+python datasets/HID/pretreatment_HID.py --input_train_path="train" --input_gallery_path="HID2022_test_gallery" --input_probe_path="HID2022_test_probe" --output_path="HID-128-pkl"
```

## Train the dataset

-Modify the `dataset_root` in `./misc/HID/baseline_hid.yaml`, and then run this command:
+Modify the `dataset_root` in `configs/baseline/baseline_hid.yaml`, and then run this command:
```shell
-CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs ./misc/HID/baseline_hid.yaml --phase train
+CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs configs/baseline/baseline_hid.yaml --phase train
```
You can also download the [trained model](https://github.com/ShiqiYu/OpenGait/releases/download/v1.1/pretrained_hid_model.zip) and place it in `output` after unzipping.

## Get the submission file

```shell
-CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs ./misc/HID/baseline_hid.yaml --phase test
+CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs configs/baseline/baseline_hid.yaml --phase test
```
The result will be generated in your working directory.