diff --git a/README.md b/README.md index 37f1ae4..7f2e725 100644 --- a/README.md +++ b/README.md @@ -8,6 +8,7 @@ OpenGait is a flexible and extensible gait recognition project provided by the [ ## What's New +- [Jul 2022] Our paper "[GaitEdge: Beyond Plain End-to-end Gait Recognition for Better Practicality](configs/gaitedge/README.md)" has been accepted by ECCV 2022. - [Jun 2022] Paper "[A Comprehensive Survey on Deep Gait Recognition: Algorithms, Datasets and Challenges](https://arxiv.org/pdf/2206.13732.pdf)" is available now. - [Jun 2022] Paper "[Learning Gait Representation from Massive Unlabelled Walking Sequences: A Benchmark](https://arxiv.org/pdf/2206.13964.pdf)" is available now. And the code will be released as soon as possible. - [Mar 2022] More results on [GREW](https://www.grew-benchmark.org) are supported, and the model files are coming soon. @@ -21,116 +22,18 @@ OpenGait is a flexible and extensible gait recognition project provided by the [ - **AMP Support**: The [`Auto Mixed Precision (AMP)`](https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html?highlight=amp) option is available. - **Nice log**: We use [`tensorboard`](https://pytorch.org/docs/stable/tensorboard.html) and `logging` to log everything, which looks pretty. +## Getting Started + + +Please see [0.get_started.md](docs/0.get_started.md). We also provide the following tutorials for your reference: +- [Prepare dataset](docs/2.prepare_dataset.md) +- [Detailed configuration](docs/3.detailed_config.md) +- [Customize model](docs/4.how_to_create_your_model.md) +- [Advanced usages](docs/5.advanced_usages.md) + ## Model Zoo +Results and models are available in the [model zoo](docs/1.model_zoo.md). 
-### [CASIA-B](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp) -| Model | NM | BG | CL | Configuration | Input Size | Inference Time | Model Size | -| :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :--------: | :--------: | :--------: | :-------------------------------------------------------------------------------------------: | :--------: | :------------: | :------------: | -| Baseline | 96.3 | 92.2 | 77.6 | [baseline.yaml](config/baseline/baseline.yaml) | 64x44 | 12s | 3.78M | -| [GaitSet(AAAI2019)](https://arxiv.org/pdf/1811.06186.pdf) | 95.8(95.0) | 90.0(87.2) | 75.4(70.4) | [gaitset.yaml](config/gaitset/gaitset.yaml) | 64x44 | 13s | 2.59M | -| [GaitPart(CVPR2020)](http://home.ustc.edu.cn/~saihui/papers/cvpr2020_gaitpart.pdf) | 96.1(96.2) | 90.7(91.5) | 78.7(78.7) | [gaitpart.yaml](config/gaitpart/gaitpart.yaml) | 64x44 | 56s | 1.20M | -| [GLN*(ECCV2020)](http://home.ustc.edu.cn/~saihui/papers/eccv2020_gln.pdf) | 96.4(95.6) | 93.1(92.0) | 81.0(77.2) | [gln_phase1.yaml](config/gln/gln_phase1.yaml), [gln_phase2.yaml](config/gln/gln_phase2.yaml) | 128x88 | 47s/46s | 8.54M / 14.70M | -| [GaitGL(ICCV2021)](https://openaccess.thecvf.com/content/ICCV2021/papers/Lin_Gait_Recognition_via_Effective_Global-Local_Feature_Representation_and_Local_Temporal_ICCV_2021_paper.pdf) | 97.4(97.4) | 94.5(94.5) | 83.8(83.6) | [gaitgl.yaml](config/gaitgl/gaitgl.yaml) | 64x44 | 38s | 3.10M | - -### [OUMVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html) -| Model | Rank@1 | Configuration | Input Size | Inference Time | Model Size | -| :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :--------: | :------------------------------------------: | :--------: | :-------------: | :--------: | -| 
Baseline | 86.7 | [baseline.yaml](config/baseline/baseline_OUMVLP.yaml) | 64x44 | 1m13s | 44.11M | -| [GaitSet(AAAI2019)](https://arxiv.org/pdf/1811.06186.pdf) | 87.2(87.1) | [gaitset.yaml](config/gaitset/gaitset_OUMVLP.yaml) | 64x44 | 1m26s | 6.31M | -| [GaitPart(CVPR2020)](http://home.ustc.edu.cn/~saihui/papers/cvpr2020_gaitpart.pdf) | 88.6(88.7) | [gaitpart.yaml](config/gaitpart/gaitpart_OUMVLP.yaml) | 64x44 | 8m04s | 3.78M | -| [GaitGL(ICCV2021)](https://openaccess.thecvf.com/content/ICCV2021/papers/Lin_Gait_Recognition_via_Effective_Global-Local_Feature_Representation_and_Local_Temporal_ICCV_2021_paper.pdf) | 89.9(89.7) | [gaitgl.yaml](config/gaitgl/gaitgl_OUMVLP.yaml) | 64x44 | 5m23s | 95.62M | - - -### [GREW](https://www.grew-benchmark.org) -| Model | Rank@1 | Configuration | Input Size | Inference Time | Model Size | -| :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :--------: | :------------------------------------------: | :--------: | :-------------: | :--------: | -| Baseline | 48.5 | [baseline.yaml](config/baseline/baseline_GREW.yaml) | 64x44 | 2m23s | 84.12M | -| [GaitSet(AAAI2019)](https://arxiv.org/pdf/1811.06186.pdf) | 48.4 | [gaitset.yaml](config/gaitset/gaitset_GREW.yaml) | 64x44 | - | - | -| [GaitPart(CVPR2020)](http://home.ustc.edu.cn/~saihui/papers/cvpr2020_gaitpart.pdf) | 47.6 | [gaitpart.yaml](config/gaitpart/gaitpart_GREW.yaml) | 64x44 | - | - | -| [GaitGL(ICCV2021)](https://openaccess.thecvf.com/content/ICCV2021/papers/Lin_Gait_Recognition_via_Effective_Global-Local_Feature_Representation_and_Local_Temporal_ICCV_2021_paper.pdf) | 41.5 | [gaitgl.yaml](config/gaitgl/gaitgl_GREW.yaml) | 64x44 | - | - | -| [GaitGL(BNNeck)(ICCV2021)](https://openaccess.thecvf.com/content/ICCV2021/papers/Lin_Gait_Recognition_via_Effective_Global-Local_Feature_Representation_and_Local_Temporal_ICCV_2021_paper.pdf) | 51.7 | 
[gaitgl.yaml](config/gaitgl/gaitgl_GREW_BNNeck.yaml) | 64x44 | - | - | -| [RealGait(Arxiv now)](https://arxiv.org/pdf/2201.04806.pdf)| (54.1) | - | - | - | - | - - ------------------------------------------- - -The results in the parentheses are mentioned in the papers. - -**Note**: -- All results are Rank@1, excluding identical-view cases. -- The shown result of GLN is implemented without compact block. -- Only two RTX3090 are used for infering CASIA-B, and eight are used for infering OUMVLP. - - - -## Get Started -### Installation -1. clone this repo. - ``` - git clone https://github.com/ShiqiYu/OpenGait.git - ``` -2. Install dependenices: - - pytorch >= 1.6 - - torchvision - - pyyaml - - tensorboard - - opencv-python - - tqdm - - py7zr - - Install dependenices by [Anaconda](https://conda.io/projects/conda/en/latest/user-guide/install/index.html): - ``` - conda install tqdm pyyaml tensorboard opencv py7zr - conda install pytorch==1.6.0 torchvision -c pytorch - ``` - Or, Install dependenices by pip: - ``` - pip install tqdm pyyaml tensorboard opencv-python py7zr - pip install torch==1.6.0 torchvision==0.7.0 - ``` -### Prepare dataset -See [prepare dataset](docs/0.prepare_dataset.md). - -### Get trained model -- Option 1: - ``` - python misc/download_pretrained_model.py - ``` -- Option 2: Go to the [release page](https://github.com/ShiqiYu/OpenGait/releases/), then download the model file and uncompress it to [output](output). - -### Train -Train a model by -``` -CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./config/baseline/baseline.yaml --phase train -``` -- `python -m torch.distributed.launch` [DDP](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) launch instruction. -- `--nproc_per_node` The number of gpus to use, and it must equal the length of `CUDA_VISIBLE_DEVICES`. -- `--cfgs` The path to config file. -- `--phase` Specified as `train`. 
- -- `--log_to_file` If specified, the terminal log will be written on disk simultaneously. - -You can run commands in [train.sh](train.sh) for training different models. - -### Test -Evaluate the trained model by -``` -CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./config/baseline/baseline.yaml --phase test -``` -- `--phase` Specified as `test`. -- `--iter` Specify a iteration checkpoint. - -**Tip**: Other arguments are the same as train phase. - -You can run commands in [test.sh](test.sh) for testing different models. - -## Customize -1. Read the [detailed config](docs/1.detailed_config.md) to figure out the usage of needed setting items; -2. See [how to create your model](docs/2.how_to_create_your_model.md); -3. There are some advanced usages, refer to [advanced usages](docs/3.advanced_usages.md), please. - -## Warning -- In `DDP` mode, zombie processes may be generated when the program terminates abnormally. You can use this command [sh misc/clean_process.sh](./misc/clean_process.sh) to clear them. 
## Authors: **Open Gait Team (OGT)** diff --git a/assets/gaitedge.png b/assets/gaitedge.png new file mode 100644 index 0000000..f32c5e4 Binary files /dev/null and b/assets/gaitedge.png differ diff --git a/config/baseline/baseline.yaml b/configs/baseline/baseline.yaml similarity index 100% rename from config/baseline/baseline.yaml rename to configs/baseline/baseline.yaml diff --git a/config/baseline/baseline_GREW.yaml b/configs/baseline/baseline_GREW.yaml similarity index 100% rename from config/baseline/baseline_GREW.yaml rename to configs/baseline/baseline_GREW.yaml diff --git a/config/baseline/baseline_OUMVLP.yaml b/configs/baseline/baseline_OUMVLP.yaml similarity index 100% rename from config/baseline/baseline_OUMVLP.yaml rename to configs/baseline/baseline_OUMVLP.yaml diff --git a/config/baseline/baseline_hid.yaml b/configs/baseline/baseline_hid.yaml similarity index 100% rename from config/baseline/baseline_hid.yaml rename to configs/baseline/baseline_hid.yaml diff --git a/config/default.yaml b/configs/default.yaml similarity index 100% rename from config/default.yaml rename to configs/default.yaml diff --git a/configs/gaitedge/README.md b/configs/gaitedge/README.md new file mode 100644 index 0000000..f8190f4 --- /dev/null +++ b/configs/gaitedge/README.md @@ -0,0 +1,8 @@ +# GaitEdge: Beyond Plain End-to-end Gait Recognition for Better Practicality + +This [paper](https://arxiv.org/abs/2203.03972) has been accepted by ECCV 2022; the source code and the CASIA-B* dataset mentioned in the paper will be released within two weeks. + +## Abstract +Gait is one of the most promising biometrics for identifying individuals at a long distance. Although most previous methods have focused on recognizing silhouettes, several end-to-end methods that extract gait features directly from RGB images perform better. However, we argue that these end-to-end methods inevitably suffer from gait-unrelated noise, i.e., low-level texture and color information.
Experimentally, we design both cross-domain evaluation and visualization to support this view. In this work, we propose a novel end-to-end framework named GaitEdge, which can effectively block gait-unrelated information and release end-to-end training potential. Specifically, GaitEdge synthesizes the output of the pedestrian segmentation network and then feeds it to the subsequent recognition network, where the synthetic silhouettes consist of trainable edges of bodies and fixed interiors to limit the information that the recognition network receives. Besides, a GaitAlign module for aligning silhouettes is embedded into GaitEdge without loss of differentiability. Experimental results on CASIA-B and our newly built TTG-200 indicate that GaitEdge significantly outperforms previous methods and provides a more practical end-to-end paradigm for blocking RGB noise effectively. + +![img](../../assets/gaitedge.png) \ No newline at end of file diff --git a/config/gaitgl/gaitgl.yaml b/configs/gaitgl/gaitgl.yaml similarity index 100% rename from config/gaitgl/gaitgl.yaml rename to configs/gaitgl/gaitgl.yaml diff --git a/config/gaitgl/gaitgl_GREW.yaml b/configs/gaitgl/gaitgl_GREW.yaml similarity index 100% rename from config/gaitgl/gaitgl_GREW.yaml rename to configs/gaitgl/gaitgl_GREW.yaml diff --git a/config/gaitgl/gaitgl_GREW_BNNeck.yaml b/configs/gaitgl/gaitgl_GREW_BNNeck.yaml similarity index 100% rename from config/gaitgl/gaitgl_GREW_BNNeck.yaml rename to configs/gaitgl/gaitgl_GREW_BNNeck.yaml diff --git a/config/gaitgl/gaitgl_OUMVLP.yaml b/configs/gaitgl/gaitgl_OUMVLP.yaml similarity index 100% rename from config/gaitgl/gaitgl_OUMVLP.yaml rename to configs/gaitgl/gaitgl_OUMVLP.yaml diff --git a/config/gaitpart/gaitpart.yaml b/configs/gaitpart/gaitpart.yaml similarity index 100% rename from config/gaitpart/gaitpart.yaml rename to configs/gaitpart/gaitpart.yaml diff --git a/config/gaitpart/gaitpart_GREW.yaml b/configs/gaitpart/gaitpart_GREW.yaml similarity index 100% 
rename from config/gaitpart/gaitpart_GREW.yaml rename to configs/gaitpart/gaitpart_GREW.yaml diff --git a/config/gaitpart/gaitpart_OUMVLP.yaml b/configs/gaitpart/gaitpart_OUMVLP.yaml similarity index 100% rename from config/gaitpart/gaitpart_OUMVLP.yaml rename to configs/gaitpart/gaitpart_OUMVLP.yaml diff --git a/config/gaitset/gaitset.yaml b/configs/gaitset/gaitset.yaml similarity index 100% rename from config/gaitset/gaitset.yaml rename to configs/gaitset/gaitset.yaml diff --git a/config/gaitset/gaitset_GREW.yaml b/configs/gaitset/gaitset_GREW.yaml similarity index 100% rename from config/gaitset/gaitset_GREW.yaml rename to configs/gaitset/gaitset_GREW.yaml diff --git a/config/gaitset/gaitset_OUMVLP.yaml b/configs/gaitset/gaitset_OUMVLP.yaml similarity index 100% rename from config/gaitset/gaitset_OUMVLP.yaml rename to configs/gaitset/gaitset_OUMVLP.yaml diff --git a/config/gln/gln_phase1.yaml b/configs/gln/gln_phase1.yaml similarity index 100% rename from config/gln/gln_phase1.yaml rename to configs/gln/gln_phase1.yaml diff --git a/config/gln/gln_phase2.yaml b/configs/gln/gln_phase2.yaml similarity index 100% rename from config/gln/gln_phase2.yaml rename to configs/gln/gln_phase2.yaml diff --git a/docs/0.get_started.md b/docs/0.get_started.md new file mode 100644 index 0000000..d93f1b3 --- /dev/null +++ b/docs/0.get_started.md @@ -0,0 +1,68 @@ +# Get Started +## Installation +1. clone this repo. + ``` + git clone https://github.com/ShiqiYu/OpenGait.git + ``` +2. 
Install dependencies: + - pytorch >= 1.6 + - torchvision + - pyyaml + - tensorboard + - opencv-python + - tqdm + - py7zr + + Install dependencies with [Anaconda](https://conda.io/projects/conda/en/latest/user-guide/install/index.html): + ``` + conda install tqdm pyyaml tensorboard opencv py7zr + conda install pytorch==1.6.0 torchvision -c pytorch + ``` + Or, install dependencies with pip: + ``` + pip install tqdm pyyaml tensorboard opencv-python py7zr + pip install torch==1.6.0 torchvision==0.7.0 + ``` +## Prepare dataset +See [prepare dataset](2.prepare_dataset.md). + +## Get trained model +- Option 1: + ``` + python misc/download_pretrained_model.py + ``` +- Option 2: Go to the [release page](https://github.com/ShiqiYu/OpenGait/releases/), then download the model file and uncompress it to [output](output). + +## Train +Train a model with +``` +CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./configs/baseline/baseline.yaml --phase train +``` +- `python -m torch.distributed.launch`: the [DDP](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) launch instruction. +- `--nproc_per_node`: the number of GPUs to use; it must equal the length of `CUDA_VISIBLE_DEVICES`. +- `--cfgs`: the path to the config file. +- `--phase`: specified as `train`. +- `--log_to_file`: if specified, the terminal log will also be written to disk. + +You can run the commands in [train.sh](train.sh) to train different models. + +## Test +Evaluate the trained model with +``` +CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./configs/baseline/baseline.yaml --phase test +``` +- `--phase`: specified as `test`. +- `--iter`: specify an iteration checkpoint. + +**Tip**: the other arguments are the same as in the train phase. + +You can run the commands in [test.sh](test.sh) to test different models. + +## Customize +1. 
Read the [detailed config](3.detailed_config.md) to figure out the usage of each setting item; +2. See [how to create your model](4.how_to_create_your_model.md); +3. For advanced usages, please refer to [advanced usages](5.advanced_usages.md). + +## Warning +- In `DDP` mode, zombie processes may be generated when the program terminates abnormally. You can run [sh misc/clean_process.sh](./misc/clean_process.sh) to clear them. diff --git a/docs/1.model_zoo.md b/docs/1.model_zoo.md new file mode 100644 index 0000000..8f440e1 --- /dev/null +++ b/docs/1.model_zoo.md @@ -0,0 +1,39 @@ +# Model Zoo + +## [CASIA-B](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp) +| Model | NM | BG | CL | Configuration | Input Size | Inference Time | Model Size | +| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | +| Baseline | 96.3 | 92.2 | 77.6 | [baseline.yaml](../configs/baseline/baseline.yaml) | 64x44 | 12s | 3.78M | +| [GaitSet(AAAI2019)](https://arxiv.org/pdf/1811.06186.pdf) | 95.8(95.0) | 90.0(87.2) | 75.4(70.4) | [gaitset.yaml](../configs/gaitset/gaitset.yaml) | 64x44 | 13s | 2.59M | +| [GaitPart(CVPR2020)](http://home.ustc.edu.cn/~saihui/papers/cvpr2020_gaitpart.pdf) | 96.1(96.2) | 90.7(91.5) | 78.7(78.7) | [gaitpart.yaml](../configs/gaitpart/gaitpart.yaml) | 64x44 | 56s | 1.20M | +| [GLN*(ECCV2020)](http://home.ustc.edu.cn/~saihui/papers/eccv2020_gln.pdf) | 96.4(95.6) | 93.1(92.0) | 81.0(77.2) | [gln_phase1.yaml](../configs/gln/gln_phase1.yaml), [gln_phase2.yaml](../configs/gln/gln_phase2.yaml) | 128x88 | 47s/46s | 8.54M / 14.70M | +| 
[GaitGL(ICCV2021)](https://openaccess.thecvf.com/content/ICCV2021/papers/Lin_Gait_Recognition_via_Effective_Global-Local_Feature_Representation_and_Local_Temporal_ICCV_2021_paper.pdf) | 97.4(97.4) | 94.5(94.5) | 83.8(83.6) | [gaitgl.yaml](../configs/gaitgl/gaitgl.yaml) | 64x44 | 38s | 3.10M | + +## [OUMVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html) +| Model | Rank@1 | Configuration | Input Size | Inference Time | Model Size | +| :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :--------: | :------------------------------------------: | :--------: | :-------------: | :--------: | +| Baseline | 86.7 | [baseline.yaml](../configs/baseline/baseline_OUMVLP.yaml) | 64x44 | 1m13s | 44.11M | +| [GaitSet(AAAI2019)](https://arxiv.org/pdf/1811.06186.pdf) | 87.2(87.1) | [gaitset.yaml](../configs/gaitset/gaitset_OUMVLP.yaml) | 64x44 | 1m26s | 6.31M | +| [GaitPart(CVPR2020)](http://home.ustc.edu.cn/~saihui/papers/cvpr2020_gaitpart.pdf) | 88.6(88.7) | [gaitpart.yaml](../configs/gaitpart/gaitpart_OUMVLP.yaml) | 64x44 | 8m04s | 3.78M | +| [GaitGL(ICCV2021)](https://openaccess.thecvf.com/content/ICCV2021/papers/Lin_Gait_Recognition_via_Effective_Global-Local_Feature_Representation_and_Local_Temporal_ICCV_2021_paper.pdf) | 89.9(89.7) | [gaitgl.yaml](../configs/gaitgl/gaitgl_OUMVLP.yaml) | 64x44 | 5m23s | 95.62M | + + +## [GREW](https://www.grew-benchmark.org) +| Model | Rank@1 | Configuration | Input Size | Inference Time | Model Size | +| :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :--------: | :------------------------------------------: | :--------: | :-------------: | :--------: | +| Baseline | 48.5 | [baseline.yaml](../configs/baseline/baseline_GREW.yaml) | 64x44 | 2m23s | 84.12M | +| 
[GaitSet(AAAI2019)](https://arxiv.org/pdf/1811.06186.pdf) | 48.4 | [gaitset.yaml](../configs/gaitset/gaitset_GREW.yaml) | 64x44 | - | - | +| [GaitPart(CVPR2020)](http://home.ustc.edu.cn/~saihui/papers/cvpr2020_gaitpart.pdf) | 47.6 | [gaitpart.yaml](../configs/gaitpart/gaitpart_GREW.yaml) | 64x44 | - | - | +| [GaitGL(ICCV2021)](https://openaccess.thecvf.com/content/ICCV2021/papers/Lin_Gait_Recognition_via_Effective_Global-Local_Feature_Representation_and_Local_Temporal_ICCV_2021_paper.pdf) | 41.5 | [gaitgl.yaml](../configs/gaitgl/gaitgl_GREW.yaml) | 64x44 | - | - | +| [GaitGL(BNNeck)(ICCV2021)](https://openaccess.thecvf.com/content/ICCV2021/papers/Lin_Gait_Recognition_via_Effective_Global-Local_Feature_Representation_and_Local_Temporal_ICCV_2021_paper.pdf) | 51.7 | [gaitgl.yaml](../configs/gaitgl/gaitgl_GREW_BNNeck.yaml) | 64x44 | - | - | +| [RealGait(arXiv)](https://arxiv.org/pdf/2201.04806.pdf) | (54.1) | - | - | - | - | + + +------------------------------------------ + +The results in parentheses are those reported in the original papers. + +**Note**: +- All results are Rank@1, excluding identical-view cases. +- The GLN results shown here are obtained without the compact block. +- Only two RTX 3090 GPUs are used for inferring CASIA-B, and eight are used for inferring OUMVLP. diff --git a/docs/0.prepare_dataset.md b/docs/2.prepare_dataset.md similarity index 97% rename from docs/0.prepare_dataset.md rename to docs/2.prepare_dataset.md index 9110c53..8cba6b4 100644 --- a/docs/0.prepare_dataset.md +++ b/docs/2.prepare_dataset.md @@ -1,5 +1,5 @@ # Prepare dataset -Suppose you have downloaded the original dataset, we need to preprocess the data and save it as pickle file. Remember to set your path to the root of processed dataset in [config/*.yaml](config/). +Once you have downloaded the original dataset, we need to preprocess the data and save it as a pickle file. Remember to set your path to the root of the processed dataset in [configs/*.yaml](../configs/). 
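The preprocessing step above saves each sequence as a pickle file. As a minimal sketch of the idea, one pickle per sequence could hold a `(frames, height, width)` array of binary silhouettes; note this exact array layout is our assumption for illustration, not taken from `pretreatment.py`:

```python
import io
import pickle

import numpy as np

# Hypothetical layout: one pickle per sequence, holding a (frames, H, W)
# uint8 array of binary silhouettes (64x44 as in the model zoo tables).
seq = (np.random.rand(30, 64, 44) > 0.5).astype(np.uint8) * 255

buf = io.BytesIO()   # stands in for a .pkl file on disk
pickle.dump(seq, buf)

buf.seek(0)
loaded = pickle.load(buf)
print(loaded.shape)  # (30, 64, 44)
```

Reading the sequence back yields the same array, so the dataset loader only needs the pickle path and the expected frame shape.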
## Preprocess **CASIA-B** @@ -170,4 +170,4 @@ python datasets/pretreatment.py --input_path Path_of_GREW-rearranged --output_pa ``` ## Split dataset -You can use the partition file in dataset folder directly, or you can create yours. Remember to set your path to the partition file in [config/*.yaml](config/). +You can use the partition file in the dataset folder directly, or create your own. Remember to set your path to the partition file in [configs/*.yaml](../configs/). diff --git a/docs/1.detailed_config.md b/docs/3.detailed_config.md similarity index 98% rename from docs/1.detailed_config.md rename to docs/3.detailed_config.md index fb6232c..5186caa 100644 --- a/docs/1.detailed_config.md +++ b/docs/3.detailed_config.md @@ -37,7 +37,7 @@ * Model to be trained > * Args > * model : Model type, please refer to [Model Library](../opengait/modeling/models) for the supported values. -> * **others** : Please refer to the [Training Configuration File of Corresponding Model](../config). +> * **others** : Please refer to the [Training Configuration File of the Corresponding Model](../configs). ---- ### evaluator_cfg * Evaluator configuration @@ -78,7 +78,7 @@ > * **others**: Please refer to `evaluator_cfg`. --- **Note**: -- All the config items will be merged into [default.yaml](../config/default.yaml), and the current config is preferable. +- All the config items will be merged into [default.yaml](../configs/default.yaml), with the current config taking precedence. - The output directory, which includes the log, checkpoint and summary files, is depended on the defined `dataset_name`, `model` and `save_name` settings, like `output/${dataset_name}/${model}/${save_name}`. 
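The output-directory rule noted above, `output/${dataset_name}/${model}/${save_name}`, can be sketched as a one-line helper; the function name below is ours for illustration, not part of OpenGait:

```python
def output_dir(dataset_name: str, model: str, save_name: str) -> str:
    # Mirrors the documented layout: output/${dataset_name}/${model}/${save_name}
    return f"output/{dataset_name}/{model}/{save_name}"

print(output_dir("CASIA-B", "Baseline", "baseline"))  # output/CASIA-B/Baseline/baseline
```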
# Example diff --git a/docs/2.how_to_create_your_model.md b/docs/4.how_to_create_your_model.md similarity index 100% rename from docs/2.how_to_create_your_model.md rename to docs/4.how_to_create_your_model.md diff --git a/docs/3.advanced_usages.md b/docs/5.advanced_usages.md similarity index 94% rename from docs/3.advanced_usages.md rename to docs/5.advanced_usages.md index 1c77818..f212b6e 100644 --- a/docs/3.advanced_usages.md +++ b/docs/5.advanced_usages.md @@ -1,8 +1,8 @@ # Advanced Usages ### Cross-Dataset Evalution -> You can conduct cross-dataset evalution by just modifying several arguments in your [data_cfg](../config/baseline/baseline.yaml#L1). +> You can conduct cross-dataset evaluation by just modifying several arguments in your [data_cfg](../configs/baseline/baseline.yaml#L1). > -> Take [baseline.yaml](../config/baseline/baseline.yaml) as an example: +> Take [baseline.yaml](../configs/baseline/baseline.yaml) as an example: > ```yaml > data_cfg: > dataset_name: CASIA-B @@ -65,7 +65,7 @@ >> ]) >> return transform >> ``` -> * *Step2*: Reset the [`transform`](../config/baseline.yaml#L100) arguments in your config file: +> * *Step2*: Reset the [`transform`](../configs/baseline/baseline.yaml#L100) arguments in your config file: >> ```yaml >> transform: >> - type: TransformDemo diff --git a/test.sh b/test.sh index f7656ff..81daa38 100644 --- a/test.sh +++ b/test.sh @@ -1,32 +1,32 @@ # # **************** For CASIA-B **************** # # Baseline -CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./config/baseline/baseline.yaml --phase test +CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./configs/baseline/baseline.yaml --phase test # # GaitSet -# CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch 
--nproc_per_node=2 opengait/main.py --cfgs ./configs/gaitset/gaitset.yaml --phase test # # GaitPart -# CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./config/gaitpart/gaitpart.yaml --phase test +# CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./configs/gaitpart/gaitpart.yaml --phase test # GaitGL -# CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --master_port 12345 --nproc_per_node=4 opengait/main.py --cfgs ./config/gaitgl/gaitgl.yaml --phase test +# CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --master_port 12345 --nproc_per_node=4 opengait/main.py --cfgs ./configs/gaitgl/gaitgl.yaml --phase test # # GLN # # Phase 1 -# CUDA_VISIBLE_DEVICES=3,4 python -m torch.distributed.launch --master_port 12345 --nproc_per_node=2 opengait/main.py --cfgs ./config/gln/gln_phase1.yaml --phase test +# CUDA_VISIBLE_DEVICES=3,4 python -m torch.distributed.launch --master_port 12345 --nproc_per_node=2 opengait/main.py --cfgs ./configs/gln/gln_phase1.yaml --phase test # # Phase 2 -# CUDA_VISIBLE_DEVICES=2,5 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./config/gln/gln_phase2.yaml --phase test +# CUDA_VISIBLE_DEVICES=2,5 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./configs/gln/gln_phase2.yaml --phase test # # **************** For OUMVLP **************** # # Baseline -# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./config/baseline/baseline_OUMVLP.yaml --phase test +# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./configs/baseline/baseline_OUMVLP.yaml --phase test # # GaitSet -# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./config/gaitset/gaitset_OUMVLP.yaml --phase 
test +# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./configs/gaitset/gaitset_OUMVLP.yaml --phase test # # GaitPart -# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./config/gaitpart/gaitpart_OUMVLP.yaml --phase test +# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./configs/gaitpart/gaitpart_OUMVLP.yaml --phase test # GaitGL -# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./config/gaitgl/gaitgl_OUMVLP.yaml --phase test +# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./configs/gaitgl/gaitgl_OUMVLP.yaml --phase test diff --git a/train.sh b/train.sh index 4dbc531..5910b37 100644 --- a/train.sh +++ b/train.sh @@ -1,32 +1,32 @@ # # **************** For CASIA-B **************** # # Baseline -CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./config/baseline/baseline.yaml --phase train +CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./configs/baseline/baseline.yaml --phase train # # GaitSet -# CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./config/gaitset/gaitset.yaml --phase train +# CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./configs/gaitset/gaitset.yaml --phase train # # GaitPart -# CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./config/gaitpart/gaitpart.yaml --phase train +# CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./configs/gaitpart/gaitpart.yaml --phase train # GaitGL -# 
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs ./config/gaitgl/gaitgl.yaml --phase train +# CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs ./configs/gaitgl/gaitgl.yaml --phase train # # GLN # # Phase 1 -# CUDA_VISIBLE_DEVICES=2,5,6,7 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs ./config/gln/gln_phase1.yaml --phase train +# CUDA_VISIBLE_DEVICES=2,5,6,7 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs ./configs/gln/gln_phase1.yaml --phase train # # Phase 2 -# CUDA_VISIBLE_DEVICES=2,5,6,7 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs ./config/gln/gln_phase2.yaml --phase train +# CUDA_VISIBLE_DEVICES=2,5,6,7 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs ./configs/gln/gln_phase2.yaml --phase train # # **************** For OUMVLP **************** # # Baseline -# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./config/baseline/baseline_OUMVLP.yaml --phase train +# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./configs/baseline/baseline_OUMVLP.yaml --phase train # # GaitSet -# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./config/gaitset/gaitset_OUMVLP.yaml --phase train +# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./configs/gaitset/gaitset_OUMVLP.yaml --phase train # # GaitPart -# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./config/gaitpart/gaitpart_OUMVLP.yaml --phase train +# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py 
--cfgs ./configs/gaitpart/gaitpart_OUMVLP.yaml --phase train # GaitGL -# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./config/gaitgl/gaitgl_OUMVLP.yaml --phase train +# CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 opengait/main.py --cfgs ./configs/gaitgl/gaitgl_OUMVLP.yaml --phase train