# Advanced Usages
### Cross-Dataset Evaluation
> You can conduct cross-dataset evaluation by modifying just a few arguments in your [data_cfg](../config/baseline.yaml#L1).
>
> Take [baseline.yaml](../config/baseline.yaml) as an example:
> ```yaml
> data_cfg:
>   dataset_name: CASIA-B
>   dataset_root: your_path
>   dataset_partition: ./misc/partitions/CASIA-B_include_005.json
>   num_workers: 1
>   remove_no_gallery: false # Remove probe if no gallery for it
>   test_dataset_name: CASIA-B
> ```
> Now, suppose we have a model trained on [CASIA-B](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp) and want to test it on [OUMVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html).
>
> We only need to alter `dataset_root`, `dataset_partition` and `test_dataset_name`, like this:
> ```yaml
> data_cfg:
>   dataset_name: CASIA-B
>   dataset_root: your_OUMVLP_path
>   dataset_partition: ./misc/partitions/OUMVLP.json
>   num_workers: 1
>   remove_no_gallery: false # Remove probe if no gallery for it
>   test_dataset_name: OUMVLP
> ```
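>
> Before launching the test, you can optionally sanity-check the new paths with a small script like the one below. This is only a sketch: it assumes the partition file is a JSON object whose values are lists of subject IDs and that `dataset_root` contains one folder per subject, so adapt the key/path layout to your own data.
> ```python
> import json
> import os
>
> dataset_root = 'your_OUMVLP_path'                    # same value as data_cfg.dataset_root
> dataset_partition = './misc/partitions/OUMVLP.json'  # same value as data_cfg.dataset_partition
>
> with open(dataset_partition, 'r') as f:
>     partition = json.load(f)
>
> # Report, per split, how many listed subjects are actually present on disk.
> for split, ids in partition.items():
>     missing = [i for i in ids if not os.path.isdir(os.path.join(dataset_root, i))]
>     print('%s: %d ids, %d missing from dataset_root' % (split, len(ids), len(missing)))
> ```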
---
>
<!-- ### Identification Function
> Sometimes, your test dataset may be neither the popular [CASIA-B](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp) nor the largest [OUMVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html). Meanwhile, you may need to customize a special identification function to fit your dataset.
>
> * If your path structure is similar to [CASIA-B](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp) (the 3-fold style: `id-type-view`), we recommend you to -->
### Data Augmentation
> In OpenGait, there is a basic transform class called by almost all the models, namely [BaseSilCuttingTransform](../lib/data/transform.py#L20), which is used to cut the input silhouettes.
>
> Accordingly, by referring to this implementation, you can easily customize the data augmentation in just two steps:
> * *Step1*: Define the transform function or class in [transform.py](../lib/data/transform.py), and make sure it is callable. The style of [torchvision.transforms](https://pytorch.org/vision/stable/_modules/torchvision/transforms/transforms.html) is recommended; the following shows a demo (a self-contained sketch is also given after *Step2*):
>> ```python
>> import torchvision.transforms as T
>> class demo1():
>>     def __init__(self, args):
>>         pass
>>
>>     def __call__(self, seqs):
>>         '''
>>             seqs: with dimension of [sequence, height, width]
>>         '''
>>         pass
>>         return seqs
>>
>> class demo2():
>>     def __init__(self, args):
>>         pass
>>
>>     def __call__(self, seqs):
>>         pass
>>         return seqs
>>
>> def TransformDemo(base_args, demo1_args, demo2_args):
>>     transform = T.Compose([
>>         BaseSilCuttingTransform(**base_args),
>>         demo1(args=demo1_args),
>>         demo2(args=demo2_args)
>>     ])
>>     return transform
>> ```
> * *Step2*: Reset the [`transform`](../config/baseline.yaml#L100) arguments in your config file:
>> ```yaml
>> transform:
>>   - type: TransformDemo
>>     base_args: {'img_w': 64}
>>     demo1_args: false
>>     demo2_args: false
>> ```
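> To make the two steps concrete, here is a small, self-contained sketch. The sequence-level horizontal flip below is purely illustrative and not part of OpenGait, and `BaseSilCuttingTransform` is omitted so the snippet runs on its own; in the real [transform.py](../lib/data/transform.py) you would compose it exactly as in the demo above:
>> ```python
>> import numpy as np
>> import torchvision.transforms as T
>>
>> # Illustrative only: flip a whole silhouette sequence with probability `prob`,
>> # written in the same callable style as the demo classes above.
>> class RandomSeqHorizontalFlip():
>>     def __init__(self, prob=0.5):
>>         self.prob = prob
>>
>>     def __call__(self, seqs):
>>         # seqs: [sequence, height, width]; flip every frame or none,
>>         # so the sequence stays temporally consistent.
>>         if np.random.rand() < self.prob:
>>             seqs = seqs[:, :, ::-1].copy()
>>         return seqs
>>
>> def TransformFlipDemo(flip_prob=0.5):
>>     return T.Compose([RandomSeqHorizontalFlip(prob=flip_prob)])
>>
>> # Quick check on a dummy silhouette sequence.
>> dummy = np.random.rand(30, 64, 44).astype(np.float32)
>> out = TransformFlipDemo(flip_prob=1.0)(dummy)
>> print(out.shape)  # (30, 64, 44)
>> ```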
### Visualization
> To learn how the model works, you sometimes need to visualize the intermediate results.
>
> For this purpose, we have defined a built-in instantiation of [`torch.utils.tensorboard.SummaryWriter`](https://pytorch.org/docs/stable/tensorboard.html), namely [`self.msg_mgr.writer`](../lib/utils/msg_manager.py#L24), so that you can log the intermediate information anywhere you want.
>
> Demo: if we want to visualize the output feature of [baseline's backbone](../lib/modeling/models/baseline.py#L27), we can just insert the following code at [baseline.py#L28](../lib/modeling/models/baseline.py#L28):
>> ```python
>> summary_writer = self.msg_mgr.writer
>> if torch.distributed.get_rank() == 0 and self.training and self.iteration % 100 == 0:
>>     summary_writer.add_video('outs', outs.mean(2).unsqueeze(2), self.iteration)
>> ```
> Note that this example requires the [`moviepy`](https://github.com/Zulko/moviepy) package, and hence you should run `pip install moviepy` first.
> Note that this example requires the [`moviepy`](https://github.com/Zulko/moviepy) package, and hence you should run `pip install moviepy` first. |