
# Advanced Usages

## Cross-Dataset Evaluation

You can conduct cross-dataset evaluation by modifying just a few arguments in your `data_cfg`.

Take `baseline.yaml` as an example:

```yaml
data_cfg:
  dataset_name: CASIA-B
  dataset_root: your_path
  dataset_partition: ./misc/partitions/CASIA-B_include_005.json
  num_workers: 1
  remove_no_gallery: false # Remove probe if no gallery for it
  test_dataset_name: CASIA-B
```

Now, suppose we have a model trained on CASIA-B and want to test it on OUMVLP.

We should alter `dataset_root`, `dataset_partition` and `test_dataset_name`, like so:

```yaml
data_cfg:
  dataset_name: CASIA-B
  dataset_root: your_OUMVLP_path
  dataset_partition: ./misc/partitions/OUMVLP.json
  num_workers: 1
  remove_no_gallery: false # Remove probe if no gallery for it
  test_dataset_name: OUMVLP
```
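The effect of these edits can be sketched in plain Python: only the three evaluation-related keys change, while `dataset_name` keeps recording the training dataset. The helper `make_cross_dataset_cfg` below is a hypothetical illustration, not part of OpenGait:

```python
def make_cross_dataset_cfg(data_cfg, dataset_root, dataset_partition, test_dataset_name):
    """Return a copy of data_cfg retargeted for cross-dataset evaluation.

    Only the evaluation-related keys change; dataset_name stays as the
    training dataset. (Hypothetical helper for illustration only.)
    """
    cfg = dict(data_cfg)
    cfg.update(
        dataset_root=dataset_root,
        dataset_partition=dataset_partition,
        test_dataset_name=test_dataset_name,
    )
    return cfg

# Train-time config, as in baseline.yaml.
train_cfg = {
    "dataset_name": "CASIA-B",
    "dataset_root": "your_path",
    "dataset_partition": "./misc/partitions/CASIA-B_include_005.json",
    "num_workers": 1,
    "remove_no_gallery": False,
    "test_dataset_name": "CASIA-B",
}

eval_cfg = make_cross_dataset_cfg(
    train_cfg,
    dataset_root="your_OUMVLP_path",
    dataset_partition="./misc/partitions/OUMVLP.json",
    test_dataset_name="OUMVLP",
)
print(eval_cfg["dataset_name"], "->", eval_cfg["test_dataset_name"])
```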

## Data Augmentation

In OpenGait, there is a basic transform class, `BaseSilCuttingTransform`, that is called by almost all the models and is used to cut the input silhouettes.

Accordingly, by referring to this implementation, you can easily customize data augmentation in just two steps:

- **Step1**: Define the transform function or class in `transform.py` and make sure it is callable. The style of `torchvision.transforms` is recommended; the following shows a demo:

```python
import torchvision.transforms as T

class demo1():
    def __init__(self, args):
        pass

    def __call__(self, seqs):
        '''
            seqs: with dimension of [sequence, height, width]
        '''
        # Apply your augmentation to seqs here
        return seqs

class demo2():
    def __init__(self, args):
        pass

    def __call__(self, seqs):
        # Apply your augmentation to seqs here
        return seqs

def TransformDemo(base_args, demo1_args, demo2_args):
    transform = T.Compose([
        BaseSilCuttingTransform(**base_args),
        demo1(args=demo1_args),
        demo2(args=demo2_args)
    ])
    return transform
```
- **Step2**: Reset the transform arguments in your config file:

```yaml
transform:
  - type: TransformDemo
    base_args: {'img_w': 64}
    demo1_args: false
    demo2_args: false
```
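To see the callable style in action, here is a minimal, self-contained sketch of a sequence-level transform. The `RandomHorizontalFlipSeq` class and the tiny `Compose` stand-in are illustrative assumptions for this demo, not OpenGait's actual implementation:

```python
import numpy as np

class RandomHorizontalFlipSeq:
    """Illustrative transform: flip every frame of a silhouette sequence
    left-right with probability p. (Hypothetical demo, not OpenGait code.)"""
    def __init__(self, p=0.5, seed=0):
        self.p = p
        self.rng = np.random.default_rng(seed)

    def __call__(self, seqs):
        # seqs: with dimension of [sequence, height, width]
        if self.rng.random() < self.p:
            return seqs[:, :, ::-1]
        return seqs

class Compose:
    """Minimal stand-in for torchvision.transforms.Compose:
    applies each transform in order."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, seqs):
        for t in self.transforms:
            seqs = t(seqs)
        return seqs

# p=1.0 makes the flip deterministic for this demo.
transform = Compose([RandomHorizontalFlipSeq(p=1.0)])
dummy = np.zeros((4, 64, 44), dtype=np.uint8)
dummy[:, :, 0] = 255          # mark the leftmost column of every frame
out = transform(dummy)
print(out.shape)              # the shape is preserved; the mark is now rightmost
```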

## Visualization

To understand how the model works, you sometimes need to visualize intermediate results.

For this purpose, we provide a built-in instance of `torch.utils.tensorboard.SummaryWriter`, namely `self.msg_mgr.writer`, so that you can log intermediate information wherever you want.

Demo: if we want to visualize the output feature of the baseline's backbone, we can insert the following code at `baseline.py#L28`:

```python
summary_writer = self.msg_mgr.writer
if torch.distributed.get_rank() == 0 and self.training and self.iteration % 100 == 0:
    summary_writer.add_video('outs', outs.mean(2).unsqueeze(2), self.iteration)
```

Note that this example requires the `moviepy` package, so you should run `pip install moviepy` first.
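The gating condition in the snippet above (main process only, training mode, every 100 iterations) can be factored into a small helper so that visualization code stays tidy. `should_log` is a hypothetical sketch of that gate, not part of OpenGait:

```python
def should_log(rank, training, iteration, interval=100):
    """Return True only on the main (rank-0) process, during training,
    and every `interval` iterations -- the same gate used in the demo above."""
    return rank == 0 and training and iteration % interval == 0

# Only rank 0 logs, and only at iteration multiples of the interval.
print(should_log(rank=0, training=True, iteration=200))   # True
print(should_log(rank=1, training=True, iteration=200))   # False
print(should_log(rank=0, training=True, iteration=150))   # False
```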