# Advanced Usages
## Cross-Dataset Evaluation
You can conduct cross-dataset evaluation by modifying just a few arguments in your `data_cfg`.
Take `baseline.yaml` as an example:
```yaml
data_cfg:
  dataset_name: CASIA-B
  dataset_root: your_path
  dataset_partition: ./datasets/CASIA-B/CASIA-B.json
  num_workers: 1
  remove_no_gallery: false # Remove probe if no gallery for it
  test_dataset_name: CASIA-B
```

Now, suppose we have a model trained on CASIA-B and want to test it on OUMVLP.
We should alter `dataset_root`, `dataset_partition`, and `test_dataset_name`, like so:

```yaml
data_cfg:
  dataset_name: CASIA-B
  dataset_root: your_OUMVLP_path
  dataset_partition: ./datasets/OUMVLP/OUMVLP.json
  num_workers: 1
  remove_no_gallery: false # Remove probe if no gallery for it
  test_dataset_name: OUMVLP
```
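With the config updated, evaluation runs as usual. As a rough sketch (the launcher invocation, GPU count, and config path below are assumptions that may differ across setups and OpenGait versions):

```bash
# Illustrative only: adjust GPU ids, process count, and config path to your setup.
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 \
    opengait/main.py --cfgs ./configs/baseline/baseline.yaml --phase test
```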
## Data Augmentation
In OpenGait, there is a basic transform class called by almost all models: `BaseSilCuttingTransform`, which is used to cut the input silhouettes.
Accordingly, by referring to this implementation, you can easily customize the data augmentation in just two steps:
- **Step1**: Define the transform function or class in `transform.py`, and make sure it is callable. The style of `torchvision.transforms` is recommended; the following shows a demo (a concrete example is sketched after these steps):

```python
import torchvision.transforms as T

class demo1():
    def __init__(self, args):
        pass

    def __call__(self, seqs):
        '''
            seqs: with dimension of [sequence, height, width]
        '''
        # Apply your first augmentation to the whole sequence here.
        return seqs

class demo2():
    def __init__(self, args):
        pass

    def __call__(self, seqs):
        # Apply your second augmentation here.
        return seqs

def TransformDemo(base_args, demo1_args, demo2_args):
    # Compose the built-in cutting transform with the custom ones.
    transform = T.Compose([
        BaseSilCuttingTransform(**base_args),
        demo1(args=demo1_args),
        demo2(args=demo2_args)
    ])
    return transform
```
- **Step2**: Reset the `transform` arguments in your config file:

```yaml
transform:
  - type: TransformDemo
    base_args: {'img_w': 64}
    demo1_args: false
    demo2_args: false
```
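To make the demo concrete, here is a minimal sketch of a custom transform in the same style: a random horizontal flip applied to the whole silhouette sequence. The class name and its `prob` argument are illustrative, not part of OpenGait:

```python
import numpy as np

class RandomHorizontalFlip():
    # Illustrative example, not part of OpenGait.
    def __init__(self, prob=0.5):
        self.prob = prob

    def __call__(self, seqs):
        '''
            seqs: with dimension of [sequence, height, width]
        '''
        if np.random.rand() < self.prob:
            # Flip every frame along the width axis so the
            # whole sequence stays temporally consistent.
            seqs = seqs[:, :, ::-1].copy()
        return seqs
```

It can then be composed inside `TransformDemo` exactly like `demo1` and `demo2` above.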
## Visualization
To understand how the model works, you sometimes need to visualize the intermediate results.
For this purpose, we provide a built-in instantiation of `torch.utils.tensorboard.SummaryWriter`, namely `self.msg_mgr.writer`, so that you can log intermediate information anywhere you want.

Demo: if we want to visualize the output feature of the baseline's backbone, we can insert the following code at baseline.py#L28:
```python
summary_writer = self.msg_mgr.writer
if torch.distributed.get_rank() == 0 and self.training and self.iteration % 100 == 0:
    summary_writer.add_video('outs', outs.mean(2).unsqueeze(2), self.iteration)
```

Note that this example requires the `moviepy` package, so you should run `pip install moviepy` first.
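Since `self.msg_mgr.writer` is a plain `SummaryWriter`, its other logging methods work the same way. Here is a minimal sketch for logging a scalar instead (the tag name is illustrative, and no extra package is needed):

```python
summary_writer = self.msg_mgr.writer
if torch.distributed.get_rank() == 0 and self.training and self.iteration % 100 == 0:
    # Log the mean activation of the backbone output as a scalar.
    summary_writer.add_scalar('outs/mean_activation', outs.mean().item(), self.iteration)
```

The logged results can then be inspected with `tensorboard --logdir <your output directory>`.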
## Keep Best Checkpoints
If you want to retain the strongest evaluation checkpoints instead of relying only on the latest or final save, you can enable best-checkpoint tracking in `trainer_cfg`. Example:

```yaml
trainer_cfg:
  with_test: true
  eval_iter: 1000
  save_iter: 1000
  best_ckpt_cfg:
    keep_n: 3
    metric_names:
      - scalar/test_f1/
      - scalar/test_accuracy/
```

Behavior:
- The normal numbered checkpoints are still written by `save_iter`.
- After each eval, the trainer checks the configured scalar metrics and keeps the top `N` checkpoints separately for each metric.
- Best checkpoints are saved under `output/.../checkpoints/best/<metric>/`.
- Each best-metric directory contains an `index.json` file with the retained iterations, scores, and paths.

This is useful for long or unstable runs where the best checkpoint may appear well before the final iteration.
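For orientation, an `index.json` might look like the sketch below; the field names and values are assumptions for illustration, so check a generated file for the exact schema:

```json
{
  "metric": "scalar/test_accuracy/",
  "keep_n": 3,
  "entries": [
    {"iteration": 42000, "score": 0.913, "path": "output/.../checkpoints/best/<metric>/..."}
  ]
}
```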