# Advanced Usages

### Cross-Dataset Evaluation
> You can conduct cross-dataset evaluation by modifying just a few arguments in your [data_cfg](../config/baseline/baseline.yaml#L1).
>
> Take [baseline.yaml](../config/baseline/baseline.yaml) as an example:
> ```yaml
> data_cfg:
>   dataset_name: CASIA-B
>   dataset_root: your_path
>   dataset_partition: ./datasets/CASIA-B/CASIA-B_include_005.json
>   num_workers: 1
>   remove_no_gallery: false # Remove probe if no gallery for it
>   test_dataset_name: CASIA-B
> ```
> Now suppose we have a model trained on [CASIA-B](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp) and want to test it on [OUMVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html).
>
> We only need to change `dataset_root`, `dataset_partition`, and `test_dataset_name` (note that `dataset_name` is left untouched, since it still describes the data the model was trained on):
> ```yaml
> data_cfg:
>   dataset_name: CASIA-B
>   dataset_root: your_OUMVLP_path
>   dataset_partition: ./datasets/OUMVLP/OUMVLP.json
>   num_workers: 1
>   remove_no_gallery: false # Remove probe if no gallery for it
>   test_dataset_name: OUMVLP
> ```

---

### Data Augmentation
> In OpenGait, there is a basic transform class used by almost all models: [BaseSilCuttingTransform](../opengait/data/transform.py#L20), which crops the input silhouettes.
>
> By referring to this implementation, you can easily customize the data augmentation in just two steps:
> * *Step1*: Define the transform function or class in [transform.py](../opengait/data/transform.py), and make sure it is callable. The style of [torchvision.transforms](https://pytorch.org/vision/stable/_modules/torchvision/transforms/transforms.html) is recommended; the following shows a demo (a filled-in example is sketched at the end of this page):
>> ```python
>> import torchvision.transforms as T
>>
>> class demo1():
>>     def __init__(self, args):
>>         pass
>>
>>     def __call__(self, seqs):
>>         '''
>>             seqs: a silhouette sequence with dimensions [sequence, height, width]
>>         '''
>>         # apply your first augmentation to seqs here
>>         return seqs
>>
>> class demo2():
>>     def __init__(self, args):
>>         pass
>>
>>     def __call__(self, seqs):
>>         # apply your second augmentation to seqs here
>>         return seqs
>>
>> def TransformDemo(base_args, demo1_args, demo2_args):
>>     # BaseSilCuttingTransform is already defined in transform.py
>>     transform = T.Compose([
>>         BaseSilCuttingTransform(**base_args),
>>         demo1(args=demo1_args),
>>         demo2(args=demo2_args)
>>     ])
>>     return transform
>> ```
> * *Step2*: Reset the [`transform`](../config/baseline/baseline.yaml#L100) arguments in your config file:
>> ```yaml
>> transform:
>>   - type: TransformDemo
>>     base_args: {'img_w': 64}
>>     demo1_args: false
>>     demo2_args: false
>> ```

---

### Visualization
> To understand how the model works, you sometimes need to visualize its intermediate results.
>
> For this purpose, we provide a built-in instance of [`torch.utils.tensorboard.SummaryWriter`](https://pytorch.org/docs/stable/tensorboard.html), namely [`self.msg_mgr.writer`](../opengait/utils/msg_manager.py#L24), so that you can log intermediate information anywhere you want.
>
> Demo: if we want to visualize the output feature of [baseline's backbone](../opengait/modeling/models/baseline.py#L27), we can insert the following code at [baseline.py#L28](../opengait/modeling/models/baseline.py#L28):
>> ```python
>> summary_writer = self.msg_mgr.writer
>> # Log only from the main process, during training, every 100 iterations
>> if torch.distributed.get_rank() == 0 and self.training and self.iteration % 100 == 0:
>>     summary_writer.add_video('outs', outs.mean(2).unsqueeze(2), self.iteration)
>> ```
> Note that this example requires the [`moviepy`](https://github.com/Zulko/moviepy) package, so you should run `pip install moviepy` first. More logging examples are sketched at the end of this page.
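
---

### Example: A Concrete Custom Transform
> To make *Step1* of the Data Augmentation section more concrete, here is a minimal sketch of a custom transform following the `demo1`/`demo2` skeleton. Everything specific to it is illustrative rather than part of OpenGait: the class name `RandomHorizontalFlip`, its `prob` argument, and the `TransformFlipDemo` wrapper are all hypothetical, and the code assumes that `seqs` arrives as a `numpy` array and that it lives in [transform.py](../opengait/data/transform.py) next to `BaseSilCuttingTransform`:
>> ```python
>> import numpy as np
>> import torchvision.transforms as T
>>
>> class RandomHorizontalFlip():
>>     # Hypothetical transform for illustration only, not part of OpenGait
>>     def __init__(self, prob=0.5):
>>         self.prob = prob
>>
>>     def __call__(self, seqs):
>>         '''
>>             seqs: a silhouette sequence with dimensions [sequence, height, width]
>>         '''
>>         # Flip the whole sequence at once so that every frame stays consistent.
>>         # Note that flipping reverses the apparent walking direction; whether
>>         # this is a sensible augmentation depends on your setting.
>>         if np.random.rand() < self.prob:
>>             seqs = seqs[..., ::-1].copy()  # flip along the width axis
>>         return seqs
>>
>> def TransformFlipDemo(base_args, flip_args):
>>     # BaseSilCuttingTransform is assumed to be defined in the same file
>>     transform = T.Compose([
>>         BaseSilCuttingTransform(**base_args),
>>         RandomHorizontalFlip(**flip_args)
>>     ])
>>     return transform
>> ```
> The matching config entry would then follow *Step2*, e.g. `type: TransformFlipDemo` with `base_args: {'img_w': 64}` and `flip_args: {'prob': 0.5}` (again, hypothetical names).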
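
---

### Example: Logging Scalars and Images
> Besides `add_video`, the other [`SummaryWriter`](https://pytorch.org/docs/stable/tensorboard.html) methods work the same way through `self.msg_mgr.writer`. The following sketch assumes the same context as the visualization demo above (inside the model, with `self.msg_mgr`, `self.training`, `self.iteration`, and the backbone output `outs` available), and additionally assumes that `outs` has dimensions `[n, c, s, h, w]`:
>> ```python
>> summary_writer = self.msg_mgr.writer
>> if torch.distributed.get_rank() == 0 and self.training and self.iteration % 100 == 0:
>>     # Scalar curve: mean activation of the backbone output
>>     summary_writer.add_scalar('outs/mean', outs.mean().item(), self.iteration)
>>     # Single image: channel-averaged feature map of the first frame of the
>>     # first sequence, min-max normalized to [0, 1] (shape [1, h, w] for add_image)
>>     fmap = outs[0].mean(0)[0]
>>     fmap = (fmap - fmap.min()) / (fmap.max() - fmap.min() + 1e-6)
>>     summary_writer.add_image('outs/fmap', fmap.unsqueeze(0), self.iteration)
>> ```
> You can then browse all of these logs with TensorBoard, e.g. `tensorboard --logdir <your output directory>`.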