Release GaitLU-1M
<div align="center"><img src="./assets/nm.gif" width="100" height="100" alt="nm" /><img src="./assets/bg.gif" width="100" height="100" alt="bg" /><img src="./assets/cl.gif" width="100" height="100" alt="cl" /></div>

------------------------------------------
<!-- 📣📣📣 **[*GaitLU-1M*](https://ieeexplore.ieee.org/document/10242019) released, please check the [tutorial](datasets/GaitLU-1M/README.md).** 📣📣📣

📣📣📣 **[*SUSTech1K*](https://lidargait.github.io) released, please check the [tutorial](datasets/SUSTech1K/README.md).** 📣📣📣

🎉🎉🎉 **[*OpenGait*](https://openaccess.thecvf.com/content/CVPR2023/papers/Fan_OpenGait_Revisiting_Gait_Recognition_Towards_Better_Practicality_CVPR_2023_paper.pdf) has been accepted by CVPR 2023 as a highlight paper!** 🎉🎉🎉 -->
OpenGait is a flexible and extensible gait recognition project provided by the [Shiqi Yu Group](https://faculty.sustech.edu.cn/yusq/) and supported in part by [WATRIX.AI](http://www.watrix.ai).
## What's New
- **[Nov 2023]** The first million-level unlabeled gait dataset, i.e., [GaitLU-1M](https://ieeexplore.ieee.org/document/10242019), is released and supported in [datasets/GaitLU-1M](datasets/GaitLU-1M/README.md).
- **[Oct 2023]** Several representative pose-based methods are supported in [opengait/modeling/models](./opengait/modeling/models). This feature is mainly inherited from [FastPoseGait](https://github.com/BNU-IVC/FastPoseGait). Many thanks to the contributors😊.
- **[July 2023]** [CCPG](https://github.com/BNU-IVC/CCPG) is supported in [datasets/CCPG](./datasets/CCPG).
- **[July 2023]** [SUSTech1K](https://lidargait.github.io) is released and supported in [datasets/SUSTech1K](./datasets/SUSTech1K).
- [May 2023] A real gait recognition system [All-in-One-Gait](https://github.com/jdyjjj/All-in-One-Gait) provided by [Dongyang Jin](https://github.com/jdyjjj) is available.
<!-- - [Apr 2023] [CASIA-E](datasets/CASIA-E/README.md) is supported by OpenGait.

- [Feb 2023] The [HID 2023 competition](https://hid2023.iapr-tc4.org/) is open; welcome to participate. Additionally, the tutorial for the competition has been updated in [datasets/HID/](./datasets/HID).

- [Dec 2022] Dataset [Gait3D](https://github.com/Gait3D/Gait3D-Benchmark) is supported in [datasets/Gait3D](./datasets/Gait3D).

- [Mar 2022] Dataset [GREW](https://www.grew-benchmark.org) is supported in [datasets/GREW](./datasets/GREW). -->
## Our Publications
- [**TPAMI 2023**] Learning Gait Representation from Massive Unlabelled Walking Videos: A Benchmark, [*Paper*](https://ieeexplore.ieee.org/document/10242019), [*Dataset*](datasets/GaitLU-1M/README.md), and [*Code*](opengait/modeling/models/gaitssb.py).
- [**CVPR 2023**] LidarGait: Benchmarking 3D Gait Recognition with Point Clouds, [*Paper*](https://openaccess.thecvf.com/content/CVPR2023/papers/Shen_LidarGait_Benchmarking_3D_Gait_Recognition_With_Point_Clouds_CVPR_2023_paper.pdf), [*Dataset*](https://lidargait.github.io), and [*Code*](datasets/SUSTech1K/README.md).
- [**CVPR 2023 Highlight**] OpenGait: Revisiting Gait Recognition Towards Better Practicality, [*Paper*](https://openaccess.thecvf.com/content/CVPR2023/papers/Fan_OpenGait_Revisiting_Gait_Recognition_Towards_Better_Practicality_CVPR_2023_paper.pdf), [*Code*](configs/gaitbase).
- [**ECCV 2022**] GaitEdge: Beyond Plain End-to-end Gait Recognition for Better Practicality, [*Paper*](), [*Code*](configs/gaitedge/README.md).
The workflow of [All-in-One-Gait](https://github.com/jdyjjj/All-in-One-Gait) involves …
See [here](https://github.com/jdyjjj/All-in-One-Gait) for details.
## Highlighted features
- **Multiple Datasets supported**: [CASIA-B](http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp), [OUMVLP](http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html), [SUSTech1K](https://lidargait.github.io), [HID](http://hid2022.iapr-tc4.org/), [GREW](https://www.grew-benchmark.org), [Gait3D](https://github.com/Gait3D/Gait3D-Benchmark), [CCPG](https://openaccess.thecvf.com/content/CVPR2023/papers/Li_An_In-Depth_Exploration_of_Person_Re-Identification_and_Gait_Recognition_in_CVPR_2023_paper.pdf), [CASIA-E](https://www.scidb.cn/en/detail?dataSetId=57be0e918db743279baf44a38d013a06), and [GaitLU-1M](https://ieeexplore.ieee.org/document/10242019).
- **Multiple Models Support**: We reproduced several SOTA methods and reached the same or even better performance.
- **DDP Support**: The officially recommended [`Distributed Data Parallel (DDP)`](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) mode is used during both the training and testing phases.
- **AMP Support**: The [`Auto Mixed Precision (AMP)`](https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html?highlight=amp) option is available.
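The AMP option maps onto PyTorch's native `autocast`/`GradScaler` pair. A minimal training-step sketch, assuming a standard PyTorch setup (the function name `amp_train_step` is ours, not OpenGait API; on GPU you would pass `device_type="cuda"` and `dtype=torch.float16`):

```python
import torch

def amp_train_step(model, optimizer, scaler, x, y, loss_fn, device_type="cpu"):
    # Forward pass under autocast: eligible ops run in reduced precision.
    with torch.autocast(device_type=device_type, dtype=torch.bfloat16):
        loss = loss_fn(model(x), y)
    # GradScaler guards float16 gradients against underflow; constructed with
    # enabled=False (e.g. on CPU) it degenerates to plain backward/step.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    optimizer.zero_grad(set_to_none=True)
    return float(loss.detach())
```

In the configs below, this behaviour corresponds to `enable_float16: true`.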

data_cfg:
  dataset_name: CASIA-B
  dataset_root: your_path
  dataset_partition: ./datasets/CASIA-B/CASIA-B.json
  num_workers: 1
  remove_no_gallery: false # remove the probe if it has no gallery
  test_dataset_name: CASIA-B

evaluator_cfg:
  enable_float16: true
  restore_ckpt_strict: true
  restore_hint: 12000
  save_name: GaitSSB_Finetune
  sampler:
    batch_shuffle: false
    batch_size: 4
    sample_type: all_ordered # "all" uses the whole sequence for testing; "ordered" keeps the natural frame order. Other option: fixed_unordered
    frames_all_limit: 720 # limit the number of sampled frames to prevent out-of-memory errors
  metric: euc # cos
  transform:
    - type: BaseSilCuttingTransform

loss_cfg:
  - loss_term_weight: 1.0
    margin: 0.3
    type: TripletLoss
    log_prefix: triplet

model_cfg:
  model: GaitSSB_Finetune
  backbone_cfg:
    type: ResNet9
    block: BasicBlock
    channels: # layer configuration for automatic model construction
      - 64
      - 128
      - 256
      - 512
    layers:
      - 1
      - 1
      - 1
      - 1
    strides:
      - 1
      - 2
      - 2
      - 1
    maxpool: false
  parts_num: 31
  backbone_lr:
    - 0.
    - 0.001
    - 0.001
    - 0.001
  projector_lr: 0.01

optimizer_cfg:
  lr: 0.1
  momentum: 0.9
  solver: SGD
  weight_decay: 0.0005

scheduler_cfg:
  gamma: 0.1
  milestones: # the learning rate is multiplied by gamma at each milestone
    - 6000
    - 8000
    - 10000
  scheduler: MultiStepLR

trainer_cfg:
  find_unused_parameters: true
  enable_float16: true # half-precision floats for memory reduction and speedup
  fix_BN: true
  with_test: false
  log_iter: 100
  optimizer_reset: true
  restore_ckpt_strict: false
  restore_hint: ./output/GaitLU-1M/GaitSSB_Pretrain/GaitSSB_Pretrain/checkpoints/GaitSSB_Pretrain-150000.pt
  save_iter: 2000
  save_name: GaitSSB_Finetune
  sync_BN: true
  total_iter: 12000
  sampler:
    batch_shuffle: true
    batch_size:
      - 8 # TripletSampler: batch_size[0] is the number of identities
      - 16 # batch_size[1] is the number of sequences sampled per identity
    frames_num_fixed: 30 # fixed number of frames per sequence during training
    sample_type: fixed_unordered # "fixed" uses a fixed number of input frames; "unordered" samples them without preserving order. Other options: unfixed_ordered, all_ordered
    frames_skip_num: 4
    type: TripletSampler
  transform:
    - type: BaseSilCuttingTransform
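The `optimizer_cfg`/`scheduler_cfg` pair above describes SGD driven by PyTorch's `MultiStepLR`. A dependency-free sketch of the resulting learning-rate curve (the helper name `lr_at` is ours):

```python
def lr_at(iteration, base_lr=0.1, gamma=0.1, milestones=(6000, 8000, 10000)):
    """Learning rate in effect at `iteration` under MultiStepLR-style decay:
    the base rate is multiplied by `gamma` once per milestone already reached."""
    passed = sum(1 for m in milestones if iteration >= m)
    return base_lr * gamma ** passed
```

With the values above, training starts at 0.1 and finishes the 12000 iterations at 1e-4.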

data_cfg:
  dataset_name: GREW
  dataset_root: your_path
  dataset_partition: ./datasets/GREW/GREW.json
  num_workers: 1
  remove_no_gallery: false # remove the probe if it has no gallery
  test_dataset_name: GREW

evaluator_cfg:
  enable_float16: true
  restore_ckpt_strict: true
  restore_hint: 80000
  save_name: GaitSSB_Finetune
  eval_func: GREW_submission
  sampler:
    batch_shuffle: false
    batch_size: 4
    sample_type: all_ordered # "all" uses the whole sequence for testing; "ordered" keeps the natural frame order. Other option: fixed_unordered
    frames_all_limit: 720 # limit the number of sampled frames to prevent out-of-memory errors
  metric: euc # cos
  transform:
    - type: BaseSilCuttingTransform

loss_cfg:
  - loss_term_weight: 1.0
    margin: 0.3
    type: TripletLoss
    log_prefix: triplet

model_cfg:
  model: GaitSSB_Finetune
  backbone_cfg:
    type: ResNet9
    block: BasicBlock
    channels: # layer configuration for automatic model construction
      - 64
      - 128
      - 256
      - 512
    layers:
      - 1
      - 1
      - 1
      - 1
    strides:
      - 1
      - 2
      - 2
      - 1
    maxpool: false
  parts_num: 31
  backbone_lr:
    - 0.
    - 0.001
    - 0.001
    - 0.001
  projector_lr: 0.01

optimizer_cfg:
  lr: 0.1
  momentum: 0.9
  solver: SGD
  weight_decay: 0.

scheduler_cfg:
  gamma: 0.1
  milestones: # the learning rate is multiplied by gamma at each milestone
    - 50000
    - 60000
    - 70000
  scheduler: MultiStepLR

trainer_cfg:
  find_unused_parameters: true
  enable_float16: true # half-precision floats for memory reduction and speedup
  fix_BN: true
  with_test: false
  log_iter: 100
  optimizer_reset: true
  restore_ckpt_strict: false
  restore_hint: ./output/GaitLU-1M/GaitSSB_Pretrain/GaitSSB_Pretrain/checkpoints/GaitSSB_Pretrain-150000.pt
  save_iter: 20000
  save_name: GaitSSB_Finetune
  sync_BN: true
  total_iter: 80000
  sampler:
    batch_shuffle: true
    batch_size:
      - 128 # TripletSampler: batch_size[0] is the number of identities
      - 4 # batch_size[1] is the number of sequences sampled per identity
    frames_num_fixed: 30 # fixed number of frames per sequence during training
    sample_type: fixed_unordered # "fixed" uses a fixed number of input frames; "unordered" samples them without preserving order. Other options: unfixed_ordered, all_ordered
    frames_skip_num: 4
    type: TripletSampler
  transform:
    - type: BaseSilCuttingTransform
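In the trainer `sampler` above, `batch_size: [128, 4]` means each training batch holds 128 identities with 4 sequences each (512 sequences total), the (P, K) layout that `TripletLoss` requires. A rough, hypothetical sketch of such a sampler (function name and structure are illustrative, not OpenGait's `TripletSampler` implementation):

```python
import random

def pk_batch(seqs_by_id, p, k, rng=None):
    """Draw p identities, then k sequences per identity (falling back to
    sampling with replacement when an identity owns fewer than k sequences)."""
    rng = rng or random.Random(0)
    ids = rng.sample(sorted(seqs_by_id), p)
    batch = []
    for ident in ids:
        pool = seqs_by_id[ident]
        picks = rng.sample(pool, k) if len(pool) >= k else rng.choices(pool, k=k)
        batch.extend((ident, s) for s in picks)
    return batch
```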

data_cfg:
  dataset_name: OUMVLP
  dataset_root: your_path
  dataset_partition: ./datasets/OUMVLP/OUMVLP.json
  num_workers: 1
  remove_no_gallery: true # remove the probe if it has no gallery
  test_dataset_name: OUMVLP

evaluator_cfg:
  enable_float16: true
  restore_ckpt_strict: true
  restore_hint: 80000
  save_name: GaitSSB_Finetune
  sampler:
    batch_shuffle: false
    batch_size: 4
    sample_type: all_ordered # "all" uses the whole sequence for testing; "ordered" keeps the natural frame order. Other option: fixed_unordered
    frames_all_limit: 720 # limit the number of sampled frames to prevent out-of-memory errors
  metric: euc # cos
  transform:
    - type: BaseSilCuttingTransform

loss_cfg:
  - loss_term_weight: 1.0
    margin: 0.3
    type: TripletLoss
    log_prefix: triplet

model_cfg:
  model: GaitSSB_Finetune
  backbone_cfg:
    type: ResNet9
    block: BasicBlock
    channels: # layer configuration for automatic model construction
      - 64
      - 128
      - 256
      - 512
    layers:
      - 1
      - 1
      - 1
      - 1
    strides:
      - 1
      - 2
      - 2
      - 1
    maxpool: false
  parts_num: 31
  backbone_lr:
    - 0.001
    - 0.001
    - 0.001
    - 0.001
  projector_lr: 0.01

optimizer_cfg:
  lr: 0.1
  momentum: 0.9
  solver: SGD
  weight_decay: 0.005

scheduler_cfg:
  gamma: 0.1
  milestones: # the learning rate is multiplied by gamma at each milestone
    - 50000
    - 60000
    - 70000
  scheduler: MultiStepLR

trainer_cfg:
  find_unused_parameters: true
  enable_float16: true # half-precision floats for memory reduction and speedup
  fix_BN: true
  with_test: false
  log_iter: 100
  optimizer_reset: true
  restore_ckpt_strict: false
  restore_hint: ./output/GaitLU-1M/GaitSSB_Pretrain/GaitSSB_Pretrain/checkpoints/GaitSSB_Pretrain-150000.pt
  save_iter: 20000
  save_name: GaitSSB_Finetune
  sync_BN: true
  total_iter: 80000
  sampler:
    batch_shuffle: true
    batch_size:
      - 32 # TripletSampler: batch_size[0] is the number of identities
      - 16 # batch_size[1] is the number of sequences sampled per identity
    frames_num_fixed: 30 # fixed number of frames per sequence during training
    sample_type: fixed_unordered # "fixed" uses a fixed number of input frames; "unordered" samples them without preserving order. Other options: unfixed_ordered, all_ordered
    frames_skip_num: 4
    type: TripletSampler
  transform:
    - type: BaseSilCuttingTransform
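The `metric: euc  # cos` switch in `evaluator_cfg` selects how probe embeddings are compared against the gallery. A small NumPy sketch of the two options (function name ours, not OpenGait's evaluator code):

```python
import numpy as np

def distance_matrix(probe, gallery, metric="euc"):
    """Pairwise distances between probe (n, d) and gallery (m, d) embeddings."""
    if metric == "euc":  # Euclidean distance
        diff = probe[:, None, :] - gallery[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1))
    if metric == "cos":  # 1 - cosine similarity on L2-normalised features
        p = probe / np.linalg.norm(probe, axis=1, keepdims=True)
        g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        return 1.0 - p @ g.T
    raise ValueError(f"unknown metric: {metric}")
```

Rank-1 accuracy then follows from `argmin` over each row of the matrix.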
GaitLU-1M is extracted from public videos shot around the world, making it cover …

This great diversity and scale offer an excellent chance to learn general gait representation in a self-supervised manner.
## Download

### Step 1

Download the dataset from [Baidu Yun](https://pan.baidu.com/s/1aexoZY-deZFXSuyfOOjwJg) (code: 4rat) or [OneDrive](https://connecthkuhk-my.sharepoint.com/:f:/g/personal/noahshen_connect_hku_hk/EvFZrNKeV7VCgJKCaLay7T8Bv7TW5PHIcXWzv0XyFPliIA?e=9ZHkx9).

There are 6 sub-zip files, and you can aggregate them by:

```shell
zip -F GaitLU_Anno_part.zip --out GaitLU_Anno.zip
```

### Step 2

Then, you can decompress the file by:

```shell
unzip -P password GaitLU_Anno.zip -d <output_folder>
```

To obtain the password, you should sign the [Release Agreement](./Release_Agreement.pdf) and the [Ethical Requirement](./Ethical_Requirements.pdf) and send the signed documents to our administrator (12131100@mail.sustech.edu.cn).

Finally, you can get GaitLU-1M formatted as:

```
silhouette_cut_pkl
├── 000 # Random number
...
```
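Each leaf of the directory tree above is a `.pkl` file holding one silhouette sequence. A minimal traversal-and-loading sketch, assuming OpenGait's usual convention that every `.pkl` pickles a single array of frames (verify against your own download; the function name is ours):

```python
import os
import pickle

def iter_sequences(root):
    """Yield (relative_dir, sequence) for every .pkl found under `root`.
    Assumption: each .pkl pickles one sequence of silhouettes."""
    for dirpath, _, filenames in os.walk(root):
        for name in sorted(filenames):
            if name.endswith(".pkl"):
                with open(os.path.join(dirpath, name), "rb") as f:
                    yield os.path.relpath(dirpath, root), pickle.load(f)
```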
## Usage

For the training phase, you should modify the `dataset_root` in `configs/gaitssb/pretrain.yaml` and run the following command:

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs configs/gaitssb/pretrain.yaml --phase train --log_to_file
```

The officially provided pretrained checkpoint can be found [here]() (Coming soon).

Then you can evaluate the pretrained model on labelled gait datasets by running:

```shell
...
```
If you use this dataset in your research, please cite the following paper:

```
...
```
If you think OpenGait is useful, please cite the following paper:

```
@InProceedings{Fan_2023_CVPR,
    author    = {Fan, Chao and Liang, Junhao and Shen, Chuanfu and Hou, Saihui and Huang, Yongzhen and Yu, Shiqi},
    title     = {OpenGait: Revisiting Gait Recognition Towards Better Practicality},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {9707-9716}
}
```