OpenGait is a flexible and extensible gait analysis project provided by the Shiqi Yu Group and supported in part by WATRIX.AI. The corresponding paper has been accepted by CVPR2023 as a highlight paper. The extension paper has been accepted to TPAMI2025.
What's New
- [Sep 2025] BiggerGait has been accepted to NeurIPS2025🎉 and is available here, along with its checkpoints.
- [Jun 2025] Scoliosis1K-Pose has been accepted to MICCAI2025🎉. It extends ScoNet by introducing pose annotations and clinical priors for interpretable scoliosis screening. The dataset is available on the project homepage.
- [Jun 2025] LidarGait++ has been accepted to CVPR2025🎉 and is open-sourced in configs/lidargaitv2.
- [Jun 2025] The extension paper of OpenGait, further strengthened by the advancements of DeepGaitV2, SkeletonGait, and SkeletonGait++, has been accepted for publication in TPAMI🎉. We sincerely acknowledge the valuable contributions and continuous support from the OpenGait community.
- [Feb 2025] The diffusion-based DenoisingGait has been accepted to CVPR2025🎉 Congratulations to Dongyang! This is his SECOND paper!
- [Feb 2025] Chao successfully defended his Ph.D. thesis in Oct. 2024🎉🎉🎉 The full text is available as Chao's Thesis in English or the Chinese original (樊超的学位论文).
- [Dec 2024] The multimodal MultiGait++ has been accepted to AAAI2025🎉 Congratulations to Dongyang! This is his FIRST paper!
- [Jun 2024] The first large-scale gait-based scoliosis screening benchmark ScoNet has been accepted to MICCAI2024🎉 Congratulations to Zirui! This is his FIRST paper! The code is released here, and you can refer to the project homepage for details.
- [May 2024] The code of the Large Vision Model based method BigGait is available here, along with CCPG checkpoints.
- [Apr 2024] Our team's latest checkpoints for projects such as DeepGaitV2, SkeletonGait, SkeletonGait++, and SwinGait will be released on Hugging Face. Previously released checkpoints will also be gradually made available there.
- [Mar 2024] Chao gave a talk on 'Progress in Gait Recognition'. The video and slides are both available😊
- [Mar 2024] The code of SkeletonGait++ is released here, and you can refer to readme for details.
- [Mar 2024] BigGait has been accepted to CVPR2024🎉 Congratulations to Dingqiang! This is his FIRST paper!
- [Jan 2024] The code of the transformer-based SwinGait is available here.
Our Works
- [NeurIPS'25] BiggerGait: Unlocking Gait Recognition with Layer-wise Representations from Large Vision Models Paper, and BiggerGait Code.
- [MICCAI'25] Pose as Clinical Prior: Learning Dual Representations for Scoliosis Screening. Paper and Scoliosis1K Dataset.
- [CVPR'25] LidarGait++: Learning Local Features and Size Awareness from LiDAR Point Clouds for 3D Gait Recognition. Paper and LidarGait++ Code
- [TPAMI'25] OpenGait: A Comprehensive Benchmark Study for Gait Recognition Towards Better Practicality. Paper. This extension includes a key update with in-depth insights into emerging trends and challenges of gait recognition in Sec. VII.
- [CVPR'25] On Denoising Walking Videos for Gait Recognition. Paper and DenoisingGait Code
- [Chao's Thesis] Gait Representation Learning and Recognition, Chinese Original and English Translation.
- [AAAI'25] Exploring More from Multiple Gait Modalities for Human Identification, Paper and MultiGait++ Code.
- [TBIOM'24] A Comprehensive Survey on Deep Gait Recognition: Algorithms, Datasets, and Challenges, Survey Paper.
- [MICCAI'24] Gait Patterns as Biomarkers: A Video-Based Approach for Classifying Scoliosis, Paper, Scoliosis1K Dataset, and ScoNet Code.
- [CVPR'24] BigGait: Learning Gait Representation You Want by Large Vision Models. Paper, and BigGait Code.
- [AAAI'24] SkeletonGait++: Gait Recognition Using Skeleton Maps. Paper, and SkeletonGait++ Code.
- [AAAI'24] Cross-Covariate Gait Recognition: A Benchmark. Paper, CCGR Dataset, and ParsingGait Code.
- [Arxiv'23] Exploring Deep Models for Practical Gait Recognition. Paper, DeepGaitV2 Code, and SwinGait Code.
- [TPAMI'23] Learning Gait Representation from Massive Unlabelled Walking Videos: A Benchmark, Paper, GaitLU-1M Dataset, and GaitSSB Code.
- [CVPR'23] LidarGait: Benchmarking 3D Gait Recognition with Point Clouds, Paper, SUSTech1K Dataset and LidarGait Code.
- [CVPR'23] OpenGait: Revisiting Gait Recognition Toward Better Practicality, Highlight Paper, and GaitBase Code.
- [ECCV'22] GaitEdge: Beyond Plain End-to-end Gait Recognition for Better Practicality, Paper, and GaitEdge Code.
A Real Gait Recognition System: All-in-One-Gait
The workflow of All-in-One-Gait involves the processes of pedestrian tracking, segmentation and recognition. See here for details.
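The three-stage workflow can be sketched as below. Every name in this snippet is an illustrative placeholder (not the project's actual API); it only shows how tracked pedestrians flow from tracking into segmentation and recognition:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Track:
    """One tracked pedestrian: an id plus the cropped frames belonging to it."""
    track_id: int
    frames: List[str] = field(default_factory=list)

def track_pedestrians(video_frames: List[str]) -> List[Track]:
    # Placeholder: a real system runs a detector plus a multi-object tracker here.
    return [Track(track_id=0, frames=list(video_frames))]

def segment(track: Track) -> List[str]:
    # Placeholder: a real system produces one binary silhouette per frame.
    return [f"sil:{f}" for f in track.frames]

def recognize(silhouettes: List[str]) -> str:
    # Placeholder: a real system embeds the sequence and matches it against a gallery.
    return f"id-from-{len(silhouettes)}-frames"

def all_in_one_gait(video_frames: List[str]) -> List[str]:
    # Track -> segment -> recognize, one identity per tracked pedestrian.
    return [recognize(segment(t)) for t in track_pedestrians(video_frames)]
```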
Highlighted features
- Multiple Datasets Supported: CASIA-B, OUMVLP, SUSTech1K, HID, GREW, Gait3D, CCPG, CASIA-E, and GaitLU-1M.
- Multiple Models Supported: We have reproduced several SOTA methods and matched or exceeded their reported performance.
- DDP Support: The officially recommended Distributed Data Parallel (DDP) mode is used during both the training and testing phases.
- AMP Support: The Auto Mixed Precision (AMP) option is available.
- Nice Log: We use `tensorboard` and `logging` to log everything, which looks pretty.
Getting Started
Quick Start (uv)
```shell
# Install dependencies
uv sync --extra torch

# Train
CUDA_VISIBLE_DEVICES=0,1 uv run python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./configs/baseline/baseline.yaml --phase train

# Test
CUDA_VISIBLE_DEVICES=0,1 uv run python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./configs/baseline/baseline.yaml --phase test
```
Note: The `--nproc_per_node` argument must exactly match the number of GPUs specified in `CUDA_VISIBLE_DEVICES`. For single-GPU evaluation, use `CUDA_VISIBLE_DEVICES=0` and `--nproc_per_node=1` with the DDP launcher.
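Because a mismatch between `CUDA_VISIBLE_DEVICES` and `--nproc_per_node` is an easy mistake to make, a tiny helper (a sketch for your own launch scripts, not part of OpenGait) can derive the process count directly from the environment variable:

```python
import os

def visible_gpu_count(env=os.environ) -> int:
    """Number of GPUs exposed via CUDA_VISIBLE_DEVICES (unset or empty -> 0)."""
    devices = env.get("CUDA_VISIBLE_DEVICES", "")
    return sum(1 for d in devices.split(",") if d.strip())

# With CUDA_VISIBLE_DEVICES=0,1 the launcher should receive --nproc_per_node=2.
print(visible_gpu_count({"CUDA_VISIBLE_DEVICES": "0,1"}))  # prints 2
```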
Please see 0.get_started.md. We also provide the following tutorials for your reference:
Model Zoo
✨✨✨ You can find all the checkpoint files here! ✨✨✨
The result list of appearance-based gait recognition is available here.
The result list of pose-based gait recognition is available here.
Authors:
- Chao Fan (樊超), 12131100@mail.sustech.edu.cn
- Chuanfu Shen (沈川福), 11950016@mail.sustech.edu.cn
- Junhao Liang (梁峻豪), 12132342@mail.sustech.edu.cn
Now OpenGait is mainly maintained by Dongyang Jin (金冬阳), 11911221@mail.sustech.edu.cn
Acknowledgement
- GLN: Saihui Hou (侯赛辉)
- GaitGL: Beibei Lin (林贝贝)
- GREW: GREW TEAM
- FastPoseGait Team: FastPoseGait Team
- Gait3D Team: Gait3D Team
Citation
```bibtex
@InProceedings{Fan_2023_CVPR,
    author    = {Fan, Chao and Liang, Junhao and Shen, Chuanfu and Hou, Saihui and Huang, Yongzhen and Yu, Shiqi},
    title     = {OpenGait: Revisiting Gait Recognition Towards Better Practicality},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {9707-9716}
}

@ARTICLE{fan2025opengait,
    author  = {Fan, Chao and Hou, Saihui and Liang, Junhao and Shen, Chuanfu and Ma, Jingzhe and Jin, Dongyang and Huang, Yongzhen and Yu, Shiqi},
    journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
    title   = {OpenGait: A Comprehensive Benchmark Study for Gait Recognition Towards Better Practicality},
    year    = {2025},
    volume  = {},
    number  = {},
    pages   = {1-18},
    doi     = {10.1109/TPAMI.2025.3576283}
}
```
Note: This code is intended for academic purposes only and must not be used for anything that might be considered commercial use.



