lidargaitv2 open-source
@@ -13,6 +13,7 @@ The corresponding [paper](https://openaccess.thecvf.com/content/CVPR2023/papers/
The extension [paper](https://arxiv.org/pdf/2405.09138) has been accepted to TPAMI2025.
## What's New
- **[Jun 2025]** [LidarGait++](https://openaccess.thecvf.com/content/CVPR2025/papers/Shen_LidarGait_Learning_Local_Features_and_Size_Awareness_from_LiDAR_Point_CVPR_2025_paper.pdf) has been accepted to CVPR2025🎉 and open-sourced in [configs/lidargaitv2](./configs/lidargaitv2/README.md).
- **[Jun 2025]** The extension paper of [OpenGait](https://arxiv.org/pdf/2405.09138), further strengthened by the advancements of [DeepGaitV2](https://github.com/ShiqiYu/OpenGait/blob/master/opengait/modeling/models/deepgaitv2.py), SkeletonGait, and [SkeletonGait++](opengait/modeling/models/skeletongait%2B%2B.py), has been accepted for publication in TPAMI🎉. We sincerely acknowledge the valuable contributions and continuous support from the OpenGait community.
- **[Feb 2025]** The diffusion-based [DenoisingGait](https://arxiv.org/pdf/2505.18582) has been accepted to CVPR2025🎉. Congratulations to [Dongyang](https://scholar.google.com.hk/citations?user=1xA5KxAAAAAJ)! This is his SECOND paper!
- **[Feb 2025]** Chao successfully defended his Ph.D. thesis in Oct. 2024🎉🎉🎉 You can access the full text in [*Chao's Thesis in English*](https://www.researchgate.net/publication/388768400_Gait_Representation_Learning_and_Recognition?_sg%5B0%5D=qaGVpS8gKWPyR7olHoFd4bCs40AZdJzaM96P3TSnxrpiP9zCIUTxzeEq8YhQOlE4WemB7iMF2fHvcJFAYHTlJhTIB2J6faVa5s-xcQVj.4112nauMM4MWUNSyUa9eMeF0MEeplptpFOgb5kSgIk3lMcfPK6TdPX1bW1y_bKSdbwXuBf29GloRsVwBdexhug&_tp=eyJjb250ZXh0Ijp7ImZpcnN0UGFnZSI6ImhvbWUiLCJwYWdlIjoicHJvZmlsZSIsInByZXZpb3VzUGFnZSI6InByb2ZpbGUiLCJwb3NpdGlvbiI6InBhZ2VDb250ZW50In19) or [*Chao's Thesis in Chinese*](https://www.researchgate.net/publication/388768605_butaitezhengxuexiyushibiesuanfayanjiu).
@@ -39,6 +40,7 @@ Our team's latest checkpoints for projects such as DeepGaitv2, SkeletonGait, Ske
- [Mar 2022] Dataset [GREW](https://www.grew-benchmark.org) is supported in [datasets/GREW](./datasets/GREW). -->
## Our Works
- [**CVPR'25**] LidarGait++: Learning Local Features and Size Awareness from LiDAR Point Clouds for 3D Gait Recognition. [*Paper*](https://openaccess.thecvf.com/content/CVPR2025/papers/Shen_LidarGait_Learning_Local_Features_and_Size_Awareness_from_LiDAR_Point_CVPR_2025_paper.pdf) and [*LidarGait++ Code*](configs/lidargaitv2/README.md)
- [**TPAMI'25**] OpenGait: A Comprehensive Benchmark Study for Gait Recognition Towards Better Practicality. [*Paper*](https://arxiv.org/pdf/2405.09138). _This extension includes a key update with in-depth insights into emerging trends and challenges of gait recognition in Sec. VII_.
- [**CVPR'25**] On Denoising Walking Videos for Gait Recognition. [*Paper*](https://arxiv.org/pdf/2505.18582) and *DenoisingGait Code* (coming soon)
- [**Chao's Thesis**] Gait Representation Learning and Recognition, [Chinese Original](https://www.researchgate.net/publication/388768605_butaitezhengxuexiyushibiesuanfayanjiu) and [English Translation](https://www.academia.edu/127496287/Gait_Representation_Learning_and_Recognition).