add performance table

darkliang
2022-07-17 19:39:01 +08:00
parent b183455eb8
commit 13894439a4
5 changed files with 13 additions and 8 deletions
+9 -2
@@ -1,8 +1,15 @@
 # GaitEdge: Beyond Plain End-to-end Gait Recognition for Better Practicality
-This [paper](https://arxiv.org/abs/2203.03972) has been accepted by ECCV 2022, the source code and CASIA-B* dataset mentioned in the paper will be released within two weeks.
+This [paper](https://arxiv.org/abs/2203.03972) has been accepted by ECCV 2022.
 ## Abstract
-Gait is one of the most promising biometrics to identify individuals at a long distance. Although most previous methods have focused on recognizing the silhouettes, several end-to-end methods that extract gait features directly from RGB images perform better. However, we argue that these end-to-end methods inevitably suffer from the gait-unrelated noises, i.e., low-level texture and colorful information. Experimentally, we design both the cross-domain evaluation and visualization to stand for this view. In this work, we propose a novel end-to-end framework named GaitEdge which can effectively block gait-unrelated information and release end-to-end training potential. Specifically, GaitEdge synthesizes the output of the pedestrian segmentation network and then feeds it to the subsequent recognition network, where the synthetic silhouettes consist of trainable edges of bodies and fixed interiors to limit the information that the recognition network receives. Besides, GaitAlign for aligning silhouettes is embedded into the GaitEdge without loss of differentiability. Experimental results on CASIA-B and our newly built TTG-200 indicate that GaitEdge significantly outperforms the previous methods and provides a more practical end-to-end paradigm for blocking RGB noises effectively.
+Gait is one of the most promising biometrics to identify individuals at a long distance. Although most previous methods have focused on recognizing the silhouettes, several end-to-end methods that extract gait features directly from RGB images perform better. However, we argue that these end-to-end methods inevitably suffer from the gait-unrelated noises, i.e., low-level texture and colorful information. Experimentally, we design both the cross-domain evaluation and visualization to stand for this view. In this work, we propose a novel end-to-end framework named GaitEdge which can effectively block gait-unrelated information and release end-to-end training potential.
+Specifically, GaitEdge synthesizes the output of the pedestrian segmentation network and then feeds it to the subsequent recognition network, where the synthetic silhouettes consist of trainable edges of bodies and fixed interiors to limit the information that the recognition network receives. Besides, GaitAlign for aligning silhouettes is embedded into the GaitEdge without loss of differentiability. Experimental results on CASIA-B and our newly built TTG-200 indicate that GaitEdge significantly outperforms the previous methods and provides a more practical end-to-end paradigm for blocking RGB noises effectively.
 ![img](../../assets/gaitedge.png)
+## Performance
+| Model      | NM   | BG   | CL   | TTG-200 (cross-domain) | Configuration                                  |
+|:----------:|:----:|:----:|:----:|:----------------------:|:----------------------------------------------:|
+| GaitGL     | 94.0 | 89.6 | 81.0 | 53.2                   | [phase1_rec.yaml](./phase1_rec.yaml)           |
+| GaitGL-E2E | 99.1 | 98.2 | 89.1 | 45.6                   | [phase2_e2e.yaml](./phase2_e2e.yaml)           |
+| GaitEdge   | 98.0 | 96.3 | 88.0 | 53.9                   | [phase2_gaitedge.yaml](./phase2_gaitedge.yaml) |
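The table's trade-off can be made concrete with a quick calculation (values copied from the rows above; the simple three-condition mean is my own illustration, not the paper's reported metric):

```python
# Rank-1 accuracies from the table above: (NM, BG, CL, cross-domain TTG-200).
results = {
    "GaitGL":     (94.0, 89.6, 81.0, 53.2),
    "GaitGL-E2E": (99.1, 98.2, 89.1, 45.6),
    "GaitEdge":   (98.0, 96.3, 88.0, 53.9),
}

for model, (nm, bg, cl, cross) in results.items():
    in_domain = (nm + bg + cl) / 3  # plain mean over the CASIA-B conditions
    print(f"{model}: in-domain mean {in_domain:.1f}, cross-domain {cross:.1f}")

# Plain end-to-end training (GaitGL-E2E) gains in-domain but drops ~7.6 points
# cross-domain versus GaitGL; GaitEdge keeps most of the in-domain gain while
# also edging out GaitGL on the cross-domain TTG-200 test.
```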
+1 -1
@@ -1,7 +1,7 @@
 # Note : *** the batch_size should be equal to the gpus number at the test phase!!! ***
 data_cfg:
   dataset_name: CASIA-B_new
-  dataset_root: /home1/data/casiab-new-64-cut-pkl/
+  dataset_root: your_path
   dataset_partition: ./datasets/CASIA-B*/CASIA-B*.json
   num_workers: 1
   remove_no_gallery: false
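With the maintainer's hard-coded path replaced by a placeholder, anyone cloning the repo is expected to point `dataset_root` at their own preprocessed copy of the dataset; a minimal sketch (the path below is hypothetical):

```yaml
data_cfg:
  dataset_name: CASIA-B_new
  dataset_root: /data/casiab-pkl   # hypothetical: wherever your preprocessed data lives
  dataset_partition: ./datasets/CASIA-B*/CASIA-B*.json
```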
+1 -1
@@ -1,7 +1,7 @@
 # Note : *** the batch_size should be equal to the gpus number at the test phase!!! ***
 data_cfg:
   dataset_name: CASIA-B_new
-  dataset_root: /home1/data/casiab-128-end2end
+  dataset_root: your_path
   dataset_partition: ./datasets/CASIA-B*/CASIA-B*.json
   num_workers: 1
   remove_no_gallery: false
+1 -2
@@ -1,6 +1,6 @@
 data_cfg:
   dataset_name: CASIA-B_new
-  dataset_root: /home1/data/casiab-128-end2end
+  dataset_root: your_path
   dataset_partition: ./datasets/CASIA-B*/CASIA-B*.json
   num_workers: 1
   remove_no_gallery: false # Remove probe if no gallery for it
@@ -12,7 +12,6 @@ evaluator_cfg:
   restore_ckpt_strict: true
   restore_hint: 20000
   save_name: GaitGL_E2E
-  eval_func: identification_real_scene
   sampler:
     batch_size: 4
     sample_type: all_ordered
+1 -2
@@ -1,6 +1,6 @@
 data_cfg:
   dataset_name: CASIA-B_new
-  dataset_root: /home1/data/casiab-128-end2end
+  dataset_root: your_path
   dataset_partition: ./datasets/CASIA-B*/CASIA-B*.json
   num_workers: 1
   remove_no_gallery: false # Remove probe if no gallery for it
@@ -12,7 +12,6 @@ evaluator_cfg:
   restore_ckpt_strict: true
   restore_hint: 20000
   save_name: GaitEdge
-  eval_func: identification_real_scene
   sampler:
     batch_size: 4
     sample_type: all_ordered
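Both phase-2 configs drop the explicit `eval_func` key, so evaluation presumably falls back to the framework's default identification protocol. Anyone who still wants the real-scene protocol could re-add the key locally; a sketch, assuming the key name and value are unchanged from the removed line:

```yaml
evaluator_cfg:
  save_name: GaitEdge
  eval_func: identification_real_scene  # re-adds the evaluation function this commit removes
  sampler:
    batch_size: 4  # note above: should equal the number of GPUs at test time
```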