Tutorial for Scoliosis1K
Download the Scoliosis1K Dataset
You can download the dataset from the official website. The dataset is provided as four compressed files:
- Scoliosis1K-sil-raw.zip
- Scoliosis1K-sil-pkl.zip
- Scoliosis1K-pose-raw.zip
- Scoliosis1K-pose-pkl.zip
We recommend using the provided pickle (.pkl) files for convenience.
Decompress them with the following commands:
unzip -P <password> Scoliosis1K-sil-pkl.zip
unzip -P <password> Scoliosis1K-pose-pkl.zip
Note: The <password> can be obtained by signing the release agreement and sending it to 12331257@mail.sustech.edu.cn.
Dataset Structure
After decompression, you will get the following structure:
├── Scoliosis1K-sil-pkl
│ ├── 00000 # Identity
│ │ ├── Positive # Class
│ │ │ ├── 000_180 # View
│ │ │ └── 000_180.pkl # Estimated Silhouette (PP-HumanSeg v2)
│
├── Scoliosis1K-pose-pkl
│ ├── 00000 # Identity
│ │ ├── Positive # Class
│ │ │ ├── 000_180 # View
│ │ │ └── 000_180.pkl # Estimated 2D Pose (ViTPose)
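A small helper can enumerate samples from this identity/class/view layout. This is a sketch, not part of the release; `index_dataset` is a hypothetical name, and it only assumes the directory nesting shown above:

```python
import tempfile
from pathlib import Path

def index_dataset(root):
    """Enumerate (identity, class, view, pkl_path) from the
    <root>/<identity>/<class>/<view>/<view>.pkl layout."""
    samples = []
    for pkl in sorted(Path(root).glob("*/*/*/*.pkl")):
        view_dir = pkl.parent
        samples.append((view_dir.parent.parent.name,  # identity, e.g. 00000
                        view_dir.parent.name,         # class, e.g. Positive
                        view_dir.name,                # view, e.g. 000_180
                        pkl))
    return samples

# Demo on a throwaway directory mimicking the layout above
root = Path(tempfile.mkdtemp())
(root / "00000" / "Positive" / "000_180").mkdir(parents=True)
(root / "00000" / "Positive" / "000_180" / "000_180.pkl").touch()
print(index_dataset(root)[0][:3])  # ('00000', 'Positive', '000_180')
```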
Processing from RAW Dataset (optional)
If you prefer, you can process the raw dataset into .pkl format.
# For silhouette raw data
python datasets/pretreatment.py --input_path=<path_to_raw_silhouettes> --output_path=<output_path>
# For pose raw data
python datasets/pretreatment.py --input_path=<path_to_raw_pose> --output_path=<output_path> --pose --dataset=OUMVLP
Training and Testing
Before training or testing, modify the dataset_root field in
configs/sconet/sconet_scoliosis1k.yaml.
Then run the following commands:
# Training
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -m torch.distributed.launch --nproc_per_node=4 \
opengait/main.py --cfgs configs/sconet/sconet_scoliosis1k.yaml --phase train --log_to_file
# Testing
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -m torch.distributed.launch --nproc_per_node=4 \
opengait/main.py --cfgs configs/sconet/sconet_scoliosis1k.yaml --phase test --log_to_file
Pose-to-Heatmap Conversion
From our paper: Pose as Clinical Prior: Learning Dual Representations for Scoliosis Screening (MICCAI 2025)
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -m torch.distributed.launch --nproc_per_node=4 \
datasets/pretreatment_heatmap.py \
--pose_data_path=<path_to_pose_pkl> \
--save_root=<output_path> \
--dataset_name=OUMVLP
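The core idea behind the conversion is to render each 2D keypoint as a Gaussian blob on an image-like channel. The following is a minimal illustration of that idea, not the actual `pretreatment_heatmap.py` implementation; the function name, map size, and `sigma` are assumptions:

```python
import numpy as np

def joints_to_heatmaps(joints, hw=(64, 64), sigma=2.0):
    """Render each (x, y) joint as a 2D Gaussian on its own channel;
    `joints` is a [J, 2] array in pixel coordinates of the target map."""
    h, w = hw
    ys, xs = np.mgrid[0:h, 0:w]
    maps = np.zeros((len(joints), h, w), dtype=np.float32)
    for j, (x, y) in enumerate(joints):
        # Gaussian centered on the joint; peak value 1.0 at the keypoint
        maps[j] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
    return maps

hm = joints_to_heatmaps(np.array([[32.0, 16.0], [10.0, 50.0]]))
print(hm.shape)  # (2, 64, 64); channel 0 peaks at row 16, col 32
```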
DRF Preprocessing
For the DRF model, OpenGait expects a combined runtime dataset with:
- 0_heatmap.pkl: the two-channel skeleton map sequence
- 1_pav.pkl: the paper-style Postural Asymmetry Vector (PAV), repeated along the sequence axis so it matches OpenGait's multi-input loader contract
The PAV pass is implemented from the paper:
- convert pose to COCO17 if needed
- pad missing joints
- pelvis-center and height normalize the sequence
- compute vertical, midline, and angular deviations for the 8 symmetric joint pairs
- apply IQR filtering per metric
- average over time
- min-max normalize across the full dataset (paper default), or across TRAIN_SET when --stats_partition is provided as an anti-leakage variant
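The deviation and filtering steps above can be sketched as follows. This is an illustrative reimplementation under stated assumptions, not the script's code: the COCO17 left/right pair indices, the angle convention, and the 1.5*IQR rule are all assumptions made here for the demo:

```python
import numpy as np

# Hypothetical COCO17 left/right index pairs: eyes, ears, shoulders,
# elbows, wrists, hips, knees, ankles (8 symmetric pairs).
PAIRS = [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10), (11, 12), (13, 14), (15, 16)]

def pav_metrics(seq):
    """Asymmetry metrics for a [T, 17, 2] pose sequence that is already
    pelvis-centered (midline at x = 0) and height-normalized.
    Returns three [T, 8] arrays: vertical, midline, angular deviations."""
    left = seq[:, [p[0] for p in PAIRS]]    # [T, 8, 2]
    right = seq[:, [p[1] for p in PAIRS]]
    vert = np.abs(left[..., 1] - right[..., 1])         # height difference
    mid = np.abs((left[..., 0] + right[..., 0]) / 2.0)  # midpoint offset from x = 0
    dy = left[..., 1] - right[..., 1]
    dx = np.abs(left[..., 0] - right[..., 0]) + 1e-8
    ang = np.abs(np.arctan2(dy, dx))                    # tilt of the joint-pair line
    return vert, mid, ang

def iqr_mean(x):
    """Average each metric over time after discarding 1.5*IQR outliers."""
    q1, q3 = np.percentile(x, [25, 75], axis=0, keepdims=True)
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    return np.nanmean(np.where((x >= lo) & (x <= hi), x, np.nan), axis=0)

# A perfectly symmetric pose yields all-zero metrics
seq = np.zeros((5, 17, 2))
seq[:, [p[0] for p in PAIRS], 0] = -1.0   # left joints at x = -1
seq[:, [p[1] for p in PAIRS], 0] = 1.0    # right joints at x = +1
vert, mid, ang = pav_metrics(seq)
```

Concatenating the per-pair means of the three metrics would give one fixed-length vector per sequence, which is then min-max normalized across the dataset.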
Run:
uv run python datasets/pretreatment_scoliosis_drf.py \
--pose_data_path=<path_to_pose_pkl> \
--output_path=<path_to_drf_pkl>
To reproduce the paper defaults more closely, the script now uses
configs/drf/pretreatment_heatmap_drf.yaml by default, which enables
summed two-channel skeleton maps and a literal 128-pixel height normalization.
If you explicitly want train-only PAV min-max statistics, add:
--stats_partition=./datasets/Scoliosis1K/Scoliosis1K_118.json
The output layout is:
<path_to_drf_pkl>/
├── pav_stats.pkl
├── 00000/
│ ├── Positive/
│ │ ├── 000_180/
│ │ │ ├── 0_heatmap.pkl
│ │ │ └── 1_pav.pkl
Before training or testing, point the data_cfg.dataset_root field in configs/drf/drf_scoliosis1k.yaml to this output directory.
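Before training, it is worth sanity-checking one view directory against the multi-input contract. The snippet below is a sketch: the array shapes ([T, 2, H, W] skeleton maps, [T, 24] repeated PAV) are assumptions for the demo, not guaranteed by the script:

```python
import pickle
import tempfile
import numpy as np
from pathlib import Path

def check_drf_sample(view_dir):
    """Load one DRF view directory and verify that the PAV in 1_pav.pkl
    is repeated to the same sequence length as the skeleton maps in
    0_heatmap.pkl (OpenGait's multi-input loader contract)."""
    view_dir = Path(view_dir)
    heatmap = pickle.loads((view_dir / "0_heatmap.pkl").read_bytes())
    pav = pickle.loads((view_dir / "1_pav.pkl").read_bytes())
    assert len(heatmap) == len(pav), "sequence lengths must match"
    return heatmap.shape, pav.shape

# Demo with synthetic arrays standing in for one 000_180/ directory
d = Path(tempfile.mkdtemp())
(d / "0_heatmap.pkl").write_bytes(pickle.dumps(np.zeros((30, 2, 64, 64), np.float32)))
(d / "1_pav.pkl").write_bytes(pickle.dumps(np.tile(np.zeros(24, np.float32), (30, 1))))
print(check_drf_sample(d))  # ((30, 2, 64, 64), (30, 24))
```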
DRF Training and Testing
CUDA_VISIBLE_DEVICES=0,1,2,3 \
uv run python -m torch.distributed.launch --nproc_per_node=4 \
opengait/main.py --cfgs configs/drf/drf_scoliosis1k.yaml --phase train
CUDA_VISIBLE_DEVICES=0,1,2,3 \
uv run python -m torch.distributed.launch --nproc_per_node=4 \
opengait/main.py --cfgs configs/drf/drf_scoliosis1k.yaml --phase test