Add comprehensive knowledge base documentation across multiple domains

2026-02-12 14:36:37 +08:00
parent f754f6f383
commit 0fdd35bd78
8 changed files with 336 additions and 0 deletions
@@ -0,0 +1,77 @@
# PROJECT KNOWLEDGE BASE
**Generated:** 2026-02-11T10:53:29Z
**Commit:** f754f6f
**Branch:** master
## OVERVIEW
OpenGait is a research-grade, config-driven gait analysis framework centered on distributed PyTorch training/testing.
Core runtime lives in `opengait/`; `configs/` and `datasets/` are first-class operational surfaces, not just support folders.
## STRUCTURE
```text
OpenGait/
├── opengait/ # runtime package (train/test, model/data/eval pipelines)
├── configs/ # model- and dataset-specific experiment specs
├── datasets/ # preprocessing/rearrangement scripts + partitions
├── docs/ # user workflow docs
├── train.sh # launch patterns (DDP)
└── test.sh # eval launch patterns (DDP)
```
## WHERE TO LOOK
| Task | Location | Notes |
|------|----------|-------|
| Train/test entry | `opengait/main.py` | DDP init + config load + model dispatch |
| Model registration | `opengait/modeling/models/__init__.py` | dynamic class import/registration |
| Backbone/loss registration | `opengait/modeling/backbones/__init__.py`, `opengait/modeling/losses/__init__.py` | same dynamic pattern |
| Config merge behavior | `opengait/utils/common.py::config_loader` | merges into `configs/default.yaml` |
| Data loading contract | `opengait/data/dataset.py`, `opengait/data/collate_fn.py` | `.pkl` only, sequence sampling modes |
| Evaluation dispatch | `opengait/evaluation/evaluator.py` | dataset-specific eval routines |
| Dataset preprocessing | `datasets/pretreatment.py` + dataset subdirs | many standalone CLI tools |
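The dynamic import/registration pattern referenced above (models, backbones, losses) can be sketched as follows. This is a simplified stand-in, not the actual `__init__.py` code: it scans a directory for module files and maps class names to classes, which is the essence of the "config string → model class" lookup.

```python
import importlib.util
import os
import tempfile

def build_registry(package_dir):
    """Scan a directory for .py modules and collect their classes by name,
    mimicking the filesystem-driven registries in opengait/modeling/*."""
    registry = {}
    for fname in sorted(os.listdir(package_dir)):
        if not fname.endswith(".py") or fname.startswith("_"):
            continue
        path = os.path.join(package_dir, fname)
        spec = importlib.util.spec_from_file_location(fname[:-3], path)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        for name, obj in vars(mod).items():
            if isinstance(obj, type):
                registry[name] = obj  # config string -> class
    return registry

# demo: a throwaway "models" directory with one model file
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "baseline.py"), "w") as f:
        f.write("class Baseline:\n    pass\n")
    registry = build_registry(d)
    assert "Baseline" in registry
```

This is why filenames and class names matter operationally: renaming a file or class silently changes what the config can resolve.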
## CODE MAP
| Symbol / Module | Type | Location | Refs | Role |
|-----------------|------|----------|------|------|
| `config_loader` | function | `opengait/utils/common.py` | high | YAML merge + default overlay |
| `get_ddp_module` | function | `opengait/utils/common.py` | high | wraps modules with DDP passthrough |
| `BaseModel` | class | `opengait/modeling/base_model.py` | high | canonical train/test lifecycle |
| `LossAggregator` | class | `opengait/modeling/loss_aggregator.py` | medium | consumes `training_feat` contract |
| `DataSet` | class | `opengait/data/dataset.py` | high | dataset partition + sequence loading |
| `CollateFn` | class | `opengait/data/collate_fn.py` | high | fixed/unfixed/all sampling policy |
| `evaluate_*` funcs | functions | `opengait/evaluation/evaluator.py` | medium | metric/report orchestration |
| `models` package registry | dynamic module | `opengait/modeling/models/__init__.py` | high | config string → model class |
## CONVENTIONS
- Launch pattern is DDP-first (`python -m torch.distributed.launch ... opengait/main.py --cfgs ... --phase ...`).
- Model/loss/backbone discoverability is filesystem-driven via package-level dynamic imports.
- Experiment config semantics: custom YAML overlays `configs/default.yaml` (local key precedence).
- Outputs are keyed by config identity: `output/${dataset_name}/${model}/${save_name}`.
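The overlay semantics (custom YAML merged into `configs/default.yaml` with local key precedence) can be illustrated with a minimal recursive merge. This is a sketch of the behavior, not the actual `MergeCfgsDict` implementation:

```python
def merge_cfgs(dst, src):
    """Recursively overlay src onto dst; src (the experiment YAML) wins
    on conflicts, while untouched default keys survive."""
    for key, val in src.items():
        if isinstance(val, dict) and isinstance(dst.get(key), dict):
            merge_cfgs(dst[key], val)
        else:
            dst[key] = val
    return dst

default = {"trainer_cfg": {"log_iter": 100, "total_iter": 60000}}
custom = {"trainer_cfg": {"total_iter": 20000}}
merged = merge_cfgs(default, custom)
# local key wins, untouched defaults survive
assert merged["trainer_cfg"] == {"log_iter": 100, "total_iter": 20000}
```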
## ANTI-PATTERNS (THIS PROJECT)
- Do not feed non-`.pkl` sequence files into runtime loaders (`opengait/data/dataset.py`).
- Do not violate sampler shape assumptions (`trainer_cfg.sampler.batch_size` is `[P, K]` for triplet regimes).
- Do not ignore DDP cleanup guidance; abnormal exits can leave zombie processes (`misc/clean_process.sh`).
- Do not add unregistered model/loss classes outside expected directories (`opengait/modeling/models`, `opengait/modeling/losses`).
## UNIQUE STYLES
- `datasets/` is intentionally script-heavy (rearrange/extract/pretreat), not a pure library package.
- Research model zoo is broad; many model files co-exist as first-class references.
- Recent repo trajectory includes scoliosis screening models (ScoNet lineage), not only person-ID gait benchmarks.
## COMMANDS
```bash
# train
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./configs/baseline/baseline.yaml --phase train
# test
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./configs/baseline/baseline.yaml --phase test
# preprocess (generic)
python datasets/pretreatment.py --input_path <raw_or_rearranged> --output_path <pkl_root>
```
## NOTES
- LSP symbol map was unavailable in this environment (missing `basedpyright-langserver`), so centrality here is import/search-derived.
- `train.sh` / `test.sh` are canonical launch examples across datasets/models.
- Academic-use-only restriction is stated in repository README.
@@ -0,0 +1,30 @@
# CONFIG SURFACE KNOWLEDGE BASE
## OVERVIEW
`configs/` is the operational API for experiments. Runtime behavior is primarily configured here, not hardcoded.
## STRUCTURE
```text
configs/
├── default.yaml # base config merged into every run
├── <model-family>/*.yaml # experiment overlays
└── */README.md # family-specific instructions (when present)
```
## WHERE TO LOOK
| Task | Location | Notes |
|------|----------|-------|
| Global defaults | `default.yaml` | base for all runs |
| Model selection | `model_cfg.model` | must match class name in `modeling/models` |
| Data split binding | `data_cfg.dataset_partition` | points to `datasets/*/*.json` |
| Sampler behavior | `trainer_cfg.sampler`, `evaluator_cfg.sampler` | directly controls collate/sampler path |
## CONVENTIONS
- Config files are overlays merged into `default.yaml` via `MergeCfgsDict`.
- Keys accepted by classes/functions are validated at runtime; unknown keys are logged as unexpected.
- Paths and names here directly determine output directory keying (`output/<dataset>/<model>/<save_name>`).
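The output-directory keying described above can be sketched in a few lines. The exact config keys shown here (`dataset_name`, `model`, `save_name` under these sections) are illustrative assumptions about where the values live, not a verified map of the config schema:

```python
import os

def output_dir(cfg):
    """Derive a run's output directory from config identity,
    following the output/<dataset>/<model>/<save_name> convention."""
    return os.path.join(
        "output",
        cfg["data_cfg"]["dataset_name"],
        cfg["model_cfg"]["model"],
        cfg["trainer_cfg"]["save_name"],  # key placement is illustrative
    )

cfg = {
    "data_cfg": {"dataset_name": "CASIA-B"},
    "model_cfg": {"model": "Baseline"},
    "trainer_cfg": {"save_name": "GaitBase_DA"},
}
assert output_dir(cfg) == os.path.join("output", "CASIA-B", "Baseline", "GaitBase_DA")
```

The practical consequence: changing any of these three config values silently starts a new checkpoint/log lineage.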
## ANTI-PATTERNS
- Don't use model names not registered in `opengait/modeling/models`.
- Don't treat `batch_size` as a scalar in triplet training regimes when the config expects `[P, K]`.
- Don't bypass dataset partition files; the loader expects explicit train/test pid sets.
@@ -0,0 +1,32 @@
# DATASET PREP KNOWLEDGE BASE
## OVERVIEW
`datasets/` is a script-heavy preprocessing workspace. It transforms raw benchmarks into OpenGait's required pickle layout and partition metadata.
## STRUCTURE
```text
datasets/
├── pretreatment.py # generic image->pkl pipeline (and pose mode)
├── pretreatment_heatmap.py # heatmap generation for skeleton workflows
├── <DatasetName>/README.md # dataset-specific acquisition + conversion steps
├── <DatasetName>/*.json # train/test partition files
└── <DatasetName>/*.py # extract/rearrange/convert scripts
```
## WHERE TO LOOK
| Task | Location | Notes |
|------|----------|-------|
| Generic preprocessing | `pretreatment.py` | handles multiple datasets, pose switch |
| OUMVLP pose index flow | `OUMVLP/README.md`, `OUMVLP/pose_index_extractor.py` | required for temporal consistency |
| Heatmap + skeleton prep | `pretreatment_heatmap.py`, `ln_sil_heatmap.py`, `configs/skeletongait/README.md` | multi-step pipeline |
| Dataset splits | `<Dataset>/<Dataset>.json` | consumed by runtime `data_cfg.dataset_partition` |
## CONVENTIONS
- Final runtime-ready format is `id/type/view/*.pkl`.
- Many dataset folders provide both rearrange and extraction scripts; follow README ordering strictly.
- Some pipelines require auxiliary artifacts (e.g., OUMVLP pose match indices) before pretreatment.
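A quick sanity check of the final `id/type/view/*.pkl` layout can be sketched as a tree walk. This is a hypothetical helper for validating preprocessing output, not a script that exists in the repo:

```python
import os
import tempfile

def index_dataset(root):
    """Collect (id, type, view, path) tuples from the id/type/view/*.pkl
    layout the runtime expects after pretreatment."""
    seqs = []
    for sid in sorted(os.listdir(root)):
        for styp in sorted(os.listdir(os.path.join(root, sid))):
            for view in sorted(os.listdir(os.path.join(root, sid, styp))):
                vdir = os.path.join(root, sid, styp, view)
                for f in sorted(os.listdir(vdir)):
                    if f.endswith(".pkl"):
                        seqs.append((sid, styp, view, os.path.join(vdir, f)))
    return seqs

# demo against a throwaway tree with one sequence
with tempfile.TemporaryDirectory() as root:
    vdir = os.path.join(root, "001", "nm-01", "090")
    os.makedirs(vdir)
    open(os.path.join(vdir, "090.pkl"), "w").close()
    seqs = index_dataset(root)
    assert seqs[0][:3] == ("001", "nm-01", "090")
```

An empty result from such a walk usually means a rearrange step was skipped or pretreatment was pointed at the wrong root.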
## ANTI-PATTERNS
- Don't point the runtime at raw image trees; training expects the pkl-converted structure.
- Don't skip dataset-specific rearrange steps; many raw layouts are incompatible with the runtime parser.
- Don't ignore documented optional/required flags in per-dataset README commands.
@@ -0,0 +1,86 @@
# ScoNet and DRF: Status, Architecture, and Training Guide
This document provides a technical overview of the Scoliosis screening models in OpenGait, mapping paper concepts to the repository's implementation status.
## DRF implementation status in OpenGait
As of the current version, the **Dual Representation Framework (DRF)** described in the MICCAI 2025 paper *"Pose as Clinical Prior: Learning Dual Representations for Scoliosis Screening"* is **not yet explicitly implemented** as a standalone model in this repository.
### Current State
- **ScoNet-MT (Functional Implementation)**: While the class in `opengait/modeling/models/sconet.py` is named `ScoNet`, it is functionally the **ScoNet-MT** (Multi-Task) variant described in the MICCAI 2024 paper. It utilizes both classification and triplet losses.
- **Dual Representation (DRF)**: While `opengait/modeling/models/skeletongait++.py` implements a dual-representation (silhouette + pose heatmap) architecture for gait recognition, the specific DRF screening model (MICCAI 2025) is not yet explicitly implemented as a standalone class.
- **Naming Note**: The repository uses the base name `ScoNet` for the multi-task implementation, as it is the high-performance variant recommended for use.
### Implementation Blueprint for DRF
To implement DRF within the OpenGait framework, follow this structure:
1. **Model Location**: Create `opengait/modeling/models/drf.py` inheriting from `BaseModel`.
2. **Input Handling**: Extend `inputs_pretreament` to handle both silhouettes and pose heatmaps (refer to `SkeletonGaitPP.inputs_pretreament` in `skeletongait++.py`).
3. **Dual-Branch Backbone**: Use separate early layers for silhouette and skeleton map streams, then fuse via `AttentionFusion` (from `skeletongait++.py:135`) or a PAV-Guided Attention module as described in the DRF paper.
4. **Forward Contract**:
- `training_feat`: Must include `triplet` (for identity/feature consistency) and `softmax` (for screening classification).
- `visual_summary`: Include `image/sils` and `image/heatmaps` for TensorBoard visualization.
- `inference_feat`: Return `logits` for classification.
5. **Config**: Create `configs/drf/drf_scoliosis1k.yaml` specifying `model: DRF` and configuring the dual-stream backbone.
6. **Evaluator**: Use `eval_func: evaluate_scoliosis` in the config to leverage the existing screening metrics (Accuracy, Precision, Recall, F1).
7. **Dataset**: Requires the **Scoliosis1K-Pose** dataset which provides 17 anatomical keypoints in MS-COCO format alongside the existing silhouettes.
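The forward contract in step 4 can be sketched as a plain dictionary shape. The key names follow the `BaseModel` contract described elsewhere in this knowledge base; the function itself is hypothetical and the tensor arguments are placeholders:

```python
def drf_forward_contract(embeddings, logits, labels, sils, heatmaps):
    """Shape of the dict a DRF-style model's forward() would return --
    a sketch of step 4 above, with placeholder values."""
    return {
        "training_feat": {
            "triplet": {"embeddings": embeddings, "labels": labels},
            "softmax": {"logits": logits, "labels": labels},
        },
        "visual_summary": {"image/sils": sils, "image/heatmaps": heatmaps},
        "inference_feat": {"logits": logits},
    }

out = drf_forward_contract("emb", "logits", "labels", "sils", "maps")
assert set(out) == {"training_feat", "visual_summary", "inference_feat"}
assert set(out["training_feat"]) == {"triplet", "softmax"}
```

The exact subkey names inside `triplet`/`softmax` must match what the configured loss classes consume; check the existing loss implementations before committing to them.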
---
## ScoNet/ScoNet-MT architecture mapping
> [!IMPORTANT]
> **Naming Clarification**: The implementation in this repository is **ScoNet-MT**, not the single-task ScoNet.
> - **ScoNet (Single-Task)**: Defined in the paper as using only CrossEntropyLoss.
> - **ScoNet-MT (Multi-Task)**: Defined as using $L_{total} = L_{ce} + L_{triplet}$.
>
> **Evidence for ScoNet-MT in this repo:**
> 1. **Dual Loss Configuration**: `configs/sconet/sconet_scoliosis1k.yaml` (lines 24-33) defines both `TripletLoss` (margin: 0.2) and `CrossEntropyLoss`.
> 2. **Dual-Key Forward Pass**: `sconet.py` (lines 42-46) returns both `'triplet'` and `'softmax'` keys in the `training_feat` dictionary.
> 3. **Triplet Sampling**: The trainer uses `TripletSampler` with `batch_size: [8, 8]` (P=8, K=8) to support triplet mining (config lines 92-99).
>
> A "pure" ScoNet implementation would require removing the `TripletLoss`, switching to a standard `InferenceSampler`, and removing the `triplet` key from the model's `forward` return.
The `ScoNet` (functionally ScoNet-MT) implementation in `opengait/modeling/models/sconet.py` maps to the paper as follows:
| Paper Component | Code Reference | Description |
| :--- | :--- | :--- |
| **Backbone** | `ResNet9` in `backbones/resnet.py` | A customized ResNet with 4 layers and configurable channels. |
| **Temporal Aggregation** | `self.TP` (Temporal Pooling) | Uses `PackSequenceWrapper(torch.max)` to aggregate frame features. |
| **Spatial Features** | `self.HPP` (Horizontal Pooling) | `HorizontalPoolingPyramid` with `bin_num: 16`. |
| **Feature Mapping** | `self.FCs` (`SeparateFCs`) | Maps pooled features to a latent embedding space. |
| **Classification Head** | `self.BNNecks` (`SeparateBNNecks`) | Produces logits for the 3-class screening task. |
| **Label Mapping** | `sconet.py` lines 21-23 | `negative: 0`, `neutral: 1`, `positive: 2`. |
---
## Training guide (dataloader, optimizer, logging)
### Dataloader Setup
The training configuration is defined in `configs/sconet/sconet_scoliosis1k.yaml`:
- **Sampler**: `TripletSampler` (standard for OpenGait).
- **Batch Size**: `[8, 8]` (8 identities, 8 sequences per identity).
- **Sequence Sampling**: `fixed_unordered` with `frames_num_fixed: 30`.
- **Transform**: `BaseSilCuttingTransform` for silhouette preprocessing.
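The `[8, 8]` batch composition (8 identities × 8 sequences) can be sketched as follows. This is a simplified illustration of the P×K semantics, not the actual `TripletSampler` code:

```python
import random

def sample_pk_batch(ids_to_seqs, p, k, rng):
    """Compose one triplet batch: p identities, k sequences each,
    mirroring the batch_size = [P, K] semantics."""
    batch = []
    for sid in rng.sample(sorted(ids_to_seqs), p):
        pool = ids_to_seqs[sid]
        # resample with replacement if an identity has fewer than k sequences
        picks = rng.choices(pool, k=k) if len(pool) < k else rng.sample(pool, k)
        batch.extend((sid, s) for s in picks)
    return batch

rng = random.Random(0)
data = {f"id{i:03d}": [f"seq{j}" for j in range(10)] for i in range(20)}
batch = sample_pk_batch(data, p=8, k=8, rng=rng)
assert len(batch) == 64                        # 8 * 8 samples
assert len({sid for sid, _ in batch}) == 8     # 8 distinct identities
```

This structure is what makes triplet mining possible: every batch is guaranteed to contain positives (same id) and negatives (different id) for each anchor.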
### Optimizer and Scheduler
- **Optimizer**: SGD
- `lr: 0.1`
- `momentum: 0.9`
- `weight_decay: 0.0005`
- **Scheduler**: `MultiStepLR`
- `milestones: [10000, 14000, 18000]`
- `gamma: 0.1`
- **Total Iterations**: 20,000.
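The effective learning rate under this `MultiStepLR` schedule is easy to compute by hand, which helps when reading loss curves:

```python
def lr_at(iteration, base_lr=0.1, milestones=(10000, 14000, 18000), gamma=0.1):
    """Effective SGD learning rate under MultiStepLR:
    base_lr decayed by gamma once per milestone already passed."""
    decays = sum(1 for m in milestones if iteration >= m)
    return base_lr * gamma ** decays

assert lr_at(5000) == 0.1                    # before any milestone
assert abs(lr_at(12000) - 0.01) < 1e-12      # after the first decay
assert abs(lr_at(19999) - 1e-4) < 1e-12      # after all three decays
```

So the final 2,000 iterations run at lr = 1e-4, three orders of magnitude below the starting rate.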
### Logging
- **TensorBoard**: OpenGait natively supports TensorBoard logging. Training losses (`triplet`, `softmax`) and accuracies are logged every `log_iter: 100`.
- **WandB**: There is **no native Weights & Biases (WandB) integration** in the current codebase. Users wishing to use WandB must manually integrate it into `opengait/utils/msg_manager.py` or `opengait/main.py`.
- **Evaluation**: Metrics (Accuracy, Precision, Recall, F1) are computed by `evaluate_scoliosis` in `opengait/evaluation/evaluator.py` and logged to the console/file.
---
## Evidence References
- **Model Implementation**: `opengait/modeling/models/sconet.py`
- **Training Config**: `configs/sconet/sconet_scoliosis1k.yaml`
- **Evaluation Logic**: `opengait/evaluation/evaluator.py::evaluate_scoliosis`
- **Backbone Definition**: `opengait/modeling/backbones/resnet.py::ResNet9`
@@ -0,0 +1,33 @@
# OPENGAIT RUNTIME KNOWLEDGE BASE
## OVERVIEW
`opengait/` is the runtime package: distributed launch entry, model lifecycle orchestration, data/evaluation integration.
## STRUCTURE
```text
opengait/
├── main.py # DDP entrypoint + config load + model dispatch
├── modeling/ # BaseModel + model/backbone/loss registries
├── data/ # dataset parser + sampler/collate/transform
├── evaluation/ # benchmark-specific evaluation functions
└── utils/ # config merge, DDP passthrough, logging helpers
```
## WHERE TO LOOK
| Task | Location | Notes |
|------|----------|-------|
| Start train/test flow | `main.py` | parses `--cfgs`/`--phase`, initializes DDP |
| Resolve model name from YAML | `modeling/models/__init__.py` | class auto-registration via iter_modules |
| Build full train loop | `modeling/base_model.py` | loaders, optimizer/scheduler, ckpt, inference |
| Merge config with defaults | `utils/common.py::config_loader` | overlays onto `configs/default.yaml` |
| Shared logging | `utils/msg_manager.py` | global message manager |
## CONVENTIONS
- Imports resolve package-relative at runtime (`from modeling...`, `from data...`, `from utils...`) because `opengait/main.py` is launched as the script target.
- Runtime is DDP-first; non-DDP assumptions are usually invalid.
- Losses and models are configured by names, not direct imports in `main.py`.
## ANTI-PATTERNS
- Don't bypass `config_loader`; the default config merge is expected by all modules.
- Don't instantiate models outside the registry path (`modeling/models`), or the YAML `model_cfg.model` lookup breaks.
- Don't bypass `get_ddp_module`; the attribute-passthrough wrapper is relied on for downstream method access.
@@ -0,0 +1,22 @@
# DATA PIPELINE KNOWLEDGE BASE
## OVERVIEW
`opengait/data/` converts preprocessed dataset trees into training/evaluation batches for all models.
## WHERE TO LOOK
| Task | Location | Notes |
|------|----------|-------|
| Dataset parsing + file loading | `dataset.py` | expects partition json and `.pkl` sequence files |
| Sequence sampling strategy | `collate_fn.py` | fixed/unfixed/all + ordered/unordered behavior |
| Augmentations/transforms | `transform.py` | transform factories resolved from config |
| Batch identity sampling | `sampler.py` | sampler types referenced from config |
## CONVENTIONS
- Dataset root layout is `id/type/view/*.pkl` after preprocessing.
- `dataset_partition` JSON with `TRAIN_SET` / `TEST_SET` is required.
- `sample_type` drives control flow (`fixed_unordered`, `all_ordered`, etc.) and shape semantics downstream.
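The `sample_type` control flow can be sketched with two of the modes. This is a simplified illustration of the policy, not the actual `collate_fn.py` code, and the padding strategy for short sequences is an assumption:

```python
import random

def sample_frames(frames, sample_type, frames_num_fixed=30, rng=None):
    """Pick frames per sample_type: 'fixed_unordered' draws a fixed-size
    subset without order guarantees; 'all_ordered' keeps the whole
    sequence in order."""
    rng = rng or random.Random(0)
    if sample_type == "all_ordered":
        return list(frames)
    if sample_type == "fixed_unordered":
        if len(frames) >= frames_num_fixed:
            return rng.sample(frames, frames_num_fixed)
        return rng.choices(frames, k=frames_num_fixed)  # pad by resampling
    raise ValueError(f"unknown sample_type: {sample_type}")

seq = list(range(45))
assert sample_frames(seq, "all_ordered") == seq
assert len(sample_frames(seq, "fixed_unordered")) == 30
```

The shape consequence downstream: `fixed_*` modes produce rectangular batches, while `all_*` modes yield variable-length sequences the collate path must track.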
## ANTI-PATTERNS
- Never pass non-`.pkl` sequence files (`dataset.py` raises a hard ValueError).
- Don't violate the expected `batch_size` semantics for triplet samplers (a `[P, K]` list).
- Don't assume all models use identical feature counts; collate is feature-index sensitive.
@@ -0,0 +1,33 @@
# MODELING DOMAIN KNOWLEDGE BASE
## OVERVIEW
`opengait/modeling/` defines model contracts and algorithm implementations: `BaseModel`, loss aggregation, backbones, concrete model classes.
## STRUCTURE
```text
opengait/modeling/
├── base_model.py # canonical train/test lifecycle
├── loss_aggregator.py # training_feat -> weighted summed loss
├── modules.py # shared NN building blocks
├── backbones/ # backbone registry + implementations
├── losses/ # loss registry + implementations
└── models/ # concrete methods (Baseline, ScoNet, DeepGaitV2, ...)
```
## WHERE TO LOOK
| Task | Location | Notes |
|------|----------|-------|
| Add new model | `models/*.py` + `docs/4.how_to_create_your_model.md` | must inherit `BaseModel` |
| Add new loss | `losses/*.py` | expose via dynamic registry |
| Change training lifecycle | `base_model.py` | affects every model |
| Debug feature/loss key mismatches | `loss_aggregator.py` | checks `training_feat` keys vs `loss_cfg.log_prefix` |
## CONVENTIONS
- `forward()` output contract is fixed dict with keys: `training_feat`, `visual_summary`, `inference_feat`.
- `training_feat` subkeys must align with configured `loss_cfg[*].log_prefix`.
- Backbones/losses/models are discovered dynamically via package `__init__.py`; filenames matter operationally.
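The `training_feat`/`log_prefix` alignment check can be sketched as a set comparison. This is a hypothetical helper illustrating the mismatch `LossAggregator` would surface, not its actual code:

```python
def check_loss_keys(training_feat, loss_cfg):
    """Compare the model's training_feat keys against the log_prefix of
    each configured loss; return (missing, extra) key sets."""
    prefixes = {item["log_prefix"] for item in loss_cfg}
    missing = prefixes - set(training_feat)   # configured loss has no features
    extra = set(training_feat) - prefixes     # features no loss consumes
    return missing, extra

feat = {"triplet": {}, "softmax": {}}
cfg = [{"log_prefix": "triplet"}, {"log_prefix": "softmax"}]
assert check_loss_keys(feat, cfg) == (set(), set())

missing, _ = check_loss_keys({"softmax": {}}, cfg)
assert missing == {"triplet"}   # forgot to emit triplet features
```

A non-empty `missing` set is the typical symptom when a new model's `forward()` and its YAML `loss_cfg` drift apart.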
## ANTI-PATTERNS
- Do not return arbitrary forward outputs; `LossAggregator` and evaluator assume fixed contract.
- Do not put model classes outside `models/`; config lookup by `getattr(models, name)` depends on registry.
- Do not ignore DDP loss wrapping (`get_ddp_module`) in loss construction.
@@ -0,0 +1,23 @@
# MODEL ZOO IMPLEMENTATION KNOWLEDGE BASE
## OVERVIEW
This directory is the algorithm zoo. Each file usually contributes one `BaseModel` subclass selected by `model_cfg.model`.
## WHERE TO LOOK
| Task | Location | Notes |
|------|----------|-------|
| Baseline pattern | `baseline.py` | minimal template for silhouette models |
| Scoliosis pipeline | `sconet.py` | label remapping + screening-specific head |
| Large-model fusion | `BiggerGait_DINOv2.py`, `BigGait.py` | external pretrained dependencies |
| Diffusion/noise handling | `denoisinggait.py`, `diffgait_utils/` | high-complexity flow/feature fusion |
| Skeleton variants | `skeletongait++.py`, `gaitgraph1.py`, `gaitgraph2.py` | pose-map/graph assumptions |
## CONVENTIONS
- Most models follow: preprocess input -> backbone -> temporal pooling -> horizontal pooling -> neck/head -> contract dict.
- Input modality assumptions differ by model (silhouette / RGB / pose / multimodal); config and preprocess script must match.
- Many models rely on utilities from `modeling/modules.py`; shared changes there are high blast-radius.
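The common pipeline shape (temporal pooling, then horizontal pooling) can be sketched with plain lists. This is a toy illustration of the two pooling steps, not the real `PackSequenceWrapper`/`HorizontalPoolingPyramid` code, and the strip pooling here uses a simple mean rather than the pyramid's actual statistics:

```python
def temporal_max(frames):
    """Temporal pooling: element-wise max over the frame axis,
    in the spirit of PackSequenceWrapper(torch.max)."""
    return [max(col) for col in zip(*frames)]

def horizontal_pool(feat, bin_num):
    """Split a flattened spatial feature into bin_num horizontal strips
    and average each -- a simplified stand-in for the pooling pyramid."""
    step = len(feat) // bin_num
    return [sum(feat[i * step:(i + 1) * step]) / step for i in range(bin_num)]

frames = [[0.0, 0.25, 0.5, 1.0],
          [0.5, 0.0, 0.25, 0.75]]       # 2 frames, 4 "pixels" each
pooled = temporal_max(frames)
assert pooled == [0.5, 0.25, 0.5, 1.0]  # per-position max across frames
assert horizontal_pool(pooled, bin_num=2) == [0.375, 0.75]
```

The point of the sketch: temporal pooling removes the frame axis first, so horizontal pooling always operates on a single per-sequence feature map.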
## ANTI-PATTERNS
- Don't mix modality assumptions silently (e.g., pose tensor layout vs. silhouette layout).
- Don't rename classes without updating `model_cfg.model` references in configs.
- Don't treat `BigGait_utils`/`diffgait_utils` as generic utilities; they are model-family specific.