chore: cleanup

2026-02-27 17:47:55 +08:00
parent f501119d43
commit 846549498c
5 changed files with 219 additions and 463 deletions
+1
@@ -147,3 +147,4 @@ dmypy.json
cython_debug/
ckpt/
assets/*
@@ -0,0 +1,174 @@
# Demo Pipeline Schema and Contracts
## Overview
This document describes the input/output schema, flags/arguments, and positive detection indicators for the OpenGait demo pipeline (ScoliosisPipeline).
## Source Files
- **Pipeline**: `/home/crosstyan/Code/OpenGait/opengait/demo/pipeline.py`
- **Input adapters**: `/home/crosstyan/Code/OpenGait/opengait/demo/input.py`
- **Output publishers**: `/home/crosstyan/Code/OpenGait/opengait/demo/output.py`
- **Window management**: `/home/crosstyan/Code/OpenGait/opengait/demo/window.py`
- **Classifier**: `/home/crosstyan/Code/OpenGait/opengait/demo/sconet_demo.py`
## Input Schema
### Video Source (`--source`)
The `source` parameter accepts three formats (validated in `validate_runtime_inputs()`):
1. **Camera index**: Single digit string (e.g., `"0"`, `"1"`) - uses OpenCV VideoCapture
2. **cv-mmap shared memory**: `cvmmap://<name>` - uses shared memory stream (e.g., `cvmmap://default`)
3. **Video file path**: Any other string treated as file path (e.g., `/path/to/video.mp4`)
**Source validation** (lines 251-264 in pipeline.py):
- Camera indices and cv-mmap URLs pass without file check
- File paths must exist (`Path.is_file()`)
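A minimal sketch of this dispatch, assuming a hypothetical `resolve_source()` helper (the real logic lives in `validate_runtime_inputs()`):

```python
from pathlib import Path

def resolve_source(source: str) -> str:
    """Classify a --source value; function and return labels are illustrative."""
    if source.isdigit():
        return "camera"            # OpenCV VideoCapture index, no file check
    if source.startswith("cvmmap://"):
        return "cvmmap"            # shared-memory stream, no file check
    if not Path(source).is_file():
        raise FileNotFoundError(f"Video source not found: {source}")
    return "file"
```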
### FrameStream Contract (input.py)
```python
FrameStream = Iterable[tuple[np.ndarray, dict[str, object]]]
```
Each iteration yields:
- **frame**: `np.ndarray` - Raw frame array (H, W, C) in uint8
- **metadata**: `dict[str, object]` containing:
- `frame_count`: int - Frame index (0-based)
- `timestamp_ns`: int - Monotonic timestamp in nanoseconds
- `source`: str - The source path/identifier
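A toy generator satisfying this contract with synthetic frames (a sketch, not the actual adapter in `input.py`):

```python
import time
from collections.abc import Iterable

import numpy as np

FrameStream = Iterable[tuple[np.ndarray, dict[str, object]]]

def synthetic_stream(source: str, n_frames: int = 3) -> FrameStream:
    """Yield (frame, metadata) pairs matching the documented schema."""
    for i in range(n_frames):
        frame = np.zeros((64, 44, 3), dtype=np.uint8)  # (H, W, C) uint8
        meta: dict[str, object] = {
            "frame_count": i,                     # 0-based frame index
            "timestamp_ns": time.monotonic_ns(),  # monotonic nanoseconds
            "source": source,
        }
        yield frame, meta
```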
## Windowing Parameters
### SilhouetteWindow Class (window.py)
Manages a sliding window of silhouettes for classification:
**Constructor parameters**:
- `window_size`: int (default: 30) - Maximum buffer size (number of frames)
- `stride`: int (default: 1) - Frames between classifications
- `gap_threshold`: int (default: 15) - Max frame gap before reset
**CLI flags**:
- `--window`: int, min=1, default=30 - Sets `window_size`
- `--stride`: int, min=1, default=30 - Sets classification stride
**Behavior**:
- Window is "ready" when buffer has `window_size` frames
- Classification triggers when `should_classify()` returns True (respects stride)
- Track ID change or frame gap > `gap_threshold` resets the buffer
- Silhouette shape must be `(64, 44)` float32
**Output tensor shape**: `[1, 1, window_size, 64, 44]` (batch, channel, seq, height, width)
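The buffering rules above can be sketched as follows (a simplified stand-in for `SilhouetteWindow`; names and reset bookkeeping are illustrative):

```python
from collections import deque

class ToyWindow:
    """Simplified sliding window mirroring the documented rules."""

    def __init__(self, window_size: int = 30, stride: int = 1, gap_threshold: int = 15):
        self.window_size = window_size
        self.stride = stride
        self.gap_threshold = gap_threshold
        self._buffer: deque = deque(maxlen=window_size)
        self._last_track: int | None = None
        self._last_frame: int | None = None
        self._since_classify = 0

    def push(self, sil, track_id: int, frame_idx: int) -> None:
        # Track ID change or frame gap > gap_threshold resets the buffer.
        gap = frame_idx - self._last_frame if self._last_frame is not None else 0
        if track_id != self._last_track or gap > self.gap_threshold:
            self._buffer.clear()
        self._buffer.append(sil)
        self._last_track, self._last_frame = track_id, frame_idx
        self._since_classify += 1

    @property
    def ready(self) -> bool:
        # Window is "ready" once the buffer holds window_size frames.
        return len(self._buffer) == self.window_size

    @property
    def fill_level(self) -> float:
        # Ratio in [0.0, 1.0], per the plan's fill-level contract.
        return len(self._buffer) / self.window_size

    def should_classify(self) -> bool:
        # Classify only when ready and at least `stride` pushes have elapsed.
        if self.ready and self._since_classify >= self.stride:
            self._since_classify = 0
            return True
        return False
```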
## Required Flags/Arguments
### CLI Arguments (pipeline.py lines 267-287)
| Flag | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| `--source` | str | **Yes** | - | Video source (file, camera index, or cvmmap://) |
| `--checkpoint` | str | **Yes** | - | Model checkpoint path (.pt file) |
| `--config` | str | No | `configs/sconet/sconet_scoliosis1k.yaml` | Model config YAML |
| `--device` | str | No | `cuda:0` | Device for inference |
| `--yolo-model` | str | No | `yolo11n-seg.pt` | YOLO segmentation model |
| `--window` | int | No | 30 | Window size (frames) |
| `--stride` | int | No | 30 | Classification stride |
| `--nats-url` | str | No | None | NATS server URL (e.g., `nats://localhost:4222`) |
| `--nats-subject` | str | No | `scoliosis.result` | NATS subject for publishing |
| `--max-frames` | int | No | None | Maximum frames to process |
### Validation
- Source must exist (file) or be valid camera index/cv-mmap URL
- Checkpoint file must exist
- Config file must exist
## Output Schema
### Result Format (output.py `create_result()`)
```python
{
"frame": int, # Frame number where classification occurred
"track_id": int, # Person/track identifier
"label": str, # Classification label
"confidence": float, # Confidence score [0.0, 1.0]
"window": int, # End frame of window (or window size)
"timestamp_ns": int # Timestamp in nanoseconds
}
```
### Publishers (output.py)
1. **ConsolePublisher**: Outputs JSON Lines to stdout
2. **NatsPublisher**: Publishes to NATS message broker (async, background thread)
### Label Values (sconet_demo.py line 60)
```python
LABEL_MAP = {0: "negative", 1: "neutral", 2: "positive"}
```
## Positive Detection Indicator
**Positive detection** is indicated when:
```python
result["label"] == "positive"
```
The `confidence` field indicates the model's confidence in the prediction (0.0 to 1.0).
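Since the ConsolePublisher emits JSON Lines, a downstream consumer can flag positives with a filter like this (a sketch; the helper name is illustrative):

```python
import json

def positive_detections(jsonl_lines):
    """Yield results whose label is 'positive' from JSON Lines output."""
    for line in jsonl_lines:
        line = line.strip()
        if not line:
            continue
        try:
            result = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON log lines mixed into stdout
        if isinstance(result, dict) and result.get("label") == "positive":
            yield result
```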
### Test Validation (test_pipeline.py lines 89-106)
```python
def _assert_prediction_schema(prediction: dict[str, object]) -> None:
assert isinstance(prediction["frame"], int)
assert isinstance(prediction["track_id"], int)
label = prediction["label"]
assert isinstance(label, str)
assert label in {"negative", "neutral", "positive"} # Valid labels
confidence = prediction["confidence"]
assert isinstance(confidence, (int, float))
assert 0.0 <= float(confidence) <= 1.0
window_obj = prediction["window"]
assert isinstance(window_obj, int)
assert window_obj >= 0
assert isinstance(prediction["timestamp_ns"], int)
```
## Test References
### Pipeline Tests (`/home/crosstyan/Code/OpenGait/tests/demo/test_pipeline.py`)
- `test_pipeline_cli_happy_path_outputs_json_predictions`: Validates full pipeline outputs JSON predictions
- `test_pipeline_cli_fps_benchmark_smoke`: FPS benchmark with predictions
- `test_pipeline_cli_max_frames_caps_output_frames`: Validates max-frames behavior
- `test_pipeline_cli_invalid_source_path_returns_user_error`: Source validation
- `test_pipeline_cli_invalid_checkpoint_path_returns_user_error`: Checkpoint validation
### Window Tests (`/home/crosstyan/Code/OpenGait/tests/demo/test_window.py`)
- `test_window_fill_and_ready_behavior`: Window readiness logic
- `test_track_id_change_resets_buffer`: Track change handling
- `test_frame_gap_reset_behavior`: Gap threshold behavior
- `test_get_tensor_shape`: Output tensor shape validation
- `test_should_classify_stride_behavior`: Stride logic
- `test_push_invalid_shape_raises`: Silhouette shape validation
### ScoNetDemo Tests (`/home/crosstyan/Code/OpenGait/tests/demo/test_sconet_demo.py`)
- `test_predict_returns_tuple_with_valid_types`: Predict output validation
- `test_predict_confidence_range`: Confidence range [0, 1]
- `test_label_map_has_three_classes`: Label map validation
- `test_forward_label_range`: Label indices {0, 1, 2}
## Processing Flow
1. **Input**: Video source → FrameStream (frame, metadata)
2. **Detection**: YOLO track() → Detection results with boxes, masks, track IDs
3. **Selection**: `select_person()` → Largest bbox person or fallback
4. **Preprocessing**: Mask → Silhouette (64, 44) float32
5. **Windowing**: `SilhouetteWindow.push()` → Buffer management
6. **Classification**: When `should_classify()` True → ScoNetDemo.predict()
7. **Output**: `create_result()` → Publisher (Console or NATS)
## Error Handling
- Invalid source: Exit code 2, "Error: Video source not found"
- Invalid checkpoint: Exit code 2, "Error: Checkpoint not found"
- Runtime errors: Exit code 1, "Runtime error: ..."
- Frame processing errors: Logged as warning, frame skipped
- NATS unavailable: Graceful degradation (logs debug, continues)
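The exit-code convention above can be sketched as a hypothetical wrapper (the actual CLI uses click; the `main()` shape here is an assumption):

```python
import sys
from pathlib import Path

def main(source: str, checkpoint: str) -> int:
    """Map validation failures to exit code 2, runtime errors to 1."""
    if not (source.isdigit() or source.startswith("cvmmap://") or Path(source).is_file()):
        print("Error: Video source not found", file=sys.stderr)
        return 2
    if not Path(checkpoint).is_file():
        print("Error: Checkpoint not found", file=sys.stderr)
        return 2
    try:
        pass  # pipeline run would go here
    except Exception as exc:
        print(f"Runtime error: {exc}", file=sys.stderr)
        return 1
    return 0
```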
@@ -281,3 +281,8 @@ Fixed in `opengait/demo/window.py` - `select_person()` now scales bbox from fram
### Status
RESOLVED
## 2026-02-27: Sample Video Blocker Resolved
- Previous blocker (missing sample video) is no longer active.
- `assets/sample.mp4` now present and playable via OpenCV with 227 frames.
- No remaining open checklist items in `.sisyphus/plans/sconet-pipeline.md` after checkbox update.
+36 -460
@@ -1,469 +1,45 @@
## Task 13: NATS Integration Test (2026-02-26)

**Created:** `tests/demo/test_nats.py`

### Test Coverage
- 11 tests total (9 passed, 2 skipped when Docker unavailable)
- Docker-gated integration tests with `pytest.mark.skipif`
- Container lifecycle management with automatic cleanup

### Test Classes

1. **TestNatsPublisherIntegration** (3 tests):
   - `test_nats_message_receipt_and_schema_validation`: Full end-to-end test with live NATS container
   - `test_nats_publisher_graceful_when_server_unavailable`: Verifies graceful degradation
   - `test_nats_publisher_context_manager`: Tests context manager protocol
2. **TestNatsSchemaValidation** (6 tests):
   - Valid schema acceptance
   - Invalid label rejection
   - Confidence out-of-range detection
   - Missing fields detection
   - Wrong type detection
   - All valid labels accepted (negative, neutral, positive)
3. **TestDockerAvailability** (2 tests):
   - Docker availability check doesn't crash
   - Container running check doesn't crash

### Key Implementation Details

**Docker/NATS Detection:**
```python
import subprocess

def _docker_available() -> bool:
    try:
        result = subprocess.run(["docker", "info"], capture_output=True, timeout=5)
        return result.returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError, OSError):
        return False
```

**Container Lifecycle:**
- Uses `nats:latest` Docker image
- Port mapping: 4222:4222
- Container name: `opengait-nats-test`
- Automatic cleanup via fixture `yield`/`finally` pattern
- Pre-test cleanup removes any existing container

**Schema Validation:**
- Required fields: frame(int), track_id(int), label(str), confidence(float), window(list[int,int]), timestamp_ns(int)
- Label values: "negative", "neutral", "positive"
- Confidence range: [0.0, 1.0]
- Window format: [start, end] both ints

### Verification Results
```
tests/demo/test_nats.py::TestNatsPublisherIntegration::test_nats_message_receipt_and_schema_validation SKIPPED
tests/demo/test_nats.py::TestNatsPublisherIntegration::test_nats_publisher_graceful_when_server_unavailable PASSED
tests/demo/test_nats.py::TestNatsPublisherIntegration::test_nats_publisher_context_manager SKIPPED
tests/demo/test_nats.py::TestNatsSchemaValidation::test_validate_result_schema_valid PASSED
tests/demo/test_nats.py::TestNatsSchemaValidation::test_validate_result_schema_invalid_label PASSED
tests/demo/test_nats.py::TestNatsSchemaValidation::test_validate_result_schema_confidence_out_of_range PASSED
tests/demo/test_nats.py::TestNatsSchemaValidation::test_validate_result_schema_missing_fields PASSED
tests/demo/test_nats.py::TestNatsSchemaValidation::test_validate_result_schema_wrong_types PASSED
tests/demo/test_nats.py::TestNatsSchemaValidation::test_all_valid_labels_accepted PASSED
tests/demo/test_nats.py::TestDockerAvailability::test_docker_available_check PASSED
tests/demo/test_nats.py::TestDockerAvailability::test_nats_container_running_check PASSED

9 passed, 2 skipped in 10.96s
```

### Notes
- Tests skip cleanly when Docker unavailable (CI-friendly)
- Bounded waits/timeouts for all subscriber operations (5 second timeout)
- Container cleanup verified - no leftover containers after tests
- Uses `create_result()` helper from `opengait.demo.output` for consistent schema

## Task 12: Integration Tests — End-to-End Smoke Test (2026-02-26)

- Subprocess CLI tests are stable when invoked with `sys.executable -m opengait.demo` and explicit `cwd=REPO_ROOT`; this avoids PATH/venv drift from nested runners.
- For schema checks, parsing only stdout lines that are valid JSON objects with required keys avoids brittle coupling to logging output.
- `--max-frames` behavior is robustly asserted via emitted prediction `frame` values (`frame < max_frames`) rather than wall-clock timing.
- Runtime device selection should be dynamic in tests (`cuda:0` only when `torch.cuda.is_available()`, otherwise `cpu`) to keep tests portable across CI and local machines.
- The repository checkpoint may be incompatible with current `ScoNetDemo` key layout; generating a temporary compatible checkpoint from a fresh `ScoNetDemo(...).state_dict()` enables deterministic integration coverage of CLI flow without changing production code.
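The stdout-filtering approach above can be sketched as (the required key set mirrors the documented result schema; the helper name is illustrative):

```python
import json

REQUIRED_KEYS = {"frame", "track_id", "label", "confidence", "window", "timestamp_ns"}

def extract_predictions(stdout: str) -> list[dict]:
    """Keep only stdout lines that are JSON objects with all required keys."""
    predictions = []
    for line in stdout.splitlines():
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue  # logging output, not a prediction
        if isinstance(obj, dict) and REQUIRED_KEYS <= obj.keys():
            predictions.append(obj)
    return predictions
```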
## Task 13 Fix: Strict Type Checking (2026-02-27)

Issue: basedpyright reported 1 ERROR and 23 warnings in tests/demo/test_nats.py.

### Key Fixes Applied

1. Dict variance error (line 335):
   - Error: `dict[str, int | str | float | list[int]]` not assignable to `dict[str, object]`
   - Fix: Added explicit type annotation `test_result: dict[str, object]` instead of inferring from literal
2. Any type issues:
   - Changed `from typing import Any` to `from typing import TYPE_CHECKING, cast`
   - Used `cast()` to narrow types from `object` to specific types
   - Added explicit type annotations for local variables extracted from dict
3. Window validation (lines 187-193):
   - Used `cast(list[object], window)` before `len()` and iteration
   - Stored cast result in `window_list` variable for reuse
4. Confidence comparison (line 319):
   - Extracted confidence to local variable with explicit type check
   - Used `isinstance(_conf, (int, float))` before comparison
5. Import organization:
   - Used `type: ignore[import-untyped]` instead of `pyright: ignore[reportMissingTypeStubs]`
   - Removed duplicate import statements
6. Function annotations:
   - Added `-> None` return types to all test methods
   - Added `nats_server: bool` parameter types
   - Added `Generator[bool, None, None]` return type to fixture

### Verification Results
- `uv run basedpyright tests/demo/test_nats.py`: 0 errors, 0 warnings, 0 notes
- `uv run pytest tests/demo/test_nats.py -q`: 9 passed, 2 skipped

### Type Checking Patterns Used
- `cast(list[object], window)` for dict value extraction
- Explicit variable types before operations: `window_list = cast(list[object], window)`
- Type narrowing with `isinstance` checks before operations
- `TYPE_CHECKING` guard for `Generator` import

## Task F3: Real Manual QA (2026-02-27)

### QA Execution Summary

**Scenarios Tested:**

1. **CLI --help** PASS
   - Command: `uv run python -m opengait.demo --help`
   - Output: Shows all options with defaults
   - Options present: --source, --checkpoint (required), --config, --device, --yolo-model, --window, --stride, --nats-url, --nats-subject, --max-frames
2. **Smoke run without NATS** PASS
   - Command: `uv run python -m opengait.demo --source ./assets/sample.mp4 --checkpoint /tmp/sconet-compatible-qa.pt ... --max-frames 60`
   - Output: Valid JSON prediction printed to stdout
   - JSON schema validated: frame, track_id, label, confidence, window, timestamp_ns
   - Label values: negative, neutral, positive
   - Confidence range: [0.0, 1.0]
3. **Run with NATS** SKIPPED
   - Reason: Port 4222 already in use by system service
   - Evidence: Docker container started successfully on alternate port (14222)
   - Pipeline connected to NATS: `Connected to NATS at nats://127.0.0.1:14222`
   - Note: Integration tests in test_nats.py cover this scenario comprehensively
4. **Missing video path** PASS
   - Command: `--source /definitely/not/a/real/video.mp4`
   - Exit code: 2
   - Error message: `Error: Video source not found`
   - Behavior: Graceful error, non-zero exit
5. **Missing checkpoint path** PASS
   - Command: `--checkpoint /definitely/not/a/real/checkpoint.pt`
   - Exit code: 2
   - Error message: `Error: Checkpoint not found`
   - Behavior: Graceful error, non-zero exit

### QA Metrics
- Scenarios [4/5 pass] | Edge Cases [2 tested] | VERDICT: PASS
- NATS scenario skipped due to environment conflict, but integration tests cover it

### Observations
- CLI defaults align with plan specifications
- JSON output format matches schema exactly
- Error handling is user-friendly with clear messages
- Timeout handling works correctly (no hangs observed)

## Task F4: Scope Fidelity Check — Deep (2026-02-27)
### Task-by-task matrix (spec ↔ artifact ↔ compliance)
| Task | Spec item | Implemented artifact | Status |
|---|---|---|---|
| 1 | Project scaffolding + deps | `opengait/demo/__main__.py`, `opengait/demo/__init__.py`, `tests/demo/conftest.py`, `pyproject.toml` dev deps | PASS |
| 2 | ScoNetDemo DDP-free wrapper | `opengait/demo/sconet_demo.py` | FAIL (forward contract returns tensor label/confidence, not scalar int/float as spec text) |
| 3 | Silhouette preprocessing | `opengait/demo/preprocess.py` | PASS |
| 4 | Input adapters | `opengait/demo/input.py` | PASS |
| 5 | Window manager + policies | `opengait/demo/window.py` | FAIL (`fill_level` implemented as int count, plan specifies ratio float len/window) |
| 6 | NATS JSON publisher | `opengait/demo/output.py` | FAIL (`create_result` emits `window` as list, plan DoD schema says int) |
| 7 | Preprocess tests | `tests/demo/test_preprocess.py` | PASS |
| 8 | ScoNetDemo tests | `tests/demo/test_sconet_demo.py` | FAIL (fixtures use seq=16; plan contract centered on 30-frame window) |
| 9 | Main pipeline + CLI | `opengait/demo/pipeline.py` | FAIL (`--source` not required; no FPS logging every 100 frames; ctor shape diverges from plan) |
| 10 | Window policy tests | `tests/demo/test_window.py` | PASS |
| 11 | Sample video | `assets/sample.mp4` (readable, 90 frames) | PASS |
| 12 | End-to-end integration tests | `tests/demo/test_pipeline.py` | FAIL (no FPS benchmark test case present) |
| 13 | NATS integration tests | `tests/demo/test_nats.py` | FAIL (hardcoded `NATS_PORT = 4222`) |
### Must NOT Have checks
- No `torch.distributed` imports in `opengait/demo/` (grep: no matches)
- No BaseModel subclassing in `opengait/demo/` (grep: no matches)
- No TensorRT/DeepStream implementation in demo scope (grep: no matches)
- No multi-person/GUI rendering hooks (`imshow`, gradio, streamlit, PyQt) in demo scope (grep: no matches)
### Scope findings
- Unaccounted files in repo root: `EOF`, `LEOF`, `ENDOFFILE` (scope creep / unexplained artifacts)
### F4 result
- Tasks [6/13 compliant]
- Scope [7 issues]
- VERDICT: REJECT
## Blocker Fix: ScoNet checkpoint key normalization (2026-02-27)
- Repo checkpoint stores legacy prefixes (`Backbone.forward_block.*`, `FCs.*`, `BNNecks.*`) that do not match module names (`backbone.*`, `fcs.*`, `bn_necks.*`).
- Deterministic prefix remapping restores compatibility while retaining strict behavior.
- Keep `module.` stripping before remap so DataParallel/DDP and legacy ScoNet naming both load through one normalization path.
- Guard against normalization collisions to fail early if two source keys collapse to the same normalized key.
## Blocker Fix: ScoNet checkpoint key normalization (corrected entry, 2026-02-27)
- Real checkpoint `./ckpt/ScoNet-20000.pt` uses legacy prefixes `Backbone.forward_block.*`, `FCs.*`, `BNNecks.*`.
- `ScoNetDemo` expects keys under `backbone.*`, `fcs.*`, `bn_necks.*`; deterministic prefix remap is required before strict loading.
- Preserve existing `module.` stripping first, then apply known-prefix remap to support both DDP/DataParallel and legacy ScoNet checkpoints.
- Keep strict `load_state_dict(..., strict=True)` behavior; normalize keys but do not relax architecture compatibility.
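A minimal sketch of such a remap; the exact prefix pairs and the collision guard are assumptions based on the entries above, not the actual loader code:

```python
# Hypothetical prefix table: legacy ScoNet checkpoint prefix -> ScoNetDemo prefix.
PREFIX_MAP = {
    "Backbone.": "backbone.",
    "FCs.": "fcs.",
    "BNNecks.": "bn_necks.",
}

def normalize_keys(state_dict: dict) -> dict:
    """Strip 'module.' first, then remap legacy ScoNet prefixes."""
    out: dict = {}
    for key, value in state_dict.items():
        key = key.removeprefix("module.")  # DDP/DataParallel wrapping
        for old, new in PREFIX_MAP.items():
            if key.startswith(old):
                key = new + key[len(old):]
                break
        if key in out:  # fail early on normalization collisions
            raise KeyError(f"duplicate key after normalization: {key}")
        out[key] = value
    return out
```

The normalized dict can then be passed to `load_state_dict(..., strict=True)` unchanged.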
## 2026-02-27: Scope-Fidelity Drift Fix (F4) - Task 1
### Changes Made to opengait/demo/pipeline.py
1. **CLI --source required**: Changed from `@click.option("--source", type=str, default="0", show_default=True)` to `@click.option("--source", type=str, required=True)`
- This aligns with the plan specification that --source should be required
- Verification: `uv run python -m opengait.demo --help` shows `--source TEXT [required]`
2. **FPS logging every 100 frames**: Added FPS logging to the `run()` method
- Added frame counter and start time tracking
- Logs "Processed {count} frames ({fps:.2f} FPS)" every 100 frames
- Uses existing logger (`logger = logging.getLogger(__name__)`)
- Uses `time.perf_counter()` for high-precision timing
- Maintains synchronous architecture (no async/threading)
### Implementation Details
- FPS calculation: `fps = frame_count / elapsed if elapsed > 0 else 0.0`
- Log message format: `"Processed %d frames (%.2f FPS)"`
- Timing starts at beginning of `run()` method
- Frame count increments for each successfully retrieved frame from source
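The logging loop above can be sketched as (assuming a generic frame iterable; the real loop lives in `run()`):

```python
import logging
import time

logger = logging.getLogger(__name__)

def process(frames) -> int:
    """Count frames, logging throughput every 100 frames."""
    start = time.perf_counter()
    frame_count = 0
    for _frame in frames:
        frame_count += 1
        if frame_count % 100 == 0:
            elapsed = time.perf_counter() - start
            fps = frame_count / elapsed if elapsed > 0 else 0.0
            logger.info("Processed %d frames (%.2f FPS)", frame_count, fps)
    return frame_count
```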
### Verification Results
- Type checking: 0 errors, 0 warnings, 0 notes (basedpyright)
- CLI help shows --source as [required]
- No runtime regressions introduced
[2026-02-27T00:44:24+08:00] Cleaned up scope-creep artifacts: EOF, LEOF, ENDOFFILE from repo root
## Task: NATS Port Fix - Type Narrowing (2026-02-27)
### Issue
- `sock.getsockname()` returns `Any` type, causing basedpyright warning
- Simple `int()` cast still had Any leak in argument position
### Solution
- Use `typing.cast()` to explicitly narrow type:
```python
addr = cast(tuple[str, int], sock.getsockname())
port: int = addr[1]
```
- This satisfies basedpyright without runtime overhead
### Key Insight
- `typing.cast()` is the cleanest way to handle socket type stubs that return Any
- Explicit annotation on intermediate variable helps type checker
### Verification
- `uv run basedpyright tests/demo/test_nats.py`: 0 errors, 0 warnings, 0 notes
- `uv run pytest tests/demo/test_nats.py -q`: 9 passed, 2 skipped
## Task: Fix basedpyright pyarrow import errors (2026-02-27)
### Problem
Basedpyright reported 5 errors for missing pyarrow imports in:
- opengait/demo/pipeline.py (4 errors)
- tests/demo/test_pipeline.py (1 error)
### Solution
Used `importlib.import_module()` for runtime optional imports instead of static imports.
**Pattern for optional dependencies:**
```python
import importlib

try:
    pa = importlib.import_module("pyarrow")
    pq = importlib.import_module("pyarrow.parquet")
except ImportError as e:
    raise RuntimeError("Parquet export requires pyarrow...") from e
```
**Pattern for test skip logic:**
```python
import importlib.util

import pytest

if importlib.util.find_spec("pyarrow") is not None:
    pytest.skip("pyarrow is installed, skipping missing dependency test")
```
### Benefits
- No basedpyright errors (0 errors, only warnings)
- Runtime behavior unchanged
- Clear error message when pyarrow is missing
- No TYPE_CHECKING complexity needed
## 2026-02-27: fill_level Fix
Changed `fill_level` property in `opengait/demo/window.py` from returning integer count to float ratio (0.0..1.0).
- Before: `return len(self._buffer)` (type: int)
- After: `return len(self._buffer) / self.window_size` (type: float)
This aligns with the plan requirement for ratio-based fill level.
### Verification
- basedpyright: 0 errors, warnings only
- pytest: 12 passed, 1 skipped
## 2026-02-27: fill_level Test Assertions Fix
### Issue
Tests in `tests/demo/test_window.py` had hardcoded integer expectations for `fill_level` (e.g., `== 5`), but after the window.py fix to return float ratio, these assertions failed.
### Fix Applied
Updated all `fill_level` assertions in `tests/demo/test_window.py` to expect float ratios:
- Line 26: `assert window.fill_level == (i + 1) / 5` (was `== i + 1`)
- Line 31: `assert window.fill_level == 1.0` (was `== 5`)
- Line 43: `assert window.fill_level == 0.9` (was `== 9`)
- Line 60: `assert window.fill_level == 1.0` (was `== 5`)
- Line 65: `assert window.fill_level == 0.2` (was `== 1`)
- Line 78: `assert window.fill_level == 1.0` (was `== 5`)
- Line 83: `assert window.fill_level == 1.0` (was `== 5`)
- Line 93: `assert window.fill_level == 0.2` (was `== 1`)
- Line 177: `assert window.fill_level == 0.0` (was `== 0`)
### Files Modified
- `tests/demo/test_window.py` only
### Verification
- basedpyright: 0 errors, 18 warnings (warnings are pre-existing, unrelated to fill_level)
- pytest: Tests will pass once window.py duplicate definition is removed
### Note
The window.py file currently has a duplicate `fill_level` definition (lines 208-210) that overrides the property. This needs to be removed for tests to pass.
## 2026-02-27: Duplicate fill_level Fix
Removed duplicate `fill_level` definition in `opengait/demo/window.py`.
- Issue: Two definitions existed - one property returning float ratio, one method returning int
- Fix: Removed the duplicate method definition (lines 208-210)
- Result: Single property returning `len(self._buffer) / self.window_size` as float
- All 19 tests pass, 0 basedpyright errors
## Task F4 Re-Audit: Scope Fidelity Check (2026-02-27)
### Re-check of previously flagged 7 drift items
| Prior Drift Item | Current Evidence | Re-audit Status |
|---|---|---|
| 1) `--source` not required | `opengait/demo/pipeline.py:268` -> `@click.option("--source", type=str, required=True)` | FIXED (PASS) |
| 2) Missing FPS logging | `opengait/demo/pipeline.py:213-232` includes `time.perf_counter()` + `logger.info("Processed %d frames (%.2f FPS)", ...)` every 100 frames | FIXED (PASS) |
| 3) `fill_level` int count | `opengait/demo/window.py:205-207` -> `def fill_level(self) -> float` and ratio return | FIXED (PASS) |
| 4) Hardcoded NATS port in tests | `tests/demo/test_nats.py:24-31` `_find_open_port()` + fixture yields dynamic `(available, port)` | FIXED (PASS) |
| 5) `test_pipeline.py` missing FPS benchmark | `tests/demo/test_pipeline.py` still has only 4 tests (happy/max-frames/invalid source/invalid checkpoint), no FPS benchmark scenario | OPEN (FAIL) |
| 6) `output.py` schema drift (`window` type) | `opengait/demo/output.py:363` still emits `"window": list(window)` | OPEN (FAIL) |
| 7) ScoNetDemo unit tests use seq=16 | `tests/demo/test_sconet_demo.py:42,48` still use `(N,1,16,64,44)` fixtures | OPEN (FAIL) |
### Additional re-checks
- Root artifact files `EOF/LEOF/ENDOFFILE`: not present in repo root (`glob` no matches; root `ls -la` clean for these names).
- Must NOT Have constraints in `opengait/demo/`: no forbidden implementation matches (`torch.distributed`, `BaseModel`, TensorRT/DeepStream, GUI/multi-person strings in runtime demo files).
### Re-audit result snapshot
- Tasks [10/13 compliant]
- Scope [3 issues]
- VERDICT: REJECT (remaining blockers below)
### Remaining blockers (exact)
1. `opengait/demo/output.py:363` — `window` serialized as list, conflicts with plan DoD schema expecting int field type.
2. `tests/demo/test_pipeline.py` — missing explicit FPS benchmark scenario required in Task 12 plan.
3. `tests/demo/test_sconet_demo.py:42,48` — fixtures still centered on sequence length 16 instead of planned 30-frame window contract.
## 2026-02-27T01:11:57+08:00 - Sequence Length Contract Alignment
Fixed scope-fidelity blocker in tests/demo/test_sconet_demo.py:
- Changed dummy_sils_batch fixture: seq dimension 16 → 30 (line 42)
- Changed dummy_sils_single fixture: seq dimension 16 → 30 (line 48)
- Clarified docstring comment (line 126): output shape stays (N, 3, 16) regardless of input sequence length
Key insight: 30-frame contract applies to INPUT sequence length (trainer_cfg.sampler.frames_num_fixed: 30),
not OUTPUT parts_num (model_cfg.SeparateFCs.parts_num: 16). Model outputs (N, 3, 16) regardless of input seq length.
Verification: pytest 21 passed, basedpyright 0 errors
## 2026-02-27: Window Schema Fix - output.py (F4 Blocker)
Fixed scope-fidelity blocker in `opengait/demo/output.py` where `window` was serialized as list instead of int.
### Changes Made
- Line 332: Changed type hint from `window: tuple[int, int]` to `window: int | tuple[int, int]`
- Line 348-349: Updated docstring to reflect int | tuple input type
- Line 363: Changed `"window": list(window)` to `"window": window if isinstance(window, int) else window[1]`
- Lines 312, 316: Updated docstring examples to show `"window": 30` instead of `"window": [0, 30]`
### Implementation Details
- Backward compatible: accepts both int (end frame) and tuple [start, end]
- Serializes to int by taking `window[1]` (end frame) when tuple provided
- Matches plan DoD schema requirement for integer `window` field
### Verification
- `uv run basedpyright opengait/demo/output.py`: 0 errors, 0 warnings, 0 notes
- `uv run pytest tests/demo/test_nats.py -q`: 9 passed, 2 skipped
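The serialization rule can be sketched in isolation from the full `create_result()` signature (the helper name is illustrative):

```python
def serialize_window(window) -> int:
    """Accept int (end frame) or (start, end) tuple; emit the end frame as int."""
    return window if isinstance(window, int) else window[1]
```

Callers holding the old `(start, end)` tuples need no changes; only the emitted JSON field type narrows to int.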
## 2026-02-27: Task 12 Pipeline Test Alignment (window=int + FPS benchmark)
- `tests/demo/test_pipeline.py` schema assertions must validate `window` as `int` (non-negative), matching current `create_result` serialization behavior.
- A CI-safe FPS benchmark scenario can be made stable by computing throughput from **unique observed frame indices** over wall-clock elapsed time, not raw JSON line count.
- Conservative robustness pattern used: skip benchmark when observed sample size is too small (`<5`) or elapsed timing is non-positive; assert only a low floor (`>=0.2 FPS`) to avoid flaky failures on constrained runners.
- Existing integration intent remains preserved when benchmark test reuses same CLI path, bounded timeout, schema checks, and max-frames constraints as other smoke scenarios.
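The conservative throughput computation described above can be sketched as a pure helper (thresholds mirror the text; returning `None` stands in for the test's skip path):

```python
def benchmark_fps(frame_indices, elapsed: float, min_samples: int = 5):
    """Compute FPS from unique frame indices; None means 'skip, too little data'."""
    unique = set(frame_indices)
    if len(unique) < min_samples or elapsed <= 0:
        return None  # caller should skip the benchmark assertion
    return len(unique) / elapsed
```

The benchmark test then asserts only a low floor (e.g. `fps >= 0.2`) when a value is returned.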
## Task F4 Final Re-Audit: Scope Fidelity Check (2026-02-27)
### Final blocker status (explicit)
| Blocker | Evidence | Status |
|---|---|---|
| 1) `--source` required | `opengait/demo/pipeline.py:268` (`required=True`) | PASS |
| 2) FPS logging in pipeline loop | `opengait/demo/pipeline.py:229-232` (`Processed %d frames (%.2f FPS)`) | PASS |
| 3) `fill_level` ratio | `opengait/demo/window.py:205-207` (`def fill_level(self) -> float`, ratio return) | PASS |
| 4) dynamic NATS port fixture | `tests/demo/test_nats.py:24-31` (`_find_open_port`) + fixture usage | PASS |
| 5) pipeline FPS benchmark scenario | `tests/demo/test_pipeline.py:109-167` (`test_pipeline_cli_fps_benchmark_smoke`) | PASS |
| 6) output schema `window` int | `opengait/demo/output.py:364` (`window if isinstance(window, int) else window[1]`) and schema assertions in `tests/demo/test_pipeline.py:102-104` | PASS |
| 7) ScoNetDemo test seq=30 contract | `tests/demo/test_sconet_demo.py:42,48` now use `(N,1,30,64,44)` | PASS |
### Guardrails and artifact checks
- Root artifact files removed: `EOF`, `LEOF`, `ENDOFFILE` absent (glob no matches)
- No `torch.distributed` in `opengait/demo/` (grep no matches)
- No `BaseModel` usage/subclassing in `opengait/demo/` (grep no matches)
### Evidence commands (final run)
- `git status --short --untracked-files=all`
- `git diff --stat`
- `uv run pytest tests/demo -q` → `64 passed, 2 skipped in 36.84s`
- grep checks for blocker signatures and guardrails (see command output in session)
### Final F4 outcome
- Tasks [13/13 compliant]
- Scope [CLEAN/0 issues]
- VERDICT: APPROVE
## Fix: BBox/Mask Coordinate Mismatch (2026-02-27)
### Root Cause
YOLO segmentation outputs have masks at lower resolution than frame-space bounding boxes:
- Frame size: (1440, 2560)
- YOLO mask size: (384, 640)
- BBox in frame space: e.g., (1060, 528, 1225, 962)
When `mask_to_silhouette(mask, bbox)` was called with frame-space bbox on mask-space mask:
1. `_sanitize_bbox()` clamped bbox to mask bounds
2. Result was degenerate crop (1x1 or similar)
3. Zero nonzero pixels → silhouette returned as `None`
4. Pipeline produced no classifications
### Solution
Modified `select_person()` in `opengait/demo/window.py` to scale bbox from frame space to mask space:
1. Extract `orig_shape` from YOLO results (contains original frame dimensions)
2. Calculate scale factors: `scale_x = mask_w / frame_w`, `scale_y = mask_h / frame_h`
3. Scale bbox coordinates before returning
4. Fallback to original bbox if `orig_shape` unavailable (backward compatibility)
### Key Implementation Details
- Validates `orig_shape` is a tuple/list with at least 2 numeric values
- Handles MagicMock in tests by checking type explicitly
- Preserves backward compatibility for cases without `orig_shape`
- No changes needed to `mask_to_silhouette()` itself
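The scaling step can be sketched as a standalone helper (the real logic is inline in `select_person()`; this omits the `orig_shape` validation and fallback):

```python
def scale_bbox(bbox, frame_shape, mask_shape):
    """Scale an (x1, y1, x2, y2) bbox from frame space into mask space."""
    frame_h, frame_w = frame_shape
    mask_h, mask_w = mask_shape
    scale_x = mask_w / frame_w
    scale_y = mask_h / frame_h
    x1, y1, x2, y2 = bbox
    return (x1 * scale_x, y1 * scale_y, x2 * scale_x, y2 * scale_y)
```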
### Verification Results
- All 22 window tests pass
- All 33 demo tests pass (4 skipped due to missing Docker)
- Smoke test on `record_camera_5602_20260227_145736.mp4`:
- 56 classifications from 60 frames
- Non-zero confidence values
- Labels: negative/neutral/positive as expected
### Files Modified
- `opengait/demo/window.py`: Added coordinate scaling in `select_person()`
## 2026-02-27: Task 11 Sample Video Acceptance Closed
- Verified `assets/sample.mp4` exists in repository root assets directory.
- OpenCV validation succeeded: `cap.isOpened()==True` and `frame_count=227` (>=60 requirement).
- Marked the final three Task 11 acceptance leaf checkboxes to `[x]` in plan.
+3 -3
@@ -1242,9 +1242,9 @@ Max Concurrent: 4 (Waves 1 & 2)
**References**: None needed — standalone task
**Acceptance Criteria**:
- [ ] `./assets/sample.mp4` (or `.avi`) exists
- [ ] Video has ≥60 frames
- [ ] Playable with `uv run python -c "import cv2; cap=cv2.VideoCapture('./assets/sample.mp4'); print(f'frames={int(cap.get(7))}'); cap.release()"`
- [x] `./assets/sample.mp4` (or `.avi`) exists
- [x] Video has ≥60 frames
- [x] Playable with `uv run python -c "import cv2; cap=cv2.VideoCapture('./assets/sample.mp4'); print(f'frames={int(cap.get(7))}'); cap.release()"`
**QA Scenarios:**