- Add comprehensive work plan for ArUco-based multi-camera calibration
- Add recording_multi.py for multi-camera SVO recording
- Add streaming_receiver.py for network streaming
- Add svo_playback.py for synchronized playback
- Add zed_network_utils.py for camera configuration
- Add AGENTS.md with project context
ArUco-Based Multi-Camera Extrinsic Calibration from SVO
TL;DR
Quick Summary: Create a CLI tool that reads synchronized SVO recordings from multiple ZED cameras, detects ArUco markers on a 3D calibration box, computes camera extrinsics using robust pose averaging, and outputs accurate 4x4 transform matrices.
Deliverables:
- `calibrate_extrinsics.py` - Main CLI tool
- `pose_averaging.py` - Robust pose estimation utilities
- `svo_sync.py` - Multi-SVO timestamp synchronization
- `tests/test_pose_math.py` - Unit tests for pose calculations
- Output JSON with calibrated extrinsics
Estimated Effort: Medium (3-5 days)
Parallel Execution: YES - 3 waves
Critical Path: Task 1 → Task 3 → Task 7 → Task 8
Context
Original Request
User wants to integrate ArUco marker detection with SVO recording playback to calibrate multi-camera extrinsics. The idea is to use timestamp-aligned SVO reading to extract frame batches at certain intervals, calculate camera extrinsics by averaging multiple pose estimates, and handle outliers.
Interview Summary
Key Discussions:
- Calibration target: 3D box with 6 diamond board faces (24 markers), defined in `standard_box_markers.parquet`
- Current extrinsics in `inside_network.json` are inaccurate and need replacement
- Output: New JSON file with 4x4 pose matrices, marker box as world origin
- Workflow: CLI with preview visualization
User Decisions:
- Frame sampling: Fixed interval + quality filter
- Outlier handling: Two-stage (per-frame + RANSAC on pose set)
- Minimum markers: 4+ per frame
- Image stream: Rectified LEFT (no distortion needed)
- Sync tolerance: <33ms (1 frame at 30fps)
- Tests: Add after implementation
Research Findings
- Existing patterns: `find_extrinsic_object.py` (ArUco + solvePnP), `svo_playback.py` (multi-SVO sync)
- ZED SDK intrinsics: `cam.get_camera_information().camera_configuration.calibration_parameters.left_cam`
- Rotation averaging: `scipy.spatial.transform.Rotation.mean()` for geodesic mean
- Translation averaging: Median with MAD-based outlier rejection
- Transform math: `T_world_cam = inv(T_cam_marker)` when marker is world origin
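The inversion identity above is simple enough to sketch directly. A minimal numpy example (the helper name `invert_rigid` is illustrative, not from the codebase): since a rigid transform is `[R|t]`, its inverse is `[R^T | -R^T t]`, which avoids a general 4x4 matrix inversion.

```python
import numpy as np

def invert_rigid(T: np.ndarray) -> np.ndarray:
    """Invert a 4x4 rigid transform using R^T and -R^T @ t."""
    R, t = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T @ t
    return T_inv

# Example: solvePnP reports the marker origin 2 m along the camera X axis;
# the camera pose in the marker (world) frame is the inverse.
T_cam_marker = np.eye(4)
T_cam_marker[:3, 3] = [2.0, 0.0, 0.0]
T_world_cam = invert_rigid(T_cam_marker)
```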
Metis Review
Identified Gaps (addressed):
- World frame definition → Use coordinates from `standard_box_markers.parquet`
- Transform convention → Match `inside_network.json` format (T_world_from_cam, space-separated 4x4)
- Image stream → Rectified LEFT view (no distortion)
- Sync tolerance → Moderate (<33ms)
- Parquet validation → Must validate schema early
- Planar degeneracy → Require multi-face visibility or 3D spread check
Work Objectives
Core Objective
Build a robust CLI tool for multi-camera extrinsic calibration using ArUco markers detected in synchronized SVO playback.
Concrete Deliverables
- `py_workspace/calibrate_extrinsics.py` - Main entry point
- `py_workspace/aruco/pose_averaging.py` - Robust averaging utilities
- `py_workspace/aruco/svo_sync.py` - Multi-SVO synchronization
- `py_workspace/tests/test_pose_math.py` - Unit tests
- Output: `calibrated_extrinsics.json` with per-camera 4x4 transforms
Definition of Done
- `uv run calibrate_extrinsics.py --help` → exits 0, shows required args
- `uv run calibrate_extrinsics.py --validate-markers` → validates parquet schema
- `uv run calibrate_extrinsics.py --svo ... --output out.json` → produces valid JSON
- Output JSON contains 4 cameras with 4x4 matrices in correct format
- `uv run pytest tests/test_pose_math.py` → all tests pass
- Preview mode shows detected markers with axes overlay
Must Have
- Load multiple SVO files with timestamp synchronization
- Detect ArUco markers using cv2.aruco with DICT_4X4_50
- Estimate per-frame poses using cv2.solvePnP
- Two-stage outlier rejection (reprojection error + pose RANSAC)
- Robust pose averaging (geodesic rotation mean + median translation)
- Output 4x4 transforms in `inside_network.json`-compatible format
- CLI with click for argument parsing
- Preview visualization with detected markers and axes
Must NOT Have (Guardrails)
- NO intrinsic calibration (use ZED SDK pre-calibrated values)
- NO bundle adjustment or SLAM
- NO modification of `inside_network.json` in-place
- NO right camera processing (use left only)
- NO GUI beyond simple preview window
- NO depth-based verification
- NO automatic config file updates
Verification Strategy
UNIVERSAL RULE: ZERO HUMAN INTERVENTION
ALL tasks must be verifiable by agent-executed commands. No "user visually confirms" criteria.
Test Decision
- Infrastructure exists: NO (need to set up pytest)
- Automated tests: YES (tests-after)
- Framework: pytest
Agent-Executed QA Scenarios (MANDATORY)
Verification Tool by Deliverable Type:
| Type | Tool | How Agent Verifies |
|---|---|---|
| CLI | Bash | Run command, check exit code, parse output |
| JSON output | Bash (jq) | Parse JSON, validate structure and values |
| Preview | Playwright | Capture window screenshot (optional) |
| Unit tests | Bash (pytest) | Run tests, assert all pass |
Execution Strategy
Parallel Execution Waves
Wave 1 (Start Immediately):
├── Task 1: Core pose math utilities
├── Task 2: Parquet loader and validator
└── Task 4: SVO synchronization module
Wave 2 (After Wave 1):
├── Task 3: ArUco detection integration (depends: 1, 2)
├── Task 5: Robust pose aggregation (depends: 1)
└── Task 6: Preview visualization (depends: 3)
Wave 3 (After Wave 2):
├── Task 7: CLI integration (depends: 3, 4, 5, 6)
└── Task 8: Tests and validation (depends: all)
Critical Path: Task 1 → Task 3 → Task 7 → Task 8
Dependency Matrix
| Task | Depends On | Blocks | Can Parallelize With |
|---|---|---|---|
| 1 | None | 3, 5 | 2, 4 |
| 2 | None | 3 | 1, 4 |
| 3 | 1, 2 | 6, 7 | 5 |
| 4 | None | 7 | 1, 2 |
| 5 | 1 | 7 | 3, 6 |
| 6 | 3 | 7 | 5 |
| 7 | 3, 4, 5, 6 | 8 | None |
| 8 | 7 | None | None |
TODOs
1. Create pose math utilities module
What to do:
- Create `py_workspace/aruco/pose_math.py`
- Implement `rvec_tvec_to_matrix(rvec, tvec) -> np.ndarray` (4x4 homogeneous)
- Implement `matrix_to_rvec_tvec(T) -> tuple[np.ndarray, np.ndarray]`
- Implement `invert_transform(T) -> np.ndarray`
- Implement `compose_transforms(T1, T2) -> np.ndarray`
- Implement `compute_reprojection_error(obj_pts, img_pts, rvec, tvec, K) -> float`
- Use numpy for all matrix operations
Must NOT do:
- Do NOT use scipy in this module (keep it pure numpy for core math)
- Do NOT implement averaging here (that's Task 5)
Recommended Agent Profile:
- Category:
quick- Reason: Pure math utilities, straightforward implementation
- Skills: []
- No special skills needed
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 1 (with Tasks 2, 4)
- Blocks: Tasks 3, 5
- Blocked By: None
References:
py_workspace/aruco/find_extrinsic_object.py:123-145- solvePnP usage and rvec/tvec handling- OpenCV docs:
cv2.Rodrigues()for rvec↔rotation matrix conversion - OpenCV docs:
cv2.projectPoints()for reprojection
Acceptance Criteria:
Agent-Executed QA Scenarios:
Scenario: rvec/tvec round-trip conversion
- Tool: Bash (python)
- Steps:
  1. `python -c "from aruco.pose_math import *; import numpy as np; rvec=np.array([0.1,0.2,0.3]); tvec=np.array([1,2,3]); T=rvec_tvec_to_matrix(rvec,tvec); r2,t2=matrix_to_rvec_tvec(T); assert np.allclose(rvec,r2,atol=1e-6) and np.allclose(tvec,t2,atol=1e-6); print('PASS')"`
- Expected Result: Prints "PASS"

Scenario: Transform inversion identity
- Tool: Bash (python)
- Steps:
  1. `python -c "from aruco.pose_math import *; import numpy as np; T=np.eye(4); T[:3,3]=[1,2,3]; T_inv=invert_transform(T); result=compose_transforms(T,T_inv); assert np.allclose(result,np.eye(4),atol=1e-9); print('PASS')"`
- Expected Result: Prints "PASS"

Commit: YES
- Message: `feat(aruco): add pose math utilities for transform operations`
- Files: `py_workspace/aruco/pose_math.py` (create)
2. Create parquet loader and validator
What to do:
- Create `py_workspace/aruco/marker_geometry.py`
- Implement `load_marker_geometry(parquet_path) -> dict[int, np.ndarray]`
  - Returns mapping: marker_id → corner coordinates (4, 3)
- Implement `validate_marker_geometry(geometry) -> bool`
  - Check all expected marker IDs present
  - Check coordinates are in meters (reasonable range)
  - Check corner ordering is consistent
- Use awkward-array (already in project) for parquet reading
Must NOT do:
- Do NOT hardcode marker IDs (read from parquet)
- Do NOT assume specific number of markers (validate dynamically)
Recommended Agent Profile:
- Category: `quick`
- Reason: Simple data loading and validation
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 1 (with Tasks 1, 4)
- Blocks: Task 3
- Blocked By: None
References:
- `py_workspace/aruco/find_extrinsic_object.py:55-66` - Parquet loading with awkward-array
- `py_workspace/aruco/output/standard_box_markers.parquet` - Actual data file
Acceptance Criteria:
Agent-Executed QA Scenarios:
Scenario: Load marker geometry from parquet
- Tool: Bash (python)
- Preconditions: standard_box_markers.parquet exists
- Steps:
  1. `cd /workspaces/zed-playground/py_workspace`
  2. `python -c "from aruco.marker_geometry import load_marker_geometry; g=load_marker_geometry('aruco/output/standard_box_markers.parquet'); print(f'Loaded {len(g)} markers'); assert len(g) >= 4; print('PASS')"`
- Expected Result: Prints marker count and "PASS"

Scenario: Validate geometry returns True for valid data
- Tool: Bash (python)
- Steps:
  1. `python -c "from aruco.marker_geometry import *; g=load_marker_geometry('aruco/output/standard_box_markers.parquet'); assert validate_marker_geometry(g); print('PASS')"`
- Expected Result: Prints "PASS"

Commit: YES
- Message: `feat(aruco): add marker geometry loader with validation`
- Files: `py_workspace/aruco/marker_geometry.py` (create)
3. Integrate ArUco detection with ZED intrinsics
What to do:
- Create `py_workspace/aruco/detector.py`
- Implement `create_detector() -> cv2.aruco.ArucoDetector` using DICT_4X4_50
- Implement `detect_markers(image, detector) -> tuple[corners, ids]`
- Implement `get_zed_intrinsics(camera) -> tuple[np.ndarray, np.ndarray]`
  - Extract K matrix (3x3) and distortion from ZED SDK
  - For rectified images, distortion should be zeros
- Implement `estimate_pose(corners, ids, marker_geometry, K, dist) -> tuple[rvec, tvec, error]`
  - Match detected markers to known 3D points
  - Call solvePnP with SOLVEPNP_SQPNP
  - Compute and return reprojection error
  - Require minimum 4 markers for valid pose
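The correspondence-matching step inside `estimate_pose` might look like the sketch below. The helper name `build_correspondences` and the exact argument shapes are assumptions; `cv2.solvePnP(obj_pts, img_pts, K, dist, flags=cv2.SOLVEPNP_SQPNP)` would then consume the returned arrays.

```python
import numpy as np

def build_correspondences(corners, ids, marker_geometry, min_markers=4):
    """Match detected marker corners to known 3D box points.

    corners: list of (4, 2) corner arrays from the ArUco detector
    ids: (N, 1) array of detected marker IDs
    marker_geometry: dict marker_id -> (4, 3) corner coordinates in meters
    Returns (obj_pts, img_pts) ready for cv2.solvePnP, or None when fewer
    than min_markers known markers are visible (the plan's 4+ rule).
    """
    obj, img = [], []
    for marker_id, c in zip(np.asarray(ids).ravel(), corners):
        if int(marker_id) not in marker_geometry:
            continue  # detected marker is not on the calibration box
        obj.append(np.asarray(marker_geometry[int(marker_id)], dtype=np.float64))
        img.append(np.asarray(c, dtype=np.float64).reshape(4, 2))
    if len(obj) < min_markers:
        return None  # reject under-constrained frames
    return np.concatenate(obj), np.concatenate(img)
```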
Must NOT do:
- Do NOT use deprecated `estimatePoseSingleMarkers`
- Do NOT accept poses with <4 markers
Recommended Agent Profile:
- Category: `unspecified-low`
- Reason: Integration of existing patterns, moderate complexity
- Skills: []
Parallelization:
- Can Run In Parallel: NO
- Parallel Group: Wave 2 (after Task 1, 2)
- Blocks: Tasks 6, 7
- Blocked By: Tasks 1, 2
References:
py_workspace/aruco/find_extrinsic_object.py:54-145- Full ArUco detection and solvePnP patternpy_workspace/libs/pyzed_pkg/pyzed/sl.pyi:5110-5180- CameraParameters with fx, fy, cx, cy, distopy_workspace/svo_playback.py:46- get_camera_information() usage
Acceptance Criteria:
Agent-Executed QA Scenarios:
Scenario: Detector creation succeeds
- Tool: Bash (python)
- Steps:
  1. `python -c "from aruco.detector import create_detector; d=create_detector(); print(type(d)); print('PASS')"`
- Expected Result: Prints detector type and "PASS"

Scenario: Pose estimation with synthetic data
- Tool: Bash (python)
- Steps:
  1. Run a python snippet:

        import numpy as np
        from aruco.detector import estimate_pose
        from aruco.marker_geometry import load_marker_geometry
        # Create synthetic test with known geometry
        geom = load_marker_geometry('aruco/output/standard_box_markers.parquet')
        K = np.array([[700,0,960],[0,700,540],[0,0,1]], dtype=np.float64)
        # Test passes if function runs without error
        print('PASS')

- Expected Result: Prints "PASS"

Commit: YES
- Message: `feat(aruco): add ArUco detector with ZED intrinsics integration`
- Files: `py_workspace/aruco/detector.py` (create)
4. Create multi-SVO synchronization module
What to do:
- Create `py_workspace/aruco/svo_sync.py`
- Implement `SVOReader` class:
  - `__init__(svo_paths: list[str])` - Open all SVOs
  - `get_camera_info(idx) -> CameraInfo` - Serial, resolution, intrinsics
  - `sync_to_latest_start()` - Align all cameras to latest start timestamp
  - `grab_synced(tolerance_ms=33) -> dict[serial, Frame] | None` - Get synced frames
  - `seek_to_frame(frame_num)` - Seek all cameras
  - `close()` - Cleanup
- Frame should contain: image (numpy), timestamp_ns, serial_number
- Use pattern from `svo_playback.py` for sync logic
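The tolerance check behind `grab_synced` reduces to comparing timestamp spread across cameras; a sketch with a hypothetical helper name:

```python
def frames_within_tolerance(timestamps_ns, tolerance_ms=33):
    """Check whether one candidate frame per camera is mutually synchronized.

    timestamps_ns: dict serial -> frame timestamp in nanoseconds.
    Returns True when the max spread fits the plan's <33 ms budget
    (one frame period at 30 fps).
    """
    ts = list(timestamps_ns.values())
    spread_ms = (max(ts) - min(ts)) / 1e6
    return spread_ms <= tolerance_ms
```

`grab_synced` would grab one frame per camera, call this check, and advance the camera holding the oldest frame until the set passes or a stream ends.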
Must NOT do:
- Do NOT implement complex clock drift correction
- Do NOT handle streaming (SVO only)
Recommended Agent Profile:
- Category: `unspecified-low`
- Reason: Adapting existing pattern, moderate complexity
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 1 (with Tasks 1, 2)
- Blocks: Task 7
- Blocked By: None
References:
py_workspace/svo_playback.py:18-102- Complete multi-SVO sync patternpy_workspace/libs/pyzed_pkg/pyzed/sl.pyi:10010-10097- SVO position and frame methods
Acceptance Criteria:
Agent-Executed QA Scenarios:
Scenario: SVOReader opens multiple files
- Tool: Bash (python)
- Preconditions: SVO files exist in py_workspace
- Steps:
  1. Run a python snippet:

        from aruco.svo_sync import SVOReader
        import glob
        svos = glob.glob('*.svo2')[:2]
        if len(svos) >= 2:
            reader = SVOReader(svos)
            print(f'Opened {len(svos)} SVOs')
            reader.close()
            print('PASS')
        else:
            print('SKIP: Need 2+ SVOs')

- Expected Result: Prints "PASS" or "SKIP"

Scenario: Sync aligns timestamps
- Tool: Bash (python)
- Steps:
  1. Confirm sync_to_latest_start returns without error
- Expected Result: No exception raised

Commit: YES
- Message: `feat(aruco): add multi-SVO synchronization reader`
- Files: `py_workspace/aruco/svo_sync.py` (create)
5. Implement robust pose aggregation
What to do:
- Create `py_workspace/aruco/pose_averaging.py`
- Implement `PoseAccumulator` class:
  - `add_pose(T: np.ndarray, reproj_error: float, frame_id: int)`
  - `get_inlier_poses(max_reproj_error=2.0) -> list[np.ndarray]`
  - `compute_robust_mean() -> tuple[np.ndarray, dict]`
    - Use scipy.spatial.transform.Rotation.mean() for rotation
    - Use median for translation
    - Return stats dict: {n_total, n_inliers, median_error, std_rotation_deg}
- Implement `ransac_filter_poses(poses, rot_thresh_deg=5.0, trans_thresh_m=0.05) -> list[int]`
  - Return indices of inlier poses
Must NOT do:
- Do NOT implement bundle adjustment
- Do NOT modify poses in-place
Recommended Agent Profile:
- Category: `unspecified-low`
- Reason: Math-focused but requires scipy understanding
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 2 (with Task 3)
- Blocks: Task 7
- Blocked By: Task 1
References:
- Librarian findings on `scipy.spatial.transform.Rotation.mean()`
- Librarian findings on RANSAC-style pose filtering
Acceptance Criteria:
Agent-Executed QA Scenarios:
Scenario: Rotation averaging produces valid result
- Tool: Bash (python)
- Steps:
  1. Run a python snippet:

        from aruco.pose_averaging import PoseAccumulator
        import numpy as np
        acc = PoseAccumulator()
        T = np.eye(4)
        acc.add_pose(T, reproj_error=1.0, frame_id=0)
        acc.add_pose(T, reproj_error=1.5, frame_id=1)
        mean_T, stats = acc.compute_robust_mean()
        assert mean_T.shape == (4,4)
        assert stats['n_inliers'] == 2
        print('PASS')

- Expected Result: Prints "PASS"

Scenario: RANSAC rejects outliers
- Tool: Bash (python)
- Steps:
  1. Run a python snippet:

        from aruco.pose_averaging import ransac_filter_poses
        import numpy as np
        # Create 3 similar poses + 1 outlier
        poses = [np.eye(4) for _ in range(3)]
        outlier = np.eye(4); outlier[:3,3] = [10,10,10]  # Far away
        poses.append(outlier)
        inliers = ransac_filter_poses(poses, trans_thresh_m=0.1)
        assert len(inliers) == 3
        assert 3 not in inliers
        print('PASS')

- Expected Result: Prints "PASS"

Commit: YES
- Message: `feat(aruco): add robust pose averaging with RANSAC filtering`
- Files: `py_workspace/aruco/pose_averaging.py` (create)
6. Add preview visualization
What to do:
- Create `py_workspace/aruco/preview.py`
- Implement `draw_detected_markers(image, corners, ids) -> np.ndarray`
  - Draw marker outlines and IDs
- Implement `draw_pose_axes(image, rvec, tvec, K, length=0.1) -> np.ndarray`
  - Use cv2.drawFrameAxes
- Implement `show_preview(images: dict[str, np.ndarray], wait_ms=1) -> int`
  - Show multiple camera views in separate windows
  - Return key pressed
Must NOT do:
- Do NOT implement complex GUI
- Do NOT block indefinitely (use waitKey with timeout)
Recommended Agent Profile:
- Category: `quick`
- Reason: Simple OpenCV visualization
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 2 (with Task 5)
- Blocks: Task 7
- Blocked By: Task 3
References:
py_workspace/aruco/find_extrinsic_object.py:138-145- drawFrameAxes usagepy_workspace/aruco/find_extrinsic_object.py:84-105- Marker visualization
Acceptance Criteria:
Agent-Executed QA Scenarios:
Scenario: Draw functions return valid images
- Tool: Bash (python)
- Steps:
  1. Run a python snippet:

        from aruco.preview import draw_detected_markers
        import numpy as np
        img = np.zeros((480,640,3), dtype=np.uint8)
        corners = [np.array([[100,100],[200,100],[200,200],[100,200]], dtype=np.float32)]
        ids = np.array([[1]])
        result = draw_detected_markers(img, corners, ids)
        assert result.shape == (480,640,3)
        print('PASS')

- Expected Result: Prints "PASS"

Commit: YES
- Message: `feat(aruco): add preview visualization utilities`
- Files: `py_workspace/aruco/preview.py` (create)
7. Create main CLI tool
What to do:
- Create `py_workspace/calibrate_extrinsics.py`
- Use click for CLI:
  - `--svo PATH` (multiple) - SVO file paths
  - `--markers PATH` - Marker geometry parquet
  - `--output PATH` - Output JSON path
  - `--sample-interval INT` - Frame interval (default 30)
  - `--max-reproj-error FLOAT` - Threshold (default 2.0)
  - `--preview / --no-preview` - Show visualization
  - `--validate-markers` - Only validate parquet and exit
  - `--self-check` - Run and report quality metrics
- Main workflow:
  - Load marker geometry and validate
  - Open SVOs and sync
  - Sample frames at interval
  - For each synced frame set:
    - Detect markers in each camera
    - Estimate pose if ≥4 markers
    - Accumulate poses per camera
  - Compute robust mean per camera
  - Output JSON in `inside_network.json`-compatible format
- Output JSON format:
  `{ "serial": { "pose": "r00 r01 r02 tx r10 r11 r12 ty ...", "stats": { "n_frames": N, "median_reproj_error": X } } }`
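Serializing a pose into the space-separated string could look like the sketch below. It emits all 16 values of the 4x4 transform in row-major order, which is an assumption — the plan's example shows rows of [R|t], so verify the exact value count against `inside_network.json` before relying on it. Both helper names are illustrative.

```python
import numpy as np

def pose_to_string(T):
    """Serialize a 4x4 transform as a space-separated row-major string."""
    return " ".join(f"{v:.9f}" for v in np.asarray(T, dtype=float).ravel())

def string_to_pose(s):
    """Parse the space-separated string back into a 4x4 matrix."""
    return np.array(s.split(), dtype=float).reshape(4, 4)
```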
Must NOT do:
- Do NOT modify existing config files
- Do NOT implement auto-update of inside_network.json
Recommended Agent Profile:
- Category: `unspecified-high`
- Reason: Integration of all components, complex workflow
- Skills: []
Parallelization:
- Can Run In Parallel: NO
- Parallel Group: Wave 3 (final integration)
- Blocks: Task 8
- Blocked By: Tasks 3, 4, 5, 6
References:
py_workspace/svo_playback.py- CLI structure with argparse (adapt to click)py_workspace/aruco/find_extrinsic_object.py- Main loop patternzed_settings/inside_network.json:20- Output pose format
Acceptance Criteria:
Agent-Executed QA Scenarios:
Scenario: CLI help works
- Tool: Bash
- Steps:
  1. `cd /workspaces/zed-playground/py_workspace`
  2. `uv run calibrate_extrinsics.py --help`
- Expected Result: Exit code 0, shows --svo, --markers, --output options

Scenario: Validate markers only mode
- Tool: Bash
- Steps:
  1. `uv run calibrate_extrinsics.py --markers aruco/output/standard_box_markers.parquet --validate-markers`
- Expected Result: Exit code 0, prints marker count

Scenario: Full calibration produces JSON
- Tool: Bash
- Preconditions: SVO files exist
- Steps:
  1. `uv run calibrate_extrinsics.py --svo ZED_SN46195029.svo2 --svo ZED_SN44435674.svo2 --markers aruco/output/standard_box_markers.parquet --output /tmp/test_extrinsics.json --no-preview --sample-interval 100`
  2. `jq 'keys' /tmp/test_extrinsics.json`
- Expected Result: Exit code 0, JSON contains camera serials

Scenario: Self-check reports quality
- Tool: Bash
- Steps:
  1. `uv run calibrate_extrinsics.py ... --self-check`
- Expected Result: Prints per-camera stats including median reproj error

Commit: YES
- Message: `feat(aruco): add calibrate_extrinsics CLI tool`
- Files: `py_workspace/calibrate_extrinsics.py` (create)
8. Add unit tests and final validation
What to do:
- Create `py_workspace/tests/test_pose_math.py`
- Test cases:
  - `test_rvec_tvec_roundtrip` - Convert and back
  - `test_transform_inversion` - T @ inv(T) = I
  - `test_transform_composition` - Known compositions
  - `test_reprojection_error_zero` - Perfect projection = 0 error
- Create `py_workspace/tests/test_pose_averaging.py`
- Test cases:
  - `test_mean_of_identical_poses` - Returns same pose
  - `test_outlier_rejection` - Outliers removed
- Add `scipy` to pyproject.toml if not present
- Run full test suite
Must NOT do:
- Do NOT require real SVO files for unit tests (use synthetic data)
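Synthetic poses for these tests could be generated with a helper like the following sketch. `make_noisy_poses` is hypothetical, and the first-order rotation perturbation is only approximately orthonormal, which is acceptable for small noise levels.

```python
import numpy as np

def make_noisy_poses(T_true, n=20, rot_noise_deg=1.0, trans_noise_m=0.005, seed=0):
    """Generate noisy copies of a ground-truth 4x4 pose for unit tests.

    Applies a small-angle rotation perturbation (first-order, I + [w]_x)
    and Gaussian translation noise, so averaging tests have a known answer.
    """
    rng = np.random.default_rng(seed)
    poses = []
    for _ in range(n):
        w = np.deg2rad(rot_noise_deg) * rng.standard_normal(3)  # tiny axis-angle
        K = np.array([[0, -w[2], w[1]],
                      [w[2], 0, -w[0]],
                      [-w[1], w[0], 0]])
        T = T_true.copy()
        T[:3, :3] = (np.eye(3) + K) @ T_true[:3, :3]  # approximate rotation
        T[:3, 3] = T_true[:3, 3] + trans_noise_m * rng.standard_normal(3)
        poses.append(T)
    return poses
```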
Recommended Agent Profile:
- Category: `quick`
- Reason: Straightforward test implementation
- Skills: []
Parallelization:
- Can Run In Parallel: NO
- Parallel Group: Wave 3 (final)
- Blocks: None
- Blocked By: Task 7
References:
- Task 1 acceptance criteria for test patterns
- Task 5 acceptance criteria for averaging tests
Acceptance Criteria:
Agent-Executed QA Scenarios:
Scenario: All unit tests pass
- Tool: Bash
- Steps:
  1. `cd /workspaces/zed-playground/py_workspace`
  2. `uv run pytest tests/ -v`
- Expected Result: Exit code 0, all tests pass

Scenario: Coverage check
- Tool: Bash
- Steps:
  1. `uv run pytest tests/ --tb=short`
- Expected Result: Shows test results summary

Commit: YES
- Message: `test(aruco): add unit tests for pose math and averaging`
- Files: `py_workspace/tests/test_pose_math.py`, `py_workspace/tests/test_pose_averaging.py` (create)
Commit Strategy
| After Task | Message | Files | Verification |
|---|---|---|---|
| 1 | feat(aruco): add pose math utilities | `pose_math.py` | python import test |
| 2 | feat(aruco): add marker geometry loader | `marker_geometry.py` | python import test |
| 3 | feat(aruco): add ArUco detector | `detector.py` | python import test |
| 4 | feat(aruco): add multi-SVO sync | `svo_sync.py` | python import test |
| 5 | feat(aruco): add pose averaging | `pose_averaging.py` | python import test |
| 6 | feat(aruco): add preview utils | `preview.py` | python import test |
| 7 | feat(aruco): add calibrate CLI | `calibrate_extrinsics.py` | `--help` works |
| 8 | test(aruco): add unit tests | `tests/*.py` | pytest passes |
Success Criteria
Verification Commands
# CLI works
uv run calibrate_extrinsics.py --help # Expected: exit 0
# Marker validation
uv run calibrate_extrinsics.py --markers aruco/output/standard_box_markers.parquet --validate-markers # Expected: exit 0
# Tests pass
uv run pytest tests/ -v # Expected: all pass
# Full calibration (with real SVOs)
uv run calibrate_extrinsics.py --svo *.svo2 --markers aruco/output/standard_box_markers.parquet --output calibrated.json --no-preview
jq 'keys' calibrated.json # Expected: camera serials
Final Checklist
- All "Must Have" present
- All "Must NOT Have" absent
- All tests pass
- CLI --help shows all options
- Output JSON matches inside_network.json pose format
- Preview shows detected markers with axes