Draft: Depth-Based Extrinsic Verification/Fusion
Requirements (confirmed)
- Primary Goal: Both verify AND refine extrinsics using depth data
- Integration: Add new flags to the existing calibrate_extrinsics.py CLI
- Depth Mode: CLI argument, defaulting to NEURAL_PLUS (or NEURAL)
- Target Geometry: Any markers (from the parquet file), not just the ArUco box
Technical Decisions
- Use ZED SDK retrieve_measure(MEASURE.DEPTH) for depth maps
- Extend SVOReader to optionally enable depth mode
- Compute depth residuals at detected marker corner positions
- Use residual statistics for verification metrics
- ICP or optimization for refinement (if requested)
Research Findings
Depth Residual Formula
For 3D point P_world with camera extrinsics (R, t):
P_cam = R @ P_world + t
z_predicted = P_cam[2]
(u, v) = project(P_cam, K)
z_measured = depth_map[round(v), round(u)]
residual = z_measured - z_predicted
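The formula above can be sketched in NumPy as follows (a minimal illustration: function and array names are assumptions, the depth map is taken to be in meters with NaN for invalid pixels, and `K` is a standard pinhole intrinsics matrix):

```python
import numpy as np

def depth_residuals(P_world, R, t, K, depth_map):
    """Depth residuals z_measured - z_predicted at projected 3D points.

    P_world:   (N, 3) marker corner positions in the world frame
    R, t:      camera extrinsics (world -> camera)
    K:         (3, 3) pinhole intrinsics
    depth_map: (H, W) measured depth in meters (NaN where invalid)
    Returns (N,) residuals; NaN for points behind the camera or off-image.
    """
    P_cam = P_world @ R.T + t                  # batched P_cam = R @ P_world + t
    z_pred = P_cam[:, 2]
    uv = P_cam @ K.T                           # pinhole projection
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    H, W = depth_map.shape
    res = np.full(len(P_world), np.nan)
    ok = (z_pred > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    res[ok] = depth_map[v[ok], u[ok]] - z_pred[ok]
    return res
```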
Verification Metrics
- Mean absolute residual
- RMSE
- Depth-normalized error: |residual| / z_predicted
- Spatial bias detection (residual vs pixel position)
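The metrics listed above could be computed from a residual array as sketched below (names are illustrative; spatial bias is reported here as the Pearson correlation between residual and each pixel coordinate, which is one simple choice among several):

```python
import numpy as np

def verification_metrics(residuals, z_pred, u, v):
    """Summary statistics over the valid (finite) depth residuals.

    residuals: (N,) z_measured - z_predicted, NaN where invalid
    z_pred:    (N,) predicted depths
    u, v:      (N,) pixel coordinates of each residual sample
    """
    ok = np.isfinite(residuals)
    r, z = residuals[ok], z_pred[ok]
    return {
        "n_valid": int(ok.sum()),
        "mean_abs_residual": float(np.mean(np.abs(r))),
        "rmse": float(np.sqrt(np.mean(r ** 2))),
        "depth_normalized_error": float(np.mean(np.abs(r) / z)),
        # correlation of residual with pixel position as a bias indicator
        "bias_u": float(np.corrcoef(r, u[ok])[0, 1]),
        "bias_v": float(np.corrcoef(r, v[ok])[0, 1]),
    }
```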
Refinement Approach
- ICP (Iterative Closest Point) on depth points near markers
- Point-to-plane ICP for better convergence
- Initialize with ArUco pose, refine with depth
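The core of the point-to-plane variant above is a small linearized alignment step; a sketch follows (correspondence search and the outer ICP loop are omitted, and the small-angle linearization assumes a good ArUco initialization, as the list above implies):

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized point-to-plane alignment step.

    Minimizes sum(((src_i + w x src_i + t - dst_i) . n_i)^2) over a small
    rotation vector w and translation t -- the standard linearization used
    inside point-to-plane ICP. Returns (R, t) with R built from w under
    the small-angle approximation.
    """
    # Row i of A: [src_i x n_i, n_i]; rhs: (dst_i - src_i) . n_i
    A = np.hstack([np.cross(src, normals), normals])   # (N, 6)
    b = np.einsum("ij,ij->i", dst - src, normals)      # (N,)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    w, t = x[:3], x[3:]
    R = np.array([[1.0, -w[2], w[1]],
                  [w[2], 1.0, -w[0]],
                  [-w[1], w[0], 1.0]])                 # small-angle rotation
    return R, t
```

In practice a library implementation (e.g. Open3D's point-to-plane ICP) would likely be used instead of hand-rolling the loop; this sketch only shows why point-to-plane converges well, since each step solves directly for the rigid motion that cancels the along-normal error.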
User Decisions (Round 2)
- Refinement Method: Direct optimization (minimize depth residuals to adjust extrinsics)
- Verification Output: Full reporting (console + JSON + optional CSV)
- Depth Filtering: Confidence-based (use ZED confidence threshold + range limits)
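The chosen direct-optimization refinement could take the shape of a small Gauss-Newton loop over a 6-DoF perturbation of the initial extrinsics, minimizing the depth residuals directly. The sketch below is illustrative, not the implementation: all names are assumptions, the Jacobian is numerical, and confidence/range filtering is assumed to have already replaced untrusted pixels in the depth map. DoFs the depth data does not constrain are left near zero by the minimum-norm lstsq solution.

```python
import numpy as np

def rotvec_to_mat(w):
    """Rodrigues formula: rotation vector -> rotation matrix."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * Kx + (1 - np.cos(th)) * (Kx @ Kx)

def refine_extrinsics(P_world, R0, t0, K, depth_map, iters=10, eps=1e-5):
    """Gauss-Newton refinement of (R0, t0) minimizing depth residuals.

    Optimizes a 6-vector x = [w, dt] applied on top of the initial
    extrinsics; residuals are z_measured - z_predicted at the projected
    marker corners.
    """
    H, W = depth_map.shape

    def residuals(x):
        R = rotvec_to_mat(x[:3]) @ R0
        t = t0 + x[3:]
        P_cam = P_world @ R.T + t
        z = P_cam[:, 2]
        u = np.round(K[0, 0] * P_cam[:, 0] / z + K[0, 2]).astype(int)
        v = np.round(K[1, 1] * P_cam[:, 1] / z + K[1, 2]).astype(int)
        u, v = np.clip(u, 0, W - 1), np.clip(v, 0, H - 1)
        return depth_map[v, u] - z

    x = np.zeros(6)
    for _ in range(iters):
        r = residuals(x)
        J = np.empty((len(r), 6))
        for j in range(6):                 # forward-difference Jacobian
            d = np.zeros(6); d[j] = eps
            J[:, j] = (residuals(x + d) - r) / eps
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
    return rotvec_to_mat(x[:3]) @ R0, t0 + x[3:]
```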
Open Questions
- Test strategy: TDD or tests after?
- Minimum markers/frames for reliable depth verification?
Scope Boundaries
- INCLUDE: Depth retrieval, residual computation, verification metrics, optional ICP refinement
- EXCLUDE: Bundle adjustment, SLAM, right camera processing