feat: add pose-set comparison visualization and clarify conventions

2026-02-08 08:07:33 +00:00
parent 4bc4b7dfb8
commit d6c7829b1e
2 changed files with 298 additions and 41 deletions
@@ -322,6 +322,36 @@ system's origin.
---
## Methodology: Comparing Different World Frames
Since `inside_network.json` (Fusion) and `calibrate_extrinsics.py` (ArUco) use different
world origins, raw coordinate comparison is meaningless. We validated consistency using
**rigid SE(3) alignment**:
1. **Match Serials**: Identify cameras present in both JSON files.
2. **Extract Centers**: Extract the translation column `t` from `T_world_from_cam` for
each camera.
* **Crucial**: Both systems use `T_world_from_cam`. It is **not** `cam_from_world`.
3. **Compute Alignment**: Solve for the rigid transform `(R_align, t_align)` that
minimizes the distance between the two point sets (Kabsch algorithm).
* Scale is fixed at 1.0 (both systems use meters).
4. **Apply & Compare**:
* Transform Fusion points: `P_aligned = R_align * P_fusion + t_align`.
* **Position Residual**: `|| P_aruco - P_aligned ||`.
* **Orientation Check**: Apply `R_align` to Fusion rotation matrices and compare
column vectors (Right/Down/Forward) with ArUco rotations.
5. **Up-Vector Verification**:
* Fusion uses Y-Up (gravity). ArUco uses Y-Down (image).
* After alignment, the transformed Fusion Y-axis should be approximately parallel
to the ArUco -Y axis (the sign can flip depending on which alignment solution is
found, but both axes must be collinear with gravity).
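The alignment in steps 1-4 can be sketched as follows. This is a minimal, self-contained illustration of the Kabsch solve and position residual, not the actual comparison script; the toy data and variable names are assumptions.

```python
import numpy as np

def kabsch_align(P_fusion, P_aruco):
    """Solve for (R, t) minimizing ||P_aruco - (R @ P_fusion + t)||, scale fixed at 1.

    P_fusion, P_aruco: (N, 3) arrays of matched camera centers (same serial order).
    """
    mu_f = P_fusion.mean(axis=0)
    mu_a = P_aruco.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets.
    H = (P_fusion - mu_f).T @ (P_aruco - mu_a)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the SVD solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_a - R @ mu_f
    return R, t

# Toy check: a known rigid motion should be recovered almost exactly.
rng = np.random.default_rng(0)
P = rng.normal(size=(6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true

R_align, t_align = kabsch_align(P, Q)
# Per-camera position residual: || P_aruco - (R_align * P_fusion + t_align) ||.
residuals = np.linalg.norm(Q - (P @ R_align.T + t_align), axis=1)
print(residuals.max())  # ~0 for noise-free correspondences
```

With real calibration data the residuals are nonzero; the per-camera maximum is what the < 2 cm figure below refers to.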
**Result**: The overlay images in `output/` were generated using this aligned frame.
The low residuals (<2cm) confirm that the internal calibration is consistent, even
though the absolute world coordinates differ.
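The up-vector verification (step 5) reduces to a collinearity check between the aligned Fusion Y-axis and the ArUco Y-axis. A minimal sketch, assuming `R_align` comes from the Kabsch step and the function name is illustrative:

```python
import numpy as np

def up_vector_collinearity(R_align,
                           fusion_up=np.array([0.0, 1.0, 0.0]),   # Y-Up (gravity)
                           aruco_up=np.array([0.0, -1.0, 0.0])):  # Y-Down (image)
    """Return |cos(angle)| between the aligned Fusion up-axis and ArUco's Y-axis.

    abs() is taken because the sign may flip with the alignment solution;
    a value near 1.0 means the axes are collinear with gravity.
    """
    y_aligned = R_align @ fusion_up
    return abs(float(y_aligned @ aruco_up))

# A 180-degree flip about X maps +Y to -Y, so collinearity is exactly 1.
R_flip = np.diag([1.0, -1.0, -1.0])
print(up_vector_collinearity(R_flip))  # 1.0
```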
---
## Appendix: Stale README References
The following lines in `py_workspace/README.md` reference removed flags and should be