feat(calibration): add data-driven ground alignment with debug and fast iteration flags

2026-02-07 03:20:16 +00:00
parent afc8e9034d
commit 446c02d42a
10 changed files with 1221 additions and 33 deletions
@@ -0,0 +1,32 @@
- Fixed edge cases in `compute_face_normal` to use stable edge definition for quad faces (corners[1]-corners[0] x corners[3]-corners[0]).
- Added explicit shape validation and zero-norm guards in rotation_align_vectors.
- Ensured concrete np.ndarray return types with explicit astype(np.float64) to satisfy type checking.
## Type Checking Warnings
- `basedpyright` reports numerous warnings, mostly related to `Any` types from `cv2` and `pyzed.sl` bindings which lack full type stubs.
- Deprecation warnings for `List`, `Dict`, `Tuple` (superseded by builtin generics since Python 3.9) are present, but the existing style uses them; kept consistent with existing code.
- `reportUnknownVariableType` is common due to dynamic nature of OpenCV/ZED returns.
## Parquet Metadata Handling
- `awkward` library used for parquet reading returns jagged arrays for list columns like `ids`.
- `ak.to_list()` is necessary to convert these to standard Python lists for dictionary values.
## Backward Compatibility
- While `FACE_MARKER_MAP` constant remains in `aruco/alignment.py` for potential external consumers, it is no longer used by the CLI tool.
- Users with old parquet files will now see a warning and no alignment, rather than silent fallback to potentially incorrect hardcoded IDs.
## Runtime Errors
- None encountered during test implementation. API signatures were consistent with the implementation in `aruco/alignment.py`.
## Messaging Consistency
## Iteration Speed
- Processing full SVO files (thousands of frames) is too slow for verifying simple logic changes. The `--max-samples` option addresses this by allowing early exit after a few successful samples.
@@ -0,0 +1,63 @@
- Implemented core alignment utilities in aruco/alignment.py.
- Used Rodrigues' rotation formula for vector alignment with explicit handling for parallel and anti-parallel cases.
- Implemented `FACE_MARKER_MAP` and `get_face_normal_from_geometry` to support multi-marker face normal averaging.
- Implemented `detect_ground_face` using dot-product scoring against camera up-vector with `loguru` debug logging.
- Integrated ground-plane alignment into `calibrate_extrinsics.py` with CLI-toggled heuristic and explicit face/marker selection.
## SVO Directory Expansion
- Implemented directory expansion for `--svo` argument.
- Iterates through the provided paths; for each directory, collects `.svo` and `.svo2` files.
- Maintains backward compatibility for single file paths.
- Sorts found files to ensure deterministic processing order.
## ArUco Dictionary Selection
- Added `--aruco-dictionary` CLI option mapping string names to `cv2.aruco` constants.
- Defaults to `DICT_4X4_50` but supports all standard dictionaries including AprilTags.
- Passed to `create_detector` to allow flexibility for different marker sets.
## Minimum Markers Configuration
- Added `--min-markers` CLI option (default 1).
- Passed to `estimate_pose_from_detections` to filter out poses with insufficient marker support.
- Useful for improving robustness or allowing single-marker poses when necessary.
## Logging Improvements
- Added `loguru` debug logs for:
- Number of detected markers per frame.
- Pose acceptance/rejection with specific reasons (reprojection error, marker count).
## Dynamic Face Mapping
- Implemented `load_face_mapping` in `aruco/marker_geometry.py` to read face definitions from parquet metadata.
- Parquet file must contain `name` (string) and `ids` (list of ints) columns.
- `calibrate_extrinsics.py` now loads this map at runtime and passes it to alignment functions.
- `aruco/alignment.py` functions (`get_face_normal_from_geometry`, `detect_ground_face`) now accept an optional `face_marker_map` argument.
## Strict Data-Driven Alignment
- Removed implicit fallback to `FACE_MARKER_MAP` in `aruco/alignment.py`.
- `calibrate_extrinsics.py` now explicitly checks for loaded face mapping.
- If mapping is missing (e.g., old parquet without `name`/`ids`), alignment is skipped with a warning instead of using hardcoded defaults.
- This enforces the requirement that ground alignment configuration must come from the marker definition file.
- Alignment tests verify that `rotation_align_vectors` correctly handles identity, 90-degree, and anti-parallel cases.
- `detect_ground_face` and `get_face_normal_from_geometry` are now data-driven, requiring an explicit `face_marker_map` at runtime.
- Unit tests use mock geometry to verify normal computation and face selection logic without requiring real SVO/parquet data.
- **Parquet Schema**: The marker configuration parquet file (`standard_box_markers_600mm.parquet`) uses a schema with `name` (string), `ids` (list<int64>), and `corners` (list<list<list<float64>>>).
- **Dual Loading Strategy**: The system loads this single file in two ways:
1. `load_marker_geometry`: Flattens `ids` and `corners` to build a global map of Marker ID -> 3D Corners.
2. `load_face_mapping`: Uses `name` and `ids` to group markers by face (e.g., "bottom"), which is critical for ground plane alignment.
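The two views can be illustrated on mock rows (plain dicts standing in for the parquet records; the real loaders use `awkward`):

```python
import numpy as np

# Mock records mimicking the schema: one row per face.
rows = [
    {"name": "bottom", "ids": [21],
     "corners": [[[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]]},
    {"name": "top", "ids": [23],
     "corners": [[[0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]]]},
]

# View 1 (load_marker_geometry): flatten to marker ID -> (4, 3) corners.
marker_geometry = {
    mid: np.asarray(c, dtype=np.float64)
    for row in rows
    for mid, c in zip(row["ids"], row["corners"])
}

# View 2 (load_face_mapping): group marker IDs by lowercase face name.
face_map = {row["name"].lower(): [int(m) for m in row["ids"]] for row in rows}
```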
## Runtime Stability
- Fixed `AttributeError: 'FrameData' object has no attribute 'confidence_map'` by explicitly adding it to the dataclass and populating it in `SVOReader`.
- Added `--debug` flag to control log verbosity, defaulting to cleaner INFO level output.
## Consistency Hardening
- Removed "using default fallback" messaging from `calibrate_extrinsics.py` to align with the strict data-driven requirement.
## Fast Iteration
- Added `--max-samples` CLI option to `calibrate_extrinsics.py` to allow processing a limited number of samples (e.g., 1 or 3) instead of the full SVO.
- This significantly speeds up the development loop when testing changes to pose estimation or alignment logic that don't require the full dataset.
@@ -0,0 +1,393 @@
# Ground Plane Detection and Auto-Alignment
## TL;DR
> **Quick Summary**: Add ground plane detection and optional world-frame alignment to `calibrate_extrinsics.py` so the output coordinate system always has Y-up, regardless of how the calibration box is placed.
>
> **Deliverables**:
> - New `aruco/alignment.py` module with ground detection and alignment utilities
> - CLI options: `--auto-align`, `--ground-face`, `--ground-marker-id`
> - Face metadata in marker parquet files (or hardcoded mapping)
> - Debug logs for alignment decisions
>
> **Estimated Effort**: Medium
> **Parallel Execution**: NO - sequential (dependencies between tasks)
> **Critical Path**: Task 1 → Task 2 → Task 3 → Task 4 → Task 5
---
## Context
### Original Request
User wants to detect which side of the calibration box is on the ground and auto-align the world frame so Y is always up, matching the ZED convention seen in `inside_network.json`.
### Interview Summary
**Key Discussions**:
- Ground detection: support both heuristic (camera up-vector) AND user-specified (face name or marker ID)
- Alignment: opt-in via `--auto-align` flag (default OFF)
- Y-up convention confirmed from reference calibration
**Research Findings**:
- `inside_network.json` shows Y-up convention (cameras at Y ≈ -1.2m)
- Camera 41831756 has identity rotation → its axes match world axes
- Marker parquet contains face names and corner coordinates
- Face normals can be computed from corners: `cross(c1-c0, c3-c0)`
- `object_points.parquet`: 3 faces (a, b, c) with 4 markers each
- `standard_box_markers.parquet`: 6 faces with 1 marker each (21=bottom)
---
## Work Objectives
### Core Objective
Enable `calibrate_extrinsics.py` to detect the ground-facing box face and apply a corrective rotation so the output world frame has Y pointing up.
### Concrete Deliverables
- `aruco/alignment.py`: Ground detection and alignment utilities
- Updated `calibrate_extrinsics.py` with new CLI options
- Updated marker parquet files with face metadata (optional enhancement)
### Definition of Done
- [x] `uv run calibrate_extrinsics.py --auto-align ...` produces extrinsics with Y-up
- [x] `--ground-face` and `--ground-marker-id` work as explicit overrides
- [x] Debug logs show which face was detected as ground and alignment applied
- [ ] Tests pass, basedpyright shows 0 errors
### Must Have
- Heuristic ground detection using camera up-vector
- User override via `--ground-face` or `--ground-marker-id`
- Alignment rotation applied to all camera poses
- Debug logging for alignment decisions
### Must NOT Have (Guardrails)
- Do NOT modify marker parquet file format (use code-level face mapping for now)
- Do NOT change behavior when `--auto-align` is not specified
- Do NOT assume IMU/gravity data is available
- Do NOT break existing calibration workflow
---
## Verification Strategy
> **UNIVERSAL RULE: ZERO HUMAN INTERVENTION**
> All tasks verifiable by agent using tools.
### Test Decision
- **Infrastructure exists**: YES (pytest)
- **Automated tests**: YES (tests-after)
- **Framework**: pytest
### Agent-Executed QA Scenarios (MANDATORY)
**Scenario: Auto-align with heuristic detection**
```
Tool: Bash
Steps:
1. uv run calibrate_extrinsics.py --svo output --markers aruco/markers/object_points.parquet --aruco-dictionary DICT_APRILTAG_36h11 --auto-align --no-preview --sample-interval 100
2. Parse output JSON
3. Assert: All camera poses have rotation matrices where Y-axis column ≈ [0, 1, 0] (within tolerance)
Expected Result: Extrinsics aligned to Y-up
```
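Step 3's assertion can be sketched as follows (the output JSON layout is an assumption here, not confirmed by this plan):

```python
import numpy as np


def assert_y_up(T: np.ndarray, atol: float = 1e-2) -> None:
    """Assert the Y-axis column of a 4x4 pose's rotation is ~[0, 1, 0]."""
    y_axis = T[:3, 1]
    if not np.allclose(y_axis, [0.0, 1.0, 0.0], atol=atol):
        raise AssertionError(f"Y-axis column {y_axis} is not world-up")
```

Each pose matrix parsed from the output would be passed through `assert_y_up` after the `--auto-align` run.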
**Scenario: Explicit ground face override**
```
Tool: Bash
Steps:
1. uv run calibrate_extrinsics.py --svo output --markers aruco/markers/object_points.parquet --aruco-dictionary DICT_APRILTAG_36h11 --auto-align --ground-face b --no-preview --sample-interval 100
2. Check debug logs mention "using specified ground face: b"
Expected Result: Uses face 'b' as ground regardless of heuristic
```
**Scenario: No alignment when flag omitted**
```
Tool: Bash
Steps:
1. uv run calibrate_extrinsics.py --svo output --markers aruco/markers/object_points.parquet --aruco-dictionary DICT_APRILTAG_36h11 --no-preview --sample-interval 100
2. Compare output to previous run without --auto-align
Expected Result: Output unchanged from current behavior
```
---
## Execution Strategy
### Dependency Chain
```
Task 1: Create alignment module
Task 2: Add face-to-normal mapping
Task 3: Implement ground detection heuristic
Task 4: Add CLI options and integrate
Task 5: Add tests and verify
```
---
## TODOs
- [x] 1. Create `aruco/alignment.py` module with core utilities
**What to do**:
- Create new file `aruco/alignment.py`
- Implement `compute_face_normal(corners: np.ndarray) -> np.ndarray`: compute unit normal from (4,3) corners
- Implement `rotation_align_vectors(from_vec: np.ndarray, to_vec: np.ndarray) -> np.ndarray`: compute 3x3 rotation matrix that aligns `from_vec` to `to_vec` using Rodrigues formula
- Implement `apply_alignment_to_pose(T: np.ndarray, R_align: np.ndarray) -> np.ndarray`: apply alignment rotation to 4x4 pose matrix
- Add type hints and docstrings
**Must NOT do**:
- Do not add CLI logic here (that's Task 4)
- Do not hardcode face mappings here (that's Task 2)
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`git-master`]
**Parallelization**:
- **Can Run In Parallel**: NO
- **Blocks**: Task 2, 3, 4
**References**:
- `aruco/pose_math.py` - Similar matrix utilities (rvec_tvec_to_matrix, invert_transform)
- `aruco/marker_geometry.py` - Pattern for utility modules
- Rodrigues formula: `R = I + sin(θ)K + (1-cos(θ))K²` where K is skew-symmetric of axis
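A quick numerical check of the formula in the last reference (standalone sketch, independent of the repo code):

```python
import numpy as np


def rodrigues(axis: np.ndarray, theta: float) -> np.ndarray:
    """R = I + sin(theta)*K + (1 - cos(theta))*K^2, with K = skew(axis)."""
    a = axis / np.linalg.norm(axis)
    K = np.array([
        [0.0, -a[2], a[1]],
        [a[2], 0.0, -a[0]],
        [-a[1], a[0], 0.0],
    ])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
```

Rotating +Z by 90° about −X yields +Y, matching the acceptance criterion below for `rotation_align_vectors([0,0,1], [0,1,0])`.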
**Acceptance Criteria**:
- [ ] File `aruco/alignment.py` exists
- [ ] `compute_face_normal` returns unit vector for valid (4,3) corners
- [ ] `rotation_align_vectors([0,0,1], [0,1,0])` produces 90° rotation about X
- [ ] `uv run python -c "from aruco.alignment import compute_face_normal, rotation_align_vectors, apply_alignment_to_pose"` → no errors
- [ ] `.venv/bin/basedpyright aruco/alignment.py` → 0 errors
**Commit**: YES
- Message: `feat(aruco): add alignment utilities for ground plane detection`
- Files: `aruco/alignment.py`
---
- [x] 2. Add face-to-marker-id mapping
**What to do**:
- In `aruco/alignment.py`, add `FACE_MARKER_MAP` constant:
```python
FACE_MARKER_MAP: dict[str, list[int]] = {
# object_points.parquet
"a": [16, 17, 18, 19],
"b": [20, 21, 22, 23],
"c": [24, 25, 26, 27],
# standard_box_markers.parquet
"bottom": [21],
"top": [23],
"front": [24],
"back": [22],
"left": [25],
"right": [26],
}
```
- Implement `get_face_normal_from_geometry(face_name: str, marker_geometry: dict[int, np.ndarray]) -> np.ndarray | None`:
- Look up marker IDs for face
- Get corners from geometry
- Compute and return average normal across markers in that face
**Must NOT do**:
- Do not modify parquet files
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`git-master`]
**Parallelization**:
- **Can Run In Parallel**: NO
- **Blocked By**: Task 1
- **Blocks**: Task 3, 4
**References**:
- Bash output from parquet inspection (earlier in conversation):
- Face a: IDs [16-19], normal ≈ [0,0,1]
- Face b: IDs [20-23], normal ≈ [0,1,0]
- Face c: IDs [24-27], normal ≈ [1,0,0]
**Acceptance Criteria**:
- [ ] `FACE_MARKER_MAP` contains mappings for both parquet files
- [ ] `get_face_normal_from_geometry("b", geometry)` returns ≈ [0,1,0]
- [ ] Returns `None` for unknown face names
**Commit**: YES (group with Task 1)
---
- [x] 3. Implement ground detection heuristic
**What to do**:
- In `aruco/alignment.py`, implement:
```python
def detect_ground_face(
visible_marker_ids: set[int],
marker_geometry: dict[int, np.ndarray],
camera_up_vector: np.ndarray = np.array([0, -1, 0]), # -Y in camera frame
) -> tuple[str, np.ndarray] | None:
```
- Logic:
1. For each face in `FACE_MARKER_MAP`:
- Check if any of its markers are in `visible_marker_ids`
- If yes, compute face normal from geometry
2. Find the face whose normal most closely aligns with `camera_up_vector` (highest dot product)
3. Return (face_name, face_normal) or None if no faces visible
- Add debug logging with loguru
**Must NOT do**:
- Do not transform normals by camera pose here (that's done in caller)
**Recommended Agent Profile**:
- **Category**: `unspecified-low`
- **Skills**: [`git-master`]
**Parallelization**:
- **Can Run In Parallel**: NO
- **Blocked By**: Task 2
- **Blocks**: Task 4
**References**:
- `calibrate_extrinsics.py:385` - Where marker IDs are detected
- Dot product alignment: `np.dot(normal, up_vec)` → highest = most aligned
**Acceptance Criteria**:
- [ ] Function returns face with normal most aligned to camera up
- [ ] Returns None when no mapped markers are visible
- [ ] Debug log shows which faces were considered and scores
**Commit**: YES (group with Task 1, 2)
---
- [x] 4. Integrate into `calibrate_extrinsics.py`
**What to do**:
- Add CLI options:
- `--auto-align/--no-auto-align` (default: False)
- `--ground-face` (optional string, e.g., "b", "bottom")
- `--ground-marker-id` (optional int)
- Add imports from `aruco.alignment`
- After computing all camera poses (after the main loop, before saving):
1. If `--auto-align` is False, skip alignment
2. Determine ground face:
- If `--ground-face` specified: use it directly
- If `--ground-marker-id` specified: find which face contains that ID
- Else: use heuristic `detect_ground_face()` with visible markers from first camera
3. Get ground face normal from geometry
4. Compute `R_align = rotation_align_vectors(ground_normal, [0, 1, 0])`
5. Apply to all camera poses via `apply_alignment_to_pose(T, R_align)`, which embeds `R_align` in a 4x4 transform `T_align` and pre-multiplies: `T_aligned = T_align @ T`
6. Log alignment info
- Update results dict with aligned poses
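The post-loop flow above could look roughly like this; `poses` and the single-helper stand-in are assumptions, the real helpers come from `aruco/alignment.py`:

```python
import numpy as np


def rotation_align_vectors(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Minimal stand-in (general, non-parallel case only)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v, c = np.cross(a, b), float(np.dot(a, b))
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + (K @ K) * ((1 - c) / np.dot(v, v))


def align_poses(
    poses: dict[str, np.ndarray], ground_normal: np.ndarray
) -> dict[str, np.ndarray]:
    """Embed R_align in a 4x4 transform and pre-multiply every camera pose."""
    R_align = rotation_align_vectors(ground_normal, np.array([0.0, 1.0, 0.0]))
    T_align = np.eye(4)
    T_align[:3, :3] = R_align
    return {serial: T_align @ T for serial, T in poses.items()}
```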
**Must NOT do**:
- Do not change behavior when `--auto-align` is not specified
- Do not modify per-frame pose computation (only post-process)
**Recommended Agent Profile**:
- **Category**: `unspecified-high`
- **Skills**: [`git-master`]
**Parallelization**:
- **Can Run In Parallel**: NO
- **Blocked By**: Task 3
- **Blocks**: Task 5
**References**:
- `calibrate_extrinsics.py:456-477` - Where final poses are computed and stored
- `calibrate_extrinsics.py:266-271` - Existing CLI option pattern
- `aruco/alignment.py` - New utilities from Tasks 1-3
**Acceptance Criteria**:
- [ ] `--auto-align` flag exists and defaults to False
- [ ] `--ground-face` accepts string face names
- [ ] `--ground-marker-id` accepts integer marker ID
- [ ] When `--auto-align` used, output poses are rotated
- [ ] Debug logs show: "Detected ground face: X, normal: [a,b,c], applying alignment"
- [ ] `uv run python -m py_compile calibrate_extrinsics.py` → success
- [ ] `.venv/bin/basedpyright calibrate_extrinsics.py` → 0 errors
**Commit**: YES
- Message: `feat(calibrate): add --auto-align for ground plane detection and Y-up alignment`
- Files: `calibrate_extrinsics.py`
---
- [ ] 5. Add tests and verify end-to-end
**What to do**:
- Create `tests/test_alignment.py`:
- Test `compute_face_normal` with known corners
- Test `rotation_align_vectors` with various axis pairs
- Test `detect_ground_face` with mock marker data
- Run full calibration with `--auto-align` and verify output
- Compare aligned output to reference `inside_network.json` Y-up convention
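A mock-geometry unit test might look like this (the local stand-in mirrors the intended `aruco.alignment.compute_face_normal`; the real test would import it instead):

```python
import numpy as np


def compute_face_normal(corners: np.ndarray) -> np.ndarray:
    """Stand-in mirroring the module under test."""
    v1 = corners[1] - corners[0]
    v2 = corners[3] - corners[0] if corners.shape[0] >= 4 else corners[2] - corners[0]
    n = np.cross(v1, v2)
    return n / np.linalg.norm(n)


def test_unit_square_normal() -> None:
    corners = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float64)
    assert np.allclose(compute_face_normal(corners), [0.0, 0.0, 1.0])


def test_triangle_normal() -> None:
    corners = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.float64)
    assert np.allclose(compute_face_normal(corners), [1.0, 0.0, 0.0])
```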
**Must NOT do**:
- Do not require actual SVO files for unit tests (mock data)
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`git-master`]
**Parallelization**:
- **Can Run In Parallel**: NO
- **Blocked By**: Task 4
**References**:
- `tests/test_depth_cli_postprocess.py` - Existing test pattern
- `/workspaces/zed-playground/zed_settings/inside_network.json` - Reference for Y-up verification
**Acceptance Criteria**:
- [ ] `uv run pytest tests/test_alignment.py` → all pass
- [ ] `uv run pytest` → all tests pass (including existing)
- [ ] Manual verification: aligned poses have Y-axis column ≈ [0,1,0] in rotation
**Commit**: YES
- Message: `test(aruco): add alignment module tests`
- Files: `tests/test_alignment.py`
---
## Commit Strategy
| After Task | Message | Files | Verification |
|------------|---------|-------|--------------|
| 1, 2, 3 | `feat(aruco): add alignment utilities for ground plane detection` | `aruco/alignment.py` | `uv run python -c "from aruco.alignment import *"` |
| 4 | `feat(calibrate): add --auto-align for ground plane detection and Y-up alignment` | `calibrate_extrinsics.py` | `uv run python -m py_compile calibrate_extrinsics.py` |
| 5 | `test(aruco): add alignment module tests` | `tests/test_alignment.py` | `uv run pytest tests/test_alignment.py` |
---
## Success Criteria
### Verification Commands
```bash
# Compile check
uv run python -m py_compile calibrate_extrinsics.py
# Type check
.venv/bin/basedpyright aruco/alignment.py calibrate_extrinsics.py
# Unit tests
uv run pytest tests/test_alignment.py
# Integration test (requires SVO files)
uv run calibrate_extrinsics.py --svo output --markers aruco/markers/object_points.parquet --aruco-dictionary DICT_APRILTAG_36h11 --auto-align --no-preview --sample-interval 100 --output aligned_extrinsics.json
# Verify Y-up in output
uv run python -c "import json, numpy as np; d=json.load(open('aligned_extrinsics.json')); T=np.fromstring(list(d.values())[0]['pose'], sep=' ').reshape(4,4); print('Y-axis:', T[:3,1])"
# Expected: Y-axis ≈ [0, 1, 0]
```
### Final Checklist
- [x] `--auto-align` flag works
- [x] `--ground-face` override works
- [x] `--ground-marker-id` override works
- [x] Heuristic detection works without explicit face specification
- [x] Output extrinsics have Y-up when aligned
- [x] No behavior change when `--auto-align` not specified
- [ ] All tests pass
- [ ] Type checks pass
@@ -0,0 +1,237 @@
import numpy as np
from loguru import logger

def compute_face_normal(corners: np.ndarray) -> np.ndarray:
"""
Compute the normal vector of a face defined by its corners.
Assumes corners are in order (e.g., clockwise or counter-clockwise).
Args:
corners: (N, 3) array of corner coordinates.
Returns:
(3,) normalized normal vector.
"""
if corners.ndim != 2 or corners.shape[1] != 3:
raise ValueError(f"Expected (N, 3) array, got {corners.shape}")
if corners.shape[0] < 3:
raise ValueError("At least 3 corners are required to compute a normal.")
# Use the cross product of two edges
# Stable edge definition for quad faces consistent with plan/repo convention:
# normal from (corners[1]-corners[0]) x (corners[3]-corners[0]) when N>=4,
# fallback to index 2 if N==3.
v1 = corners[1] - corners[0]
if corners.shape[0] >= 4:
v2 = corners[3] - corners[0]
else:
v2 = corners[2] - corners[0]
normal = np.cross(v1, v2)
norm = np.linalg.norm(normal)
if norm < 1e-10:
raise ValueError("Corners are collinear or degenerate; cannot compute normal.")
return (normal / norm).astype(np.float64)

def rotation_align_vectors(from_vec: np.ndarray, to_vec: np.ndarray) -> np.ndarray:
"""
Compute the 3x3 rotation matrix that aligns from_vec to to_vec.
Args:
from_vec: (3,) source vector.
to_vec: (3,) target vector.
Returns:
(3, 3) rotation matrix.
"""
if from_vec.shape != (3,) or to_vec.shape != (3,):
raise ValueError(
f"Expected (3,) vectors, got {from_vec.shape} and {to_vec.shape}"
)
norm_from = np.linalg.norm(from_vec)
norm_to = np.linalg.norm(to_vec)
if norm_from < 1e-10 or norm_to < 1e-10:
raise ValueError("Cannot align zero-norm vectors.")
# Normalize inputs
a = from_vec / norm_from
b = to_vec / norm_to
v = np.cross(a, b)
c = np.dot(a, b)
s = np.linalg.norm(v)
# Handle parallel case
if s < 1e-10:
if c > 0:
return np.eye(3, dtype=np.float64)
else:
# Anti-parallel case: 180 degree rotation around an orthogonal axis
# Find an orthogonal axis
if abs(a[0]) < 0.9:
ortho = np.array([1.0, 0.0, 0.0])
else:
ortho = np.array([0.0, 1.0, 0.0])
axis = np.cross(a, ortho)
axis /= np.linalg.norm(axis)
# Rodrigues formula for 180 degrees
K = np.array(
[
[0.0, -axis[2], axis[1]],
[axis[2], 0.0, -axis[0]],
[-axis[1], axis[0], 0.0],
]
)
return (np.eye(3) + 2 * (K @ K)).astype(np.float64)
# General case using Rodrigues' rotation formula
# R = I + [v]_x + [v]_x^2 * (1-c)/s^2
vx = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
R = np.eye(3) + vx + (vx @ vx) * ((1 - c) / (s**2))
return R.astype(np.float64)

def apply_alignment_to_pose(T: np.ndarray, R_align: np.ndarray) -> np.ndarray:
"""
Apply an alignment rotation to a 4x4 pose matrix.
The alignment is applied in the global frame (pre-multiplication of rotation).
Args:
T: (4, 4) homogeneous transformation matrix.
R_align: (3, 3) alignment rotation matrix.
Returns:
(4, 4) aligned transformation matrix.
"""
if T.shape != (4, 4):
raise ValueError(f"Expected 4x4 matrix, got {T.shape}")
if R_align.shape != (3, 3):
raise ValueError(f"Expected 3x3 matrix, got {R_align.shape}")
T_align = np.eye(4, dtype=np.float64)
T_align[:3, :3] = R_align
return (T_align @ T).astype(np.float64)

def get_face_normal_from_geometry(
face_name: str,
marker_geometry: dict[int, np.ndarray],
face_marker_map: dict[str, list[int]] | None = None,
) -> np.ndarray | None:
"""
Compute the average normal vector for a face based on available marker geometry.
Args:
face_name: Name of the face (key in face_marker_map).
marker_geometry: Dictionary mapping marker IDs to their (4, 3) corner coordinates.
face_marker_map: Dictionary mapping face names to marker IDs.
Returns:
(3,) normalized average normal vector, or None if no markers are available.
"""
if face_marker_map is None:
return None
if face_name not in face_marker_map:
return None
marker_ids = face_marker_map[face_name]
normals = []
for mid in marker_ids:
if mid in marker_geometry:
try:
n = compute_face_normal(marker_geometry[mid])
normals.append(n)
except ValueError:
continue
if not normals:
return None
avg_normal = np.mean(normals, axis=0)
norm = np.linalg.norm(avg_normal)
if norm < 1e-10:
return None
return (avg_normal / norm).astype(np.float64)

def detect_ground_face(
visible_marker_ids: set[int],
marker_geometry: dict[int, np.ndarray],
camera_up_vector: np.ndarray = np.array([0, -1, 0]),
face_marker_map: dict[str, list[int]] | None = None,
) -> tuple[str, np.ndarray] | None:
"""
Detect which face of the object is most likely the ground face.
The ground face is the one whose normal is most aligned with the camera's up vector.
Args:
visible_marker_ids: Set of marker IDs currently visible.
marker_geometry: Dictionary mapping marker IDs to their (4, 3) corner coordinates.
camera_up_vector: (3,) vector representing the 'up' direction in camera frame.
face_marker_map: Dictionary mapping face names to marker IDs.
Returns:
Tuple of (best_face_name, best_face_normal) or None if no face is detected.
"""
if not visible_marker_ids:
return None
if face_marker_map is None:
return None
if camera_up_vector.shape != (3,):
raise ValueError(
f"Expected (3,) camera_up_vector, got {camera_up_vector.shape}"
)
up_norm = np.linalg.norm(camera_up_vector)
if up_norm < 1e-10:
raise ValueError("camera_up_vector cannot be zero-norm.")
up_vec = camera_up_vector / up_norm
best_face = None
best_normal = None
best_score = -np.inf
# Iterate faces in mapping
for face_name, face_marker_ids in face_marker_map.items():
# Consider only faces with any visible marker ID
if not any(mid in visible_marker_ids for mid in face_marker_ids):
continue
normal = get_face_normal_from_geometry(
face_name, marker_geometry, face_marker_map=face_marker_map
)
if normal is None:
continue
# Score by dot(normal, normalized camera_up_vector)
score = np.dot(normal, up_vec)
logger.debug(f"Face '{face_name}' considered. Score: {score:.4f}")
if score > best_score:
best_score = score
best_face = face_name
best_normal = normal
if best_face is not None and best_normal is not None:
logger.debug(
f"Best ground face detected: '{best_face}' with score {best_score:.4f}"
)
return best_face, best_normal
return None
@@ -37,6 +37,46 @@ def load_marker_geometry(parquet_path: Union[str, Path]) -> dict[int, np.ndarray
return marker_geometry
def load_face_mapping(parquet_path: Union[str, Path]) -> dict[str, list[int]]:
"""
Reads face mapping from a parquet file.
The parquet file is expected to have 'name' and 'ids' columns.
'name' should be a string (face name), and 'ids' should be a list of integers.
Args:
parquet_path: Path to the parquet file.
Returns:
A dictionary mapping face names (lowercase) to lists of marker IDs.
"""
path = Path(parquet_path)
if not path.exists():
raise FileNotFoundError(f"Parquet file not found: {path}")
ops = ak.from_parquet(path)
# Check if required columns exist
if "name" not in ops.fields or "ids" not in ops.fields:
# Fallback or empty if columns missing (e.g. old format)
return {}
names = cast(np.ndarray, ak.to_numpy(ops["name"]))
# ids might be a jagged array if different faces have different marker counts
# ak.to_list() converts to python lists which is what we want for the dict values
ids_list = ak.to_list(ops["ids"])
face_map: dict[str, list[int]] = {}
for name, m_ids in zip(names, ids_list):
if name and m_ids:
# Normalize name to lowercase
key = str(name).lower()
# Ensure IDs are ints
face_map[key] = [int(mid) for mid in m_ids]
return face_map
def validate_marker_geometry(geometry: dict[int, np.ndarray]) -> None:
"""
Validates the marker geometry dictionary.
@@ -14,6 +14,7 @@ class FrameData:
frame_index: int
serial_number: int
depth_map: np.ndarray | None = None
confidence_map: np.ndarray | None = None
class SVOReader:
@@ -96,6 +97,7 @@ class SVOReader:
mat = sl.Mat()
cam.retrieve_image(mat, sl.VIEW.LEFT)
depth_map = self._retrieve_depth(cam)
confidence_map = self._retrieve_confidence(cam)
frames.append(
FrameData(
image=mat.get_data().copy(),
@@ -105,6 +107,7 @@ class SVOReader:
frame_index=cam.get_svo_position(),
serial_number=self.camera_info[i]["serial"],
depth_map=depth_map,
confidence_map=confidence_map,
)
)
elif err == sl.ERROR_CODE.END_OF_SVOFILE_REACHED:
@@ -151,6 +154,7 @@ class SVOReader:
mat = sl.Mat()
cam.retrieve_image(mat, sl.VIEW.LEFT)
depth_map = self._retrieve_depth(cam)
confidence_map = self._retrieve_confidence(cam)
frames[i] = FrameData(
image=mat.get_data().copy(),
timestamp_ns=cam.get_timestamp(
@@ -159,6 +163,7 @@ class SVOReader:
frame_index=cam.get_svo_position(),
serial_number=self.camera_info[i]["serial"],
depth_map=depth_map,
confidence_map=confidence_map,
)
elif err == sl.ERROR_CODE.END_OF_SVOFILE_REACHED:
cam.set_svo_position(0)
@@ -179,6 +184,13 @@ class SVOReader:
cam.retrieve_measure(depth_mat, sl.MEASURE.DEPTH)
return depth_mat.get_data().copy()
def _retrieve_confidence(self, cam: sl.Camera) -> np.ndarray | None:
if not self.enable_depth:
return None
conf_mat = sl.Mat()
cam.retrieve_measure(conf_mat, sl.MEASURE.CONFIDENCE)
return conf_mat.get_data().copy()
def get_depth_at(self, frame: FrameData, x: int, y: int) -> float | None:
if frame.depth_map is None:
return None
@@ -7,7 +7,11 @@ import pyzed.sl as sl
from pathlib import Path
from typing import List, Dict, Any, Optional, Tuple
from aruco.marker_geometry import load_marker_geometry, validate_marker_geometry
from aruco.marker_geometry import (
load_marker_geometry,
validate_marker_geometry,
load_face_mapping,
)
from aruco.svo_sync import SVOReader
from aruco.detector import (
create_detector,
@@ -20,6 +24,37 @@ from aruco.pose_averaging import PoseAccumulator
from aruco.preview import draw_detected_markers, draw_pose_axes, show_preview
from aruco.depth_verify import verify_extrinsics_with_depth
from aruco.depth_refine import refine_extrinsics_with_depth
from aruco.alignment import (
get_face_normal_from_geometry,
detect_ground_face,
rotation_align_vectors,
apply_alignment_to_pose,
)
from loguru import logger
ARUCO_DICT_MAP = {
"DICT_4X4_50": cv2.aruco.DICT_4X4_50,
"DICT_4X4_100": cv2.aruco.DICT_4X4_100,
"DICT_4X4_250": cv2.aruco.DICT_4X4_250,
"DICT_4X4_1000": cv2.aruco.DICT_4X4_1000,
"DICT_5X5_50": cv2.aruco.DICT_5X5_50,
"DICT_5X5_100": cv2.aruco.DICT_5X5_100,
"DICT_5X5_250": cv2.aruco.DICT_5X5_250,
"DICT_5X5_1000": cv2.aruco.DICT_5X5_1000,
"DICT_6X6_50": cv2.aruco.DICT_6X6_50,
"DICT_6X6_100": cv2.aruco.DICT_6X6_100,
"DICT_6X6_250": cv2.aruco.DICT_6X6_250,
"DICT_6X6_1000": cv2.aruco.DICT_6X6_1000,
"DICT_7X7_50": cv2.aruco.DICT_7X7_50,
"DICT_7X7_100": cv2.aruco.DICT_7X7_100,
"DICT_7X7_250": cv2.aruco.DICT_7X7_250,
"DICT_7X7_1000": cv2.aruco.DICT_7X7_1000,
"DICT_ARUCO_ORIGINAL": cv2.aruco.DICT_ARUCO_ORIGINAL,
"DICT_APRILTAG_16h5": cv2.aruco.DICT_APRILTAG_16h5,
"DICT_APRILTAG_25h9": cv2.aruco.DICT_APRILTAG_25h9,
"DICT_APRILTAG_36h10": cv2.aruco.DICT_APRILTAG_36h10,
"DICT_APRILTAG_36h11": cv2.aruco.DICT_APRILTAG_36h11,
}
def apply_depth_verify_refine_postprocess(
@@ -121,9 +156,9 @@ def apply_depth_verify_refine_postprocess(
}
improvement = verify_res.rmse - verify_res_post.rmse
results[str(serial)]["refine_depth"]["improvement_rmse"] = (
improvement
)
results[str(serial)]["refine_depth"][
"improvement_rmse"
] = improvement
click.echo(
f"Camera {serial} refined: RMSE={verify_res_post.rmse:.3f}m "
@@ -190,24 +225,73 @@ def apply_depth_verify_refine_postprocess(
@click.option(
"--report-csv", type=click.Path(), help="Optional path for per-frame CSV report."
)
@click.option(
"--auto-align/--no-auto-align",
default=False,
help="Automatically align ground plane.",
)
@click.option(
"--ground-face", type=str, help="Explicit face name for ground alignment."
)
@click.option(
"--ground-marker-id", type=int, help="Explicit marker ID to define ground face."
)
@click.option(
"--aruco-dictionary",
default="DICT_4X4_50",
type=click.Choice(list(ARUCO_DICT_MAP.keys())),
help="ArUco dictionary to use.",
)
@click.option(
"--min-markers",
default=1,
type=int,
help="Minimum markers required for pose estimation.",
)
@click.option(
"--debug/--no-debug",
default=False,
help="Enable verbose debug logging.",
)
@click.option(
"--max-samples",
default=None,
type=int,
help="Maximum number of samples to process before stopping.",
)
def main(
svo: tuple[str, ...],
markers: str,
output: str,
sample_interval: int,
max_reproj_error: float,
preview: bool,
validate_markers: bool,
self_check: bool,
verify_depth: bool,
refine_depth: bool,
depth_mode: str,
depth_confidence_threshold: int,
report_csv: str | None,
auto_align: bool,
ground_face: str | None,
ground_marker_id: int | None,
aruco_dictionary: str,
min_markers: int,
debug: bool,
max_samples: int | None,
):
"""
Calibrate camera extrinsics relative to a global coordinate system defined by ArUco markers.
"""
# Configure logging level
logger.remove()
logger.add(
lambda msg: click.echo(msg, nl=False),
level="DEBUG" if debug else "INFO",
format="{message}",
)
depth_mode_map = {
"NEURAL": sl.DEPTH_MODE.NEURAL,
"ULTRA": sl.DEPTH_MODE.ULTRA,
if not (verify_depth or refine_depth):
sl_depth_mode = sl.DEPTH_MODE.NONE
# Expand SVO paths (files or directories)
expanded_svo = []
for path_str in svo:
path = Path(path_str)
if path.is_dir():
click.echo(f"Searching for SVO files in {path}...")
found = sorted(
[
str(p)
for p in path.iterdir()
if p.is_file() and p.suffix.lower() in (".svo", ".svo2")
]
)
if found:
click.echo(f"Found {len(found)} SVO files in {path}")
expanded_svo.extend(found)
else:
click.echo(f"Warning: No .svo/.svo2 files found in {path}", err=True)
elif path.is_file():
expanded_svo.append(str(path))
else:
click.echo(f"Warning: Path not found: {path}", err=True)
if not expanded_svo:
if validate_markers:
click.echo("Marker validation successful. No SVOs provided, exiting.")
return
raise click.UsageError("Missing option '--svo' / '-s'.")
# 1. Load Marker Geometry
try:
marker_geometry = load_marker_geometry(markers)
if validate_markers:
validate_marker_geometry(marker_geometry)
click.echo(f"Loaded {len(marker_geometry)} markers from {markers}")
# Load face mapping if available
face_marker_map = load_face_mapping(markers)
if face_marker_map:
click.echo(f"Loaded face mapping for {len(face_marker_map)} faces.")
else:
click.echo("No face mapping found in parquet (missing 'name'/'ids').")
face_marker_map = None
except Exception as e:
click.echo(f"Error loading markers: {e}", err=True)
raise SystemExit(1)
# 2. Initialize SVO Reader
reader = SVOReader(expanded_svo, depth_mode=sl_depth_mode)
if not reader.cameras:
click.echo("No SVO files could be opened.", err=True)
return
# Store verification frames for post-process check
verification_frames = {}
# Track all visible marker IDs for heuristic ground detection
all_visible_ids = set()
detector = create_detector(dictionary_id=ARUCO_DICT_MAP[aruco_dictionary])
frame_count = 0
sampled_count = 0
# Detect markers
corners, ids = detect_markers(frame.image, detector)
if ids is not None:
all_visible_ids.update(ids.flatten().tolist())
logger.debug(
f"Cam {serial}: Detected {len(ids)} markers: {ids.flatten()}"
)
else:
logger.debug(f"Cam {serial}: No markers detected")
if ids is None:
if preview:
preview_frames[serial] = frame.image
# Estimate pose (T_cam_from_world)
pose_res = estimate_pose_from_detections(
corners, ids, marker_geometry, K, min_markers=min_markers
)
if pose_res:
accumulators[serial].add_pose(
T_world_cam, reproj_err, frame_count
)
logger.debug(
f"Cam {serial}: Pose accepted. Reproj={reproj_err:.3f}, Markers={n_markers}"
)
else:
logger.debug(
f"Cam {serial}: Pose rejected. Reproj {reproj_err:.3f} > {max_reproj_error}"
)
if preview:
img = draw_detected_markers(
)
img = draw_pose_axes(img, rvec, tvec, K, length=0.2)
preview_frames[serial] = img
else:
if ids is not None:
logger.debug(
f"Cam {serial}: Pose estimation failed (fewer than {min_markers} usable markers or solver failure)"
)
elif preview:
preview_frames[serial] = frame.image
break
sampled_count += 1
if max_samples is not None and sampled_count >= max_samples:
click.echo(f"\nReached max samples ({max_samples}). Stopping.")
break
frame_count += 1
if frame_count % 100 == 0:
counts = [len(acc.poses) for acc in accumulators.values()]
click.echo(
f"Frame {frame_count}, Accepted Poses: {dict(zip(serials, counts))}"
)
except KeyboardInterrupt:
report_csv,
)
# 5. Optional Ground Plane Alignment
if auto_align:
click.echo("\nPerforming ground plane alignment...")
target_face = ground_face
# Use loaded map or skip if None
if face_marker_map is None:
click.echo(
"Warning: No face mapping available (missing 'name'/'ids' in parquet). Skipping alignment.",
err=True,
)
# Skip alignment logic by ensuring loop below doesn't run and heuristic fails gracefully
mapping_to_use = {}
else:
mapping_to_use = face_marker_map
if not target_face and ground_marker_id is not None:
# Map marker ID to face
for face, face_ids in mapping_to_use.items():
if ground_marker_id in face_ids:
target_face = face
logger.info(
f"Mapped ground-marker-id {ground_marker_id} to face '{face}'"
)
break
ground_normal = None
if target_face:
ground_normal = get_face_normal_from_geometry(
target_face, marker_geometry, face_marker_map=face_marker_map
)
if ground_normal is not None:
logger.info(f"Using explicit ground face '{target_face}'")
else:
# Heuristic detection
heuristic_res = detect_ground_face(
all_visible_ids, marker_geometry, face_marker_map=face_marker_map
)
if heuristic_res:
target_face, ground_normal = heuristic_res
logger.info(f"Heuristically detected ground face '{target_face}'")
if ground_normal is not None:
R_align = rotation_align_vectors(ground_normal, np.array([0.0, 1.0, 0.0]))
logger.info(f"Computed alignment rotation for face '{target_face}'")
for serial, data in results.items():
T_mean = np.array(data["pose"].split(), dtype=np.float64).reshape(4, 4)
T_aligned = apply_alignment_to_pose(T_mean, R_align)
data["pose"] = " ".join(f"{x:.6f}" for x in T_aligned.flatten())
logger.debug(f"Applied alignment to camera {serial}")
else:
click.echo(
"Warning: Could not determine ground normal. Skipping alignment.",
err=True,
)
# 6. Save to JSON
with open(output, "w") as f:
json.dump(results, f, indent=4, sort_keys=True)
# Marker Parquet Format
This document describes the expected structure for marker configuration files (e.g., `standard_box_markers_600mm.parquet`). These files define both the physical geometry of markers and their logical grouping into faces.
## Schema
The parquet file must contain the following columns:
| Column | Type | Description |
| :--- | :--- | :--- |
| `name` | `string` | Name of the face (e.g., "bottom", "front"). Used for logical grouping. |
| `ids` | `list<int64>` | List of ArUco marker IDs belonging to this face. |
| `corners` | `list<list<list<float64>>>` | 3D coordinates of marker corners. Shape must be `(N, 4, 3)` where N is the number of markers. |
## Example Data
Based on `standard_box_markers_600mm.parquet`:
| name | ids | corners (approximate structure) |
| :--- | :--- | :--- |
| "bottom" | `[21]` | `[[[-0.225, -0.3, 0.226], [0.225, -0.3, 0.226], [0.225, -0.3, -0.224], [-0.225, -0.3, -0.224]]]` |
## Loader Behavior
The system uses two different loading strategies based on this file:
### 1. Geometry Loader (`load_marker_geometry`)
- **Ignores**: `name` column.
- **Uses**: `ids` and `corners`.
- **Process**:
- Flattens all `ids` and `corners` from all rows.
- Reshapes `corners` to `(-1, 4, 3)`.
- Validates that each marker has exactly 4 corners with 3 coordinates (x, y, z).
- Validates coordinates are finite and within reasonable range (< 100m).
- **Output**: `dict[int, np.ndarray]` mapping Marker ID → (4, 3) corner array.
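The flatten-and-reshape step can be sketched with plain numpy. The function name and the `rows` format are hypothetical stand-ins for what the real loader in `marker_geometry.py` reads from parquet:

```python
import numpy as np


def load_marker_geometry_sketch(rows: list[dict]) -> dict[int, np.ndarray]:
    """Sketch of the geometry loader: each row mirrors a parquet row,
    with 'ids' a list[int] and 'corners' nested lists of shape (N, 4, 3)."""
    all_ids: list[int] = []
    all_corners: list = []
    for row in rows:  # flatten ids/corners across all rows
        all_ids.extend(row["ids"])
        all_corners.extend(row["corners"])
    corners = np.asarray(all_corners, dtype=np.float64).reshape(-1, 4, 3)
    if corners.shape[0] != len(all_ids):
        raise ValueError("each marker id needs exactly one (4, 3) corner block")
    # Marker ID -> (4, 3) corner array, as described above.
    return {int(mid): corners[i] for i, mid in enumerate(all_ids)}
```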
### 2. Face Mapping Loader (`load_face_mapping`)
- **Uses**: `name` and `ids`.
- **Process**:
- Reads face names and associated marker IDs.
- Normalizes names to lowercase.
- **Output**: `dict[str, list[int]]` mapping Face Name → List of Marker IDs.
- **Usage**: Used for ground plane alignment (e.g., identifying the "bottom" face).
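A minimal sketch of this strategy, assuming the two columns have already been read into Python lists (with `awkward`, calling `ak.to_list()` on the jagged `ids` column yields exactly this shape; the function name is hypothetical):

```python
def load_face_mapping_sketch(names, ids_lists):
    """Sketch of the face-mapping loader: lowercase each face name and
    pair it with its list of marker IDs."""
    return {
        str(name).lower(): [int(i) for i in ids]
        for name, ids in zip(names, ids_lists)
    }
```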
## Validation Rules
Runtime validation in `marker_geometry.py` ensures:
- `corners` shape is strictly `(4, 3)` per marker.
- No `NaN` or `Inf` values.
- Coordinates are absolute (meters) and must be < 100m.
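The per-marker checks above can be sketched as a single guard function (the name is hypothetical; the real checks live in `marker_geometry.py`):

```python
import numpy as np


def validate_marker_corners(corners: np.ndarray) -> None:
    """Sketch of the runtime validation rules listed above."""
    if corners.shape != (4, 3):
        raise ValueError(f"expected (4, 3) corners, got {corners.shape}")
    if not np.isfinite(corners).all():
        raise ValueError("corners contain NaN or Inf")
    if np.abs(corners).max() >= 100.0:
        raise ValueError("coordinates must be absolute meters, < 100 m")
```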
import numpy as np
import pytest
from aruco.alignment import (
compute_face_normal,
rotation_align_vectors,
apply_alignment_to_pose,
get_face_normal_from_geometry,
detect_ground_face,
)
def test_compute_face_normal_valid_quad():
# Define a quad in the XY plane (normal should be Z)
corners = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float64)
# v1 = corners[1] - corners[0] = [1, 0, 0]
# v2 = corners[3] - corners[0] = [0, 1, 0]
# normal = [1, 0, 0] x [0, 1, 0] = [0, 0, 1]
normal = compute_face_normal(corners)
np.testing.assert_allclose(normal, [0, 0, 1], atol=1e-10)
def test_compute_face_normal_valid_triangle():
corners = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=np.float64)
normal = compute_face_normal(corners)
np.testing.assert_allclose(normal, [0, 0, 1], atol=1e-10)
def test_compute_face_normal_degenerate():
# Collinear points
corners = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=np.float64)
with pytest.raises(ValueError, match="collinear or degenerate"):
compute_face_normal(corners)
# Too few points
corners_few = np.array([[0, 0, 0], [1, 0, 0]], dtype=np.float64)
with pytest.raises(ValueError, match="At least 3 corners"):
compute_face_normal(corners_few)
def test_rotation_align_vectors_identity():
v1 = np.array([0, 0, 1], dtype=np.float64)
v2 = np.array([0, 0, 1], dtype=np.float64)
R = rotation_align_vectors(v1, v2)
np.testing.assert_allclose(R, np.eye(3), atol=1e-10)
def test_rotation_align_vectors_90_deg():
v1 = np.array([1, 0, 0], dtype=np.float64)
v2 = np.array([0, 1, 0], dtype=np.float64)
R = rotation_align_vectors(v1, v2)
# Check that R @ v1 == v2
np.testing.assert_allclose(R @ v1, v2, atol=1e-10)
# Check orthogonality
np.testing.assert_allclose(R @ R.T, np.eye(3), atol=1e-10)
def test_rotation_align_vectors_antiparallel():
v1 = np.array([0, 0, 1], dtype=np.float64)
v2 = np.array([0, 0, -1], dtype=np.float64)
R = rotation_align_vectors(v1, v2)
np.testing.assert_allclose(R @ v1, v2, atol=1e-10)
np.testing.assert_allclose(R @ R.T, np.eye(3), atol=1e-10)
def test_apply_alignment_to_pose():
# Identity pose
T = np.eye(4, dtype=np.float64)
# 90 deg rotation around Z
R_align = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=np.float64)
T_aligned = apply_alignment_to_pose(T, R_align)
expected = np.eye(4)
expected[:3, :3] = R_align
np.testing.assert_allclose(T_aligned, expected, atol=1e-10)
# Non-identity pose
T_pose = np.eye(4)
T_pose[:3, 3] = [1, 2, 3]
T_aligned_pose = apply_alignment_to_pose(T_pose, R_align)
# Pre-multiplication: R_align @ T_pose
# Rotation part: R_align @ I = R_align
# Translation part: R_align @ [1, 2, 3] = [-2, 1, 3]
expected_pose = np.eye(4)
expected_pose[:3, :3] = R_align
expected_pose[:3, 3] = R_align @ [1, 2, 3]
np.testing.assert_allclose(T_aligned_pose, expected_pose, atol=1e-10)
def test_get_face_normal_from_geometry():
face_marker_map = {"top": [1, 2]}
# Marker 1: XY plane (normal Z)
# Marker 2: XY plane (normal Z)
marker_geometry = {
1: np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float64),
2: np.array([[2, 2, 0], [3, 2, 0], [3, 3, 0], [2, 3, 0]], dtype=np.float64),
}
normal = get_face_normal_from_geometry("top", marker_geometry, face_marker_map)
np.testing.assert_allclose(normal, [0, 0, 1], atol=1e-10)
# Missing face map
assert get_face_normal_from_geometry("top", marker_geometry, None) is None
# Missing face name
assert (
get_face_normal_from_geometry("bottom", marker_geometry, face_marker_map)
is None
)
# No visible markers for face
assert (
get_face_normal_from_geometry("top", {3: marker_geometry[1]}, face_marker_map)
is None
)
def test_detect_ground_face():
face_marker_map = {
"bottom": [1],
"top": [2],
}
# Marker 1: normal [0, -1, 0] (aligned with camera up [0, -1, 0])
# Marker 2: normal [0, 1, 0] (opposite to camera up)
marker_geometry = {
1: np.array(
[[0, 0, 0], [1, 0, 0], [1, 0, 1], [0, 0, 1]], dtype=np.float64
),  # v1 = [1, 0, 0], v2 = [0, 0, 1] -> normal = cross = [0, -1, 0]
2: np.array([[0, 1, 0], [1, 1, 0], [1, 1, -1], [0, 1, -1]], dtype=np.float64),
# v1 = [1, 0, 0], v2 = [0, 0, -1] -> cross = [0, 1, 0]
}
camera_up = np.array([0, -1, 0], dtype=np.float64)
# Both visible
res = detect_ground_face({1, 2}, marker_geometry, camera_up, face_marker_map)
assert res is not None
face_name, normal = res
assert face_name == "bottom"
np.testing.assert_allclose(normal, [0, -1, 0], atol=1e-10)
# Only top visible
res = detect_ground_face({2}, marker_geometry, camera_up, face_marker_map)
assert res is not None
face_name, normal = res
assert face_name == "top"
np.testing.assert_allclose(normal, [0, 1, 0], atol=1e-10)
# None visible
assert (
detect_ground_face(set(), marker_geometry, camera_up, face_marker_map) is None
)
# Missing map
assert detect_ground_face({1, 2}, marker_geometry, camera_up, None) is None