Compare commits

8 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 69e83d8430 | |
| | 9d63177de0 | |
| | 502a90761b | |
| | ed721729fd | |
| | 6c09f7044b | |
| | 68b2b32510 | |
| | 0209986a2c | |
| | 31c4690121 | |
+2 −2

```diff
@@ -1,7 +1,7 @@
 cmake_minimum_required(VERSION 3.18)
 
 project(RapidPoseTriangulation
-    VERSION 0.1.0
+    VERSION 0.2.0
     LANGUAGES CXX
     DESCRIPTION "Rapid Pose Triangulation library with Python bindings"
 )
@@ -13,5 +13,5 @@ set(CMAKE_CXX_EXTENSIONS OFF)
 set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
 
 # Add subdirectories
 add_subdirectory(rpt)
 add_subdirectory(rpt_cpp)
 add_subdirectory(bindings)
```
@@ -1,81 +1,185 @@

# RapidPoseTriangulation

Fast multi-view multi-person pose reconstruction, packaged as a Python-first C++ library. A general overview can be found in the paper [RapidPoseTriangulation: Multi-view Multi-person Whole-body Human Pose Triangulation in a Millisecond](https://arxiv.org/pdf/2503.21692).

<div align="center">
  <img src="media/2d-k.jpg" alt="2D detections" width="65%"/>
  <b> </b>
  <img src="media/3d-p.jpg" alt="3D detections" width="30%"/>
  <br>
  <br>
  <img src="media/2d-p.jpg" alt="3D to 2D projection" width="95%"/>
</div>

This repository started from the original upstream RapidPoseTriangulation repository:

- [Percipiote/RapidPoseTriangulation](https://gitlab.com/Percipiote/RapidPoseTriangulation)

The current fork keeps the triangulation core, exposes it through `nanobind`, and adds an RGB-D reconstruction path ported from the original SimpleDepthPose repository:

- [Percipiote/SimpleDepthPose](https://gitlab.com/Percipiote/SimpleDepthPose)

## What This Repository Is Now

- A packaged library named `rapid-pose-triangulation` with Python bindings under `rpt`
- A C++ core built with `scikit-build-core` and `nanobind`
- A triangulation library for calibrated multi-view 2D detections
- An RGB-D reconstruction helper layer that samples aligned depth, applies joint offsets, lifts poses into world coordinates, and merges per-view proposals

Current package status:

- Python `>=3.10`
- Runtime dependencies: NumPy, jaxtyping
- Current version: `0.2.0`

## Current Capabilities

The public Python API exposed by `rpt` currently includes:

- Camera/config helpers: `make_camera`, `convert_cameras`, `make_triangulation_config`
- Input preparation: `pack_poses_2d`
- Triangulation: `triangulate_poses`, `triangulate_debug`, `triangulate_with_report`
- Tracking/debug helpers: `filter_pairs_with_previous_poses`
- RGB-D helpers: `sample_depth_for_poses`, `apply_depth_offsets`, `lift_depth_poses_to_world`, `merge_rgbd_views`, `reconstruct_rgbd`
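The `pack_poses_2d` step in the list above turns ragged per-view detections into one padded array plus per-view person counts. A minimal NumPy sketch of that packing, assuming the `(views, max_persons, joints, 3)` layout of `[x, y, score]` rows used elsewhere in this repo; this is a hypothetical helper, not the actual `rpt.pack_poses_2d` implementation, and the real signature may differ:

```python
import numpy as np

def pack_poses_2d_sketch(views, joint_count):
    """Pad ragged per-view 2D poses into one float32 array plus counts.

    `views` is a list (one entry per camera) of lists of (joint_count, 3)
    arrays holding [x, y, score] per joint.
    """
    max_persons = max((len(v) for v in views), default=0)
    max_persons = max(max_persons, 1)  # keep a non-degenerate person axis
    packed = np.zeros((len(views), max_persons, joint_count, 3), dtype=np.float32)
    counts = np.zeros(len(views), dtype=np.uint32)
    for vi, persons in enumerate(views):
        counts[vi] = len(persons)
        for pi, pose in enumerate(persons):
            packed[vi, pi] = np.asarray(pose, dtype=np.float32)
    return packed, counts

views = [
    [np.ones((17, 3))],                     # camera 0 sees one person
    [np.ones((17, 3)), np.zeros((17, 3))],  # camera 1 sees two
]
poses_2d, person_counts = pack_poses_2d_sketch(views, joint_count=17)
print(poses_2d.shape, person_counts.tolist())  # (2, 2, 17, 3) [1, 2]
```

The `float32` C-contiguous data plus `uint32` counts matches the dtypes the nanobind layer declares for its input arrays.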

At a high level there are now two supported reconstruction paths:

1. Multi-view RGB triangulation from calibrated 2D detections
2. Multi-view RGB-D reconstruction from calibrated 2D detections plus aligned depth images

## Installation And Development Workflow

Clone the repo and use `uv` for local development:

```bash
git clone https://git.weihua-iot.cn/crosstyan/RapidPoseTriangulation.git
cd RapidPoseTriangulation
uv sync --group dev
uv run pytest tests/test_interface.py
```

Run the test suite:

```bash
uv run pytest -q
```

Run static typing checks against the Python package:

```bash
uv run basedpyright
```

Build source and wheel artifacts:

```bash
uv build
```

`run_container.sh` is still present in the repo, but it is a leftover helper script rather than the primary or best-supported development workflow.

## Typing Workflow

The Python package ships a typed facade in `src/rpt` plus a checked-in stub for the compiled nanobind module at `src/rpt/_core.pyi`.

Refresh the extension stub after changing the bindings:

```bash
cmake --build build --target rpt_core_stub
cp build/bindings/rpt/_core.pyi src/rpt/_core.pyi
uv run basedpyright
```

`tests/test_typing_artifacts.py` checks that the checked-in `_core.pyi` matches the generated nanobind stub whenever the build artifact is available.

## Citation

Please cite [RapidPoseTriangulation](https://arxiv.org/pdf/2503.21692) if you found it helpful for your research or business.

```bibtex
@article{rapidtriang,
  title={{RapidPoseTriangulation: Multi-view Multi-person Whole-body Human Pose Triangulation in a Millisecond}},
  author={Bermuth, Daniel and Poeppel, Alexander and Reif, Wolfgang},
  journal={arXiv preprint arXiv:2503.21692},
  year={2025}
}
```

## Python API Overview

Typical triangulation flow:

```python
import numpy as np
import rpt

cameras = rpt.convert_cameras(raw_cameras)
config = rpt.make_triangulation_config(
    cameras,
    roomparams=np.asarray([[5.6, 6.4, 2.4], [0.0, -0.5, 1.2]], dtype=np.float32),
    joint_names=joint_names,
)

poses_2d, person_counts = rpt.pack_poses_2d(views, joint_count=len(joint_names))
poses_3d = rpt.triangulate_poses(poses_2d, person_counts, config)
```

Typical RGB-D flow:

```python
poses_2d, person_counts = rpt.pack_poses_2d(views, joint_count=len(joint_names))
poses_3d = rpt.reconstruct_rgbd(
    poses_2d,
    person_counts,
    depth_images,
    config,
    use_depth_offsets=True,
)
```

The lower-level RGB-D helpers are also available if you want to inspect or customize the intermediate steps:

- `sample_depth_for_poses`: sample aligned depth around visible 2D joints
- `apply_depth_offsets`: add per-joint offsets derived from SimpleDepthPose
- `lift_depth_poses_to_world`: convert `[u, v, d, score]` joints into world-space `[x, y, z, score]`
- `merge_rgbd_views`: merge per-view world-space pose proposals into final poses
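For the pinhole case, the lifting step amounts to back-projecting each `[u, v, d]` joint through the inverse intrinsics and rotating/translating into world coordinates. A minimal NumPy sketch under the camera convention this repo uses (`invR = Rᵀ`, `center = T`, so `x_cam = R @ (X_world - C)`); the real `lift_depth_poses_to_world` also handles distortion, fisheye cameras, and joint scores:

```python
import numpy as np

def lift_uvd_to_world(uvd, K, R, C):
    """Back-project [u, v, d] pixels to world space for a pinhole camera.

    Assumes x_cam = R @ (X_world - C) and [u, v] = (K @ x_cam)[:2] / z,
    so X_world = C + d * R.T @ inv(K) @ [u, v, 1].
    """
    inv_K = np.linalg.inv(K)
    uv1 = np.concatenate([uvd[:, :2], np.ones((len(uvd), 1))], axis=1)
    rays_cam = (inv_K @ uv1.T).T         # z = 1 rays in camera coordinates
    points_cam = rays_cam * uvd[:, 2:3]  # scale each ray by its sampled depth
    return C + points_cam @ R            # row-wise R.T @ p, then shift by center

# Round-trip check: project a known world point, then lift it back.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)
C = np.array([0.0, 0.0, -2.0])
X = np.array([[0.3, -0.1, 1.5]])
p_cam = (X - C) @ R.T
uv = (K @ p_cam.T).T
uvd = np.column_stack([uv[:, :2] / uv[:, 2:3], p_cam[:, 2]])
X_back = lift_uvd_to_world(uvd, K, R, C)
```

The round trip recovers the original world point, which is a convenient sanity check when wiring up your own depth sampling.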

## Ported From SimpleDepthPose

This fork ports the RGB-D fusion path from SimpleDepthPose into `rpt`.

Original upstream repository:

- [Percipiote/SimpleDepthPose](https://gitlab.com/Percipiote/SimpleDepthPose)

The ported pieces are:

- Depth sampling around each visible 2D joint, based on the `add_depth` preprocessing flow
- Per-joint depth offsets, matching the SimpleDepthPose body-surface correction idea
- UVD-to-world lifting using the calibrated camera intrinsics/extrinsics
- Multi-view RGB-D pose fusion logic adapted from `PoseFuser`

Compared with the original SimpleDepthPose implementation, the port here has been changed to fit a reusable library:

- The workflow is exposed as stateless functions instead of script-driven pipelines
- The fusion logic lives in the `rpt` core instead of a separate wrapper class
- Camera and scene configuration are routed through `TriangulationConfig`
- The RGB-D path is covered by repo tests and packaged with the same Python API as the triangulation path

This repo does not attempt to port the full SimpleDepthPose project. It only ports the RGB-D reconstruction pieces that fit the current library scope.

## Changed Vs Upstream RapidPoseTriangulation

Compared with the original upstream repository below, this fork has materially changed structure and scope:

- [Percipiote/RapidPoseTriangulation](https://gitlab.com/Percipiote/RapidPoseTriangulation)

The main changes:

- SWIG bindings were replaced with `nanobind`
- The repo was converted into a Python package under `src/rpt`
- The triangulation interface was simplified around immutable cameras and config structs
- The core was reshaped into a more library-oriented, zero-copy style API
- Debug tracing and tracked association reports were added
- Upstream integration layers and extra tooling were removed, including the old `extras/` stack and related deployment/inference wrappers
- An RGB-D reconstruction pipeline was added by porting and adapting parts of SimpleDepthPose

In practice, upstream is closer to a larger project tree with integrations and historical tooling, while this fork is closer to a compact reconstruction library.

## Testing

The repo currently ships Python-facing tests for both triangulation and RGB-D reconstruction:

```bash
uv run pytest tests/test_interface.py
uv run pytest tests/test_rgbd.py
```

Or run everything:

```bash
uv run pytest -q
```

The checked-in sample data under `data/` is used by the triangulation tests.

## Upstream References

Original upstream repositories referenced by this fork:

- RapidPoseTriangulation: [Percipiote/RapidPoseTriangulation](https://gitlab.com/Percipiote/RapidPoseTriangulation)
- SimpleDepthPose: [Percipiote/SimpleDepthPose](https://gitlab.com/Percipiote/SimpleDepthPose)
```diff
@@ -17,6 +17,7 @@ find_package(nanobind CONFIG REQUIRED)
 set(RPT_PYTHON_PACKAGE_DIR "${CMAKE_CURRENT_BINARY_DIR}/rpt")
 file(MAKE_DIRECTORY "${RPT_PYTHON_PACKAGE_DIR}")
 configure_file("${PROJECT_SOURCE_DIR}/src/rpt/__init__.py" "${RPT_PYTHON_PACKAGE_DIR}/__init__.py" COPYONLY)
+configure_file("${PROJECT_SOURCE_DIR}/src/rpt/__init__.pyi" "${RPT_PYTHON_PACKAGE_DIR}/__init__.pyi" COPYONLY)
 configure_file("${PROJECT_SOURCE_DIR}/src/rpt/_helpers.py" "${RPT_PYTHON_PACKAGE_DIR}/_helpers.py" COPYONLY)
 configure_file("${PROJECT_SOURCE_DIR}/src/rpt/py.typed" "${RPT_PYTHON_PACKAGE_DIR}/py.typed" COPYONLY)
@@ -29,7 +30,7 @@ set_target_properties(rpt_core_ext PROPERTIES
 target_link_libraries(rpt_core_ext PRIVATE rpt_core)
 target_include_directories(rpt_core_ext PRIVATE
     "${PROJECT_SOURCE_DIR}/rpt"
     "${PROJECT_SOURCE_DIR}/rpt_cpp"
 )
 
 nanobind_add_stub(rpt_core_stub
@@ -40,4 +41,6 @@ nanobind_add_stub(rpt_core_stub
 )
 
 install(TARGETS rpt_core_ext LIBRARY DESTINATION rpt)
+install(FILES "${RPT_PYTHON_PACKAGE_DIR}/__init__.pyi" DESTINATION rpt)
+install(FILES "${RPT_PYTHON_PACKAGE_DIR}/_core.pyi" DESTINATION rpt)
 install(FILES "${RPT_PYTHON_PACKAGE_DIR}/py.typed" DESTINATION rpt)
```
+183 −22

```diff
@@ -20,8 +20,11 @@ namespace
 using PoseArray2D =
     nb::ndarray<nb::numpy, const float, nb::shape<-1, -1, -1, 3>, nb::c_contig>;
 using CountArray = nb::ndarray<nb::numpy, const uint32_t, nb::shape<-1>, nb::c_contig>;
+using TrackIdArray = nb::ndarray<nb::numpy, const int64_t, nb::shape<-1>, nb::c_contig>;
 using PoseArray3DConst =
     nb::ndarray<nb::numpy, const float, nb::shape<-1, -1, 4>, nb::c_contig>;
+using PoseArray3DByViewConst =
+    nb::ndarray<nb::numpy, const float, nb::shape<-1, -1, -1, 4>, nb::c_contig>;
 using PoseArray3D = nb::ndarray<nb::numpy, float, nb::shape<-1, -1, 4>, nb::c_contig>;
 using PoseArray2DOut = nb::ndarray<nb::numpy, float, nb::shape<-1, 4>, nb::c_contig>;
@@ -58,6 +61,50 @@ PoseBatch3DView pose_batch3d_view_from_numpy(const PoseArray3DConst &poses_3d)
     };
 }
 
+PoseBatch3DByViewView pose_batch3d_by_view_from_numpy(
+    const PoseArray3DByViewConst &poses_3d,
+    const CountArray &person_counts)
+{
+    if (poses_3d.shape(0) != person_counts.shape(0))
+    {
+        throw std::invalid_argument("poses_3d and person_counts must have the same number of views.");
+    }
+
+    for (size_t i = 0; i < static_cast<size_t>(person_counts.shape(0)); ++i)
+    {
+        if (person_counts(i) > poses_3d.shape(1))
+        {
+            throw std::invalid_argument("person_counts entries must not exceed the padded person dimension.");
+        }
+    }
+
+    return PoseBatch3DByViewView {
+        poses_3d.data(),
+        person_counts.data(),
+        static_cast<size_t>(poses_3d.shape(0)),
+        static_cast<size_t>(poses_3d.shape(1)),
+        static_cast<size_t>(poses_3d.shape(2)),
+    };
+}
+
+TrackedPoseBatch3DView tracked_pose_batch_view_from_numpy(
+    const PoseArray3DConst &poses_3d,
+    const TrackIdArray &track_ids)
+{
+    if (poses_3d.shape(0) != track_ids.shape(0))
+    {
+        throw std::invalid_argument(
+            "previous_poses_3d and previous_track_ids must have the same number of tracks.");
+    }
+
+    return TrackedPoseBatch3DView {
+        track_ids.data(),
+        poses_3d.data(),
+        static_cast<size_t>(poses_3d.shape(0)),
+        static_cast<size_t>(poses_3d.shape(1)),
+    };
+}
+
 PoseArray3D pose_batch_to_numpy(PoseBatch3D batch)
 {
     auto *storage = new std::vector<float>(std::move(batch.data));
@@ -139,15 +186,54 @@ NB_MODULE(_core, m)
         .value("FISHEYE", CameraModel::Fisheye);
 
     nb::class_<Camera>(m, "Camera")
         .def(nb::init<>())
-        .def_rw("name", &Camera::name)
-        .def_rw("K", &Camera::K)
-        .def_rw("DC", &Camera::DC)
-        .def_rw("R", &Camera::R)
-        .def_rw("T", &Camera::T)
-        .def_rw("width", &Camera::width)
-        .def_rw("height", &Camera::height)
-        .def_rw("model", &Camera::model)
+        .def_prop_ro("name", [](const Camera &camera) { return camera.name; })
+        .def_prop_ro("K", [](const Camera &camera) { return camera.K; })
+        .def_prop_ro("DC", [](const Camera &camera) { return camera.DC; })
+        .def_prop_ro("R", [](const Camera &camera) { return camera.R; })
+        .def_prop_ro("T", [](const Camera &camera) { return camera.T; })
+        .def_prop_ro("width", [](const Camera &camera) { return camera.width; })
+        .def_prop_ro("height", [](const Camera &camera) { return camera.height; })
+        .def_prop_ro("model", [](const Camera &camera) { return camera.model; })
+        .def_prop_ro("invR", [](const Camera &camera) { return camera.invR; })
+        .def_prop_ro("center", [](const Camera &camera) { return camera.center; })
+        .def_prop_ro("newK", [](const Camera &camera) { return camera.newK; })
+        .def_prop_ro("invK", [](const Camera &camera) { return camera.invK; })
         .def("__repr__", [](const Camera &camera) { return camera.to_string(); })
@@ -177,6 +263,7 @@ NB_MODULE(_core, m)
     nb::class_<PreviousPoseMatch>(m, "PreviousPoseMatch")
         .def(nb::init<>())
         .def_rw("previous_pose_index", &PreviousPoseMatch::previous_pose_index)
+        .def_rw("previous_track_id", &PreviousPoseMatch::previous_track_id)
         .def_rw("score_view1", &PreviousPoseMatch::score_view1)
         .def_rw("score_view2", &PreviousPoseMatch::score_view2)
         .def_rw("matched_view1", &PreviousPoseMatch::matched_view1)
@@ -235,6 +322,34 @@ NB_MODULE(_core, m)
             return merged_poses_to_numpy_copy(merge.merged_poses);
         }, nb::rv_policy::move);
 
+    nb::enum_<AssociationStatus>(m, "AssociationStatus")
+        .value("MATCHED", AssociationStatus::Matched)
+        .value("NEW", AssociationStatus::New)
+        .value("AMBIGUOUS", AssociationStatus::Ambiguous);
+
+    nb::class_<AssociationReport>(m, "AssociationReport")
+        .def(nb::init<>())
+        .def_rw("pose_previous_indices", &AssociationReport::pose_previous_indices)
+        .def_rw("pose_previous_track_ids", &AssociationReport::pose_previous_track_ids)
+        .def_rw("pose_status", &AssociationReport::pose_status)
+        .def_rw("pose_candidate_previous_indices", &AssociationReport::pose_candidate_previous_indices)
+        .def_rw("pose_candidate_previous_track_ids", &AssociationReport::pose_candidate_previous_track_ids)
+        .def_rw("unmatched_previous_indices", &AssociationReport::unmatched_previous_indices)
+        .def_rw("unmatched_previous_track_ids", &AssociationReport::unmatched_previous_track_ids)
+        .def_rw("new_pose_indices", &AssociationReport::new_pose_indices)
+        .def_rw("ambiguous_pose_indices", &AssociationReport::ambiguous_pose_indices);
+
+    nb::class_<FinalPoseAssociationDebug>(m, "FinalPoseAssociationDebug")
+        .def(nb::init<>())
+        .def_rw("final_pose_index", &FinalPoseAssociationDebug::final_pose_index)
+        .def_rw("source_core_proposal_indices", &FinalPoseAssociationDebug::source_core_proposal_indices)
+        .def_rw("source_pair_indices", &FinalPoseAssociationDebug::source_pair_indices)
+        .def_rw("candidate_previous_indices", &FinalPoseAssociationDebug::candidate_previous_indices)
+        .def_rw("candidate_previous_track_ids", &FinalPoseAssociationDebug::candidate_previous_track_ids)
+        .def_rw("resolved_previous_index", &FinalPoseAssociationDebug::resolved_previous_index)
+        .def_rw("resolved_previous_track_id", &FinalPoseAssociationDebug::resolved_previous_track_id)
+        .def_rw("status", &FinalPoseAssociationDebug::status);
+
     nb::class_<TriangulationTrace>(m, "TriangulationTrace")
         .def(nb::init<>())
         .def_rw("pairs", &TriangulationTrace::pairs)
@@ -243,11 +358,33 @@ NB_MODULE(_core, m)
         .def_rw("grouping", &TriangulationTrace::grouping)
         .def_rw("full_proposals", &TriangulationTrace::full_proposals)
         .def_rw("merge", &TriangulationTrace::merge)
+        .def_rw("association", &TriangulationTrace::association)
+        .def_rw("final_pose_associations", &TriangulationTrace::final_pose_associations)
         .def_prop_ro("final_poses", [](const TriangulationTrace &trace)
         {
             return pose_batch_to_numpy_copy(trace.final_poses);
         }, nb::rv_policy::move);
 
+    nb::class_<TriangulationResult>(m, "TriangulationResult")
+        .def(nb::init<>())
+        .def_rw("association", &TriangulationResult::association)
+        .def_prop_ro("poses_3d", [](const TriangulationResult &result)
+        {
+            return pose_batch_to_numpy_copy(result.poses);
+        }, nb::rv_policy::move);
+
+    m.def(
+        "make_camera",
+        &make_camera,
+        "name"_a, "K"_a, "DC"_a, "R"_a, "T"_a, "width"_a, "height"_a, "model"_a);
 
     m.def(
         "build_pair_candidates",
         [](const PoseArray2D &poses_2d, const CountArray &person_counts)
@@ -262,17 +399,19 @@ NB_MODULE(_core, m)
         [](const PoseArray2D &poses_2d,
            const CountArray &person_counts,
            const TriangulationConfig &config,
-           const PoseArray3DConst &previous_poses_3d)
+           const PoseArray3DConst &previous_poses_3d,
+           const TrackIdArray &previous_track_ids)
         {
             return filter_pairs_with_previous_poses(
                 pose_batch_view_from_numpy(poses_2d, person_counts),
                 config,
-                pose_batch3d_view_from_numpy(previous_poses_3d));
+                tracked_pose_batch_view_from_numpy(previous_poses_3d, previous_track_ids));
         },
         "poses_2d"_a,
         "person_counts"_a,
         "config"_a,
-        "previous_poses_3d"_a);
+        "previous_poses_3d"_a,
+        "previous_track_ids"_a);
 
     m.def(
         "triangulate_debug",
@@ -291,9 +430,11 @@ NB_MODULE(_core, m)
         [](const PoseArray2D &poses_2d,
            const CountArray &person_counts,
            const TriangulationConfig &config,
-           const PoseArray3DConst &previous_poses_3d)
+           const PoseArray3DConst &previous_poses_3d,
+           const TrackIdArray &previous_track_ids)
         {
-            const PoseBatch3DView previous_view = pose_batch3d_view_from_numpy(previous_poses_3d);
+            const TrackedPoseBatch3DView previous_view =
+                tracked_pose_batch_view_from_numpy(previous_poses_3d, previous_track_ids);
             return triangulate_debug(
                 pose_batch_view_from_numpy(poses_2d, person_counts),
                 config,
@@ -302,7 +443,8 @@ NB_MODULE(_core, m)
         "poses_2d"_a,
         "person_counts"_a,
         "config"_a,
-        "previous_poses_3d"_a);
+        "previous_poses_3d"_a,
+        "previous_track_ids"_a);
 
     m.def(
         "triangulate_poses",
@@ -319,21 +461,40 @@ NB_MODULE(_core, m)
         "config"_a);
 
     m.def(
-        "triangulate_poses",
+        "merge_rgbd_views",
+        [](const PoseArray3DByViewConst &poses_3d,
+           const CountArray &person_counts,
+           const TriangulationConfig &config,
+           float max_distance)
+        {
+            const PoseBatch3D merged = merge_rgbd_views(
+                pose_batch3d_by_view_from_numpy(poses_3d, person_counts),
+                config,
+                max_distance);
+            return pose_batch_to_numpy(merged);
+        },
+        "poses_3d"_a,
+        "person_counts"_a,
+        "config"_a,
+        "max_distance"_a = 0.5f);
 
     m.def(
         "triangulate_with_report",
         [](const PoseArray2D &poses_2d,
            const CountArray &person_counts,
            const TriangulationConfig &config,
-           const PoseArray3DConst &previous_poses_3d)
+           const PoseArray3DConst &previous_poses_3d,
+           const TrackIdArray &previous_track_ids)
         {
-            const PoseBatch3DView previous_view = pose_batch3d_view_from_numpy(previous_poses_3d);
-            const PoseBatch3D poses_3d = triangulate_poses(
+            const TriangulationResult result = triangulate_with_report(
                 pose_batch_view_from_numpy(poses_2d, person_counts),
                 config,
-                &previous_view);
-            return pose_batch_to_numpy(poses_3d);
+                tracked_pose_batch_view_from_numpy(previous_poses_3d, previous_track_ids));
+            return result;
         },
         "poses_2d"_a,
         "person_counts"_a,
         "config"_a,
-        "previous_poses_3d"_a);
+        "previous_poses_3d"_a,
+        "previous_track_ids"_a);
 }
```
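The converter functions in this diff enforce a few array invariants before wrapping NumPy data: track ids must pair one-to-one with previous poses, and per-view person counts may not exceed the padded person axis. A small NumPy sketch of the same checks as a Python caller would hit them; `check_rgbd_inputs` is a hypothetical helper mirroring the `std::invalid_argument` conditions in `pose_batch3d_by_view_from_numpy`, not part of the actual API:

```python
import numpy as np

def check_rgbd_inputs(poses_3d, person_counts):
    """Mirror the shape checks in pose_batch3d_by_view_from_numpy."""
    if poses_3d.shape[0] != person_counts.shape[0]:
        raise ValueError("poses_3d and person_counts must have the same number of views.")
    if (person_counts > poses_3d.shape[1]).any():
        raise ValueError("person_counts entries must not exceed the padded person dimension.")

poses_3d = np.zeros((2, 3, 17, 4), dtype=np.float32)  # (views, max_persons, joints, 4)
check_rgbd_inputs(poses_3d, np.array([3, 1], dtype=np.uint32))       # fine
try:
    check_rgbd_inputs(poses_3d, np.array([4, 1], dtype=np.uint32))   # 4 > padded dim 3
except ValueError as err:
    print(err)
```

Failing these checks in Python before crossing into the extension gives clearer errors than a crash inside the C++ core.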
−24

```diff
@@ -1,24 +0,0 @@
-FROM nvcr.io/nvidia/tensorrt:24.10-py3
-
-ARG DEBIAN_FRONTEND=noninteractive
-ENV LANG=C.UTF-8
-ENV LC_ALL=C.UTF-8
-WORKDIR /
-
-RUN apt-get update && apt-get install -y --no-install-recommends feh
-RUN apt-get update && apt-get install -y --no-install-recommends python3-opencv
-RUN pip uninstall -y opencv-python && pip install --no-cache-dir "opencv-python<4.3"
-
-# Show matplotlib images
-RUN apt-get update && apt-get install -y --no-install-recommends python3-tk
-
-# Python build frontend
-RUN pip3 install --upgrade --no-cache-dir pip
-
-# Install build dependencies
-RUN apt-get update && apt-get install -y --no-install-recommends build-essential
-RUN apt-get update && apt-get install -y --no-install-recommends libopencv-dev
-RUN pip3 install --no-cache-dir uv
-
-WORKDIR /RapidPoseTriangulation/
-CMD ["/bin/bash"]
```
_File diff suppressed because it is too large._
+9 −3

```diff
@@ -7,14 +7,20 @@ build-backend = "scikit_build_core.build"
 
 [project]
 name = "rapid-pose-triangulation"
-version = "0.1.0"
+version = "0.2.0"
 description = "Rapid Pose Triangulation library with nanobind Python bindings"
 readme = "README.md"
 requires-python = ">=3.10"
-dependencies = ["numpy>=2.0"]
+dependencies = [
+    "jaxtyping",
+    "numpy>=2.0",
+]
 
 [dependency-groups]
-dev = ["pytest>=8.3"]
+dev = [
+    "basedpyright>=1.38.3",
+    "pytest>=8.3",
+]
 
 [tool.scikit-build]
 minimum-version = "build-system.requires"
```
```diff
@@ -0,0 +1,13 @@
+{
+    "include": ["src"],
+    "ignore": ["src/rpt/_core.pyi"],
+    "failOnWarnings": false,
+    "pythonVersion": "3.10",
+    "reportMissingModuleSource": "none",
+    "executionEnvironments": [
+        {
+            "root": "tests",
+            "extraPaths": ["src"]
+        }
+    ]
+}
```
```diff
@@ -1,19 +0,0 @@
-#pragma once
-
-#include <array>
-
-#include "camera.hpp"
-
-struct CachedCamera
-{
-    const Camera cam;
-    const std::array<std::array<float, 3>, 3> invR;
-    const std::array<float, 3> center;
-    const std::array<std::array<float, 3>, 3> newK;
-    const std::array<std::array<float, 3>, 3> invK;
-};
-
-CachedCamera cache_camera(const Camera &camera);
-
-void undistort_point_pinhole(std::array<float, 3> &point, const std::array<float, 5> &distortion);
-void undistort_point_fisheye(std::array<float, 3> &point, const std::array<float, 5> &distortion);
```
```diff
@@ -3,6 +3,7 @@
 set(RPT_SOURCES
     camera.cpp
     interface.cpp
+    rgbd_merger.cpp
     triangulator.cpp
 )
```
```diff
@@ -1,12 +1,12 @@
 #include <array>
+#include <cmath>
 #include <iomanip>
-#include <stdexcept>
 #include <sstream>
+#include <stdexcept>
 #include <utility>
 #include <vector>
-#include <cmath>
 
-#include "cached_camera.hpp"
 #include "camera.hpp"
 
 namespace
 {
@@ -156,28 +156,46 @@ std::ostream &operator<<(std::ostream &out, const Camera &cam)
 // =================================================================================================
 // =================================================================================================
 
-CachedCamera cache_camera(const Camera &cam)
+Camera make_camera(
+    std::string name,
+    std::array<std::array<float, 3>, 3> K,
+    std::array<float, 5> DC,
+    std::array<std::array<float, 3>, 3> R,
+    std::array<std::array<float, 1>, 3> T,
+    int width,
+    int height,
+    CameraModel model)
 {
-    const std::array<std::array<float, 3>, 3> invR = transpose3x3(cam.R);
+    Camera cam {
+        .name = std::move(name),
+        .K = K,
+        .DC = DC,
+        .R = R,
+        .T = T,
+        .width = width,
+        .height = height,
+        .model = model,
+    };
+
+    cam.invR = transpose3x3(cam.R);
 
     // Camera center:
     // C = -(Rᵀ * t) = -(Rᵀ * (R * (T * -1))) = -(Rᵀ * (R * -T)) = -(Rᵀ * -R * T) = -(-T) = T
-    const std::array<float, 3> center = {cam.T[0][0], cam.T[1][0], cam.T[2][0]};
+    cam.center = {cam.T[0][0], cam.T[1][0], cam.T[2][0]};
```
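The camera-center comment in the diff above relies on the convention that the stored extrinsic translation is t = R · (−T), i.e. T is already the camera center in world coordinates, so C = −Rᵀt collapses to T. A quick NumPy check of that identity under the same assumption:

```python
import numpy as np

# A proper rotation matrix (90 degrees about z) and a world-space camera center T.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = np.array([1.5, -0.5, 2.0])

# Under the assumed convention, the stored extrinsic translation is t = R @ (-T).
t = R @ (-T)

# Standard pinhole camera center C = -R.T @ t, which collapses to T:
# -R.T @ (R @ (-T)) = -(-T) = T.
C = -R.T @ t
```

The identity holds for any rotation matrix, which is why the code can copy T straight into `cam.center` without a matrix product.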
```diff
     // Undistort camera matrix
     // As with the undistortion, the own implementation avoids some overhead compared to OpenCV
-    std::array<std::array<float, 3>, 3> newK;
     if (cam.model == CameraModel::Fisheye)
     {
-        newK = calc_optimal_camera_matrix_fisheye(cam, 1.0f, {cam.width, cam.height});
+        cam.newK = calc_optimal_camera_matrix_fisheye(cam, 1.0f, {cam.width, cam.height});
     }
     else
     {
-        newK = calc_optimal_camera_matrix_pinhole(cam, 1.0f, {cam.width, cam.height});
+        cam.newK = calc_optimal_camera_matrix_pinhole(cam, 1.0f, {cam.width, cam.height});
     }
-    const std::array<std::array<float, 3>, 3> invK = invert3x3(newK);
+    cam.invK = invert3x3(cam.newK);
 
-    return CachedCamera {cam, invR, center, newK, invK};
+    return cam;
 }
 
 // =================================================================================================
```
```diff
@@ -25,10 +25,27 @@ struct Camera
     std::array<float, 5> DC = {0.0f, 0.0f, 0.0f, 0.0f, 0.0f};
     std::array<std::array<float, 3>, 3> R;
     std::array<std::array<float, 1>, 3> T;
-    int width;
-    int height;
+    int width = 0;
+    int height = 0;
     CameraModel model = CameraModel::Pinhole;
+    std::array<std::array<float, 3>, 3> invR {};
+    std::array<float, 3> center {};
+    std::array<std::array<float, 3>, 3> newK {};
+    std::array<std::array<float, 3>, 3> invK {};
 
     friend std::ostream &operator<<(std::ostream &out, Camera const &camera);
+    std::string to_string() const;
 };
 
+Camera make_camera(
+    std::string name,
+    std::array<std::array<float, 3>, 3> K,
+    std::array<float, 5> DC,
+    std::array<std::array<float, 3>, 3> R,
+    std::array<std::array<float, 1>, 3> T,
+    int width,
+    int height,
+    CameraModel model);
+
 void undistort_point_pinhole(std::array<float, 3> &point, const std::array<float, 5> &distortion);
 void undistort_point_fisheye(std::array<float, 3> &point, const std::array<float, 5> &distortion);
```
```diff
@@ -23,6 +23,17 @@ size_t pose3d_offset(size_t person, size_t joint, size_t coord, size_t num_joints)
 {
     return (((person * num_joints) + joint) * 4) + coord;
 }
 
+size_t pose3d_by_view_offset(
+    size_t view,
+    size_t person,
+    size_t joint,
+    size_t coord,
+    size_t max_persons,
+    size_t num_joints)
+{
+    return ((((view * max_persons) + person) * num_joints) + joint) * 4 + coord;
+}
```
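The flat-offset helpers above implement row-major indexing into `(persons, joints, 4)` and `(views, max_persons, joints, 4)` buffers. A NumPy cross-check that the arithmetic matches `np.ravel_multi_index` for those shapes:

```python
import numpy as np

def pose3d_offset(person, joint, coord, num_joints):
    # Row-major offset into a (persons, joints, 4) buffer.
    return ((person * num_joints) + joint) * 4 + coord

def pose3d_by_view_offset(view, person, joint, coord, max_persons, num_joints):
    # Row-major offset into a (views, max_persons, joints, 4) buffer.
    return (((view * max_persons) + person) * num_joints + joint) * 4 + coord

views, max_persons, num_joints = 3, 5, 17
for idx in [(0, 0, 0, 0), (1, 2, 3, 1), (2, 4, 16, 3)]:
    view, person, joint, coord = idx
    expected = int(np.ravel_multi_index(idx, (views, max_persons, num_joints, 4)))
    assert pose3d_by_view_offset(view, person, joint, coord, max_persons, num_joints) == expected
```

Cross-checking against NumPy like this is an easy way to catch stride bugs when new axes (here, the view axis) are added to the C++ layout.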
} // namespace
|
||||
|
||||
// =================================================================================================
|
||||
@@ -43,6 +54,21 @@ const float &PoseBatch3DView::at(size_t person, size_t joint, size_t coord) cons
|
||||
return data[pose3d_offset(person, joint, coord, num_joints)];
|
||||
}
|
||||
|
||||
int64_t TrackedPoseBatch3DView::track_id(size_t person) const
|
||||
{
|
||||
return track_ids[person];
|
||||
}
|
||||
|
||||
const float &TrackedPoseBatch3DView::at(size_t person, size_t joint, size_t coord) const
|
||||
{
|
||||
return data[pose3d_offset(person, joint, coord, num_joints)];
|
||||
}
|
||||
|
||||
const float &PoseBatch3DByViewView::at(size_t view, size_t person, size_t joint, size_t coord) const
|
||||
{
|
||||
return data[pose3d_by_view_offset(view, person, joint, coord, max_persons, num_joints)];
|
||||
}
|
||||
|
||||
const float &PoseBatch2D::at(size_t view, size_t person, size_t joint, size_t coord) const
|
||||
{
|
||||
return data[pose2d_offset(view, person, joint, coord, max_persons, num_joints)];
|
||||
@@ -119,6 +145,27 @@ PoseBatch3DView PoseBatch3D::view() const
|
||||
return PoseBatch3DView {data.data(), num_persons, num_joints};
|
||||
}
|
||||
|
||||
float &PoseBatch3DByView::at(size_t view, size_t person, size_t joint, size_t coord)
|
||||
{
|
||||
return data[pose3d_by_view_offset(view, person, joint, coord, max_persons, num_joints)];
|
||||
}
|
||||
|
||||
const float &PoseBatch3DByView::at(size_t view, size_t person, size_t joint, size_t coord) const
|
||||
{
|
||||
return data[pose3d_by_view_offset(view, person, joint, coord, max_persons, num_joints)];
|
||||
}
|
||||
|
||||
PoseBatch3DByViewView PoseBatch3DByView::view() const
|
||||
{
|
||||
return PoseBatch3DByViewView {
|
||||
data.data(),
|
||||
person_counts.data(),
|
||||
num_views,
|
||||
max_persons,
|
||||
num_joints,
|
||||
};
|
||||
}
|
||||
|
||||
NestedPoses3D PoseBatch3D::to_nested() const
|
||||
{
|
||||
NestedPoses3D poses_3d(num_persons);
|
||||
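The flattened by-view layout above indexes a single float buffer as `[view][person][joint][coord]` with 4 floats per joint. A standalone sketch of that row-major computation (the helper name here is hypothetical, mirroring `pose3d_by_view_offset`):

```cpp
#include <cassert>
#include <cstddef>

// Row-major offset into a buffer laid out as [view][person][joint][coord],
// with 4 floats per joint (x, y, z, score).
std::size_t by_view_offset(std::size_t view, std::size_t person, std::size_t joint,
                           std::size_t coord, std::size_t max_persons,
                           std::size_t num_joints)
{
    return ((((view * max_persons) + person) * num_joints) + joint) * 4 + coord;
}
```

With this layout, advancing one view skips `max_persons * num_joints * 4` floats, which is why every view must reserve `max_persons` slots even when fewer persons are detected (the `person_counts` array records the real counts).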
@@ -34,6 +34,28 @@ struct PoseBatch3DView
    const float &at(size_t person, size_t joint, size_t coord) const;
};

struct TrackedPoseBatch3DView
{
    const int64_t *track_ids = nullptr;
    const float *data = nullptr;
    size_t num_persons = 0;
    size_t num_joints = 0;

    int64_t track_id(size_t person) const;
    const float &at(size_t person, size_t joint, size_t coord) const;
};

struct PoseBatch3DByViewView
{
    const float *data = nullptr;
    const uint32_t *person_counts = nullptr;
    size_t num_views = 0;
    size_t max_persons = 0;
    size_t num_joints = 0;

    const float &at(size_t view, size_t person, size_t joint, size_t coord) const;
};

struct PoseBatch2D
{
    std::vector<float> data;
@@ -63,6 +85,19 @@ struct PoseBatch3D
    static PoseBatch3D from_nested(const NestedPoses3D &poses_3d);
};

struct PoseBatch3DByView
{
    std::vector<float> data;
    std::vector<uint32_t> person_counts;
    size_t num_views = 0;
    size_t max_persons = 0;
    size_t num_joints = 0;

    float &at(size_t view, size_t person, size_t joint, size_t coord);
    const float &at(size_t view, size_t person, size_t joint, size_t coord) const;
    PoseBatch3DByViewView view() const;
};

// =================================================================================================

struct PairCandidate
@@ -78,6 +113,7 @@ struct PairCandidate
struct PreviousPoseMatch
{
    int previous_pose_index = -1;
    int64_t previous_track_id = -1;
    float score_view1 = 0.0f;
    float score_view2 = 0.0f;
    bool matched_view1 = false;
@@ -131,6 +167,38 @@ struct MergeDebug
    std::vector<std::vector<int>> group_proposal_indices;
};

enum class AssociationStatus
{
    Matched,
    New,
    Ambiguous,
};

struct AssociationReport
{
    std::vector<int> pose_previous_indices;
    std::vector<int64_t> pose_previous_track_ids;
    std::vector<AssociationStatus> pose_status;
    std::vector<std::vector<int>> pose_candidate_previous_indices;
    std::vector<std::vector<int64_t>> pose_candidate_previous_track_ids;
    std::vector<int> unmatched_previous_indices;
    std::vector<int64_t> unmatched_previous_track_ids;
    std::vector<int> new_pose_indices;
    std::vector<int> ambiguous_pose_indices;
};

struct FinalPoseAssociationDebug
{
    int final_pose_index = -1;
    std::vector<int> source_core_proposal_indices;
    std::vector<int> source_pair_indices;
    std::vector<int> candidate_previous_indices;
    std::vector<int64_t> candidate_previous_track_ids;
    int resolved_previous_index = -1;
    int64_t resolved_previous_track_id = -1;
    AssociationStatus status = AssociationStatus::New;
};

struct TriangulationTrace
{
    std::vector<PairCandidate> pairs;
@@ -139,9 +207,17 @@ struct TriangulationTrace
    GroupingDebug grouping;
    std::vector<FullProposalDebug> full_proposals;
    MergeDebug merge;
    AssociationReport association;
    std::vector<FinalPoseAssociationDebug> final_pose_associations;
    PoseBatch3D final_poses;
};

struct TriangulationResult
{
    PoseBatch3D poses;
    AssociationReport association;
};

// =================================================================================================

struct TriangulationOptions
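The `AssociationReport` above gives each output pose one of three statuses. A minimal standalone sketch (the enum is copied from the diff; the `describe` helper is hypothetical) of how a caller might branch on it:

```cpp
#include <cassert>
#include <string>

// Mirror of the AssociationStatus enum introduced in the diff, plus a
// hypothetical helper showing the intended per-pose handling.
enum class AssociationStatus { Matched, New, Ambiguous };

std::string describe(AssociationStatus status, long long track_id)
{
    switch (status)
    {
    case AssociationStatus::Matched:
        return "continue track " + std::to_string(track_id);   // reuse previous id
    case AssociationStatus::New:
        return "assign fresh track id";                        // no previous match
    default:
        return "defer: multiple previous tracks matched";      // ambiguous merge
    }
}
```

`unmatched_previous_indices` then lists previous tracks that no current pose claimed, which a tracker can age out.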
@@ -165,40 +241,15 @@ std::vector<PairCandidate> build_pair_candidates(const PoseBatch2DView &poses_2d
PreviousPoseFilterDebug filter_pairs_with_previous_poses(
    const PoseBatch2DView &poses_2d,
    const TriangulationConfig &config,
    const PoseBatch3DView &previous_poses_3d,
    const TrackedPoseBatch3DView &previous_poses_3d,
    const TriangulationOptions *options_override = nullptr);

inline PreviousPoseFilterDebug filter_pairs_with_previous_poses(
    const PoseBatch2D &poses_2d,
    const TriangulationConfig &config,
    const PoseBatch3D &previous_poses_3d,
    const TriangulationOptions *options_override = nullptr)
{
    return filter_pairs_with_previous_poses(poses_2d.view(), config, previous_poses_3d.view(), options_override);
}

TriangulationTrace triangulate_debug(
    const PoseBatch2DView &poses_2d,
    const TriangulationConfig &config,
    const PoseBatch3DView *previous_poses_3d = nullptr,
    const TrackedPoseBatch3DView *previous_poses_3d = nullptr,
    const TriangulationOptions *options_override = nullptr);

inline TriangulationTrace triangulate_debug(
    const PoseBatch2D &poses_2d,
    const TriangulationConfig &config,
    const PoseBatch3D *previous_poses_3d = nullptr,
    const TriangulationOptions *options_override = nullptr)
{
    PoseBatch3DView previous_view_storage;
    const PoseBatch3DView *previous_view = nullptr;
    if (previous_poses_3d != nullptr)
    {
        previous_view_storage = previous_poses_3d->view();
        previous_view = &previous_view_storage;
    }
    return triangulate_debug(poses_2d.view(), config, previous_view, options_override);
}

// =================================================================================================

/**
@@ -213,21 +264,48 @@ inline TriangulationTrace triangulate_debug(
PoseBatch3D triangulate_poses(
    const PoseBatch2DView &poses_2d,
    const TriangulationConfig &config,
    const PoseBatch3DView *previous_poses_3d = nullptr,
    const TriangulationOptions *options_override = nullptr);

PoseBatch3D merge_rgbd_views(
    const PoseBatch3DByViewView &poses_3d,
    const TriangulationConfig &config,
    float max_distance = 0.5f);

TriangulationResult triangulate_with_report(
    const PoseBatch2DView &poses_2d,
    const TriangulationConfig &config,
    const TrackedPoseBatch3DView &previous_poses_3d,
    const TriangulationOptions *options_override = nullptr);

inline PoseBatch3D triangulate_poses(
    const PoseBatch2D &poses_2d,
    const TriangulationConfig &config,
    const PoseBatch3D *previous_poses_3d = nullptr,
    const TriangulationOptions *options_override = nullptr)
{
    PoseBatch3DView previous_view_storage;
    const PoseBatch3DView *previous_view = nullptr;
    if (previous_poses_3d != nullptr)
    return triangulate_poses(poses_2d.view(), config, options_override);
}

inline PoseBatch3D merge_rgbd_views(
    const PoseBatch3DByView &poses_3d,
    const TriangulationConfig &config,
    float max_distance = 0.5f)
{
        previous_view_storage = previous_poses_3d->view();
        previous_view = &previous_view_storage;
    return merge_rgbd_views(poses_3d.view(), config, max_distance);
}
    return triangulate_poses(poses_2d.view(), config, previous_view, options_override);

inline TriangulationTrace triangulate_debug(
    const PoseBatch2D &poses_2d,
    const TriangulationConfig &config,
    const TriangulationOptions *options_override = nullptr)
{
    return triangulate_debug(poses_2d.view(), config, nullptr, options_override);
}

inline TriangulationResult triangulate_with_report(
    const PoseBatch2D &poses_2d,
    const TriangulationConfig &config,
    const TrackedPoseBatch3DView &previous_poses_3d,
    const TriangulationOptions *options_override = nullptr)
{
    return triangulate_with_report(poses_2d.view(), config, previous_poses_3d, options_override);
}
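The header consistently pairs each view-based entry point with an inline overload for the owning batch type that just forwards through `.view()`. A minimal standalone illustration of that pattern (all names here are hypothetical, not from the library):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Non-owning view over a float buffer (stand-in for PoseBatch2DView etc.).
struct BufView { const float *data; std::size_t n; };

// Owning batch (stand-in for PoseBatch2D) that can hand out a view.
struct Buf
{
    std::vector<float> data;
    BufView view() const { return BufView {data.data(), data.size()}; }
};

// Core implementation works on the view only.
float sum(const BufView &v)
{
    float total = 0.0f;
    for (std::size_t i = 0; i < v.n; ++i) total += v.data[i];
    return total;
}

// Convenience overload for the owning type forwards to the view version.
inline float sum(const Buf &b) { return sum(b.view()); }
```

This keeps the implementation independent of ownership, so the same core can accept buffers zero-copied from Python via the nanobind layer.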
File diff suppressed because it is too large
@@ -9,9 +9,9 @@
#include <stdexcept>
#include <string_view>
#include <unordered_map>
#include <unordered_set>

#include "interface.hpp"
#include "cached_camera.hpp"

namespace
{
@@ -25,73 +25,14 @@ size_t packed_pose_offset(size_t pose, size_t joint, size_t coord, size_t num_jo
    return (((pose * num_joints) + joint) * 3) + coord;
}

[[maybe_unused]] static void print_2d_poses(const Pose2D &poses)
{
    std::cout << "Poses: (" << poses.size() << ", 3)[" << std::endl;
    for (const auto &pose : poses)
    {
        std::cout << " [";
        for (size_t i = 0; i < 3; ++i)
        {
            std::cout << std::fixed << std::setprecision(3) << pose[i];
            if (i < 2)
            {
                std::cout << ", ";
            }
        }
        std::cout << "]" << std::endl;
    }
    std::cout << "]" << std::endl;
}

[[maybe_unused]] static void print_3d_poses(const Pose3D &poses)
{
    std::cout << "Poses: (" << poses.size() << ", 4)[" << std::endl;
    for (const auto &pose : poses)
    {
        std::cout << " [";
        for (size_t i = 0; i < 4; ++i)
        {
            std::cout << std::fixed << std::setprecision(3) << pose[i];
            if (i < 3)
            {
                std::cout << ", ";
            }
        }
        std::cout << "]" << std::endl;
    }
    std::cout << "]" << std::endl;
}

[[maybe_unused]] static void print_allpairs(const std::vector<PosePair> &all_pairs)
{
    std::cout << "All pairs: [" << std::endl;
    for (const auto &pair : all_pairs)
    {
        const auto &tuple_part = pair.first;
        const auto &pair_part = pair.second;

        std::cout << "("
                  << std::get<0>(tuple_part) << ", "
                  << std::get<1>(tuple_part) << ", "
                  << std::get<2>(tuple_part) << ", "
                  << std::get<3>(tuple_part) << ")";
        std::cout << ", ("
                  << pair_part.first << ", "
                  << pair_part.second << ")"
                  << std::endl;
    }
    std::cout << "]" << std::endl;
}

std::array<float, 3> undistort_point(
    const std::array<float, 3> &raw_point, const CachedCamera &icam)
    const std::array<float, 3> &raw_point, const Camera &icam)
{
    std::array<float, 3> point = raw_point;
    const float ifx_old = 1.0f / icam.cam.K[0][0];
    const float ify_old = 1.0f / icam.cam.K[1][1];
    const float cx_old = icam.cam.K[0][2];
    const float cy_old = icam.cam.K[1][2];
    const float ifx_old = 1.0f / icam.K[0][0];
    const float ify_old = 1.0f / icam.K[1][1];
    const float cx_old = icam.K[0][2];
    const float cy_old = icam.K[1][2];
    const float fx_new = icam.newK[0][0];
    const float fy_new = icam.newK[1][1];
    const float cx_new = icam.newK[0][2];
@@ -100,21 +41,21 @@ std::array<float, 3> undistort_point(
    point[0] = (point[0] - cx_old) * ifx_old;
    point[1] = (point[1] - cy_old) * ify_old;

    if (icam.cam.model == CameraModel::Fisheye)
    if (icam.model == CameraModel::Fisheye)
    {
        undistort_point_fisheye(point, icam.cam.DC);
        undistort_point_fisheye(point, icam.DC);
    }
    else
    {
        undistort_point_pinhole(point, icam.cam.DC);
        undistort_point_pinhole(point, icam.DC);
    }

    point[0] = (point[0] * fx_new) + cx_new;
    point[1] = (point[1] * fy_new) + cy_new;

    const float mask_offset = (icam.cam.width + icam.cam.height) / 20.0f;
    if (point[0] < -mask_offset || point[0] >= icam.cam.width + mask_offset ||
        point[1] < -mask_offset || point[1] >= icam.cam.height + mask_offset)
    const float mask_offset = (icam.width + icam.height) / 20.0f;
    if (point[0] < -mask_offset || point[0] >= icam.width + mask_offset ||
        point[1] < -mask_offset || point[1] >= icam.height + mask_offset)
    {
        point = {0.0f, 0.0f, 0.0f};
    }
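`undistort_point` brackets the distortion correction with two intrinsic transforms: pixel coordinates are normalized with the old `K`, corrected, then re-projected with `newK`. A one-axis sketch of those two steps (the helper and its intrinsics are made up for illustration; the actual distortion models live in `undistort_point_pinhole` / `undistort_point_fisheye`):

```cpp
#include <cassert>
#include <cmath>

// fx/cx are the original intrinsics, fx_new/cx_new the rectified ones; the
// per-model distortion correction would run between the two steps.
float renormalize_u(float u, float fx, float cx, float fx_new, float cx_new)
{
    const float x = (u - cx) / fx;    // pixel -> normalized camera coordinate
    // (pinhole or fisheye distortion correction would modify x here)
    return x * fx_new + cx_new;       // normalized -> pixel under new intrinsics
}
```

With identical old and new intrinsics and no distortion this is the identity, which is a handy sanity check when validating calibration data.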
@@ -127,7 +68,7 @@ class PackedPoseStore2D
public:
    PackedPoseStore2D(
        const PoseBatch2DView &source,
        const std::vector<CachedCamera> &internal_cameras)
        const std::vector<Camera> &internal_cameras)
        : person_counts(source.num_views),
          view_offsets(source.num_views),
          num_views(source.num_views),
@@ -225,17 +166,17 @@ private:

struct PreparedInputs
{
    std::vector<CachedCamera> internal_cameras;
    const std::vector<Camera> *internal_cameras = nullptr;
    std::vector<size_t> core_joint_idx;
    std::vector<std::array<size_t, 2>> core_limbs_idx;
    PackedPoseStore2D packed_poses;

    PreparedInputs(
        std::vector<CachedCamera> cameras_in,
        const std::vector<Camera> &cameras_in,
        std::vector<size_t> core_joint_idx_in,
        std::vector<std::array<size_t, 2>> core_limbs_idx_in,
        PackedPoseStore2D packed_poses_in)
        : internal_cameras(std::move(cameras_in)),
        : internal_cameras(&cameras_in),
          core_joint_idx(std::move(core_joint_idx_in)),
          core_limbs_idx(std::move(core_limbs_idx_in)),
          packed_poses(std::move(packed_poses_in))
@@ -250,6 +191,15 @@ struct PreviousProjectionCache
    std::vector<Pose3D> core_poses;
};

struct FinalAssociationState
{
    std::vector<int> candidate_previous_indices;
    std::vector<int64_t> candidate_previous_track_ids;
    int resolved_previous_index = -1;
    int64_t resolved_previous_track_id = -1;
    AssociationStatus status = AssociationStatus::New;
};

constexpr std::array<std::string_view, 12> kCoreJoints = {
    "shoulder_left",
    "shoulder_right",
@@ -278,24 +228,24 @@ constexpr std::array<std::pair<std::string_view, std::string_view>, 8> kCoreLimb

std::vector<PairCandidate> build_pair_candidates_from_packed(const PackedPoseStore2D &packed_poses);
PreviousProjectionCache project_previous_poses(
    const PoseBatch3DView &previous_poses_3d,
    const std::vector<CachedCamera> &internal_cameras,
    const TrackedPoseBatch3DView &previous_poses_3d,
    const std::vector<Camera> &internal_cameras,
    const std::vector<size_t> &core_joint_idx);
PreviousPoseFilterDebug filter_pairs_with_previous_poses_impl(
    const PackedPoseStore2D &packed_poses,
    const std::vector<CachedCamera> &internal_cameras,
    const std::vector<Camera> &internal_cameras,
    const std::vector<size_t> &core_joint_idx,
    const std::vector<PairCandidate> &pairs,
    const TriangulationOptions &options,
    const PoseBatch3DView &previous_poses_3d);
    const TrackedPoseBatch3DView &previous_poses_3d);
float calc_pose_score(
    const Pose2D &pose,
    const Pose2D &reference_pose,
    const std::vector<float> &dists,
    const CachedCamera &icam);
    const Camera &icam);
std::tuple<std::vector<Pose2D>, std::vector<std::vector<float>>> project_poses(
    const std::vector<Pose3D> &poses_3d,
    const CachedCamera &icam,
    const Camera &icam,
    bool calc_dists);
std::vector<float> score_projection(
    const Pose2D &pose,
@@ -306,8 +256,8 @@ std::vector<float> score_projection(
std::pair<Pose3D, float> triangulate_and_score(
    const Pose2D &pose1,
    const Pose2D &pose2,
    const CachedCamera &cam1,
    const CachedCamera &cam2,
    const Camera &cam1,
    const Camera &cam2,
    const std::array<std::array<float, 3>, 2> &roomparams,
    const std::vector<std::array<size_t, 2>> &core_limbs_idx);
std::vector<GroupedPose> calc_grouping(
@@ -339,7 +289,7 @@ TriangulationTrace triangulate_debug_impl(
    const std::vector<Camera> &cameras,
    const std::array<std::array<float, 3>, 2> &roomparams,
    const std::vector<std::string> &joint_names,
    const PoseBatch3DView *previous_poses_3d,
    const TrackedPoseBatch3DView *previous_poses_3d,
    const TriangulationOptions &options);

PreparedInputs prepare_inputs(
@@ -379,13 +329,6 @@ PreparedInputs prepare_inputs(
    }
    }

    std::vector<CachedCamera> internal_cameras;
    internal_cameras.reserve(cameras.size());
    for (const auto &camera : cameras)
    {
        internal_cameras.push_back(cache_camera(camera));
    }

    std::vector<size_t> core_joint_idx;
    core_joint_idx.reserve(kCoreJoints.size());
    for (const auto &joint : kCoreJoints)
@@ -405,14 +348,47 @@ PreparedInputs prepare_inputs(
            static_cast<size_t>(std::distance(kCoreJoints.begin(), it2))});
    }

    PackedPoseStore2D packed_poses(poses_2d, internal_cameras);
    PackedPoseStore2D packed_poses(poses_2d, cameras);
    return PreparedInputs(
        std::move(internal_cameras),
        cameras,
        std::move(core_joint_idx),
        std::move(core_limbs_idx),
        std::move(packed_poses));
}

void validate_previous_tracks(
    const TrackedPoseBatch3DView &previous_poses_3d,
    const std::vector<std::string> &joint_names)
{
    if (previous_poses_3d.num_persons == 0)
    {
        return;
    }
    if (previous_poses_3d.track_ids == nullptr)
    {
        throw std::invalid_argument("previous_track_ids must not be null.");
    }
    if (previous_poses_3d.data == nullptr)
    {
        throw std::invalid_argument("previous_poses_3d data must not be null.");
    }
    if (previous_poses_3d.num_joints != joint_names.size())
    {
        throw std::invalid_argument("previous_poses_3d must use the same joint count as joint_names.");
    }

    std::unordered_set<int64_t> seen_track_ids;
    seen_track_ids.reserve(previous_poses_3d.num_persons);
    for (size_t person = 0; person < previous_poses_3d.num_persons; ++person)
    {
        const int64_t track_id = previous_poses_3d.track_id(person);
        if (!seen_track_ids.insert(track_id).second)
        {
            throw std::invalid_argument("previous_track_ids must be unique.");
        }
    }
}

std::vector<PairCandidate> build_pair_candidates_from_packed(const PackedPoseStore2D &packed_poses)
{
    std::vector<PairCandidate> pairs;
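The uniqueness check in `validate_previous_tracks` relies on `unordered_set::insert` reporting whether the element was actually inserted. The same idea in a standalone, exception-free form (the function name is hypothetical):

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_set>
#include <vector>

// Returns false as soon as any track id repeats; insert().second is false
// when the id was already present in the set.
bool track_ids_unique(const std::vector<std::int64_t> &ids)
{
    std::unordered_set<std::int64_t> seen;
    seen.reserve(ids.size());
    for (const std::int64_t id : ids)
    {
        if (!seen.insert(id).second)
        {
            return false;   // duplicate found
        }
    }
    return true;
}
```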
@@ -440,8 +416,8 @@ std::vector<PairCandidate> build_pair_candidates_from_packed(const PackedPoseSto
}

PreviousProjectionCache project_previous_poses(
    const PoseBatch3DView &previous_poses_3d,
    const std::vector<CachedCamera> &internal_cameras,
    const TrackedPoseBatch3DView &previous_poses_3d,
    const std::vector<Camera> &internal_cameras,
    const std::vector<size_t> &core_joint_idx)
{
    PreviousProjectionCache cache;
@@ -476,7 +452,7 @@ float calc_pose_score(
    const Pose2D &pose,
    const Pose2D &reference_pose,
    const std::vector<float> &dists,
    const CachedCamera &icam)
    const Camera &icam)
{
    const float min_score = 0.1f;
    std::vector<bool> mask(pose.size(), false);
@@ -494,7 +470,7 @@ float calc_pose_score(
        return 0.0f;
    }

    const float iscale = (icam.cam.width + icam.cam.height) * 0.5f;
    const float iscale = (icam.width + icam.height) * 0.5f;
    std::vector<float> scores = score_projection(pose, reference_pose, dists, mask, iscale);
    const size_t drop_k = static_cast<size_t>(pose.size() * 0.2f);
    if (drop_k > 0)
@@ -512,11 +488,11 @@ float calc_pose_score(

PreviousPoseFilterDebug filter_pairs_with_previous_poses_impl(
    const PackedPoseStore2D &packed_poses,
    const std::vector<CachedCamera> &internal_cameras,
    const std::vector<Camera> &internal_cameras,
    const std::vector<size_t> &core_joint_idx,
    const std::vector<PairCandidate> &pairs,
    const TriangulationOptions &options,
    const PoseBatch3DView &previous_poses_3d)
    const TrackedPoseBatch3DView &previous_poses_3d)
{
    PreviousPoseFilterDebug debug;
    debug.used_previous_poses = true;
@@ -567,6 +543,7 @@ PreviousPoseFilterDebug filter_pairs_with_previous_poses_impl(
    {
        best_match = PreviousPoseMatch {
            static_cast<int>(previous_index),
            previous_poses_3d.track_id(previous_index),
            score_view1,
            score_view2,
            true,
@@ -582,6 +559,7 @@ PreviousPoseFilterDebug filter_pairs_with_previous_poses_impl(
    {
        best_match = PreviousPoseMatch {
            static_cast<int>(previous_index),
            previous_poses_3d.track_id(previous_index),
            score_view1,
            score_view2,
            matched_view1,
@@ -616,16 +594,15 @@ TriangulationTrace triangulate_debug_impl(
    const std::vector<Camera> &cameras,
    const std::array<std::array<float, 3>, 2> &roomparams,
    const std::vector<std::string> &joint_names,
    const PoseBatch3DView *previous_poses_3d,
    const TrackedPoseBatch3DView *previous_poses_3d,
    const TriangulationOptions &options)
{
    TriangulationTrace trace;
    trace.final_poses.num_joints = joint_names.size();

    if (previous_poses_3d != nullptr && previous_poses_3d->num_persons > 0 &&
        previous_poses_3d->num_joints != joint_names.size())
    if (previous_poses_3d != nullptr)
    {
        throw std::invalid_argument("previous_poses_3d must use the same joint count as joint_names.");
        validate_previous_tracks(*previous_poses_3d, joint_names);
    }

    PreparedInputs prepared = prepare_inputs(poses_2d, cameras, joint_names);
@@ -643,7 +620,7 @@ TriangulationTrace triangulate_debug_impl(
    {
        trace.previous_filter = filter_pairs_with_previous_poses_impl(
            prepared.packed_poses,
            prepared.internal_cameras,
            *prepared.internal_cameras,
            prepared.core_joint_idx,
            trace.pairs,
            options,
@@ -663,8 +640,8 @@ TriangulationTrace triangulate_debug_impl(
    for (size_t i = 0; i < active_pairs.size(); ++i)
    {
        const PairCandidate &pair = active_pairs[i];
        const CachedCamera &cam1 = prepared.internal_cameras[static_cast<size_t>(pair.view1)];
        const CachedCamera &cam2 = prepared.internal_cameras[static_cast<size_t>(pair.view2)];
        const Camera &cam1 = (*prepared.internal_cameras)[static_cast<size_t>(pair.view1)];
        const Camera &cam2 = (*prepared.internal_cameras)[static_cast<size_t>(pair.view2)];
        const Pose2D pose1_core = prepared.packed_poses.pose(
            static_cast<size_t>(pair.view1), static_cast<size_t>(pair.person1), prepared.core_joint_idx);
        const Pose2D pose2_core = prepared.packed_poses.pose(
@@ -849,8 +826,8 @@ TriangulationTrace triangulate_debug_impl(
            kept_pose_pairs[i].second.first,
            kept_pose_pairs[i].second.second,
        };
        const CachedCamera &cam1 = prepared.internal_cameras[static_cast<size_t>(pair.view1)];
        const CachedCamera &cam2 = prepared.internal_cameras[static_cast<size_t>(pair.view2)];
        const Camera &cam1 = (*prepared.internal_cameras)[static_cast<size_t>(pair.view1)];
        const Camera &cam2 = (*prepared.internal_cameras)[static_cast<size_t>(pair.view2)];
        Pose2D pose1 = prepared.packed_poses.pose(static_cast<size_t>(pair.view1), static_cast<size_t>(pair.person1));
        Pose2D pose2 = prepared.packed_poses.pose(static_cast<size_t>(pair.view2), static_cast<size_t>(pair.person2));

@@ -893,11 +870,68 @@ TriangulationTrace triangulate_debug_impl(
            poses.push_back(full_pose_buffer[static_cast<size_t>(index)]);
            source_core_indices.push_back(kept_core_indices[static_cast<size_t>(index)]);
        }
        final_poses_3d[i] = merge_group(poses, options.min_match_score, prepared.internal_cameras.size());
        final_poses_3d[i] = merge_group(poses, options.min_match_score, prepared.internal_cameras->size());
        trace.merge.group_proposal_indices.push_back(std::move(source_core_indices));
        trace.merge.merged_poses.push_back(final_poses_3d[i]);
    }

    std::vector<FinalAssociationState> group_associations(groups.size());
    if (previous_poses_3d != nullptr)
    {
        for (size_t group_index = 0; group_index < trace.merge.group_proposal_indices.size(); ++group_index)
        {
            FinalAssociationState association;
            std::set<int> candidate_previous_indices;
            std::set<int64_t> candidate_previous_track_ids;

            for (const int core_index : trace.merge.group_proposal_indices[group_index])
            {
                if (core_index < 0 || static_cast<size_t>(core_index) >= trace.core_proposals.size())
                {
                    continue;
                }

                const int pair_index = trace.core_proposals[static_cast<size_t>(core_index)].pair_index;
                if (pair_index < 0 || static_cast<size_t>(pair_index) >= trace.previous_filter.matches.size())
                {
                    continue;
                }

                const PreviousPoseMatch &match = trace.previous_filter.matches[static_cast<size_t>(pair_index)];
                if (!match.kept || match.previous_pose_index < 0 || match.previous_track_id < 0)
                {
                    continue;
                }

                candidate_previous_indices.insert(match.previous_pose_index);
                candidate_previous_track_ids.insert(match.previous_track_id);
            }

            association.candidate_previous_indices.assign(
                candidate_previous_indices.begin(), candidate_previous_indices.end());
            association.candidate_previous_track_ids.assign(
                candidate_previous_track_ids.begin(), candidate_previous_track_ids.end());

            if (association.candidate_previous_track_ids.empty())
            {
                association.status = AssociationStatus::New;
            }
            else if (association.candidate_previous_track_ids.size() == 1 &&
                     association.candidate_previous_indices.size() == 1)
            {
                association.status = AssociationStatus::Matched;
                association.resolved_previous_index = association.candidate_previous_indices.front();
                association.resolved_previous_track_id = association.candidate_previous_track_ids.front();
            }
            else
            {
                association.status = AssociationStatus::Ambiguous;
            }

            group_associations[group_index] = std::move(association);
        }
    }

    add_extra_joints(final_poses_3d, joint_names);
    filter_poses(final_poses_3d, roomparams, prepared.core_joint_idx, prepared.core_limbs_idx);
    add_missing_joints(final_poses_3d, joint_names, options.min_match_score);
@@ -921,10 +955,20 @@ TriangulationTrace triangulate_debug_impl(

    trace.final_poses.num_persons = valid_persons;
    trace.final_poses.data.resize(valid_persons * trace.final_poses.num_joints * 4);
    trace.association.pose_previous_indices.reserve(valid_persons);
    trace.association.pose_previous_track_ids.reserve(valid_persons);
    trace.association.pose_status.reserve(valid_persons);
    trace.association.pose_candidate_previous_indices.reserve(valid_persons);
    trace.association.pose_candidate_previous_track_ids.reserve(valid_persons);
    trace.final_pose_associations.reserve(valid_persons);

    std::set<int> resolved_previous_indices;
    std::set<int64_t> resolved_previous_track_ids;

    size_t person_index = 0;
    for (const auto &pose : final_poses_3d)
    for (size_t group_index = 0; group_index < final_poses_3d.size(); ++group_index)
    {
        const auto &pose = final_poses_3d[group_index];
        const bool is_valid = std::any_of(
            pose.begin(),
            pose.end(),
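The per-group resolution above reduces to a simple rule over the candidate track-id set collected from the kept pair matches. A simplified standalone sketch (the real code additionally requires the candidate previous-index set to be a singleton before declaring a match):

```cpp
#include <cassert>
#include <cstdint>
#include <set>

enum class Status { Matched, New, Ambiguous };

// No candidates -> New; exactly one -> Matched; several -> Ambiguous.
Status resolve(const std::set<std::int64_t> &candidate_track_ids)
{
    if (candidate_track_ids.empty())
    {
        return Status::New;
    }
    if (candidate_track_ids.size() == 1)
    {
        return Status::Matched;
    }
    return Status::Ambiguous;
}
```

Using `std::set` both deduplicates candidates and keeps them ordered, so the resulting report vectors are deterministic across runs.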
@@ -944,9 +988,70 @@ TriangulationTrace triangulate_debug_impl(
                trace.final_poses.at(person_index, joint, coord) = pose[joint][coord];
            }
        }

        if (previous_poses_3d != nullptr)
        {
            const FinalAssociationState &association = group_associations[group_index];
            trace.association.pose_previous_indices.push_back(association.resolved_previous_index);
            trace.association.pose_previous_track_ids.push_back(association.resolved_previous_track_id);
            trace.association.pose_status.push_back(association.status);
            trace.association.pose_candidate_previous_indices.push_back(association.candidate_previous_indices);
            trace.association.pose_candidate_previous_track_ids.push_back(
                association.candidate_previous_track_ids);

            if (association.status == AssociationStatus::Matched)
            {
                resolved_previous_indices.insert(association.resolved_previous_index);
                resolved_previous_track_ids.insert(association.resolved_previous_track_id);
            }
            else if (association.status == AssociationStatus::New)
            {
                trace.association.new_pose_indices.push_back(static_cast<int>(person_index));
            }
            else if (association.status == AssociationStatus::Ambiguous)
            {
                trace.association.ambiguous_pose_indices.push_back(static_cast<int>(person_index));
            }

            FinalPoseAssociationDebug debug_association;
            debug_association.final_pose_index = static_cast<int>(person_index);
            debug_association.source_core_proposal_indices =
                trace.merge.group_proposal_indices[group_index];
            debug_association.candidate_previous_indices = association.candidate_previous_indices;
            debug_association.candidate_previous_track_ids = association.candidate_previous_track_ids;
            debug_association.resolved_previous_index = association.resolved_previous_index;
            debug_association.resolved_previous_track_id = association.resolved_previous_track_id;
            debug_association.status = association.status;
            debug_association.source_pair_indices.reserve(debug_association.source_core_proposal_indices.size());
            for (const int core_index : debug_association.source_core_proposal_indices)
            {
                if (core_index >= 0 && static_cast<size_t>(core_index) < trace.core_proposals.size())
                {
                    debug_association.source_pair_indices.push_back(
                        trace.core_proposals[static_cast<size_t>(core_index)].pair_index);
                }
            }
            trace.final_pose_associations.push_back(std::move(debug_association));
        }

        ++person_index;
    }

    if (previous_poses_3d != nullptr)
    {
        for (size_t previous_index = 0; previous_index < previous_poses_3d->num_persons; ++previous_index)
        {
            const int previous_index_int = static_cast<int>(previous_index);
            const int64_t track_id = previous_poses_3d->track_id(previous_index);
            if (!resolved_previous_indices.contains(previous_index_int) &&
                !resolved_previous_track_ids.contains(track_id))
            {
                trace.association.unmatched_previous_indices.push_back(previous_index_int);
                trace.association.unmatched_previous_track_ids.push_back(track_id);
            }
        }
    }

    return trace;
}

@@ -954,13 +1059,10 @@ PreviousPoseFilterDebug filter_pairs_with_previous_poses_impl(
    const PoseBatch2DView &poses_2d,
    const std::vector<Camera> &cameras,
    const std::vector<std::string> &joint_names,
    const PoseBatch3DView &previous_poses_3d,
    const TrackedPoseBatch3DView &previous_poses_3d,
    const TriangulationOptions &options)
{
    if (previous_poses_3d.num_persons > 0 && previous_poses_3d.num_joints != joint_names.size())
    {
        throw std::invalid_argument("previous_poses_3d must use the same joint count as joint_names.");
    }
    validate_previous_tracks(previous_poses_3d, joint_names);

    PreparedInputs prepared = prepare_inputs(poses_2d, cameras, joint_names);
    const std::vector<PairCandidate> pairs = build_pair_candidates_from_packed(prepared.packed_poses);
@@ -973,7 +1075,7 @@ PreviousPoseFilterDebug filter_pairs_with_previous_poses_impl(

    return filter_pairs_with_previous_poses_impl(
        prepared.packed_poses,
        prepared.internal_cameras,
        *prepared.internal_cameras,
        prepared.core_joint_idx,
        pairs,
        options,
@@ -982,7 +1084,7 @@ PreviousPoseFilterDebug filter_pairs_with_previous_poses_impl(

std::tuple<std::vector<Pose2D>, std::vector<std::vector<float>>> project_poses(
    const std::vector<Pose3D> &poses_3d,
    const CachedCamera &icam,
    const Camera &icam,
    bool calc_dists)
{
    if (poses_3d.empty())
@@ -998,7 +1100,7 @@ std::tuple<std::vector<Pose2D>, std::vector<std::vector<float>>> project_poses(

    const std::array<std::array<float, 3>, 3> &K = icam.newK;
    const std::array<std::array<float, 3>, 3> &R = icam.invR;
    const std::array<std::array<float, 1>, 3> &T = icam.cam.T;
    const std::array<std::array<float, 1>, 3> &T = icam.T;

    for (size_t i = 0; i < num_persons; ++i)
    {
@@ -1037,7 +1139,7 @@ std::tuple<std::vector<Pose2D>, std::vector<std::vector<float>>> project_poses(
        v = (K[1][0] * x_cam + K[1][1] * y_cam + K[1][2] * z_cam) / z_cam;
|
||||
}
|
||||
|
||||
if (u < 0.0f || u >= icam.cam.width || v < 0.0f || v >= icam.cam.height)
|
||||
if (u < 0.0f || u >= icam.width || v < 0.0f || v >= icam.height)
|
||||
{
|
||||
u = 0.0f;
|
||||
v = 0.0f;
|
||||
@@ -1155,19 +1257,19 @@ std::array<float, 3> mat_mul_vec(
|
||||
return res;
|
||||
}
|
||||
|
||||
std::array<float, 3> calc_ray_dir(const CachedCamera &icam, const std::array<float, 2> &pt)
|
||||
std::array<float, 3> calc_ray_dir(const Camera &icam, const std::array<float, 2> &pt)
|
||||
{
|
||||
// Compute normalized ray direction from the point
|
||||
std::array<float, 3> uv1 = {pt[0], pt[1], 1.0};
|
||||
auto d = mat_mul_vec(icam.cam.R, mat_mul_vec(icam.invK, uv1));
|
||||
auto d = mat_mul_vec(icam.R, mat_mul_vec(icam.invK, uv1));
|
||||
auto ray_dir = normalize(d);
|
||||
|
||||
return ray_dir;
|
||||
}
|
||||
|
||||
std::array<float, 4> triangulate_midpoint(
|
||||
const CachedCamera &icam1,
|
||||
const CachedCamera &icam2,
|
||||
const Camera &icam1,
|
||||
const Camera &icam2,
|
||||
const std::array<float, 2> &pt1,
|
||||
const std::array<float, 2> &pt2)
|
||||
{
|
||||
@@ -1206,8 +1308,8 @@ std::array<float, 4> triangulate_midpoint(
|
||||
std::pair<std::vector<std::array<float, 4>>, float> triangulate_and_score(
|
||||
const std::vector<std::array<float, 3>> &pose1,
|
||||
const std::vector<std::array<float, 3>> &pose2,
|
||||
const CachedCamera &cam1,
|
||||
const CachedCamera &cam2,
|
||||
const Camera &cam1,
|
||||
const Camera &cam2,
|
||||
const std::array<std::array<float, 3>, 2> &roomparams,
|
||||
const std::vector<std::array<size_t, 2>> &core_limbs_idx)
|
||||
{
|
||||
@@ -1307,7 +1409,7 @@ std::pair<std::vector<std::array<float, 4>>, float> triangulate_and_score(
|
||||
auto dists2 = dists2s[0];
|
||||
|
||||
// Calculate scores for each view
|
||||
float iscale = (cam1.cam.width + cam1.cam.height) / 2.0;
|
||||
float iscale = (cam1.width + cam1.height) / 2.0f;
|
||||
std::vector<float> score1 = score_projection(pose1, repro1, dists1, mask, iscale);
|
||||
std::vector<float> score2 = score_projection(pose2, repro2, dists2, mask, iscale);
|
||||
|
||||
@@ -2417,7 +2519,7 @@ std::vector<PairCandidate> build_pair_candidates(const PoseBatch2DView &poses_2d
|
||||
PreviousPoseFilterDebug filter_pairs_with_previous_poses(
|
||||
const PoseBatch2DView &poses_2d,
|
||||
const TriangulationConfig &config,
|
||||
const PoseBatch3DView &previous_poses_3d,
|
||||
const TrackedPoseBatch3DView &previous_poses_3d,
|
||||
const TriangulationOptions *options_override)
|
||||
{
|
||||
const TriangulationOptions &options =
|
||||
@@ -2429,7 +2531,7 @@ PreviousPoseFilterDebug filter_pairs_with_previous_poses(
|
||||
TriangulationTrace triangulate_debug(
|
||||
const PoseBatch2DView &poses_2d,
|
||||
const TriangulationConfig &config,
|
||||
const PoseBatch3DView *previous_poses_3d,
|
||||
const TrackedPoseBatch3DView *previous_poses_3d,
|
||||
const TriangulationOptions *options_override)
|
||||
{
|
||||
const TriangulationOptions &options =
|
||||
@@ -2441,8 +2543,20 @@ TriangulationTrace triangulate_debug(
|
||||
PoseBatch3D triangulate_poses(
|
||||
const PoseBatch2DView &poses_2d,
|
||||
const TriangulationConfig &config,
|
||||
const PoseBatch3DView *previous_poses_3d,
|
||||
const TriangulationOptions *options_override)
|
||||
{
|
||||
return triangulate_debug(poses_2d, config, previous_poses_3d, options_override).final_poses;
|
||||
return triangulate_debug(poses_2d, config, nullptr, options_override).final_poses;
|
||||
}
|
||||
|
||||
TriangulationResult triangulate_with_report(
|
||||
const PoseBatch2DView &poses_2d,
|
||||
const TriangulationConfig &config,
|
||||
const TrackedPoseBatch3DView &previous_poses_3d,
|
||||
const TriangulationOptions *options_override)
|
||||
{
|
||||
TriangulationTrace trace = triangulate_debug(poses_2d, config, &previous_poses_3d, options_override);
|
||||
return TriangulationResult {
|
||||
std::move(trace.final_poses),
|
||||
std::move(trace.association),
|
||||
};
|
||||
}
|
||||
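The `calc_ray_dir` and `triangulate_midpoint` signatures above changed from `CachedCamera` to `Camera`, but the underlying geometry is unchanged: each 2D point is backprojected through `invK` and `R` to a world-space ray, and two rays are intersected at the midpoint of their closest approach. A minimal NumPy sketch of that midpoint method (standalone helper names are hypothetical, not the library's API; rays are assumed non-parallel and directions unit-length):

```python
import numpy as np

def ray_dir(K_inv, R, pt):
    # Backproject pixel (u, v) to a unit ray direction in world coordinates.
    d = R @ (K_inv @ np.array([pt[0], pt[1], 1.0]))
    return d / np.linalg.norm(d)

def triangulate_midpoint(c1, d1, c2, d2):
    # Closest point between rays c1 + t*d1 and c2 + s*d2 (midpoint method).
    b = np.dot(d1, d2)
    w = c2 - c1
    denom = 1.0 - b * b  # zero only for parallel rays, excluded by assumption
    t = (np.dot(w, d1) - b * np.dot(w, d2)) / denom
    s = (b * np.dot(w, d1) - np.dot(w, d2)) / denom
    return 0.5 * ((c1 + t * d1) + (c2 + s * d2))
```

For rays that actually meet at a point, the midpoint coincides with the exact intersection; otherwise it is the point minimizing the distance to both rays.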
+227 -5
@@ -4,10 +4,14 @@ from collections.abc import Sequence
from typing import TYPE_CHECKING

from ._core import (
    AssociationReport,
    AssociationStatus,
    Camera,
    CameraModel,
    FinalPoseAssociationDebug,
    TriangulationConfig,
    TriangulationOptions,
    TriangulationResult,
    CoreProposalDebug,
    FullProposalDebug,
    GroupingDebug,
@@ -17,33 +21,137 @@ from ._core import (
    PreviousPoseMatch,
    ProposalGroupDebug,
    TriangulationTrace,
    build_pair_candidates,
    filter_pairs_with_previous_poses,
    triangulate_debug,
    triangulate_poses,
    build_pair_candidates as _build_pair_candidates,
    filter_pairs_with_previous_poses as _filter_pairs_with_previous_poses,
    make_camera as _make_camera,
    merge_rgbd_views as _merge_rgbd_views,
    triangulate_debug as _triangulate_debug,
    triangulate_poses as _triangulate_poses,
    triangulate_with_report as _triangulate_with_report,
)

if TYPE_CHECKING:
    import numpy as np
    import numpy.typing as npt

    from ._helpers import CameraLike, PoseViewLike
    from ._helpers import (
        CameraLike,
        CameraModelLike,
        DepthImageLike,
        Matrix3x3Like,
        PoseViewLike,
        TranslationVectorLike,
        VectorLike,
    )

    PoseArray2D = npt.NDArray[np.float32]
    PoseArray3D = npt.NDArray[np.float32]
    PoseArray3DByView = npt.NDArray[np.float32]
    PersonCountArray = npt.NDArray[np.uint32]
    TrackIdArray = npt.NDArray[np.int64]


Camera.__doc__ = """Immutable camera calibration with precomputed projection cache fields."""
TriangulationConfig.__doc__ = """Stable scene configuration used for triangulation."""
TriangulationOptions.__doc__ = """Score and grouping thresholds used by triangulation."""
TriangulationResult.__doc__ = """Tracked triangulation output containing poses and association metadata."""
AssociationReport.__doc__ = """Track-association summary for a tracked triangulation call."""
TriangulationTrace.__doc__ = """Full debug trace for triangulation, including pair, grouping, and association stages."""


def convert_cameras(cameras: "Sequence[CameraLike]") -> list[Camera]:
    """Normalize mapping-like camera inputs into immutable bound `Camera` instances."""

    from ._helpers import convert_cameras as _convert_cameras

    return _convert_cameras(cameras)


def make_camera(
    name: str,
    K: "Matrix3x3Like",
    DC: "VectorLike",
    R: "Matrix3x3Like",
    T: "TranslationVectorLike",
    width: int,
    height: int,
    model: "CameraModel | CameraModelLike",
) -> Camera:
    """Create an immutable camera and precompute its cached projection fields.

    `T` may be a flat `[x, y, z]` vector or a nested translation matrix with shape `[1, 3]` or `[3, 1]`.
    """

    from ._helpers import _coerce_camera_model, _coerce_distortion, _coerce_matrix3x3, _coerce_translation

    camera_model = _coerce_camera_model(model)
    return _make_camera(
        name,
        _coerce_matrix3x3(K, "K").tolist(),
        _coerce_distortion(DC, camera_model),
        _coerce_matrix3x3(R, "R").tolist(),
        _coerce_translation(T).tolist(),
        width,
        height,
        camera_model,
    )


def build_pair_candidates(
    poses_2d: "PoseArray2D",
    person_counts: "PersonCountArray",
) -> list[PairCandidate]:
    """Enumerate all cross-view person pairs implied by the padded 2D pose batch."""

    return _build_pair_candidates(poses_2d, person_counts)


def pack_poses_2d(
    views: "Sequence[PoseViewLike]", *, joint_count: int | None = None
) -> "tuple[npt.NDArray[np.float32], npt.NDArray[np.uint32]]":
    """Pack ragged per-view pose detections into the padded tensor expected by the core API."""

    from ._helpers import pack_poses_2d as _pack_poses_2d

    return _pack_poses_2d(views, joint_count=joint_count)


def sample_depth_for_poses(
    poses_2d: "PoseArray2D",
    person_counts: "PersonCountArray",
    depth_images: "Sequence[DepthImageLike]",
    *,
    window_size: int = 7,
) -> "PoseArray3D":
    """Sample aligned depth for visible 2D joints and return `[u, v, d, score]` rows."""

    from ._helpers import sample_depth_for_poses as _sample_depth_for_poses

    return _sample_depth_for_poses(poses_2d, person_counts, depth_images, window_size=window_size)


def apply_depth_offsets(
    poses_uvd: "PoseArray3D",
    joint_names: "Sequence[str]",
) -> "PoseArray3D":
    """Apply the SimpleDepthPose per-joint depth offsets to `[u, v, d, score]` rows."""

    from ._helpers import apply_depth_offsets as _apply_depth_offsets

    return _apply_depth_offsets(poses_uvd, joint_names)


def lift_depth_poses_to_world(
    poses_uvd: "PoseArray3D",
    cameras: "Sequence[CameraLike]",
) -> "PoseArray3DByView":
    """Lift `[u, v, d, score]` joints into world-space `[x, y, z, score]` poses."""

    from ._helpers import lift_depth_poses_to_world as _lift_depth_poses_to_world

    return _lift_depth_poses_to_world(poses_uvd, cameras)


def make_triangulation_config(
    cameras: "Sequence[CameraLike]",
    roomparams: "npt.NDArray[np.generic] | Sequence[Sequence[float]]",
@@ -52,6 +160,8 @@ def make_triangulation_config(
    min_match_score: float = 0.95,
    min_group_size: int = 1,
) -> TriangulationConfig:
    """Build a triangulation config from cameras, room parameters, and joint names."""

    from ._helpers import make_triangulation_config as _make_triangulation_config

    return _make_triangulation_config(
@@ -63,11 +173,117 @@ def make_triangulation_config(
    )


def filter_pairs_with_previous_poses(
    poses_2d: "PoseArray2D",
    person_counts: "PersonCountArray",
    config: TriangulationConfig,
    previous_poses_3d: "PoseArray3D",
    previous_track_ids: "TrackIdArray",
) -> PreviousPoseFilterDebug:
    """Filter raw cross-view pairs against caller-owned previous 3D tracks."""

    return _filter_pairs_with_previous_poses(
        poses_2d,
        person_counts,
        config,
        previous_poses_3d,
        previous_track_ids,
    )


def triangulate_debug(
    poses_2d: "PoseArray2D",
    person_counts: "PersonCountArray",
    config: TriangulationConfig,
    previous_poses_3d: "PoseArray3D | None" = None,
    previous_track_ids: "TrackIdArray | None" = None,
) -> TriangulationTrace:
    """Run triangulation and return the full debug trace.

    If previous-frame 3D tracks are supplied, `previous_track_ids` must be provided as an
    aligned `int64` array of the same length.
    """

    if previous_poses_3d is None:
        return _triangulate_debug(poses_2d, person_counts, config)
    if previous_track_ids is None:
        raise ValueError("previous_track_ids is required when previous_poses_3d is provided.")
    return _triangulate_debug(poses_2d, person_counts, config, previous_poses_3d, previous_track_ids)


def triangulate_poses(
    poses_2d: "PoseArray2D",
    person_counts: "PersonCountArray",
    config: TriangulationConfig,
) -> "PoseArray3D":
    """Triangulate a frame into anonymous 3D poses without track association."""

    return _triangulate_poses(poses_2d, person_counts, config)


def merge_rgbd_views(
    poses_3d: "PoseArray3DByView",
    person_counts: "PersonCountArray",
    config: TriangulationConfig,
    *,
    max_distance: float = 0.5,
) -> "PoseArray3D":
    """Merge per-view world-space RGBD pose proposals into final 3D poses."""
    return _merge_rgbd_views(poses_3d, person_counts, config, float(max_distance))


def reconstruct_rgbd(
    poses_2d: "PoseArray2D",
    person_counts: "PersonCountArray",
    depth_images: "Sequence[DepthImageLike]",
    config: TriangulationConfig,
    *,
    use_depth_offsets: bool = True,
    window_size: int = 7,
    max_distance: float = 0.5,
) -> "PoseArray3D":
    """Reconstruct per-frame RGBD poses from calibrated detections and aligned depth images."""

    poses_uvd = sample_depth_for_poses(poses_2d, person_counts, depth_images, window_size=window_size)
    if use_depth_offsets:
        poses_uvd = apply_depth_offsets(poses_uvd, config.joint_names)
    poses_3d = lift_depth_poses_to_world(poses_uvd, config.cameras)
    return merge_rgbd_views(poses_3d, person_counts, config, max_distance=max_distance)


def triangulate_with_report(
    poses_2d: "PoseArray2D",
    person_counts: "PersonCountArray",
    config: TriangulationConfig,
    previous_poses_3d: "PoseArray3D",
    previous_track_ids: "TrackIdArray",
) -> TriangulationResult:
    """Triangulate a frame and return caller-owned track association results.

    The previous-frame poses and IDs are treated as immutable caller state. The returned
    association report classifies each final pose as matched, new, or ambiguous, and lists
    unmatched previous tracks as deletion candidates.
    """

    return _triangulate_with_report(
        poses_2d,
        person_counts,
        config,
        previous_poses_3d,
        previous_track_ids,
    )


__all__ = [
    "Camera",
    "CameraModel",
    "AssociationReport",
    "AssociationStatus",
    "apply_depth_offsets",
    "FinalPoseAssociationDebug",
    "TriangulationConfig",
    "TriangulationOptions",
    "TriangulationResult",
    "CoreProposalDebug",
    "FullProposalDebug",
    "GroupingDebug",
@@ -80,8 +296,14 @@ __all__ = [
    "build_pair_candidates",
    "convert_cameras",
    "filter_pairs_with_previous_poses",
    "lift_depth_poses_to_world",
    "make_camera",
    "make_triangulation_config",
    "merge_rgbd_views",
    "pack_poses_2d",
    "reconstruct_rgbd",
    "sample_depth_for_poses",
    "triangulate_debug",
    "triangulate_poses",
    "triangulate_with_report",
]
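`sample_depth_for_poses` above reads an aligned depth image in a small window around each visible 2D joint. A toy sketch of one such windowed sampling rule, assuming a median-over-valid-pixels policy (the actual `_helpers` implementation may use a different statistic; the function name here is hypothetical):

```python
import numpy as np

def sample_depth(depth, u, v, window_size=7):
    # Take the median of valid (> 0) depth values in a window centered on (u, v).
    h, w = depth.shape
    r = window_size // 2
    ui, vi = int(round(u)), int(round(v))
    patch = depth[max(vi - r, 0):min(vi + r + 1, h),
                  max(ui - r, 0):min(ui + r + 1, w)]
    valid = patch[patch > 0]
    return float(np.median(valid)) if valid.size else 0.0
```

A robust statistic over the window keeps single dropped-out or noisy depth pixels from corrupting the lifted joint.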
@@ -0,0 +1,199 @@
from collections.abc import Sequence
from typing import TypeAlias, overload

import numpy as np
import numpy.typing as npt

from ._core import (
    AssociationReport as AssociationReport,
    AssociationStatus as AssociationStatus,
    Camera as Camera,
    CameraModel as CameraModel,
    CoreProposalDebug as CoreProposalDebug,
    FinalPoseAssociationDebug as FinalPoseAssociationDebug,
    FullProposalDebug as FullProposalDebug,
    GroupingDebug as GroupingDebug,
    MergeDebug as MergeDebug,
    PairCandidate as PairCandidate,
    PreviousPoseFilterDebug as PreviousPoseFilterDebug,
    PreviousPoseMatch as PreviousPoseMatch,
    ProposalGroupDebug as ProposalGroupDebug,
    TriangulationConfig as TriangulationConfig,
    TriangulationOptions as TriangulationOptions,
    TriangulationResult as TriangulationResult,
    TriangulationTrace as TriangulationTrace,
)
from ._helpers import (
    CameraLike,
    CameraModelLike,
    DepthImageLike,
    Matrix3x3Like,
    PoseViewLike,
    RoomParamsLike,
    TranslationVectorLike,
    VectorLike,
)

PoseArray2D: TypeAlias = npt.NDArray[np.float32]
PoseArray3D: TypeAlias = npt.NDArray[np.float32]
PoseArray3DByView: TypeAlias = npt.NDArray[np.float32]
PersonCountArray: TypeAlias = npt.NDArray[np.uint32]
TrackIdArray: TypeAlias = npt.NDArray[np.int64]


def convert_cameras(cameras: Sequence[CameraLike]) -> list[Camera]: ...


def make_camera(
    name: str,
    K: Matrix3x3Like,
    DC: VectorLike,
    R: Matrix3x3Like,
    T: TranslationVectorLike,
    width: int,
    height: int,
    model: CameraModel | CameraModelLike,
) -> Camera: ...


def build_pair_candidates(
    poses_2d: PoseArray2D,
    person_counts: PersonCountArray,
) -> list[PairCandidate]: ...


def pack_poses_2d(
    views: Sequence[PoseViewLike],
    *,
    joint_count: int | None = None,
) -> tuple[npt.NDArray[np.float32], npt.NDArray[np.uint32]]: ...


def sample_depth_for_poses(
    poses_2d: PoseArray2D,
    person_counts: PersonCountArray,
    depth_images: Sequence[DepthImageLike],
    *,
    window_size: int = 7,
) -> PoseArray3D: ...


def apply_depth_offsets(
    poses_uvd: PoseArray3D,
    joint_names: Sequence[str],
) -> PoseArray3D: ...


def lift_depth_poses_to_world(
    poses_uvd: PoseArray3D,
    cameras: Sequence[CameraLike],
) -> PoseArray3DByView: ...


def make_triangulation_config(
    cameras: Sequence[CameraLike],
    roomparams: RoomParamsLike,
    joint_names: Sequence[str],
    *,
    min_match_score: float = 0.95,
    min_group_size: int = 1,
) -> TriangulationConfig: ...


def filter_pairs_with_previous_poses(
    poses_2d: PoseArray2D,
    person_counts: PersonCountArray,
    config: TriangulationConfig,
    previous_poses_3d: PoseArray3D,
    previous_track_ids: TrackIdArray,
) -> PreviousPoseFilterDebug: ...


@overload
def triangulate_debug(
    poses_2d: PoseArray2D,
    person_counts: PersonCountArray,
    config: TriangulationConfig,
) -> TriangulationTrace: ...


@overload
def triangulate_debug(
    poses_2d: PoseArray2D,
    person_counts: PersonCountArray,
    config: TriangulationConfig,
    previous_poses_3d: PoseArray3D,
    previous_track_ids: TrackIdArray,
) -> TriangulationTrace: ...


def triangulate_poses(
    poses_2d: PoseArray2D,
    person_counts: PersonCountArray,
    config: TriangulationConfig,
) -> PoseArray3D: ...


def merge_rgbd_views(
    poses_3d: PoseArray3DByView,
    person_counts: PersonCountArray,
    config: TriangulationConfig,
    *,
    max_distance: float = 0.5,
) -> PoseArray3D: ...


def reconstruct_rgbd(
    poses_2d: PoseArray2D,
    person_counts: PersonCountArray,
    depth_images: Sequence[DepthImageLike],
    config: TriangulationConfig,
    *,
    use_depth_offsets: bool = True,
    window_size: int = 7,
    max_distance: float = 0.5,
) -> PoseArray3D: ...


def triangulate_with_report(
    poses_2d: PoseArray2D,
    person_counts: PersonCountArray,
    config: TriangulationConfig,
    previous_poses_3d: PoseArray3D,
    previous_track_ids: TrackIdArray,
) -> TriangulationResult: ...


__all__ = [
    "Camera",
    "CameraModel",
    "AssociationReport",
    "AssociationStatus",
    "apply_depth_offsets",
    "FinalPoseAssociationDebug",
    "TriangulationConfig",
    "TriangulationOptions",
    "TriangulationResult",
    "CoreProposalDebug",
    "FullProposalDebug",
    "GroupingDebug",
    "MergeDebug",
    "PairCandidate",
    "PreviousPoseFilterDebug",
    "PreviousPoseMatch",
    "ProposalGroupDebug",
    "TriangulationTrace",
    "build_pair_candidates",
    "convert_cameras",
    "filter_pairs_with_previous_poses",
    "lift_depth_poses_to_world",
    "make_camera",
    "make_triangulation_config",
    "merge_rgbd_views",
    "pack_poses_2d",
    "reconstruct_rgbd",
    "sample_depth_for_poses",
    "triangulate_debug",
    "triangulate_poses",
    "triangulate_with_report",
]
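The stubs above type `pack_poses_2d` as taking ragged per-view detections and returning a single padded `float32` tensor plus a `uint32` person count per view. A minimal sketch of that padding step, assuming `[J, 3]` (`u`, `v`, `score`) rows per person and a `[num_views, max_persons, J, 3]` output layout (the helper name and exact layout are assumptions, not the library's implementation):

```python
import numpy as np

def pack_views(views, joint_count=None):
    # Pad ragged per-view lists of [J, 3] poses into one dense
    # [num_views, max_persons, J, 3] float32 tensor, zero-filling empty slots,
    # and report how many real persons each view contributes.
    J = joint_count or next(p.shape[0] for v in views for p in v)
    counts = np.array([len(v) for v in views], dtype=np.uint32)
    max_persons = int(counts.max()) if len(counts) else 0
    packed = np.zeros((len(views), max_persons, J, 3), dtype=np.float32)
    for vi, persons in enumerate(views):
        for pi, pose in enumerate(persons):
            packed[vi, pi] = pose
    return packed, counts
```

The per-view counts let the core API distinguish real detections from zero padding without a separate mask tensor.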
@@ -0,0 +1,530 @@
from collections.abc import Sequence
import enum
from typing import Annotated, overload

import numpy
from numpy.typing import NDArray


class CameraModel(enum.Enum):
    PINHOLE = 0

    FISHEYE = 1

class Camera:
    """Immutable camera calibration with precomputed projection cache fields."""

    @property
    def name(self) -> str: ...

    @property
    def K(self) -> list[list[float]]: ...

    @property
    def DC(self) -> list[float]: ...

    @property
    def R(self) -> list[list[float]]: ...

    @property
    def T(self) -> list[list[float]]: ...

    @property
    def width(self) -> int: ...

    @property
    def height(self) -> int: ...

    @property
    def model(self) -> CameraModel: ...

    @property
    def invR(self) -> list[list[float]]: ...

    @property
    def center(self) -> list[float]: ...

    @property
    def newK(self) -> list[list[float]]: ...

    @property
    def invK(self) -> list[list[float]]: ...

    def __repr__(self) -> str: ...

class TriangulationOptions:
    """Score and grouping thresholds used by triangulation."""

    def __init__(self) -> None: ...

    @property
    def min_match_score(self) -> float: ...

    @min_match_score.setter
    def min_match_score(self, arg: float, /) -> None: ...

    @property
    def min_group_size(self) -> int: ...

    @min_group_size.setter
    def min_group_size(self, arg: int, /) -> None: ...

class TriangulationConfig:
    """Stable scene configuration used for triangulation."""

    def __init__(self) -> None: ...

    @property
    def cameras(self) -> list[Camera]: ...

    @cameras.setter
    def cameras(self, arg: Sequence[Camera], /) -> None: ...

    @property
    def roomparams(self) -> list[list[float]]: ...

    @roomparams.setter
    def roomparams(self, arg: Sequence[Sequence[float]], /) -> None: ...

    @property
    def joint_names(self) -> list[str]: ...

    @joint_names.setter
    def joint_names(self, arg: Sequence[str], /) -> None: ...

    @property
    def options(self) -> TriangulationOptions: ...

    @options.setter
    def options(self, arg: TriangulationOptions, /) -> None: ...

class PairCandidate:
    def __init__(self) -> None: ...

    @property
    def view1(self) -> int: ...

    @view1.setter
    def view1(self, arg: int, /) -> None: ...

    @property
    def view2(self) -> int: ...

    @view2.setter
    def view2(self, arg: int, /) -> None: ...

    @property
    def person1(self) -> int: ...

    @person1.setter
    def person1(self, arg: int, /) -> None: ...

    @property
    def person2(self) -> int: ...

    @person2.setter
    def person2(self, arg: int, /) -> None: ...

    @property
    def global_person1(self) -> int: ...

    @global_person1.setter
    def global_person1(self, arg: int, /) -> None: ...

    @property
    def global_person2(self) -> int: ...

    @global_person2.setter
    def global_person2(self, arg: int, /) -> None: ...

class PreviousPoseMatch:
    def __init__(self) -> None: ...

    @property
    def previous_pose_index(self) -> int: ...

    @previous_pose_index.setter
    def previous_pose_index(self, arg: int, /) -> None: ...

    @property
    def previous_track_id(self) -> int: ...

    @previous_track_id.setter
    def previous_track_id(self, arg: int, /) -> None: ...

    @property
    def score_view1(self) -> float: ...

    @score_view1.setter
    def score_view1(self, arg: float, /) -> None: ...

    @property
    def score_view2(self) -> float: ...

    @score_view2.setter
    def score_view2(self, arg: float, /) -> None: ...

    @property
    def matched_view1(self) -> bool: ...

    @matched_view1.setter
    def matched_view1(self, arg: bool, /) -> None: ...

    @property
    def matched_view2(self) -> bool: ...

    @matched_view2.setter
    def matched_view2(self, arg: bool, /) -> None: ...

    @property
    def kept(self) -> bool: ...

    @kept.setter
    def kept(self, arg: bool, /) -> None: ...

    @property
    def decision(self) -> str: ...

    @decision.setter
    def decision(self, arg: str, /) -> None: ...

class PreviousPoseFilterDebug:
    def __init__(self) -> None: ...

    @property
    def used_previous_poses(self) -> bool: ...

    @used_previous_poses.setter
    def used_previous_poses(self, arg: bool, /) -> None: ...

    @property
    def matches(self) -> list[PreviousPoseMatch]: ...

    @matches.setter
    def matches(self, arg: Sequence[PreviousPoseMatch], /) -> None: ...

    @property
    def kept_pair_indices(self) -> list[int]: ...

    @kept_pair_indices.setter
    def kept_pair_indices(self, arg: Sequence[int], /) -> None: ...

    @property
    def kept_pairs(self) -> list[PairCandidate]: ...

    @kept_pairs.setter
    def kept_pairs(self, arg: Sequence[PairCandidate], /) -> None: ...

class CoreProposalDebug:
    def __init__(self) -> None: ...

    @property
    def pair_index(self) -> int: ...

    @pair_index.setter
    def pair_index(self, arg: int, /) -> None: ...

    @property
    def pair(self) -> PairCandidate: ...

    @pair.setter
    def pair(self, arg: PairCandidate, /) -> None: ...

    @property
    def score(self) -> float: ...

    @score.setter
    def score(self, arg: float, /) -> None: ...

    @property
    def kept(self) -> bool: ...

    @kept.setter
    def kept(self, arg: bool, /) -> None: ...

    @property
    def drop_reason(self) -> str: ...

    @drop_reason.setter
    def drop_reason(self, arg: str, /) -> None: ...

    @property
    def pose_3d(self) -> Annotated[NDArray[numpy.float32], dict(shape=(None, 4), order='C')]: ...

class ProposalGroupDebug:
    def __init__(self) -> None: ...

    @property
    def center(self) -> list[float]: ...

    @center.setter
    def center(self, arg: Sequence[float], /) -> None: ...

    @property
    def proposal_indices(self) -> list[int]: ...

    @proposal_indices.setter
    def proposal_indices(self, arg: Sequence[int], /) -> None: ...

    @property
    def pose_3d(self) -> Annotated[NDArray[numpy.float32], dict(shape=(None, 4), order='C')]: ...

class GroupingDebug:
    def __init__(self) -> None: ...

    @property
    def initial_groups(self) -> list[ProposalGroupDebug]: ...

    @initial_groups.setter
    def initial_groups(self, arg: Sequence[ProposalGroupDebug], /) -> None: ...

    @property
    def duplicate_pair_drops(self) -> list[int]: ...

    @duplicate_pair_drops.setter
    def duplicate_pair_drops(self, arg: Sequence[int], /) -> None: ...

    @property
    def groups(self) -> list[ProposalGroupDebug]: ...

    @groups.setter
    def groups(self, arg: Sequence[ProposalGroupDebug], /) -> None: ...

class FullProposalDebug:
    def __init__(self) -> None: ...

    @property
    def source_core_proposal_index(self) -> int: ...

    @source_core_proposal_index.setter
    def source_core_proposal_index(self, arg: int, /) -> None: ...

    @property
    def pair(self) -> PairCandidate: ...

    @pair.setter
    def pair(self, arg: PairCandidate, /) -> None: ...

    @property
    def pose_3d(self) -> Annotated[NDArray[numpy.float32], dict(shape=(None, 4), order='C')]: ...

class MergeDebug:
    def __init__(self) -> None: ...

    @property
    def group_proposal_indices(self) -> list[list[int]]: ...

    @group_proposal_indices.setter
    def group_proposal_indices(self, arg: Sequence[Sequence[int]], /) -> None: ...

    @property
    def merged_poses(self) -> Annotated[NDArray[numpy.float32], dict(shape=(None, None, 4), order='C')]: ...

class AssociationStatus(enum.Enum):
    MATCHED = 0

    NEW = 1

    AMBIGUOUS = 2

class AssociationReport:
    """Track-association summary for a tracked triangulation call."""

    def __init__(self) -> None: ...

    @property
    def pose_previous_indices(self) -> list[int]: ...

    @pose_previous_indices.setter
    def pose_previous_indices(self, arg: Sequence[int], /) -> None: ...

    @property
    def pose_previous_track_ids(self) -> list[int]: ...

    @pose_previous_track_ids.setter
    def pose_previous_track_ids(self, arg: Sequence[int], /) -> None: ...

    @property
    def pose_status(self) -> list[AssociationStatus]: ...

    @pose_status.setter
    def pose_status(self, arg: Sequence[AssociationStatus], /) -> None: ...

    @property
    def pose_candidate_previous_indices(self) -> list[list[int]]: ...

    @pose_candidate_previous_indices.setter
    def pose_candidate_previous_indices(self, arg: Sequence[Sequence[int]], /) -> None: ...

    @property
    def pose_candidate_previous_track_ids(self) -> list[list[int]]: ...

    @pose_candidate_previous_track_ids.setter
    def pose_candidate_previous_track_ids(self, arg: Sequence[Sequence[int]], /) -> None: ...

    @property
    def unmatched_previous_indices(self) -> list[int]: ...

    @unmatched_previous_indices.setter
    def unmatched_previous_indices(self, arg: Sequence[int], /) -> None: ...

    @property
    def unmatched_previous_track_ids(self) -> list[int]: ...

    @unmatched_previous_track_ids.setter
    def unmatched_previous_track_ids(self, arg: Sequence[int], /) -> None: ...

    @property
    def new_pose_indices(self) -> list[int]: ...

    @new_pose_indices.setter
    def new_pose_indices(self, arg: Sequence[int], /) -> None: ...

    @property
    def ambiguous_pose_indices(self) -> list[int]: ...

    @ambiguous_pose_indices.setter
    def ambiguous_pose_indices(self, arg: Sequence[int], /) -> None: ...

class FinalPoseAssociationDebug:
    def __init__(self) -> None: ...

    @property
    def final_pose_index(self) -> int: ...

    @final_pose_index.setter
    def final_pose_index(self, arg: int, /) -> None: ...

    @property
    def source_core_proposal_indices(self) -> list[int]: ...

    @source_core_proposal_indices.setter
    def source_core_proposal_indices(self, arg: Sequence[int], /) -> None: ...
|
||||
|
||||
@property
|
||||
def source_pair_indices(self) -> list[int]: ...
|
||||
|
||||
@source_pair_indices.setter
|
||||
def source_pair_indices(self, arg: Sequence[int], /) -> None: ...
|
||||
|
||||
@property
|
||||
def candidate_previous_indices(self) -> list[int]: ...
|
||||
|
||||
@candidate_previous_indices.setter
|
||||
def candidate_previous_indices(self, arg: Sequence[int], /) -> None: ...
|
||||
|
||||
@property
|
||||
def candidate_previous_track_ids(self) -> list[int]: ...
|
||||
|
||||
@candidate_previous_track_ids.setter
|
||||
def candidate_previous_track_ids(self, arg: Sequence[int], /) -> None: ...
|
||||
|
||||
@property
|
||||
def resolved_previous_index(self) -> int: ...
|
||||
|
||||
@resolved_previous_index.setter
|
||||
def resolved_previous_index(self, arg: int, /) -> None: ...
|
||||
|
||||
@property
|
||||
def resolved_previous_track_id(self) -> int: ...
|
||||
|
||||
@resolved_previous_track_id.setter
|
||||
def resolved_previous_track_id(self, arg: int, /) -> None: ...
|
||||
|
||||
@property
|
||||
def status(self) -> AssociationStatus: ...
|
||||
|
||||
@status.setter
|
||||
def status(self, arg: AssociationStatus, /) -> None: ...
|
||||
|
||||
class TriangulationTrace:
|
||||
"""
|
||||
Full debug trace for triangulation, including pair, grouping, and association stages.
|
||||
"""
|
||||
|
||||
def __init__(self) -> None: ...
|
||||
|
||||
@property
|
||||
def pairs(self) -> list[PairCandidate]: ...
|
||||
|
||||
@pairs.setter
|
||||
def pairs(self, arg: Sequence[PairCandidate], /) -> None: ...
|
||||
|
||||
@property
|
||||
def previous_filter(self) -> PreviousPoseFilterDebug: ...
|
||||
|
||||
@previous_filter.setter
|
||||
def previous_filter(self, arg: PreviousPoseFilterDebug, /) -> None: ...
|
||||
|
||||
@property
|
||||
def core_proposals(self) -> list[CoreProposalDebug]: ...
|
||||
|
||||
@core_proposals.setter
|
||||
def core_proposals(self, arg: Sequence[CoreProposalDebug], /) -> None: ...
|
||||
|
||||
@property
|
||||
def grouping(self) -> GroupingDebug: ...
|
||||
|
||||
@grouping.setter
|
||||
def grouping(self, arg: GroupingDebug, /) -> None: ...
|
||||
|
||||
@property
|
||||
def full_proposals(self) -> list[FullProposalDebug]: ...
|
||||
|
||||
@full_proposals.setter
|
||||
def full_proposals(self, arg: Sequence[FullProposalDebug], /) -> None: ...
|
||||
|
||||
@property
|
||||
def merge(self) -> MergeDebug: ...
|
||||
|
||||
@merge.setter
|
||||
def merge(self, arg: MergeDebug, /) -> None: ...
|
||||
|
||||
@property
|
||||
def association(self) -> AssociationReport: ...
|
||||
|
||||
@association.setter
|
||||
def association(self, arg: AssociationReport, /) -> None: ...
|
||||
|
||||
@property
|
||||
def final_pose_associations(self) -> list[FinalPoseAssociationDebug]: ...
|
||||
|
||||
@final_pose_associations.setter
|
||||
def final_pose_associations(self, arg: Sequence[FinalPoseAssociationDebug], /) -> None: ...
|
||||
|
||||
@property
|
||||
def final_poses(self) -> Annotated[NDArray[numpy.float32], dict(shape=(None, None, 4), order='C')]: ...
|
||||
|
||||
class TriangulationResult:
|
||||
"""
|
||||
Tracked triangulation output containing poses and association metadata.
|
||||
"""
|
||||
|
||||
def __init__(self) -> None: ...
|
||||
|
||||
@property
|
||||
def association(self) -> AssociationReport: ...
|
||||
|
||||
@association.setter
|
||||
def association(self, arg: AssociationReport, /) -> None: ...
|
||||
|
||||
@property
|
||||
def poses_3d(self) -> Annotated[NDArray[numpy.float32], dict(shape=(None, None, 4), order='C')]: ...
|
||||
|
||||
def make_camera(name: str, K: Sequence[Sequence[float]], DC: Sequence[float], R: Sequence[Sequence[float]], T: Sequence[Sequence[float]], width: int, height: int, model: CameraModel) -> Camera: ...
|
||||
|
||||
def build_pair_candidates(poses_2d: Annotated[NDArray[numpy.float32], dict(shape=(None, None, None, 3), order='C', writable=False)], person_counts: Annotated[NDArray[numpy.uint32], dict(shape=(None,), order='C', writable=False)]) -> list[PairCandidate]: ...
|
||||
|
||||
def filter_pairs_with_previous_poses(poses_2d: Annotated[NDArray[numpy.float32], dict(shape=(None, None, None, 3), order='C', writable=False)], person_counts: Annotated[NDArray[numpy.uint32], dict(shape=(None,), order='C', writable=False)], config: TriangulationConfig, previous_poses_3d: Annotated[NDArray[numpy.float32], dict(shape=(None, None, 4), order='C', writable=False)], previous_track_ids: Annotated[NDArray[numpy.int64], dict(shape=(None,), order='C', writable=False)]) -> PreviousPoseFilterDebug: ...
|
||||
|
||||
@overload
|
||||
def triangulate_debug(poses_2d: Annotated[NDArray[numpy.float32], dict(shape=(None, None, None, 3), order='C', writable=False)], person_counts: Annotated[NDArray[numpy.uint32], dict(shape=(None,), order='C', writable=False)], config: TriangulationConfig) -> TriangulationTrace: ...
|
||||
|
||||
@overload
|
||||
def triangulate_debug(poses_2d: Annotated[NDArray[numpy.float32], dict(shape=(None, None, None, 3), order='C', writable=False)], person_counts: Annotated[NDArray[numpy.uint32], dict(shape=(None,), order='C', writable=False)], config: TriangulationConfig, previous_poses_3d: Annotated[NDArray[numpy.float32], dict(shape=(None, None, 4), order='C', writable=False)], previous_track_ids: Annotated[NDArray[numpy.int64], dict(shape=(None,), order='C', writable=False)]) -> TriangulationTrace: ...
|
||||
|
||||
def triangulate_poses(poses_2d: Annotated[NDArray[numpy.float32], dict(shape=(None, None, None, 3), order='C', writable=False)], person_counts: Annotated[NDArray[numpy.uint32], dict(shape=(None,), order='C', writable=False)], config: TriangulationConfig) -> Annotated[NDArray[numpy.float32], dict(shape=(None, None, 4), order='C')]: ...
|
||||
|
||||
def merge_rgbd_views(poses_3d: Annotated[NDArray[numpy.float32], dict(shape=(None, None, None, 4), order='C', writable=False)], person_counts: Annotated[NDArray[numpy.uint32], dict(shape=(None,), order='C', writable=False)], config: TriangulationConfig, max_distance: float = 0.5) -> Annotated[NDArray[numpy.float32], dict(shape=(None, None, 4), order='C')]: ...
|
||||
|
||||
def triangulate_with_report(poses_2d: Annotated[NDArray[numpy.float32], dict(shape=(None, None, None, 3), order='C', writable=False)], person_counts: Annotated[NDArray[numpy.uint32], dict(shape=(None,), order='C', writable=False)], config: TriangulationConfig, previous_poses_3d: Annotated[NDArray[numpy.float32], dict(shape=(None, None, 4), order='C', writable=False)], previous_track_ids: Annotated[NDArray[numpy.int64], dict(shape=(None,), order='C', writable=False)]) -> TriangulationResult: ...
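The tracked entry point above returns a `TriangulationResult` whose `AssociationReport` links each output pose to a previous track via `pose_previous_track_ids` (`-1` for poses flagged NEW or AMBIGUOUS). A minimal sketch of how a caller might carry track ids across frames from that report; the `FakeReport` dataclass is a hypothetical stand-in mirroring only the fields used here, not the real bound object:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for the AssociationReport fields consumed below;
# the real object comes from rpt.triangulate_with_report.
@dataclass
class FakeReport:
    pose_previous_track_ids: list[int] = field(default_factory=list)
    new_pose_indices: list[int] = field(default_factory=list)
    ambiguous_pose_indices: list[int] = field(default_factory=list)

def assign_track_ids(report: FakeReport, next_track_id: int) -> tuple[list[int], int]:
    """Carry matched ids forward; mint fresh ids where the report gave -1."""
    track_ids: list[int] = []
    for previous_id in report.pose_previous_track_ids:
        if previous_id >= 0:
            track_ids.append(previous_id)    # matched: keep the old track id
        else:
            track_ids.append(next_track_id)  # new or ambiguous: start a track
            next_track_id += 1
    return track_ids, next_track_id

report = FakeReport(pose_previous_track_ids=[100, -1, 102], new_pose_indices=[1])
ids, next_id = assign_track_ids(report, next_track_id=200)
print(ids, next_id)  # [100, 200, 102] 201
```

The resolved ids would then be fed back as `previous_track_ids` on the next call.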
+214
-18
@@ -1,33 +1,67 @@
from __future__ import annotations

from collections.abc import Sequence
from typing import Literal, TypeAlias, TypedDict

from jaxtyping import Float
import numpy as np
import numpy.typing as npt

from ._core import Camera, CameraModel, TriangulationConfig, TriangulationOptions
from ._core import Camera, CameraModel, TriangulationConfig, TriangulationOptions, make_camera as _make_camera

Matrix3x3Like: TypeAlias = Sequence[Sequence[float]]
VectorLike: TypeAlias = Sequence[float]
Matrix3x3: TypeAlias = Float[np.ndarray, "3 3"]
DistortionVector: TypeAlias = Float[np.ndarray, "coeffs"]
TranslationVector: TypeAlias = Float[np.ndarray, "3"]
TranslationColumn: TypeAlias = Float[np.ndarray, "3 1"]
TranslationRow: TypeAlias = Float[np.ndarray, "1 3"]
Matrix3x3Like: TypeAlias = Matrix3x3 | Sequence[Sequence[float]]
VectorLike: TypeAlias = DistortionVector | Sequence[float]
TranslationVectorLike: TypeAlias = (
    TranslationVector | TranslationColumn | TranslationRow | Sequence[float] | Sequence[Sequence[float]]
)
RoomParamsLike: TypeAlias = npt.NDArray[np.generic] | Sequence[Sequence[float]]
PoseViewLike: TypeAlias = npt.NDArray[np.generic] | Sequence[Sequence[Sequence[float]]] | Sequence[Sequence[float]]
DepthImageLike: TypeAlias = npt.NDArray[np.generic] | Sequence[Sequence[float]]


class CameraDict(TypedDict, total=False):
class _CameraDictRequired(TypedDict):
    name: str
    K: Matrix3x3Like
    DC: VectorLike
    R: Matrix3x3Like
    T: Sequence[Sequence[float]]
    T: TranslationVectorLike
    width: int
    height: int


class CameraDict(_CameraDictRequired, total=False):
    type: Literal["pinhole", "fisheye"]
    model: Literal["pinhole", "fisheye"] | CameraModel


CameraModelLike: TypeAlias = CameraModel | Literal["pinhole", "fisheye"]
CameraLike = Camera | CameraDict
CameraLike: TypeAlias = Camera | CameraDict

DEFAULT_DEPTH_OFFSETS_METERS: dict[str, float] = {
    "nose": 0.005,
    "eye_left": 0.005,
    "eye_right": 0.005,
    "ear_left": 0.005,
    "ear_right": 0.005,
    "shoulder_left": 0.03,
    "shoulder_right": 0.03,
    "elbow_left": 0.02,
    "elbow_right": 0.02,
    "wrist_left": 0.01,
    "wrist_right": 0.01,
    "hip_left": 0.04,
    "hip_right": 0.04,
    "knee_left": 0.03,
    "knee_right": 0.03,
    "ankle_left": 0.03,
    "ankle_right": 0.03,
    "hip_middle": 0.04,
    "shoulder_middle": 0.03,
    "head": 0.0,
}


def _coerce_camera_model(model: CameraModelLike) -> CameraModel:
@@ -55,6 +89,33 @@ def _coerce_distortion(distortion: VectorLike, camera_model: CameraModel) -> tup
    return values


def _coerce_matrix3x3(matrix: object, field_name: str) -> Matrix3x3:
    array = np.asarray(matrix, dtype=np.float32)
    if array.shape != (3, 3):
        raise ValueError(f"{field_name} must have shape [3, 3].")
    return np.ascontiguousarray(array, dtype=np.float32)


def _coerce_translation(translation: object) -> TranslationColumn:
    array = np.asarray(translation, dtype=np.float32)
    if array.shape == (3,):
        array = array[:, np.newaxis]
    elif array.shape == (1, 3):
        array = array.T
    if array.shape != (3, 1):
        raise ValueError("T must have shape [3], [1, 3], or [3, 1].")
    return np.ascontiguousarray(array, dtype=np.float32)


def _coerce_depth_image(depth_image: DepthImageLike) -> npt.NDArray[np.float32]:
    array = np.asarray(depth_image, dtype=np.float32)
    if array.ndim == 3 and array.shape[-1] == 1:
        array = np.squeeze(array, axis=-1)
    if array.ndim != 2:
        raise ValueError("Each depth image must have shape [height, width] or [height, width, 1].")
    return np.ascontiguousarray(array, dtype=np.float32)


def convert_cameras(cameras: Sequence[CameraLike]) -> list[Camera]:
    """Normalize mappings or existing Camera objects into bound Camera instances."""

@@ -64,17 +125,19 @@ def convert_cameras(cameras: Sequence[CameraLike]) -> list[Camera]:
            converted.append(cam)
            continue

        camera = Camera()
        camera.name = str(cam["name"])
        camera.K = cam["K"]
        camera_model = _coerce_camera_model(cam.get("model", cam.get("type", "pinhole")))
        camera.DC = _coerce_distortion(cam["DC"], camera_model)
        camera.R = cam["R"]
        camera.T = cam["T"]
        camera.width = int(cam["width"])
        camera.height = int(cam["height"])
        camera.model = camera_model
        converted.append(camera)
        converted.append(
            _make_camera(
                str(cam["name"]),
                _coerce_matrix3x3(cam["K"], "K").tolist(),
                _coerce_distortion(cam["DC"], camera_model),
                _coerce_matrix3x3(cam["R"], "R").tolist(),
                _coerce_translation(cam["T"]).tolist(),
                int(cam["width"]),
                int(cam["height"]),
                camera_model,
            )
        )
    return converted


@@ -155,3 +218,136 @@ def make_triangulation_config(
    options.min_group_size = int(min_group_size)
    config.options = options
    return config


def sample_depth_for_poses(
    poses_2d: npt.NDArray[np.generic],
    person_counts: npt.NDArray[np.generic],
    depth_images: Sequence[DepthImageLike],
    *,
    window_size: int = 7,
) -> npt.NDArray[np.float32]:
    """Sample aligned depth for each visible 2D joint and return `[u, v, d, score]` rows."""

    poses = np.asarray(poses_2d, dtype=np.float32)
    counts = np.asarray(person_counts, dtype=np.uint32)
    if poses.ndim != 4 or poses.shape[-1] != 3:
        raise ValueError("poses_2d must have shape [views, max_persons, joints, 3].")
    if counts.ndim != 1 or counts.shape[0] != poses.shape[0]:
        raise ValueError("person_counts must be a 1D array aligned with the pose views.")
    if len(depth_images) != poses.shape[0]:
        raise ValueError("depth_images must have the same number of views as poses_2d.")
    if window_size <= 0:
        raise ValueError("window_size must be positive.")
    radius = window_size // 2

    poses_uvd = np.zeros((poses.shape[0], poses.shape[1], poses.shape[2], 4), dtype=np.float32)
    for view_idx, depth_image in enumerate(depth_images):
        depth = _coerce_depth_image(depth_image)
        poses_uvd[view_idx, :, :, :2] = poses[view_idx, :, :, :2]
        poses_uvd[view_idx, :, :, 3] = poses[view_idx, :, :, 2]

        valid_persons = int(counts[view_idx])
        if valid_persons == 0:
            continue

        joints = poses[view_idx, :valid_persons, :, :2].astype(np.int32, copy=False).reshape(-1, 2)
        scores = poses[view_idx, :valid_persons, :, 2:3].reshape(-1, 1)

        depth_padded = np.pad(depth, radius, mode="constant", constant_values=0)
        offsets = np.arange(-radius, radius + 1, dtype=np.int32)
        valid_xy = (
            (joints[:, 0] >= 0)
            & (joints[:, 0] < depth.shape[1])
            & (joints[:, 1] >= 0)
            & (joints[:, 1] < depth.shape[0])
        )
        clamped_x = np.clip(joints[:, 0], 0, depth.shape[1] - 1)
        clamped_y = np.clip(joints[:, 1], 0, depth.shape[0] - 1)
        center_x = clamped_x[:, None] + radius
        center_y = clamped_y[:, None] + radius
        vertical_grid = np.clip(np.add.outer(clamped_y, offsets) + radius, 0, depth_padded.shape[0] - 1)
        horizontal_grid = np.clip(
            np.add.outer(clamped_x, offsets) + radius, 0, depth_padded.shape[1] - 1
        )

        vertical_depths = depth_padded[vertical_grid, center_x]
        horizontal_depths = depth_padded[center_y, horizontal_grid]
        all_depths = np.concatenate((vertical_depths, horizontal_depths), axis=1).astype(np.float32)
        all_depths[~valid_xy] = np.nan
        all_depths[all_depths <= 0] = np.nan

        valid_depth_rows = ~np.isnan(all_depths).all(axis=1)
        sampled_depths = np.zeros((all_depths.shape[0],), dtype=np.float32)
        if np.any(valid_depth_rows):
            with np.errstate(all="ignore"):
                sampled_depths[valid_depth_rows] = np.nanmedian(all_depths[valid_depth_rows], axis=1)

        valid_mask = ((sampled_depths > 0.0).astype(np.float32)[:, None] * (scores > 0.0).astype(np.float32))
        sampled_depths = sampled_depths.reshape(valid_persons, poses.shape[2], 1)
        valid_mask = valid_mask.reshape(valid_persons, poses.shape[2], 1)

        poses_uvd[view_idx, :valid_persons, :, 2:3] = sampled_depths
        poses_uvd[view_idx, :valid_persons] *= np.concatenate((valid_mask, valid_mask, valid_mask, valid_mask), axis=-1)

    return poses_uvd
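The sampler above takes the median over a cross-shaped window (a vertical and a horizontal strip through each joint) and treats zero or negative depth as missing. A standalone NumPy sketch of that cross-median idea for a single joint, illustrative only and not the function above:

```python
import numpy as np

def cross_median_depth(depth: np.ndarray, u: int, v: int, window_size: int = 7) -> float:
    """Median of valid (> 0) depths along the vertical and horizontal strips through (u, v)."""
    radius = window_size // 2
    # Zero-pad so the window never runs off the image edge.
    padded = np.pad(depth, radius, mode="constant", constant_values=0)
    cu, cv = u + radius, v + radius  # coordinates in the padded image
    samples = np.concatenate((
        padded[cv - radius : cv + radius + 1, cu],  # vertical strip
        padded[cv, cu - radius : cu + radius + 1],  # horizontal strip
    )).astype(np.float32)
    samples = samples[samples > 0]                  # zeros mean "no depth reading"
    return float(np.median(samples)) if samples.size else 0.0

depth = np.full((16, 16), 3000.0, dtype=np.float32)  # depth in millimeters
depth[5, 5] = 1000.0                                 # lone outlier at the joint pixel
print(cross_median_depth(depth, u=5, v=5, window_size=3))  # 3000.0
```

The median over both strips makes a single bad pixel (sensor noise, hole at the joint) unable to dominate the sampled depth.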


def apply_depth_offsets(
    poses_uvd: npt.NDArray[np.generic],
    joint_names: Sequence[str],
) -> npt.NDArray[np.float32]:
    """Apply the SimpleDepthPose per-joint depth offsets in meters."""

    poses = np.asarray(poses_uvd, dtype=np.float32)
    if poses.ndim != 4 or poses.shape[-1] != 4:
        raise ValueError("poses_uvd must have shape [views, max_persons, joints, 4].")
    if len(joint_names) != poses.shape[2]:
        raise ValueError("joint_names must have the same number of joints as poses_uvd.")

    result = poses.copy()
    offsets = np.asarray(
        [DEFAULT_DEPTH_OFFSETS_METERS.get(str(joint_name), 0.0) for joint_name in joint_names],
        dtype=np.float32,
    )
    depth_mask = (result[:, :, :, 2:3] > 0.0).astype(np.float32)
    result[:, :, :, 2:3] += depth_mask * offsets[np.newaxis, np.newaxis, :, np.newaxis] * 1000.0
    return result


def lift_depth_poses_to_world(
    poses_uvd: npt.NDArray[np.generic],
    cameras: Sequence[CameraLike],
) -> npt.NDArray[np.float32]:
    """Lift `[u, v, d, score]` joints into world-space `[x, y, z, score]` poses."""

    poses = np.asarray(poses_uvd, dtype=np.float32)
    if poses.ndim != 4 or poses.shape[-1] != 4:
        raise ValueError("poses_uvd must have shape [views, max_persons, joints, 4].")

    converted_cameras = convert_cameras(cameras)
    if len(converted_cameras) != poses.shape[0]:
        raise ValueError("cameras must have the same number of views as poses_uvd.")

    result = np.zeros_like(poses, dtype=np.float32)
    for view_idx, camera in enumerate(converted_cameras):
        uv = poses[view_idx, :, :, :2].reshape(-1, 2)
        depth_mm = poses[view_idx, :, :, 2:3].reshape(-1, 1)
        scores = poses[view_idx, :, :, 3:4].reshape(-1, 1)

        depth_m = depth_mm * 0.001
        uv_ones = np.concatenate((uv, np.ones((uv.shape[0], 1), dtype=np.float32)), axis=1)
        k_inv = np.linalg.inv(np.asarray(camera.K, dtype=np.float32))
        xyz_cam = depth_m * (uv_ones @ k_inv.T)

        rotation = np.asarray(camera.R, dtype=np.float32)
        translation = np.asarray(camera.T, dtype=np.float32).reshape(1, 3)
        xyz_world = (rotation @ xyz_cam.T).T + translation

        pose_world = np.concatenate((xyz_world, scores), axis=1).reshape(
            poses.shape[1], poses.shape[2], 4
        )
        pose_world *= (pose_world[:, :, 3:4] > 0.0).astype(np.float32)
        result[view_idx] = pose_world

    return result
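The lifting step is plain pinhole back-projection: scale the normalized pixel ray `K⁻¹ · [u, v, 1]` by the metric depth, then move it into world coordinates with `R · x + T`. A self-contained NumPy check of that math, assuming an identity rotation, zero translation, and a focal length of 1000 like the synthetic cameras in the tests below:

```python
import numpy as np

K = np.array([[1000.0, 0.0, 0.0],
              [0.0, 1000.0, 0.0],
              [0.0, 0.0, 1.0]], dtype=np.float32)  # intrinsics, f = 1000 px
R = np.eye(3, dtype=np.float32)    # camera-to-world rotation (identity here)
T = np.zeros(3, dtype=np.float32)  # camera position in world coordinates

u, v, depth_mm = 100.0, 200.0, 3000.0
depth_m = depth_mm * 0.001                       # sensor depth is in millimeters
ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # normalized pixel ray
xyz_world = R @ (depth_m * ray) + T
print(xyz_world)  # approximately [0.3, 0.6, 3.0]
```

This matches the expectation in `test_lift_depth_poses_to_world_matches_camera_projection`: pixel (100, 200) at 3 m lands at world (0.3, 0.6, 3.0).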
+71
-11
@@ -51,19 +51,21 @@ def make_config(cameras, roomparams) -> rpt.TriangulationConfig:


def test_camera_structure_repr():
    camera = rpt.Camera()
    camera.name = "Camera 1"
    camera.K = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    camera.DC = [0, 0, 0, 0, 0]
    camera.R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    camera.T = [[1], [2], [3]]
    camera.width = 640
    camera.height = 480
    camera.model = rpt.CameraModel.PINHOLE
    camera = rpt.make_camera(
        "Camera 1",
        [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
        [0, 0, 0, 0, 0],
        [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
        [1, 2, 3],
        640,
        480,
        rpt.CameraModel.PINHOLE,
    )

    rendered = repr(camera)
    assert "Camera 1" in rendered
    assert "pinhole" in rendered
    np.testing.assert_allclose(np.asarray(camera.T, dtype=np.float32).reshape(3), [1.0, 2.0, 3.0])


@pytest.mark.parametrize(
@@ -118,11 +120,19 @@ def test_triangulate_accepts_empty_previous_poses():
    poses_2d, person_counts, cameras = load_case("data/p1/sample.json", "tests/poses_p1.json")
    config = make_config(cameras, [[5.6, 6.4, 2.4], [0.0, -0.5, 1.2]])
    empty_previous = np.zeros((0, len(JOINT_NAMES), 4), dtype=np.float32)
    empty_previous_ids = np.zeros((0,), dtype=np.int64)

    baseline = rpt.triangulate_poses(poses_2d, person_counts, config)
    with_previous = rpt.triangulate_poses(poses_2d, person_counts, config, empty_previous)
    result = rpt.triangulate_with_report(
        poses_2d,
        person_counts,
        config,
        empty_previous,
        empty_previous_ids,
    )

    np.testing.assert_allclose(with_previous, baseline, rtol=1e-5, atol=1e-5)
    np.testing.assert_allclose(result.poses_3d, baseline, rtol=1e-5, atol=1e-5)
    assert result.association.unmatched_previous_track_ids == []


def test_triangulate_debug_matches_final_output():
@@ -134,6 +144,7 @@ def test_triangulate_debug_matches_final_output():

    np.testing.assert_allclose(trace.final_poses, final_poses, rtol=1e-5, atol=1e-5)
    assert len(trace.pairs) >= len(trace.core_proposals)
    assert trace.association.pose_previous_track_ids == []
    for group in trace.grouping.groups:
        assert all(0 <= index < len(trace.core_proposals) for index in group.proposal_indices)
    for merge_indices in trace.merge.group_proposal_indices:
@@ -144,18 +155,67 @@ def test_filter_pairs_with_previous_poses_returns_debug_matches():
    poses_2d, person_counts, cameras = load_case("data/p1/sample.json", "tests/poses_p1.json")
    config = make_config(cameras, [[5.6, 6.4, 2.4], [0.0, -0.5, 1.2]])
    previous_poses = rpt.triangulate_poses(poses_2d, person_counts, config)
    previous_track_ids = np.arange(previous_poses.shape[0], dtype=np.int64) + 100

    debug = rpt.filter_pairs_with_previous_poses(
        poses_2d,
        person_counts,
        config,
        previous_poses,
        previous_track_ids,
    )

    assert debug.used_previous_poses is True
    assert len(debug.matches) == len(rpt.build_pair_candidates(poses_2d, person_counts))
    assert len(debug.kept_pairs) == len(debug.kept_pair_indices)
    assert any(match.matched_view1 or match.matched_view2 for match in debug.matches)
    assert any(match.previous_track_id >= 100 for match in debug.matches if match.previous_pose_index >= 0)


def test_triangulate_with_report_resolves_previous_track_ids():
    poses_2d, person_counts, cameras = load_case("data/p1/sample.json", "tests/poses_p1.json")
    config = make_config(cameras, [[5.6, 6.4, 2.4], [0.0, -0.5, 1.2]])
    previous_poses = rpt.triangulate_poses(poses_2d, person_counts, config)
    previous_track_ids = np.arange(previous_poses.shape[0], dtype=np.int64) + 100

    result = rpt.triangulate_with_report(
        poses_2d,
        person_counts,
        config,
        previous_poses,
        previous_track_ids,
    )

    assert result.poses_3d.shape == previous_poses.shape
    assert len(result.association.pose_previous_track_ids) == result.poses_3d.shape[0]
    matched_track_ids = sorted(
        track_id for track_id in result.association.pose_previous_track_ids if track_id >= 0
    )
    unmatched_track_ids = sorted(result.association.unmatched_previous_track_ids)

    for pose_index in result.association.new_pose_indices:
        assert result.association.pose_previous_track_ids[pose_index] == -1
    for pose_index in result.association.ambiguous_pose_indices:
        assert result.association.pose_previous_track_ids[pose_index] == -1

    assert matched_track_ids == sorted(set(matched_track_ids))
    assert sorted(matched_track_ids + unmatched_track_ids) == list(previous_track_ids)


def test_triangulate_with_report_rejects_duplicate_previous_track_ids():
    poses_2d, person_counts, cameras = load_case("data/p1/sample.json", "tests/poses_p1.json")
    config = make_config(cameras, [[5.6, 6.4, 2.4], [0.0, -0.5, 1.2]])
    previous_poses = rpt.triangulate_poses(poses_2d, person_counts, config)
    duplicate_ids = np.zeros((previous_poses.shape[0],), dtype=np.int64)

    with pytest.raises(ValueError, match="unique"):
        rpt.triangulate_with_report(
            poses_2d,
            person_counts,
            config,
            previous_poses,
            duplicate_ids,
        )


def test_triangulate_does_not_mutate_inputs():
@@ -0,0 +1,197 @@
import numpy as np

import rpt

JOINT_NAMES = [
    "nose",
    "eye_left",
    "eye_right",
    "ear_left",
    "ear_right",
    "shoulder_left",
    "shoulder_right",
    "elbow_left",
    "elbow_right",
    "wrist_left",
    "wrist_right",
    "hip_left",
    "hip_right",
    "knee_left",
    "knee_right",
    "ankle_left",
    "ankle_right",
    "hip_middle",
    "shoulder_middle",
    "head",
]


def make_camera(name: str) -> rpt.Camera:
    return rpt.make_camera(
        name,
        [[1000, 0, 0], [0, 1000, 0], [0, 0, 1]],
        [0, 0, 0, 0, 0],
        [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
        [[0], [0], [0]],
        256,
        256,
        rpt.CameraModel.PINHOLE,
    )


def make_config(num_views: int) -> rpt.TriangulationConfig:
    return rpt.make_triangulation_config(
        [make_camera(f"Camera {idx}") for idx in range(num_views)],
        np.asarray([[10.0, 10.0, 10.0], [0.0, 0.0, 0.0]], dtype=np.float32),
        JOINT_NAMES,
    )


def make_body_2d() -> np.ndarray:
    return np.asarray(
        [
            [150, 50, 1.0],
            [145, 48, 1.0],
            [155, 48, 1.0],
            [138, 50, 1.0],
            [162, 50, 1.0],
            [135, 80, 1.0],
            [165, 80, 1.0],
            [125, 115, 1.0],
            [175, 115, 1.0],
            [115, 150, 1.0],
            [185, 150, 1.0],
            [145, 130, 1.0],
            [155, 130, 1.0],
            [145, 175, 1.0],
            [155, 175, 1.0],
            [145, 220, 1.0],
            [155, 220, 1.0],
            [150, 130, 1.0],
            [150, 80, 1.0],
            [150, 50, 1.0],
        ],
        dtype=np.float32,
    )


def test_sample_depth_for_poses_respects_person_counts_and_scores():
    poses_2d = np.zeros((1, 2, 2, 3), dtype=np.float32)
    poses_2d[0, 0, 0] = [5, 6, 0.8]
    poses_2d[0, 0, 1] = [7, 8, 0.0]
    person_counts = np.asarray([1], dtype=np.uint32)

    depth_image = np.full((16, 16), 3000, dtype=np.float32)
    depth_image[0, 0] = 1234

    poses_uvd = rpt.sample_depth_for_poses(poses_2d, person_counts, [depth_image])

    np.testing.assert_allclose(poses_uvd[0, 0, 0], [5.0, 6.0, 3000.0, 0.8], rtol=1e-6, atol=1e-6)
    np.testing.assert_array_equal(poses_uvd[0, 0, 1], np.zeros((4,), dtype=np.float32))
    np.testing.assert_array_equal(poses_uvd[0, 1], np.zeros((2, 4), dtype=np.float32))


def test_sample_depth_for_poses_uses_symmetric_window():
    poses_2d = np.zeros((1, 1, 1, 3), dtype=np.float32)
    poses_2d[0, 0, 0] = [5, 5, 1.0]
    person_counts = np.asarray([1], dtype=np.uint32)

    depth_image = np.zeros((16, 16), dtype=np.float32)
    depth_image[5, 5] = 1000.0
    depth_image[3, 5] = 5000.0
    depth_image[5, 2] = 5000.0
    depth_image[5, 3] = 5000.0
    depth_image[5, 7] = 5000.0
    depth_image[5, 8] = 5000.0

    poses_uvd = rpt.sample_depth_for_poses(poses_2d, person_counts, [depth_image], window_size=3)

    np.testing.assert_allclose(poses_uvd[0, 0, 0], [5.0, 5.0, 1000.0, 1.0], rtol=1e-6, atol=1e-6)


def test_sample_depth_for_poses_ignores_out_of_bounds_joints():
    poses_2d = np.zeros((1, 1, 1, 3), dtype=np.float32)
    poses_2d[0, 0, 0] = [99, -4, 0.7]
    person_counts = np.asarray([1], dtype=np.uint32)

    poses_uvd = rpt.sample_depth_for_poses(
        poses_2d,
        person_counts,
        [np.full((16, 16), 3000, dtype=np.float32)],
    )

    np.testing.assert_array_equal(poses_uvd[0, 0, 0], np.zeros((4,), dtype=np.float32))


def test_apply_depth_offsets_uses_joint_name_mapping():
    poses_uvd = np.zeros((1, 1, 3, 4), dtype=np.float32)
    poses_uvd[0, 0, :, 2] = 3000.0
    poses_uvd[0, 0, :, 3] = 1.0

    adjusted = rpt.apply_depth_offsets(poses_uvd, ["nose", "shoulder_left", "unknown_joint"])

    np.testing.assert_allclose(adjusted[0, 0, :, 2], [3005.0, 3030.0, 3000.0], rtol=1e-6, atol=1e-6)
    np.testing.assert_allclose(poses_uvd[0, 0, :, 2], [3000.0, 3000.0, 3000.0], rtol=1e-6, atol=1e-6)


def test_lift_depth_poses_to_world_matches_camera_projection():
    poses_uvd = np.zeros((1, 1, 2, 4), dtype=np.float32)
    poses_uvd[0, 0, 0] = [100.0, 200.0, 3000.0, 0.9]
    poses_uvd[0, 0, 1] = [0.0, 0.0, 0.0, 0.0]

    lifted = rpt.lift_depth_poses_to_world(poses_uvd, [make_camera("Camera 1")])

    np.testing.assert_allclose(lifted[0, 0, 0], [0.3, 0.6, 3.0, 0.9], rtol=1e-6, atol=1e-6)
    np.testing.assert_array_equal(lifted[0, 0, 1], np.zeros((4,), dtype=np.float32))


def test_merge_rgbd_views_merges_identical_world_poses():
    config = make_config(2)
    body_2d = make_body_2d()

    poses_2d = np.zeros((2, 1, len(JOINT_NAMES), 3), dtype=np.float32)
    poses_2d[0, 0] = body_2d
    poses_2d[1, 0] = body_2d
    person_counts = np.asarray([1, 1], dtype=np.uint32)
    depth_images = [np.full((256, 256), 3000, dtype=np.float32) for _ in range(2)]

    poses_uvd = rpt.sample_depth_for_poses(poses_2d, person_counts, depth_images)
    poses_uvd = rpt.apply_depth_offsets(poses_uvd, JOINT_NAMES)
    poses_3d_by_view = rpt.lift_depth_poses_to_world(poses_uvd, config.cameras)
    merged = rpt.merge_rgbd_views(poses_3d_by_view, person_counts, config)

    assert merged.shape == (1, len(JOINT_NAMES), 4)
    np.testing.assert_allclose(merged[0, :-1], poses_3d_by_view[0, 0, :-1], rtol=1e-5, atol=1e-5)
    expected_head = (poses_3d_by_view[0, 0, 3] + poses_3d_by_view[0, 0, 4]) * 0.5
    expected_head[3] = min(poses_3d_by_view[0, 0, 3, 3], poses_3d_by_view[0, 0, 4, 3])
    np.testing.assert_allclose(merged[0, -1], expected_head, rtol=1e-5, atol=1e-5)


def test_reconstruct_rgbd_matches_manual_pipeline_and_single_view_person():
    config = make_config(2)
    body_2d = make_body_2d()

    poses_2d = np.zeros((2, 1, len(JOINT_NAMES), 3), dtype=np.float32)
    poses_2d[0, 0] = body_2d
    person_counts = np.asarray([1, 0], dtype=np.uint32)
    depth_images = [
        np.full((256, 256), 3000, dtype=np.float32),
        np.zeros((256, 256), dtype=np.float32),
    ]

    manual = rpt.merge_rgbd_views(
        rpt.lift_depth_poses_to_world(
            rpt.apply_depth_offsets(
                rpt.sample_depth_for_poses(poses_2d, person_counts, depth_images),
                JOINT_NAMES,
            ),
            config.cameras,
        ),
        person_counts,
        config,
    )
    reconstructed = rpt.reconstruct_rgbd(poses_2d, person_counts, depth_images, config)

    assert reconstructed.shape == (1, len(JOINT_NAMES), 4)
    np.testing.assert_allclose(reconstructed, manual, rtol=1e-5, atol=1e-5)
    assert np.count_nonzero(reconstructed[0, :, 3] > 0.0) >= 7
|
||||
@@ -0,0 +1,18 @@
from pathlib import Path

import pytest

ROOT = Path(__file__).resolve().parents[1]


def test_checked_in_core_stub_exists():
    assert (ROOT / "src" / "rpt" / "_core.pyi").exists()


def test_checked_in_core_stub_matches_generated_stub():
    generated_stub = ROOT / "build" / "bindings" / "rpt" / "_core.pyi"
    if not generated_stub.exists():
        pytest.skip("Build-generated nanobind stub is unavailable.")

    checked_in_stub = ROOT / "src" / "rpt" / "_core.pyi"
    assert checked_in_stub.read_text(encoding="utf-8") == generated_stub.read_text(encoding="utf-8")
@@ -6,6 +6,18 @@ resolution-markers = [
    "python_full_version < '3.11'",
]

[[package]]
name = "basedpyright"
version = "1.38.4"
source = { registry = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple/" }
dependencies = [
    { name = "nodejs-wheel-binaries" },
]
sdist = { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/08/b4/26cb812eaf8ab56909c792c005fe1690706aef6f21d61107639e46e9c54c/basedpyright-1.38.4.tar.gz", hash = "sha256:8e7d4f37ffb6106621e06b9355025009cdf5b48f71c592432dd2dd304bf55e70", size = 25354730, upload-time = "2026-03-25T13:50:44.353Z" }
wheels = [
    { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/62/0b/3f95fd47def42479e61077523d3752086d5c12009192a7f1c9fd5507e687/basedpyright-1.38.4-py3-none-any.whl", hash = "sha256:90aa067cf3e8a3c17ad5836a72b9e1f046bc72a4ad57d928473d9368c9cd07a2", size = 12352258, upload-time = "2026-03-25T13:50:41.059Z" },
]

[[package]]
name = "colorama"
version = "0.4.6"
@@ -36,6 +48,52 @@ wheels = [
    { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" },
]

[[package]]
name = "jaxtyping"
version = "0.3.7"
source = { registry = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple/" }
resolution-markers = [
    "python_full_version < '3.11'",
]
dependencies = [
    { name = "wadler-lindig", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/38/40/a2ea3ce0e3e5f540eb970de7792c90fa58fef1b27d34c83f9fa94fea4729/jaxtyping-0.3.7.tar.gz", hash = "sha256:3bd7d9beb7d3cb01a89f93f90581c6f4fff3e5c5dc3c9307e8f8687a040d10c4", size = 45721, upload-time = "2026-01-30T14:18:47.409Z" }
wheels = [
    { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/78/42/caf65e9a0576a3abadc537e2f831701ba9081f21317fb3be87d64451587a/jaxtyping-0.3.7-py3-none-any.whl", hash = "sha256:303ab8599edf412eeb40bf06c863e3168fa186cf0e7334703fa741ddd7046e66", size = 56101, upload-time = "2026-01-30T14:18:45.954Z" },
]

[[package]]
name = "jaxtyping"
version = "0.3.9"
source = { registry = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple/" }
resolution-markers = [
    "python_full_version >= '3.11'",
]
dependencies = [
    { name = "wadler-lindig", marker = "python_full_version >= '3.11'" },
]
sdist = { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/c2/be/00294e369938937e31b094437d5ea040e4fd1a20b998ebe572c4a1dcfa68/jaxtyping-0.3.9.tar.gz", hash = "sha256:f8c02d1b623d5f1b6665d4f3ddaec675d70004f16a792102c2fc51264190951d", size = 45857, upload-time = "2026-02-16T10:35:13.263Z" }
wheels = [
    { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/94/05/3e39d416fb92b2738a76e8265e6bfc5d10542f90a7c32ad1eb831eea3fa3/jaxtyping-0.3.9-py3-none-any.whl", hash = "sha256:a00557a9d616eff157491f06ed2e21ed94886fad3832399273eb912b345da378", size = 56274, upload-time = "2026-02-16T10:35:11.795Z" },
]

[[package]]
name = "nodejs-wheel-binaries"
version = "24.14.0"
source = { registry = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple/" }
sdist = { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/71/05/c75c0940b1ebf82975d14f37176679b6f3229eae8b47b6a70d1e1dae0723/nodejs_wheel_binaries-24.14.0.tar.gz", hash = "sha256:c87b515e44b0e4a523017d8c59f26ccbd05b54fe593338582825d4b51fc91e1c", size = 8057, upload-time = "2026-02-27T02:57:30.931Z" }
wheels = [
    { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/58/8c/b057c2db3551a6fe04e93dd14e33d810ac8907891534ffcc7a051b253858/nodejs_wheel_binaries-24.14.0-py2.py3-none-macosx_13_0_arm64.whl", hash = "sha256:59bb78b8eb08c3e32186da1ef913f1c806b5473d8bd0bb4492702092747b674a", size = 54798488, upload-time = "2026-02-27T02:56:56.831Z" },
    { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/30/88/7e1b29c067b6625c97c81eb8b0ef37cf5ad5b62bb81e23f4bde804910ec9/nodejs_wheel_binaries-24.14.0-py2.py3-none-macosx_13_0_x86_64.whl", hash = "sha256:348fa061b57625de7250d608e2d9b7c4bc170544da7e328325343860eadd59e5", size = 54972803, upload-time = "2026-02-27T02:57:01.696Z" },
    { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/1e/e0/a83f0ff12faca2a56366462e572e38ac6f5cb361877bb29e289138eb7f24/nodejs_wheel_binaries-24.14.0-py2.py3-none-manylinux_2_28_aarch64.whl", hash = "sha256:222dbf516ccc877afcad4e4789a81b4ee93daaa9f0ad97c464417d9597f49449", size = 59340859, upload-time = "2026-02-27T02:57:06.125Z" },
    { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/e2/9f/06fad4ae8a723ae7096b5311eba67ad8b4df5f359c0a68e366750b7fef78/nodejs_wheel_binaries-24.14.0-py2.py3-none-manylinux_2_28_x86_64.whl", hash = "sha256:b35d6fcccfe4fb0a409392d237fbc67796bac0d357b996bc12d057a1531a238b", size = 59838751, upload-time = "2026-02-27T02:57:10.449Z" },
    { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/8c/72/4916dadc7307c3e9bcfa43b4b6f88237932d502c66f89eb2d90fb07810db/nodejs_wheel_binaries-24.14.0-py2.py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:519507fb74f3f2b296ab1e9f00dcc211f36bbfb93c60229e72dcdee9dafd301a", size = 61340534, upload-time = "2026-02-27T02:57:15.309Z" },
    { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/2e/df/a8ba881ee5d04b04e0d93abc8ce501ff7292813583e97f9789eb3fc0472a/nodejs_wheel_binaries-24.14.0-py2.py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:68c93c52ff06d704bcb5ed160b4ba04ab1b291d238aaf996b03a5396e0e9a7ed", size = 61922394, upload-time = "2026-02-27T02:57:20.24Z" },
    { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/60/8c/b8c5f61201c72a0c7dc694b459941f89a6defda85deff258a9940a4e2efc/nodejs_wheel_binaries-24.14.0-py2.py3-none-win_amd64.whl", hash = "sha256:60b83c4e98b0c7d836ac9ccb67dcb36e343691cbe62cd325799ff9ed936286f3", size = 41218783, upload-time = "2026-02-27T02:57:24.175Z" },
    { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/91/23/1f904bc9cbd8eece393e20840c08ba3ac03440090c3a4e95168fa6d2709f/nodejs_wheel_binaries-24.14.0-py2.py3-none-win_arm64.whl", hash = "sha256:78a9bd1d6b11baf1433f9fb84962ff8aa71c87d48b6434f98224bc49a2253a6e", size = 38926103, upload-time = "2026-02-27T02:57:27.458Z" },
]

[[package]]
name = "numpy"
version = "2.2.6"
@@ -230,23 +288,32 @@ wheels = [

[[package]]
name = "rapid-pose-triangulation"
version = "0.1.0"
version = "0.2.0"
source = { editable = "." }
dependencies = [
    { name = "jaxtyping", version = "0.3.7", source = { registry = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple/" }, marker = "python_full_version < '3.11'" },
    { name = "jaxtyping", version = "0.3.9", source = { registry = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple/" }, marker = "python_full_version >= '3.11'" },
    { name = "numpy", version = "2.2.6", source = { registry = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple/" }, marker = "python_full_version < '3.11'" },
    { name = "numpy", version = "2.4.3", source = { registry = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple/" }, marker = "python_full_version >= '3.11'" },
]

[package.dev-dependencies]
dev = [
    { name = "basedpyright" },
    { name = "pytest" },
]

[package.metadata]
requires-dist = [{ name = "numpy", specifier = ">=2.0" }]
requires-dist = [
    { name = "jaxtyping" },
    { name = "numpy", specifier = ">=2.0" },
]

[package.metadata.requires-dev]
dev = [{ name = "pytest", specifier = ">=8.3" }]
dev = [
    { name = "basedpyright", specifier = ">=1.38.3" },
    { name = "pytest", specifier = ">=8.3" },
]

[[package]]
name = "tomli"
@@ -310,3 +377,12 @@ sdist = { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/72/94/1a
wheels = [
    { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614, upload-time = "2025-08-25T13:49:24.86Z" },
]

[[package]]
name = "wadler-lindig"
version = "0.1.7"
source = { registry = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple/" }
sdist = { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/1e/67/cbae4bf7683a64755c2c1778c418fea96d00e34395bb91743f08bd951571/wadler_lindig-0.1.7.tar.gz", hash = "sha256:81d14d3fe77d441acf3ebd7f4aefac20c74128bf460e84b512806dccf7b2cd55", size = 15842, upload-time = "2025-06-18T07:00:42.843Z" }
wheels = [
    { url = "https://mirrors.tuna.tsinghua.edu.cn/pypi/web/packages/8d/96/04e7b441807b26b794da5b11e59ed7f83b2cf8af202bd7eba8ad2fa6046e/wadler_lindig-0.1.7-py3-none-any.whl", hash = "sha256:e3ec83835570fd0a9509f969162aeb9c65618f998b1f42918cfc8d45122fe953", size = 20516, upload-time = "2025-06-18T07:00:41.684Z" },
]