refactor(zed): remove extracted offline helper tooling
Drop the offline ZED helper implementations that were moved into zed-offline-tools.

This removes the standalone conversion binaries, batch/index/inspection scripts, related configs and tests, and the tool-specific support code that no longer belongs in cvmmap-streamer.

The build files and docs are updated to point at the standalone repo while keeping the streamer runtime surface intact.
```bash
cmake -B build -S .
cmake --build build
```
When the ZED SDK is available, the build also enables `zed_svo_to_mcap` and
`zed_svo_to_mp4` automatically. When the SDK is absent, those tools are skipped
and the main streamer plus non-ZED testers still build normally.

`zed_svo_grid_to_mp4` remains optional and additionally requires OpenCV. Disable
it explicitly with:
```bash
cmake -B build -S . -DCVMMAP_BUILD_ZED_SVO_GRID_TO_MP4=OFF
```
```bash
# Use a local cv-mmap build tree
cmake -B build -S . \
...
cmake --build build
ls -la build/{cvmmap_streamer,rtp_receiver_tester,rtmp_stub_tester}
```

### Offline ZED Tooling

Offline ZED conversion, batch wrappers, dataset indexing, and MCAP inspection helpers moved to the sibling repository `../zed-offline-tools`.

Use that repo for:

- `zed_svo_to_mcap`
- `zed_svo_to_mp4`
- `zed_svo_grid_to_mp4`
- `mcap_video_bounds`
- `scripts/zed_batch_*`
- `scripts/zed_segment_time_index.py`
- `scripts/generate_playlist_config.py`
- `scripts/mcap_bundle_validator.py`
- `scripts/mcap_rgbd_example.py`
- `scripts/mcap_rgbd_viewer.py`
- `scripts/mcap_depth_alignment.py`

### ZED SVO/SVO2 To MP4

This tool is only built when the ZED SDK is detected during CMake configure.

The repo also includes an offline conversion tool for the left ZED color stream:

```bash
CUDA_VISIBLE_DEVICES=GPU-9cc7b26e-90d4-0c49-4d4c-060e528ffba6 \
./build/bin/zed_svo_to_mp4 \
  --input <SVO_INPUT> \
  --encoder-device auto \
  --preset balanced \
  --quality 20 \
  --start-frame 0 \
  --end-frame 89
```

By default the tool writes `foo.mp4` next to `foo.svo` or `foo.svo2`, defaults to `h265`, and shows a tqdm-like progress bar when stderr is attached to a TTY. `--encoder-device auto` tries NVENC first and falls back to software (`libx264` or `libx265`) if the hardware encoder is unavailable or cannot be opened.
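The `auto` fallback described above can be modeled as a small selection helper. This is an illustrative sketch of the documented behavior, not the tool's actual code; only the encoder names (`h264_nvenc`, `hevc_nvenc`, `libx264`, `libx265`) are standard FFmpeg identifiers.

```python
# Sketch of the documented --encoder-device fallback; illustrative only.
NVENC = {"h264": "h264_nvenc", "h265": "hevc_nvenc"}
SOFTWARE = {"h264": "libx264", "h265": "libx265"}

def pick_encoder(codec: str, device: str, nvenc_usable: bool) -> str:
    """Resolve an FFmpeg encoder name from the requested codec and device."""
    if device == "software":
        return SOFTWARE[codec]
    if device in ("auto", "hardware"):
        if nvenc_usable:
            return NVENC[codec]
        if device == "auto":
            # auto falls back to software when NVENC is unavailable
            # or cannot be opened
            return SOFTWARE[codec]
        raise RuntimeError("hardware encoder requested but unavailable")
    raise ValueError(f"unknown encoder device: {device}")
```

The key design point is that `auto` never fails outright on a missing NVENC session; it degrades to the CPU encoder for the same codec.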
### Batch ZED SVO2 To MP4

Python dependencies for the batch wrapper are managed with `uv`:

```bash
uv sync
```
Expected multi-camera dataset layout:

```text
<DATASET_ROOT>/
├── svo2_segments_sorted.csv
├── bar/
│   └── 2026-03-18T11-59-41/
│       ├── 2026-03-18T11-59-41_zed1.svo2
│       ├── 2026-03-18T11-59-41_zed2.svo2
│       ├── 2026-03-18T11-59-41_zed3.svo2
│       └── 2026-03-18T11-59-41_zed4.svo2
└── jump/
    └── experiment/
        └── 1/
            └── 2026-03-18T11-26-23/
                ├── 2026-03-18T11-26-23_zed1.svo2
                ├── 2026-03-18T11-26-23_zed2.svo2
                ├── 2026-03-18T11-26-23_zed3.svo2
                └── 2026-03-18T11-26-23_zed4.svo2
```
Placeholders used below:

- `<DATASET_ROOT>`: dataset root containing multi-camera segment directories
- `<SEGMENT_DIR>`: one multi-camera segment directory containing `*_zedN.svo` or `*_zedN.svo2`
- `<SEGMENT_DIR_A>`, `<SEGMENT_DIR_B>`: explicit segment directories
- `<SEGMENTS_CSV>`: CSV file with a `segment_dir` column, for example `config/svo2_segments_sorted.sample.csv`
- `<SVO_INPUT>`: one single-camera `.svo` or `.svo2` file
- `<POSE_CONFIG>`: TOML file such as `config/zed_pose_config.toml`
Use the wrapper to recurse through a folder, run `zed_svo_to_mp4` on every matched `.svo2`, and show one aggregate tqdm progress bar:

```bash
uv run python scripts/zed_batch_svo_to_mp4.py \
  <DATASET_ROOT>/bar \
  --pattern '*.svo2' \
  --recursive \
  --jobs 2 \
  --encoder-device auto \
  --start-frame 0 \
  --end-frame 29 \
  --cuda-visible-devices GPU-9cc7b26e-90d4-0c49-4d4c-060e528ffba6
```

The batch tool mirrors the common encoder options from `zed_svo_to_mp4`, skips existing sibling `.mp4` outputs by default, and continues after failures while returning a nonzero exit code if any conversion fails.
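The skip-and-continue contract above (sibling `.mp4` outputs, keep going after failures, nonzero exit overall) can be sketched as follows. `convert` is a hypothetical stand-in for invoking `zed_svo_to_mp4`; this is a model of the described behavior, not the wrapper's actual code.

```python
from pathlib import Path

def plan_and_run(inputs, convert, skip_existing=True):
    """Convert each .svo2 to a sibling .mp4, skipping existing outputs
    and continuing past failures; return a shell-style exit code."""
    failures = 0
    for src in inputs:
        out = Path(src).with_suffix(".mp4")  # sibling output path
        if skip_existing and out.exists():
            continue
        try:
            convert(src, out)
        except Exception:
            failures += 1  # keep processing the remaining inputs
    # nonzero exit if any conversion failed, as the wrapper documents
    return 1 if failures else 0
```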
### ZED SVO Grid To MP4

This tool is only built when the ZED SDK is detected and
`CVMMAP_BUILD_ZED_SVO_GRID_TO_MP4=ON`.

Use the grid converter to merge four synced ZED recordings into a 2x2 CCTV-style MP4 with a Unix timestamp overlay in the top-left corner:

```bash
./build/bin/zed_svo_grid_to_mp4 \
  --segment-dir <SEGMENT_DIR> \
  --encoder-device auto \
  --codec h265 \
  --duration-seconds 2
```
The tool syncs the four inputs using the same common-start timestamp rule as the ZED multi-camera playback sample, defaults to a 2x2 layout ordered as `zed1 zed2 / zed3 zed4`, and writes `<segment>/<segment>_grid.mp4` unless `--output` is provided. By default each tile is scaled to `0.5x`, so a four-camera 1920x1200 segment produces a 1920x1200 composite. Use repeated `--input` flags instead of `--segment-dir` when you want explicit row-major ordering.
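A minimal sketch of the common-start rule, assuming each camera contributes a sorted list of frame timestamps: playback begins at the latest first-frame timestamp across cameras, and any frames a camera recorded before that point are dropped. The function names are illustrative.

```python
def common_start(first_timestamps):
    """Latest first-frame timestamp across cameras: output only begins
    once every recording has data."""
    return max(first_timestamps)

def frames_to_skip(camera_timestamps, start):
    """Per-camera count of frames recorded before the common start."""
    return sum(1 for t in camera_timestamps if t < start)
```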
Use the batch wrapper to run `zed_svo_grid_to_mp4` over many segment directories with one aggregate progress bar:

```bash
uv run python scripts/zed_batch_svo_grid_to_mp4.py \
  --dataset-root <DATASET_ROOT> \
  --recursive \
  --jobs 2 \
  --encoder-device auto \
  --duration-seconds 2
```
You can also provide the exact segments to convert:

```bash
uv run python scripts/zed_batch_svo_grid_to_mp4.py \
  --segment <SEGMENT_DIR_A> \
  --segment <SEGMENT_DIR_B> \
  --jobs 2
```
Or preserve a precomputed CSV ordering:

```bash
uv run python scripts/zed_batch_svo_grid_to_mp4.py \
  --segments-csv <SEGMENTS_CSV> \
  --jobs 2 \
  --duration-seconds 2
```
The batch grid wrapper mirrors the grid encoder options, skips existing `<segment>/<segment>_grid.mp4` outputs by default, and returns a nonzero exit code if any segment fails.
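The default output naming and skip rule above can be sketched in a few lines; this models the documented `<segment>/<segment>_grid.mp4` convention, not the wrapper's internals.

```python
from pathlib import PurePosixPath

def grid_output_path(segment_dir: str) -> str:
    """Default output location: <segment>/<segment>_grid.mp4."""
    seg = PurePosixPath(segment_dir)
    return str(seg / f"{seg.name}_grid.mp4")

def should_convert(output_exists: bool, overwrite: bool = False) -> bool:
    """Existing outputs are skipped unless overwriting is requested."""
    return overwrite or not output_exists
```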
When you suspect a previous run left behind partial MP4 files, opt into `ffprobe` validation so broken existing outputs are treated as missing instead of skipped:

```bash
uv run python scripts/zed_batch_svo_grid_to_mp4.py \
  --dataset-root <DATASET_ROOT> \
  --probe-existing \
  --jobs 2
```
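One way to implement the validation described above (a sketch under stated assumptions, not the wrapper's actual code): run `ffprobe` against the existing file and treat any nonzero exit, such as the `moov atom not found` case, as a broken output to requeue. The exact probe flags here are a typical choice, not confirmed from the source.

```python
import subprocess

def ffprobe_cmd(path):
    """A typical ffprobe validity check: report errors only and ask for
    the container duration, relying on the exit status."""
    return ["ffprobe", "-v", "error", "-show_entries", "format=duration",
            "-of", "default=noprint_wrappers=1", str(path)]

def is_valid_mp4(path, run=subprocess.run):
    """Nonzero ffprobe exit (e.g. 'moov atom not found') means a partial
    write; the batch wrapper would treat the output as missing."""
    result = run(ffprobe_cmd(path), capture_output=True)
    return result.returncode == 0
```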
Use `--report-existing` to audit existing outputs without launching conversions. The report prints invalid existing files only, while the summary still includes valid and missing counts. This is useful for the partial-write failure mode currently seen as `moov atom not found` in some kindergarten grid MP4s:

```bash
uv run python scripts/zed_batch_svo_grid_to_mp4.py \
  --dataset-root <DATASET_ROOT> \
  --report-existing
```
Use `--dry-run` to preview what the batch wrapper would convert after applying skip logic. Combine it with `--probe-existing` when you want to see which broken existing outputs would be requeued:

```bash
uv run python scripts/zed_batch_svo_grid_to_mp4.py \
  <DATASET_ROOT> \
  --probe-existing \
  --dry-run
```
#### Expected CSV Input Format

The `--segments-csv` input expects a header row with at least a `segment_dir` column. Extra columns are allowed and ignored by the batch wrapper. `segment_dir` values may be absolute paths or paths relative to the CSV file's parent directory. Use `--csv-root` to override that base directory.

Repeated rows for the same `segment_dir` are allowed; the wrapper converts each unique segment once, preserving the first-seen CSV order. The repo includes a small example at `config/svo2_segments_sorted.sample.csv`:
```csv
timestamp,activity,group_path,segment_dir,camera,relative_path
2026-03-18T11-23-22,jump,jump/external/recording,jump/external/recording/2026-03-18T11-23-22,zed1,jump/external/recording/2026-03-18T11-23-22/2026-03-18T11-23-22_zed1.svo2
2026-03-18T11-23-22,jump,jump/external/recording,jump/external/recording/2026-03-18T11-23-22,zed2,jump/external/recording/2026-03-18T11-23-22/2026-03-18T11-23-22_zed2.svo2
```
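The CSV rules above (required `segment_dir` column, extra columns ignored, relative paths resolved against the CSV's parent, duplicates deduped in first-seen order) can be sketched as a small loader. This is an illustrative model, not the wrapper's actual parser.

```python
import csv
import io
from pathlib import PurePosixPath

def load_segments(csv_text: str, csv_root: str):
    """Return unique segment dirs in first-seen order, resolving
    relative entries against the CSV's base directory."""
    seen, ordered = set(), []
    for row in csv.DictReader(io.StringIO(csv_text)):
        p = PurePosixPath(row["segment_dir"])  # extra columns are ignored
        if not p.is_absolute():
            p = PurePosixPath(csv_root) / p
        key = str(p)
        if key not in seen:  # convert each unique segment once
            seen.add(key)
            ordered.append(key)
    return ordered
```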
### Batch ZED Segments To MCAP

This workflow depends on the `zed_svo_to_mcap` binary, which is only built when
the ZED SDK is detected during CMake configure.
Use the wrapper to recurse through a dataset root, run `zed_svo_to_mcap --segment-dir` on every matched multi-camera segment, and show interactive table progress on TTYs with durable text logging elsewhere:

```bash
uv run python scripts/zed_batch_svo_to_mcap.py \
  --dataset-root <DATASET_ROOT> \
  --recursive \
  --jobs 2 \
  --cuda-visible-devices GPU-9cc7b26e-90d4-0c49-4d4c-060e528ffba6 \
  --start-frame 10 \
  --end-frame 29
```
You can also preserve the precomputed kindergarten CSV ordering:

```bash
uv run python scripts/zed_batch_svo_to_mcap.py \
  --segments-csv <SEGMENTS_CSV> \
  --jobs 2 \
  --start-frame 10 \
  --end-frame 29
```
Enable per-camera pose export when the segment has valid tracking:

```bash
uv run python scripts/zed_batch_svo_to_mcap.py \
  --segment <SEGMENT_DIR> \
  --with-pose \
  --pose-config <POSE_CONFIG>
```
The batch MCAP wrapper writes `<segment>/<segment>.mcap` by default, skips existing outputs unless told otherwise, and returns a nonzero exit code if any segment fails. The repo includes a minimal pose config at `config/zed_pose_config.toml` so MCAP conversion does not depend on a separate `cv-mmap` checkout.

In bundled multi-camera timeline mode, `--start-frame` and `--end-frame` mean the first and last emitted bundle indices from the common start timestamp, inclusive.

When stderr is attached to a TTY, `zed_batch_svo_to_mcap.py` uses a `progress-table` view by default; otherwise it emits line-oriented start/completion/failure logs plus periodic heartbeat summaries. Use `--progress-ui table` or `--progress-ui text` to override the automatic mode selection.
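The inclusive bundle-index semantics above can be sketched as a range helper: with `--start-frame 10 --end-frame 29`, exactly 20 bundles are emitted. The function name is illustrative.

```python
def bundle_indices(total_bundles, start_frame=None, end_frame=None):
    """Bundled-timeline meaning of --start-frame/--end-frame: first and
    last emitted bundle indices from the common start, inclusive."""
    start = 0 if start_frame is None else start_frame
    end = total_bundles - 1 if end_frame is None else min(end_frame, total_bundles - 1)
    return list(range(start, end + 1))
```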
Bundled MCAP export now defaults to `--bundle-policy nearest`. That mode emits one `/bundle` manifest message per bundle timestamp on the common timeline and keeps the original per-camera timestamps on `/zedN/video`, `/zedN/depth`, and optional `/zedN/pose`. Faster cameras are sampled onto the slowest common timeline there, so they can end up with the same message count as slower cameras. Consumers that care about grouping should follow `/bundle` instead of inferring bundle membership from identical message timestamps.

Use `--bundle-policy strict` when you want thresholded grouping; `--sync-tolerance-ms` only applies in that strict mode. Use `--bundle-policy copy` when you want one MCAP containing all camera namespaces with their original per-camera cadence and no `/bundle` manifest. `copy` disables `--start-frame`, `--end-frame`, and `--sync-tolerance-ms`; `--copy-range common|full` controls whether it trims to the overlap window or preserves each camera's full timestamp range.

Single-source `zed_svo_to_mcap` now writes the one-camera `copy` shape by default, so `foo_zed4.svo2` exports namespaced topics like `/zed4/video` and `/zed4/depth` with no `/bundle`. See [docs/mcap_layout.md](./docs/mcap_layout.md) for the current bundled/copy contract and [docs/mcap_legacy_single_camera_layout.md](./docs/mcap_legacy_single_camera_layout.md) for the separate legacy `/camera/*` reference.
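The `nearest` sampling described above can be sketched as nearest-neighbor matching of one camera's timestamps onto the common bundle timeline: no threshold, so every bundle timestamp gets a frame, and a faster camera is downsampled onto the slower timeline. This is a model of the documented behavior, not the exporter's code.

```python
import bisect

def nearest_sample(bundle_ts, camera_ts):
    """For each bundle timestamp on the common timeline, pick the index
    of the nearest frame in one camera's sorted timestamp list."""
    picked = []
    for t in bundle_ts:
        i = bisect.bisect_left(camera_ts, t)
        # candidate neighbors on either side of the insertion point
        candidates = [j for j in (i - 1, i) if 0 <= j < len(camera_ts)]
        picked.append(min(candidates, key=lambda j: abs(camera_ts[j] - t)))
    return picked
```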
For the simple non-GUI path, use `scripts/mcap_rgbd_example.py` and [docs/mcap_recipes.md](./docs/mcap_recipes.md). That helper supports current `bundled` and `copy` MCAPs, and it also accepts the legacy `/camera/*` shape by treating it as a single-camera stream with the literal label `camera`.

For calibration-based depth/RGB mapping, use `scripts/mcap_depth_alignment.py` and [docs/depth_alignment.md](./docs/depth_alignment.md). That helper explains the current affine mapping implied by the exported calibration topics and can export example aligned-depth and overlay PNGs from a chosen MCAP frame.
### MCAP RGBD Viewer

The repo includes an example RGB+depth viewer at `scripts/mcap_rgbd_viewer.py`. It supports legacy standalone `/camera/*` MCAPs, bundled `/bundle` + `/zedN/*` MCAPs, and `copy` MCAPs with namespaced `/{label}/*` topics and no `/bundle`, including the default single-source output from `zed_svo_to_mcap`.

Install the optional viewer dependencies first:

```bash
uv sync --extra viewer
```
Then launch the interactive viewer:

```bash
uv run --extra viewer python scripts/mcap_rgbd_viewer.py \
  /workspaces/data/kindergarten/bar/2026-03-18T11-59-41/2026-03-18T11-59-41.mcap \
  --camera-label zed1
```
You can also use the same script without a GUI to inspect metadata or render a preview PNG:

```bash
uv run --extra viewer python scripts/mcap_rgbd_viewer.py \
  --summary-only \
  /workspaces/data/kindergarten/bar/2026-03-18T11-59-41/2026-03-18T11-59-41.mcap
```

```bash
uv run --extra viewer python scripts/mcap_rgbd_viewer.py \
  --camera-label zed2 \
  --frame-index 150 \
  --export-preview /tmp/mcap_bundled_gap_preview.png \
  /workspaces/data/kindergarten/throw/2026-03-18T12-58-13/2026-03-18T12-58-13.mcap
```
The viewer depends on `ffmpeg` being on `PATH` so it can build a seek-friendly preview cache for H.264/H.265 MCAP video streams.

This is intentionally a simple preview script: it transcodes only the RGB video stream into a temporary intra-frame `mjpeg` cache and then uses that same cache for both scrubbing and normal playback. Depth data is not transcoded to `mjpeg`; it stays in the temporary raw depth cache and is decoded and color-mapped on demand.
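An intra-frame cache of the kind described above could be produced with an `ffmpeg` invocation along these lines. This is an illustrative sketch of the design choice (every MJPEG frame is a keyframe, so seeking is cheap), not the viewer's actual command.

```python
def mjpeg_cache_cmd(src, dst):
    """An illustrative ffmpeg command for a seek-friendly preview cache:
    transcode only the first video stream to intra-only MJPEG."""
    return ["ffmpeg", "-y", "-i", str(src),
            "-map", "0:v:0", "-an",        # RGB video stream only, no audio
            "-c:v", "mjpeg", "-q:v", "3",  # intra-frame codec: cheap scrubbing
            str(dst)]
```

Because H.264/H.265 streams have long GOPs, scrubbing them directly would require decoding from the previous keyframe; an all-keyframe cache trades disk space for instant random access.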
### Why Mixed Hardware/Software Mode Exists

Bundled MCAP export opens one video encoder per camera stream. A four-camera segment therefore consumes four H.264/H.265 encoder sessions at once.

This matters because NVIDIA's NVENC session limit is separate from raw CUDA utilization. In NVIDIA's Video Codec SDK documentation, non-qualified systems are capped at 8 concurrent encode sessions across all non-qualified GPUs in the system, and NVIDIA's SDK readme still calls out a 5-session GeForce limit in some contexts. In practice, consumer/GeForce hosts often hit NVENC session-init failures before the GPUs look "full" in `nvidia-smi`.
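The session arithmetic above is simple enough to sketch: concurrent NVENC sessions scale with hardware jobs times cameras per segment, and that product has to fit under the driver cap. The helper names are illustrative.

```python
def nvenc_sessions_needed(hardware_jobs, cameras_per_segment):
    """Bundled export opens one encoder per camera stream, so sessions
    scale multiplicatively with concurrent hardware jobs."""
    return hardware_jobs * cameras_per_segment

def fits_session_cap(hardware_jobs, cameras_per_segment, cap=8):
    """cap=8 matches the documented non-qualified-system limit; some
    GeForce contexts are capped at 5."""
    return nvenc_sessions_needed(hardware_jobs, cameras_per_segment) <= cap
```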
That is why the batch wrapper supports mixed pools such as two NVENC workers plus two software-encoded workers:

```bash
uv run python scripts/zed_batch_svo_to_mcap.py \
  --dataset-root <DATASET_ROOT> \
  --recursive \
  --overwrite \
  --hardware-jobs 2 \
  --hardware-cuda-visible-devices 0,1 \
  --software-jobs 2 \
  --software-cuda-visible-devices 0,1 \
  --depth-mode neural_plus
```
With bundled four-camera segments, `4` all-hardware jobs would try to open about `16` NVENC sessions, which is why mixed mode is the safe default for high-throughput rebuilds on GeForce-class machines. The software workers still use the GPUs for ZED neural depth; only video encoding moves to CPU.

If you intentionally want to bypass NVIDIA's consumer NVENC session cap, there is an unofficial driver patch at [`keylase/nvidia-patch`](https://github.com/keylase/nvidia-patch). That can make larger all-hardware batches viable, but it is not NVIDIA-supported and should be treated as an explicit ops decision rather than a project requirement.
Use `--probe-existing` to validate existing MCAPs before skipping them. Invalid outputs are treated as missing and requeued:

```bash
uv run python scripts/zed_batch_svo_to_mcap.py \
  --dataset-root <DATASET_ROOT> \
  --probe-existing \
  --jobs 2
```
Use `--report-existing` to audit existing MCAPs without launching conversions:

```bash
uv run python scripts/zed_batch_svo_to_mcap.py \
  --dataset-root <DATASET_ROOT> \
  --report-existing
```
Use `--dry-run` to preview what would be converted after applying skip or probe logic:

```bash
uv run python scripts/zed_batch_svo_to_mcap.py \
  --segments-csv <SEGMENTS_CSV> \
  --probe-existing \
  --dry-run
```
This repo keeps the live downstream streamer/runtime plus the MCAP contract docs such as [docs/mcap_layout.md](./docs/mcap_layout.md), [docs/mcap_legacy_single_camera_layout.md](./docs/mcap_legacy_single_camera_layout.md), and [docs/mcap_body_tracking.md](./docs/mcap_body_tracking.md).
### Mandatory Acceptance (Standalone)