# cv-mmap Streamer
A standalone C++ downstream project that reads frames from cv-mmap IPC, encodes with NVIDIA NVENC (with software fallback), and publishes RTMP + RTP streams with low-latency tuning on localhost.
## Overview
This project consumes video frames from the cv-mmap shared memory interface and publishes them as encoded streams. It operates as a downstream consumer only, never writing to the cv-mmap shared memory.
**Key Features:**
- Reads cv-mmap IPC frames via POSIX shared memory + ZeroMQ frame sync
- Consumes cv-mmap control/status/body over NATS
- NVENC H.264/H.265 encoding with deterministic software fallback
- RTP UDP-unicast publisher with automatic SDP generation
- RTMP publisher with dual H.265 modes (Enhanced-RTMP + domestic extension)
- Embedded standalone testers for server-independent validation
- Low-latency bounded queues with latest-frame semantics
## Quickstart

### Prerequisites
- C++23 compatible compiler (GCC 13+, Clang 16+)
- CMake 3.20+
- GStreamer 1.20+ with development headers
- ZeroMQ (cppzmq) with development headers
- NATS server reachable at runtime
- spdlog
- NVIDIA GPU with NVENC support (optional, falls back to software encoding)
Arch Linux:

```bash
sudo pacman -S cmake gstreamer gst-plugins-base gst-plugins-good \
    gst-plugins-bad gst-plugins-ugly gst-libav cppzmq spdlog
```
### Build
cvmmap-streamer uses `CVMMAP_CNATS_PROVIDER` to decide how cnats is resolved:

- `system` (default): use an installed `cnats` package, typically from a top-level `cv-mmap` install under a standard prefix like `/usr/local`
- `workspace`: use the local `cv-mmap` build-tree exports
```bash
cmake -B build -S .
cmake --build build
```
When the ZED SDK is available, the build also enables zed_svo_to_mcap and
zed_svo_to_mp4 automatically. When the SDK is absent, those tools are skipped
and the main streamer plus non-ZED testers still build normally.
`zed_svo_grid_to_mp4` remains optional and additionally requires OpenCV. Disable it explicitly with:

```bash
cmake -B build -S . -DCVMMAP_BUILD_ZED_SVO_GRID_TO_MP4=OFF
```
```bash
# Use a local cv-mmap build tree
cmake -B build -S . \
  -DCVMMAP_CNATS_PROVIDER=workspace \
  -DCVMMAP_LOCAL_ROOT=/path/to/cv-mmap
cmake --build build
```
Verify binaries exist:

```bash
ls -la build/{cvmmap_streamer,rtp_receiver_tester,rtmp_stub_tester}
```
## ZED SVO/SVO2 To MP4
This tool is only built when the ZED SDK is detected during CMake configure.
The repo also includes an offline conversion tool for the left ZED color stream:
```bash
CUDA_VISIBLE_DEVICES=GPU-9cc7b26e-90d4-0c49-4d4c-060e528ffba6 \
./build/bin/zed_svo_to_mp4 \
  --input <SVO_INPUT> \
  --encoder-device auto \
  --preset balanced \
  --quality 20 \
  --start-frame 0 \
  --end-frame 89
```
By default the tool writes `foo.mp4` next to `foo.svo` or `foo.svo2`, defaults to `h265`, and shows a tqdm-like progress bar when stderr is attached to a TTY. `--encoder-device auto` tries NVENC first and falls back to software (libx264 or libx265) if the hardware encoder is unavailable or cannot be opened.
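The default output-path rule described above is simple enough to sketch in Python (illustrative only, not the tool's actual implementation):

```python
from pathlib import Path

def default_output_path(svo_input: str) -> Path:
    """Mirror the documented default: write foo.mp4 next to foo.svo/foo.svo2."""
    return Path(svo_input).with_suffix(".mp4")

# default_output_path("clips/foo.svo2") -> Path("clips/foo.mp4")
```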
## Batch ZED SVO2 To MP4
Python dependencies for the batch wrapper are managed with uv:
```bash
uv sync
```
Expected multi-camera dataset layout:

```
<DATASET_ROOT>/
├── svo2_segments_sorted.csv
├── bar/
│   └── 2026-03-18T11-59-41/
│       ├── 2026-03-18T11-59-41_zed1.svo2
│       ├── 2026-03-18T11-59-41_zed2.svo2
│       ├── 2026-03-18T11-59-41_zed3.svo2
│       └── 2026-03-18T11-59-41_zed4.svo2
└── jump/
    └── experiment/
        └── 1/
            └── 2026-03-18T11-26-23/
                ├── 2026-03-18T11-26-23_zed1.svo2
                ├── 2026-03-18T11-26-23_zed2.svo2
                ├── 2026-03-18T11-26-23_zed3.svo2
                └── 2026-03-18T11-26-23_zed4.svo2
```
Placeholders used below:

- `<DATASET_ROOT>`: dataset root containing multi-camera segment directories
- `<SEGMENT_DIR>`: one multi-camera segment directory containing `*_zedN.svo` or `*_zedN.svo2`
- `<SEGMENT_DIR_A>`, `<SEGMENT_DIR_B>`: explicit segment directories
- `<SEGMENTS_CSV>`: CSV file with a `segment_dir` column, for example `config/svo2_segments_sorted.sample.csv`
- `<SVO_INPUT>`: one single-camera `.svo` or `.svo2` file
- `<POSE_CONFIG>`: TOML file such as `config/zed_pose_config.toml`
Use the wrapper to recurse through a folder, run zed_svo_to_mp4 on every matched .svo2, and show one aggregate tqdm progress bar:
```bash
uv run python scripts/zed_batch_svo_to_mp4.py \
  <DATASET_ROOT>/bar \
  --pattern '*.svo2' \
  --recursive \
  --jobs 2 \
  --encoder-device auto \
  --start-frame 0 \
  --end-frame 29 \
  --cuda-visible-devices GPU-9cc7b26e-90d4-0c49-4d4c-060e528ffba6
```
The batch tool mirrors the common encoder options from zed_svo_to_mp4, skips existing sibling .mp4 outputs by default, and continues after failures while returning a nonzero exit code if any conversion fails.
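The skip-and-continue contract can be sketched in Python (illustrative only; the real logic lives in `scripts/zed_batch_svo_to_mp4.py`):

```python
from pathlib import Path

def plan_conversions(inputs):
    """Default skip rule: an input with an existing sibling .mp4 is not requeued."""
    return [p for p in inputs if not p.with_suffix(".mp4").exists()]

def run_all(jobs, convert):
    """Run every job, continue past failures, report overall success."""
    failures = 0
    for job in jobs:
        try:
            convert(job)
        except Exception:
            failures += 1  # a failed conversion does not abort the batch
    return 1 if failures else 0  # nonzero exit if any conversion failed
```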
## ZED SVO Grid To MP4

This tool is only built when the ZED SDK is detected and `CVMMAP_BUILD_ZED_SVO_GRID_TO_MP4=ON`.
Use the grid converter to merge four synced ZED recordings into a 2x2 CCTV-style MP4 with a Unix timestamp overlay in the top-left corner:
```bash
./build/bin/zed_svo_grid_to_mp4 \
  --segment-dir <SEGMENT_DIR> \
  --encoder-device auto \
  --codec h265 \
  --duration-seconds 2
```
The tool syncs the four inputs using the same common-start timestamp rule as the ZED multi-camera playback sample, defaults to a 2x2 layout ordered as zed1 zed2 / zed3 zed4, and writes `<segment>/<segment>_grid.mp4` unless `--output` is provided. By default each tile is scaled to 0.5x, so a four-camera 1920x1200 segment produces a 1920x1200 composite. Use repeated `--input` flags instead of `--segment-dir` when you want explicit row-major ordering.
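The composite geometry follows directly from the tile scale; as a quick arithmetic sketch (not tool code):

```python
def grid_output_size(tile_w, tile_h, scale=0.5, cols=2, rows=2):
    """Composite size for a rows x cols grid of uniformly scaled tiles."""
    return (int(tile_w * scale) * cols, int(tile_h * scale) * rows)

# Four 1920x1200 cameras at the default 0.5x tile scale -> 1920x1200 composite.
```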
Use the batch wrapper to run zed_svo_grid_to_mp4 over many segment directories with one aggregate progress bar:
```bash
uv run python scripts/zed_batch_svo_grid_to_mp4.py \
  --dataset-root <DATASET_ROOT> \
  --recursive \
  --jobs 2 \
  --encoder-device auto \
  --duration-seconds 2
```
You can also provide the exact segments to convert:
```bash
uv run python scripts/zed_batch_svo_grid_to_mp4.py \
  --segment <SEGMENT_DIR_A> \
  --segment <SEGMENT_DIR_B> \
  --jobs 2
```
Or preserve a precomputed CSV ordering:
```bash
uv run python scripts/zed_batch_svo_grid_to_mp4.py \
  --segments-csv <SEGMENTS_CSV> \
  --jobs 2 \
  --duration-seconds 2
```
The batch grid wrapper mirrors the grid encoder options, skips existing <segment>/<segment>_grid.mp4 outputs by default, and returns a nonzero exit code if any segment fails.
When you suspect a previous run left behind partial MP4 files, opt into ffprobe validation so broken existing outputs are treated as missing instead of skipped:
```bash
uv run python scripts/zed_batch_svo_grid_to_mp4.py \
  --dataset-root <DATASET_ROOT> \
  --probe-existing \
  --jobs 2
```
Use `--report-existing` to audit existing outputs without launching conversions. The report prints only the invalid existing files, while the summary still includes valid and missing counts. This is useful for the partial-write failure mode currently seen as `moov atom not found` in some kindergarten grid MP4s:
```bash
uv run python scripts/zed_batch_svo_grid_to_mp4.py \
  --dataset-root <DATASET_ROOT> \
  --report-existing
```
Use --dry-run to preview what the batch wrapper would convert after applying skip logic. Combine it with --probe-existing when you want to see which broken existing outputs would be requeued:
```bash
uv run python scripts/zed_batch_svo_grid_to_mp4.py \
  <DATASET_ROOT> \
  --probe-existing \
  --dry-run
```
### Expected CSV Input Format
The `--segments-csv` input expects a header row with at least a `segment_dir` column. Extra columns are allowed and ignored by the batch wrapper. `segment_dir` values may be absolute paths or paths relative to the CSV file's parent directory. Use `--csv-root` to override that base directory.
Repeated rows for the same `segment_dir` are allowed; the wrapper converts each unique segment once, preserving the first-seen CSV order. The repo includes a small example at `config/svo2_segments_sorted.sample.csv`:

```csv
timestamp,activity,group_path,segment_dir,camera,relative_path
2026-03-18T11-23-22,jump,jump/external/recording,jump/external/recording/2026-03-18T11-23-22,zed1,jump/external/recording/2026-03-18T11-23-22/2026-03-18T11-23-22_zed1.svo2
2026-03-18T11-23-22,jump,jump/external/recording,jump/external/recording/2026-03-18T11-23-22,zed2,jump/external/recording/2026-03-18T11-23-22/2026-03-18T11-23-22_zed2.svo2
```
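The path-resolution and dedup rules can be sketched as follows (a hedged approximation; the batch wrappers implement the real logic):

```python
import csv
from pathlib import Path

def load_segment_dirs(csv_path, csv_root=None):
    """Read segment_dir values: relative paths resolve against the CSV's
    parent directory (or an explicit csv_root), and duplicate rows collapse
    to one entry each, preserving first-seen order."""
    csv_path = Path(csv_path)
    base = Path(csv_root) if csv_root else csv_path.parent
    ordered = {}  # dicts preserve insertion order
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):  # extra columns are simply ignored
            seg = Path(row["segment_dir"])
            resolved = seg if seg.is_absolute() else base / seg
            ordered.setdefault(resolved, None)
    return list(ordered)
```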
## Batch ZED Segments To MCAP

This workflow depends on the `zed_svo_to_mcap` binary, which is only built when the ZED SDK is detected during CMake configure.
Use the wrapper to recurse through a dataset root, run zed_svo_to_mcap --segment-dir on every matched multi-camera segment, and show interactive table progress on TTYs with durable text logging elsewhere:
```bash
uv run python scripts/zed_batch_svo_to_mcap.py \
  --dataset-root <DATASET_ROOT> \
  --recursive \
  --jobs 2 \
  --cuda-visible-devices GPU-9cc7b26e-90d4-0c49-4d4c-060e528ffba6 \
  --start-frame 10 \
  --end-frame 29
```
You can also preserve the precomputed kindergarten CSV ordering:
```bash
uv run python scripts/zed_batch_svo_to_mcap.py \
  --segments-csv <SEGMENTS_CSV> \
  --jobs 2 \
  --start-frame 10 \
  --end-frame 29
```
Enable per-camera pose export when the segment has valid tracking:
```bash
uv run python scripts/zed_batch_svo_to_mcap.py \
  --segment <SEGMENT_DIR> \
  --with-pose \
  --pose-config <POSE_CONFIG>
```
The batch MCAP wrapper writes <segment>/<segment>.mcap by default, skips existing outputs unless told otherwise, and returns a nonzero exit code if any segment fails.
The repo includes a minimal pose config at config/zed_pose_config.toml so MCAP conversion does not depend on a separate cv-mmap checkout.
In bundled multi-camera timeline mode, --start-frame and --end-frame mean the first and last emitted bundle indices from the common start timestamp, inclusive.
When stderr is attached to a TTY, zed_batch_svo_to_mcap.py uses a progress-table view by default; otherwise it emits line-oriented start/completion/failure logs plus periodic heartbeat summaries. Use --progress-ui table or --progress-ui text to override the automatic mode selection.
Bundled MCAP export now defaults to --bundle-policy nearest. That mode emits one /bundle manifest message per bundle timestamp on the common timeline and keeps the original per-camera timestamps on /zedN/video, /zedN/depth, and optional /zedN/pose. Faster cameras are sampled onto the slowest common timeline there, so they can end up with the same message count as slower cameras. Consumers that care about grouping should follow /bundle instead of inferring bundle membership from identical message timestamps.
Use --bundle-policy strict when you want thresholded grouping; --sync-tolerance-ms only applies in that strict mode. Use --bundle-policy copy when you want one MCAP containing all camera namespaces with their original per-camera cadence and no /bundle manifest. copy disables --start-frame, --end-frame, and --sync-tolerance-ms; --copy-range common|full controls whether it trims to the overlap window or preserves each camera’s full timestamp range.
Single-source zed_svo_to_mcap now writes the one-camera copy shape by default, so foo_zed4.svo2 exports namespaced topics like /zed4/video and /zed4/depth with no /bundle. See docs/mcap_layout.md for the current bundled/copy contract and docs/mcap_legacy_single_camera_layout.md for the separate legacy /camera/* reference.
For the simple non-GUI path, use scripts/mcap_rgbd_example.py and docs/mcap_recipes.md. That helper supports current bundled and copy MCAPs, and it also accepts the legacy /camera/* shape by treating it as a single-camera stream with the literal label camera.
For calibration-based depth/RGB mapping, use scripts/mcap_depth_alignment.py and docs/depth_alignment.md. That helper explains the current affine mapping implied by the exported calibration topics and can export example aligned-depth and overlay PNGs from a chosen MCAP frame.
## MCAP RGBD Viewer
The repo includes an example RGB+depth viewer at scripts/mcap_rgbd_viewer.py. It supports legacy standalone /camera/* MCAPs, bundled /bundle + /zedN/* MCAPs, and copy MCAPs with namespaced /{label}/* topics and no /bundle, including the default single-source output from zed_svo_to_mcap.
Install the optional viewer dependencies first:
```bash
uv sync --extra viewer
```
Then launch the interactive viewer:
```bash
uv run --extra viewer python scripts/mcap_rgbd_viewer.py \
  /workspaces/data/kindergarten/bar/2026-03-18T11-59-41/2026-03-18T11-59-41.mcap \
  --camera-label zed1
```
You can also use the same script without a GUI to inspect metadata or render a preview PNG:
```bash
uv run --extra viewer python scripts/mcap_rgbd_viewer.py \
  --summary-only \
  /workspaces/data/kindergarten/bar/2026-03-18T11-59-41/2026-03-18T11-59-41.mcap
```

```bash
uv run --extra viewer python scripts/mcap_rgbd_viewer.py \
  --camera-label zed2 \
  --frame-index 150 \
  --export-preview /tmp/mcap_bundled_gap_preview.png \
  /workspaces/data/kindergarten/throw/2026-03-18T12-58-13/2026-03-18T12-58-13.mcap
```
The viewer depends on ffmpeg being on PATH so it can build a seek-friendly preview cache for H.264/H.265 MCAP video streams.
This is intentionally a simple preview script: it transcodes only the RGB video stream into a temporary intra-frame mjpeg cache and then uses that same cache for both scrubbing and normal playback. Depth data is not transcoded to mjpeg; it stays in the temporary raw depth cache and is decoded and color-mapped on demand.
## Why Mixed Hardware/Software Mode Exists
Bundled MCAP export opens one video encoder per camera stream. A four-camera segment therefore consumes four H.264/H.265 encoder sessions at once.
This matters because NVIDIA's NVENC session limit is separate from raw CUDA utilization. In NVIDIA's Video Codec SDK documentation, non-qualified systems are capped at 8 concurrent encode sessions across all non-qualified GPUs in the system, and NVIDIA's SDK readme still calls out a 5-session GeForce limit in some contexts. In practice, consumer/GeForce hosts often hit NVENC session-init failures before the GPUs look "full" in nvidia-smi.
That is why the batch wrapper supports mixed pools such as two NVENC workers plus two software-encoded workers:
```bash
uv run python scripts/zed_batch_svo_to_mcap.py \
  --dataset-root <DATASET_ROOT> \
  --recursive \
  --overwrite \
  --hardware-jobs 2 \
  --hardware-cuda-visible-devices 0,1 \
  --software-jobs 2 \
  --software-cuda-visible-devices 0,1 \
  --depth-mode neural_plus
```
With bundled four-camera segments, 4 all-hardware jobs would try to open about 16 NVENC sessions, which is why mixed mode is the safe default for high-throughput rebuilds on GeForce-class machines. The software workers still use the GPUs for ZED neural depth; only video encoding moves to CPU.
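The session arithmetic behind that guidance (assuming, as described above, one encoder session per camera stream per job):

```python
def nvenc_sessions_needed(hardware_jobs, cameras_per_segment=4):
    """Bundled export opens one video encoder per camera stream, so the
    concurrent NVENC session demand is jobs x cameras."""
    return hardware_jobs * cameras_per_segment

# 4 all-hardware jobs on 4-camera segments demand 16 sessions, double the
# 8-session cap on non-qualified GPUs; 2 hardware jobs stay at the cap.
```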
If you intentionally want to bypass NVIDIA's consumer NVENC session cap, there is an unofficial driver patch at keylase/nvidia-patch. That can make larger all-hardware batches viable, but it is not NVIDIA-supported and should be treated as an explicit ops decision rather than a project requirement.
Use --probe-existing to validate existing MCAPs before skipping them. Invalid outputs are treated as missing and requeued:
```bash
uv run python scripts/zed_batch_svo_to_mcap.py \
  --dataset-root <DATASET_ROOT> \
  --probe-existing \
  --jobs 2
```
Use --report-existing to audit existing MCAPs without launching conversions:
```bash
uv run python scripts/zed_batch_svo_to_mcap.py \
  --dataset-root <DATASET_ROOT> \
  --report-existing
```
Use --dry-run to preview what would be converted after applying skip or probe logic:
```bash
uv run python scripts/zed_batch_svo_to_mcap.py \
  --segments-csv <SEGMENTS_CSV> \
  --probe-existing \
  --dry-run
```
## Mandatory Acceptance (Standalone)
Run the full mandatory acceptance suite. This executes the complete protocol/codec matrix without requiring external servers.
```bash
./scripts/acceptance_standalone.sh
```

Expected result: exit code 0 with a summary showing `total=5 pass=5 fail=0 skip=0`
Individual matrix rows verified:
- RTP + H.264
- RTP + H.265
- RTMP + H.264 (enhanced mode)
- RTMP + H.265 (enhanced mode)
- RTMP + H.265 (domestic mode)
## Fault Suite Baseline
Run the fault injection and latency validation suite.
```bash
./scripts/fault_suite.sh
```
Expected result: Exit code 0 with all scenarios passing.
Scenarios tested:
- Torn read handling (coherent snapshot validation)
- Sink stall resilience (backpressure containment)
- Reset storm recovery (stream reset handling)
## Manual Component Testing
1. Start the simulator:
```bash
./build/cvmmap_streamer \
  --run-mode pipeline \
  --codec h264 \
  --shm-name test_stream \
  --zmq-endpoint "ipc:///tmp/test_sync.ipc" \
  --input-mode dummy \
  --dummy-label teststream \
  --dummy-frames 300 \
  --dummy-fps 30 \
  --dummy-width 640 \
  --dummy-height 360
```
2. Test RTP output:
```bash
# Terminal 1: Start receiver tester
./build/rtp_receiver_tester \
  --port 5004 \
  --expect-pt 96 \
  --packet-threshold 1 \
  --timeout-ms 10000

# Terminal 2: Start streamer
./build/cvmmap_streamer \
  --run-mode pipeline \
  --codec h264 \
  --shm-name test_stream \
  --zmq-endpoint "ipc:///tmp/test_sync.ipc" \
  --rtp \
  --rtp-endpoint "127.0.0.1:5004" \
  --rtp-payload-type 96 \
  --rtp-sdp /tmp/test.sdp
```
3. Test RTMP output (enhanced mode):
```bash
# Terminal 1: Start RTMP stub tester
./build/rtmp_stub_tester \
  --mode h264 \
  --listen-host 127.0.0.1 \
  --listen-port 1935 \
  --video-threshold 1 \
  --timeout-ms 10000

# Terminal 2: Start streamer
./build/cvmmap_streamer \
  --run-mode pipeline \
  --codec h264 \
  --shm-name test_stream \
  --zmq-endpoint "ipc:///tmp/test_sync.ipc" \
  --rtmp \
  --rtmp-url "rtmp://127.0.0.1:1935/live/test" \
  --rtmp-mode enhanced
```
## Compatibility Matrix
| Protocol | Codec | RTMP Mode | Status | Notes |
|---|---|---|---|---|
| RTP | H.264 | N/A | MANDATORY | Full support |
| RTP | H.265 | N/A | MANDATORY | Full support |
| RTMP | H.264 | enhanced | MANDATORY | Legacy codec-id 7 |
| RTMP | H.265 | enhanced | MANDATORY | FourCC hvc1, Enhanced-RTMP spec |
| RTMP | H.265 | domestic | MANDATORY | FLV codec-id 12, legacy CDN compatibility |
| RTMP | H.264 | domestic | INVALID | Rejected at startup with clear error |
Legend:
- MANDATORY: Must pass for release acceptance
- INVALID: Explicitly rejected, exits non-zero
## Runtime Configuration

### Input Options
| Flag | Description | Default |
|---|---|---|
| `--shm-name NAME` | POSIX shared memory segment name | required |
| `--zmq-endpoint URI` | ZeroMQ PUB endpoint for frame sync | required |
| `--nats-url URL` | NATS server for control/status/body | `nats://localhost:4222` |
| `--queue-size N` | Ingest queue capacity (1 = latest-frame) | 1 |
### Codec Options

| Flag | Description |
|---|---|
| `--codec h264\|h265` | Video codec selection (required) |
### Output Options

| Flag | Description |
|---|---|
| `--rtp` | Enable RTP output |
| `--rtp-endpoint HOST:PORT` | RTP destination (required if `--rtp`) |
| `--rtp-payload-type PT` | Dynamic payload type [96,127] |
| `--rtp-sdp PATH` | SDP output path |
| `--rtmp` | Enable RTMP output |
| `--rtmp-url URL` | RTMP publish URL (required if `--rtmp`) |
| `--rtmp-mode enhanced\|domestic` | H.265 packaging mode (required for H.265) |
### Latency Knobs

| Flag | Description | Default |
|---|---|---|
| `--gop N` | GOP size (keyframe interval) | 30 |
| `--b-frames N` | B-frame count (0 = lowest latency) | 0 |
| `--queue-size N` | Ingest queue depth | 1 |
### Operational Limits

| Flag | Description | Default |
|---|---|---|
| `--ingest-max-frames N` | Process at most N frames, then exit | 0 (unlimited) |
| `--ingest-idle-timeout-ms MS` | Exit if idle for MS milliseconds; 0 disables the timeout | 0 (disabled) |
## Architecture

### Data Flow
```
cv-mmap producer ──> SHM + ZMQ sync ──> Ingest Runtime
                                              │
                                              v
                                      ┌───────────────┐
                                      │ Bounded Queue │
                                      │   (size=1)    │
                                      └───────┬───────┘
                                              │
                                              v
                                       NVENC Pipeline
                                     (NVENC -> fallback)
                                              │
                                      ┌───────┴───────┐
                                      v               v
                                RTP Publisher   RTMP Publisher
                                (UDP unicast)   (TCP + FLV)
```
### Key Design Decisions
**Latest-Frame Semantics:** The ingest queue has size 1 by default. When a new frame arrives while the previous one is still queued, the old frame is dropped. This prevents latency accumulation under backpressure.
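A minimal single-threaded sketch of these semantics (the real queue lives in the C++ ingest runtime and adds the locking this toy version omits):

```python
from collections import deque

class LatestFrameQueue:
    """Size-1 bounded queue: enqueueing while full silently evicts the
    stale frame, so a slow consumer always sees the freshest frame."""
    def __init__(self):
        self._q = deque(maxlen=1)

    def push(self, frame):
        self._q.append(frame)  # deque(maxlen=1) drops the old frame

    def pop(self):
        return self._q.popleft() if self._q else None
```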
**Coherent Snapshot:** Frame metadata is read twice, around the payload copy. If `frame_count` or `timestamp_ns` changed, the frame is rejected as torn. This prevents consuming partially updated frames.
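The double-read check can be sketched as follows (attribute names here are illustrative, not the actual cv-mmap contract):

```python
def read_frame_coherent(shm):
    """Sample (frame_count, timestamp_ns) before and after copying the
    payload; any change means the producer wrote mid-copy -> torn frame."""
    before = (shm.frame_count, shm.timestamp_ns)
    payload = bytes(shm.payload)  # copy while the producer may be writing
    after = (shm.frame_count, shm.timestamp_ns)
    return payload if before == after else None  # None = rejected as torn
```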
**NVENC with Fallback:** The pipeline attempts NVENC first for hardware acceleration. If NVENC produces zero encoded access units after 60 frames, it falls back to software encoding (`x264enc` or `x265enc`).
**Dual-Mode H.265:** H.265 over RTMP supports two packaging modes:

- **Enhanced-RTMP**: uses FourCC `hvc1`; the modern standard, supported by FFmpeg 6.0+, SRS 6.0+, and ZLMediaKit
- **Domestic extension**: uses FLV codec-id 12, for legacy Chinese CDN compatibility
The mode must be explicitly selected via `--rtmp-mode` and cannot be mixed within a session.
## Environment Caveats

### Simulator Label Length
Simulator labels (`--label`) have a hard maximum of 24 bytes. Exceeding this causes immediate exit with code 2. Use compact deterministic labels like `acc_1_rtp_h264` instead of descriptive names.
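A pre-flight check matching the documented limit (a sketch; the simulator binary enforces this itself):

```python
MAX_LABEL_BYTES = 24

def check_label(label: str) -> None:
    """Reject labels over 24 bytes, mirroring the simulator's exit code 2."""
    if len(label.encode("utf-8")) > MAX_LABEL_BYTES:
        raise SystemExit(2)

check_label("acc_1_rtp_h264")  # compact deterministic label: accepted
```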
### Deterministic Simulator Sizing
For reliable RTMP validation, use simulator frame sizes of at least 640x360. Smaller frames may trigger GStreamer caps negotiation failures before the first encoded access unit on some hosts.
### Build Path
Always use `downstream/cvmmap-streamer/build` for the build directory. Building in the repository root's `build/` folder causes cache collisions with the main cv-mmap project.
### Fresh Configure
If you encounter configure errors referencing sibling repo paths, run:
```bash
cmake --fresh -B build -S .
```
## Optional Server Smoke Tests
Interoperability tests with SRS and ZLMediaKit are provided for reference but are NOT mandatory for acceptance. See the guides under `docs/smoke/`. If the server environment is unavailable, these tests should be skipped without failing the mandatory acceptance criteria.
## Project Structure
```
cvmmap-streamer/
├── CMakeLists.txt                # Build configuration
├── README.md                     # This file
├── docs/
│   ├── smoke/
│   │   ├── srs.md                # SRS interoperability guide
│   │   └── zlm.md                # ZLMediaKit interoperability guide
│   ├── compat_matrix.md          # Detailed compatibility matrix
│   └── caveats.md                # Environment and operational caveats
├── include/cvmmap_streamer/      # Public headers
│   ├── config/
│   │   └── runtime_config.hpp
│   ├── ipc/
│   │   └── cvmmap_contract.hpp
│   └── pipeline/
│       └── pipeline_types.hpp
├── scripts/
│   ├── acceptance_standalone.sh  # Mandatory acceptance runner
│   ├── fault_suite.sh            # Fault injection suite
│   └── *_helper.py               # Summary generators
└── src/
    ├── config/                   # Runtime configuration
    ├── core/                     # Ingest runtime and supervision
    ├── ipc/                      # cv-mmap contract parsing
    ├── pipeline/                 # NVENC encoding
    ├── protocol/                 # RTP and RTMP publishers
    └── testers/                  # Simulator and test stubs
```
## Evidence Artifacts

All test runs produce machine-readable evidence in `.sisyphus/evidence/`:

- `task-14-acceptance.txt` - latest acceptance run metadata
- `task-14-acceptance-summary.json` - JSON summary of acceptance results
- `task-15-fault-suite.txt` - latest fault suite run metadata
- `task-15-fault-suite-summary.json` - JSON summary of fault suite results
Each run creates timestamped subdirectories with full logs for every matrix row or fault scenario.
## Exit Codes
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | Invalid arguments |
| 2 | Invalid arguments or configuration |
| 3 | RTP payload type mismatch |
| 4 | Packet/frame threshold not met |
| 5 | Pipeline initialization error (missing encoder) |
| 6 | RTMP mode mismatch (tester validation) |
| 7 | Protocol validation error |
| 124 | Timeout |
## References
- Enhanced RTMP Specification
- cv-mmap IPC Contract
- SRS Documentation: https://ossrs.io/lts/en-us/docs/v7/doc/rtmp
- ZLMediaKit: https://github.com/ZLMediaKit/ZLMediaKit