# Setup with Nvidia-Jetson-Orin

Initial setup and installation of _RapidPoseTriangulation_ on an _Nvidia Jetson_ device. \
Tested with a _Jetson AGX Orin Developer Kit_ module.
## Base installation

- Install the newest software image: \
  (https://developer.nvidia.com/sdk-manager)
  - Use the manual recovery mode setup for the first installation
  - Find the _ip-address_ of the _Jetson_ for the runtime component installation with:
    ```bash
    sudo nmap -sn $(ip route get 1 | awk '{print $(NF-2);exit}')/24
    ```
- Initialize the system: \
  (https://developer.nvidia.com/embedded/learn/get-started-jetson-agx-orin-devkit)
  - Connect via _ssh_ (using _screen_ did not work) and skip the _oem-config_ step
  - Skip the installation of _nvidia-jetpack_
- Install basic tools:
  ```bash
  sudo apt install -y curl nano wget git
  ```
- Update the hostname:
  ```bash
  sudo nano /etc/hostname
  sudo nano /etc/hosts
  sudo reboot
  ```
- Enable maximum performance mode:
  ```bash
  sudo nvpmodel -m 0
  sudo jetson_clocks
  ```
- Test that docker is working:
  ```bash
  sudo docker run --rm hello-world
  ```
- Enable _docker_ without _sudo_: \
  (https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user)
- Enable GPU access for docker building:
  - Run `sudo nano /etc/docker/daemon.json` and add:
    ```json
    {
        "runtimes": {
            "nvidia": {
                "args": [],
                "path": "nvidia-container-runtime"
            }
        },
        "default-runtime": "nvidia"
    }
    ```
  - Restart docker: `sudo systemctl restart docker`
  - Test that docker and GPU access are working (a provider check is sketched after this list):
    ```bash
    docker run --rm hello-world
    docker run -it --rm --runtime=nvidia --network=host dustynv/onnxruntime:1.20-r36.4.0
    ```
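To confirm that the container really sees the GPU, the ONNX Runtime build inside the `dustynv/onnxruntime` image can be asked for its execution providers. This is only a minimal sketch and assumes the image ships the Python bindings of ONNX Runtime:

```python
# check_providers.py - run inside the dustynv/onnxruntime container.
# Assumes the image ships the Python bindings of ONNX Runtime.
import onnxruntime as ort

# List the execution providers compiled into this build; on a correctly
# configured Jetson the TensorRT and CUDA providers should be present.
providers = ort.get_available_providers()
print("Available providers:", providers)

for expected in ("TensorrtExecutionProvider", "CUDAExecutionProvider"):
    print(f"{expected}: {'OK' if expected in providers else 'MISSING'}")
```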
## RPT installation

- Build docker container:
  ```bash
  docker build --progress=plain -f extras/jetson/dockerfile -t rapidposetriangulation .
  ./run_container.sh
  ```
- Build _rpt_ package inside container (a linkage check of the resulting binary is sketched after this list):
  ```bash
  cd /RapidPoseTriangulation/swig/ && make all && cd ../tests/ && python3 test_interface.py && cd ..
  cd /RapidPoseTriangulation/scripts/ && \
      g++ -std=c++2a -fPIC -O3 -march=native -Wall -Werror -flto=auto -fopenmp -fopenmp-simd \
      -I /RapidPoseTriangulation/rpt/ \
      -isystem /usr/include/opencv4/ \
      -isystem /usr/local/include/onnxruntime/ \
      -L /usr/local/lib/ \
      test_skelda_dataset.cpp \
      /RapidPoseTriangulation/rpt/*.cpp \
      -o test_skelda_dataset.bin \
      -Wl,--start-group \
      -lonnxruntime_providers_tensorrt \
      -lonnxruntime_providers_shared \
      -lonnxruntime_providers_cuda \
      -lonnxruntime \
      -Wl,--end-group \
      $(pkg-config --libs opencv4) \
      -Wl,-rpath,/onnxruntime/build/Linux/Release/ \
      && cd ..
  ```
- Test with samples:
  ```bash
  python3 /RapidPoseTriangulation/scripts/test_triangulate.py
  ```
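Before running the samples, it can help to confirm that the freshly built binary resolves all of its shared libraries, in particular the ONNX Runtime providers reached via the `-rpath` above. A minimal sketch, assuming the binary sits at the path produced by the build step:

```python
#!/usr/bin/env python3
# check_linkage.py - verify that test_skelda_dataset.bin resolves all of its
# shared-library dependencies (ONNX Runtime providers, OpenCV, ...).
import subprocess
import sys

# Path assumed from the build step above; adjust if the -o target changed.
BINARY = "/RapidPoseTriangulation/scripts/test_skelda_dataset.bin"

# ldd prints one dependency per line; unresolved ones are marked "not found".
result = subprocess.run(["ldd", BINARY], capture_output=True, text=True, check=True)
missing = [line.strip() for line in result.stdout.splitlines() if "not found" in line]

if missing:
    print("Unresolved libraries:")
    print("\n".join(missing))
    sys.exit(1)
print("All shared libraries resolved.")
```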
## ROS interface

- Build docker container:
  ```bash
  docker build --progress=plain -f extras/ros/dockerfile -t rapidposetriangulation_ros .
  ```
- Run and test (a discovery check is sketched after this list):
  ```bash
  docker compose -f extras/jetson/docker-compose.yml up
  docker exec -it jetson-test_node-1 bash
  export ROS_DOMAIN_ID=18
  ```
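To verify that the container can discover the rest of the ROS 2 graph after setting `ROS_DOMAIN_ID`, a small `rclpy` probe can list the visible topics. This is only a sketch; it assumes `rclpy` is available inside the `rapidposetriangulation_ros` image, and the node name `rpt_topic_probe` is purely hypothetical:

```python
#!/usr/bin/env python3
# list_topics.py - quick ROS 2 discovery check inside the container.
# Run after `export ROS_DOMAIN_ID=18`; assumes rclpy is installed in the image.
import time

import rclpy


def main() -> None:
    rclpy.init()
    node = rclpy.create_node("rpt_topic_probe")  # hypothetical probe-node name
    try:
        time.sleep(2.0)  # give DDS discovery a moment to populate the graph
        for name, types in node.get_topic_names_and_types():
            print(f"{name}: {', '.join(types)}")
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == "__main__":
    main()
```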