# Setup with Nvidia-Jetson-Orin
Initial setup and installation of RapidPoseTriangulation on an Nvidia Jetson device. Tested with a Jetson AGX Orin Developer Kit module.
## Base installation

- Install newest software image: (https://developer.nvidia.com/sdk-manager)
- Initialize system: (https://developer.nvidia.com/embedded/learn/get-started-jetson-agx-orin-devkit)
- Install basic tools:
  ```
  sudo apt install -y curl nano wget git
  sudo apt install -y terminator
  ```
- Test docker is working:
  ```
  sudo docker run --rm hello-world
  ```
- Enable docker without sudo: (https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user)
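  The linked guide boils down to adding your user to the docker group; roughly (commands taken from the Docker post-install docs, check the page for the current steps):
  ```
  sudo groupadd docker            # the group may already exist
  sudo usermod -aG docker $USER   # add the current user to it
  newgrp docker                   # or log out and back in
  docker run --rm hello-world     # should now work without sudo
  ```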
- Enable GPU-access for docker building:
  Run `sudo nano /etc/docker/daemon.json` and add:
  ```
  {
      "runtimes": {
          "nvidia": {
              "args": [],
              "path": "nvidia-container-runtime"
          }
      },
      "default-runtime": "nvidia"
  }
  ```
  Restart docker:
  ```
  sudo systemctl restart docker
  ```
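  As a quick sanity check that the new default runtime is active (output wording varies between Docker versions):
  ```
  docker info | grep -i runtime
  # should list "nvidia" among the runtimes and as the default runtime
  ```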
- Install vs-code: (https://code.visualstudio.com/docs/setup/linux)
- Test docker and the GPU containers are working:
  ```
  docker run --rm hello-world
  docker run -it --rm --runtime=nvidia --network=host -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-base:r36.2.0
  docker run -it --rm --runtime=nvidia --network=host dustynv/onnxruntime:1.20-r36.4.0
  ```
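  If the X11 test containers cannot open windows on the host display, allowing local connections to the X server usually helps (assuming a local X session is running):
  ```
  xhost +local:
  ```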
- Check cuda access in container:
  ```
  python3 -c 'import torch; print(torch.cuda.is_available());'
  ```
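  The torch check only works in containers that ship PyTorch. For the onnxruntime image started above, an equivalent check (assuming the onnxruntime Python package is installed there, as the image name suggests) is to list the available execution providers:
  ```
  python3 -c 'import onnxruntime as ort; print(ort.get_available_providers())'
  # GPU access is available if CUDAExecutionProvider (or TensorrtExecutionProvider) appears in the list
  ```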
## RPT installation

- Build docker container:
  ```
  docker build --progress=plain -f extras/jetson/dockerfile -t rapidposetriangulation .
  ./run_container.sh
  ```
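  `run_container.sh` starts the freshly built image; if you need to launch it manually, a rough equivalent (the exact mounts and flags used by the script are an assumption) is:
  ```
  docker run -it --rm --runtime=nvidia --network=host \
      -v "$(pwd)":/RapidPoseTriangulation \
      rapidposetriangulation
  ```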
- Build rpt package inside container:
  ```
  cd /RapidPoseTriangulation/swig/ && make all && cd ../tests/ && python3 test_interface.py && cd ..
  ```
- Test with samples:
  ```
  python3 /RapidPoseTriangulation/scripts/test_triangulate.py
  ```