# Setup with Nvidia-Jetson-Orin
Initial setup and installation of _RapidPoseTriangulation_ on an _Nvidia Jetson_ device. \
Tested with a _Jetson AGX Orin Developer Kit_ module.
<br>
## Base installation
- Install the newest software image via the SDK Manager: \
(https://developer.nvidia.com/sdk-manager)
- Initialize the system following the official getting-started guide: \
(https://developer.nvidia.com/embedded/learn/get-started-jetson-agx-orin-devkit)
- Install basic tools:
```bash
sudo apt install -y curl nano wget git
sudo apt install -y terminator
```
- Test that _docker_ is working (with _sudo_):
```bash
sudo docker run --rm hello-world
```
- Enable _docker_ without _sudo_ (summarized in the snippet below): \
(https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user)
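The linked guide essentially adds the current user to the `docker` group; a minimal sketch of those commands:
```bash
sudo groupadd docker            # group may already exist
sudo usermod -aG docker $USER   # add the current user to it
newgrp docker                   # apply without logging out
```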
- Enable GPU access for docker builds: \
Run `sudo nano /etc/docker/daemon.json` and add:
```json
{
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    },
    "default-runtime": "nvidia"
}
```
Restart docker with `sudo systemctl restart docker` and verify with the check below.
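A quick way to confirm that the _nvidia_ runtime is registered and set as default:
```bash
docker info | grep -i runtime
# should list the "nvidia" runtime and show it as the default runtime
```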
- Install _VS Code_: \
(https://code.visualstudio.com/docs/setup/linux)
- Test that _docker_ works without _sudo_ and with the GPU runtime (see the X11 note below if GUI output is needed):
```bash
docker run --rm hello-world
docker run -it --rm --runtime=nvidia --network=host -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-base:r36.2.0
docker run -it --rm --runtime=nvidia --network=host dustynv/onnxruntime:1.20-r36.4.0
```
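If the container is supposed to open windows through the mounted X11 socket, the host may first need to allow local connections, e.g.:
```bash
xhost +local:   # allow local clients (such as containers) to use the X server
```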
- Check _cuda_ access inside the container:
```bash
python3 -c 'import torch; print(torch.cuda.is_available());'
```
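Assuming the _dustynv/onnxruntime_ image from above is used, the available _onnxruntime_ execution providers can be checked the same way; `CUDAExecutionProvider` should appear in the list:
```bash
python3 -c 'import onnxruntime as ort; print(ort.get_available_providers())'
```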
- Enable maximum performance mode:
```bash
sudo nvpmodel -m 0
sudo jetson_clocks
```
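To confirm the selected power mode and monitor clocks and utilization afterwards, the stock Jetson tools can be used:
```bash
sudo nvpmodel -q    # query the currently active power mode
sudo tegrastats     # live CPU/GPU/memory statistics (Ctrl+C to stop)
```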
<br>
## RPT installation
- Build the docker container (from the repository root) and start it:
```bash
docker build --progress=plain -f extras/jetson/dockerfile -t rapidposetriangulation .
./run_container.sh
```
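If `run_container.sh` needs to be adapted, a manual launch could look roughly like the following; the mounts and flags are assumptions modelled on the base-image test above:
```bash
docker run -it --rm --runtime=nvidia --network=host \
    -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix \
    -v "$(pwd)":/RapidPoseTriangulation \
    rapidposetriangulation
```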
- Build the _rpt_ package inside the container and run the interface test:
```bash
cd /RapidPoseTriangulation/swig/ && make all && cd ../tests/ && python3 test_interface.py && cd ..
```
- Test with samples:
```bash
python3 /RapidPoseTriangulation/scripts/test_triangulate.py
```