Using TensorRT runtime directly.
@@ -49,6 +49,28 @@ python3 /RapidPoseTriangulation/extras/mmdeploy/add_extra_steps.py
<br>
## TensorRT
Run the following directly in the inference container, so that the engines are built with the same TensorRT version that will be used at runtime (TensorRT engines are not portable across versions):
```bash
# File-name suffix of the FP16 exports.
export withFP16="_fp16"

# Build the person-detector engine (RTMDet-nano) from its ONNX export.
trtexec --fp16 \
    --onnx=/RapidPoseTriangulation/extras/mmdeploy/exports/rtmdet-nano_320x320"$withFP16"_extra-steps.onnx \
    --saveEngine=end2end.engine

mv ./end2end.engine /RapidPoseTriangulation/extras/mmdeploy/exports/rtmdet-nano_1x320x320x3"$withFP16"_extra-steps.engine

# Build the pose-estimator engine (RTMPose-m) from its ONNX export.
trtexec --fp16 \
    --onnx=/RapidPoseTriangulation/extras/mmdeploy/exports/rtmpose-m_384x288"$withFP16"_extra-steps.onnx \
    --saveEngine=end2end.engine

mv ./end2end.engine /RapidPoseTriangulation/extras/mmdeploy/exports/rtmpose-m_1x384x288x3"$withFP16"_extra-steps.engine
```
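
The resulting `.engine` files can then be deserialized with the TensorRT Python runtime directly. The following is only a minimal sketch, not code from this repository: it assumes TensorRT ≥ 8.5 (for the tensor-based I/O API) is available inside the same container, and it uses the detector engine file name produced by the commands above.

```python
# Minimal sketch (assumption, not repository code): load a trtexec-built engine
# with the TensorRT Python runtime and inspect its I/O tensors.
import tensorrt as trt

ENGINE_PATH = (
    "/RapidPoseTriangulation/extras/mmdeploy/exports/"
    "rtmdet-nano_1x320x320x3_fp16_extra-steps.engine"
)

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

# Deserialize the engine serialized by trtexec above.
with open(ENGINE_PATH, "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# Print each I/O tensor's name, direction, and shape to confirm the
# expected 1x320x320x3 input layout before wiring up inference.
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    print(name, engine.get_tensor_mode(name), engine.get_tensor_shape(name))
```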
<br>
## Benchmark
```bash