NVIDIA Xavier - JetPack 5.0.2 - Components - TensorRT

From RidgeRun Developer Connection









TensorRT is a C++ library that facilitates high-performance inference on NVIDIA platforms. It is designed to work with the most popular deep learning frameworks, such as TensorFlow, Caffe, and PyTorch. It focuses specifically on running an already trained model; for training, other libraries such as cuDNN are better suited. Some frameworks, like TensorFlow, have integrated TensorRT so that it can accelerate inference from within the framework. For other frameworks, like Caffe, a parser is provided to generate a model that can be imported into TensorRT. For more information on using this library, read our wiki here.






Previous: JetPack 5.0.2/Components/MultimediaAPI Index Next: JetPack 5.0.2/Components/Cuda