NVIDIA Xavier - JetPack 5.0.2 - Components - TensorRT

From RidgeRun Developer Connection









TensorRT is a C++ library that enables high-performance inference on NVIDIA platforms. It is designed to work with the most popular deep learning frameworks, such as TensorFlow, Caffe, and PyTorch. It focuses specifically on running an already trained model; for training, other libraries such as cuDNN are better suited. Some frameworks, like TensorFlow, have integrated TensorRT so that it can be used to accelerate inference from within the framework. For other frameworks, like Caffe, a parser is provided to generate a model that can be imported into TensorRT. For more information on using this library, read our wiki here.
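As a sketch of the parse-and-import workflow described above, the following C++ snippet builds a serialized TensorRT engine from a trained model using the ONNX parser shipped with TensorRT 8.x (the version in JetPack 5.0.2). The model path `model.onnx` and output path `model.engine` are placeholders; this assumes the TensorRT SDK is installed and the program is linked against `nvinfer` and `nvonnxparser`.

```cpp
#include <cstdint>
#include <fstream>
#include <iostream>
#include <memory>

#include "NvInfer.h"
#include "NvOnnxParser.h"

// Minimal logger required by the TensorRT builder API.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
};

int main() {
    Logger logger;

    // Create the builder and an explicit-batch network definition.
    auto builder = std::unique_ptr<nvinfer1::IBuilder>(
        nvinfer1::createInferBuilder(logger));
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(
        builder->createNetworkV2(1U << static_cast<uint32_t>(
            nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH)));

    // Parse the already trained model (placeholder path).
    auto parser = std::unique_ptr<nvonnxparser::IParser>(
        nvonnxparser::createParser(*network, logger));
    if (!parser->parseFromFile("model.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        std::cerr << "Failed to parse model.onnx" << std::endl;
        return 1;
    }

    // Build a serialized engine; FP16 is worth enabling on Xavier,
    // which has hardware support for it.
    auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(
        builder->createBuilderConfig());
    config->setFlag(nvinfer1::BuilderFlag::kFP16);
    auto engine = std::unique_ptr<nvinfer1::IHostMemory>(
        builder->buildSerializedNetwork(*network, *config));
    if (!engine) {
        std::cerr << "Engine build failed" << std::endl;
        return 1;
    }

    // Save the engine so it can be deserialized later for inference.
    std::ofstream out("model.engine", std::ios::binary);
    out.write(static_cast<const char*>(engine->data()),
              static_cast<std::streamsize>(engine->size()));
    return 0;
}
```

The serialized engine can later be loaded with `nvinfer1::createInferRuntime` and `deserializeCudaEngine`, avoiding the (slow) build step at every launch.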





