Latest revision as of 12:55, 13 February 2023

==Parsing Caffe model for TensorRT==
The process for Caffe models is fairly similar to that for TensorFlow models. The key difference is that you do not need to generate a UFF model file: a Caffe model file (.caffemodel), together with its deploy prototxt, can be imported directly by TensorRT's Caffe parser.
Loading a Caffe model is demonstrated in an example that NVIDIA provides with TensorRT, named sample_mnist. For more details on this example, please refer to the C++ API section.
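As a rough sketch of what sample_mnist does, the snippet below parses a Caffe model directly with TensorRT's ICaffeParser. It assumes TensorRT (with the NvCaffeParser component) is installed on the system; the file names "mnist.prototxt" / "mnist.caffemodel" and the output blob name "prob" are placeholders you would replace with your own model's values.

```cpp
// Sketch: importing a Caffe model into TensorRT without any UFF step.
// Assumes a TensorRT version that still ships the Caffe parser
// (it was deprecated in TensorRT 7 and later removed).
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Minimal logger required by the TensorRT builder
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetworkV2(0U);
    ICaffeParser* parser = createCaffeParser();

    // Parse the deploy prototxt and the trained weights directly;
    // no intermediate model conversion is needed for Caffe models.
    const IBlobNameToTensor* blobNameToTensor =
        parser->parse("mnist.prototxt",   // network description (placeholder)
                      "mnist.caffemodel", // trained weights (placeholder)
                      *network,
                      DataType::kFLOAT);
    if (!blobNameToTensor)
    {
        std::cerr << "Failed to parse Caffe model" << std::endl;
        return 1;
    }

    // Mark the blob holding the network output ("prob" is a placeholder)
    network->markOutput(*blobNameToTensor->find("prob"));

    // ... configure the builder and build the engine as usual ...

    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```

From here the workflow is identical to the TensorFlow case: the populated INetworkDefinition is handed to the builder to produce a serialized engine.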