Difference between revisions of "Xavier/Deep Learning/TensorRT/Parsing Caffe"

From RidgeRun Developer Connection
==Parsing Caffe model for TensorRT==
 
Latest revision as of 12:55, 13 February 2023











The process for Caffe models is very similar to the one for TensorFlow models. The key difference is that you do not need to generate an intermediate UFF model file: a Caffe model file (.caffemodel), together with its deploy .prototxt, can be imported directly by TensorRT.
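As a minimal sketch of the direct import path, the legacy nvcaffeparser1 C++ API (shipped with the TensorRT versions available for Xavier) can parse the .prototxt and .caffemodel pair straight into a network definition. The file names, the output blob name "prob", and the builder settings below are placeholder assumptions; adapt them to your own model:

```cpp
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Minimal logger required by the TensorRT builder.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();

    // Parse the Caffe deploy file and weights directly; no UFF step needed.
    // "deploy.prototxt" and "net.caffemodel" are placeholder file names.
    ICaffeParser* parser = createCaffeParser();
    const IBlobNameToTensor* blobNameToTensor =
        parser->parse("deploy.prototxt", "net.caffemodel",
                      *network, DataType::kFLOAT);

    // Mark the network output; "prob" is the usual softmax blob name,
    // but the actual name depends on your prototxt.
    network->markOutput(*blobNameToTensor->find("prob"));

    // Build the optimized inference engine.
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    // The engine would normally be serialized or used for inference here.
    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```

Note that building the engine requires a CUDA-capable device, so this sketch only runs on the target board with TensorRT installed.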

Loading a Caffe model is demonstrated by an example that NVIDIA provides with TensorRT, named sample_mnist. For more details on this example, please refer to the C++ API section.


