Parsing Caffe model for TensorRT

The process for Caffe models is fairly similar to that for TensorFlow models. The key difference is that you don't need to generate a UFF model file: a Caffe model file (.caffemodel), together with its deploy prototxt, can be imported directly by TensorRT.

Loading a Caffe model is demonstrated by an example provided by NVIDIA with TensorRT named sample_mnist. For more details on this example, please refer to the C++ API section.
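
Below is a minimal sketch of how a Caffe model can be parsed with the TensorRT C++ API, following the general pattern used by sample_mnist (TensorRT 5.x era API). The file names deploy.prototxt and model.caffemodel, and the output blob name "prob", are placeholders; replace them with the files and blob names of your own network.

// Minimal sketch: build a TensorRT engine directly from a Caffe model,
// without generating an intermediate UFF file (TensorRT 5.x API).
// "deploy.prototxt", "model.caffemodel" and the blob name "prob" are placeholders.
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// TensorRT requires a logger implementation
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Create the builder and an empty network definition
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();

    // Parse the Caffe deploy file and weights directly into the network
    ICaffeParser* parser = createCaffeParser();
    const IBlobNameToTensor* blobNameToTensor =
        parser->parse("deploy.prototxt", "model.caffemodel", *network, DataType::kFLOAT);
    if (!blobNameToTensor)
    {
        std::cerr << "Failed to parse the Caffe model" << std::endl;
        return 1;
    }

    // Mark the network output (the blob name comes from the prototxt)
    network->markOutput(*blobNameToTensor->find("prob"));

    // Build the optimized engine
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20); // 16 MB of scratch space
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    // The engine can now be serialized or used for inference
    // (see the C++ API section). Clean up in reverse order.
    parser->destroy();
    network->destroy();
    builder->destroy();
    if (engine)
        engine->destroy();
    shutdownProtobufLibrary();
    return 0;
}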


