<noinclude>
 
{{GstInference/Head|previous=Supported backends/Tensorflow-Lite|next=Supported_backends/TensorRT|metakeywords=GstInference backends,Tensorflow,Google,Jetson-TX2,Jetson-TX1,Xavier,Nvidia,Deep Neural Networks,DNN,DNN Model,Neural Compute API}}
 
</noinclude>
 
<!-- If you want a custom title for the page, un-comment and edit this line:
-->
  
{{DISPLAYTITLE:GstInference and Coral backend|noerror}}
  
The [https://cloud.google.com/edge-tpu '''Edge TPU'''] is an ASIC designed by Google to provide high-performance machine learning inference on embedded devices. The Edge TPU library extends the TensorFlow Lite framework. For more information about the Edge TPU hardware, go to the [[Google_Coral|Coral dev board]] page.
  
To use the Edge TPU backend on Gst-Inference, be sure to run the R2Inference configure step with the flag <code>-Denable-coral=true</code>. Then, use the property <code>backend=coral</code> on the Gst-Inference plugins. GstInference depends on the C++ API of TensorFlow-Lite.
  
 
==Installation==
 
GstInference depends on the C++ API of TensorFlow-Lite. For installation steps, follow the [[R2Inference/Getting_started/Building_the_library|R2Inference/Building the library]] guide.
  
The TensorFlow Python API and utilities can be installed with pip, but they are not needed by GstInference.
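
If you do want the Python utilities (for example, to try the tools mentioned below), a typical installation could look like this (a minimal sketch; package versions and virtual environments are up to you):

<syntaxhighlight lang="bash">
# Optional: install the TensorFlow Python package with pip (not required by GstInference)
pip3 install tensorflow
</syntaxhighlight>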
  
 
== Enabling the backend ==
 
To enable the Coral backend for GstInference you need to install R2Inference with TensorFlow-Lite (which is a dependency) and EdgeTPU support. To do this, use the option <code>-Denable-coral=true</code> while following this [[R2Inference/Getting_started/Building_the_library|wiki]].
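
As a reference, a minimal build sequence could look like the following. This is a sketch that assumes the Meson-based build described in the R2Inference wiki; adjust the build directory and install step to your setup:

<syntaxhighlight lang="bash">
# Sketch: configure, build and install R2Inference with Coral/EdgeTPU support enabled
meson build -Denable-coral=true
ninja -C build
sudo ninja -C build install
</syntaxhighlight>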
  
 
==Properties==
 
The [https://www.tensorflow.org/lite/api_docs/cc TensorFlow Lite API Reference] has full documentation of the TensorFlow-Lite C++ API. Gst-Inference uses only the C++ API of TensorFlow-Lite; R2Inference takes care of device handling and model loading.
  
 
The following syntax is used to change backend options on Gst-Inference plugins:

<syntaxhighlight lang="bash">
backend::<property>
</syntaxhighlight>
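
To see which properties a given element and its backend expose on your installation, you can inspect the element. This is a quick check; the <code>mobilenetv2</code> element is used here only as an example:

<syntaxhighlight lang="bash">
# List the element's properties, including the backend selector and any backend:: options
gst-inspect-1.0 mobilenetv2
</syntaxhighlight>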
  
For example, to use the Coral backend with the <code>mobilenetv2</code> plugin, run a pipeline like this:
  
 
<syntaxhighlight lang="bash">
 
<syntaxhighlight lang="bash">
 
gst-launch-1.0 \  
 
gst-launch-1.0 \  
mobilenetv2 name=net model-location=mobilenet_v2_1.0_224_quant_edgetpu.tflite backend=edgetpu \
+
mobilenetv2 name=net model-location=mobilenet_v2_1.0_224_quant_edgetpu.tflite backend=coral \
 
filesrc location=video_stream.mp4 ! decodebin ! videoconvert ! videoscale ! queue ! tee name=t \
 
filesrc location=video_stream.mp4 ! decodebin ! videoconvert ! videoscale ! queue ! tee name=t \
 
t. ! queue ! videoconvert ! videoscale !  net.sink_model \
 
t. ! queue ! videoconvert ! videoscale !  net.sink_model \
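
The model in the example above (<code>mobilenet_v2_1.0_224_quant_edgetpu.tflite</code>) is a quantized TensorFlow-Lite model that has already been compiled for the Edge TPU. If you start from a plain quantized <code>.tflite</code> model, it is typically compiled with Google's <code>edgetpu_compiler</code>. A sketch, where the input file name is only a placeholder (see Coral's documentation for the full workflow):

<syntaxhighlight lang="bash">
# Sketch: compile a quantized TFLite model for the Edge TPU
# (by default the compiler writes mobilenet_v2_1.0_224_quant_edgetpu.tflite to the current directory)
edgetpu_compiler mobilenet_v2_1.0_224_quant.tflite
</syntaxhighlight>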

To learn more about the EdgeTPU C++ API, please check the Tensorflow-Lite API section on the R2Inference subwiki.
 
==Tools==
 
The TensorFlow Python API installation includes a tool named TensorBoard, which can be used to visualize a model. For some examples and a more complete description, please check the [[R2Inference/Supported_backends/TensorFlow#Tools|Tools]] section on the R2Inference wiki.
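
As a minimal sketch of how it is launched (assuming TensorFlow was installed with pip and that you have a log or model summary directory to point it at; <code>./logs</code> is only a placeholder path):

<syntaxhighlight lang="bash">
# Launch TensorBoard and open the reported URL (usually http://localhost:6006) in a browser
tensorboard --logdir ./logs
</syntaxhighlight>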
  
 
<noinclude>
 
{{GstInference/Foot|Supported backends/Tensorflow-Lite|Supported_backends/TensorRT}}
 
</noinclude>
