GstInference with NCSDK backend

The Intel® Movidius™ Neural Compute SDK (Intel® Movidius™ NCSDK) enables deployment of deep neural networks (DNNs) on compatible devices such as the Intel® Movidius™ Neural Compute Stick. The NCSDK includes a set of software tools to compile, profile, and validate DNNs, as well as C/C++ and Python APIs for application development.

To use the NCSDK backend with GstInference, configure R2Inference with the --enable-ncsdk flag and set the property backend=ncsdk on the GstInference plugins. A complete pipeline example is shown in the Properties section below.

Installation

You can install the NCSDK directly on a system running Linux, in a Docker container, on a virtual machine, or in a Python virtual environment. All the possible installation paths are documented in the official installation guide.

We also provide an installation guide with troubleshooting tips on the Intel Movidius Installation wiki page.

Note: We recommend the Docker container route for the NCSDK installation. Other routes may affect your Python environment, because the installer sometimes uninstalls and reinstalls Python and common packages such as NumPy or TensorFlow. The Docker installation is very simple and does not affect your environment at all; see the Docker section of the installation guide.

Enabling the backend

To enable NCSDK as a backend for GstInference you need to build R2Inference with NCSDK support. To do this, use the --enable-ncsdk option when configuring R2Inference:

./autogen.sh --enable-ncsdk
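
After configuring, build and install as usual. This is a sketch of the standard autotools flow; whether the install step requires sudo depends on your installation prefix.

# Build R2Inference and install it
make
sudo make install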

Generating a graph

The GstInference NCSDK backend uses the same graphs as the NCSDK API. These graphs are specially compiled to run inference on a Neural Compute Stick (NCS). The NCSDK provides a tool (mvNCCompile) to generate NCS graphs from either a TensorFlow frozen model or a Caffe model and weights, as sketched below. For more examples on how to generate a graph, please check the Generating a model for R2I section on the R2Inference wiki.
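
For instance, a Caffe model might be compiled into an NCS graph along these lines. This is only a sketch: the file names, the input/output node names (-in/-on), and the number of shaves (-s) are placeholders for your own network.

# Caffe: compile a network description plus weights into an NCS graph
mvNCCompile deploy.prototxt -w weights.caffemodel -s 12 -in data -on prob -o graph_googlenet

# TensorFlow: compile a frozen model
mvNCCompile frozen_graph.pb -s 12 -in input -on output -o graph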

Properties

You can find the full documentation of the C API and the Python API in the official NCSDK documentation. GstInference uses only the C API, and R2Inference takes care of devices, graphs, models, and FIFOs. Because of this, we will only look at the options that you can change when using the C API through R2Inference.

The following syntax is used to change backend options on GstInference plugins:

backend::<property>

For example, to change the NCSDK API log level of the googlenet plugin, run the pipeline like this:

gst-launch-1.0 \
googlenet name=net model-location=/root/r2inference/examples/r2i/ncsdk/graph_googlenet backend=ncsdk backend::log-level=1 \
videotestsrc ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! net.sink_bypass \
net.src_bypass ! fakesink

The backend::log-level=1 section of the pipeline sets the NC_RW_LOG_LEVEL option of the NCSDK C API to 1.

To learn more about the NCSDK C API options, please check the NCSDK API section on the R2Inference subwiki.
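
To check which properties an element exposes, gst-inspect-1.0 lists them (note that the backend::<property> options are documented in the R2Inference subwiki and may not appear in the element's property listing):

gst-inspect-1.0 googlenet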

Tools

The NCSDK installation includes some useful tools to analyze, optimize, and compile models. We mention these tools here, but for examples and a more complete description please check the NCSDK wiki page on the R2Inference subwiki. Example invocations are sketched after the list below.

  • mvNCCheck: Checks the validity of a Caffe or TensorFlow model on a neural compute device. The check is done by running an inference both on the device and in software, and then comparing the results to determine whether the network passes or fails.
  • mvNCCompile: Compiles a network and weights files from Caffe or TensorFlow models into a graph file that is compatible with the NCAPI.
  • mvNCProfile: Compiles a network, runs it on a connected neural compute device, and outputs profiling information to the terminal and to an HTML file. The profiling data contains layer performance and execution time of the model. The HTML version of the report also contains a graphical representation of the model.
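
For illustration, typical invocations might look like the following; again, the file names, node names, and shave count are placeholders for your own network.

# Validate a Caffe model by comparing device results against a software reference
mvNCCheck deploy.prototxt -w weights.caffemodel -s 12 -in data -on prob

# Profile the same network on a connected neural compute device
mvNCProfile deploy.prototxt -w weights.caffemodel -s 12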

