Qualcomm Robotics RB5/RB6 - Neural Processing SDK: Example project

In this section, we are going to see how to work with the Neural Processing SDK. We will convert a trained model to a Deep Learning Container (DLC) file, run that model in a live pipeline using the GStreamer plugin named qtimlesnpe together with the tools from the SDK, and finally measure its performance. Before continuing with these steps, the previous sections should be completed; if they are not, please check our sections Downloading Requirements, Setup SDK Environment, and Install qtimlesnpe.

Download ML Models and Convert them to DLC Format

We are going to use a script included in the SDK that downloads the Inception v3 TensorFlow model and converts it to the DLC format.

Inception v3

The Neural Processing SDK does not come with any model files; instead, it includes scripts that download some popular public models and convert them into the Deep Learning Container (DLC) format needed by the qtimlesnpe element. In this example, we are using the Inception v3 TensorFlow model for classification. The SDK has a Python script that downloads the frozen model and converts it to a DLC file. Please follow the next steps:

1. Open a terminal on the host computer where you downloaded your Neural Processing SDK, and go to the SDK root directory.

cd $SNPE_ROOT
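
If you are not sure whether the SNPE_ROOT variable is still set from the Setup SDK Environment section, you can run a quick sanity check (the printed path depends on where you extracted the SDK):

echo $SNPE_ROOT

If this prints an empty line, source the SDK environment again before continuing.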


2. Since this model comes from TensorFlow, we need to set up the environment for this framework. First, let's check where TensorFlow is installed with the following command:

python -m pip show tensorflow | grep Location

The output of the above command should be similar to the following:

Location: /home/user/.local/lib/python3.6/site-packages


3. Now, let's define an environment variable for this path. TensorFlow is installed in the path shown above, but we need to append the package's specific directory at the end, like the following:

export TENSORFLOW_DIR=/home/user/.local/lib/python3.6/site-packages/tensorflow
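
Alternatively, you can derive the variable directly from the pip output; this is just a convenience one-liner that assumes pip reports a single Location line:

export TENSORFLOW_DIR="$(python -m pip show tensorflow | awk '/^Location/ {print $2}')/tensorflow"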


4. Let's set up the SDK environment for the chosen framework.

source bin/envsetup.sh -t $TENSORFLOW_DIR

You should see the following output:

[INFO] Setting TENSORFLOW_HOME=/home/user/.local/lib/python3.6/site-packages/tensorflow
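
As an optional check that envsetup.sh added the SDK tools to your PATH (tool names and availability may vary between SDK versions):

which snpe-tensorflow-to-dlc snpe-dlc-info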


5. Create a new temporary folder for the download.

mkdir -p ~/tmpdir


6. Finally, let's download the Inception v3 model and convert it to a DLC file.

python models/inception_v3/scripts/setup_inceptionv3.py -a ~/tmpdir -d

You should see an output similar to the one below:

2023-03-15 15:44:04,338 - 214 - INFO - Processed 0 quantization encodings
2023-03-15 15:44:04,742 - 214 - INFO - INFO_INITIALIZATION_SUCCESS: 
2023-03-15 15:44:04,991 - 214 - INFO - INFO_CONVERSION_SUCCESS: Conversion completed successfully
2023-03-15 15:44:05,350 - 214 - INFO - INFO_WRITE_SUCCESS: 
INFO: Setup inception_v3 completed.
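
Under the hood, the setup script downloads the frozen graph and runs the SDK's TensorFlow-to-DLC converter on it. Once it finishes, you can optionally inspect the resulting model with the SDK's snpe-dlc-info tool (the exact output depends on your SDK version):

snpe-dlc-info -i models/inception_v3/dlc/inception_v3.dlc

This prints the layers, dimensions, and metadata of the converted network, which is a quick way to confirm the conversion produced a valid DLC.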


7. Now, we are going to transfer the DLC and labels files to the board. Here, qrb5 refers to the board's hostname or SSH alias; replace it with your board's address.

scp models/inception_v3/dlc/inception_v3.dlc qrb5:/data/misc/camera/.
scp models/inception_v3/data/imagenet_slim_labels.txt qrb5:/data/misc/camera/.
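
You can optionally verify the transfer from the host (this assumes the same qrb5 SSH alias used above):

ssh qrb5 "ls -lh /data/misc/camera/"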


You can now continue with the GStreamer pipeline section to check how to use your model with qtimlesnpe.

GStreamer pipeline

Now that we have our model in the DLC format, we can use it with the qtimlesnpe element in GStreamer! In the previous steps, we transferred the DLC and labels files to the board, specifically to the /data/misc/camera/ directory. We are now going to build our pipeline and use these files. We are using the following pipeline:

gst-launch-1.0 qtiqmmfsrc ! "video/x-raw,format=NV12,width=1280,height=720,framerate=30/1,camera=0" ! queue ! qtimlesnpe model=/data/misc/camera/inception_v3.dlc labels=/data/misc/camera/imagenet_slim_labels.txt postprocessing="classification" runtime=1 ! queue ! qtioverlay ! waylandsink qos=false enable-last-sample=false sync=false fullscreen=true


In the pipeline above, we use qtiqmmfsrc to capture frames from the camera on the Qualcomm Robotics RB5/RB6. We then set the caps of the capture, where we specify the format, resolution, and framerate. Then comes our main element, qtimlesnpe, where we specify the paths of the model and the labels, the type of postprocessing we are using (classification, in our case), and finally the runtime where the model executes. The qtimlesnpe plugin has 4 possible runtime options[1]:

  • 0 - CPU: Runs the model on the CPU. Supports 32-bit floating point or 8-bit quantized execution.
  • 1 - DSP: Runs the model on Hexagon DSP using Q6 and Hexagon NN, executing on HVX; supports 8-bit quantized execution.
  • 2 - GPU: Runs the model on the GPU; supports hybrid or full 16-bit floating point modes.
  • 3 - AIP: Runs the model on Hexagon DSP using Q6, Hexagon NN, and HTA; supports 8-bit quantized execution.

Later in the pipeline, we use qtioverlay to draw the metadata from the model on our video frames, and finally display them on the monitor with the waylandsink element.

Measurements

For the pipeline from the GStreamer pipeline section, we measured the CPU usage percentage and framerate across the available runtimes. To measure the former, we used the top command, pressing Shift+i to get the average CPU usage percentage per core. For the latter, we used the gst-perf plugin from RidgeRun, which computes the mean FPS; we report the average of 30 samples.
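
For reference, the framerate measurement only requires inserting the perf element from gst-perf right before the sink. The following is a sketch of the measurement pipeline, assuming gst-perf is already installed on the board:

gst-launch-1.0 qtiqmmfsrc ! "video/x-raw,format=NV12,width=1280,height=720,framerate=30/1,camera=0" ! queue ! qtimlesnpe model=/data/misc/camera/inception_v3.dlc labels=/data/misc/camera/imagenet_slim_labels.txt postprocessing="classification" runtime=1 ! queue ! qtioverlay ! perf ! waylandsink qos=false enable-last-sample=false sync=false fullscreen=true

The perf element periodically prints the mean FPS to the console without altering the video path.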

Table 1: Performance of the pipeline with the different runtimes.
Runtime   CPU (%)   Framerate (fps)
CPU       88.5      5.284
DSP       86.4      5.301

References

  1. Snapdragon NPE Runtime. Retrieved March 14, 2023, from [1]

