GstInference and NVIDIA DeepStream 1.5 nvcaffegie

DeepStream

[1] DeepStream SDK on Jetson uses JetPack, which includes L4T, Multimedia APIs, CUDA, and TensorRT. The SDK offers a rich collection of plug-ins and libraries, built using the GStreamer framework, to enable developers to build flexible applications for transforming video into valuable insights. DeepStream also comes with sample applications, including source code and an application adaptation guide, to help developers jumpstart their builds.

For testing, this wiki used a Jetson TX1. Required:

  • JetPack 3.2, which includes L4T R28.2, CUDA 9.0, TensorRT 3.0 GA, cuDNN 7.0.5, and VisionWorks 1.6
  • DeepStream for Jetson, available at https://developer.nvidia.com/deepstream-jetson (you need to sign in and download)
  • DeepStream download for Jetson and Tesla, available at: ftp://10.251.101.2/docs/Installers/Nvidia/

RidgeRun offers GstInference, the GStreamer front-end for R²Inference, the project that handles the abstraction over the different back-ends and frameworks. R²Inference knows how to deal with different vendor frameworks such as TensorFlow (x86, iMX8), OpenVX (x86, iMX8), Caffe (x86, NVIDIA), TensorRT (NVIDIA), or NCSDK (Intel), while exposing a generic, easy interface to the user.

  • Contact us if you have questions or doubts: https://www.ridgerun.com/contact
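
As a point of comparison with the nvcaffegie pipelines below, a GstInference pipeline follows the same source-inference-overlay pattern. A minimal sketch, assuming GstInference's tinyyolov2 element with the TensorFlow back-end; the model location and layer names in angle brackets are placeholders, and the exact element, pad, and property names may vary between GstInference versions:

gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
  t. ! queue ! videoscale ! net.sink_model \
  t. ! queue ! net.sink_bypass \
  tinyyolov2 name=net model-location=<graph.pb> backend=tensorflow \
  backend::input-layer=<input layer> backend::output-layer=<output layer> \
  net.src_bypass ! detectionoverlay ! videoconvert ! autovideosink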

Using the DeepStream demo on Jetson

  • This wiki is for DeepStream 1.5 on Jetson (tested on a TX1). DeepStream 3.0, available for Xavier, is not covered in this wiki.
# Extract the top-level SDK package
tar xpvf DeepStream_SDK_on_Jetson_1.5_pre-release.tbz2
# Install the SDK binaries and the sample models into the root filesystem
sudo tar xpvf deepstream_sdk_on_jetson.tbz2 -C /
sudo tar xpvf deepstream_sdk_on_jetson_models.tbz2 -C /
# Refresh the dynamic linker cache so the new libraries are found
sudo ldconfig
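
As a quick sanity check (assuming the packages installed the libraries into the standard linker path and the plug-ins into GStreamer's default search path), the new element should now be visible:

# Check that the DeepStream library is registered with the dynamic linker
ldconfig -p | grep -i nvcaffegie
# Check that GStreamer can load the nvcaffegie element
gst-inspect-1.0 nvcaffegie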

Run the demo (the video will be displayed on the HDMI output):

nvgstiva-app -c ${HOME}/configs/PGIE-FP16-CarType-CarMake-CarColor.txt 

Building the demo

Install and build:

sudo apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev

# Create an unversioned symlink so the linker can find libnvid_mapper
sudo ln -s /usr/lib/aarch64-linux-gnu/tegra/libnvid_mapper.so.1.0.0 \
             /usr/lib/aarch64-linux-gnu/libnvid_mapper.so

cd ${HOME}/nvgstiva-app_sources/nvgstiva-app

make

# Run the app with ./nvgstiva-app -c <config-file>
./nvgstiva-app -c ${HOME}/configs/PGIE-FP16-CarType-CarMake-CarColor.txt

Doing some analysis

  • The sample application is a GStreamer application that uses NVIDIA elements. By obtaining the DOT file we can see the elements used and their configuration; since decodebin and other similar bins are involved, the pipeline is extensive. One way to generate the DOT file is shown below.
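
GStreamer writes pipeline graphs whenever GST_DEBUG_DUMP_DOT_DIR is set and the application calls GST_DEBUG_BIN_TO_DOT_FILE() (gst-launch-1.0 does this on state changes; whether nvgstiva-app does depends on its implementation). A sketch:

# Have GStreamer dump pipeline graphs into /tmp
export GST_DEBUG_DUMP_DOT_DIR=/tmp
./nvgstiva-app -c ${HOME}/configs/PGIE-FP16-CarType-CarMake-CarColor.txt
# Render a dumped graph with Graphviz (the actual .dot file name varies)
dot -Tpng /tmp/<dump-name>.dot -o pipeline.png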

Basically the pipeline is composed of the following elements (in order of appearance; a simplified skeleton follows the list):

  • filesrc
  • decodebin, from MP4 to 720p NV12
  • nvvconv
  • nvcaffegie (this element receives the protofile, the Caffe model, and the model cache as parameters)
  • nvtracker
  • tee (with 4 outputs)
  • Three more nvcaffegie elements, each with a different model (car color, vehicle type, car make)
  • Each of these nvcaffegie elements goes into a fakesink
  • The fourth tee output goes to nvvconv
  • nvosd
  • nvoverlaysink
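
A simplified gst-launch-1.0 skeleton of that topology; the angle-bracket placeholders stand in for the many nvcaffegie properties shown in the full pipelines of the next section (nvvidconv is used here, as in those pipelines):

gst-launch-1.0 filesrc location=<video file> ! decodebin ! nvvidconv ! \
  nvcaffegie <primary detector properties> ! nvtracker ! tee name=t \
  t. ! queue ! nvcaffegie <car color model properties> ! fakesink \
  t. ! queue ! nvcaffegie <vehicle type model properties> ! fakesink \
  t. ! queue ! nvcaffegie <car make model properties> ! fakesink \
  t. ! queue ! nvvidconv ! nvosd ! nvoverlaysink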

Note: the NVIDIA elements are provided as binaries:

  • libnvcaffegie.so.1.0.0
  • libgstnvtracker.so
  • libgstnvclrdetector.so
  • libgstnvcaffegie.so

Testing with gst-launch

  • Pipeline with nvcamerasrc, one model:
GST_DEBUG=3 gst-launch-1.0 nvcamerasrc queue-size=6 sensor-id=0 fpsRange='30 30' \
! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)30/1, format=(string)I420' \
! queue ! nvvidconv ! nvcaffegie  model-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel" \
protofile-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_deploy_pruned.prototxt" \
model-cache="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel_b2_fp16.cache" \
labelfile-path="/home/nvidia/Model/ResNet_18/labels.txt" net-stride=16 batch-size=2 roi-top-offset="0,0:1,0:2,0:" \
roi-bottom-offset="0,0:1,0:2,0:" detected-min-w-h="0,0,0:1,0,0:2,0,0" detected-max-w-h="0,1920,1080:1,100,1080:2,1920,1080:" \
interval=1 parse-func=4 net-scale-factor=0.0039215697906911373 \
class-thresh-params="0,0.200000,0.100000,3,0:1,0.200000,0.100000,3,0:2,0.200000,0.100000,3,0:" \
output-bbox-layer-name=Layer11_bbox output-coverage-layer-names=Layer11_cov ! queue ! nvtracker \
! queue ! nvosd x-clock-offset=800 y-clock-offset=820 hw-blend-color-attr="3,1.000000,1.000000,0.000000:" \
! queue ! nvoverlaysink sync=false enable-last-sample=false
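
Note that net-scale-factor=0.0039215697906911373 is approximately 1/255, so the element scales 8-bit pixel values into the [0, 1] range the network expects.
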
  • Pipeline with nvcamerasrc and two Caffe models. It is better to put the pipeline in a script and execute it (see the sketch after the pipeline); the video runs and boxes are drawn, but no labels are shown.
gst-launch-1.0 nvcamerasrc queue-size=10 sensor-id=0 fpsRange='30 30' ! \
'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, \
framerate=(fraction)30/1, format=(string)I420' \
! queue ! nvvidconv ! \
nvcaffegie  \
class-thresh-params="0,0.200000,0.100000,3,0:1,0.200000,0.100000,3,0:2,0.200000,0.100000,3,0:" \
model-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel" \
protofile-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_deploy_pruned.prototxt" \
model-cache="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel_b2_fp16.cache" \
labelfile-path="/home/nvidia/Model/ResNet_18/labels.txt"  \
batch-size=2 \
roi-top-offset="0,0:1,0:2,0:" \
roi-bottom-offset="0,0:1,0:2,0:" \
detected-min-w-h="0,0,0:1,0,0:2,0,0" \
detected-max-w-h="0,1920,1080:1,100,1080:2,1920,1080:" \
interval=1 \
parse-func=4 \
net-scale-factor=0.0039215697906911373 \
output-bbox-layer-name=Layer11_bbox \
output-coverage-layer-names=Layer11_cov ! \
queue ! \
nvtracker \
! queue ! tee name=t ! queue ! nvosd x-clock-offset=800 y-clock-offset=820 hw-blend-color-attr="3,1.000000,1.000000,0.000000:" \
! nvvidconv ! nvoverlaysink sync=false async=false enable-last-sample=false \
t. ! queue ! \
nvcaffegie  \
gie-mode=2 \
gie-unique-id=5 \
infer-on-gie-id=1 \
class-thresh-params="0,1.000000,0.100000,3,2" \
infer-on-class-ids="2:" \
model-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/CarColorPruned.caffemodel" \
protofile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/deploy.prototxt" \
model-cache="/home/nvidia/Model/IVA_secondary_carcolor_V1/CarColorPruned.caffemodel_b2_fp16.cache" \
batch-size=2 \
detected-min-w-h="11,0,0:" \
detected-max-w-h="3,1920,1080:" \
roi-top-offset="0,0:" \
roi-bottom-offset="0,0:" \
model-color-format=1 \
meanfile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/mean.ppm" \
detect-clr="0:" \
labelfile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/labels.txt" \
sec-class-threshold=0.510000 \
parse-func=0 \
is-classifier=TRUE \
offsets="" \
output-coverage-layer-names="softmax" \
sgie-async-mode=TRUE  \
! fakesink async=false sync=false enable-last-sample=false
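
Since the command is long and heavily quoted, running it interactively is error-prone. A minimal wrapper, where two-gies.sh is a hypothetical file containing "#!/bin/bash" followed by the full gst-launch-1.0 command above:

# Make the hypothetical wrapper script executable and run it
chmod +x two-gies.sh
./two-gies.sh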


  • Pipeline with nvcamerasrc and two Caffe models, without using tee. Again, it is better to put the pipeline in a script and execute it; the video runs and boxes are drawn, but no labels are shown.
gst-launch-1.0 nvcamerasrc queue-size=10 sensor-id=0 fpsRange='30 30' ! \
'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, \
framerate=(fraction)30/1, format=(string)I420' \
! queue ! nvvidconv ! \
nvcaffegie  \
class-thresh-params="0,0.200000,0.100000,3,0:1,0.200000,0.100000,3,0:2,0.200000,0.100000,3,0:" \
model-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel" \
protofile-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_deploy_pruned.prototxt" \
model-cache="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel_b2_fp16.cache" \
labelfile-path="/home/nvidia/Model/ResNet_18/labels.txt"  \
batch-size=2 \
roi-top-offset="0,0:1,0:2,0:" \
roi-bottom-offset="0,0:1,0:2,0:" \
detected-min-w-h="0,0,0:1,0,0:2,0,0" \
detected-max-w-h="0,1920,1080:1,100,1080:2,1920,1080:" \
interval=1 \
parse-func=4 \
net-scale-factor=0.0039215697906911373 \
output-bbox-layer-name=Layer11_bbox \
output-coverage-layer-names=Layer11_cov ! \
queue ! \
nvtracker \
! queue ! \
nvcaffegie  \
gie-mode=2 \
gie-unique-id=5 \
infer-on-gie-id=1 \
class-thresh-params="0,1.000000,0.100000,3,2" \
infer-on-class-ids="2:" \
model-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/CarColorPruned.caffemodel" \
protofile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/deploy.prototxt" \
model-cache="/home/nvidia/Model/IVA_secondary_carcolor_V1/CarColorPruned.caffemodel_b2_fp16.cache" \
batch-size=2 \
detected-min-w-h="11,0,0:" \
detected-max-w-h="3,1920,1080:" \
roi-top-offset="0,0:" \
roi-bottom-offset="0,0:" \
model-color-format=1 \
meanfile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/mean.ppm" \
detect-clr="0:" \
labelfile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/labels.txt" \
sec-class-threshold=0.510000 \
parse-func=0 \
is-classifier=TRUE \
offsets="" \
output-coverage-layer-names="softmax" \
sgie-async-mode=TRUE  \
! nvosd x-clock-offset=800 y-clock-offset=820 hw-blend-color-attr="3,1.000000,1.000000,0.000000:" \
! nvvidconv ! nvoverlaysink sync=false async=false enable-last-sample=false
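
Note the structural difference from the previous pipeline: without the tee, the secondary nvcaffegie sits serially between nvtracker and nvosd, so its buffers and metadata flow straight into the on-screen display instead of terminating in a fakesink branch.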

Links

  • NVIDIA DeepStream reference: [1] DeepStream SDK on Jetson, https://developer.nvidia.com/deepstream-jetson