GstInference and NVIDIA DeepStream 1.5 nvcaffegie
Latest revision as of 04:30, 22 August 2022
DeepStream
[1] DeepStream SDK on Jetson uses Jetpack, which includes L4T, Multimedia APIs, CUDA, and TensorRT. The SDK offers a rich collection of plug-ins and libraries, built using the GStreamer framework, to enable developers to build flexible applications for transforming video into valuable insights. DeepStream also comes with sample applications, including source code, and an application adaptation guide to help developers jumpstart their builds.
A Jetson TX1 was used for testing on this wiki. Requirements:
- Jetpack 3.2 which includes L4T R28.2, CUDA 9.0, TensorRT 3.0 GA, cuDNN 7.0.5, VisionWorks 1.6
- Download DeepStream for Jetson from https://developer.nvidia.com/deepstream-jetson (you need to sign in to download).
- DeepStream download for Jetson and Tesla, available at: ftp://10.251.101.2/docs/Installers/Nvidia/
RidgeRun offers GstInference, the GStreamer front-end for R²Inference, the project that handles the abstraction for different back-ends and frameworks. R²Inference knows how to deal with different vendor frameworks such as TensorFlow (x86, iMX8), OpenVX (x86, iMX8), Caffe (x86, NVIDIA), TensorRT (NVIDIA), or NCSDK (Intel), while exposing a generic, easy interface to the user.
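As a purely illustrative sketch of what a GstInference pipeline can look like, the snippet below builds a pipeline description as a string. The element name (tinyyolov2), the backend and the model file are assumptions for illustration, not taken from this page; consult the GstInference documentation for actual usage.

```shell
# Hypothetical GstInference pipeline sketch. The tinyyolov2 element name,
# the backend property and the model path are illustrative assumptions.
INFERENCE_SKETCH='v4l2src device=/dev/video0 ! videoconvert
  ! tinyyolov2 backend=tensorflow model-location=graph_tinyyolov2.pb
  ! videoconvert ! autovideosink'

# On a board with GstInference installed this could be launched with
# gst-launch-1.0; here we only print the description.
echo "$INFERENCE_SKETCH"
```

The point of the sketch is the shape of the pipeline: a capture source, the inference element configured with a backend and a model, and a display sink.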
- For more information, please check:
- GstInference (work in progress)
- PDF Slides
- Presentation video recording
- Contact us if you have questions or doubts: https://www.ridgerun.com/contact
Using the DeepStream demo on Jetson
- This wiki covers DeepStream 1.5 on Jetson (tested on a TX1). DeepStream 3.0 is available for Xavier but is not covered on this wiki.
tar xpvf DeepStream_SDK_on_Jetson_1.5_pre-release.tbz2
sudo tar xpvf deepstream_sdk_on_jetson.tbz2 -C /
sudo tar xpvf deepstream_sdk_on_jetson_models.tbz2 -C /
sudo ldconfig
Run the demo (the video will be displayed at the HDMI output):
nvgstiva-app -c ${HOME}/configs/PGIE-FP16-CarType-CarMake-CarColor.txt
Building the demo
Install and build:
sudo apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev
sudo ln -s /usr/lib/aarch64-linux-gnu/tegra/libnvid_mapper.so.1.0.0 \
    /usr/lib/aarch64-linux-gnu/libnvid_mapper.so
cd ${HOME}/nvgstiva-app_sources/nvgstiva-app
make
# Run the app with ./nvgstiva-app -c <config-file>
./nvgstiva-app -c ${HOME}/configs/PGIE-FP16-CarType-CarMake-CarColor.txt
Doing some analysis
- The sample application is a GStreamer application that uses NVIDIA elements. By obtaining the DOT file we can see the elements used and their configuration; since decodebin and other similar bin elements are used, the pipeline is extensive. Check the generated DOT files at:
- Pipeline graphic for filesrc pipeline: Deepstream filesrc Tegra Pipeline
- Pipeline graphic for nvcamerasrc pipeline: Deepstream nvcamerasrc Tegra Pipeline
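The DOT files above can be reproduced with GStreamer's standard pipeline-dump mechanism. A minimal sketch, assuming the demo is run on the target board; the output directory name is arbitrary:

```shell
# GStreamer writes a .dot file of the pipeline at each state change
# when this variable is set before the application starts.
export GST_DEBUG_DUMP_DOT_DIR=/tmp/pipeline-dots
mkdir -p "$GST_DEBUG_DUMP_DOT_DIR"

# Run the application on the Jetson board (shown for reference only):
#   nvgstiva-app -c ${HOME}/configs/PGIE-FP16-CarType-CarMake-CarColor.txt

# Convert a dumped graph to PNG with Graphviz, if installed:
#   dot -Tpng "$GST_DEBUG_DUMP_DOT_DIR"/*PLAYING*.dot -o pipeline.png
echo "DOT files will be written to $GST_DEBUG_DUMP_DOT_DIR"
```

The file dumped when the pipeline reaches the PLAYING state is usually the most useful one, since all elements inside bins such as decodebin are fully instantiated by then.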
Basically, the pipeline is composed of (in the order the elements appear):
- filesrc
- decodebin (decodes the MP4 input to 720p NV12)
- nvvconv
- nvcaffegie (this element receives as parameters a prototxt file, the Caffe model, and the Caffe model cache)
- nvtracker
- tee (with 4 outputs)
- Three more nvcaffegie elements, each one with a different model (car color, vehicle type, car make)
- Each of these nvcaffegie elements goes into a fakesink
- The fourth tee output goes to nvvconv
- nvosd
- nvoverlaysink
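The topology above can be summarized as a simplified, hypothetical pipeline description (all element properties omitted; the file name is a placeholder, and the full working pipelines appear later on this page):

```shell
# Simplified sketch of the demo pipeline topology: one primary detector,
# a tracker, then a tee feeding three secondary classifiers plus display.
PIPELINE='filesrc location=sample.mp4 ! decodebin ! nvvconv
  ! nvcaffegie ! nvtracker ! tee name=t
  t. ! queue ! nvcaffegie ! fakesink
  t. ! queue ! nvcaffegie ! fakesink
  t. ! queue ! nvcaffegie ! fakesink
  t. ! queue ! nvvconv ! nvosd ! nvoverlaysink'

# On a Jetson board with DeepStream 1.5 installed this shape could be
# launched with gst-launch-1.0; here we only print the description.
echo "$PIPELINE"
```

This makes the data flow explicit: the primary nvcaffegie detects objects, nvtracker follows them across frames, and each secondary nvcaffegie classifies the tracked objects on its own tee branch.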
Note: the NVIDIA elements are provided as binaries:
- libnvcaffegie.so.1.0.0
- libgstnvtracker.so
- libgstnvclrdetector.so
- libgstnvcaffegie.so
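A quick way to confirm the binaries landed on the target is to look for them on disk. The paths below are assumptions based on the usual L4T plugin locations; adjust them if your install differs:

```shell
# Hypothetical check: look for the DeepStream binary libraries in the
# usual L4T locations (both paths are assumptions, not from this page).
summary=""
for lib in libnvcaffegie.so.1.0.0 libgstnvtracker.so \
           libgstnvclrdetector.so libgstnvcaffegie.so; do
    if [ -f "/usr/lib/aarch64-linux-gnu/gstreamer-1.0/$lib" ] || \
       [ -f "/usr/lib/aarch64-linux-gnu/tegra/$lib" ]; then
        summary="$summary $lib=found"
    else
        summary="$summary $lib=missing"
    fi
done
echo "Library status:$summary"

# If the plugins registered correctly, gst-inspect-1.0 lists their
# properties (requires GStreamer on the target; reference only):
#   gst-inspect-1.0 nvcaffegie
#   gst-inspect-1.0 nvtracker
```

On a correctly installed board every library should report "found"; a "missing" entry usually means the SDK tarballs were not extracted to / or ldconfig was not run.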
Testing with gst-launch
- Pipeline with nvcamerasrc, one model:
GST_DEBUG=3 gst-launch-1.0 nvcamerasrc queue-size=6 sensor-id=0 fpsRange='30 30' \
! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=(fraction)30/1, format=(string)I420' \
! queue ! nvvidconv ! nvcaffegie model-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel" \
protofile-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_deploy_pruned.prototxt" \
model-cache="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel_b2_fp16.cache" \
labelfile-path="/home/nvidia/Model/ResNet_18/labels.txt" net-stride=16 batch-size=2 roi-top-offset="0,0:1,0:2,0:" \
roi-bottom-offset="0,0:1,0:2,0:" detected-min-w-h="0,0,0:1,0,0:2,0,0" detected-max-w-h="0,1920,1080:1,100,1080:2,1920,1080:" \
interval=1 parse-func=4 net-scale-factor=0.0039215697906911373 \
class-thresh-params="0,0.200000,0.100000,3,0:1,0.200000,0.100000,3,0:2,0.200000,0.100000,3,0:" \
output-bbox-layer-name=Layer11_bbox output-coverage-layer-names=Layer11_cov ! queue ! nvtracker \
! queue ! nvosd x-clock-offset=800 y-clock-offset=820 hw-blend-color-attr="3,1.000000,1.000000,0.000000:" \
! queue ! nvoverlaysink sync=false enable-last-sample=false
- Pipeline with nvcamerasrc and two Caffe models. It is better to put the pipeline in a script and execute it; video runs and boxes are drawn, but no labels appear.
gst-launch-1.0 nvcamerasrc queue-size=10 sensor-id=0 fpsRange='30 30' ! \
'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, \
framerate=(fraction)30/1, format=(string)I420' \
! queue ! nvvidconv ! \
nvcaffegie \
class-thresh-params="0,0.200000,0.100000,3,0:1,0.200000,0.100000,3,0:2,0.200000,0.100000,3,0:" \
model-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel" \
protofile-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_deploy_pruned.prototxt" \
model-cache="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel_b2_fp16.cache" \
labelfile-path="/home/nvidia/Model/ResNet_18/labels.txt" \
batch-size=2 \
roi-top-offset="0,0:1,0:2,0:" \
roi-bottom-offset="0,0:1,0:2,0:" \
detected-min-w-h="0,0,0:1,0,0:2,0,0" \
detected-max-w-h="0,1920,1080:1,100,1080:2,1920,1080:" \
interval=1 \
parse-func=4 \
net-scale-factor=0.0039215697906911373 \
output-bbox-layer-name=Layer11_bbox \
output-coverage-layer-names=Layer11_cov ! \
queue ! \
nvtracker \
! queue ! tee name=t ! queue ! nvosd x-clock-offset=800 y-clock-offset=820 hw-blend-color-attr="3,1.000000,1.000000,0.000000:" \
! nvvidconv ! nvoverlaysink sync=false async=false enable-last-sample=false \
t. ! queue ! \
nvcaffegie \
gie-mode=2 \
gie-unique-id=5 \
infer-on-gie-id=1 \
class-thresh-params="0,1.000000,0.100000,3,2" \
infer-on-class-ids="2:" \
model-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/CarColorPruned.caffemodel" \
protofile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/deploy.prototxt" \
model-cache="/home/nvidia/Model/IVA_secondary_carcolor_V1/CarColorPruned.caffemodel_b2_fp16.cache" \
batch-size=2 \
detected-min-w-h="11,0,0:" \
detected-max-w-h="3,1920,1080:" \
roi-top-offset="0,0:" \
roi-bottom-offset="0,0:" \
model-color-format=1 \
meanfile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/mean.ppm" \
detect-clr="0:" \
labelfile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/labels.txt" \
sec-class-threshold=0.510000 \
parse-func=0 \
is-classifier=TRUE \
offsets="" \
output-coverage-layer-names="softmax" \
sgie-async-mode=TRUE \
! fakesink async=false sync=false enable-last-sample=false
- Pipeline with nvcamerasrc and two Caffe models, without using tee. It is better to put the pipeline in a script and execute it; video runs and boxes are drawn, but no labels appear.
gst-launch-1.0 nvcamerasrc queue-size=10 sensor-id=0 fpsRange='30 30' ! \
'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, \
framerate=(fraction)30/1, format=(string)I420' \
! queue ! nvvidconv ! \
nvcaffegie \
class-thresh-params="0,0.200000,0.100000,3,0:1,0.200000,0.100000,3,0:2,0.200000,0.100000,3,0:" \
model-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel" \
protofile-path="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_deploy_pruned.prototxt" \
model-cache="/home/nvidia/Model/ResNet_18/ResNet_18_threeClass_VGA_pruned.caffemodel_b2_fp16.cache" \
labelfile-path="/home/nvidia/Model/ResNet_18/labels.txt" \
batch-size=2 \
roi-top-offset="0,0:1,0:2,0:" \
roi-bottom-offset="0,0:1,0:2,0:" \
detected-min-w-h="0,0,0:1,0,0:2,0,0" \
detected-max-w-h="0,1920,1080:1,100,1080:2,1920,1080:" \
interval=1 \
parse-func=4 \
net-scale-factor=0.0039215697906911373 \
output-bbox-layer-name=Layer11_bbox \
output-coverage-layer-names=Layer11_cov ! \
queue ! \
nvtracker \
! queue ! \
nvcaffegie \
gie-mode=2 \
gie-unique-id=5 \
infer-on-gie-id=1 \
class-thresh-params="0,1.000000,0.100000,3,2" \
infer-on-class-ids="2:" \
model-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/CarColorPruned.caffemodel" \
protofile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/deploy.prototxt" \
model-cache="/home/nvidia/Model/IVA_secondary_carcolor_V1/CarColorPruned.caffemodel_b2_fp16.cache" \
batch-size=2 \
detected-min-w-h="11,0,0:" \
detected-max-w-h="3,1920,1080:" \
roi-top-offset="0,0:" \
roi-bottom-offset="0,0:" \
model-color-format=1 \
meanfile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/mean.ppm" \
detect-clr="0:" \
labelfile-path="/home/nvidia/Model/IVA_secondary_carcolor_V1/labels.txt" \
sec-class-threshold=0.510000 \
parse-func=0 \
is-classifier=TRUE \
offsets="" \
output-coverage-layer-names="softmax" \
sgie-async-mode=TRUE \
! nvosd x-clock-offset=800 y-clock-offset=820 hw-blend-color-attr="3,1.000000,1.000000,0.000000:" \
! nvvidconv ! nvoverlaysink sync=false async=false enable-last-sample=false
Links
- NVIDIA DeepStream Reference:
- [1] DeepStream SDK on Jetson: https://developer.nvidia.com/deepstream-jetson