NVIDIA GTC 2020: How to build a multi-camera Media Server for AI processing on Jetson

Revision as of 02:25, 23 March 2020

Overview

RidgeRun has put significant effort into delivering software solutions aligned with embedded-device market trends. NVIDIA's Jetson is a family of low-power hardware systems designed to accelerate machine learning applications when combined with the NVIDIA JetPack Software Development Kit (SDK).


Each JetPack SDK release supports multiple Jetson family modules, including the Jetson AGX Xavier series, Jetson TX2 series, Jetson TX1, and Jetson Nano. Because a single JetPack release covers several Jetson modules, developers' applications are highly portable; the media server described here runs on several platforms, including the Jetson TX2, Xavier, and Nano.

Demo Requirements

[Image: NVIDIA Jetson TX2 Developer Kit]
  • Hardware
    - NVIDIA Jetson TX2 Developer Board
    - 5 MP fixed-focus MIPI CSI camera (included in the devkit)
    - USB camera
  • Software
    - NVIDIA JetPack 4.3
    - NVIDIA DeepStream 4.0

Environment Setup

1. Install the NVIDIA software tools. Both JetPack and DeepStream can be installed using SDK Manager, which is the recommended installation method.

NVIDIA JetPack / DeepStream installation guide

2. Clone the GTC20 demo repository to a suitable path on your workstation.

GTC20 Demo Repo

Demo Directory Layout

The gtc-2020 demo directory has the following structure:

gtc-2020/
├── deepstream-models
│   ├── config_infer_primary_1_cameras.txt
│   ├── config_infer_primary_4_cameras.txt
│   ├── config_infer_primary.txt
│   ├── libnvds_mot_klt.so
│   └── Primary_Detector
│       ├── cal_trt.bin
│       ├── labels.txt
│       ├── resnet10.caffemodel
│       ├── resnet10.caffemodel_b1_fp16.engine
│       ├── resnet10.caffemodel_b30_fp16.engine
│       ├── resnet10.caffemodel_b4_fp16.engine
│       └── resnet10.prototxt
├── jupyter
│   ├── gtc_2020.ipynb
│   ├── images
│   │   └── RidgeRun_logo.jpg
│   ├── media
│   │   └── IronMan.mp4
│   ├── README.md
│   └── run_jupyter.sh
├── python-example
│   ├── media-server-filesrc.py
│   └── media-server.py
└── README.md
  • deepstream-models: Required by the DeepStream processing block in the media server.
  • jupyter: Jupyter notebook for the media server presentation (under development).
  • python-example: Contains the Python-based media server demo scripts.

Demo Script Overview

The purpose of the Media Server for AI processing on Jetson presentation is to show how simple the overall design of a complex GStreamer-based application (such as this media server) becomes when using RidgeRun's GstD and GstInterpipe products.

In this case the controller application for the media server is implemented as a Python script, but it could equally be a Qt GUI or a C/C++ application, among other alternatives; this gives the design great flexibility. The diagram below presents the media server as a set of functional blocks that, when interconnected, provide several data (video buffer) flow paths.

[Image: Media Server Diagram]

The following sections show a segmented view of the media-server.py script:

Pipeline Class Definition

#!/usr/bin/env python3

import time
from pygstc.gstc import *

# Create PipelineEntity object to manage each pipeline
class PipelineEntity(object):
    def __init__(self, client, name, description):
        self._name = name
        self._description = description
        self._client = client
        print("Creating pipeline: " + self._name)
        self._client.pipeline_create(self._name, self._description)
    def play(self):
        print("Playing pipeline: " + self._name)
        self._client.pipeline_play(self._name)
    def stop(self):
        print("Stopping pipeline: " + self._name)
        self._client.pipeline_stop(self._name)
    def delete(self):
        print("Deleting pipeline: " + self._name)
        self._client.pipeline_delete(self._name)
    def eos(self):
        print("Sending EOS to pipeline: " + self._name)
        self._client.event_eos(self._name)
    def set_file_location(self, location):
        print("Setting " + self._name + " pipeline recording/snapshot location to " + location)
        filesink_name = "filesink_" + self._name
        self._client.element_set(self._name, filesink_name, 'location', location)
    def listen_to(self, sink):
        print(self._name + " pipeline listening to " + sink)
        self._client.element_set(self._name, self._name + '_src', 'listen-to', sink)

pipelines_base = []
pipelines_video_rec = []
pipelines_video_enc = []
pipelines_snap = []

# Create GstD Python client
client = GstdClient()
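
To see the lifecycle this class drives without a board at hand, the same calls can be exercised against a stub client that simply records what GstD would be asked to do. The stub below is hypothetical and for illustration only; on the Jetson you would pass a real GstdClient from pygstc:

```python
# Stub standing in for pygstc's GstdClient; it only records the requested calls.
class StubClient:
    def __init__(self):
        self.calls = []

    def pipeline_create(self, name, description):
        self.calls.append('create:' + name)

    def pipeline_play(self, name):
        self.calls.append('play:' + name)

    def pipeline_stop(self, name):
        self.calls.append('stop:' + name)

    def pipeline_delete(self, name):
        self.calls.append('delete:' + name)

    def event_eos(self, name):
        self.calls.append('eos:' + name)


# Reduced copy of the PipelineEntity lifecycle methods from media-server.py
class PipelineEntity:
    def __init__(self, client, name, description):
        self._name = name
        self._client = client
        client.pipeline_create(name, description)

    def play(self):
        self._client.pipeline_play(self._name)

    def eos(self):
        self._client.event_eos(self._name)

    def stop(self):
        self._client.pipeline_stop(self._name)

    def delete(self):
        self._client.pipeline_delete(self._name)


client = StubClient()
camera = PipelineEntity(client, 'camera0', 'videotestsrc ! fakesink')
camera.play()
camera.eos()      # request EOS before stopping so recordings close cleanly
camera.stop()
camera.delete()
print(client.calls)
# ['create:camera0', 'play:camera0', 'eos:camera0', 'stop:camera0', 'delete:camera0']
```

The same create → play → eos → stop → delete ordering is what the full script applies to each group of pipelines below.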


Pipelines Creation

Capture Pipelines

# Create camera pipelines
camera0 = PipelineEntity(client, 'camera0', 'v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=1280,height=720 ! interpipesink name=camera0 forward-events=true forward-eos=true sync=false')
pipelines_base.append(camera0)

camera0_rgba_nvmm = PipelineEntity(client, 'camera0_rgba_nvmm', 'interpipesrc listen-to=camera0 ! video/x-raw,format=YUY2,width=1280,height=720 ! videoconvert ! video/x-raw,format=NV12,width=1280,height=720 ! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720 ! queue ! interpipesink name=camera0_rgba_nvmm forward-events=true forward-eos=true sync=false caps=video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720,pixel-aspect-ratio=1/1,interlace-mode=progressive,framerate=30/1')
pipelines_base.append(camera0_rgba_nvmm)

camera1 = PipelineEntity(client, 'camera1', 'nvarguscamerasrc ! nvvidconv ! video/x-raw,format=I420,width=1280,height=720 ! queue ! interpipesink name=camera1 forward-events=true forward-eos=true sync=false')
pipelines_base.append(camera1)

camera1_rgba_nvmm = PipelineEntity(client, 'camera1_rgba_nvmm', 'interpipesrc listen-to=camera1 ! video/x-raw,format=I420,width=1280,height=720 ! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720 ! interpipesink name=camera1_rgba_nvmm forward-events=true forward-eos=true sync=false caps=video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720,pixel-aspect-ratio=1/1,interlace-mode=progressive,framerate=30/1')
pipelines_base.append(camera1_rgba_nvmm)
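
A quick way to sanity-check these interpipe links is to parse the pipeline descriptions: every interpipesrc's listen-to property must name an existing interpipesink. The helper below is a hypothetical sketch using plain string matching (it is not part of the demo), with the descriptions abbreviated to their interpipe elements:

```python
import re

# Pipeline descriptions as passed to PipelineEntity, abbreviated to the interpipe parts.
descriptions = [
    'v4l2src ! interpipesink name=camera0 sync=false',
    'interpipesrc listen-to=camera0 ! nvvideoconvert ! interpipesink name=camera0_rgba_nvmm',
    'nvarguscamerasrc ! interpipesink name=camera1',
    'interpipesrc listen-to=camera1 ! interpipesink name=camera1_rgba_nvmm',
]

# Collect every advertised sink name and every requested listen-to target.
sinks = set()
listens = set()
for desc in descriptions:
    sinks.update(re.findall(r'interpipesink name=(\S+)', desc))
    listens.update(re.findall(r'listen-to=(\S+)', desc))

# Any listen-to target without a matching interpipesink is a dangling link.
dangling = listens - sinks
print(sorted(sinks))   # ['camera0', 'camera0_rgba_nvmm', 'camera1', 'camera1_rgba_nvmm']
print(dangling)        # set()
```

An empty dangling set means every listener has a producer; the DeepStream and encoding pipelines that follow attach to these same sink names.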

DeepStream Pipeline

# Create DeepStream pipeline with 4 cameras processing
deepstream = PipelineEntity(client, 'deepstream', 'interpipesrc listen-to=camera0_rgba_nvmm ! nvstreammux0.sink_0 interpipesrc listen-to=camera0_rgba_nvmm ! nvstreammux0.sink_1 interpipesrc listen-to=camera1_rgba_nvmm ! nvstreammux0.sink_2 interpipesrc listen-to=camera1_rgba_nvmm ! nvstreammux0.sink_3 nvstreammux name=nvstreammux0 batch-size=4 batched-push-timeout=40000 width=1280 height=720 ! queue ! nvinfer batch-size=4 config-file-path=../deepstream-models/config_infer_primary_4_cameras.txt ! queue ! nvtracker ll-lib-file=../deepstream-models/libnvds_mot_klt.so enable-batch-process=true ! queue ! nvmultistreamtiler width=1280 height=720 rows=2 columns=2 ! nvvideoconvert ! nvdsosd ! queue ! interpipesink name=deep forward-events=true forward-eos=true sync=false')
pipelines_base.append(deepstream)

Encoding Pipelines

# Create encoding pipelines
h264 = PipelineEntity(client, 'h264', 'interpipesrc name=h264_src format=time listen-to=deep ! video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720 ! nvvideoconvert ! nvv4l2h264enc ! interpipesink name=h264_sink forward-events=true forward-eos=true sync=false async=false enable-last-sample=false drop=true')
pipelines_video_enc.append(h264)

h265 = PipelineEntity(client, 'h265', 'interpipesrc name=h265_src format=time listen-to=deep ! nvvideoconvert ! nvv4l2h265enc ! interpipesink name=h265_sink forward-events=true forward-eos=true sync=false async=false enable-last-sample=false drop=true')
pipelines_video_enc.append(h265)

vp9 = PipelineEntity(client, 'vp9', 'interpipesrc name=vp9_src format=time listen-to=deep ! nvvideoconvert ! nvv4l2vp9enc ! interpipesink name=vp9_sink forward-events=true forward-eos=true sync=false async=false enable-last-sample=false drop=true')
pipelines_video_enc.append(vp9)

jpeg = PipelineEntity(client, 'jpeg', 'interpipesrc name=jpeg_src format=time listen-to=deep ! nvvideoconvert ! video/x-raw,format=I420,width=1280,height=720 ! nvjpegenc ! interpipesink name=jpeg forward-events=true forward-eos=true sync=false async=false enable-last-sample=false drop=true')
pipelines_snap.append(jpeg)

Video Recording Pipelines

# Create recording pipelines
record_h264 = PipelineEntity(client, 'record_h264', 'interpipesrc format=time allow-renegotiation=false listen-to=h264_sink ! h264parse ! matroskamux ! filesink name=filesink_record_h264 location=test-h264.mkv')
pipelines_video_rec.append(record_h264)

record_h265 = PipelineEntity(client, 'record_h265', 'interpipesrc format=time listen-to=h265_sink ! h265parse ! matroskamux ! filesink name=filesink_record_h265 location=test-h265.mkv')
pipelines_video_rec.append(record_h265)

record_vp9 = PipelineEntity(client, 'record_vp9', 'interpipesrc format=time listen-to=vp9_sink ! matroskamux ! filesink name=filesink_record_vp9 location=test-vp9.mkv')
pipelines_video_rec.append(record_vp9)

Image Snapshots Pipeline

# Create snapshot pipeline
snapshot = PipelineEntity(client, 'snapshot', 'interpipesrc format=time listen-to=jpeg num-buffers=1 ! filesink name=filesink_snapshot location=test-snapshot.jpg')
pipelines_snap.append(snapshot)

Video Display Pipeline

# Create display pipeline
# display = PipelineEntity(client, 'display', 'interpipesrc listen-to=deep ! nvegltransform bufapi-version=true ! nveglglessink qos=false async=false sync=false')
# pipelines_base.append(display)
display = PipelineEntity(client, 'display', 'interpipesrc listen-to=deep ! fakesink async=false sync=false')
pipelines_base.append(display)


Pipelines Execution

# Play base pipelines
for pipeline in pipelines_base:
    pipeline.play()

time.sleep(10)

# Set locations for video recordings
for pipeline in pipelines_video_rec:
    pipeline.set_file_location('test_' + pipeline._name + '_0.mkv')

# Play video recording pipelines
for pipeline in pipelines_video_rec:
    pipeline.play()

# Play video encoding pipelines
for pipeline in pipelines_video_enc:
    pipeline.play()

time.sleep(20)

# Set location for snapshot
snapshot.set_file_location('test_' + snapshot._name + '_0.jpeg')

# Play snapshot pipelines
for pipeline in pipelines_snap:
    pipeline.play()

time.sleep(5)

# Take another snapshot, but now use camera0 as source
snapshot.stop()
jpeg.listen_to('camera0_rgba_nvmm')
snapshot.set_file_location('test_' + snapshot._name + '_1.jpg')
snapshot.play()

# Stop previous recordings, connect to the camera0 capture instead of the DeepStream output and record another video
# Send EOS event to encode pipelines for proper closing
# EOS to recording pipelines
for pipeline in pipelines_video_enc:
    pipeline.eos()

# Stop recordings
for pipeline in pipelines_video_rec:
    pipeline.stop()
for pipeline in pipelines_video_enc:
    pipeline.stop()

for pipeline in pipelines_video_enc:
    pipeline.listen_to('camera0_rgba_nvmm')

# Set locations for new video recordings
for pipeline in pipelines_video_rec:
    pipeline.set_file_location('test_' + pipeline._name + '_1.mkv')

for pipeline in pipelines_video_enc:
    pipeline.play()

for pipeline in pipelines_video_rec:
    pipeline.play()

time.sleep(10)


Pipelines Closure / Deletion

# Send EOS event to encode pipelines for proper closing
# EOS to recording pipelines
for pipeline in pipelines_video_enc:
    pipeline.eos()
# Stop pipelines
for pipeline in pipelines_snap:
    pipeline.stop()
for pipeline in pipelines_video_rec:
    pipeline.stop()
for pipeline in pipelines_video_enc:
    pipeline.stop()
for pipeline in pipelines_base:
    pipeline.stop()

# Delete pipelines
for pipeline in pipelines_snap:
    pipeline.delete()
for pipeline in pipelines_video_rec:
    pipeline.delete()
for pipeline in pipelines_video_enc:
    pipeline.delete()
for pipeline in pipelines_base:
    pipeline.delete()

Running The Demo
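
To run the demo, start the GStreamer Daemon and then launch the controller script from the python-example directory. The launcher below is a hedged Python sketch under the assumptions that gstd is installed as described in Environment Setup and that the repository was cloned with the layout shown above:

```python
import shutil
import subprocess
from pathlib import Path

# Path taken from the demo directory layout above.
SCRIPT = Path('gtc-2020/python-example/media-server.py')

def run_demo():
    """Start gstd (if available) and run the media server controller script."""
    if shutil.which('gstd') is None:
        return 'gstd not found: install GStreamer Daemon first'
    if not SCRIPT.exists():
        return 'media-server.py not found: clone the GTC20 demo repo first'
    # gstd normally runs as a daemon; otherwise start it in a separate terminal.
    subprocess.run(['gstd'])
    return subprocess.run(['python3', str(SCRIPT)]).returncode

print(run_demo())
```

The script plays the base pipelines, records H.264/H.265/VP9 clips, takes snapshots, re-targets the encoders to camera0, and finally stops and deletes every pipeline as shown in the sections above.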


RidgeRun Resources

Quick Start | Client Engagement Process | RidgeRun Blog | Homepage
Technical and Sales Support | RidgeRun Online Store | RidgeRun Videos | Contact Us

Visit our Main Website for the RidgeRun products and online store. RidgeRun engineering information is available on the RidgeRun Professional Services, RidgeRun Subscription Model, and Client Engagement Process wiki pages. Please email support@ridgerun.com for technical questions and contactus@ridgerun.com for other queries. Contact details for sponsoring the RidgeRun GStreamer projects are available on the Sponsor Projects page.