NVIDIA GTC 2020: How to build a multi-camera Media Server for AI processing on Jetson



Problems running the pipelines shown on this page? Please see our GStreamer Debugging guide for help.


Overview of multi-camera NVIDIA® Jetson™ TX2 AI Media Server demo

RidgeRun has put a lot of effort into delivering software solutions in line with embedded-device market trends. NVIDIA's Jetson is a family of low-power hardware systems designed to accelerate machine learning applications when combined with NVIDIA's JetPack Software Development Kit (SDK).

Each JetPack SDK release provides support for multiple Jetson family modules, including the Jetson AGX Xavier series, Jetson TX2 series, Jetson TX1, and Jetson Nano. Supporting several Jetson modules under a single JetPack release gives developers' applications a great level of portability; hence, the media server described here can run on several platforms, including the Jetson TX2, Xavier, and Nano.

The RidgeRun team is demonstrating this software solution at the NVIDIA GTC 2020 online event.

Demo requirements

  • Hardware

- NVIDIA Jetson TX2 Developer Board

- 5 MP fixed focus MIPI CSI camera (included in the devkit)

- USB camera


  • Software

- NVIDIA JetPack 4.3

- NVIDIA DeepStream 4.0



Environment Setup

1. Install the NVIDIA software tools. Both JetPack and DeepStream can be installed using SDK Manager, which is the recommended installation method (see the NVIDIA JetPack / DeepStream installation guide: https://docs.nvidia.com/metropolis/deepstream/4.0/dev-guide/index.html).

2. Install RidgeRun's GstD and GstInterpipe, following their respective building and installation guides.

3. Clone the GTC 2020 demo repository to a suitable path on your workstation.

git clone https://github.com/RidgeRun/gtc-2020-demo
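
Optionally, verify that the GstInterpipe elements were installed correctly. The following is a minimal sketch, assuming gst-inspect-1.0 is on the PATH; it is a hypothetical check, not part of the demo repository:

#!/usr/bin/env python3
import subprocess

# gst-inspect-1.0 returns 0 when it finds the requested element
for element in ('interpipesrc', 'interpipesink'):
    result = subprocess.run(['gst-inspect-1.0', element],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    print(element + ': ' + ('found' if result.returncode == 0 else 'NOT found'))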

Demo Directory Layout

The gtc-2020-demo directory has the following structure:

gtc-2020-demo/
├── deepstream-models
│   ├── config_infer_primary_1_cameras.txt
│   ├── config_infer_primary_4_cameras.txt
│   ├── config_infer_primary.txt
│   ├── libnvds_mot_klt.so
│   └── Primary_Detector
│       ├── cal_trt.bin
│       ├── labels.txt
│       ├── resnet10.caffemodel
│       ├── resnet10.caffemodel_b1_fp16.engine
│       ├── resnet10.caffemodel_b30_fp16.engine
│       ├── resnet10.caffemodel_b4_fp16.engine
│       └── resnet10.prototxt
├── python-example
│   └── media-server.py
└── README.md
  • deepstream-models: Required by the DeepStream processing block in the media server.
  • python-example: Contains the python-based media server demo scripts.

Demo Script Overview

The purpose of the Media Server for AI processing on Jetson presentation is to show how simple the overall design of a complex GStreamer-based application (such as this media server) becomes when using RidgeRun's GstD and GstInterpipe products.

The controller application for the media server is, in this case, a Python script, but it could just as well be a Qt GUI or a C/C++ application, among other alternatives; this gives the design great flexibility. The diagram below presents the media server as a set of functional blocks that, when interconnected, provide several data (video buffer) flow paths.

Media Server Diagram


The following sections show a segmented view of the media-server.py script:

Pipeline Class Definition

In the code snippet shown below, the GstD Python client is imported and the PipelineEntity class is defined with some helper methods that simplify pipeline control in the media server. We also create arrays to group the pipelines: the base pipelines (capture, processing, DeepStream, and display), encoding, recording, and snapshot pipelines. Finally, we create the instance of the GstD client.

#!/usr/bin/env python3

import time
from pygstc.gstc import *

# Create PipelineEntity object to manage each pipeline
class PipelineEntity(object):
    def __init__(self, client, name, description):
        self._name = name
        self._description = description
        self._client = client
        print("Creating pipeline: " + self._name)
        self._client.pipeline_create(self._name, self._description)
    def play(self):
        print("Playing pipeline: " + self._name)
        self._client.pipeline_play(self._name)
    def stop(self):
        print("Stopping pipeline: " + self._name)
        self._client.pipeline_stop(self._name)
    def delete(self):
        print("Deleting pipeline: " + self._name)
        self._client.pipeline_delete(self._name)
    def eos(self):
        print("Sending EOS to pipeline: " + self._name)
        self._client.event_eos(self._name)
    def set_file_location(self, location):
        print("Setting " + self._name + " pipeline recording/snapshot location to " + location)
        filesink_name = "filesink_" + self._name
        self._client.element_set(self._name, filesink_name, 'location', location)
    def listen_to(self, sink):
        print(self._name + " pipeline listening to " + sink)
        self._client.element_set(self._name, self._name + '_src', 'listen-to', sink)

pipelines_base = []
pipelines_video_rec = []
pipelines_video_enc = []
pipelines_snap = []

# Create GstD Python client
client = GstdClient()
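
For reference, the same pygstc calls that PipelineEntity wraps can be exercised standalone. Below is a minimal sketch, assuming GstD is already running; the videotestsrc pipeline is illustrative only and not part of the demo:

#!/usr/bin/env python3
from pygstc.gstc import *

# Connect to the running GstD instance (default address and port)
client = GstdClient()

# Create, run, and tear down a trivial pipeline with the same API used above
client.pipeline_create('test', 'videotestsrc is-live=true ! fakesink')
client.pipeline_play('test')
client.pipeline_stop('test')
client.pipeline_delete('test')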

Pipelines Creation

Capture Pipelines

In this snippet, we create the capture pipelines and a processing pipeline for each capture pipeline, to convert the camera output to the format/colorspace required at the DeepStream pipeline input (RGBA in NVMM memory, in this case). Notice that these pipelines are appended to the pipelines_base array.

The media server expects the USB camera to be identified as /dev/video1 and the devkit camera as /dev/video0. This must be validated and changed if necessary for the media server to work properly.
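
Before launching the media server, a quick pre-flight check can confirm that both device nodes are present. The snippet below is a hypothetical helper, not part of media-server.py:

import os
import sys

# The media server assumes the devkit camera at /dev/video0 and the USB
# camera at /dev/video1; adjust media-server.py if your cameras enumerate
# differently.
for device in ('/dev/video0', '/dev/video1'):
    if not os.path.exists(device):
        sys.exit(device + ' not found: check the camera connections or edit media-server.py')
print('Both camera device nodes are present')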

# Create camera pipelines
camera0 = PipelineEntity(client, 'camera0', 'v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=1280,height=720 ! interpipesink name=camera0 forward-events=true forward-eos=true sync=false')
pipelines_base.append(camera0)

camera0_rgba_nvmm = PipelineEntity(client, 'camera0_rgba_nvmm', 'interpipesrc listen-to=camera0 ! video/x-raw,format=YUY2,width=1280,height=720 ! videoconvert ! video/x-raw,format=NV12,width=1280,height=720 ! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720 ! queue ! interpipesink name=camera0_rgba_nvmm forward-events=true forward-eos=true sync=false caps=video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720,pixel-aspect-ratio=1/1,interlace-mode=progressive,framerate=30/1')
pipelines_base.append(camera0_rgba_nvmm)

camera1 = PipelineEntity(client, 'camera1', 'nvarguscamerasrc ! nvvidconv ! video/x-raw,format=I420,width=1280,height=720 ! queue ! interpipesink name=camera1 forward-events=true forward-eos=true sync=false')
pipelines_base.append(camera1)

camera1_rgba_nvmm = PipelineEntity(client, 'camera1_rgba_nvmm', 'interpipesrc listen-to=camera1 ! video/x-raw,format=I420,width=1280,height=720 ! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720 ! interpipesink name=camera1_rgba_nvmm forward-events=true forward-eos=true sync=false caps=video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720,pixel-aspect-ratio=1/1,interlace-mode=progressive,framerate=30/1')
pipelines_base.append(camera1_rgba_nvmm)

DeepStream Pipeline

The snippet below shows the DeepStream pipeline; notice that this pipeline is also appended to the pipelines_base array.

# Create Deepstream pipeline with 4 cameras processing
deepstream = PipelineEntity(client, 'deepstream', 'interpipesrc listen-to=camera0_rgba_nvmm ! nvstreammux0.sink_0 interpipesrc listen-to=camera0_rgba_nvmm ! nvstreammux0.sink_1 interpipesrc listen-to=camera1_rgba_nvmm ! nvstreammux0.sink_2 interpipesrc listen-to=camera1_rgba_nvmm ! nvstreammux0.sink_3 nvstreammux name=nvstreammux0 batch-size=4 batched-push-timeout=40000 width=1280 height=720 ! queue ! nvinfer batch-size=4 config-file-path=../deepstream-models/config_infer_primary_4_cameras.txt ! queue ! nvtracker ll-lib-file=../deepstream-models/libnvds_mot_klt.so enable-batch-process=true ! queue ! nvmultistreamtiler width=1280 height=720 rows=2 columns=2 ! nvvideoconvert ! nvdsosd ! queue ! interpipesink name=deep forward-events=true forward-eos=true sync=false')
pipelines_base.append(deepstream)

Encoding Pipelines

The pipelines created in the code below are in charge of encoding the input video stream (camera or DeepStream) with h264, h265, vp9, and jpeg codecs.

The h264, h265, and vp9 pipelines are appended to the pipelines_video_enc array while the jpeg pipeline is part of the pipelines_snap array.

# Create encoding pipelines
h264 = PipelineEntity(client, 'h264', 'interpipesrc name=h264_src format=time listen-to=deep ! video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720 ! nvvideoconvert ! nvv4l2h264enc ! interpipesink name=h264_sink forward-events=true forward-eos=true sync=false async=false enable-last-sample=false drop=true')
pipelines_video_enc.append(h264)

h265 = PipelineEntity(client, 'h265', 'interpipesrc name=h265_src format=time listen-to=deep ! nvvideoconvert ! nvv4l2h265enc ! interpipesink name=h265_sink forward-events=true forward-eos=true sync=false async=false enable-last-sample=false drop=true')
pipelines_video_enc.append(h265)

vp9 = PipelineEntity(client, 'vp9', 'interpipesrc name=vp9_src format=time listen-to=deep ! nvvideoconvert ! nvv4l2vp9enc ! interpipesink name=vp9_sink forward-events=true forward-eos=true sync=false async=false enable-last-sample=false drop=true')
pipelines_video_enc.append(vp9)

jpeg = PipelineEntity(client, 'jpeg', 'interpipesrc name=jpeg_src format=time listen-to=deep ! nvvideoconvert ! video/x-raw,format=I420,width=1280,height=720 ! nvjpegenc ! interpipesink name=jpeg forward-events=true forward-eos=true sync=false async=false enable-last-sample=false drop=true')
pipelines_snap.append(jpeg)

Video Recording Pipelines

These pipelines are responsible for writing an encoded stream to file.

Notice that these pipelines are appended to the pipelines_video_rec array.

# Create recording pipelines
record_h264 = PipelineEntity(client, 'record_h264', 'interpipesrc format=time allow-renegotiation=false listen-to=h264_sink ! h264parse ! matroskamux ! filesink name=filesink_record_h264')
pipelines_video_rec.append(record_h264)

record_h265 = PipelineEntity(client, 'record_h265', 'interpipesrc format=time listen-to=h265_sink ! h265parse ! matroskamux ! filesink name=filesink_record_h265')
pipelines_video_rec.append(record_h265)

record_vp9 = PipelineEntity(client, 'record_vp9', 'interpipesrc format=time listen-to=vp9_sink ! matroskamux ! filesink name=filesink_record_vp9')
pipelines_video_rec.append(record_vp9)

Image Snapshots Pipeline

The pipeline below saves a jpeg encoded buffer to file. This pipeline is also appended to the pipelines_snap array.

# Create snapshot pipeline
snapshot = PipelineEntity(client, 'snapshot', 'interpipesrc format=time listen-to=jpeg num-buffers=1 ! filesink name=filesink_snapshot')
pipelines_snap.append(snapshot)

Video Display Pipeline

The snippet below shows the display pipeline. This pipeline is appended to the pipelines_base array.

# Create display pipeline
display = PipelineEntity(client, 'display', 'interpipesrc listen-to=deep ! nvegltransform bufapi-version=true ! nveglglessink qos=false async=false sync=false')
pipelines_base.append(display)

Pipelines Execution

First, we play all the pipelines in the pipelines_base array. This includes the capture, processing, DeepStream, and display pipelines.

# Play base pipelines
for pipeline in pipelines_base:
    pipeline.play()

time.sleep(10)

We set the locations for our first video recordings, test_record_h264_0.mkv, test_record_h265_0.mkv and test_record_vp9_0.mkv. Then we play all the recording and encoding pipelines.

# Set locations for video recordings
for pipeline in pipelines_video_rec:
    pipeline.set_file_location('test_' + pipeline._name + '_0.mkv')

# Play video recording pipelines
for pipeline in pipelines_video_rec:
    pipeline.play()

# Play video encoding pipelines
for pipeline in pipelines_video_enc:
    pipeline.play()

time.sleep(20)

Here we set the location for the first snapshot, test_snapshot_0.jpeg. Then we play the pipelines in the pipelines_snap array (jpeg encoding and snapshot pipeline that saves the snapshot to file).

# Set location for snapshot
snapshot.set_file_location('test_' + snapshot._name + '_0.jpeg')

# Play snapshot pipelines
for pipeline in pipelines_snap:
    pipeline.play()

time.sleep(5)

At this point, we should be able to check the file test_snapshot_0.jpeg. It should contain a snapshot similar to the example shown below:

Test snapshot 0.jpg

We can take another snapshot. To do so, we first stop the snapshot pipeline that we played previously to take the first snapshot. We can also take the snapshot from another source: we use the listen-to property to change the jpeg encoding pipeline input from the DeepStream pipeline to the processed camera0 output. Then we change the file location to avoid overwriting our first snapshot, and finally we play the snapshot pipeline again.

# Take another snapshot, but now use camera0 as source
snapshot.stop()
jpeg.listen_to('camera0_rgba_nvmm')
snapshot.set_file_location('test_' + snapshot._name + '_1.jpg')
snapshot.play()

At this point, we expect a file test_snapshot_1.jpg that should show the input of only one camera, as in the example below:

Test snapshot 1.jpg

We left the recording pipelines playing while taking the snapshots. Now we can stop these pipelines and inspect the results.

# Send EOS event to encode pipelines for proper closing
# EOS to recording pipelines
for pipeline in pipelines_video_enc:
    pipeline.eos()

# Stop recordings
for pipeline in pipelines_video_rec:
    pipeline.stop()
for pipeline in pipelines_video_enc:
    pipeline.stop()

We expect to have the files test_record_h264_0.mkv, test_record_h265_0.mkv and test_record_vp9_0.mkv, containing video similar to the test_snapshot_0.jpeg shown above, with four streams tiled in each frame and the DeepStream overlay.

Now, we can change the video input of the encoding pipeline, as we did with the snapshot pipeline, to get recordings from another source. In the snippet below, we set the source for the h264, h265, and vp9 pipelines to the processed camera0 video.

for pipeline in pipelines_video_enc:
    pipeline.listen_to('camera0_rgba_nvmm')

# Set locations for new video recordings
for pipeline in pipelines_video_rec:
    pipeline.set_file_location('test_' + pipeline._name + '_1.mkv')

for pipeline in pipelines_video_enc:
    pipeline.play()

for pipeline in pipelines_video_rec:
    pipeline.play()

time.sleep(10)

Pipelines Closure / Deletion

In the code snippet below, we stop the recordings and all the other pipelines currently playing, and then delete all pipelines.

# Send EOS event to encode pipelines for proper closing
# EOS to recording pipelines
for pipeline in pipelines_video_enc:
    pipeline.eos()
# Stop pipelines
for pipeline in pipelines_snap:
    pipeline.stop()
for pipeline in pipelines_video_rec:
    pipeline.stop()
for pipeline in pipelines_video_enc:
    pipeline.stop()
for pipeline in pipelines_base:
    pipeline.stop()

# Delete pipelines
for pipeline in pipelines_snap:
    pipeline.delete()
for pipeline in pipelines_video_rec:
    pipeline.delete()
for pipeline in pipelines_video_enc:
    pipeline.delete()
for pipeline in pipelines_base:
    pipeline.delete()

At this point, we can inspect the files test_record_h264_1.mkv, test_record_h265_1.mkv and test_record_vp9_1.mkv. We expect these recordings to contain images only from camera0.

Running The Demo

1. Head to the gtc-2020-demo/python-example directory

2. Initialize GstD and execute the media server demo script:

DISPLAY=:0 gstd -D
./media-server.py

Expected Results

After executing the media server, you should find the output files in the directory where GstD was started. The following files should contain media with the DeepStream output:

test_record_h264_0.mkv
test_record_h265_0.mkv
test_record_vp9_0.mkv
test_snapshot_0.jpeg

And the following files should contain media with the camera0 output:

test_record_h264_1.mkv
test_record_h265_1.mkv
test_record_vp9_1.mkv
test_snapshot_1.jpg
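
As a final check, a minimal sketch like the following (a hypothetical helper, not part of the demo repository) verifies that every expected output file exists and is non-empty; run it from the directory where GstD was started:

#!/usr/bin/env python3
import os

# Expected outputs listed above; the _0 files carry the DeepStream output,
# the _1 files carry the camera0 output.
expected = ['test_record_h264_0.mkv', 'test_record_h265_0.mkv',
            'test_record_vp9_0.mkv', 'test_snapshot_0.jpeg',
            'test_record_h264_1.mkv', 'test_record_h265_1.mkv',
            'test_record_vp9_1.mkv', 'test_snapshot_1.jpg']

for name in expected:
    ok = os.path.isfile(name) and os.path.getsize(name) > 0
    print(name + ': ' + ('OK' if ok else 'missing or empty'))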

