NVIDIA GTC 2020: How to build a multi-camera Media Server for AI processing on Jetson



Problems running the pipelines shown on this page?
Please see our GStreamer Debugging guide for help.


Overview of the multi-camera Jetson TX2 AI Media Server demo

RidgeRun has put considerable effort into delivering software solutions aligned with embedded device market trends. NVIDIA's Jetson is a family of low-power hardware systems designed to accelerate machine learning applications, paired with NVIDIA's JetPack Software Development Kit (SDK).

Each JetPack SDK release provides support for multiple Jetson family modules, including the Jetson AGX Xavier series, Jetson TX2 series, Jetson TX1, and Jetson Nano. Supporting several Jetson modules under a single JetPack release gives developers' applications a high degree of portability, so the media server described here can run on several platforms, including the Jetson TX2, Xavier, and Nano.

Demo requirements

  • Hardware

- NVIDIA Jetson TX2 Developer Board

- 5 MP fixed focus MIPI CSI camera (included in the devkit)

- USB camera


  • Software

- NVIDIA JetPack 4.3

- NVIDIA DeepStream 4.0



Environment Setup

1. Install the NVIDIA software tools. Both JetPack and DeepStream can be installed using SDK Manager, which is the recommended installation method.

2. Install RidgeRun's GstD and GstInterpipe.

3. Clone the GTC20 demo repository to a suitable path on your workstation:

git clone https://github.com/RidgeRun/gtc-2020-demo

Demo Directory Layout

The gtc-2020-demo directory has the following structure:

gtc-2020-demo/
├── deepstream-models
│   ├── config_infer_primary_1_cameras.txt
│   ├── config_infer_primary_4_cameras.txt
│   ├── config_infer_primary.txt
│   ├── libnvds_mot_klt.so
│   └── Primary_Detector
│       ├── cal_trt.bin
│       ├── labels.txt
│       ├── resnet10.caffemodel
│       ├── resnet10.caffemodel_b1_fp16.engine
│       ├── resnet10.caffemodel_b30_fp16.engine
│       ├── resnet10.caffemodel_b4_fp16.engine
│       └── resnet10.prototxt
├── python-example
│   └── media-server.py
└── README.md
  • deepstream-models: Models and configuration files required by the DeepStream processing block in the media server.
  • python-example: Contains the Python-based media server demo script.

Demo Script Overview

The purpose of the Media Server for AI Processing on Jetson presentation is to show how simple the overall design of a complex GStreamer-based application, such as this media server, becomes when it is built with RidgeRun's GstD and GstInterpipe products.

In this case the media server's controller application is implemented as a Python script, but it could just as well be a Qt GUI or a C/C++ application, among other alternatives; this gives the design a great deal of flexibility. The diagram below presents the media server as a set of functional blocks that, when interconnected, provide several data (video buffer) flow paths.

Media Server Diagram

The following sections show a segmented view of the media-server.py script:

Pipeline Class Definition

#!/usr/bin/env python3

import time
from pygstc.gstc import *

# Create PipelineEntity object to manage each pipeline
class PipelineEntity(object):
    def __init__(self, client, name, description):
        self._name = name
        self._description = description
        self._client = client
        print("Creating pipeline: " + self._name)
        self._client.pipeline_create(self._name, self._description)
    def play(self):
        print("Playing pipeline: " + self._name)
        self._client.pipeline_play(self._name)
    def stop(self):
        print("Stopping pipeline: " + self._name)
        self._client.pipeline_stop(self._name)
    def delete(self):
        print("Deleting pipeline: " + self._name)
        self._client.pipeline_delete(self._name)
    def eos(self):
        print("Sending EOS to pipeline: " + self._name)
        self._client.event_eos(self._name)
    def set_file_location(self, location):
        print("Setting " + self._name + " pipeline recording/snapshot location to " + location)
        filesink_name = "filesink_" + self._name
        self._client.element_set(self._name, filesink_name, 'location', location)
    def listen_to(self, sink):
        print(self._name + " pipeline listening to " + sink)
        self._client.element_set(self._name, self._name + '_src', 'listen-to', sink)

pipelines_base = []
pipelines_video_rec = []
pipelines_video_enc = []
pipelines_snap = []

# Create GstD Python client
client = GstdClient()
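
As a quick sanity check, this class can drive any GStreamer pipeline through GstD. The following minimal sketch (hypothetical, not part of the demo script) creates a test-source pipeline, plays it for a few seconds, and tears it down:

# Sketch (not part of the demo): exercise PipelineEntity with a test source.
test = PipelineEntity(client, 'test', 'videotestsrc is-live=true ! fakesink')
test.play()
time.sleep(3)
test.stop()
test.delete()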

Pipelines Creation

Capture Pipelines

# Create camera pipelines
camera0 = PipelineEntity(client, 'camera0', 'v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=1280,height=720 ! interpipesink name=camera0 forward-events=true forward-eos=true sync=false')
pipelines_base.append(camera0)

camera0_rgba_nvmm = PipelineEntity(client, 'camera0_rgba_nvmm', 'interpipesrc listen-to=camera0 ! video/x-raw,format=YUY2,width=1280,height=720 ! videoconvert ! video/x-raw,format=NV12,width=1280,height=720 ! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720 ! queue ! interpipesink name=camera0_rgba_nvmm forward-events=true forward-eos=true sync=false caps=video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720,pixel-aspect-ratio=1/1,interlace-mode=progressive,framerate=30/1')
pipelines_base.append(camera0_rgba_nvmm)

camera1 = PipelineEntity(client, 'camera1', 'nvarguscamerasrc ! nvvidconv ! video/x-raw,format=I420,width=1280,height=720 ! queue ! interpipesink name=camera1 forward-events=true forward-eos=true sync=false')
pipelines_base.append(camera1)

camera1_rgba_nvmm = PipelineEntity(client, 'camera1_rgba_nvmm', 'interpipesrc listen-to=camera1 ! video/x-raw,format=I420,width=1280,height=720 ! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720 ! interpipesink name=camera1_rgba_nvmm forward-events=true forward-eos=true sync=false caps=video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720,pixel-aspect-ratio=1/1,interlace-mode=progressive,framerate=30/1')
pipelines_base.append(camera1_rgba_nvmm)

DeepStream Pipeline

# Create Deepstream pipeline with 4 cameras processing
deepstream = PipelineEntity(client, 'deepstream', 'interpipesrc listen-to=camera0_rgba_nvmm ! nvstreammux0.sink_0 interpipesrc listen-to=camera0_rgba_nvmm ! nvstreammux0.sink_1 interpipesrc listen-to=camera1_rgba_nvmm ! nvstreammux0.sink_2 interpipesrc listen-to=camera1_rgba_nvmm ! nvstreammux0.sink_3 nvstreammux name=nvstreammux0 batch-size=4 batched-push-timeout=40000 width=1280 height=720 ! queue ! nvinfer batch-size=4 config-file-path=../deepstream-models/config_infer_primary_4_cameras.txt ! queue ! nvtracker ll-lib-file=../deepstream-models/libnvds_mot_klt.so enable-batch-process=true ! queue ! nvmultistreamtiler width=1280 height=720 rows=2 columns=2 ! nvvideoconvert ! nvdsosd ! queue ! interpipesink name=deep forward-events=true forward-eos=true sync=false')
pipelines_base.append(deepstream)
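
The repository also ships a single-camera inference configuration (config_infer_primary_1_cameras.txt). A reduced variant that feeds one stream with batch-size=1 and drops the tiler might look as follows; this pipeline is a sketch for illustration, not part of the demo script:

# Sketch: hypothetical single-camera DeepStream pipeline using the
# 1-camera nvinfer configuration shipped with the repository.
deepstream_1cam = PipelineEntity(client, 'deepstream_1cam', 'interpipesrc listen-to=camera0_rgba_nvmm ! nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=1 batched-push-timeout=40000 width=1280 height=720 ! queue ! nvinfer batch-size=1 config-file-path=../deepstream-models/config_infer_primary_1_cameras.txt ! queue ! nvtracker ll-lib-file=../deepstream-models/libnvds_mot_klt.so ! nvvideoconvert ! nvdsosd ! queue ! interpipesink name=deep_1cam forward-events=true forward-eos=true sync=false')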

Encoding Pipelines

# Create encoding pipelines
h264 = PipelineEntity(client, 'h264', 'interpipesrc name=h264_src format=time listen-to=deep ! video/x-raw(memory:NVMM),format=RGBA,width=1280,height=720 ! nvvideoconvert ! nvv4l2h264enc ! interpipesink name=h264_sink forward-events=true forward-eos=true sync=false async=false enable-last-sample=false drop=true')
pipelines_video_enc.append(h264)

h265 = PipelineEntity(client, 'h265', 'interpipesrc name=h265_src format=time listen-to=deep ! nvvideoconvert ! nvv4l2h265enc ! interpipesink name=h265_sink forward-events=true forward-eos=true sync=false async=false enable-last-sample=false drop=true')
pipelines_video_enc.append(h265)

vp9 = PipelineEntity(client, 'vp9', 'interpipesrc name=vp9_src format=time listen-to=deep ! nvvideoconvert ! nvv4l2vp9enc ! interpipesink name=vp9_sink forward-events=true forward-eos=true sync=false async=false enable-last-sample=false drop=true')
pipelines_video_enc.append(vp9)

jpeg = PipelineEntity(client, 'jpeg', 'interpipesrc name=jpeg_src format=time listen-to=deep ! nvvideoconvert ! video/x-raw,format=I420,width=1280,height=720 ! nvjpegenc ! interpipesink name=jpeg forward-events=true forward-eos=true sync=false async=false enable-last-sample=false drop=true')
pipelines_snap.append(jpeg)
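
Since GstD keeps every pipeline addressable by name, encoder properties can also be tuned at runtime through element_set. A minimal sketch follows; the element name nvv4l2h264enc0 is the default gst-launch naming, an assumption not stated in the original script, and nvv4l2h264enc takes its bitrate in bits per second:

# Sketch: raise the H.264 encoder bitrate at runtime through GstD.
# Assumes the encoder kept its default element name, nvv4l2h264enc0.
client.element_set('h264', 'nvv4l2h264enc0', 'bitrate', '8000000')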

Video Recording Pipelines

# Create recording pipelines
record_h264 = PipelineEntity(client, 'record_h264', 'interpipesrc format=time allow-renegotiation=false listen-to=h264_sink ! h264parse ! matroskamux ! filesink name=filesink_record_h264 location=test-h264.mkv')
pipelines_video_rec.append(record_h264)

record_h265 = PipelineEntity(client, 'record_h265', 'interpipesrc format=time listen-to=h265_sink ! h265parse ! matroskamux ! filesink name=filesink_record_h265 location=test-h265.mkv')
pipelines_video_rec.append(record_h265)

record_vp9 = PipelineEntity(client, 'record_vp9', 'interpipesrc format=time listen-to=vp9_sink ! matroskamux ! filesink name=filesink_record_vp9 location=test-vp9.mkv')
pipelines_video_rec.append(record_vp9)

Image Snapshots Pipeline

# Create snapshot pipeline
snapshot = PipelineEntity(client, 'snapshot', 'interpipesrc format=time listen-to=jpeg num-buffers=1 ! filesink name=filesink_snapshot location=test-snapshot.jpg')
pipelines_snap.append(snapshot)

Video Display Pipeline

# Create display pipeline
display = PipelineEntity(client, 'display', 'interpipesrc listen-to=deep ! nvegltransform bufapi-version=true ! nveglglessink qos=false async=false sync=false')
pipelines_base.append(display)
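
Note that the display pipeline hard-codes listen-to=deep and leaves its interpipesrc unnamed, so the script never retargets it. If the source element were given a name (say name=display_src, an addition not present in the original pipeline), the display could be switched to a raw camera feed at runtime:

# Hypothetical: retarget the display at runtime. Requires first adding
# name=display_src to the display pipeline's interpipesrc above.
client.element_set('display', 'display_src', 'listen-to', 'camera0_rgba_nvmm')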

Pipelines Execution

# Play base pipelines
for pipeline in pipelines_base:
    pipeline.play()

time.sleep(10)

# Set locations for video recordings
for pipeline in pipelines_video_rec:
    pipeline.set_file_location('test_' + pipeline._name + '_0.mkv')

# Play video recording pipelines
for pipeline in pipelines_video_rec:
    pipeline.play()

# Play video encoding pipelines
for pipeline in pipelines_video_enc:
    pipeline.play()

time.sleep(20)

# Set location for snapshot
snapshot.set_file_location('test_' + snapshot._name + '_0.jpg')

# Play snapshot pipelines
for pipeline in pipelines_snap:
    pipeline.play()

time.sleep(5)

# Take another snapshot, but now use camera0 as source
snapshot.stop()
jpeg.listen_to('camera0_rgba_nvmm')
snapshot.set_file_location('test_' + snapshot._name + '_1.jpg')
snapshot.play()

# Stop previous recordings, retarget the encoders to the camera0 capture
# instead of the DeepStream output, and record another set of videos.
# Send EOS to the encoding pipelines so the muxers close the files properly.
for pipeline in pipelines_video_enc:
    pipeline.eos()

# Stop recordings
for pipeline in pipelines_video_rec:
    pipeline.stop()
for pipeline in pipelines_video_enc:
    pipeline.stop()

for pipeline in pipelines_video_enc:
    pipeline.listen_to('camera0_rgba_nvmm')

# Set locations for new video recordings
for pipeline in pipelines_video_rec:
    pipeline.set_file_location('test_' + pipeline._name + '_1.mkv')

for pipeline in pipelines_video_enc:
    pipeline.play()

for pipeline in pipelines_video_rec:
    pipeline.play()

time.sleep(10)
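
The fixed time.sleep() calls above simply keep the demo running long enough to record a few seconds of video. A more robust controller could instead wait for the EOS message to reach each encoder's bus before stopping the recording pipelines. A minimal sketch, assuming the bus_filter, bus_timeout, and bus_read helpers of the GstD Python client:

# Sketch (not part of the demo): block until the h264 pipeline posts EOS
# instead of sleeping for a fixed interval.
client.bus_filter('h264', 'eos')        # only report EOS messages
client.bus_timeout('h264', 5000000000)  # give up after 5 s (value in nanoseconds)
print(client.bus_read('h264'))          # blocks until EOS or timeout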


Pipelines Closure / Deletion

# Send EOS to the encoding pipelines so the muxers close the files properly
for pipeline in pipelines_video_enc:
    pipeline.eos()
# Stop pipelines
for pipeline in pipelines_snap:
    pipeline.stop()
for pipeline in pipelines_video_rec:
    pipeline.stop()
for pipeline in pipelines_video_enc:
    pipeline.stop()
for pipeline in pipelines_base:
    pipeline.stop()

# Delete pipelines
for pipeline in pipelines_snap:
    pipeline.delete()
for pipeline in pipelines_video_rec:
    pipeline.delete()
for pipeline in pipelines_video_enc:
    pipeline.delete()
for pipeline in pipelines_base:
    pipeline.delete()


Running The Demo

1. Change to the gtc-2020-demo/python-example directory.

2. Start GstD and run the media server demo script (gstd daemonizes and runs in the background):

gstd; ./media-server.py



