GStreamer AI Inference - RTSP - WebRTC demo



Problems running the pipelines shown on this page?
Please see our GStreamer Debugging guide for help.


Introduction to RidgeRun GStreamer AI inference demo

This demo showcases several of RidgeRun's GStreamer products while leveraging the NVIDIA Jetson TX2 hardware for accelerated video encoding and decoding.

The system consists of four different pipelines:

  • A camera connected to an interpipesink.
  • A second video source, received over RTSP, connected to an interpipesink.
  • An interpipesrc feeding into an AI object detector.
  • An interpipesrc connected to a JPEG encoder.

The interpipesrc/interpipesink connections are changed dynamically using GstD, which allows either video source to be routed through the GstInference element. The system also takes a snapshot whenever a person is detected in the current video stream.
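Under the hood, switching the active source amounts to changing the listen-to property of an interpipesrc at runtime through GstD. The following is a minimal sketch of that mechanism using gstd-client with test sources; the pipeline and element names are illustrative only, and the actual demo drives the equivalent operations through the pygstd Python bindings:

# Start the GStreamer Daemon if it is not already running
gstd

# Two example source pipelines publishing to interpipe nodes
gstd-client pipeline_create src_camera "videotestsrc is-live=true ! interpipesink name=camera"
gstd-client pipeline_create src_rtsp "videotestsrc is-live=true pattern=ball ! interpipesink name=rtsp"

# A consumer pipeline that initially listens to the camera node
gstd-client pipeline_create consumer "interpipesrc name=input format=time listen-to=camera ! fakesink"

gstd-client pipeline_play src_camera
gstd-client pipeline_play src_rtsp
gstd-client pipeline_play consumer

# Switch the consumer to the second source on the fly
gstd-client element_set consumer input listen-to rtsp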


Demo system diagram.


Getting the Code

This demo is an open-source project hosted on GitHub:

https://github.com/RidgeRun/inference-demo

To clone the latest version you may run:

git clone https://github.com/RidgeRun/inference-demo
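Then enter the cloned directory:

cd inference-demo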

Installation

Hardware and system requirements

  • Jetson TX2 EVM
  • Camera included in Jetson TX2 EVM
  • Jetpack 4.2
  • Google Chrome up to version 77.0 (newer versions might work but haven't been tested)
  • Internet connection

This guide assumes you have already installed Jetpack 4.2, including all the AI and computer vision software:

  • CUDA 10
  • OpenCV 3.3.1

You can use this page as a reference to install a clean Jetpack 4.2 with all the needed dependencies.
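As a quick sanity check, you can query the installed versions; the commands below assume the default JetPack CUDA install path and that the OpenCV Python bindings were installed:

# CUDA toolkit version
/usr/local/cuda/bin/nvcc --version

# OpenCV version as seen from Python
python3 -c "import cv2; print(cv2.__version__)"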

Required GStreamer based Products

  • TensorFlow

Check our TensorFlow building and installation guide by accessing this page.

  • R2Inference

Check our R2Inference building and installation guide by accessing this page.

  • GstInference

We added a new element to GstInference for this demo that takes the detection and classification metadata and generates a signal if it matches the configured label-index. To use this element you need to install a specific branch: feature/inference-alert-element. Follow these instructions to build GstInference with the inferencealert element:

sudo apt-get install -y autoconf automake pkg-config libtool gtk-doc-tools libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-libav libopencv-dev
git clone https://github.com/RidgeRun/gst-inference.git
cd gst-inference
git checkout feature/inference-alert-element
./autogen.sh --prefix /usr/ --libdir /usr/lib/aarch64-linux-gnu/
make
sudo make install
cd ..
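
Once installed, you can confirm the new element is registered and review its properties, including the label-index mentioned above:

gst-inspect-1.0 inferencealert
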
  • GstInterpipe

GstInterpipe Build and Installation Guide.

  • GstD

GstD Build and Installation Guide.

It's important to make sure you install the develop branch, following the Development Version instructions.

  • GstWebRTC

GstWebRTC evaluation binary installation. If you don't have an evaluation binary, please contact us and provide the following information: your platform (e.g. iMX6, TX2, Ultrascale+, etc.), the output of gst-launch-1.0 --gst-version, and the output of uname -a.

  • GstRtspSink

GstRtspSink evaluation binary installation. If you don't have an evaluation binary, please contact us and provide the following information: your platform (e.g. iMX6, TX2, Ultrascale+, etc.), the output of gst-launch-1.0 --gst-version, and the output of uname -a.

Check dependencies

  • You can check the installed GStreamer elements by running the following commands. Each of them should print the element information without errors:
gst-inspect-1.0 inference
gst-inspect-1.0 rrwebrtcbin
gst-inspect-1.0 interpipesink
gst-inspect-1.0 interpipesrc
gst-inspect-1.0 rtspsink
gst-inspect-1.0 detectionoverlay
gst-inspect-1.0 classificationoverlay
gst-inspect-1.0 inferencealert
  • Check the GstD installation:
gstd-client --version

#Expected Output
gstd 0.7.0
Copyright (c) 2015 RidgeRun Engineering

Required External Python Libraries

  • pip3

Install pip3 as follows:

sudo apt install python3-pip
  • psutil:

This is a GstD dependency. You can install it with the following command:

sudo -H pip3 install psutil
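You can verify that the module is importable with:

python3 -c "import psutil; print(psutil.__version__)"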

Usage

1. Log into the TX2, either natively or via SSH.

Enter the cloned inference-demo folder, which has the following structure:

.
├── README.md
└── src
    ├── graph_tinyyolov2_tensorflow.pb
    ├── gst
    │   ├── gstc.py
    │   ├── pygstd.py
    │   └── pygst.py
    ├── main.py
    ├── pipe_config.json
    └── tinyyolov2_labels.txt


2. Open a terminal on another PC/board and run the following pipeline:

PORT=5002
gst-launch-1.0 v4l2src device=/dev/video0 ! autovideoconvert ! vp8enc ! queue ! rtspsink service=$PORT

Get that PC's IP address by running ifconfig, and copy the value to the rtsp_ip_address field in the pipe_config.json file.

The port can be any value that isn't already reserved by the system (ports above 1024 are usually safe), except for port 5001, which is used by the demo for internal communication. This value should match the rtsp_port value in the pipe_config.json file.
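For reference, the relevant entries in pipe_config.json should end up looking similar to the following; the address and port are placeholders for your own network, and the exact quoting should follow whatever the shipped file already uses:

"rtsp_ip_address": "192.168.0.10",
"rtsp_port": "5002"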


3. Open this link: https://webrtc.ridgerun.com:8443 in a Chrome tab. This can be done outside of the TX2.

  • Set your desired session ID; it can be any string, but make sure to also change the session_id in the pipe_config.json file to match.
  • Select video and press join.


4. Go back to the original terminal (located in the demo folder) and start the demo:

cd src
python3 main.py

After a few seconds you should get an output similar to the following:

Starting GstInference Application...
Process 7154 died: No such process; trying to remove PID file. (/usr/local/var/run/gstd//gstd.pid)

    ** Menu **
 1) Camera source
 2) RTSP source
 3) Take snapshot
 4) Exit

Press a number followed by Enter to switch the system to the desired mode, where:

1 Camera source: EVM camera.

2 RTSP source: data obtained from the pipeline run in step 2.

3 Take snapshot: the system will take a JPEG snapshot and save it to /tmp/ under the output#.jpeg naming scheme.

4 Exit: will exit the demo.
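For example, after taking a snapshot you can confirm the file was written from another terminal:

ls /tmp/output*.jpeg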


Contact Us

Visit our Main Website for the RidgeRun products and online store. Information about RidgeRun engineering services is available on the RidgeRun Professional Services, RidgeRun Subscription Model and Client Engagement Process wiki pages. Please email support@ridgerun.com for technical questions and contactus@ridgerun.com for other queries. Contact details for sponsoring the RidgeRun GStreamer projects are available on the Sponsor Projects page.