Stitcher element: a few more example pipelines

From RidgeRun Developer Connection



Previous: Examples/Using Gstd Index Next: Spherical Video







Problems running the pipelines shown on this page?
Please see our GStreamer Debugging guide for help.

This page showcases basic usage examples of the cudastitcher element. Most of these pipelines can be generated automatically with the Pipeline Generator Tool.

The homography list is stored in the homographies.json file and contains N-1 homographies for N images. For more information on how to set these values, visit the Controlling the Stitcher wiki page.

For all of the examples below, assume that there are three inputs and that the homographies file looks like this:

{
    "homographies":[
        {
            "images":{
                "target":0,
                "reference":1
            },
            "matrix":{
                "h00": 1, "h01": 0, "h02": -510,
                "h10": 0, "h11": 1, "h12": 0,
                "h20": 0, "h21": 0, "h22": 1
            }
        },
        {
            "images":{
                "target":2,
                "reference":1
            },
            "matrix":{
                "h00": 1, "h01": 0, "h02": 510,
                "h10": 0, "h11": 1, "h12": 0,
                "h20": 0, "h21": 0, "h22": 1
            }
        }
    ]
}
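As a sanity check, the file above can be minified into the single-line string that the homography-list property expects (the same `tr -d` idiom used in every pipeline below), and the homography count verified. This is an illustrative sketch only; /tmp/homographies.json is an assumed path, not one required by the stitcher.

```shell
# Write the example homography file (two homographies for three inputs)
cat > /tmp/homographies.json <<'EOF'
{
    "homographies":[
        {
            "images":{ "target":0, "reference":1 },
            "matrix":{
                "h00": 1, "h01": 0, "h02": -510,
                "h10": 0, "h11": 1, "h12": 0,
                "h20": 0, "h21": 0, "h22": 1
            }
        },
        {
            "images":{ "target":2, "reference":1 },
            "matrix":{
                "h00": 1, "h01": 0, "h02": 510,
                "h10": 0, "h11": 1, "h12": 0,
                "h20": 0, "h21": 0, "h22": 1
            }
        }
    ]
}
EOF

# Strip newlines and spaces, exactly as the pipelines below do when passing
# the file contents to the homography-list property
HOMOGRAPHIES="$(cat /tmp/homographies.json | tr -d "\n" | tr -d " ")"
echo "$HOMOGRAPHIES"

# N inputs need N-1 homographies: three cameras, so two "matrix" entries
COUNT="$(printf '%s' "$HOMOGRAPHIES" | grep -o matrix | wc -l | tr -d ' ')"
echo "$COUNT"
```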


The output of the stitcher can be displayed, saved to a file, streamed, or dumped to a fakesink. This applies to all kinds of inputs, but it is only showcased here for camera inputs; make the required adjustments for the other cases as needed.
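One way to switch between those output options is to select the tail of the pipeline with a shell variable. This is only a sketch: SINK and TAIL are illustrative shell names (not stitcher properties), and the branches are taken verbatim from the camera pipelines on this page.

```shell
# Pick an output branch for the stitcher: display, mp4, or anything else
# falls back to fakesink
SINK="${SINK:-fake}"
case "$SINK" in
  display) TAIL='nvvidconv ! nvoverlaysink' ;;
  mp4)     TAIL='nvvidconv ! nvv4l2h264enc bitrate=20000000 ! h264parse ! mp4mux ! filesink location=/tmp/stitching_result.mp4' ;;
  *)       TAIL='fakesink' ;;
esac

# Append this as the final branch of the gst-launch command
echo "stitcher. ! queue ! $TAIL"
```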

The perf element is used in some of the examples; it can be downloaded from the RidgeRun Git repository. Otherwise, the element can be removed from the pipeline without any issues. Also, if you encounter performance issues, consider executing the /usr/bin/jetson_clocks binary.

Stitching from cameras

Saving a stitch to MP4

OUTVIDEO=/tmp/stitching_result.mp4
BORDER_WIDTH=10

gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
border-width=$BORDER_WIDTH \
nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_0 \
nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_1 \
nvarguscamerasrc sensor-id=2 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_2 \
stitcher. ! queue ! nvvidconv ! nvv4l2h264enc bitrate=20000000 ! h264parse ! mp4mux ! filesink location=$OUTVIDEO

Displaying a stitch

BORDER_WIDTH=10

gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  border-width=$BORDER_WIDTH \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! nvoverlaysink

Dumping output to fakesink

This option is particularly useful for debugging.

BORDER_WIDTH=10

gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
border-width=$BORDER_WIDTH \
nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_0 \
nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_1 \
nvarguscamerasrc sensor-id=2 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_2 \
stitcher. ! fakesink

Streaming via UDP+RTP

Set the HOST variable to the receiver's IP address.

BORDER_WIDTH=10
HOST=127.0.0.1
PORT=12345

# Sender
gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
border-width=$BORDER_WIDTH \
nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_0 \
nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_1 \
nvarguscamerasrc sensor-id=2 ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080" ! queue ! stitcher.sink_2 \
stitcher. ! nvvidconv ! nvv4l2h264enc ! rtph264pay config-interval=10  ! queue ! udpsink host=$HOST port=$PORT
# Receiver
gst-launch-1.0 udpsrc port=$PORT ! 'application/x-rtp, media=(string)video, encoding-name=(string)H264' !  queue ! rtph264depay ! avdec_h264 ! videoconvert ! xvimagesink

Stitching videos

Saving a stitch from three MP4 videos

Example pipeline

BORDER_WIDTH=10

INPUT_0=video_0.mp4
INPUT_1=video_1.mp4
INPUT_2=video_2.mp4

OUTPUT=/tmp/stitching_result.mp4

gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
border-width=$BORDER_WIDTH \
filesrc location=$INPUT_0 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_0 \
filesrc location=$INPUT_1 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_1 \
filesrc location=$INPUT_2 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_2 \
stitcher. ! queue ! nvvidconv ! nvv4l2h264enc bitrate=20000000 ! h264parse ! mp4mux ! filesink location=$OUTPUT

Example pipeline for x86

BORDER_WIDTH=10

INPUT_0=video_0.mp4
INPUT_1=video_1.mp4
INPUT_2=video_2.mp4

OUTPUT=/tmp/stitching_result.mp4

gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
border-width=$BORDER_WIDTH \
filesrc location=$INPUT_0 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! "video/x-raw, width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_0 \
filesrc location=$INPUT_1 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! "video/x-raw, width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_1 \
filesrc location=$INPUT_2 ! qtdemux ! h264parse ! avdec_h264 ! videoconvert ! "video/x-raw, width=1920, height=1080, format=RGBA" ! queue ! stitcher.sink_2 \
stitcher. ! queue ! videoconvert ! x264enc ! h264parse ! mp4mux ! filesink location=$OUTPUT

Specifying a format

Generating an MP4 stitch from 3 GRAY8 cameras

BORDER_WIDTH=10
OUTVIDEO=/tmp/stitching_result.mp4

gst-launch-1.0 -e cudastitcher name=stitcher \
homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
border-width=$BORDER_WIDTH \
nvarguscamerasrc sensor-id=0 ! nvvidconv ! "video/x-raw, width=1920, height=1080,format=GRAY8" ! queue ! stitcher.sink_0 \
nvarguscamerasrc sensor-id=1 ! nvvidconv ! "video/x-raw, width=1920, height=1080,format=GRAY8" ! queue ! stitcher.sink_1 \
nvarguscamerasrc sensor-id=2 ! nvvidconv ! "video/x-raw, width=1920, height=1080,format=GRAY8" ! queue ! stitcher.sink_2 \
stitcher. ! queue ! nvvidconv ! "video/x-raw(memory:NVMM), width=1920, height=360" ! nvv4l2h264enc bitrate=20000000 ! h264parse ! mp4mux ! filesink location=$OUTVIDEO

Using distorted inputs

Lens distortion correction can be applied to the stitcher with the undistort element.

The Undistort examples wiki shows some basic usage examples. Visit the Getting Started: Getting the Code page to learn more about the element and how to calibrate it.

