Image Stitching for NVIDIA Jetson - User Guide - Controlling the Stitcher




Previous: Image Stitching for NVIDIA Jetson Basics Index Next: User Guide/Homography estimation






This page provides an explanation of the workflow used when working with the stitcher.

Workflow overview

When using the stitcher, obtaining and selecting the right parameters is a crucial step to get the expected output. These parameters can be obtained with scripts provided within the stitcher itself. The parameters are:

  1. Homography List: This parameter defines the transformations between pairs of images. It is specified with the option homography-list and is set as a JSON-formatted string. The JSON is constructed manually from the individual homographies produced with the homography estimation tool.

    Read the Homography estimation guide on how to calculate the homography between two images.
    Then visit the Homography list guide to better understand its format and how to construct it based on individual homographies.

  2. Blending Width: This parameter sets the number of pixels to be blended between two adjacent images. It can be set with the option border-width.


If you are using the stitcher along with the cuda-undistort element, there are more parameters to be obtained; information about them and how to set them can be found in the cuda undistort wiki.
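You can confirm the exact names, types, and default values of these properties on your installation with gst-inspect-1.0 (assuming the cudastitcher plug-in is already installed):

# List all properties of the stitcher element
gst-inspect-1.0 cudastitcher

# Or show only the two properties discussed above
gst-inspect-1.0 cudastitcher | grep -A 3 -E "homography-list|border-width"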

Workflow diagram

The following diagram shows the workflow to follow when working with the stitcher, as well as the auxiliary tools required, including the cuda-undistort element.

(Diagram: stitcher configuration workflow and auxiliary tools)

This page serves as a guide to configuring the stitcher to meet different application requirements.

Homography List

The homography list is a JSON file that defines the transformations and relationships between the images. Here we will explore, through examples, how to create this file in order to stitch the corresponding images.

Case: 2 Images

2 Images Stitching Example

Let's assume we have only 2 images (with indices 0 and 1). These 2 images are related by a homography, which can be computed using the Homography Estimation Guide. The computed homography transforms the Target image into the Reference image's perspective.

This way, to fully describe a homography, we need to declare 3 parameters:

  • Matrix: the 3x3 transformation matrix.
  • Target: the index of the target image (i.e. the image to be transformed).
  • Reference: the index of the reference image (i.e. the image used as a reference to transform the target image).

Having this information, we build the Homography JSON file:

{
    "homographies":[
        {
            "images":{
                "target":1,
                "reference":0
            },
            "matrix":{
                "h00": 1, "h01": 0, "h02": 510,
                "h10": 0, "h11": 1, "h12": 0,
                "h20": 0, "h21": 0, "h22": 1
            }
        }
    ]
}


With this file we are describing a set of 2 images (0 and 1), where the given matrix transforms image 1 into the perspective of image 0. In this example the matrix is a pure horizontal translation: h02 = 510 places image 1 510 pixels to the right within image 0's coordinate frame.
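For reference, the JSON above can be saved to the homographies.json file used by the usage example later on this page directly from the shell (any text editor works just as well):

# Save the 2-image homography list shown above to homographies.json
cat > homographies.json << 'EOF'
{
    "homographies":[
        {
            "images":{ "target":1, "reference":0 },
            "matrix":{
                "h00": 1, "h01": 0, "h02": 510,
                "h10": 0, "h11": 1, "h12": 0,
                "h20": 0, "h21": 0, "h22": 1
            }
        }
    ]
}
EOF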

Case: 3 Images

(Diagram: 3-image stitching example, with left (0), center (1) and right (2) images)

Similar to the 2-image case, we use homographies to relate the set of images. The rule is to use N-1 homographies, where N is the number of images.

One panoramic use case is to compute the homographies for both left (0) and right (2) images, using the center image (1) as the reference. The JSON file would look like this:

{
    "homographies":[
        {
            "images":{
                "target":0,
                "reference":1
            },
            "matrix":{
                "h00": 1, "h01": 0, "h02": -510,
                "h10": 0, "h11": 1, "h12": 0,
                "h20": 0, "h21": 0, "h22": 1
            }
        },
        {
            "images":{
                "target":2,
                "reference":1
            },
            "matrix":{
                "h00": 1, "h01": 0, "h02": 510,
                "h10": 0, "h11": 1, "h12": 0,
                "h20": 0, "h21": 0, "h22": 1
            }
        }
    ]
}
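Once a list like this one is saved to homographies.json, a quick query can confirm which image each homography transforms and how far it shifts it horizontally. This is just a sketch that assumes the jq command-line JSON processor is installed:

# Print the target, reference and horizontal translation (h02) of each homography
jq -r '.homographies[] | "target \(.images.target) -> reference \(.images.reference): h02 = \(.matrix.h02)"' homographies.json

For the 3-image list above, this prints one line per homography, such as target 0 -> reference 1: h02 = -510.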

Your case

You can create your own homography list, using the other cases as a guide. Just keep in mind the following rules (a quick way to check them is sketched after the list):

  1. N images, N-1 homographies: if you have N input images, you only need to define N-1 homographies.
  2. Reference != Target: you can't use the same image as a target and as a reference for a given homography.
  3. No Target duplicates: an image can be a target only once.
  4. Image indices from 0 to N-1: if you have N images, you must use consecutive numbers from 0 to N-1 for the target and reference indices. This means that you cannot declare something like target: 6 if you have 6 images; the correct index for your last image is 5.
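The sketch below shows one way to check rules 1 to 3 on your own homographies.json from the shell; it assumes the jq command-line JSON processor is installed. Rule 4 has to be checked against the number of cameras in your pipeline.

# Rule 1: number of homographies (should be N-1 for N input images)
jq '.homographies | length' homographies.json

# Rule 2: homographies where the target equals the reference (should print nothing)
jq '.homographies[] | select(.images.target == .images.reference)' homographies.json

# Rule 3: duplicated targets (should print an empty list: [])
jq '[.homographies[].images.target] | group_by(.) | map(select(length > 1))' homographies.json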

Blending

The stitcher can blend the border between two adjacent images to hide the abrupt change of color and gain between the input images. The parameter that adjusts this feature is called border-width, and it is the number of pixels to blend.

Usage Example

The homography list is stored in the homographies.json file.

BORDER_WIDTH=10

gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  border-width=$BORDER_WIDTH \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! nvoverlaysink

The indices of the stitcher's sink pads (sink_0, for example) map directly to the image indices used in the homography list.
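If you would rather record the stitched output than display it, the display branch can be replaced with an encoding branch. The following is only a sketch: it assumes the standard Jetson multimedia elements (nvv4l2h264enc, h264parse, qtmux) are available on your board and reuses the same homographies.json and BORDER_WIDTH as above:

BORDER_WIDTH=10

# Same 3-camera pipeline, but encoding the stitched output to an MP4 file
gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  border-width=$BORDER_WIDTH \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! 'video/x-raw(memory:NVMM)' ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=stitched.mp4

The -e flag already present in the command makes gst-launch-1.0 send an end-of-stream when the pipeline is interrupted, so the MP4 file is closed properly.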

You can find more complex examples here.


Previous: Image Stitching for NVIDIA Jetson Basics Index Next: User Guide/Homography estimation