Difference between revisions of "Image Stitching for NVIDIA Jetson/User Guide/Controlling the Stitcher"

From RidgeRun Developer Connection
This page explains the workflow used when working with the stitcher, as well as giving a basic description of the parameters required when building a cudastitcher pipeline.

== Workflow Parameters ==

When using the stitcher, parameter acquisition and selection are crucial steps in order to get the expected output. These parameters can be obtained from scripts provided within the stitcher itself.
 
These parameters are:

==== Homography List ====

:This parameter defines the transformations between two images. It is specified with the <code>homography-list</code> option and is set as a JSON formatted string. The JSON is constructed manually, based on the individual homographies produced with the homography estimation tool.
  
:Read the [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Homography_estimation|Homography estimation guide]] to learn how to calculate the homography between two images.

:Then visit the [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Homography_list|Homography list guide]] to better understand its format and how to construct it from the individual homographies.
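
:As an illustration, a minimal homography list for two images (indices 0 and 1) looks like the following. The 3x3 matrix here simply shifts the target image 510 pixels horizontally; in practice, use the matrices produced by the homography estimation tool:

<pre>
{
    "homographies":[
        {
            "images":{
                "target":1,
                "reference":0
            },
            "matrix":{
                "h00": 1, "h01": 0, "h02": 510,
                "h10": 0, "h11": 1, "h12": 0,
                "h20": 0, "h21": 0, "h22": 1
            }
        }
    ]
}
</pre>

:In general, for N input images you need exactly N-1 homographies, an image can be the target of only one homography, and the target and reference indices must be successive numbers from 0 to N-1.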
  
==== Blending Width ====

:This parameter sets the number of pixels to be blended between two images. It can be set with the <code>border-width</code> option. [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Blending|This guide]] provides more information on the topic.
  
==== Additional parameters ====

:If you are using the cuda-undistort element together with the stitcher, there are additional parameters to obtain; information about them and how to set them can be found in the [https://developer.ridgerun.com/wiki/index.php?title=CUDA_Accelerated_GStreamer_Camera_Undistort/User_Guide cuda undistort wiki].
== Workflow overview ==

This section presents the basic steps, in execution order, that must be followed to configure the stitcher properly and acquire the parameters for its usage.
  
#Know your input sources (N).
#Apply distortion correction to the inputs (only if necessary); see [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Camera_Calibration|this guide]] for more details.
#*Run the calibration tool for each source that requires it.
#**Input: multiple images of a calibration pattern.
#**Output: camera matrix and distortion parameters.
#*Save the camera matrix and distortion parameters for each camera, since they will be required to build the pipelines.
#*Repeat until every input has been corrected.
#Calculate all (N-1) homographies between pairs of adjacent images; see [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Homography_estimation|this guide]] for more details.
#*Run the homography estimation tool for each image (target) and its reference (fixed).
#**Input: two still images from adjacent sources with overlap, and a JSON config file.
#**Output: a homography matrix that describes the transformation between the input sources.
#*Save the homography matrices; they will be required in the next steps.
#*Repeat until every image has been a target once.
#Construct the homography list JSON file.
#*This step is done manually; see [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Homography_list|this guide]] for more details.
#Set the blending width; see [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Blending|this guide]] for more details.
#Build and launch the stitcher pipelines.
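
The steps above can be sketched as a single launch command. This is a sketch that assumes three <code>nvarguscamerasrc</code> cameras (sensor-id 0 to 2), a <code>homographies.json</code> file produced in step 4, and display output through <code>nvoverlaysink</code>; adapt the sources, caps, and sink to your setup. It requires a Jetson board with the cameras connected.

<syntaxhighlight lang=bash>
# Blending width in pixels (step 5)
BORDER_WIDTH=10

# The homography-list property takes the JSON as a single-line string,
# so strip spaces and newlines from the file built in step 4:
HOMOGRAPHIES="$(cat homographies.json | tr -d '\n' | tr -d ' ')"

gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="$HOMOGRAPHIES" \
  border-width=$BORDER_WIDTH \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! nvoverlaysink
</syntaxhighlight>

The index of each stitcher sink pad (<code>sink_0</code>, for example) maps directly to the image index used in the homography list.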
  
== Workflow diagram ==

The following diagram provides a visual representation of the workflow described in the previous section, as well as the auxiliary tools required, including the cuda-undistort element when applicable.
 
 
 
[[File:Stitcher workflow diagram.png|700px|center|Stitcher workflow diagram]]
 
  
 
<noinclude>

{{Image_Stitching_for_NVIDIA_Jetson/Foot|Image Stitching for NVIDIA Jetson Basics|User Guide/Homography estimation}}

</noinclude>

Revision as of 12:57, 18 March 2021


