Image Stitching for NVIDIA Jetson/User Guide/Controlling the Stitcher

From RidgeRun Developer Connection
 
<noinclude>
{{Image_Stitching_for_NVIDIA_Jetson/Head|previous=User Guide|next=User Guide/Homography estimation|metakeywords=Image Stitching, CUDA, Stitcher, OpenCV, Panorama}}
</noinclude>
  
{{DISPLAYTITLE: User Guide on Controlling the Stitcher|noerror}}
  
This page provides a basic description of the parameters required when building a cudastitcher pipeline, as well as an explanation of the workflow used when working with the stitcher.
 
  
== Workflow diagram ==
The following diagram provides a visual representation of the workflow needed when using the stitcher, as well as the auxiliary tools required. The [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Controlling_the_Stitcher#Workflow_overview| workflow overview]] provides a more detailed written version of the workflow and links to the required wikis.
 
  
[[File:Stitcher workflow diagram.png|700px|center|none|Stitcher workflow diagram]]
  
== Workflow parameters ==
When using the stitcher, parameter acquisition and selection are crucial steps in order to get the expected output. These parameters can be obtained from scripts provided with the stitcher itself.

These parameters are:
  
==== Undistort parameters ====

:If you are using the stitcher together with the cuda-undistort element, there are additional parameters to obtain. Information about those parameters and how to set them can be found in the [[CUDA_Accelerated_GStreamer_Camera_Undistort/User_Guide | cuda undistort wiki]].
 
  
==== Homography List ====

:This parameter defines the transformations between pairs of images. It is specified with the <code>homography-list</code> option and is set as a JSON formatted string. The JSON is constructed manually based on the individual homographies calculated with the homography estimation tool.
  
:Read the [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Homography_estimation|Homography estimation guide]] on how to calculate the homography between two images.
:Then visit the [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Homography_list|Homography list guide]] to better understand its format and how to construct it based on individual homographies.
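:As a minimal sketch, a homography list for two images (indices 0 and 1), where image '''1''' is transformed with respect to image '''0''', could look as follows. The matrix values are illustrative; here <code>h02 = 510</code> simply shifts image 1 horizontally by 510 pixels relative to image 0:

<pre>
{
    "homographies":[
        {
            "images":{
                "target":1,
                "reference":0
            },
            "matrix":{
                "h00": 1, "h01": 0, "h02": 510,
                "h10": 0, "h11": 1, "h12": 0,
                "h20": 0, "h21": 0, "h22": 1
            }
        }
    ]
}
</pre>

:For <code>N</code> images, the list contains <code>N-1</code> such homography entries.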
  
==== Refinement parameters ====
:The stitcher is capable of refining the homographies during execution. There are multiple parameters associated with this feature; see the [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Homography_Refinement|Homography refinement guide]] for more information.
  
==== Blending Width ====
:This parameter sets the number of pixels to be blended between two images. It can be set with the <code>border-width</code> option. [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Blending|This guide]] provides more information on the topic.
  
== Workflow overview ==
This section presents the basic steps, in execution order, that need to be followed in order to configure the stitcher properly and acquire the parameters for its usage.
  
#Know your input sources (N)
#Apply distortion correction to the inputs (only if necessary), see [[CUDA_Accelerated_GStreamer_Camera_Undistort/User_Guide/Camera_Calibration | CUDA Accelerated GStreamer Camera Undistort Camera Calibration User Guide]] for more details
#*Run the calibration tool for each source that requires it
#**'''input''': Multiple images of a calibration pattern
#**'''output''': Camera matrix and distortion parameters
#*Save the camera matrix and distortion parameters for each camera since they will be required to build the pipelines
#*Repeat until every input has been corrected
#Calculate all (N-1) homographies between pairs of adjacent images, see [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Homography_estimation|Image Stitching Homography estimation User Guide]] for more details
#*Run the homography estimation tool for each image (target) and its reference (fixed)
#**'''input''': Two still images from adjacent sources with overlap and a JSON config file
#**'''output''': Homography matrix that describes the transformation between input sources
#*Save the homography matrices; they will be required in the next steps
#*Repeat until every (N-1) image has been a target (except for the original reference image)
#Assemble the homographies list JSON file
#*This step is done manually, see [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Homography_list|Image Stitching Homography list User Guide]] for more details
#Set the blending width, see [[Image_Stitching_for_NVIDIA_Jetson/User_Guide/Blending|Image Stitching Blending User Guide]] for more details
#Build and launch the stitcher pipeline, see [[Image Stitching for NVIDIA Jetson/Examples|Image Stitching GStreamer Pipeline Examples]]
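The final step can be sketched with a launch line like the following, taken from an earlier usage example of this guide. It assumes three <code>nvarguscamerasrc</code> cameras whose sensor IDs match the image indices in the homography list, and a <code>homographies.json</code> file assembled as described above. Note how the JSON is flattened into a single-line string before being passed to the element:

<syntaxhighlight lang=bash>
# Number of pixels to blend between adjacent images (illustrative value)
BORDER_WIDTH=10

# The homography list is stored in the homographies.json file;
# newlines and spaces are stripped so it can be passed as a single string.
gst-launch-1.0 -e cudastitcher name=stitcher \
  homography-list="`cat homographies.json | tr -d "\n" | tr -d " "`" \
  border-width=$BORDER_WIDTH \
  nvarguscamerasrc sensor-id=0 ! nvvidconv ! stitcher.sink_0 \
  nvarguscamerasrc sensor-id=1 ! nvvidconv ! stitcher.sink_1 \
  nvarguscamerasrc sensor-id=2 ! nvvidconv ! stitcher.sink_2 \
  stitcher. ! queue ! nvvidconv ! nvoverlaysink
</syntaxhighlight>

The indices of the stitcher's sinks (<code>sink_0</code>, for example) map directly to the image indices used in the homography list.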
  
 
  
 
<noinclude>
{{Image_Stitching_for_NVIDIA_Jetson/Foot|User Guide|User Guide/Homography estimation}}
</noinclude>

Latest revision as of 12:00, 26 February 2023