GstInference/Helper Elements/Inference Filter

From RidgeRun Developer Connection
<noinclude>
{{GstInference/Head|previous=Helper Elements|next=Crop Elements/Detection Crop|metakeywords=inference filter|title=GstInference inferencefilter}}
</noinclude>

<!-- If you want a custom title for the page, un-comment and edit this line:
{{DISPLAYTITLE:GstInference - <descriptive page name>|noerror}}
-->
  
The Filter element aims to solve the problem of '''conditional inference'''. The idea is to avoid processing buffers that are of no interest to the application. For example, we can bypass a ''dog breed'' classifier if the label associated with a bounding box does not correspond to ''dog''.

Our [[GstInference/Metadatas/GstInferenceMeta|GstInferenceMeta]] behaves in a hierarchical manner: amongst the several predictions a buffer might contain, each prediction may contain its own predictions. The inference filter element provides the functionality of selecting which of these predictions to process in further stages of the pipeline, by setting the ''enable'' property accordingly.
  
== Properties ==

The inference filter exposes the following properties in order to select which classes to enable or disable, according to the class_id of the classifications of each prediction.

These properties are documented in the following table:
 
{| class="wikitable"
|-
! Property !! Value !! Description
|-
| filter-class
| Int [-1 - 2147483647] <br> Default: -1
| Class id we want to enable. If set to -1, the filter will be disabled.
|-
| reset-enable
| Boolean <br> Default: false
| Enables all inference meta to be processed.
|}
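For reference, these properties are set directly on the element within a pipeline description. The snippet below is a minimal sketch that composes an inferencefilter stage as a string (FILTER_CLASS and FILTER_STAGE are illustrative shell variables, not part of the element's API):

```shell
# Hypothetical helper variables for building a pipeline description.
# filter-class=8 corresponds to "chair" in the TinyYOLOv2 example below;
# reset-enable=false is the element's default, shown here for clarity.
FILTER_CLASS=8
FILTER_STAGE="inferencefilter filter-class=${FILTER_CLASS} reset-enable=false"
echo "$FILTER_STAGE"
```

This stage would sit between the network's source pad and the overlay, e.g. <code>... net.src_bypass ! $FILTER_STAGE ! inferenceoverlay ...</code>.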
  
== Example use cases and advantages ==

The inference filter element allows selective processing based on the inference results of your application, making it easier to build more complex applications. Many readily available models for different architectures are trained to detect or classify a wide set of classes, and most applications don't require all of them. Filtering according to what is important for your application will not only improve your results but also reduce the amount of processing done in further inference stages.

The following image shows how, in a real scenario, there might be many elements in a video stream that we might not be interested in processing, either temporarily or at all.

[[File:Complete example.png|1200px|thumb|center|Inference filter example. Original image by J Shim on [https://unsplash.com/@uncertainthink Unsplash].|link=]]
== Example ==

If you want to filter the output so that the bounding box is drawn only for a specific class, the following pipeline shows how to do that.

* Pipeline
<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
labels="$(cat $LABELS)" net.src_bypass ! inferencefilter filter-class=8 ! inferenceoverlay font-scale=1 thickness=2 ! videoconvert ! xvimagesink sync=false
</syntaxhighlight>
* Output
** The left image, without inferencefilter, shows two detections (1 chair and 1 pottedplant).
** The right image, with inferencefilter, shows only the chair, because the pipeline sets filter-class=8 (chair=8).

[[File:Filter-class.png|1200px|thumb|center|Inferencefilter example with filter-class=8 (chair)|link=]]
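The filter-class id is the zero-based index of the class in the labels file. Assuming the standard 20-class VOC label ordering used by TinyYOLOv2 (an assumption — always verify the index against the labels.txt shipped with your model), the id for ''chair'' can be looked up like this:

```shell
# Write the 20 VOC class labels, one per line (assumption: your labels.txt
# may differ -- check the actual file shipped with your model).
printf '%s\n' aeroplane bicycle bird boat bottle bus car cat chair cow \
  diningtable dog horse motorbike person pottedplant sheep sofa train \
  tvmonitor > labels.txt
# grep -n gives a 1-based line number; class ids are 0-based.
line=$(grep -n -x chair labels.txt | cut -d: -f1)
id=$((line - 1))
echo "$id"   # the value to pass as filter-class
```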
  
For more example pipelines using the inferencefilter element please check the [[GstInference/Example_pipelines | example pipelines]] section.

<noinclude>
{{GstInference/Foot|Helper Elements|Crop Elements/Detection Crop}}
</noinclude>

Latest revision as of 14:33, 27 February 2023


