GstInference inferencefilter

The inferencefilter element aims to solve the problem of conditional inference. The idea is to avoid processing buffers that are of no interest to the application. For example, we can bypass a dog breed classifier if the label associated with a bounding box does not correspond to a dog.

Our GstInferenceMeta behaves in a hierarchical manner: amongst the several predictions a buffer might contain, each prediction may contain its own predictions. The inference filter element provides the functionality of selecting which of these predictions are processed in further stages of the pipeline, by setting the enable property accordingly.
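
As a minimal sketch of the dog example above, the pipeline below keeps only detections of a single class enabled for the rest of the pipeline. It mirrors the pipeline from the Example section further down; filter-class=11 is an assumption based on the standard PASCAL VOC ordering of the TinyYOLOv2 labels (the same ordering in which chair is class 8), and the downstream stage here is simply inferenceoverlay, where a second-stage classifier could be attached instead. Adjust the class id, model and label files to your own setup.

# Sketch: keep only "dog" detections enabled for downstream processing.
# filter-class=11 assumes the standard PASCAL VOC label ordering of TinyYOLOv2.
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
labels="$(cat $LABELS)" net.src_bypass ! inferencefilter filter-class=11 ! inferenceoverlay ! videoconvert ! xvimagesink sync=false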

Properties

The inference filter exposes the following properties in order to select which classes to enable or disable, according to the class_id of the classifications of each prediction.

These properties are documented in the following table:

Property       Value                                  Description
filter-class   Int [-1 - 2147483647]. Default: -1     Class id we want to enable. If set to -1, the filter will be disabled.
reset-enable   Boolean. Default: false                Enables all inference meta to be processed.
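
To confirm that these properties and their defaults match the version of the element installed on your system, you can inspect it directly:

# Query the installed inferencefilter element and list its pads and properties
gst-inspect-1.0 inferencefilter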

Example use cases and advantages

The inference filter element enables selective processing based on your application's inference results, making it easier to build more complex applications. Many of the readily available models for different architectures are trained to detect or classify a wide set of classes, and most applications do not require all of them. Filtering according to what is important for your application will not only improve your results but also reduce the amount of processing done in further inference stages.

The following image shows how, in a real scenario, a video stream may contain many elements that we are not interested in processing, either temporarily or at all.

Original image taken from: J Shim (Unsplash)

Example

If you want to filter the output so that bounding boxes are drawn only for a specific class, the following pipeline shows how to do that.

  • Pipeline
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
labels="$(cat $LABELS)" net.src_bypass ! inferencefilter filter-class=8 ! inferenceoverlay font-scale=1 thickness=2 ! videoconvert ! xvimagesink sync=false
  • Output
    • Left image, without inferencefilter, shows two detections (1 chair and 1 pottedplant).
    • Right image, with inferencefilter, shows only the chair, since the pipeline sets filter-class=8 and chair corresponds to class id 8 (see the note after the image on how to look up class ids).


Inferencefilter example with filter-class=8 (chair)
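
The numeric value passed to filter-class depends on your labels file. As a sketch, assuming the labels file lists one class name per line and class ids follow the zero-based line order (consistent with chair=8 for the standard TinyYOLOv2/PASCAL VOC labels), you can print the id of each label with:

# Print every label with its assumed zero-based class id
awk '{ printf "%d %s\n", NR-1, $0 }' labels.txt
# Or look up a single class, e.g. chair:
awk '$0 == "chair" { print NR-1 }' labels.txt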

For more example pipelines using the inferencefilter element, please check the example pipelines section.

