<noinclude>
{{GstInference/Head|previous=Example pipelines|next=Example pipelines/NANO|title=GstInference GStreamer pipelines on PC}}
</noinclude>
<!-- If you want a custom title for the page, un-comment and edit this line:
{{DISPLAYTITLE:GstInference - <descriptive page name>|noerror}}
-->
= Sample pipelines =
The following section contains a tool for generating simple GStreamer pipelines that run one model of a selected architecture using our hierarchical inference metadata. If you are using an older version, you can check the legacy pipelines section. Please make sure to check the documentation to understand the property usage for each element.
  
The required elements are:

* Backend
* Model
* Model location
* Labels
* Source
* Sink
  
The optional elements include:

* inferencefilter
* inferencecrop
* inferenceoverlay

These elements are optional yet very useful; check the documentation for more details on their properties. A sketch showing where they fit in a pipeline is given below.
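
As an orientation before the per-model examples, here is a minimal sketch of how the optional elements plug into a detection pipeline: inferenceoverlay draws on the bypass branch, while inferencefilter and inferencedebug inspect the model branch (inferencecrop can likewise be placed after a source pad; see the detectioncrop examples at the end of this page). It assumes the TinyYolov2 variables defined in the sections below, and filter-class=8 is an illustrative value:

<syntaxhighlight lang=bash>
# Hedged sketch: optional elements around a TinyYolov2 detection pipeline.
# MODEL_LOCATION, INPUT_LAYER, OUTPUT_LAYER and LABELS as in the TinyYolov2
# sections below; filter-class=8 is an illustrative value.
GST_DEBUG=2,*inferencedebug*:6 gst-launch-1.0 \
v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION labels="$(cat $LABELS)" backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink sync=false async=false qos=false \
net.src_model ! inferencefilter filter-class=8 ! inferencedebug ! fakesink
</syntaxhighlight>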
  

== Pipeline generator ==

The following tool will provide simple pipelines according to the selected elements. (The interactive generator form is embedded on the wiki page.) A sample of the kind of pipeline it assembles is shown below.
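
For instance, selecting PC, the TensorFlow backend, Inceptionv1, a v4l2 camera and fakesink assembles roughly the following pipeline (a sketch based on the input/output layer and file-name tables the tool uses, not its literal output):

<syntaxhighlight lang=bash>
# Sketch of a generator result for PC / TensorFlow / Inceptionv1 / v4l2 camera / fakesink.
gst-launch-1.0 \
inceptionv1 name=net model-location=graph_inceptionv1_tensorflow.pb backend=tensorflow labels="$(cat imagenet_labels.txt)" backend::input-layer=input backend::output-layer=InceptionV1/Logits/Predictions/Reshape_1 \
v4l2src device=/dev/video0 ! videoconvert ! videoscale ! queue ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! net.sink_bypass \
net.src_model ! fakesink
</syntaxhighlight>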

== Tensorflow ==

=== Inceptionv1 ===

==== Image file ====

[[File:Inference example.png|1000px|thumb|center|Detection with new metadata]]

* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv1-for-tensorflow this link]
* You will need an image file from one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline

<syntaxhighlight lang=bash>
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv1:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

* Output

<syntaxhighlight lang=bash>
0:00:00.626529976  6700 0x55a306b258a0 LOG              inceptionv1 gstinceptionv1.c:150:gst_inceptionv1_preprocess:<net> Preprocess
0:00:00.643145025  6700 0x55a306b258a0 LOG              inceptionv1 gstinceptionv1.c:162:gst_inceptionv1_postprocess_old:<net> Postprocess
0:00:00.643180120  6700 0x55a306b258a0 LOG              inceptionv1 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 3804 : (4.191162)
0:00:00.643186095  6700 0x55a306b258a0 LOG              inceptionv1 gstinceptionv1.c:187:gst_inceptionv1_postprocess_new:<net> Postprocess Meta
0:00:00.643211153  6700 0x55a306b258a0 LOG              inceptionv1 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 7,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 224
    height : 224
  },
  classes : [
    {
      Id : 14
      Class : 3804
      Label : (null)
      Probability : 4.191162
      Classes : 4004
    },
  ],
  predictions : [
    
  ]
}
}
</syntaxhighlight>
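
If you only want to inspect the hierarchical prediction tree, the inferencedebug element (used in the inference filter example below) can print it without raising the model's debug level; a sketch reusing the variables above:

<syntaxhighlight lang=bash>
# Sketch: print the prediction tree with inferencedebug instead of inceptionv1:6.
# Reuses IMAGE_FILE, MODEL_LOCATION, INPUT_LAYER, OUTPUT_LAYER and LABELS from above.
GST_DEBUG=2,*inferencedebug*:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_model ! inferencedebug ! fakesink
</syntaxhighlight>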
  
==== Video file ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv1-for-tensorflow this link]
* You will need a video file of one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline

<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv1:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

* Output

<syntaxhighlight lang=bash>
0:00:00.881389256  6700 0x55a306b258a0 LOG              inceptionv1 gstinceptionv1.c:150:gst_inceptionv1_preprocess:<net> Preprocess
0:00:00.898481750  6700 0x55a306b258a0 LOG              inceptionv1 gstinceptionv1.c:162:gst_inceptionv1_postprocess_old:<net> Postprocess
0:00:00.898515118  6700 0x55a306b258a0 LOG              inceptionv1 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 1016 : (4.182041)
0:00:00.898521200  6700 0x55a306b258a0 LOG              inceptionv1 gstinceptionv1.c:187:gst_inceptionv1_postprocess_new:<net> Postprocess Meta
0:00:00.898546079  6700 0x55a306b258a0 LOG              inceptionv1 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 22,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 224
    height : 224
  },
  classes : [
    {
      Id : 44
      Class : 1016
      Label : (null)
      Probability : 4.182041
      Classes : 4004
    },
  ],
  predictions : [
    
  ]
}
}
</syntaxhighlight>
 
  
==== Camera stream ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv1-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline

<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv1:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv1 name=net model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

* Output

<syntaxhighlight lang=bash>
0:00:03.858432794  6899 0x558a68bf0e80 LOG              inceptionv1 gstinceptionv1.c:150:gst_inceptionv1_preprocess:<net> Preprocess
0:00:03.875012119  6899 0x558a68bf0e80 LOG              inceptionv1 gstinceptionv1.c:162:gst_inceptionv1_postprocess_old:<net> Postprocess
0:00:03.875053519  6899 0x558a68bf0e80 LOG              inceptionv1 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 3022 : (9897291000005358165649701398904832.000000)
0:00:03.875061545  6899 0x558a68bf0e80 LOG              inceptionv1 gstinceptionv1.c:187:gst_inceptionv1_postprocess_new:<net> Postprocess Meta
0:00:03.875089371  6899 0x558a68bf0e80 LOG              inceptionv1 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 93,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 224
    height : 224
  },
  classes : [
    {
      Id : 186
      Class : 3022
      Label : (null)
      Probability : 9897291000005358165649701398904832.000000
      Classes : 4004
    },
  ],
  predictions : [
    
  ]
}
}
</syntaxhighlight>
 
  
==== Visualization with inference overlay ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv1-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline

<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv1_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV1/Logits/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv1 name=net model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>
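
The inferenceoverlay element also exposes thickness, fontscale and style properties (the pipeline generator above offers Classic, Dotted and Dashed styles as values 0, 1 and 2). A variant of the previous pipeline with illustrative values:

<syntaxhighlight lang=bash>
# Same pipeline as above, with illustrative overlay settings:
# thicker boxes, larger font, dotted style (style=1).
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv1 name=net model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay thickness=2 fontscale=2 style=1 ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>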
 
  
=== Inceptionv2 ===
  
==== Image file ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv2-for-tensorflow this link]
* You will need an image file from one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline

<syntaxhighlight lang=bash>
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'

GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

* Output

<syntaxhighlight lang=bash>
0:00:01.167111306 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinceptionv2.c:217:gst_inceptionv2_preprocess:<net> Preprocess
0:00:01.190633209 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinceptionv2.c:229:gst_inceptionv2_postprocess_old:<net> Postprocess
0:00:01.190667056 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 2058 : (33799702613643740784668592694586507264.000000)
0:00:01.190673102 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinceptionv2.c:254:gst_inceptionv2_postprocess_new:<net> Postprocess Meta
0:00:01.190699590 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 23,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 224
    height : 224
  },
  classes : [
    {
      Id : 46
      Class : 2058
      Label : (null)
      Probability : 33799702613643740784668592694586507264.000000
      Classes : 4004
    },
  ],
  predictions : [
    
  ]
}
}
</syntaxhighlight>
  
==== Video file ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv2-for-tensorflow this link]
* You will need a video file of one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline

<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

* Output

<syntaxhighlight lang=bash>
0:00:01.167111306 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinceptionv2.c:217:gst_inceptionv2_preprocess:<net> Preprocess
0:00:01.190633209 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinceptionv2.c:229:gst_inceptionv2_postprocess_old:<net> Postprocess
0:00:01.190667056 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinferencedebug.c:74:gst_inference_print_highest_probability:<net> Highest probability is label 2058 : (33799702613643740784668592694586507264.000000)
0:00:01.190673102 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinceptionv2.c:254:gst_inceptionv2_postprocess_new:<net> Postprocess Meta
0:00:01.190699590 12853 0x55bc0eeb9770 LOG              inceptionv2 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 23,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 224
    height : 224
  },
  classes : [
    {
      Id : 46
      Class : 2058
      Label : (null)
      Probability : 33799702613643740784668592694586507264.000000
      Classes : 4004
    },
  ],
  predictions : [
    
  ]
}
}
</syntaxhighlight>
  
==== Camera stream ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline

<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv2 name=net model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

* Output

<syntaxhighlight lang=bash>
0:00:14.614862363 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:15.737842669 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:15.737912053 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 838 : (0,105199)
0:00:15.738007534 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:16.855603761 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:16.855673578 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 838 : (0,093981)
0:00:16.855768558 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:199:gst_inceptionv2_preprocess:<net> Preprocess
0:00:17.980784789 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:231:gst_inceptionv2_postprocess:<net> Postprocess
0:00:17.980849612 27227      0x19cd4a0 LOG              inceptionv2 gstinceptionv2.c:252:gst_inceptionv2_postprocess:<net> Highest probability is label 838 : (0,077824)
</syntaxhighlight>
  
==== Visualization with inference overlay ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline

<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv2_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='Softmax'
LABELS='imagenet_labels.txt'

gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv2 name=net model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>
 
  
=== Inceptionv3 ===
  
==== Image file ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv3-for-tensorflow this link]
* You will need an image file from one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline

<syntaxhighlight lang=bash>
IMAGE_FILE=cat.jpg
MODEL_LOCATION='graph_inceptionv3_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV3/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv3:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv3 name=net model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

* Output

<syntaxhighlight lang=bash>
0:00:09.549749856 26945      0xaf9cf0 LOG              inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:10.672917685 26945      0xaf9cf0 LOG              inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:10.672976676 26945      0xaf9cf0 LOG              inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 284 : (0,691864)
0:00:10.673064576 26945      0xaf9cf0 LOG              inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:11.793890820 26945      0xaf9cf0 LOG              inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:11.793951581 26945      0xaf9cf0 LOG              inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 284 : (0,691864)
0:00:11.794041207 26945      0xaf9cf0 LOG              inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:12.920027410 26945      0xaf9cf0 LOG              inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:12.920093762 26945      0xaf9cf0 LOG              inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 284 : (0,691864)
</syntaxhighlight>
 
  
==== Video file ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv3-for-tensorflow this link]
* You will need a video file of one of the [https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a ImageNet classes]
* Pipeline

<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_inceptionv3_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV3/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv3:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv3 name=net model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

* Output

<syntaxhighlight lang=bash>
0:00:11.878158663 27048      0x1d49800 LOG              inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:13.006776924 27048      0x1d49800 LOG              inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:13.006847113 27048      0x1d49800 LOG              inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 282 : (0,594995)
0:00:13.006946305 27048      0x1d49800 LOG              inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:14.170203673 27048      0x1d49800 LOG              inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:14.170277808 27048      0x1d49800 LOG              inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 282 : (0,595920)
0:00:14.170384768 27048      0x1d49800 LOG              inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:15.285901546 27048      0x1d49800 LOG              inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:15.285964794 27048      0x1d49800 LOG              inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 282 : (0,593185)
</syntaxhighlight>
 
  
==== Camera stream ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv3-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline

<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv3_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV3/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

GST_DEBUG=inceptionv3:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! videoconvert ! videoscale ! queue ! net.sink_model \
inceptionv3 name=net model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

* Output

<syntaxhighlight lang=bash>
0:00:14.614862363 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:15.737842669 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:15.737912053 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 838 : (0,105199)
0:00:15.738007534 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:16.855603761 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:16.855673578 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 838 : (0,093981)
0:00:16.855768558 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:199:gst_inceptionv3_preprocess:<net> Preprocess
0:00:17.980784789 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:231:gst_inceptionv3_postprocess:<net> Postprocess
0:00:17.980849612 27227      0x19cd4a0 LOG              inceptionv3 gstinceptionv3.c:252:gst_inceptionv3_postprocess:<net> Highest probability is label 838 : (0,077824)
</syntaxhighlight>
 
  
==== Visualization with inference overlay ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/inceptionv3-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline

<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_inceptionv3_tensorflow.pb'
INPUT_LAYER='input'
OUTPUT_LAYER='InceptionV3/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'

gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
inceptionv3 name=net model-location=$MODEL_LOCATION backend=tensorflow labels=$(cat $LABELS) backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>
 
  
=== TinyYolov2 ===
  
==== Image file ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need an image file showing one of the TinyYOLO classes
* Pipeline

<syntaxhighlight lang=bash>
IMAGE_FILE='cat.jpg'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
multifilesrc location=$IMAGE_FILE start-index=0 stop-index=0 loop=true ! jpegparse ! jpegdec ! videoconvert ! videoscale ! videorate ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! video/x-raw,format=RGB ! net.sink_bypass \
tinyyolov2 new-meta=true name=net backend=tensorflow model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>

* Output

<syntaxhighlight lang=bash>
0:00:03.050336570  8194 0x55b131f7aad0 LOG              tinyyolov2 gsttinyyolov2.c:286:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:03.097045162  8194 0x55b131f7aad0 LOG              tinyyolov2 gsttinyyolov2.c:325:gst_tinyyolov2_postprocess_old:<net> Postprocess
0:00:03.097080665  8194 0x55b131f7aad0 LOG              tinyyolov2 gstinferencedebug.c:93:gst_inference_print_boxes:<net> Box: [class:7, x:87.942292, y:102.912900, width:244.945642, height:285.130143, prob:16.271288]
0:00:03.097087457  8194 0x55b131f7aad0 LOG              tinyyolov2 gsttinyyolov2.c:359:gst_tinyyolov2_postprocess_new:<net> Postprocess Meta
0:00:03.097095173  8194 0x55b131f7aad0 LOG              tinyyolov2 gsttinyyolov2.c:366:gst_tinyyolov2_postprocess_new:<net> Number of predictions: 1
0:00:03.097117947  8194 0x55b131f7aad0 LOG              tinyyolov2 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 346,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 416
    height : 416
  },
  classes : [
    
  ],
  predictions : [
    {
      id : 347,
      enabled : True,
      bbox : {
        x : 87
        y : 102
        width : 244
        height : 285
      },
      classes : [
        {
          Id : 258
          Class : 7
          Label : cat
          Probability : 16.271288
          Classes : 20
        },
      ],
      predictions : [
        
      ]
    },
  ]
}
}
</syntaxhighlight>
 
  
==== Video file ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a video file showing one of the TinyYOLO classes
* Pipeline

<syntaxhighlight lang=bash>
VIDEO_FILE='cat.mp4'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! decodebin ! videoconvert ! videoscale ! queue ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! video/x-raw,format=RGB ! net.sink_bypass \
tinyyolov2 new-meta=true name=net backend=tensorflow model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>

* Output

<syntaxhighlight lang=bash>
0:00:02.992422192  8194 0x55b131f7aad0 LOG              tinyyolov2 gsttinyyolov2.c:286:gst_tinyyolov2_preprocess:<net> Preprocess
0:00:03.048734915  8194 0x55b131f7aad0 LOG              tinyyolov2 gsttinyyolov2.c:325:gst_tinyyolov2_postprocess_old:<net> Postprocess
0:00:03.048770315  8194 0x55b131f7aad0 LOG              tinyyolov2 gstinferencedebug.c:93:gst_inference_print_boxes:<net> Box: [class:7, x:87.942292, y:102.912900, width:244.945642, height:285.130143, prob:16.271288]
0:00:03.048776786  8194 0x55b131f7aad0 LOG              tinyyolov2 gsttinyyolov2.c:359:gst_tinyyolov2_postprocess_new:<net> Postprocess Meta
0:00:03.048784401  8194 0x55b131f7aad0 LOG              tinyyolov2 gsttinyyolov2.c:366:gst_tinyyolov2_postprocess_new:<net> Number of predictions: 1
0:00:03.048805819  8194 0x55b131f7aad0 LOG              tinyyolov2 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 338,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 416
    height : 416
  },
  classes : [
    
  ],
  predictions : [
    {
      id : 339,
      enabled : True,
      bbox : {
        x : 87
        y : 102
        width : 244
        height : 285
      },
      classes : [
        {
          Id : 252
          Class : 7
          Label : cat
          Probability : 16.271288
          Classes : 20
        },
      ],
      predictions : [
        
      ]
    },
  ]
}
}
</syntaxhighlight>
  
==== Camera stream ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline

<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw" ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! video/x-raw,format=RGB ! net.sink_bypass \
tinyyolov2 new-meta=true name=net backend=tensorflow model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER
</syntaxhighlight>

* Output

<syntaxhighlight lang=bash>
0:00:02.493931842  8814 0x557dfec450f0 LOG              tinyyolov2 gsttinyyolov2.c:286:gst_tinyyolov2_preprocess:<net> Preprocess
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:00:01.951234668
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
0:00:02.541405794  8814 0x557dfec450f0 LOG              tinyyolov2 gsttinyyolov2.c:325:gst_tinyyolov2_postprocess_old:<net> Postprocess
0:00:02.541440570  8814 0x557dfec450f0 LOG              tinyyolov2 gstinferencedebug.c:93:gst_inference_print_boxes:<net> Box: [class:14, x:82.788036, y:126.779761, width:250.107193, height:300.441625, prob:12.457702]
0:00:02.541447102  8814 0x557dfec450f0 LOG              tinyyolov2 gsttinyyolov2.c:359:gst_tinyyolov2_postprocess_new:<net> Postprocess Meta
0:00:02.541454350  8814 0x557dfec450f0 LOG              tinyyolov2 gsttinyyolov2.c:366:gst_tinyyolov2_postprocess_new:<net> Number of predictions: 1
0:00:02.541476722  8814 0x557dfec450f0 LOG              tinyyolov2 gstinferencedebug.c:111:gst_inference_print_predictions:
{
  id : 177,
  enabled : True,
  bbox : {
    x : 0
    y : 0
    width : 416
    height : 416
  },
  classes : [
    
  ],
  predictions : [
    {
      id : 178,
      enabled : True,
      bbox : {
        x : 82
        y : 126
        width : 250
        height : 300
      },
      classes : [
        {
          Id : 101
          Class : 14
          Label : person
          Probability : 12.457702
          Classes : 20
        },
      ],
      predictions : [
        
      ]
    },
  ]
}
}
</syntaxhighlight>
 
  
==== Visualization with inference overlay ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline

<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

GST_DEBUG=tinyyolov2:6 gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw" ! tee name=t \
t. ! queue ! videoconvert ! videoscale ! net.sink_model \
t. ! queue ! videoconvert ! video/x-raw,format=RGB ! net.sink_bypass \
tinyyolov2 new-meta=true name=net backend=tensorflow model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! inferenceoverlay ! videoconvert ! queue ! xvimagesink async=false sync=false qos=false
</syntaxhighlight>
 
  
==== Using inference filter ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline

<syntaxhighlight lang=bash>
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

GST_DEBUG=2,*inferencedebug*:6 gst-launch-1.0 \
v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION labels=$(cat $LABELS) backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_model ! inferencefilter filter-class=8 ! inferencedebug ! fakesink
</syntaxhighlight>
  
* Output

<syntaxhighlight lang=bash>
0:00:03.255231109 11277 0x55f5ce5cfde0 DEBUG        inferencedebug gstinferencedebug.c:131:gst_inference_debug_transform_ip:<inferencedebug0> transform_ip
0:00:03.255268289 11277 0x55f5ce5cfde0 DEBUG        inferencedebug gstinferencedebug.c:120:gst_inference_debug_print_predictions:<inferencedebug0> Prediction Tree:
{
  id : 169,
  enabled : False,
  bbox : {
    x : 0
    y : 0
    width : 416
    height : 416
  },
  classes : [
    
  ],
  predictions : [
    {
      id : 170,
      enabled : False,
      bbox : {
        x : 101
        y : 96
        width : 274
        height : 346
      },
      classes : [
        {
          Id : 81
          Class : 14
          Label : person
          Probability : 12.842868
          Classes : 20
        },
      ],
      predictions : [
        
      ]
    },
  ]
}
}
</syntaxhighlight>

Note how the predictions in the tree are reported with enabled : False: the detected class (14) does not match the filter-class=8 setting, so the filter disables them rather than removing the metadata.
 
  
  
==== Visualization with detection crop ====

* Get the graph used on this example from [https://shop.ridgerun.com/products/tinyyolov2-for-tensorflow this link]
* You will need a v4l2 compatible camera
* Pipeline

===== Example with aspect-ratio property =====

<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! detectioncrop aspect-ratio=1/1 ! videoscale ! ximagesink sync=false
</syntaxhighlight>
 
  
===== Example with crop-index property =====

<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! detectioncrop crop-index=1 ! videoscale ! ximagesink sync=false
</syntaxhighlight>
 
  
===== Example with crop-class property =====

<syntaxhighlight lang=bash>
CAMERA='/dev/video0'
MODEL_LOCATION='graph_tinyyolov2_tensorflow.pb'
INPUT_LAYER='input/Placeholder'
OUTPUT_LAYER='add_8'
LABELS='labels.txt'

gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! detectioncrop crop-class=4 ! videoscale ! ximagesink sync=false
</syntaxhighlight>
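
The detectioncrop properties can also be combined, for instance to crop a specific class while enforcing a square output. This is a sketch under the assumption that the properties compose as described above:

<syntaxhighlight lang=bash>
# Sketch: combine crop-class and aspect-ratio on one detectioncrop element
# (assumes the two properties compose; reuses the variables above).
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
tinyyolov2 name=net model-location=$MODEL_LOCATION backend=tensorflow backend::input-layer=$INPUT_LAYER backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! detectioncrop crop-class=4 aspect-ratio=1/1 ! videoscale ! ximagesink sync=false
</syntaxhighlight>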
 

= Advanced pipelines =

<noinclude>
{{GstInference/Foot|Example pipelines|Example pipelines/NANO}}
</noinclude>