Coral from Google/GstInference/Example Pipelines

Introduction

The pipelines in this wiki are designed to test GstInference's capabilities in a simple way: just copy and paste the code inside the colored boxes into your terminal. The blue pipelines are meant to be executed inside the folder that contains the inference model data. The purple pipelines display the received stream, so they can be executed from any location.

The model and labels for these pipelines can be downloaded from:

  • MobilenetV2: model and labels, available from https://coral.ai/models/.
  • MobilenetV2 + SSD: model from https://coral.ai/models/ and labels from https://developer.ridgerun.com/wiki/index.php?title=Coral_MobilenetV2SSD_COCO_labels. In this case, you need to save the labels content into a file named coco_labels.txt.
    Important: Make sure you use RidgeRun labels to get the correct inference results.
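
If you want to fetch everything from the command line, the sketch below downloads the two models and the ImageNet labels into the current directory. The github.com/google-coral/test_data URLs are an assumption; take the canonical download links from https://coral.ai/models/, and remember that coco_labels.txt still has to be saved by hand from the RidgeRun labels page above.

BASE='https://github.com/google-coral/test_data/raw/master'
# Classification model and its labels (assumed locations; verify on coral.ai/models)
wget "$BASE/mobilenet_v2_1.0_224_quant_edgetpu.tflite"
wget "$BASE/imagenet_labels.txt"
# Detection model (its labels come from the RidgeRun wiki page linked above)
wget "$BASE/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite"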

Once you have downloaded them, test your preferred pipeline from the list below.

Note: These pipelines have been tested using the Coral USB Accelerator and the Coral Dev Board.

Classification: MobilenetV2

Camera Source

For these pipelines, you can modify the CAMERA variable according to your device.
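
If you are not sure which /dev/video node corresponds to your camera, you can list the available devices and their supported formats first (a quick check, assuming v4l-utils is installed):

v4l2-ctl --list-devices
v4l2-ctl --device=/dev/video1 --list-formats-ext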

Display Output

CAMERA='/dev/video1'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! \
inferencebin arch=mobilenetv2 backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! waylandsink fullscreen=false sync=false
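
All of the pipelines build the labels property the same way: the awk call strips the leading numeric index from each line of the labels file and joins the remaining class names, separated by (escaped) semicolons, into a single string. If you want to inspect the string that is actually handed to inferencebin, you can print it first; a minimal sketch, assuming imagenet_labels.txt is in the current directory:

LABELS='imagenet_labels.txt'
# Preview (truncated) the value that the pipelines pass to the labels property
echo "\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" | head -c 200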

Recording Output

CAMERA='/dev/video1'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
OUTPUT_FILE='recording.mpeg'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! \
inferencebin arch=mobilenetv2 backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! avenc_mpeg2video ! mpegtsmux ! filesink location=$OUTPUT_FILE -e
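
The recording is an MPEG-TS file carrying MPEG-2 video, so it can be played back with a decode pipeline that mirrors the client pipelines used further below:

OUTPUT_FILE='recording.mpeg'
gst-launch-1.0 filesrc location=$OUTPUT_FILE ! tsdemux ! mpeg2dec ! videoconvert ! autovideosink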

Streaming Output

Remember to modify the HOST and PORT variables according to your own needs.
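
HOST must be the IP address of the machine that will display the stream. On a Linux client you can find it with, for example:

hostname -I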

  • Processing side
CAMERA='/dev/video1'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
HOST='192.168.0.13'
PORT='5000'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! \
inferencebin arch=mobilenetv2 backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! avenc_mpeg2video ! mpegtsmux ! udpsink host=$HOST port=$PORT sync=false
  • Client-side
PORT='5000'
gst-launch-1.0 udpsrc port=$PORT ! queue  ! tsdemux ! mpeg2dec ! queue ! videoconvert ! autovideosink sync=false -e

File Source

For these pipelines, you can modify the VIDEO_FILE variable to provide an mp4 video file containing any of the classes listed in the imagenet_labels.txt file from the downloaded model.
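
These file-source pipelines expect H.264 video inside an MP4 container (they demux with qtdemux and decode with avdec_h264). You can confirm that a candidate file matches before running them, for example with the discoverer tool from gst-plugins-base:

VIDEO_FILE='animals.mp4'
gst-discoverer-1.0 $VIDEO_FILE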

Display Output

VIDEO_FILE='animals.mp4'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux ! queue ! h264parse ! avdec_h264 ! \
inferencebin arch=mobilenetv2 backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! waylandsink fullscreen=false sync=false

Recording Output

You can modify the OUTPUT_FILE variable to the name you want for your recording.

VIDEO_FILE='animals.mp4'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
OUTPUT_FILE='recording.mpeg'
gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux ! queue ! h264parse ! avdec_h264 ! \
inferencebin arch=mobilenetv2 backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! avenc_mpeg2video ! mpegtsmux ! filesink location=$OUTPUT_FILE -e

Streaming Output

Remember to modify the HOST and PORT variables according to your own needs.

  • Processing side
VIDEO_FILE='animals.mp4'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
HOST='192.168.0.13'
PORT='5000'
gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux ! queue ! h264parse ! avdec_h264 ! \
inferencebin arch=mobilenetv2 backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! avenc_mpeg2video ! mpegtsmux ! udpsink host=$HOST port=$PORT sync=false
  • Client-side
PORT='5000'
gst-launch-1.0 udpsrc port=$PORT ! queue  ! tsdemux ! mpeg2dec ! queue ! videoconvert ! autovideosink sync=false -e

RTSP Source

For these pipelines, you may modify the RTSP_URI variable according to your needs.
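
Since these pipelines depayload with rtph264depay, the RTSP stream must carry H.264 video. You can verify this in the same way as with local files:

RTSP_URI='rtspt://170.93.143.139/rtplive/1701519c02510075004d823633235daa'
gst-discoverer-1.0 $RTSP_URI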

Display Output

RTSP_URI='rtspt://170.93.143.139/rtplive/1701519c02510075004d823633235daa'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
rtspsrc location=$RTSP_URI ! rtph264depay ! decodebin ! queue ! \
inferencebin arch=mobilenetv2 backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! waylandsink fullscreen=false sync=false

Recording Output

You can modify the OUTPUT_FILE variable to the name you want for your recording.

RTSP_URI='rtspt://170.93.143.139/rtplive/1701519c02510075004d823633235daa'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
OUTPUT_FILE='recording.mpeg'
gst-launch-1.0 \
rtspsrc location=$RTSP_URI ! rtph264depay ! decodebin ! queue ! \
inferencebin arch=mobilenetv2 backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! avenc_mpeg2video ! mpegtsmux ! filesink location=$OUTPUT_FILE -e

Streaming Output

Remember to modify the HOST and PORT variables according to your own needs.

  • Processing side
RTSP_URI='rtspt://170.93.143.139/rtplive/1701519c02510075004d823633235daa'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
HOST='192.168.0.13'
PORT='5000'
gst-launch-1.0 \
rtspsrc location=$RTSP_URI ! rtph264depay ! decodebin ! queue ! \
inferencebin arch=mobilenetv2 backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! avenc_mpeg2video ! mpegtsmux ! udpsink host=$HOST port=$PORT sync=false
  • Client-side
PORT='5000'
gst-launch-1.0 udpsrc port=$PORT ! queue  ! tsdemux ! mpeg2dec ! queue ! videoconvert ! autovideosink sync=false -e

Detection: MobilenetV2 + SSD

Camera Source

For these pipelines, you can modify the CAMERA variable according to your device.

Display Output

CAMERA='/dev/video1'
MODEL_LOCATION='ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='coco_labels.txt'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! \
inferencebin arch=mobilenetv2ssd backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! waylandsink fullscreen=false sync=false

Recording Output

CAMERA='/dev/video1'
MODEL_LOCATION='ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='coco_labels.txt'
OUTPUT_FILE='recording.mpeg'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! \
inferencebin arch=mobilenetv2ssd backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! avenc_mpeg2video ! mpegtsmux ! filesink location=$OUTPUT_FILE -e

Streaming Output

Remember to modify the HOST and PORT variables according to your own needs.

  • Processing side
CAMERA='/dev/video1'
MODEL_LOCATION='ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='coco_labels.txt'
HOST='192.168.0.13'
PORT='5000'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! \
inferencebin arch=mobilenetv2ssd backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! avenc_mpeg2video ! mpegtsmux ! udpsink host=$HOST port=$PORT sync=false
  • Client-side
PORT='5000'
gst-launch-1.0 udpsrc port=$PORT ! queue  ! tsdemux ! mpeg2dec ! queue ! videoconvert ! autovideosink sync=false -e

File Source

For these pipelines, you can modify the VIDEO_FILE variable to provide an mp4 video file containing any of the classes listed in the coco_labels.txt labels file.

Display Output

VIDEO_FILE='animals.mp4'
MODEL_LOCATION='ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='coco_labels.txt'
gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux ! queue ! h264parse ! avdec_h264 ! \
inferencebin arch=mobilenetv2ssd backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! waylandsink fullscreen=false sync=false

Recording Output

You can modify the OUTPUT_FILE variable to the name you want for your recording.

VIDEO_FILE='animals.mp4'
MODEL_LOCATION='ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='coco_labels.txt'
OUTPUT_FILE='recording.mpeg'
gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux ! queue ! h264parse ! avdec_h264 ! \
inferencebin arch=mobilenetv2ssd backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! avenc_mpeg2video ! mpegtsmux ! filesink location=$OUTPUT_FILE -e

Streaming Output

Remember to modify the HOST and PORT variables according to your own needs.

  • Processing side
VIDEO_FILE='animals.mp4'
MODEL_LOCATION='ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='coco_labels.txt'
HOST='192.168.0.13'
PORT='5000'
gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux ! queue ! h264parse ! avdec_h264 ! \
inferencebin arch=mobilenetv2ssd backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! avenc_mpeg2video ! mpegtsmux ! udpsink host=$HOST port=$PORT sync=false
  • Client-side
PORT='5000'
gst-launch-1.0 udpsrc port=$PORT ! queue  ! tsdemux ! mpeg2dec ! queue ! videoconvert ! autovideosink sync=false -e

RTSP Source

For these pipelines, you may modify the RTSP_URI variable according to your needs.

Display Output

RTSP_URI='rtspt://170.93.143.139/rtplive/1701519c02510075004d823633235daa'
MODEL_LOCATION='ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='coco_labels.txt'
gst-launch-1.0 \
rtspsrc location=$RTSP_URI ! rtph264depay ! decodebin ! queue ! \
inferencebin arch=mobilenetv2ssd backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! waylandsink fullscreen=false sync=false

Recording Output

You can modify the OUTPUT_FILE variable to the name you want for your recording.

RTSP_URI='rtspt://170.93.143.139/rtplive/1701519c02510075004d823633235daa'
MODEL_LOCATION='ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='coco_labels.txt'
OUTPUT_FILE='recording.mpeg'
gst-launch-1.0 \
rtspsrc location=$RTSP_URI ! rtph264depay ! decodebin ! queue ! \
inferencebin arch=mobilenetv2ssd backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! avenc_mpeg2video ! mpegtsmux ! filesink location=$OUTPUT_FILE -e

Streaming Output

Remember to modify the HOST and PORT variables according to your own needs.

  • Processing side
RTSP_URI='rtspt://170.93.143.139/rtplive/1701519c02510075004d823633235daa'
MODEL_LOCATION='ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='coco_labels.txt'
HOST='192.168.0.13'
PORT='5000'
gst-launch-1.0 \
rtspsrc location=$RTSP_URI ! rtph264depay ! decodebin ! queue ! \
inferencebin arch=mobilenetv2ssd backend=coral model-location=$MODEL_LOCATION \
input-layer=$INPUT_LAYER output-layer=$OUTPUT_LAYER \
labels="\"$(awk '{$1=""; printf "\%s\;",$0}' $LABELS)\"" overlay=true ! \
videoconvert ! avenc_mpeg2video ! mpegtsmux ! udpsink host=$HOST port=$PORT sync=false
  • Client-side
PORT='5000'
gst-launch-1.0 udpsrc port=$PORT ! queue  ! tsdemux ! mpeg2dec ! queue ! videoconvert ! autovideosink sync=false -e

