Coral from Google/GstInference/Example Pipelines
Introduction

The pipelines in this wiki are designed to test the GstInference capabilities in a simple way: just copy and paste the code inside the colored boxes into your terminal. The blue pipelines are meant to be executed inside the folder that contains the inference model data; the purple pipelines display the received stream, so they can be executed from any location.

The model for these pipelines can be downloaded from the Coral test data repository:

  • MobilenetV2 (model): https://github.com/google-coral/test_data/raw/master/mobilenet_v2_1.0_224_quant_edgetpu.tflite

And for the labels file you may use the one provided for the TensorFlow backend in the RidgeRun store:

  • MobilenetV2 (labels): https://shop.ridgerun.com/products/mobilenetv2-for-tensorflow

Once you have downloaded them, unzip the labels package (the model is a ready-to-use .tflite file) and test your preferred pipeline from the list below.
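
A minimal download sketch, assuming wget is available; the labels archive name below is an assumption, so rename it to match the actual file served by the RidgeRun store:

# Fetch the Edge TPU model (a plain .tflite file, no unzipping needed)
wget https://github.com/google-coral/test_data/raw/master/mobilenet_v2_1.0_224_quant_edgetpu.tflite

# Unzip the labels package from the RidgeRun store; the file name here is
# an assumption, adjust it to your actual download
unzip mobilenetv2-for-tensorflow.zip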

Dev Board

Classification: MobilenetV2

Camera Source

For these pipelines, you can modify the CAMERA variable according to your device.
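
If you are not sure which /dev/video node your camera uses, a quick check with v4l2-ctl (from the v4l-utils package, which may need to be installed first):

# List the V4L2 capture devices and their /dev/video* nodes
v4l2-ctl --list-devices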

Display Output

CAMERA='/dev/video1'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
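# The tee element (tee name=t) splits the camera feed into two branches:
#   t. ! videoscale ! queue ! net.sink_model feeds scaled frames to the network
#   t. ! queue ! net.sink_bypass keeps the full-resolution frames
# net.src_bypass then carries the frames plus the inference results to
# classificationoverlay, which draws the predicted label before display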
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! \
waylandsink fullscreen=false sync=false

Recording Output

CAMERA='/dev/video1'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
OUTPUT_FILE='recording.mpeg'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! \
avenc_mpeg2video ! mpegtsmux ! filesink location=$OUTPUT_FILE -e
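
To check the result, a minimal playback sketch, assuming gst-play-1.0 from the standard GStreamer tools is available:

OUTPUT_FILE='recording.mpeg'
# Play back the MPEG-TS recording produced by the pipeline above
gst-play-1.0 $OUTPUT_FILE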

Streaming Output

Remember to modify the HOST and PORT variables according to your own needs.
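
HOST must be the IP address of the machine that will receive the stream; one way to look it up, assuming a Linux client:

# Run on the client machine: prints its IP addresses, pick the one that is
# reachable from the board
hostname -I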

  • Processing side
CAMERA='/dev/video1'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
HOST='192.168.0.17'
PORT='5000'
gst-launch-1.0 \
v4l2src device=$CAMERA ! "video/x-raw, width=1280, height=720" ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! avenc_mpeg2video ! mpegtsmux ! \
udpsink host=$HOST port=$PORT sync=false
  • Client side
PORT='5000'
gst-launch-1.0 udpsrc port=$PORT ! queue  ! tsdemux ! mpeg2dec ! queue ! videoconvert ! autovideosink sync=false -e

File Source

For these pipelines, you can modify the VIDEO_FILE variable to point to an MP4 video file that contains any of the classes listed in the imagenet_labels.txt file from the downloaded model.
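
These pipelines demux the MP4 container and decode H.264 (qtdemux ! h264parse ! avdec_h264), so the file must carry an H.264 video track. A quick way to confirm this, using gst-discoverer-1.0 from the standard GStreamer tools:

VIDEO_FILE='animals.mp4'
# Inspect the container and streams; the video stream should report H.264
gst-discoverer-1.0 $VIDEO_FILE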

Display Output

VIDEO_FILE='animals.mp4'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux ! queue ! h264parse ! avdec_h264 ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! \
waylandsink fullscreen=false sync=false

Recording Output

You can modify the OUTPUT_FILE variable to the name you want for your recording.

VIDEO_FILE='animals.mp4'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
OUTPUT_FILE='recording.mpeg'
gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux ! queue ! h264parse ! avdec_h264 ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! \
avenc_mpeg2video ! mpegtsmux ! filesink location=$OUTPUT_FILE -e

Streaming Output

Remember to modify the HOST and PORT variables according to your own needs.

  • Processing side
VIDEO_FILE='animals.mp4'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
HOST='192.168.0.17'
PORT='5000'
gst-launch-1.0 \
filesrc location=$VIDEO_FILE ! qtdemux ! queue ! h264parse ! avdec_h264 ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! avenc_mpeg2video ! mpegtsmux ! \
udpsink host=$HOST port=$PORT sync=false
  • Client side
PORT='5000'
gst-launch-1.0 udpsrc port=$PORT ! queue  ! tsdemux ! mpeg2dec ! queue ! videoconvert ! autovideosink sync=false -e

RTSP Source

For these pipelines, you may modify the RTSP_URI variable according to your needs.
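
Before adding inference, it may help to confirm that the RTSP feed itself plays. A minimal sketch using the same depay/decode chain as the pipelines below:

RTSP_URI='rtspt://170.93.143.139/rtplive/1701519c02510075004d823633235daa'
# Bare playback test: depayload, decode, and display only
gst-launch-1.0 rtspsrc location=$RTSP_URI ! rtph264depay ! decodebin ! queue ! videoconvert ! autovideosink sync=false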

Display Output

RTSP_URI='rtspt://170.93.143.139/rtplive/1701519c02510075004d823633235daa'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
gst-launch-1.0 \
rtspsrc location=$RTSP_URI ! rtph264depay ! decodebin ! queue ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! \
waylandsink fullscreen=false sync=false

Recording Output

You can modify the OUTPUT_FILE variable to the name you want for your recording.

RTSP_URI='rtspt://170.93.143.139/rtplive/1701519c02510075004d823633235daa'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
OUTPUT_FILE='recording.mpeg'
gst-launch-1.0 \
rtspsrc location=$RTSP_URI ! rtph264depay ! decodebin ! queue ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! \
avenc_mpeg2video ! mpegtsmux ! filesink location=$OUTPUT_FILE -e

Streaming Output

Remember to modify the HOST and PORT variables according to your own needs.

  • Processing side
RTSP_URI='rtspt://170.93.143.139/rtplive/1701519c02510075004d823633235daa'
MODEL_LOCATION='mobilenet_v2_1.0_224_quant_edgetpu.tflite'
INPUT_LAYER='input'
OUTPUT_LAYER='MobilenetV2/Predictions/Reshape_1'
LABELS='imagenet_labels.txt'
HOST='192.168.0.17'
PORT='5000'
gst-launch-1.0 \
rtspsrc location=$RTSP_URI ! rtph264depay ! decodebin ! queue ! videoconvert ! tee name=t \
t. ! videoscale ! queue ! net.sink_model \
t. ! queue ! net.sink_bypass \
mobilenetv2 name=net model-location=$MODEL_LOCATION backend=edgetpu backend::input-layer=$INPUT_LAYER  backend::output-layer=$OUTPUT_LAYER \
net.src_bypass ! classificationoverlay labels="$(cat $LABELS)" font-scale=4 thickness=4 ! videoconvert ! avenc_mpeg2video ! mpegtsmux ! \
udpsink host=$HOST port=$PORT sync=false
  • Client side
PORT='5000'
gst-launch-1.0 udpsrc port=$PORT ! queue  ! tsdemux ! mpeg2dec ! queue ! videoconvert ! autovideosink sync=false -e

Detection: MobilenetSSD v2

USB Accelerator
