GstInference Signals
Make sure you also check GstInference's companion project: R2Inference.
Overview
Metadata generated by GstInference can be obtained through GSignals, so it can be consumed by other programs or processes, for example from Python or C++.
Available Signals
Signals are created and listed in `gst-libs/gst/r2inference/gstvideoinference.c`.
Signal Name | To be used by | Details
---|---|---
new-inference | C/C++ | The signal data can be cast to the struct that defines the metadata, and its pointers dereferenced directly.
new-inference-string | Python, JavaScript, others | The string format must be used because dereferencing pointers from a different memory space is not possible.
Inference String Signal
The following code shows a simple capture of the signal in GStreamer using Python. It installs a handler function for the "new-inference-string" signal of the GstInference element. The signal sends a string formatted as JSON, which can be parsed in Python using the json.loads function.
For details about what elements can be accessed in the serialized json string, check this section.
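As a quick, self-contained illustration of what the handler does with the serialized string, the sketch below parses a hypothetical inference string with json.loads. The sample string and its keys ("predictions", "bbox", "classes", "Label", "Probability") are assumptions made for this example; the exact schema depends on the GstInference version and architecture in use, so check the serialization section referenced above for the real layout.

```python
import json

# Hypothetical serialized inference string; the actual keys depend on
# the GstInference version and the architecture being used.
sample = '''
{
  "predictions": [
    {
      "id": 1,
      "bbox": {"x": 120, "y": 80, "width": 64, "height": 48},
      "classes": [{"Class": 17, "Label": "dog", "Probability": 0.91}]
    }
  ]
}
'''

# Parse the string into a Python dictionary, exactly as the signal
# handler in the full example below does with its "meta" argument.
data = json.loads(sample)

# Walk the parsed structure and print each detection.
for pred in data["predictions"]:
    bbox = pred["bbox"]
    for cls in pred["classes"]:
        print("label=%s prob=%.2f at (%d,%d)" %
              (cls["Label"], cls["Probability"], bbox["x"], bbox["y"]))
```

Once the string is parsed, the result is an ordinary Python dictionary, so predictions can be filtered, logged, or forwarded to another process without any GStreamer-specific code.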
```python
import json
import sys

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstVideo", "1.0")
from gi.repository import GObject, Gst, GstVideo

GObject.threads_init()
Gst.init(None)

def newPrediction(element, meta):
    # Parse data from string to JSON object
    data = json.loads(meta)
    print(data)

# Settings
video_dev = "/dev/video0"
arch = "mobilenetv2ssd"
backend = "coral"
model = "/home/coral/models/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite"
input_layer = ""   # Needed by other backends such as TensorFlow
output_layer = ""  # Needed by other backends such as TensorFlow

# Pipeline description
inf_pipe_str = "v4l2src device=%s ! videoscale ! videoconvert ! \
    video/x-raw,width=640,height=480,format=I420 ! \
    videoconvert ! inferencebin arch=%s backend=%s \
    model-location=%s input-layer=%s output-layer=%s \
    overlay=true name=net ! \
    videoconvert ! autovideosink name=videosink sync=false" % \
    (video_dev, arch, backend, model, input_layer, output_layer)

# Load the pipeline from its string description
inference_pipe = Gst.parse_launch(inf_pipe_str)
if not inference_pipe:
    print("Unable to create pipeline")
    sys.exit(1)

# Search for the arch element inside inferencebin
net = inference_pipe.get_by_name("arch")

# Connect to the inference string signal
net.connect("new-inference-string", newPrediction)

# Start the pipeline
inference_pipe.set_state(Gst.State.PLAYING)

# Launch the main loop
loop = GObject.MainLoop()
loop.run()
```