DeepStream Reference Designs - Customizing the Project - Implementing Custom Media

From RidgeRun Developer Connection
Revision as of 11:42, 7 July 2022 by Felizondo (talk | contribs) (Media Interface Operations)


Previous: Customizing the Project/Implementing Custom Actions Index Next: Customizing the Project/Implementing a Custom Application




The Media Module

This element is responsible for encapsulating and managing the information received through the system's cameras. For example, in the APLVR reference design, the Gstd plugin is used to manage media instances, and the media descriptor indicates the camera protocol used, which in that system is RTSP. Other examples of components that could serve this purpose are mentioned in the High-Level Design section.
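As a sketch of the idea above, a media descriptor for an RTSP camera could be a small structure that records the camera protocol and its connection information. The field names and the helper function below are illustrative assumptions, not the actual API of the reference design:

```python
# Hypothetical media descriptor for one RTSP camera. The keys are
# illustrative; the actual fields depend on the Media implementation
# used (for example, the Gstd-based media in APLVR).
media_descriptor = {
    "name": "camera0",
    "protocol": "rtsp",
    "uri": "rtsp://192.168.0.10:554/stream1",
}


def build_pipeline(descriptor):
    """Builds a GStreamer pipeline description from a media descriptor.

    Only RTSP is handled here, mirroring the protocol used by APLVR.
    """
    if descriptor["protocol"] != "rtsp":
        raise ValueError("unsupported protocol: " + descriptor["protocol"])
    return "rtspsrc location={} ! fakesink".format(descriptor["uri"])
```

A Gstd-based media would hand a description like this to the daemon to create the pipeline; other implementations are free to consume the descriptor differently.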

As with the other modules of this system, the design of this component is not restricted to a specific implementation, so different technologies, plugins, or your own components can be added within the project structure. However, certain conditions must be respected before incorporating a custom component. After reading this wiki page, you will know the requirements for adding your custom Media Module to this reference design.

Class Diagram

The following figure shows a class diagram representation of the Media component. To have a better understanding of the component, the diagram also shows which modules within the system are related to the Media module.

Media Module Class Diagram

Communication Between Modules

  • Camera Capture: This module is in charge of managing the media instances that encapsulate the system's cameras. It uses the operations exposed by the Media interface to start and stop each media instance, obtain its identifying information, and register the callback through which it is notified of media errors. Regardless of the technology used to implement the media, the Camera Capture module only interacts with it through this interface, which is what makes the Media implementation replaceable.
  • Media Factory: This class is in charge of creating the concrete Media instances used by the system. Based on the information contained in the media descriptor, such as the camera protocol and its connection parameters, it builds a Media object that implements the interface described below, so that the rest of the modules can operate on it without knowing the specific implementation.
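Taking the factory component's name at face value, one way it could work is to map the protocol field of a media descriptor to a concrete Media class. Every class and method name below is an illustrative assumption, not the actual API of the reference design:

```python
class GstdRtspMedia:
    """Illustrative media backed by GStreamer Daemon for RTSP cameras.

    The class name and constructor signature are assumptions.
    """

    def __init__(self, name, uri):
        self.name = name
        self.uri = uri


class MediaFactory:
    """Hypothetical factory that builds a concrete Media object from a
    media descriptor, selecting the class by the 'protocol' field."""

    _registry = {"rtsp": GstdRtspMedia}

    @classmethod
    def create(cls, descriptor):
        protocol = descriptor["protocol"]
        if protocol not in cls._registry:
            raise ValueError("no media registered for protocol: " + protocol)
        media_class = cls._registry[protocol]
        return media_class(descriptor["name"], descriptor["uri"])


# Usage: the caller never names the concrete class, only the descriptor.
media = MediaFactory.create({
    "name": "camera0",
    "protocol": "rtsp",
    "uri": "rtsp://192.168.0.10:554/stream1",
})
```

Keeping the protocol-to-class mapping in one place is what lets a custom media implementation be swapped in without touching the modules that consume it.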


Media Interface Operations

As shown in the diagram, any custom Media module must implement the operations defined by the interface named "Media". As a user, you can add more elements to the design of said component: for example, add new methods, modify the class constructor, or provide your own implementation of each of the operations exposed by the interface. The important thing is that these methods are kept. Next is a brief explanation of the purpose of each operation defined by the Media interface. Remember that the specific implementation is up to your criteria and needs.

  • start: This method is responsible for starting the execution of the media instance, for example, by launching the capture pipeline associated with the camera. After this call, the media is expected to be producing data for the rest of the system.
  • stop: This method is responsible for stopping the execution of the media instance and releasing the resources that were acquired when it was started.
  • get_name: This method returns the name that identifies the media instance, which the rest of the system modules use to refer to it.
  • get_triggers: This method returns the triggers associated with the media instance, so that the system can determine which actions apply to the events detected on that media.
  • register_error_callback: This method registers a callback function that the media must execute if an error occurs during its operation, allowing the system to react to failures such as a camera disconnection.


