DeepStream Reference Designs - Customizing the Project - Implementing Custom Inference Parser

From RidgeRun Developer Connection


Previous: Customizing the Project Index Next: Customizing the Project/Implementing Custom Inference Listener




The Inference Parser Module

This element is in charge of parsing the information received by the Inference Listener. The metadata in question may be encoded in a particular format. Some examples of components that could be used for this purpose were mentioned in the High-Level Design section; the key factor to take into account is that the format of the received information depends on the specific application being used.

As with other modules in this system, the design of this component is not restricted to a specific implementation, so different technologies, plugins, or your own components can be added within the project structure. However, certain conditions must be respected before incorporating a custom component. After reading this wiki page, you will know the requirements for adding your custom Inference Parser to this reference design.

Class Diagram

The following figure shows a class diagram representation of the Inference Parser. To have a better understanding of the component, the diagram also shows which modules within the system are related to the Inference Parser.

Inference Parser Class Diagram

Communication between modules

  • Inference Listener: This module is in charge of transmitting the inference data obtained in real time through the DeepStream pipeline. Once a new inference is available, a callback function is triggered to continue the expected processing flow. At this point the Inference Parser is needed, since the information received depends on the context of the application: it can come in specific formats, with parameters specific to each inference model, varying amounts of data, etc. Regardless of these conditions, the custom Inference Parser knows how to interpret said information and transform it into a format understandable by the rest of the system components. For the Inference Listener to use the custom parser, the parser must first be registered through the register_inference_parser method provided by the Inference Listener interface. Later in this wiki page there is a brief demonstration of how to make this connection in code.
  • Inference Info: This class is simply a Data Transfer Object (DTO) that contains the information parsed by the custom Inference Parser. The idea is that the rest of the system modules can share this DTO to transmit the information obtained from the inference process, so that every module can interpret it regardless of the application context. It is the responsibility of the Inference Parser to produce a new Inference Info after parsing the received information.
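The relationship between the interface and the DTO described above can be sketched as follows. Note that the field names on InferenceInfo are illustrative assumptions; the actual DTO in the framework may carry different attributes:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class InferenceInfo:
    """Illustrative DTO; the real framework class may define other fields."""
    label: str = ""
    confidence: float = 0.0
    extra: dict = field(default_factory=dict)

class InferenceParser(ABC):
    """Interface that every custom Inference Parser must honor."""

    @abstractmethod
    def parse(self, inference: str) -> InferenceInfo:
        """Interpret the raw inference string and return a populated DTO."""
```

Any concrete parser subclasses this interface and fills in the DTO, so the rest of the system only ever deals with InferenceInfo objects, never with raw application-specific strings.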


Inference Parser Operations

As shown in the diagram, any custom Inference Parser module must implement the operations defined by the interface named "Inference Parser". As a user, you can extend the design of said component, for example by adding new methods, modifying the class constructor, or providing your own implementation of each operation exposed by the interface. The important thing is that these interface methods are preserved. Below is a brief explanation of the purpose of each operation defined by the Inference Parser interface. Remember that the specific implementation is up to your criteria and needs.

  • parse: This method is responsible for parsing the information received by the Inference Listener, which uses a custom format defined by the application. The received inference is a string; after interpreting it, the method must build an Inference Info object that contains the parsed information and return it to the calling module.
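A minimal sketch of such a parse implementation is shown below. The JSON wire format and the class name are assumptions for illustration only; the real format depends entirely on your application, and a SimpleNamespace stands in for the framework's Inference Info DTO:

```python
import json
from types import SimpleNamespace

class MyCustomInferenceParser:
    """Hypothetical parser; assumes the inference arrives as a JSON string,
    e.g. '{"label": "person", "confidence": 0.87}'. Adapt the decoding
    logic to whatever format your application actually produces."""

    def parse(self, inference: str):
        raw = json.loads(inference)
        # Build the object handed back to the Inference Listener.
        # SimpleNamespace is a stand-in for the framework's InferenceInfo DTO.
        return SimpleNamespace(
            label=raw.get("label", ""),
            confidence=float(raw.get("confidence", 0.0)),
        )
```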

Code Example

Below is a brief code example, using the Python programming language, on how to add a custom Inference Parser to your project. This tutorial assumes that the component code has already been developed and implements the methods established by the interface module.

Before starting with the example, we show the directory structure of the Main Framework and indicate the folder where your custom implementation should be added. The example shows the file structure using the Visual Studio Code editor, although you can use your preferred editor. The custom code is called my_custom_inference_parser.py, and it is highlighted in the following figure:


Directory Structure and Path of the Inference Parser


Initially, we start with a codebase that represents the main module of your application, like the following template named main.py for simplicity:

def main():
    """ 
    *** Here comes your project configuration  *** 
    """

    """
    *** Application start section ***
    """

if __name__ == "__main__":
    main()

Based on the previous code, we proceed to create the necessary instances to attach the custom inference parser entity. Recalling the class diagram of this module, the Inference Parser will be added to the Inference Listener instance, which will be in charge of executing the functionalities provided by the Parser, at the end of the process of receiving the inference information. So, this code sample instantiates the Inference Listener and Inference Parser components, as shown below:

def main():
    """ 
    *** Here comes your project configuration  *** 
    """

    # Instantiating the components required to add the custom Inference Parser
    # (assuming these classes are importable from your project modules, e.g.
    #  from my_custom_inference_parser import CustomInferenceParser)
    my_listener = CustomInferenceListener()
    my_parser = CustomInferenceParser()
    

    """
    *** Application start section ***
    """

if __name__ == "__main__":
    main()

Next, we proceed to use the register methods to save the Inference Parser that will be used at the end of the Listener process. All of that is shown in the following code:

def main():
    """ 
    *** Here comes your project configuration  *** 
    """

    # Instantiating the components required to add the custom Inference Parser
    my_listener = CustomInferenceListener()
    my_parser = CustomInferenceParser()

    # Attaching the Inference Parser module to the Custom Inference Listener
    my_listener.register_inference_parser(my_parser)
    

    """
    *** Application start section ***
    """

if __name__ == "__main__":
    main()


With these simple steps, the Inference Parser is added to the system and will be used by the AIManager, specifically the Inference Listener, to fulfill its purpose. Once again, we stress the importance of respecting the methods defined by the Inference Parser interface, so that the system maintains correct operation regardless of the specific implementation in use.
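To illustrate why registration is enough, the sketch below shows how a listener might delegate to the registered parser when new data arrives. The register_inference_parser name matches the interface method mentioned earlier; everything else (the callback name and internal attribute) is a hypothetical simplification of the real listener:

```python
class CustomInferenceListener:
    """Minimal sketch of the listener side of the registration contract."""

    def __init__(self):
        self._parser = None

    def register_inference_parser(self, parser):
        # Store the parser so the inference callback can delegate to it.
        self._parser = parser

    def on_new_inference(self, raw: str):
        # Hypothetical callback fired when new inference data arrives from
        # the DeepStream pipeline; the registered parser converts the raw
        # string into the DTO shared with the rest of the system.
        if self._parser is None:
            raise RuntimeError("No inference parser registered")
        return self._parser.parse(raw)
```

This is why any object that implements parse can be plugged in: the listener never inspects the raw string itself, it only forwards it to whatever parser was registered.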

Important note: Please note that this brief example only shows the lines of code necessary to incorporate an Inference Parser into the system structure. The rest of the configuration required in the main module is not shown, including the stages where the application is started and stopped. If you want to know how to add other custom components, or what other requirements and configurations are necessary to initialize the application, we invite you to read the other sections of this wiki and, if necessary, contact our support team.


