DeepStream Reference Designs - Customizing the Project - Implementing Custom Inference Listener



Previous: Customizing the Project/Implementing Custom Inference Parser | Index | Next: Customizing the Project/Implementing Custom Policies




The Inference Listener Module

As mentioned in the High-Level Design section, this module is in charge of transmitting the inference metadata produced at the output of the DeepStream pipeline. Depending on the application, that metadata can represent different things; the responsibility of this component, however, is to transmit the information regardless of its content. Likewise, the design of this component is not tied to a specific implementation, so different technologies, plugins, or components of your own can be added within the project scheme, provided that certain conditions are respected before incorporating a custom component. After reading this wiki page, you will know the requirements for adding your custom Inference Listener to this reference design.

Class Diagram

The following figure shows a class diagram of the Inference Listener. To give a better understanding of the component, the diagram also shows which modules within the system interact with the Inference Listener.


Inference Listener Class Diagram


Communication Between Modules

  • AI Manager: This subsystem is responsible for managing the entire inference process of the application. To do so, it interacts with instances of the engine class, which it creates through the following method:
 add_engine(description: MediaDescription, listener: InferenceListener = None) 

This method receives an Inference Listener instance among its parameters, which is attached to the newly created engine. As the method signature shows, the Inference Listener parameter defaults to None: the listener is optional, and an engine can be created without one. This flexibility means the system is not prevented from working if no Inference Listener is added. Still, be warned that without an Inference Listener, the system cannot perform any processing on the data it receives from the DeepStream pipeline. The short snippet after this list illustrates both cases.

  • Engine: This module abstracts the models and configurations used to perform inference through the DeepStream pipeline. After the AI Manager attaches the Inference Listener to the engine, the listener is in charge of listening to the information flow in the pipeline and invoking a callback method the moment an inference is received. That callback has a specific method signature that must be respected.
  • Inference Parser & Action Dispatcher: The Inference Listener uses these components to delegate how a received inference is interpreted for the application at hand, and which actions are executed afterward. Creating the Action Dispatcher and the Inference Parser is not the responsibility of the Inference Listener; it simply receives them and uses them in the callback method once the inference is obtained. If neither component is registered, no processing is performed on the information received.
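
As a quick illustration of the optional listener parameter, the sketch below shows an engine created with and without an Inference Listener. The ai_manager, engine_descriptor, and my_listener names are taken from the example later on this page and are assumed to already exist:

# Both calls are valid because the listener parameter defaults to None.

# Engine with a listener: received inferences are parsed and dispatched.
ai_manager.add_engine(engine_descriptor, my_listener)

# Engine without a listener: the pipeline still runs, but no processing
# is performed on the inference metadata it produces.
ai_manager.add_engine(engine_descriptor)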

Inference Listener Operations

As shown in the diagram, any custom Inference Listener module must implement the operations defined by the interface named "Inference Listener". As a user, you can extend the design of this component, for example by adding new methods, modifying the class constructor, or providing your own implementation of each operation exposed by the interface; what matters is that these methods are preserved. Below is a brief explanation of the purpose of each operation defined by the Inference Listener interface, followed by a skeleton illustrating them. Remember that the specific implementation is up to your own criteria and needs.

  • start: This method performs the necessary configuration of the component and starts the process of listening for the messages produced by the inference performed in the DeepStream pipeline. Although the implementation of the module is up to you, note that listening must be automatic: once the Inference Listener is started, it must receive incoming data without the system having to make manual requests for the information.
  • stop: As its name indicates, this method cleanly finishes the message-listening process before the engine instances are destroyed.
  • register_inference_parser: Saves, within the current Inference Listener instance, the parser the application will use to interpret the information received.
  • register_action_dispatcher: Similar to the previous method, this operation saves, within the current Inference Listener object, the Action Dispatcher module that will carry out further processing of the information received.
  • callback: This operation represents a callback function that is invoked when the module detects that a new inference has been generated. Again, the idea is that this happens automatically, taking advantage of the asynchronous nature of callback functions. Once the inference is received, the callback delegates parsing and the execution of the corresponding actions to the Inference Parser and Action Dispatcher components that were added to this module via the register methods.
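
To make the expected contract concrete, the following is a minimal skeleton of a custom listener. The method names match the operations above, but the callback signature, the parser method parse, and the dispatcher method execute are illustrative assumptions; consult the interface module in the Main Framework for the exact definitions:

# Import of InferenceListener from the Main Framework is omitted here,
# since its module path depends on your project layout.

class CustomInferenceListener(InferenceListener):

    def __init__(self):
        self._parser = None
        self._dispatcher = None

    def start(self):
        # Configure the component and begin listening automatically for
        # inference messages coming from the DeepStream pipeline.
        pass

    def stop(self):
        # Cleanly finish the listening process before the engine
        # instances are destroyed.
        pass

    def register_inference_parser(self, parser):
        self._parser = parser

    def register_action_dispatcher(self, dispatcher):
        self._dispatcher = dispatcher

    def callback(self, inference):
        # Without a registered parser and dispatcher, no processing occurs.
        if self._parser is None or self._dispatcher is None:
            return
        parsed = self._parser.parse(inference)  # hypothetical parser API
        self._dispatcher.execute(parsed)        # hypothetical dispatcher API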

Code Example

Below is a brief code example, written in Python, showing how to add a custom Inference Listener to your project. The example assumes that the component's code has already been developed and implements the methods established by the interface module.

Before starting with the example, we show the directory structure of the Main Framework, indicating the folder where your custom implementation should be added. The figure shows the file structure in the Visual Studio Code editor, although you can use your preferred editor. The custom code is called my_custom_inference_listener.py and is highlighted in the following figure:

Directory Structure and Path of the Inference Listener

Initially, we start from a codebase representing the main module of your application, like the following template, named main.py for simplicity:

def main():
    """ 
    *** Here comes your project configuration  *** 
    """

    """
    *** Application start section ***
    """

if __name__ == "__main__":
    main()

Based on the previous code, we proceed to create the instances needed to build the custom Inference Listener entity, which involves the following modules: MediaFactory, AIManager, Inference Parser, and Action Dispatcher.

def main():
    """ 
    *** Here comes your project configuration  *** 
    """

    # Instantiating the components required to add the custom Inference Listener
    my_factory = CustomMediaFactory()
    ai_manager = AIManager(my_factory)
    action_dispatcher = ActionDispatcher()
    my_parser = CustomInferenceParser()
    

    """
    *** Application start section ***
    """

if __name__ == "__main__":
    main()

Recalling the general design of the system, the AIManager and Action Dispatcher modules are part of the Main Framework, while the MediaFactory and the Inference Listener are custom components; this is reflected in the naming of the instances in the code above. Next, we create the instance of the already developed Inference Listener module and use the registration methods to save the Inference Parser and the Action Dispatcher that will be used at the end of the listening process, as shown in the following code:

def main():
    """ 
    *** Here comes your project configuration  *** 
    """

    # Instantiating the components required to add the custom Inference Listener
    my_factory = CustomMediaFactory()
    ai_manager = AIManager(my_factory)
    action_dispatcher = ActionDispatcher()
    my_parser = CustomInferenceParser()

    # Instantiating the Inference Listener and registering the respective components
    my_listener = CustomInferenceListener()
    my_listener.register_inference_parser(my_parser)
    my_listener.register_action_dispatcher(action_dispatcher)
    

    """
    *** Application start section ***
    """

if __name__ == "__main__":
    main()

Finally, the AIManager instance is used to call the add_engine method, which attaches the inference listener to the engine that will be created. This method receives an engine descriptor as a parameter; creating it is outside the scope of this tutorial, but for completeness of the example it is assumed to already exist and is passed as a parameter.

def main():
    """ 
    *** Here comes your project configuration  *** 
    """

    # Instantiating the components required to add the custom Inference Listener
    my_factory = CustomMediaFactory()
    ai_manager = AIManager(my_factory)
    action_dispatcher = ActionDispatcher()
    my_parser = CustomInferenceParser()

    # Instantiating the Inference Listener and registering the respective components
    my_listener = CustomInferenceListener()
    my_listener.register_inference_parser(my_parser)
    my_listener.register_action_dispatcher(action_dispatcher)

    # Attaching the Inference Listener to an engine by using the AIManager
    ai_manager.add_engine(engine_descriptor, my_listener)    

    """
    *** Application start section ***
    """

if __name__ == "__main__":
    main()

With these simple steps, the inference listener is added to the system and will be used by the AIManager to fulfill its purpose. Once again, we stress the importance of respecting the methods defined by the Inference Listener interface, so that the system keeps working correctly regardless of the specific implementation in use.

Important note: This brief example only shows the lines of code necessary to incorporate an Inference Listener into the system structure. The remaining configuration of the main module is not shown, including the stage where the application is started and stopped. If you want to know how to add other custom components, or what other requirements and configurations are needed to initialize the application, we invite you to read the other sections of this wiki and, if necessary, contact our support team.



Previous: Customizing the Project/Implementing Custom Inference Parser | Index | Next: Customizing the Project/Implementing Custom Policies