Architecture of GPU Accelerated Motion Detector

The GPU Accelerated Motion Detector library is built around the Abstract Factory design pattern: client code never instantiates concrete objects directly, which decouples the classes and reduces their interdependence. The library consists of different stages, each performing a specific part of the processing. A general summary of the library architecture is shown in the following image:

Figure 1. GPU Accelerated Motion Detector Architecture
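As a rough illustration of the pattern just described (using hypothetical names, not the library's actual API), the sketch below shows how a factory lets a caller request a detector by backend without ever naming a concrete class:

```cpp
#include <memory>
#include <stdexcept>
#include <string>

// Callers program against this abstract interface only.
class IMotionDetector {
 public:
  virtual ~IMotionDetector() = default;
  virtual void Detect() = 0;
};

class CpuMotionDetector : public IMotionDetector {
 public:
  void Detect() override { /* CPU implementation */ }
};

class GpuMotionDetector : public IMotionDetector {
 public:
  void Detect() override { /* CUDA implementation */ }
};

// The factory is the only place that knows the concrete classes, so new
// backends can be added without touching any caller code.
class MotionDetectorFactory {
 public:
  static std::unique_ptr<IMotionDetector> Create(const std::string &backend) {
    if (backend == "cpu") return std::make_unique<CpuMotionDetector>();
    if (backend == "gpu") return std::make_unique<GpuMotionDetector>();
    throw std::invalid_argument("unknown backend: " + backend);
  }
};
```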


As shown in Figure 1, the library can be divided into 4 sub-modules, which are:

  • Params (Blue).
  • Frame (Red).
  • Serializer (Green).
  • Algorithms (Orange).

Params

The Params class abstracts the parameters consumed by the different algorithms; these are the parameters required to perform motion detection. The class allows you to add, remove, and obtain ROI (Region of Interest) objects. In addition, AlgorithmParams inherits from the Params class, which allows different implementations to be created, each specific to one algorithm.
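As a minimal sketch of that design (the ROI fields and method names below are assumptions for illustration, not the library's exact signatures), a Params-style base class could manage ROIs while algorithm-specific parameter sets derive from it:

```cpp
#include <string>
#include <vector>

// Hypothetical ROI type: a named rectangle in pixel coordinates.
struct ROI {
  std::string name;
  int x;
  int y;
  int width;
  int height;
};

// Sketch of a Params-style base class: it owns the list of ROIs and lets
// callers add, remove, and query them.
class Params {
 public:
  virtual ~Params() = default;

  void AddROI(const ROI &roi) { rois_.push_back(roi); }

  void RemoveROI(const std::string &name) {
    for (auto it = rois_.begin(); it != rois_.end(); ++it) {
      if (it->name == name) {
        rois_.erase(it);
        return;
      }
    }
  }

  const std::vector<ROI> &GetROIs() const { return rois_; }

 private:
  std::vector<ROI> rois_;
};

// An algorithm-specific parameter set extends the base with its own fields,
// mirroring how AlgorithmParams derives from Params.
class MotionDetectionParams : public Params {
 public:
  double sensitivity = 0.5;  // hypothetical algorithm-specific knob
};
```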

Frame

This class is a wrapper around different kinds of data such as GstVideoFrame, cv::Mat, GstCudaData, etc. Its purpose is to pass input/output data to the algorithms. Similar to the Params class, it serves as a base for implementing different types of input and output frames. The class exposes frame information such as the data pointer, format, stride, and resolution.
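A minimal sketch of such a wrapper follows, assuming an OpenCV-backed implementation; the accessor names are illustrative, not the library's exact API:

```cpp
#include <cstddef>
#include <string>
#include <utility>

#include <opencv2/core.hpp>

// Sketch of a Frame-style abstraction: concrete subclasses wrap a specific
// backing store but expose the same accessors to the algorithms.
class Frame {
 public:
  virtual ~Frame() = default;
  virtual void *GetData() = 0;                // raw pixel pointer
  virtual std::string GetFormat() const = 0;  // e.g. "GRAY8"
  virtual size_t GetStride() const = 0;       // bytes per row
  virtual int GetWidth() const = 0;
  virtual int GetHeight() const = 0;
};

// Hypothetical OpenCV-backed implementation; a GstVideoFrame or CUDA-backed
// variant would follow the same shape.
class CvMatFrame : public Frame {
 public:
  explicit CvMatFrame(cv::Mat mat) : mat_(std::move(mat)) {}

  void *GetData() override { return mat_.data; }
  std::string GetFormat() const override { return "GRAY8"; }  // simplified
  size_t GetStride() const override { return mat_.step; }
  int GetWidth() const override { return mat_.cols; }
  int GetHeight() const override { return mat_.rows; }

 private:
  cv::Mat mat_;
};
```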

Serializer

The purpose of this class is to serialize the detected motion objects into an arbitrary format. JsonSerializer is an implementation of the serializer that converts the array of detected motions, or blobs (explained in the Overview section), into a JSON string. More implementations can be added, but at the moment JSON strings are used for simplicity.
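The sketch below shows what such a serializer could look like, assuming a simple bounding-box blob type; the names and the JSON layout are assumptions for illustration, not the library's actual output format:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical detected-motion record: a bounding box for one blob.
struct Blob {
  int x;
  int y;
  int width;
  int height;
};

// Sketch of the Serializer interface and a JSON implementation of it.
class Serializer {
 public:
  virtual ~Serializer() = default;
  virtual std::string Serialize(const std::vector<Blob> &blobs) const = 0;
};

class JsonSerializer : public Serializer {
 public:
  std::string Serialize(const std::vector<Blob> &blobs) const override {
    std::ostringstream out;
    out << "{\"motions\":[";
    for (size_t i = 0; i < blobs.size(); ++i) {
      const Blob &b = blobs[i];
      if (i > 0) out << ",";
      out << "{\"x\":" << b.x << ",\"y\":" << b.y
          << ",\"width\":" << b.width << ",\"height\":" << b.height << "}";
    }
    out << "]}";
    return out.str();
  }
};
```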

Algorithms

This module is in charge of creating the objects that perform motion detection. It is divided into 3 interfaces, each responsible for one stage of the processing. Using these 3 in the correct order produces the final result: first the motion detection stage, then the denoise stage, and finally the blob detection stage. All of them are created from the same factory class, which abstracts the algorithm implementations; in this way, different implementations of the algorithms can be added without modifying the interfaces or the factory. The algorithms in this module receive 2 Frame objects (input and output) and a Params object as arguments. The Params object contains the parameters each algorithm needs. The library only analyzes objects inside the ROI, which is specified through the Params object; within that ROI, processing proceeds in the order described in the previous Overview section. A minimal sketch of this flow follows.
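The sketch uses hypothetical interface and factory names (the library's real signatures may differ), with stub stages standing in for the real implementations:

```cpp
#include <memory>

class Frame;   // wrapper from the Frame section
class Params;  // parameter container from the Params section

// Hypothetical stage interface: each processing stage reads an input frame,
// writes an output frame, and consults its parameters.
class IStage {
 public:
  virtual ~IStage() = default;
  virtual void Apply(Frame &in, Frame &out, const Params &params) = 0;
};

// Stub stage so the sketch compiles; a real stage would process pixels here.
class NoOpStage : public IStage {
 public:
  void Apply(Frame &, Frame &, const Params &) override {}
};

// Hypothetical factory surface: one creation point per stage keeps the
// concrete implementations hidden behind the IStage interface.
class AlgorithmFactory {
 public:
  static std::unique_ptr<IStage> CreateMotionDetection() {
    return std::make_unique<NoOpStage>();
  }
  static std::unique_ptr<IStage> CreateDenoise() {
    return std::make_unique<NoOpStage>();
  }
  static std::unique_ptr<IStage> CreateBlobDetection() {
    return std::make_unique<NoOpStage>();
  }
};

// Running the three stages in the documented order yields the final result.
void RunPipeline(Frame &input, Frame &scratch, const Params &params) {
  auto motion = AlgorithmFactory::CreateMotionDetection();
  auto denoise = AlgorithmFactory::CreateDenoise();
  auto blobs = AlgorithmFactory::CreateBlobDetection();

  motion->Apply(input, scratch, params);     // 1. motion detection
  denoise->Apply(scratch, scratch, params);  // 2. denoise
  blobs->Apply(scratch, scratch, params);    // 3. blob detection
}
```

Because the calling code only touches IStage and the factory, swapping a CPU stage for a CUDA one requires no changes here.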

