GstCUDA - GstCUDA Framework

{{GstCUDA/Head|previous=Project Structure|next=libGstCUDA API|metakeywords=GstCUDA Framework, GstCUDA base class, GstCUDA APIs, libGstCUDA, GstCUDA base class documentation, GstCUDA API documentation, GstCUDA API}}
 
 
 
This page offers a description of the GstCUDA framework.
 
  
 
__TOC__
 
  
 
==Description==
 
GstCUDA offers a framework that allows users to easily develop custom GStreamer elements that execute any CUDA algorithm. The GstCUDA framework is a series of base classes abstracting the complexity of both CUDA and GStreamer. With GstCUDA, developers avoid writing elements from scratch and can focus on the CUDA algorithm logic, thus accelerating time to market.
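
As a rough illustration (not taken from GstCUDA itself), the algorithm logic a developer concentrates on is plain CUDA code such as the hypothetical per-pixel invert below; the framework's job is to deliver the frame data to code like this and return the result to the pipeline.

<pre>
/* Hypothetical example of the CUDA algorithm logic a developer focuses on:
 * a per-pixel invert of an 8-bit grayscale frame. Nothing here is GstCUDA
 * API; it is plain CUDA C. */
__global__ void
invert_kernel (unsigned char *data, int width, int height)
{
  int x = blockIdx.x * blockDim.x + threadIdx.x;
  int y = blockIdx.y * blockDim.y + threadIdx.y;

  if (x < width && y < height) {
    int idx = y * width + x;
    data[idx] = 255 - data[idx];
  }
}

/* Host-side launch over a frame that is already resident in GPU memory */
static void
invert_frame (unsigned char *d_frame, int width, int height)
{
  dim3 block (16, 16);
  dim3 grid ((width + block.x - 1) / block.x,
             (height + block.y - 1) / block.y);

  invert_kernel<<<grid, block>>> (d_frame, width, height);
  cudaDeviceSynchronize ();
}
</pre>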
  
The base classes are built on top of the libGstCUDA API. This library exposes special structures and functions that abstract the complexity of handling NVMM memory buffers: extracting the data buffer to be processed, passing it to the GPU, and returning the processed data from the GPU to the GStreamer element. The methods implemented in libGstCUDA also ensure optimal performance, thanks to the direct handling of NVMM memory buffers and a zero-memory-copy interface between GStreamer and CUDA.
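
For context only, the following sketch (standard CUDA calls, not GstCUDA code) shows the per-buffer staging copies that a naive GStreamer/CUDA integration working on system-memory buffers would need; the direct NVMM handling in libGstCUDA is what removes exactly these copies from the streaming path.

<pre>
/* Sketch of the per-buffer staging a naive integration would need,
 * and that the zero-memory-copy NVMM path avoids. Standard CUDA only. */
#include <cuda_runtime.h>

static int
process_frame_with_copies (const unsigned char *host_in,
                           unsigned char *host_out, size_t size)
{
  unsigned char *d_buf = NULL;

  if (cudaMalloc ((void **) &d_buf, size) != cudaSuccess)
    return -1;

  /* Host -> device copy before processing */
  cudaMemcpy (d_buf, host_in, size, cudaMemcpyHostToDevice);

  /* ... launch the CUDA algorithm on d_buf here ... */

  /* Device -> host copy after processing */
  cudaMemcpy (host_out, d_buf, size, cudaMemcpyDeviceToHost);

  cudaFree (d_buf);
  return 0;
}
</pre>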
  
 
There is a specific base class for each filter element topology, according to the number of sink and source pads required. These base classes are intended to help users develop custom GStreamer/CUDA elements by providing structures and functions that abstract complexity, avoid replicated code, and simplify and speed up the development process. The provided base classes are listed below; a structural sketch of a derived element follows the list.
 
 
* '''''GstCUDABaseFilter:''''' Base class for single input/single output filter topology elements.
 
 
* '''''GstCUDABaseMISO:''''' Base class for multiple input/single output filter topology elements.
 
* '''''GstCUDABaseSIMO:''''' Base class for single input/multiple output filter topology elements.
 
* '''''GstCUDABaseMIMO:''''' Base class for multiple input/multiple output filter topology elements.
 
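
The sketch below shows, purely structurally, how a custom element could derive from GstCUDABaseFilter. The header name, the GST_TYPE_CUDA_BASE_FILTER macro and the process callback are hypothetical placeholders used only to convey the idea; the real interface of each base class is documented in the libGstCUDA API pages.

<pre>
/* Structural sketch only: the header name, parent type macro and the
 * "process" virtual method are hypothetical placeholders, not the
 * documented GstCUDA interface (see the libGstCUDA API page). */
#include <gst/gst.h>
#include "gstcudabasefilter.h"        /* hypothetical header */

typedef struct _GstCudaInvert
{
  GstCUDABaseFilter parent;           /* single input / single output */
} GstCudaInvert;

typedef struct _GstCudaInvertClass
{
  GstCUDABaseFilterClass parent_class;
} GstCudaInvertClass;

G_DEFINE_TYPE (GstCudaInvert, gst_cuda_invert, GST_TYPE_CUDA_BASE_FILTER);

/* Hypothetical hook: the base class hands the element GPU-resident
 * frame data, so the element only runs its CUDA algorithm on it. */
static gboolean
gst_cuda_invert_process (GstCUDABaseFilter * filter,
    void * in_gpu_data, void * out_gpu_data, guint size)
{
  /* launch the CUDA kernel on the GPU buffers here */
  return TRUE;
}

static void
gst_cuda_invert_class_init (GstCudaInvertClass * klass)
{
  GstCUDABaseFilterClass *base_class = (GstCUDABaseFilterClass *) klass;

  /* Assumption: the base class exposes a process callback to override */
  base_class->process = gst_cuda_invert_process;
}

static void
gst_cuda_invert_init (GstCudaInvert * self)
{
}
</pre>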
  
To develop a custom GStreamer/CUDA element, the developer should first review the provided base classes to determine which one matches the GStreamer element and CUDA algorithm requirements, and then use the structures and functions provided by the chosen base class to implement the custom element. Using the base classes removes the drudgery and common mistakes often made when working in a complex framework like GStreamer.
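
Once built and registered, such an element is used like any other GStreamer element. The minimal host program below assumes a hypothetical element named cudainvert, standing in for whatever custom GstCUDA-based element was developed; only standard GStreamer calls are used.

<pre>
/* Minimal usage sketch: "cudainvert" is a hypothetical element name. */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline;
  GstBus *bus;
  GstMessage *msg;

  gst_init (&argc, &argv);

  pipeline = gst_parse_launch (
      "videotestsrc num-buffers=300 ! cudainvert ! autovideosink", NULL);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Run until error or end-of-stream */
  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

  if (msg != NULL)
    gst_message_unref (msg);
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);

  return 0;
}
</pre>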
  
==GstCUDA Framework Index==
  
 
The following index gives a detailed description of the libGstCUDA API and each of the base classes provided in the GstCUDA framework.
 
  
 
<html>
 
 
</html>
 
  
{{GstCUDA/Foot|previous=Project Structure|next=libGstCUDA API}}
