RidgeRun CUDA Optimisation Guide/Empirical Experiments/Multi-threaded bounding test

Introduction

This page is a follow-up to the CUDA Memory Benchmark; it adds multithreading to the testing of the different memory management modes. The tests are reduced to traditional, managed, page-locked memory with and without a copy call, and CUDA mapped memory. [PENDING]

Testing Setup

Memory Management Methods

The program tested had the option to use each of the following memory management configurations:

  • Traditional mode, using malloc to reserve the memory on the host, cudaMalloc to reserve it on the device, and cudaMemcpy to move the data between them. Internally, the driver allocates a non-pageable staging buffer, copies the data there, and only then transfers it to the device.
  • Managed, using cudaMallocManaged, which avoids manually copying the data and handling two different pointers.
  • Non-paging memory, using cudaMallocHost to reserve a chunk of page-locked memory that can be used directly by the device, since it is non-pageable.
  • Non-paging memory with discrete copy, using cudaMallocHost plus an explicit call to cudaMemcpy. This is similar to the traditional model, with separate pointers for host and device, but according to the NVIDIA documentation on cudaMallocHost, cudaMemcpy calls are accelerated when using this type of memory.
  • Zero-copy memory, using cudaHostAlloc to reserve memory that is page-locked and directly accessible to the device. Different flags can change the properties of the memory; in this case, the flags used were cudaHostAllocMapped and cudaHostAllocWriteCombined. A hedged allocation sketch for each of these modes follows this list.
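
The following minimal sketch (not taken from the benchmark source) illustrates how a single buffer could be reserved under each of these modes. Each mode is shown in isolation, a real program would pick only one, and the variable names are illustrative.

#include <cuda_runtime.h>
#include <stdlib.h>

/* Hedged sketch: one allocation per memory management mode tested. */
void allocation_examples(size_t bytes)  /* bytes would be sizeof(struct rgba_frame) in the tests */
{
    void *host_ptr = NULL, *dev_ptr = NULL;

    /* Traditional: separate host and device buffers, explicit cudaMemcpy later. */
    host_ptr = malloc(bytes);
    cudaMalloc(&dev_ptr, bytes);

    /* Managed: a single pointer valid on host and device; the driver migrates the pages. */
    cudaMallocManaged(&host_ptr, bytes);

    /* Page-locked (cudaMallocHost): pinned host memory the device can use directly. */
    cudaMallocHost(&host_ptr, bytes);

    /* Page-locked with discrete copy: pinned host buffer plus a device buffer;
       cudaMemcpy between them is accelerated because the host side cannot be paged out. */
    cudaMallocHost(&host_ptr, bytes);
    cudaMalloc(&dev_ptr, bytes);

    /* Zero-copy (mapped): pinned, write-combined host memory mapped into the device
       address space; the kernel is launched with the mapped device pointer. */
    cudaHostAlloc(&host_ptr, bytes, cudaHostAllocMapped | cudaHostAllocWriteCombined);
    cudaHostGetDevicePointer(&dev_ptr, host_ptr, 0);
}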

Platforms

Program Structure

The program is divided into three main sections: one where the input memory is filled with data, the kernel worker threads, and the verification stage, which reads all the results and uses assert to check them. Before every test, 10 iterations of the full process were run as a warm-up to avoid any initialization time penalty. After that, the average of 100 runs was taken. Each of the sections can be seen in Figure 1.

Figure 1. Measurement points on the code

Each kernel block can be seen in Figure 2. Each block has a semaphore used to signal that data is available on its input, and another that is raised at the end, once the data has been processed. The semaphores are shared between blocks in a chained manner: the output semaphore of one block is the input semaphore of the next, and so on. The kernel block also checks whether the memory mode in use is "managed-like", meaning that a single pointer serves both device and host memory and the driver handles the transfers behind the scenes. If so, it skips the copies from host to device and back; otherwise, it calls cudaMemcpy accordingly. Lastly, the kernel itself is called once for the IO-bound case and 50 times for the processing-bound case.

Figure 2. Composition of a kernel block

Used Data

The aim of the tests is to emulate a 4k RGBA frame so the results can be representative of the results on a real world media-handling software. To represent this data the following structure was used:

struct rgba_frame{
    float r[SIZE_W*SIZE_H];
    float g[SIZE_W*SIZE_H];
    float b[SIZE_W*SIZE_H];
    float a[SIZE_W*SIZE_H];
};

The macros are SIZE_W=3840 and SIZE_H=2160, the dimensions of a 4K frame.
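
For reference, each such frame holds 3840 × 2160 = 8,294,400 values per channel; with four float channels at 4 bytes per value, a single rgba_frame occupies 132,710,400 bytes, roughly 126.6 MiB, so every explicit host-to-device copy moves a substantial amount of data per iteration.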

Code

The kernel that was tested is:

Normalizer
int blockSize = 256;
int numBlocks = ((SIZE_W*SIZE_H) + blockSize - 1) / blockSize;

__global__
void normalize(int n, float *x, float *y)
{
  // Grid-stride loop: each thread normalizes every stride-th element.
  int index = blockIdx.x * blockDim.x + threadIdx.x;
  int stride = blockDim.x * gridDim.x;
  for (int i = index; i < n; i += stride)
    y[i] = (x[i]*((MAX-MIN)/ABS_MAX))+MIN;
}

void *exec_kernel_cpy(void * arg){
    struct sync_struct *args_k = (struct sync_struct *)arg;
    rgba_frame *d_frame_in = args_k->d_frame_in;
    rgba_frame *d_frame_out = args_k->d_frame_out;
    rgba_frame *frame_in = args_k->frame_in;
    rgba_frame *frame_out = args_k->frame_out;
    for (int i = 0; i < LOOP_CYCLES; i ++){
        sem_wait(args_k->lock_in);   /* wait until the previous stage has produced data */
        if (i > WARM_UP_RUNS){
            start_time(args_k);
        }
        /* Copy host to device only when the mode keeps separate pointers and is not
           mode 4 (the CUDA mapped mode), which needs no explicit copy. */
        if (d_frame_in != frame_in && mode != 4){
            cudaMemcpy(d_frame_in, frame_in, sizeof(struct rgba_frame), cudaMemcpyHostToDevice);
        }

        /* KERNEL_CYCLES = 1 for the IO-bound case, 50 for the processing-bound case. */
        for (int j = 0; j < KERNEL_CYCLES; j ++){
            normalize<<<numBlocks, blockSize>>>(SIZE_W*SIZE_H, d_frame_in->r, d_frame_out->r);
            normalize<<<numBlocks, blockSize>>>(SIZE_W*SIZE_H, d_frame_in->g, d_frame_out->g);
            normalize<<<numBlocks, blockSize>>>(SIZE_W*SIZE_H, d_frame_in->b, d_frame_out->b);
            normalize<<<numBlocks, blockSize>>>(SIZE_W*SIZE_H, d_frame_in->a, d_frame_out->a);
            cudaDeviceSynchronize();
        }
        /* Copy the result back only for the modes that require an explicit transfer. */
        if (frame_out != d_frame_out && mode != 4){
            cudaMemcpy(frame_out, d_frame_out, sizeof(struct rgba_frame), cudaMemcpyDeviceToHost);
        }
        if (i > WARM_UP_RUNS){
            stop_time(args_k);
        }
        sem_post(args_k->lock_out);  /* signal the next stage that data is ready */
    }
    return NULL;
}

This is the function bound to the worker threads. It contains two main loops: the outer one runs the number of iterations used to compute the average, and the inner one is controlled by the macro KERNEL_CYCLES. Its value was switched between 1 and 50 to produce an IO-bound case and a processing-bound case, respectively; these appear in the figures with the labels normalizer 1x and normalizer 50x. A hedged sketch of the configuration macros is shown below.
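
The full benchmark source is not reproduced on this page, so the following macro values are only a plausible reconstruction: SIZE_W and SIZE_H come from the Used Data section, the warm-up and run counts from the Program Structure section, and the normalization constants (MIN, MAX, ABS_MAX) are purely illustrative.

#define SIZE_W        3840    /* 4K frame width                                    */
#define SIZE_H        2160    /* 4K frame height                                   */
#define WARM_UP_RUNS  10      /* iterations discarded before timing starts         */
#define LOOP_CYCLES   110     /* warm-up iterations plus the 100 measured runs
                                 (the exact value in the source may differ)        */
#define KERNEL_CYCLES 1       /* 1 = IO-bound case, 50 = processing-bound case     */
#define MIN           0.0f    /* illustrative normalization range                  */
#define MAX           1.0f
#define ABS_MAX       255.0f  /* illustrative maximum absolute input value         */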

Apart from this, the code has two more sections: an initial section and an end section. The initial section takes the array and fills it with 1s; it also contains the loop responsible for the averaging.

void *fill_array(void * arg){
    struct sync_struct *args_fill = (struct sync_struct *)arg;
    for (int i = 0; i < LOOP_CYCLES; i ++){
        sem_wait(args_fill->lock_in);
        if (i > WARM_UP_RUNS){
            start_time(args_fill);
        }
        for (int j = 0; j < SIZE_W*SIZE_H; j++) {
            args_fill->frame_out->r[j] = 1.0f;
        }
        for (int j = 0; j < SIZE_W*SIZE_H; j++) {
            args_fill->frame_out->g[j] = 1.0f;
        }
        for (int j = 0; j < SIZE_W*SIZE_H; j++) {
            args_fill->frame_out->b[j] = 1.0f;
        }
        for (int j = 0; j < SIZE_W*SIZE_H; j++) {
            args_fill->frame_out->a[j] = 1.0f;
        }
        if (i > WARM_UP_RUNS){
            stop_time(args_fill);
        }
        sem_post(args_fill->lock_out);
    }
    return NULL;
}

The end section is where the output is read and, at the same time, the results are checked to verify that the process behaves as expected.

void *verify_results(void * arg){
    struct sync_struct *args_verf = (struct sync_struct *)arg;
    float ref = 1.0f;
    for (int i = 0; i < STAGES-1; i++){
        ref = (ref*((MAX-MIN)/ABS_MAX))+MIN;
    }
    for (int i = 0; i < LOOP_CYCLES; i ++){
        sem_wait(args_verf->lock_in);
        if (i > WARM_UP_RUNS){
            start_time(args_verf);
        }
        for (int j = 0; j < SIZE_W*SIZE_H; j++) {
            assert(args_verf->frame_in->r[j] == ref);
        }
        for (int j = 0; j < SIZE_W*SIZE_H; j++) {
            assert(args_verf->frame_in->g[j] == ref);
        }
        for (int j = 0; j < SIZE_W*SIZE_H; j++) {
            assert(args_verf->frame_in->b[j] == ref);
        }
        for (int j = 0; j < SIZE_W*SIZE_H; j++) {
            assert(args_verf->frame_in->a[j] == ref);
        }
        if (i > WARM_UP_RUNS){
            stop_time(args_verf);
        }
        sem_post(args_verf->lock_out);
    }
    return NULL;
}

Each section was measured with its own timer, and the results were added to obtain the total times. As can be seen from the code excerpts, each worker has an associated sync_struct, which most notably holds that worker's semaphores and measured times, among other necessary values.
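
The structure itself is not shown on this page; the following is a minimal sketch of what it could look like, assuming POSIX semaphores and clock_gettime-based timing. The field names used in the excerpts above are kept, and everything else (the timing fields and helpers) is an assumption.

#include <semaphore.h>
#include <time.h>

/* Illustrative sketch, not the benchmark source. One instance per stage; the
   lock_out of one stage is handed to the next stage as its lock_in, which
   produces the chained hand-off described above. */
struct sync_struct {
    sem_t *lock_in;                  /* posted by the previous stage when input is ready */
    sem_t *lock_out;                 /* posted by this stage when its output is ready    */
    struct rgba_frame *frame_in;     /* host-visible input buffer                        */
    struct rgba_frame *frame_out;    /* host-visible output buffer                       */
    struct rgba_frame *d_frame_in;   /* device input (aliases frame_in in managed-like modes) */
    struct rgba_frame *d_frame_out;  /* device output buffer                             */
    struct timespec t_start;         /* timestamp taken by start_time()                  */
    double total_ms;                 /* accumulated time, later divided by the run count */
};

/* Assumed timing helpers wrapping each measured iteration. */
static void start_time(struct sync_struct *s)
{
    clock_gettime(CLOCK_MONOTONIC, &s->t_start);
}

static void stop_time(struct sync_struct *s)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    s->total_ms += (now.tv_sec  - s->t_start.tv_sec)  * 1e3 +
                   (now.tv_nsec - s->t_start.tv_nsec) / 1e6;
}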

Results

Discrete GPU

Kernel Execution Time

In Figure 3, the fastest mode in the IO-bound case is CUDA mapped, at 11.85 ms on average, followed by pinned memory without copy and managed memory. In the processing-bound case, however, CUDA mapped and pinned memory without copy both suffer an increase of around 48 times, and the fastest mode becomes managed; Table 1 sheds some light on the reason.


Figure 3. Kernel times for discrete GPU


Table 1. Kernel worker execution times for each case

Case              Worker   Time avg (ms)
IO-Bound          W1           30.466015
                  W2            0.761422
                  W3            0.757867
                  W4            0.757999
                  W5           29.853945
Processing-Bound  W1           64.937218
                  W2           36.428116
                  W3           36.336861
                  W4           29.853945
                  W5           64.063713


It can be seen that in both scenarios the CUDA runtime identifies the chain of buffers and speeds up the data transfers, which results in a considerable time reduction on the inner worker threads: around 40 times in the IO-bound case and 2 times in the processing-bound case. However, there is also a time penalty on the initial and end threads, since they have to fetch the data from host memory. No other memory mode showed this behavior.

Full Execution Times

In Table 2, it can be seen that the best overall is hostMalloc, although the difference is within the variance of the traditional mode. The worst overall is CUDA mapped, with a verify time 58 times higher than the best.

Table 2. Fill and verify times for the discrete GPU

Memory mode        Fill time avg (ms)   Verify time avg (ms)
Traditional             63.391980            51.470143
Managed                 62.733280            71.642704
HostMalloc              61.780409            51.822929
HostMalloc W Cpy        64.2764265           51.49177
Pinned                  72.431336          3001.322266


When we combine the three times to get the total execution time, as shown in Figure 4, we see that in the case of the discrete GPU the best performer for the IO-bound case is HostMalloc without a discrete copy, while for the processing-bound case the best is managed memory, since it has the edge on worker-to-worker transfers.


Figure 4. Total execution time for discrete GPU


In general, it seems that in IO-bound cases there can be a benefit to using memory reserved with hostMalloc without doing the manual copy, but in a processing-bound scenario the discrete copy call is needed. Overall, managed memory is slower, and the slowest is pinned (zero-copy) memory.

Jetson Nano (pending)

Kernel Execution Time

For the kernel times, in Figure 5, there is a difference between the processing-bound and IO-bound cases: in the former, the best performer is memory reserved with hostMalloc with a discrete copy, while in the IO-bound case, managed memory performs notably better than the rest.

Figure 5. Kernel times for Jetson Nano


In the Jetson Nano, we have a different behavior than a discrete GPU, which is expected since the memory topology is different.

Full Execution Times

On the Jetson Nano, we can see that the overall best, Table 3, is the traditional model. It can also be seen that there is a time increase when using both the mallocHost and pinned modes; this differs from the discrete GPU, where only the pinned memory performed poorly.

Table 3. Fill and verify times for Jetson Nano

Memory mode                          Fill time avg (ms)   Verify time avg (ms)
Traditional                              355.4375             181.965027
Managed                                  399.8251645          231.341667
Managed & prefetch                       400.890045           231.1730195
Managed & advice as GPU                  400.677246           231.1446
Managed & advice as CPU                  399.785202           230.9549945
Managed & prefetch & advice as GPU       399.7821045          230.913635
Managed & prefetch & advice as CPU       399.8194735          232.494896
HostMalloc                               355.0729065         1326.459168
HostMalloc W Cpy                         354.795273          1328.617737
Pinned                                   354.804642          1327.90094

Figure 6 shows that the Jetson Nano follows a different trend, where managed memory actually performs well. The pinned (zero-copy) memory behaves the same as on the discrete GPU, but hostMalloc performs slowly compared to the discrete GPU results.

Figure 6. Total execution time for Jetson Nano


Overall, managed memory seems to perform better on the Jetson Nano than on the discrete GPU. In this case, it does not make sense to use pinned (zero-copy) memory or memory reserved with hostMalloc. In the IO-bound case, managed memory can perform better than traditional, but in a processing-bound program, traditional memory performs best.

Jetson AGX Orin (pending)

Kernel Execution Time

In kernel execution times, Figure 7, there is a clear time reduction when using hostMalloc, which performs better than traditional memory management. With managed memory there is a bit more gain to be had, at around 2 ms less than either of them.


Figure 7. Kernel times for Jetson AGX Orin


For the Jetson AGX Orin, the results look more like the discrete GPU results, but the main difference is that memory reserved with hostMalloc always achieves better results.

Full Execution Times

As for the Jetson AGX Orin, Table 4, the results from the fill and verify operations show a trend similar to the discrete GPU, where managed memory is slower than the rest (most notably in the fill stage). As for pinned memory, the time increase is not as large as on the discrete GPU, at around 3 times more, but it still performs the worst.

Table 4. Fill and verify times for Jetson AGX Orin

Memory mode                          Fill time avg (ms)   Verify time avg (ms)
Traditional                               96.3693465           93.119007
Managed                                  141.320404            90.6683695
Managed & prefetch                       140.552361            90.991768
Managed & advice as GPU                  141.023903            91.376148
Managed & advice as CPU                  141.235008            91.276241
Managed & prefetch & advice as GPU       141.092499            91.323822
Managed & prefetch & advice as CPU       140.9354705           91.141693
HostMalloc                                96.887695            99.5928345
HostMalloc W Cpy                          96.2297175           98.4038125
Pinned                                    96.558895           792.3588565

Looking at the full execution times, Figure 8, the behavior differs from the Jetson Nano but is similar to the discrete GPU: managed memory performs notably slower overall, and hostMalloc performs better.

Figure 8. Total execution time for Jetson AGX Orin


In the case of the Jetson AGX Orin, unlike the discrete GPU, there is one memory management mode that performs best regardless of whether the scenario is IO-bound or processing-bound: memory reserved with hostMalloc, without the need to handle discrete transfers.

Resource Usage Jetson

On both Jetson targets, tegrastats was used to monitor resource utilization, mainly CPU and GPU usage and used memory. Upon inspection, there is virtually no difference from run to run: the different memory management tests used the same amount of memory. As for general system usage, there is also nothing worthy of attention.

Conclusions

There is no single memory management mode that performs best in all cases and on all devices; depending on the use case and the device, one can perform better than another. If you are looking for consistency and control, the traditional memory model is the way to go, but if you need the best execution times, the following points might help:

  • On a discrete GPU, use the hostMalloc memory model, but remember to use manual transfers in a processing-bound case.
  • On the Jetson Nano, in an IO-bound scenario, use managed memory; otherwise, use the traditional memory model.
  • On the Jetson AGX Orin there is a one-size-fits-all option: hostMalloc performs best regardless of the scenario, with the bonus of not having to handle dual pointers for device and host memory.


RidgeRun Resources

Quick Start Client Engagement Process RidgeRun Blog Homepage
Technical and Sales Support RidgeRun Online Store RidgeRun Videos Contact Us

OOjs UI icon message-progressive.svg Contact Us

Visit our Main Website for the RidgeRun Products and Online Store. RidgeRun Engineering informations are available in RidgeRun Professional Services, RidgeRun Subscription Model and Client Engagement Process wiki pages. Please email to support@ridgerun.com for technical questions and contactus@ridgerun.com for other queries. Contact details for sponsoring the RidgeRun GStreamer projects are available in Sponsor Projects page. Ridgerun-logo.svg
RR Contact Us.png