Reducing audio/video streaming latency

From RidgeRun Developer Connection

Background

A doctor is performing microscopic surgery. An in-body camera captures the video and an LCD displays the surgical site. The doctor operates by looking at the LCD display. Even a latency of 50 ms will affect the doctor's ability to operate. For this RidgeRun customer, the solution was to feed the video through an FPGA straight to the LCD to minimize latency.

A police officer has an A/V capture device with an LCD on the back. The police officer occasionally turns on the LCD to verify the camera is working as expected.

What causes latency

Many factors can cause latency in a multimedia stream. Some of the main ones are:

1. Extra buffering

In both audio and video applications it is common to use data queues to hold incoming or outgoing data. Queues add stability to the stream and help avoid losing information; however, intensive use of queues, or large queues that hold a lot of data before it is picked up, tends to be problematic in almost every scenario.

Selecting the correct queue size is therefore a trade-off between latency and the probability of dropping data packets.
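As an illustration, GStreamer lets you bound a queue explicitly instead of relying on its defaults. The sketch below is a synthetic test pipeline, not a real capture chain: it caps the queue at two buffers and leaks old data downstream, trading drop resistance for lower latency.

```shell
# Hypothetical low-latency test pipeline: limit the queue to two
# buffers and drop the oldest data when full, so latency cannot build up.
gst-launch-1.0 -v videotestsrc is-live=true \
    ! queue max-size-buffers=2 max-size-bytes=0 max-size-time=0 leaky=downstream \
    ! autovideosink sync=false
```

On a real board you would replace videotestsrc/autovideosink with your capture source and display sink, then tune max-size-buffers against observed packet drops.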

2. Non-optimized algorithms for data processing

Audio and video applications make heavy use of signal processing algorithms. Anything from a simple FIR audio filter to a complex face recognition algorithm involves intensive data manipulation which, if done the wrong way, may introduce much more latency than expected. Compiler optimization hints are available, and plenty of tricks can be found for each specific programming language.

When possible, it is always recommended to optimize the code using any available hardware acceleration to reduce processing time. On ARM systems, if there is no special-purpose unit for a given feature, it is at least recommended to use the SoC's NEON unit for acceleration.
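For example, when cross-compiling with GCC for an ARMv7 target, NEON auto-vectorization can be requested with compiler flags. The toolchain prefix and the fir_filter.c source file below are placeholders for your own toolchain and code.

```shell
# Build a filter for an ARMv7 SoC letting GCC auto-vectorize with the
# NEON unit; assumes a hard-float ARMv7 cross toolchain is installed.
arm-linux-gnueabihf-gcc -O3 -mfpu=neon -mfloat-abi=hard \
    -ftree-vectorize -o fir_filter fir_filter.c
```

Note that -O3 already enables auto-vectorization; -ftree-vectorize is shown explicitly only to make the intent visible.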

3. Inefficient memory usage

It is common to deal with entire audio/video buffers that need to be processed in some way. The processing can happen in place or may require generating a completely new data buffer. In either case copies, writes, reads and other operations may be needed, and the way those operations are managed directly impacts overall performance. As a rule of thumb, the programmer should always do as few memory copies as possible.
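To get a feel for what a single extra copy costs, the back-of-the-envelope calculation below (frame size and frame rate are illustrative assumptions) shows the memory bandwidth one redundant full-frame copy consumes:

```shell
# One 1080p YUV420 frame is 1920*1080*1.5 bytes; at 30 fps a single
# extra copy per frame moves ~93 MB/s through memory, in each direction.
FRAME=$(( 1920 * 1080 * 3 / 2 ))   # bytes per frame
FPS=30
echo "extra copy bandwidth: $(( FRAME * FPS / 1000000 )) MB/s"
```

On a memory-bandwidth-limited SoC, avoiding that copy frees bandwidth for the actual processing and shortens the end-to-end path.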

4. Hardware latency

This latency source is inherent to any stream, but only on rare occasions does it introduce a significant delay.

ALSA or JACK for audio

The standard Linux audio libraries (for example alsa-lib) can add latency to audio streams. One solution is to use JACK (jackaudio.org) instead. JACK is an audio server that can redirect audio from one piece of software to another. Using JACK does not introduce extra latency into the system, although it does slightly increase CPU usage. In testing, JACK by itself adds almost no latency; if you need extremely low latency, JACK can be combined with a real-time kernel.
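For instance, the JACK server can be started directly against an ALSA device with a small period size. The device name hw:0 below is an assumption about the target board; adjust it to your hardware.

```shell
# Start JACK on ALSA device hw:0 at 48 kHz with 128-frame periods and
# 2 periods per buffer: roughly 2.7 ms per period of hardware buffering.
jackd -d alsa -d hw:0 -r 48000 -p 128 -n 2
```

Smaller -p values lower latency further but increase the risk of xruns, so the period size has to be tuned against the CPU load of the system.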

The JACK server (jackd) can be used to redirect audio from one device to another. There are also GStreamer audio sink and source elements that are JACK compatible:

  • jackaudiosink
  • jackaudiosrc
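A minimal loopback sketch using these elements, assuming a jackd server is already running:

```shell
# Pull audio from JACK into GStreamer and push it back out through JACK;
# connect=1 asks the elements to auto-connect to the physical ports.
gst-launch-1.0 jackaudiosrc connect=1 ! audioconvert ! jackaudiosink connect=1
```

In a real application the audioconvert stage would be replaced by the actual processing elements of the pipeline.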