Jetson TX2 - Jetson TX2 V4L2 driver support - V4L2 driver for camera sensor or capture chip

From RidgeRun Developer Connection








Introduction

RidgeRun has more than 12 years of experience creating custom Linux V4L2 drivers for embedded systems. The customer selects the hardware sensor or chip and RidgeRun creates the V4L2 driver for it. This wiki describes the services provided by RidgeRun to create a V4L2 driver for your system, as well as some of the considerations related to time frame, documentation, hardware, etc. The Contact Us section provides information on how to reach the RidgeRun team.

V4L2 Driver

V4L2 is the official Linux kernel API for handling capture devices such as camera sensors, video decoders, or FPGAs feeding video frames to the SoC. The video frames can arrive over component, composite, HDMI, or SDI, or over other video interface standards.

The V4L2 framework defines the API that a Linux camera driver must support in order to be V4L2 compliant. The Linux kernel uses the camera driver to initialize the hardware and produce video frames. Each of these functions has a specific implication for the camera sensor. Often the driver interacts with the camera sensor, receiver chip, or FPGA by reading and writing I2C or SPI registers.
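As a rough illustration of this register-level interaction, below is a minimal sketch of how a V4L2 subdevice driver might write a sensor register over I2C. The register layout (16-bit address, 8-bit value) and the register address used are hypothetical placeholders, not those of any specific sensor.

#include <linux/i2c.h>
#include <media/v4l2-subdev.h>

/* Hypothetical 16-bit register address / 8-bit value layout,
 * common in Sony- and Omnivision-style sensors. */
static int sensor_write_reg(struct i2c_client *client, u16 reg, u8 val)
{
	u8 buf[3] = { reg >> 8, reg & 0xff, val };

	return i2c_master_send(client, buf, sizeof(buf)) == sizeof(buf) ?
	       0 : -EIO;
}

/* Reached through the v4l2_subdev video ops when an application
 * starts or stops streaming. */
static int sensor_s_stream(struct v4l2_subdev *sd, int enable)
{
	struct i2c_client *client = v4l2_get_subdevdata(sd);

	/* 0x0100 is only a placeholder for the sensor's streaming-control
	 * register; consult your sensor's datasheet for the real one. */
	return sensor_write_reg(client, 0x0100, enable ? 0x01 : 0x00);
}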

Creating a Linux camera driver consists of four steps:

  • Subdevice driver - camera sensor configuration via I2C, SPI, or other low-level communication to initialize the sensor and support different resolutions. RidgeRun custom drivers support one resolution; others can be added as needed.
  • Device tree modification
  • Capture subsystem configuration and video node creation (/dev/video):
    • On Jetson TX1/TX2/Xavier this involves the code needed to configure the Video Input (VI) unit to receive the video coming from the camera, with support for capture through V4L2, libargus, and nvcamerasrc (YUV).
    • On UltraScale+ this involves adding the code to configure the VPSS to receive the video coming from the sensor. It might require some work on the PL (programmable logic).
    • On DM8168 and DM8148, this is the VPSS configuration.
    • On i.MX6, this is the IPU configuration.
    • On DM368, this is the VPFE configuration.
  • Application Support:
    • Add support to one application, such as GStreamer or Yavta, to grab the frames available in the video node (/dev/video); sometimes this involves creating software patches to support custom color spaces. A minimal capture sketch in C follows this list.
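To give a feel for the application side, the following minimal user-space sketch opens the video node and queries the driver through standard V4L2 ioctls; the device path /dev/video0 is an assumption about your setup.

/* Build with: gcc -o v4l2-probe v4l2-probe.c */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_capability cap;
	struct v4l2_format fmt;
	int fd = open("/dev/video0", O_RDWR);

	if (fd < 0) {
		perror("open /dev/video0");
		return 1;
	}

	/* Ask the driver to identify itself and its capabilities. */
	if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
		printf("driver: %s, card: %s\n", cap.driver, cap.card);

	/* Query the current capture format negotiated by the driver. */
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	if (ioctl(fd, VIDIOC_G_FMT, &fmt) == 0)
		printf("format: %ux%u, pixelformat: %.4s\n",
		       fmt.fmt.pix.width, fmt.fmt.pix.height,
		       (char *)&fmt.fmt.pix.pixelformat);

	close(fd);
	return 0;
}

Applications such as GStreamer's v4l2src perform these same ioctls internally before mapping buffers and streaming frames.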

Resolutions and controls

  • The Additional CMOS Sensor or V4L2 driver product includes the work to complete each of the stages mentioned above. Its price includes support for one resolution chosen by the customer; others can be added later on a time-and-materials (T&M) basis. Normally the hardest part is getting the system to capture frames, but once it is working for one resolution, adding others is straightforward.
  • RidgeRun also provides services to extend the driver to support additional controls like auto white balance, contrast, and exposure time, if the sensor has these capabilities, as well as support for multiple sensors/chips. A sketch of setting such a control appears after this list.
  • In the case of NVIDIA Jetson, RidgeRun will use the default ISP calibration. Please note that once the driver is in place you might need to create a custom ISP calibration file for your sensor if you need to use the built-in ISP. NVIDIA gives access to the ISP calibration tools only to ODMs, so companies like D3 Engineering and Leopard Imaging can create this file for you if the default settings don't produce the expected image quality.
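As a hedged illustration of what such a control looks like from user space, the snippet below sets the exposure control through the standard VIDIOC_S_CTRL ioctl. The control ID, value, and device path are illustrative assumptions; the exact controls available depend on what the driver exposes.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	/* V4L2_CID_EXPOSURE is a standard control ID; the value is in
	 * device-specific units and 100 is only an example. */
	struct v4l2_control ctrl = {
		.id = V4L2_CID_EXPOSURE,
		.value = 100,
	};
	int fd = open("/dev/video0", O_RDWR);

	if (fd < 0) {
		perror("open /dev/video0");
		return 1;
	}

	if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
		perror("VIDIOC_S_CTRL"); /* driver may not expose it */

	close(fd);
	return 0;
}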

Devices

Some of the devices that might need a V4L2 driver are:

  • Camera sensors from vendors such as Sony, Aptina, and Omnivision
  • SDI receivers like Gennum
  • HDMI receivers
  • GMSL or FPD Link chips to extend the physical connection of the camera to the SoC
  • Composite or component decoders
  • FPGAs feeding video

Delivery

Once the driver is done, RidgeRun provides the source code of the driver as well as a wiki page with instructions on how to compile and test the driver, normally using applications like GStreamer or Yavta, with performance measurements such as ARM load and frames per second.

Documentation Required

  • In order to complete the driver, RidgeRun needs access to the documentation that describes how to configure the sensor or receiver; this normally happens through I2C or SPI registers, unless your driver is a V4L2 driver for an FPGA. For this reason RidgeRun has NDAs with:
    • Omnivision
    • Maxim
    • Sony
    • Framos
    • Aptina
    • Toshiba
  • Although it is not mandatory, it is useful to provide the schematics for your board to better understand how the video receivers are connected. Details like the I2C bus, MIPI CSI-2 port, parallel port, clock signals, etc., help RidgeRun engineers create your driver faster and in some cases detect hardware issues.

Hardware

  • RidgeRun needs remote or physical access to the hardware to create and test the driver. RidgeRun assumes that there are no hardware issues that would delay the development process (and increase costs). In case of problems with your hardware, RidgeRun will bill up to 20 hours of engineering services for the time needed to inform you what is wrong.
  • Once the driver is done, the hardware is shipped back to the customer.

Time frame

  • Creating a V4L2 driver requires 3 to 4 weeks if RidgeRun doesn't already have the driver. During this period, partial deliveries are provided to the customer, as well as progress updates. Any situations blocking progress (like hardware issues) are reported to the customer as well.

EDID Support on HDMI capture drivers

If your capture chip is an HDMI receiver, please ask RidgeRun about EDID support for your driver, because getting the chip working with your camera might require additional work due to EDID requirements. This section explains some of the work that is required.

EDID Background

The EDID is essentially a descriptor (a block of hex values) of the resolutions supported by the HDMI receiver, for instance the TC358840 chip, so the camera or video source knows which resolutions it can output. This EDID information is sent through the DDC channel, which is essentially I2C [1].

The EDID descriptor has multiple revisions or versions [2], and therefore not all cameras are able to parse the same EDID or all versions. This is why it is important to know which video source you will use. We have seen cases where our EDID works with multiple cameras but one camera doesn't like it, so we have to modify it until we get it working. Furthermore, in some cases the camera manufacturer doesn't pay attention to the EDID and the camera just outputs a default resolution.

One option, for instance, is to connect your cameras to a specific monitor. If all of them output 1080p60, then we could try copying the EDID from your monitor and putting it in our driver after some modifications, because your monitor's EDID will likely report a large number of supported resolutions that are not supported by the TC358840.
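For background, an EDID base block is 128 bytes that must sum to 0 modulo 256, and it begins with a fixed 8-byte header. The short C sketch below checks both properties on an EDID read from sysfs; the sysfs path matches the Linux commands shown below and is an assumption about your display setup.

#include <stdio.h>
#include <string.h>

static const unsigned char edid_header[8] =
	{ 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00 };

/* An EDID block is valid when its 128 bytes sum to 0 mod 256;
 * the base block additionally starts with the fixed header. */
static int edid_block_valid(const unsigned char *block, int is_base)
{
	unsigned int sum = 0;
	int i;

	if (is_base && memcmp(block, edid_header, sizeof(edid_header)))
		return 0;

	for (i = 0; i < 128; i++)
		sum += block[i]; /* byte 127 is the checksum */

	return (sum & 0xff) == 0;
}

int main(void)
{
	unsigned char block[128];
	FILE *f = fopen("/sys/class/drm/card0-HDMI-A-1/edid", "rb");

	if (!f || fread(block, 1, sizeof(block), f) != sizeof(block)) {
		fprintf(stderr, "could not read a full EDID base block\n");
		return 1;
	}
	printf("EDID base block is %s\n",
	       edid_block_valid(block, 1) ? "valid" : "invalid");
	fclose(f);
	return 0;
}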

In Linux, after connecting the monitor you can read the EDID using these commands:

Display EDID information for each display:

xrandr --props

Display EDID information for a specific display:

cat /sys/class/drm/card0-HDMI-A-1/edid | hexdump

Decode EDID information:

sudo apt-get install edid-decode
cat /sys/class/drm/card0-HDMI-A-1/edid | edid-decode

In Windows there are tools to grab the EDID and edit it, but since there are multiple versions, not all tools will be able to decode and edit every EDID.

[1] https://en.wikipedia.org/wiki/HDMI

[2] https://www.extron.com/company/article.aspx?id=uedid
