<seo title=" Sony IMX219 Linux driver | Sony IMX219 Linux Driver for Jetson | RidgeRun Developer" titlemode="replace" metakeywords=" GStreamer, Linux SDK, Linux BSP,  Embedded Linux, Device Drivers, Nvidia, Xilinx, TI, NXP, Freescale, Embedded Linux driver development, Linux Software development, Embedded Linux SDK, Embedded Linux Application development, GStreamer Multimedia Framework, Sony IMX219, IMX219, IMX219 Linux driver, Sony IMX219 Linux driver, Linux driver for Jetson TX1, Linux driver for Jetson TX2, Sony IMX219 Linux driver for Jetson TX2, V4L2 Driver, IMX219 driver" metadescription="Check out our comprehensive overview of the Sony IMX219 Linux driver for Jetson  Watch an overview video, check out the features and more at RidgeRun!"></seo>
<table>
<tr>
<td><div class="clear; float:right">__TOC__</div></td>
<td>
{{Shopping cart mpo for V4L2 camera drivers}}
{{NVIDIA Preferred Partner logo}}
For more information please visit our [http://www.ridgerun.com/store/!/Additional-CMOS-Sensor-or-V4L2-driver/p/59350605/category%3D16360695 online store] or [http://www.ridgerun.com/#!contact/c3vn contact us].
</td>
<td>
{{GStreamer debug}}
</td>
<td>
<center>
{{ContactUs Button}}
</center>
</td>
</tr>
</table>
<br>
== Overview Video ==
<br>
<center>
<embedvideo service="vimeo" itemprop="video" itemscope itemtype="https://schema.org/VideoObject">
  <meta itemprop="contentURL" content="https://vimeo.com/191588590">
  <meta itemprop="thumbnailUrl" content="DualCapture SonyIMX219 JetsonTX1.png">
  <meta itemprop="description" content="Dual IMX219 Camera Capture with Tegra X1">
  <meta itemprop="name" content="Dual IMX219 Camera Capture with Tegra X1">
</embedvideo>
</center>
  
==Sony IMX219 image sensor features==

The Sony IMX219 is a CMOS image sensor with the following features:

* CSI-2 serial data output (selection of 4 or 2 lanes)
* Max. 30 frames/s in all-pixel scan mode
* 180 frames/s @ 720p with 2x2 analog (special) binning, 60 frames/s @ 1080p with V-crop
* Data rate: max. 722 Mbps/lane (4-lane mode), 912 Mbps/lane (2-lane mode)
* Max resolution of 3280 (H) x 2464 (V), approx. 8.08 megapixels
  
RidgeRun has developed a driver for the Jetson TX1 platform with the following support:

* L4T 24.2 and Jetpack 2.3
* V4L2 media controller driver
* Tested resolution 3280x2464 @ 15 fps
* Tested resolution 1080p @ 47 fps
* Tested resolution 720p @ 78 fps
* Tested resolution 1640x1232 @ 30 fps
* Tested resolution 820x616 @ 30 fps
* Tested with J100 and J20 Auvidea boards
* Capture with v4l2src and also with nvcamerasrc using the ISP
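
Since this is a V4L2 media controller driver, a quick sanity check once it is installed is to print the media topology with media-ctl from the v4l-utils package (the media node number here is an assumption; it may differ on your setup):

<pre>
media-ctl -d /dev/media0 -p
</pre>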
  
==Enabling the driver==
 
In order to use this driver, you have to patch and compile the kernel source, and there are two ways to do it:
  
===Using RidgeRun SDK===
  
Through the SDK you can easily patch the kernel and generate an image with the required changes to get the IMX219 sensor to work. In this wiki, [[Getting Started Guide for Tegra X1 Jetson]], you can find all the information required to build a Jetson TX1 SDK from scratch.
  
In order to add the IMX219 driver, follow these steps:
  
* Go to your SDK directory
* Go to the kernel directory
* Copy the patches into the patches directory:

<pre>
0001-add-imx219-subdevice-driver.patch
0002-add-imx219-dtb.patch
</pre>

If you will use the driver with the J20 board, you will also need to copy the following patch:

<pre>
0003-add-j20-board-driver.patch
</pre>
 
* Modify the series file in the kernel directory: you have to add the 3 patches above, as in the sketch below.
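
The series file is simply an ordered list of patch file names, applied top to bottom. After adding the driver patches it would look something like this (a sketch, assuming no other patches were already listed):

<pre>
0001-add-imx219-subdevice-driver.patch
0002-add-imx219-dtb.patch
0003-add-j20-board-driver.patch
</pre>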
 
* Run '''make config''' and select the IMX219 in the Kernel Configuration like this:

<pre>
-> Kernel Configuration
 -> Device Drivers
   -> Multimedia support
     -> Encoders, decoders, sensors, and other helper chips
       -> <*> IMX219 camera sensor support
</pre>

If you will use the driver with the J20 board, you also need to select the J20 support in the Kernel Configuration like this:

<pre>
-> Kernel Configuration
 -> Device Drivers
   -> Multimedia support
     -> Encoders, decoders, sensors and other helper chips
       -> <*> Auvidea J20 Expansion board
</pre>
* Then '''make''' the SDK and install it following the Getting Started Guide mentioned before.
  
===Using Jetpack===

* Follow the instructions in [[Compiling_Tegra_X1_source_code#Build_Kernel|Compiling Jetson TX1 source code (Downloading the code)]] to get the kernel source code.
  
 
* Once you have the source code, apply the following two patches, if you haven't yet, to fix kernel errors during compilation:

<pre>
kernel_r7_asm.patch
logical_comparison.patch
</pre>

* Apply the driver patches:

<pre>
0001-add-imx219-subdevice-driver.patch
0002-add-imx219-dtb.patch
</pre>

If you will use the driver with the J20 board, you will also need to apply the following patch:

<pre>
0003-add-j20-board-driver.patch
</pre>

You can apply the patches with the command:

<pre>
quilt push -a
</pre>
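
As a quick sanity check, quilt can list the patches it has applied:

<pre>
quilt applied
</pre>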

* Follow the instructions in [[Compiling_Tegra_X1_source_code#Build_Kernel|Compiling Jetson TX1 source code (Build Kernel)]] for building the kernel, and then flash the image.
  
 
Make sure to enable IMX219 driver support:

<pre>
make menuconfig
-> Device Drivers
  -> Multimedia support
    -> Encoders, decoders, sensors and other helper chips
      -> <*> IMX219 camera sensor support
</pre>
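
Once the kernel is configured, you can confirm the option was recorded by grepping the generated .config (this simply looks for any matching symbol, since the exact Kconfig name depends on the driver):

<pre>
grep -i imx219 .config
</pre>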
  
If you will use the driver with the J20 board, also make sure to enable the J20 support:

<pre>
make menuconfig
-> Device Drivers
  -> Multimedia support
    -> Encoders, decoders, sensors and other helper chips
      -> <*> Auvidea J20 Expansion board
</pre>
 
==Using the driver==

===GStreamer examples===

The GStreamer version distributed with Jetpack doesn't support Bayer RAW10, only RAW8, so GStreamer needs to be patched in order to capture using v4l2src. Follow the steps in the following wiki page to add support for RAW10:

http://developer.ridgerun.com/wiki/index.php?title=Compile_gstreamer_on_tegra_X1
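
Before running the pipelines below, it is useful to confirm that the driver registered a video node and advertises the expected Bayer modes. A quick check, assuming v4l2-ctl from the v4l-utils package is installed and the sensor is /dev/video0:

<pre>
v4l2-ctl -d /dev/video0 --list-formats-ext
</pre>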
  
'''Important Note:''' When you are accessing the board through serial or SSH and you want to run a pipeline to display with autovideosink, nveglglessink, xvimagesink, or any other video sink, you have to run your pipeline with ''DISPLAY=:0'' at the beginning of the description:

<pre style="background:#d6e4f1">
DISPLAY=:0 gst-launch-1.0 ...
</pre>

==== Snapshots ====

In order to check the snapshots, you can use the following tool:

https://github.com/jdthomas/bayer2rgb

Run the following commands to download the tool and compile it:

<pre>
git clone git@github.com:jdthomas/bayer2rgb.git
cd bayer2rgb
make
cp bayer2rgb /usr/bin/
</pre>

bayer2rgb converts naked (headerless) Bayer grid data into RGB data. There are several choices of interpolation (though they all look essentially the same to the eye). It can output TIFF files and can integrate with ImageMagick to output other formats.
  
 
*3280x2464

<pre style="background:#d6e4f1">
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=1 ! "video/x-bayer, format=rggb, width=3280, height=2464" ! filesink location=test_3280x2464.bayer
</pre>

Check the snapshot with:

<pre>
./bayer2rgb --input=test_3280x2464.bayer --output=data.tiff --width=3296 --height=2464 --bpp=16 --first=RGGB --method=BILINEAR --tiff
</pre>

Note that the ''--width'' value passed to bayer2rgb is 3296 rather than 3280: the captured lines include padding added by the hardware, so the stored line width is rounded up. The same applies to the 1640x1232 and 820x616 modes below.

Use ImageMagick to convert the TIFF to PNG:

<pre>
convert data.tiff data.png
</pre>
  
*1920x1080

<pre style="background:#d6e4f1">
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=1 ! "video/x-bayer, format=rggb, width=1920, height=1080" ! filesink location=test_1920x1080.bayer
</pre>

Check the snapshot with:

<pre>
./bayer2rgb --input=test_1920x1080.bayer --output=data.tiff --width=1920 --height=1080 --bpp=16 --first=RGGB --method=BILINEAR --tiff
</pre>

Use ImageMagick to convert the TIFF to PNG:

<pre>
convert data.tiff data.png
</pre>
  
*1280x720

<pre style="background:#d6e4f1">
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=1 ! "video/x-bayer, format=rggb, width=1280, height=720" ! filesink location=test_1280x720.bayer
</pre>

Check the snapshot with:

<pre>
./bayer2rgb --input=test_1280x720.bayer --output=data.tiff --width=1280 --height=720 --bpp=16 --first=RGGB --method=BILINEAR --tiff
</pre>

Use ImageMagick to convert the TIFF to PNG:

<pre>
convert data.tiff data.png
</pre>

*1640x1232 (Binning x2)

<pre style="background:#d6e4f1">
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=1 ! "video/x-bayer, format=rggb, width=1640, height=1232" ! filesink location=test_1640x1232.bayer
</pre>

Check the snapshot with:

<pre>
./bayer2rgb --input=test_1640x1232.bayer --output=data.tiff --width=1664 --height=1232 --bpp=16 --first=RGGB --method=BILINEAR --tiff
</pre>

Use ImageMagick to convert the TIFF to PNG:

<pre>
convert data.tiff data.png
</pre>

*820x616 (Binning x4)

<pre style="background:#d6e4f1">
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=1 ! "video/x-bayer, format=rggb, width=820, height=616" ! filesink location=test_820x616.bayer
</pre>

Check the snapshot with:

<pre>
./bayer2rgb --input=test_820x616.bayer --output=data.tiff --width=832 --height=616 --bpp=16 --first=RGGB --method=BILINEAR --tiff
</pre>

Use ImageMagick to convert the TIFF to PNG:

<pre>
convert data.tiff data.png
</pre>
  
==== Capture ====

===== V4l2src =====

You can use the raw2rgbpnm tool to check all the buffers:

https://github.com/martinezjavier/raw2rgbpnm

Run the following commands to download the tool and compile it:

<pre>
git clone git@github.com:martinezjavier/raw2rgbpnm.git
cd raw2rgbpnm
</pre>

Open the file raw2rgbpnm.c and change line 489 to:

<pre>
int c = getopt(argc, argv, "a:b:f:ghs:wn");
</pre>

This enables the option to extract multiple frames from a file. Now you can build the application:

<pre>
make
</pre>

This tool converts from GRBG10 to PNM. We capture RGGB with the IMX219, so you will see that the colors in the output image are wrong.
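
Just like the TIFF snapshots above, the resulting PNM output can be converted for viewing with ImageMagick (the file name here is illustrative; raw2rgbpnm derives its actual output names from the prefix passed as the last argument):

<pre>
convert output_3280x2464.pnm output_3280x2464.png
</pre>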

In order to capture 10 buffers and save them in a file, you can run the following pipelines:

*3280x2464

<pre style="background:#d6e4f1">
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=10 ! "video/x-bayer, format=rggb, width=3280, height=2464" ! filesink location=test_3280x2464.bayer
</pre>

Check the buffers with:

<pre>
./raw2rgbpnm -f SGRBG10 -s 3296x2464 -b 5.0 -n test_3280x2464.bayer output_3280x2464
</pre>
  
*1920x1080

<pre style="background:#d6e4f1">
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=10 ! "video/x-bayer, format=rggb, width=1920, height=1080" ! filesink location=test_1920x1080.bayer
</pre>

Check the buffers with:

<pre>
./raw2rgbpnm -f SGRBG10 -s 1920x1080 -b 5.0 -n test_1920x1080.bayer output_1920x1080
</pre>
  
*1280x720

<pre style="background:#d6e4f1">
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=10 ! "video/x-bayer, format=rggb, width=1280, height=720" ! filesink location=test_1280x720.bayer
</pre>

Check the buffers with:

<pre>
./raw2rgbpnm -f SGRBG10 -s 1280x720 -b 5.0 -n test_1280x720.bayer output_1280x720
</pre>

*1640x1232 (Binning x2)

<pre style="background:#d6e4f1">
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=10 ! "video/x-bayer, format=rggb, width=1640, height=1232" ! filesink location=test_1640x1232.bayer
</pre>

Check the buffers with:

<pre>
./raw2rgbpnm -f SGRBG10 -s 1664x1232 -b 5.0 -n test_1640x1232.bayer output_1640x1232
</pre>

*820x616 (Binning x4)

<pre style="background:#d6e4f1">
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=10 ! "video/x-bayer, format=rggb, width=820, height=616" ! filesink location=test_820x616.bayer
</pre>

Check the buffers with:

<pre>
./raw2rgbpnm -f SGRBG10 -s 832x616 -b 5.0 -n test_820x616.bayer output_820x616
</pre>

===== Nvcamerasrc =====

*3280x2464

<pre style="background:#d6e4f1">
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="21 21" ! 'video/x-raw(memory:NVMM), width=(int)3280, height=(int)2464, format=(string)I420, framerate=(fraction)21/1' ! autovideosink
</pre>

*1920x1080

<pre style="background:#d6e4f1">
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! autovideosink
</pre>

*1640x1232

<pre style="background:#d6e4f1">
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)1640, height=(int)1232, format=(string)I420, framerate=(fraction)30/1' ! autovideosink
</pre>

This is an image captured with the above pipeline:

[[Image:Sony_IMX219_capture.JPG|thumb|border|center|800px|alt=Alt|Figure 1: IMX219 capture with nvcamerasrc. A camera aimed at the computer monitor on the left which is reflecting the wall and ceiling shown on the right.]]

*1280x720

<pre style="background:#d6e4f1">
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)30/1' ! autovideosink
</pre>

*820x616

<pre style="background:#d6e4f1">
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)820, height=(int)616, format=(string)I420, framerate=(fraction)30/1' ! autovideosink
</pre>
  
====Dual Capture====

Using the following pipelines we can test the performance of the Jetson TX1 when doing dual capture:

===== V4l2src =====
  
 
<pre style="background:#d6e4f1">
gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-bayer,format=rggb,width=3280,height=2464' ! fakesink \
v4l2src device=/dev/video3 ! 'video/x-bayer,format=rggb,width=3280,height=2464' ! fakesink
</pre>
  
We noticed that using two cameras at the max resolution '''3280x2464''', the CPU load measured with tegrastats doesn't change considerably; it remains almost the same:

* Tegrastats in normal operation:

<pre>
RAM 563/3994MB (lfb 748x4MB) cpu [51%,0%,0%,0%]@518 GR3D 0%@76 EDP limit 0
RAM 563/3994MB (lfb 748x4MB) cpu [50%,0%,0%,0%]@518 GR3D 0%@76 EDP limit 0
RAM 563/3994MB (lfb 748x4MB) cpu [50%,0%,0%,0%]@518 GR3D 0%@76 EDP limit 0
RAM 563/3994MB (lfb 748x4MB) cpu [51%,0%,0%,0%]@518 GR3D 0%@76 EDP limit 0
RAM 563/3994MB (lfb 748x4MB) cpu [49%,0%,0%,0%]@518 GR3D 0%@76 EDP limit 0
RAM 563/3994MB (lfb 748x4MB) cpu [53%,0%,0%,0%]@518 GR3D 0%@76 EDP limit 0
RAM 563/3994MB (lfb 748x4MB) cpu [52%,0%,0%,0%]@403 GR3D 0%@76 EDP limit 0
</pre>

* Tegrastats with the above pipeline running:

<pre>
RAM 608/3994MB (lfb 651x4MB) cpu [57%,3%,0%,0%]@825 GR3D 0%@76 EDP limit 0
RAM 608/3994MB (lfb 651x4MB) cpu [52%,29%,0%,0%]@403 GR3D 0%@76 EDP limit 0
RAM 608/3994MB (lfb 651x4MB) cpu [56%,0%,1%,0%]@403 GR3D 0%@76 EDP limit 0
RAM 608/3994MB (lfb 651x4MB) cpu [60%,1%,1%,0%]@403 GR3D 0%@76 EDP limit 0
RAM 608/3994MB (lfb 651x4MB) cpu [57%,1%,0%,0%]@403 GR3D 0%@76 EDP limit 0
RAM 608/3994MB (lfb 651x4MB) cpu [58%,0%,1%,0%]@403 GR3D 0%@76 EDP limit 0
RAM 608/3994MB (lfb 651x4MB) cpu [58%,1%,0%,0%]@403 GR3D 0%@76 EDP limit 0
</pre>
  
===== Nvcamerasrc =====

<pre style="background:#d6e4f1">
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="21 21" ! 'video/x-raw(memory:NVMM), width=(int)3280, height=(int)2464, format=(string)I420, framerate=(fraction)21/1' ! \
fakesink nvcamerasrc sensor-id=2 fpsRange="21 21" ! 'video/x-raw(memory:NVMM), width=(int)3280, height=(int)2464, format=(string)I420, framerate=(fraction)21/1' ! fakesink
</pre>

These tests were done using the J20 board from Auvidea: [http://developer.ridgerun.com/wiki/index.php?title=Getting_started_guide_for_Auvidea_J20_board Getting started guide for Auvidea J20 board]

==== Dual capture and dual display ====

<pre style="background:#d6e4f1">
DISPLAY=:0 gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)1640, height=(int)1232, format=(string)I420, \
framerate=(fraction)21/1' ! nvegltransform ! nveglglessink nvcamerasrc sensor-id=2 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)1640, \
height=(int)1232, format=(string)I420, framerate=(fraction)21/1' ! nvegltransform ! nveglglessink -e
</pre>

==== Video Encoding Transport Stream 1640x1232@30fps ====

<pre style="background:#d6e4f1">
CAPS="video/x-raw(memory:NVMM), width=(int)1640, height=(int)1232, format=(string)I420, framerate=(fraction)30/1"

gst-launch-1.0 nvcamerasrc sensor-id=1 fpsRange="30 30" num-buffers=500 ! capsfilter caps="$CAPS" ! omxh264enc ! \
               mpegtsmux ! filesink location=test.ts
</pre>
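
To verify the recording, you can play the transport stream back on the board. A minimal sketch, assuming the standard Jetpack GStreamer elements (tsdemux, h264parse, omxh264dec) are available:

<pre style="background:#d6e4f1">
DISPLAY=:0 gst-launch-1.0 filesrc location=test.ts ! tsdemux ! h264parse ! omxh264dec ! nveglglessink -e
</pre>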

==== Snapshots ====

<pre style="background:#d6e4f1">
gst-launch-1.0 -v nvcamerasrc sensor-id=1 fpsRange="30 30" num-buffers=100 ! 'video/x-raw(memory:NVMM), width=(int)1640, height=(int)1232, format=(string)I420, \
framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw, width=(int)1640, height=(int)1232, format=(string)I420, framerate=(fraction)30/1' ! multifilesink location=test_%d.yuv
</pre>
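
Each snapshot file is one raw I420 frame. A quick way to inspect a frame, assuming a machine with FFmpeg installed (test_0.yuv is the first file produced by the multifilesink pattern above):

<pre>
ffplay -f rawvideo -pixel_format yuv420p -video_size 1640x1232 test_0.yuv
</pre>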

== See also ==

:[[Imx219 vs ov5693 armload]]
:[[Jetson J20 imx219 glass to glass latency]]

{{ContactUs}}

[[Category:Jetson]][[Category:Jetson V4L2 Drivers]][[Category:Sony]]
