Sony IMX230 Linux driver


Keywords: IMX230, Jetson TX1, GStreamer, Raspberry Pi, NVIDIA, RidgeRun, V4L2 driver, IMX2XX


IMX230 Features

The IMX230 is a CMOS image sensor with the following features:

  • Phase Detection Auto Focus (PDAF)
  • Single Frame High Dynamic Range (HDR) with equivalent full pixels
  • High signal to noise ratio (SNR)
  • Full resolution @ 24fps (Normal / HDR), 4K2K @ 30fps (Normal / HDR), 1080p @ 60fps (Normal / HDR)
  • Independent flipping and mirroring
  • CSI-2 serial data output (MIPI 2-lane/4-lane, max. 1.5 Gbps/lane, D-PHY spec. ver. 1.1 compliant)
  • Output video format of RAW10/8, COMP8/6
  • Advanced Noise Reduction (Chroma noise reduction and RAW noise reduction)


Official manufacturer IMX230 camera sensor documentation link: IMX230 21-megapixel product brief


RidgeRun has developed a driver for the Jetson TX1 platform with the following support:

  • Jetpack 3.1 and 2.3.1 support.
  • V4L2 media controller driver
  • Tested resolution: 5344x4016 @ 21 fps. Also cropped modes: 4208x3120, 2672x2008 and 1920x1080.
  • Output format: RAW10 Bayer BGGR pattern.
  • Leopard Imaging CB boards for TX1
  • Capture with v4l2src and nvcamerasrc
  • Support for triple video capture (tested using the 3 available ports of the LI CB board)
  • BU64295/BU64297 VCM driver support

Enabling the driver

In order to use this driver, you have to patch and compile the kernel source, and there are two ways to do it:

Using RidgeRun SDK

Through the SDK you can easily patch the kernel and generate an image with the required changes to get the IMX230 sensor to work. In this wiki Getting_Started_Guide_for_Tegra_X1_Jetson you can find all the information required to build a Jetson TX1 SDK from scratch.

In order to add the IMX230 driver follow these steps:

  • Go to your SDK directory
  • Go to the kernel directory
  • Copy the patches into the patches directory:
001-add-li-cb-board-driver.patch 
002-add-driver-imx230-camera.patch                    
003-imx230-custom-dtb-li-cb.patch
  • Modify the series file in the kernel directory, adding the 3 patches above (see the snippet after this list).
  • Run make config and select the IMX230 driver in the Kernel Configuration like this:
-> Kernel Configuration
 -> Device Drivers                                                                                                                        
  -> Multimedia support                                                                                           
    -> Encoders, decoders, sensors and other helper chips
       -> <*> IMX230 camera sensor support
  • Then build the SDK and install it following the Getting Started Guide mentioned before.
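
For reference, the three patches can be appended to the series file from the kernel directory with a quick shell snippet (a sketch; SDK_DIR is a placeholder for your SDK path, and it assumes no other entries need to come after these patches):

# Append the IMX230 patches to the kernel series file (SDK_DIR is a placeholder)
cd $SDK_DIR/kernel
cat >> series << EOF
001-add-li-cb-board-driver.patch
002-add-driver-imx230-camera.patch
003-imx230-custom-dtb-li-cb.patch
EOF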

Using Jetpack

  • Once you have the source code, apply the patches to fix a kernel error during compilation and to add support for the IMX230:
cd $DEVDIR/64_TX1/Linux_for_Tegra_64_tx1/sources/
PATCHES=<path to the tarball>
tar -xzf $PATCHES/imx230-driver-for-tegra-x1-patches.tar.gz
mv imx230-driver-for-tegra-x1-patches patches
quilt push -a

Please change the path in PATCHES to the one where you downloaded the tarball with the IMX230 patches.
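
Before rebuilding, you can confirm that the three patches were applied cleanly by asking quilt to list the currently applied patches:

# List the patches quilt has applied; the IMX230 patches should appear at the end
quilt applied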

Make sure to enable the IMX230 driver support in step 5:

-> Device Drivers                                                                                                                        
 -> Multimedia support                                                                                           
   -> Encoders, decoders, sensors and other helper chips
      -> <*> IMX230 camera sensor support
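
After flashing the new kernel, a quick way to confirm that the driver probed correctly is to check that the video device nodes were created and to list the formats the sensor exposes (standard v4l2-ctl usage; the number of /dev/video nodes depends on how many cameras are connected):

# Check that the capture nodes exist and list the supported formats/resolutions
ls /dev/video*
v4l2-ctl -d /dev/video0 --list-formats-ext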

Using the driver (Gstreamer examples)

The GStreamer version distributed with Jetpack doesn't support Bayer RAW10, only RAW8, so GStreamer needs to be patched in order to capture using v4l2src. Follow the steps in the following wiki page to add support for RAW10:

http://developer.ridgerun.com/wiki/index.php?title=Compile_gstreamer_on_tegra_X1


Important Note: When you are accessing the board through serial or ssh and want to run a pipeline that displays with autovideosink, nveglglessink, xvimagesink or any other video sink, you have to run your pipeline with DISPLAY=:0 at the beginning of the description:

DISPLAY=:0 gst-launch-1.0 ...
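
For example, a quick sanity check of the display path from an ssh session, using videotestsrc so it does not depend on the camera (a minimal sketch; any of the sinks mentioned above can be used instead of xvimagesink):

# Render a test pattern on the board's local display from a remote session
DISPLAY=:0 gst-launch-1.0 videotestsrc ! xvimagesink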


Snapshots

In order to check the snapshot, you can use the following tool:

https://github.com/jdthomas/bayer2rgb

So, run the following commands to download the tool and compile it:

git clone git@github.com:jdthomas/bayer2rgb.git
cd bayer2rgb
make
cp bayer2rgb /usr/bin/
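
If make fails with missing TIFF headers, the libtiff development package is normally the only extra dependency (the package name below is an assumption for an Ubuntu-based rootfs):

# Install the TIFF development headers used by bayer2rgb's TIFF output
sudo apt-get install libtiff5-dev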

bayer2rgb converts headerless ("naked") Bayer grid data into RGB data. There are several choices of interpolation (though they all look essentially the same). It can output TIFF files and can be combined with ImageMagick to produce other formats.

  • 1920x1080
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=1 ! "video/x-bayer, format=bggr, width=1920, height=1080" \
! multifilesink location=test%d_1920x1080.bayer

Check the snapshot with (replace # with the number of the buffer you want to inspect):

./bayer2rgb --input=test#_1920x1080.bayer --output=data.tiff --width=1920 --height=1080 --bpp=16 --first=BGGR \
--method=BILINEAR --tiff

Use ImageMagick to convert the TIFF to PNG:

convert data.tiff data.png
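
If you only need a quick look at the result, ImageMagick can also scale the image down while converting (standard ImageMagick usage):

# Convert and downscale to 25% for faster inspection
convert data.tiff -resize 25% data_small.png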

Important Note 1: In general the first buffer is very dark because the sensor's AWB algorithm is still calibrating, so we recommend using multifilesink and testing the debayer with a buffer later than the first one. To obtain better colors and brightness, give the sensor time to adjust its automatic image calibration parameters and test the debayer with a frame later than number 10.

Important Note 2: The debayered image produced by the bayer2rgb tool shows some light saturation, visible as sections of multi-colored pixels. This is a limitation of the tool's debayering process, not of the driver, and the output is still enough to verify that the driver and camera sensor are working correctly.

Snapshots with nvcamerasrc

The following pipeline will create a file for each captured frame. You can visualize the file in the following web page: http://rawpixels.net/

gst-launch-1.0 -v nvcamerasrc sensor-id=1 fpsRange="30 30" num-buffers=100 ! 'video/x-raw(memory:NVMM), width=(int)5344, \
height=(int)4016, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw, width=(int)5344, \
height=(int)4016, format=(string)I420, framerate=(fraction)30/1' ! multifilesink location=test_%d.yuv
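
As an alternative to the web viewer, if ffplay (part of ffmpeg) is available you can inspect one of the raw I420 frames locally (a sketch; the file name follows the multifilesink pattern above):

# Display a single raw I420 frame captured by the pipeline above
ffplay -f rawvideo -pixel_format yuv420p -video_size 5344x4016 test_0.yuv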


Single Capture

V4l2src

You can use the raw2rgbpnm tool to check all the buffers:

https://github.com/martinezjavier/raw2rgbpnm

So, run the following commands to download the tool and compile it:

git clone git@github.com:martinezjavier/raw2rgbpnm.git
cd raw2rgbpnm

Open the file raw2rgbpnm.c and change line 489 to:

int c = getopt(argc, argv, "a:b:f:ghs:wn");

This is to enable the option to extract multiple frames from a file. Now, you can build the application:

make

Important Note: This tool converts from GRBG10 to PNM. The IMX230 capture is RGGB, so the colors in the output image will look wrong.


In order to capture 10 buffers and save them in a file, you can run the following pipeline:

  • 5344x4016
gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=10 ! "video/x-bayer, format=rggb, width=5344, height=4016" \
! filesink location=test_5344x4016.bayer

Check the buffers with:

./raw2rgbpnm -f SRGGB10 -s 5344x4016 -b 5.0 -n test_5344x4016.bayer output_5344x4016

Nvcamerasrc

  • 5344x4016
DISPLAY=:0 gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)5344, \
height=(int)4016, format=(string)I420, framerate=(fraction)30/1' ! nvegltransform ! nveglglessink -e
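
If you want to record instead of display, the on-board encoder can be used; a minimal sketch at 1920x1080 (omxh264enc and qtmux are the elements commonly available on this Jetpack version, the bitrate is just an example value, and -e makes sure the MP4 is finalized when the pipeline is interrupted):

# Capture at 1080p, encode to H.264 and save to an MP4 file
gst-launch-1.0 nvcamerasrc sensor-id=0 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)1920, \
height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! omxh264enc bitrate=8000000 ! \
h264parse ! qtmux ! filesink location=test_1920x1080.mp4 -e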

Triple Capture

The following image shows a Jetson TX1 with three IMX230 cameras connected to the board.


V4l2src

  • Pipeline for triple video capture using v4l2src, at 5344x4016 @ 21fps:
gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-bayer,format=bggr,width=5344,height=4016' ! fakesink \
v4l2src device=/dev/video1 ! 'video/x-bayer,format=bggr,width=5344,height=4016' ! fakesink \
v4l2src device=/dev/video2 ! 'video/x-bayer,format=bggr,width=5344,height=4016' ! fakesink
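
To measure the framerate of each stream instead of only discarding the buffers, the same pipeline can be instrumented with the perf element used in the Performance section below (assuming the gst-perf plugin is installed):

# Triple capture with a framerate measurement on each branch
gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-bayer,format=bggr,width=5344,height=4016' ! perf ! fakesink \
v4l2src device=/dev/video1 ! 'video/x-bayer,format=bggr,width=5344,height=4016' ! perf ! fakesink \
v4l2src device=/dev/video2 ! 'video/x-bayer,format=bggr,width=5344,height=4016' ! perf ! fakesink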

Nvcamerasrc

  • Pipeline for triple video capture using nvcamerasrc, at 5344x4016 @21fps:
DISPLAY=:0 gst-launch-1.0 \
nvcamerasrc sensor-id=0 auto-exposure=off exposure-time=0.006 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)5344, \
height=(int)4016, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! \
"video/x-raw, width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1" ! queue ! mixer.sink_0 \
nvcamerasrc sensor-id=1 auto-exposure=off exposure-time=0.006 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)5344, \
height=(int)4016, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! \
"video/x-raw, width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1" ! queue ! mixer.sink_1 \
nvcamerasrc sensor-id=2 auto-exposure=off exposure-time=0.006 fpsRange="30 30" ! 'video/x-raw(memory:NVMM), width=(int)5344, \
height=(int)4016, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! \
"video/x-raw, width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1" ! queue ! mixer.sink_2 \
videomixer background=1 name=mixer sink_0::xpos=0 sink_0::ypos=0 sink_1::xpos=640 sink_1::ypos=0 \
sink_2::xpos=320 sink_2::ypos=480 ! queue ! nvoverlaysink sync=false -v

Performance

For these tests the framerate was measured with v4l2-ctl, and with the perf element while capturing with nvcamerasrc.

Framerate with v4l

This test uses the v4l-utils tools to capture raw frames and measure the framerate.

nvidia@tegra-ubuntu:~$ v4l2-ctl -d /dev/video0 --set-fmt-video=width=5344,height=4016,pixelformat=RG10 --set-ctrl bypass_mode=0 --stream-mmap
<<<<<<<<<<<<<<<<<<<<< 19.60 fps
<<<<<<<<<<<<<<<<<<<<< 20.39 fps
<<<<<<<<<<<<<<<<<<<<<< 20.72 fps
<<<<<<<<<<<<<<<<<<<<< 20.89 fps
<<<<<<<<<<<<<<<<<<<<< 20.95 fps
<<<<<<<<<<<<<<<<<<<<<< 21.02 fps
<<<<<<<<<<<<<<<<<<<<< 21.08 fps
<<<<<<<<<<<<<<<<<<<<< 21.12 fps
<<<<<<<<<<<<<<<<<<<<<< 21.15 fps
<<<<<<<<<<<<<<<<<<<<< 21.15 fps
<<<<<<<<<<<<<<<<<<<<< 21.18 fps
<<<<<<<<<<<<<<<<<<<<<< 21.19 fps
<<<<<<<<<<<<<<<<<<<<< 21.19 fps
<<<<<<<<<<<<<<<<<<<<< 21.21 fps
<<<<<<<<<<<<<<<<<<<<<< 21.22 fps
<<<<<<<<<<<<<<<<<<<<< 21.22 fps
<<<<<<<<<<<<<<<<<<<<< 21.23 fps

The sensor captures at 21 fps with V4L2.
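
v4l2-ctl can also save a fixed number of buffers to disk while streaming, which is useful to check the content of the frames used for the measurement (standard v4l2-ctl options):

# Stream 10 buffers and save them to a raw file while measuring
v4l2-ctl -d /dev/video0 --set-fmt-video=width=5344,height=4016,pixelformat=RG10 --set-ctrl bypass_mode=0 \
--stream-mmap --stream-count=10 --stream-to=frames_5344x4016.raw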

Framerate with Nvcamerasrc and perf

This test was run with nvcamerasrc; the results are shown below:

nvidia@tegra-ubuntu:~$ gst-launch-1.0 nvcamerasrc ! perf ! fakesink
Setting pipeline to PAUSED ...

Available Sensor modes : 
5344 x 4016 FR=21.000000 CF=0x1009208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
4208 x 3120 FR=21.000000 CF=0x1009208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
2672 x 2008 FR=21.000000 CF=0x1009208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1920 x 1080 FR=21.000000 CF=0x1009208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock

NvCameraSrc: Trying To Set Default Camera Resolution. Selected sensorModeIndex = 0 WxH = 5344x4016 FrameRate = 21.000000 ...

GST-PERF INFO -->  Timestamp: 0:03:39.618922100; Bps: 0; fps: 0.0 
GST-PERF INFO -->  Timestamp: 0:03:40.759031526; Bps: 708; fps: 3.50 
GST-PERF INFO -->  Timestamp: 0:03:41.969865651; Bps: 667; fps: 3.30 
GST-PERF INFO -->  Timestamp: 0:03:43.155424734; Bps: 681; fps: 3.37 
GST-PERF INFO -->  Timestamp: 0:03:44.355749255; Bps: 673; fps: 3.33 
GST-PERF INFO -->  Timestamp: 0:03:45.537114811; Bps: 684; fps: 3.38 
GST-PERF INFO -->  Timestamp: 0:03:46.749476158; Bps: 666; fps: 3.30 
GST-PERF INFO -->  Timestamp: 0:03:47.941200469; Bps: 678; fps: 3.35 
GST-PERF INFO -->  Timestamp: 0:03:49.032922515; Bps: 740; fps: 3.66 
GST-PERF INFO -->  Timestamp: 0:03:50.074386850; Bps: 776; fps: 10.56 
GST-PERF INFO -->  Timestamp: 0:03:51.086839413; Bps: 798; fps: 15.81 
GST-PERF INFO -->  Timestamp: 0:03:52.131798205; Bps: 773; fps: 16.28 
GST-PERF INFO -->  Timestamp: 0:03:53.183461763; Bps: 768; fps: 16.17 
GST-PERF INFO -->  Timestamp: 0:03:54.238724611; Bps: 765; fps: 16.11 
GST-PERF INFO -->  Timestamp: 0:03:55.285163133; Bps: 772; fps: 16.25 
GST-PERF INFO -->  Timestamp: 0:03:56.341732680; Bps: 765; fps: 16.9 
GST-PERF INFO -->  Timestamp: 0:03:57.389713997; Bps: 771; fps: 16.23 
GST-PERF INFO -->  Timestamp: 0:03:58.464777416; Bps: 751; fps: 15.81 
GST-PERF INFO -->  Timestamp: 0:03:59.509986213; Bps: 773; fps: 16.26 
GST-PERF INFO -->  Timestamp: 0:04:00.547844340; Bps: 779; fps: 16.39 

With nvcamerasrc the framerate does not reach 21 fps; this may be related to the exposure time, which might need further tuning.

ARM load while capturing

The results shown below were taken while capturing with one camera, using the pipeline mentioned in the "Framerate with Nvcamerasrc and perf" section of this wiki.

Single capture

These measurements were taken with nvcamerasrc, running only the capture with one camera, using the pipeline below:

gst-launch-1.0 nvcamerasrc sensor-id=0 ! perf ! fakesink
RAM 1645/3976MB (lfb 332x4MB) CPU [0%@102,0%@102,0%@102,0%@102]
RAM 1645/3976MB (lfb 332x4MB) CPU [4%@102,1%@102,1%@102,6%@102]
RAM 1645/3976MB (lfb 332x4MB) CPU [7%@204,1%@204,2%@204,8%@204]
RAM 1645/3976MB (lfb 332x4MB) CPU [6%@102,3%@102,1%@102,4%@102]
RAM 1647/3976MB (lfb 332x4MB) CPU [8%@408,14%@408,4%@408,16%@408]-----------> starting capture
RAM 1717/3976MB (lfb 332x4MB) CPU [13%@1734,52%@1734,14%@1734,13%@1734]
RAM 1718/3976MB (lfb 331x4MB) CPU [17%@306,2%@306,7%@306,8%@306] 
RAM 1719/3976MB (lfb 331x4MB) CPU [11%@408,5%@408,5%@408,4%@408] 
RAM 1719/3976MB (lfb 331x4MB) CPU [12%@102,5%@102,8%@102,5%@102] 
RAM 1719/3976MB (lfb 331x4MB) CPU [13%@1734,6%@1734,5%@1734,9%@1734]
RAM 1719/3976MB (lfb 331x4MB) CPU [12%@102,3%@102,11%@102,9%@102]
RAM 1719/3976MB (lfb 331x4MB) CPU [10%@102,1%@102,13%@102,7%@102]
RAM 1720/3976MB (lfb 331x4MB) CPU [11%@102,4%@102,9%@102,8%@102] 
RAM 1720/3976MB (lfb 331x4MB) CPU [15%@102,7%@102,5%@102,7%@102] 
RAM 1720/3976MB (lfb 331x4MB) CPU [10%@1734,7%@1734,15%@1734,5%@1734]
RAM 1720/3976MB (lfb 331x4MB) CPU [5%@102,8%@102,5%@102,6%@102]
RAM 1720/3976MB (lfb 331x4MB) CPU [10%@1734,6%@1734,10%@1734,20%@1734]
RAM 1720/3976MB (lfb 331x4MB) CPU [11%@306,11%@306,2%@306,32%@306] 
RAM 1720/3976MB (lfb 331x4MB) CPU [16%@1224,14%@1224,23%@1224,12%@1224]
RAM 1720/3976MB (lfb 331x4MB) CPU [16%@306,11%@306,13%@306,20%@306] 
RAM 1720/3976MB (lfb 331x4MB) CPU [12%@306,11%@306,22%@306,18%@306] 
RAM 1720/3976MB (lfb 331x4MB) CPU [7%@408,11%@408,21%@408,19%@408] 
RAM 1720/3976MB (lfb 331x4MB) CPU [9%@306,9%@306,24%@306,19%@306] 
RAM 1720/3976MB (lfb 331x4MB) CPU [12%@102,13%@102,22%@102,20%@102] 
RAM 1720/3976MB (lfb 331x4MB) CPU [11%@306,9%@306,14%@306,13%@306] 
RAM 1720/3976MB (lfb 331x4MB) CPU [10%@1224,14%@1224,23%@1224,11%@1224]
RAM 1720/3976MB (lfb 331x4MB) CPU [13%@408,11%@408,17%@408,10%@408]
RAM 1720/3976MB (lfb 331x4MB) CPU [16%@408,15%@408,25%@408,9%@408] 
RAM 1649/3976MB (lfb 331x4MB) CPU [16%@1734,16%@1734,22%@1734,11%@1734]
RAM 1649/3976MB (lfb 331x4MB) CPU [7%@102,5%@102,1%@102,7%@102] --------------> finishing capture
RAM 1649/3976MB (lfb 331x4MB) CPU [4%@102,6%@102,2%@102,7%@102] 
RAM 1649/3976MB (lfb 331x4MB) CPU [3%@102,5%@102,4%@102,7%@102] 
RAM 1649/3976MB (lfb 331x4MB) CPU [0%@102,9%@102,4%@102,7%@102] 

The average load per core while single capturing is [11.82%, 10.22%, 13.6%, 12.35%] and the total system memory usage is 1720 MB. The single capture alone accounts for an ARM load of [5.82%, 3.22%, 9.6%, 5.35%] per core and 71 MB of memory.
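
The per-core averages above can be reproduced from a saved tegrastats log with a short shell one-liner (a sketch; it assumes the tegrastats output for the capture interval was redirected to a file named tegrastats.log):

# Extract the CPU [..%@freq,...] fields and average the load of each of the 4 cores
grep -o 'CPU \[[^]]*\]' tegrastats.log | tr -d 'CPU []%' | \
awk -F',' '{ for (i = 1; i <= 4; i++) { split($i, a, "@"); sum[i] += a[1] }; n++ }
           END { for (i = 1; i <= 4; i++) printf "core%d average: %.2f%%\n", i, sum[i] / n }'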

Dual capture

These measurements were taken with nvcamerasrc, running only the capture with two cameras, using the pipeline below:

gst-launch-1.0 nvcamerasrc sensor-id=0 ! perf ! fakesink nvcamerasrc sensor-id=1 ! perf ! fakesink
nvidia@tegra-ubuntu:~$ ./tegrastats 
RAM 1485/3976MB (lfb 333x4MB) CPU [0%@204,0%@204,0%@204,0%@204]
RAM 1485/3976MB (lfb 333x4MB) CPU [2%@102,1%@102,3%@102,0%@102]
RAM 1485/3976MB (lfb 333x4MB) CPU [1%@102,0%@102,3%@102,1%@102]
RAM 1486/3976MB (lfb 333x4MB) CPU [4%@1122,3%@1122,7%@1122,3%@1122]
RAM 1491/3976MB (lfb 333x4MB) CPU [11%@1734,51%@1734,1%@1734,3%@1734]
RAM 2075/3976MB (lfb 325x4MB) CPU [35%@204,37%@204,30%@204,31%@204]-----------> starting capture
RAM 2089/3976MB (lfb 325x4MB) CPU [9%@204,19%@204,1%@204,4%@204]
RAM 2092/3976MB (lfb 324x4MB) CPU [6%@1734,7%@1734,19%@1734,3%@1734]
RAM 2094/3976MB (lfb 323x4MB) CPU [6%@1734,11%@1734,28%@1734,0%@1734]
RAM 2095/3976MB (lfb 323x4MB) CPU [6%@1734,6%@1734,25%@1734,3%@1734]
RAM 2097/3976MB (lfb 323x4MB) CPU [13%@612,8%@612,5%@612,5%@102]
RAM 2097/3976MB (lfb 323x4MB) CPU [10%@1734,9%@1734,12%@1734,7%@1734]
RAM 2097/3976MB (lfb 323x4MB) CPU [12%@102,10%@102,8%@102,10%@102]
RAM 2097/3976MB (lfb 323x4MB) CPU [12%@1224,18%@1224,6%@1224,6%@1224]
RAM 2098/3976MB (lfb 323x4MB) CPU [12%@1734,7%@1734,17%@1734,2%@1734]
RAM 2098/3976MB (lfb 323x4MB) CPU [24%@1734,16%@1734,5%@1734,8%@1734]
RAM 2101/3976MB (lfb 321x4MB) CPU [86%@1734,8%@1734,1%@1734,5%@1734] 
RAM 2105/3976MB (lfb 319x4MB) CPU [37%@408,11%@408,12%@408,13%@408]
RAM 2106/3976MB (lfb 319x4MB) CPU [26%@1224,9%@1224,17%@1224,26%@1224] 
RAM 2107/3976MB (lfb 319x4MB) CPU [26%@1224,17%@1224,20%@1224,21%@1224]
RAM 2107/3976MB (lfb 319x4MB) CPU [22%@1224,18%@1224,16%@1224,17%@1224]
RAM 2107/3976MB (lfb 319x4MB) CPU [24%@1224,19%@1224,22%@1224,21%@1224]
RAM 2107/3976MB (lfb 319x4MB) CPU [23%@408,18%@408,23%@408,18%@408]
RAM 2107/3976MB (lfb 319x4MB) CPU [25%@408,14%@408,20%@408,16%@408]
RAM 2107/3976MB (lfb 319x4MB) CPU [22%@1224,12%@1224,22%@1224,22%@1224]
RAM 2107/3976MB (lfb 319x4MB) CPU [20%@306,15%@306,20%@306,16%@306]
RAM 2108/3976MB (lfb 319x4MB) CPU [23%@204,17%@204,17%@204,18%@204]
RAM 2108/3976MB (lfb 319x4MB) CPU [27%@1224,19%@1224,23%@1224,17%@1224]
RAM 2108/3976MB (lfb 319x4MB) CPU [28%@306,13%@306,20%@306,18%@306]
RAM 2108/3976MB (lfb 319x4MB) CPU [26%@1224,10%@1224,24%@1224,21%@1224]
RAM 2108/3976MB (lfb 319x4MB) CPU [25%@1224,11%@1224,23%@1224,20%@1224]
RAM 2108/3976MB (lfb 319x4MB) CPU [19%@408,15%@408,19%@408,16%@408] --------------> finishing capture
RAM 1501/3976MB (lfb 323x4MB) CPU [17%@102,11%@102,19%@102,17%@102]
RAM 1500/3976MB (lfb 323x4MB) CPU [3%@102,3%@102,0%@102,0%@102]
RAM 1500/3976MB (lfb 323x4MB) CPU [3%@102,4%@102,1%@102,0%@102]
RAM 1500/3976MB (lfb 323x4MB) CPU [2%@102,4%@102,0%@102,0%@102]
RAM 1500/3976MB (lfb 323x4MB) CPU [2%@102,3%@102,0%@102,0%@102]

The average load per core while dual capturing is [22.37%, 13.851%, 16.851%, 13.48%] and the total system memory usage is 2108 MB. The dual capture alone accounts for an ARM load of [15.37%, 6.85%, 13.81%, 10.81%] per core and 607 MB of memory.

Contact Us

For technical questions please send an email to support@ridgerun.com. If you are interested in an evaluation version or in purchasing the driver, please post your inquiry at our Contact Us link.