R2Inference/Getting started/Building the library

Previous: Getting started/Getting the code | Index | Next: Supported backends




R2Inference dependencies

R2Inference has the following dependencies:

  • pkg-config
  • cpputest
  • doxygen

Many backends also have these common dependencies:

  • git
  • curl
  • unzip

Also, R2Inference makes use of the Meson build system.

In Debian-based systems, you may install the dependencies with the following command:

sudo apt-get install -y python3 python3-pip python3-setuptools python3-wheel ninja-build pkg-config libcpputest-dev doxygen git curl unzip

Then, use pip3 to install the latest version of Meson directly from its repository.

sudo -H pip3 install git+https://github.com/mesonbuild/meson.git
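
As an optional sanity check (version numbers will vary), you can confirm that Meson and Ninja are now available on your PATH:

meson --version
ninja --version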

You need to install the API for at least one of our supported backends in order to build R2Inference. Follow these links for instructions on how to install your preferred backend:

  • TensorFlow installation instructions
  • TensorFlow-Lite installation instructions
  • TensorRT installation instructions
  • Edge TPU installation instructions
  • ONNXRT installation instructions
  • ONNXRT ACL installation instructions

Installing R2Inference library

Linux

These instructions have been tested on the following architectures (a quick way to check yours is shown right after this list):

  • x86
  • ARM64
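
If you are not sure which of these architectures your target uses, uname -m reports it; it typically prints x86_64 on x86 systems and aarch64 on ARM64 systems:

uname -m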

To build and install R2Inference, run the commands shown further below. Table 2 lists the Meson configuration options that can be enabled at configure time (an example follows the table):

Configure Option                Description
-Denable-coral=true             Compile the library with Coral Edge TPU backend support
-Denable-tensorflow=true        Compile the library with TensorFlow backend support
-Denable-tflite=true            Compile the library with TensorFlow Lite backend support
-Denable-tensorrt=true          Compile the library with TensorRT backend support
-Denable-onnxrt=true            Compile the library with ONNXRT backend support
-Denable-onnxrt-acl=true        Compile the library with ONNXRT backend with Arm Compute Library (ACL) support
-Denable-onnxrt-openvino=true   Compile the library with ONNXRT backend with OpenVINO support

Table 2. R2Inference configuration options (Meson)
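
These options are passed to Meson at configure time. As an illustration only (the backend choice here is an example, not a recommendation; combine whichever options match the backends you actually installed), the OPTIONS variable used in the build steps below could be set like this:

# Illustrative example: enable only the TensorFlow Lite backend.
# Standard Meson options (for example --prefix) can be appended as well.
OPTIONS="-Denable-tflite=true"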


If you are going to use the ONNXRT backend (or any of its execution providers), you also need to export the following flags before configuring the build:

# NOTE:
# These exports are only needed if you are building the ONNXRT backend.
# They are NOT necessary if you are using any of the other backends.
export ONNXRUNTIMEPATH=/PATH/ONNXRUNTIME/SRC/include/onnxruntime/
export CPPFLAGS="-I${ONNXRUNTIMEPATH}"


The Edge TPU backend depends on the TensorFlow-Lite backend, so both need to be enabled. You also need to export the following flags before configuring the build:

# NOTE:
# These exports are only needed if you are using the Edge TPU and TFLite backends.
# They are NOT necessary if you are using any of the other backends.
export TENSORFLOW_PATH='<path-to-tensorflow>'
export CPPFLAGS="-I${TENSORFLOW_PATH} -I${TENSORFLOW_PATH}/tensorflow/lite/tools/make/downloads/flatbuffers/include -L${TENSORFLOW_PATH}/tensorflow/lite/tools/make/gen/linux_aarch64/lib"


If you are going to use the TensorFlow-Lite backend, you also need to export the following flags before configuring the build:

# NOTE:
# These exports are only needed if you are using the TFLite backend.
# They are NOT necessary if you are using any of the other backends.
export TENSORFLOW_PATH=/PATH/TENSORFLOW/SRC
export CPPFLAGS="-I${TENSORFLOW_PATH} -I${TENSORFLOW_PATH}/tensorflow/lite/tools/make/downloads/flatbuffers/include"
git clone https://github.com/RidgeRun/r2inference.git
cd r2inference
meson build $OPTIONS # CHOOSE THE APPROPRIATE CONFIGURATION FROM TABLE 2
ninja -C build # Compile the project
ninja -C build test # Run tests
sudo ninja -C build install # Install the library

Note: If you are building R2Inference on the Coral Dev Kit, consider using ninja -C build -j 1 instead to avoid the compilation being killed due to insufficient memory.
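
After installation, you can optionally check that pkg-config resolves the library; r2inference-0.0 is the same module name used by the verification example later on this page:

pkg-config --modversion r2inference-0.0   # prints the installed R2Inference version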

Yocto

R2Inference is available in RidgeRun's meta-layer; please check our recipes here. Currently, only i.MX8 platforms are supported with Yocto.

First, create a Yocto environment for i.MX8; the following dedicated i.MX8 wiki has more information on how to set up a Yocto environment:

i.MX8 Yocto guide here.

In your Yocto sources folder, run the following command:

git clone https://github.com/RidgeRun/meta-ridgerun.git

Enable RidgeRun's meta-layer in your conf/bblayers.conf file by adding the following line.

  ${BSPDIR}/sources/meta-ridgerun \

Enable Prebuilt-TensorFlow, R2Inference, and GstInference in your conf/local.conf:

  IMAGE_INSTALL_append = "prebuilt-tensorflow r2inference"

Finally, build your desired image; the previous steps added R2Inference and its requirements to your Yocto image.
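
For example, the final build step from your Yocto build directory would look like the following (the image name is only illustrative; use whichever image recipe your i.MX8 BSP provides):

bitbake core-image-base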

Verify

You can verify the library with a simple application:

r2i_verify.cc

#include <iostream>
#include <r2i/r2i.h>

void PrintFramework (r2i::FrameworkMeta &meta) {
  std::cout << "Name        : " << meta.name << std::endl;
  std::cout << "Description : " << meta.description << std::endl;
  std::cout << "Version     : " << meta.version << std::endl;
  std::cout << "---" << std::endl;
}

int main (int argc, char *argv[]) {
  r2i::RuntimeError error;

  std::cout << "Backends supported by your system:" << std::endl;
  std::cout << "==================================" << std::endl;

  for (auto &meta : r2i::IFrameworkFactory::List (error)) {
    PrintFramework (meta);
  }

  return 0;
}

You may build this example by running:

g++ r2i_verify.cc `pkg-config --cflags --libs r2inference-0.0` -std=c++11 -o r2i_verify
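
Assuming the build above succeeded and the library is visible to the dynamic linker, running the binary prints one Name/Description/Version block per backend compiled into your R2Inference build:

./r2i_verify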

You can also check our examples page to get the examples included with the library running.

Troubleshooting

After following the TensorFlow installation instructions, you may still get a configuration error like the following when building R2Inference:

configure: *** checking feature: tensorflow ***
checking for TF_Version in -ltensorflow... no
configure: error: Couldn't find tensorflow
[AUTOGEN][11:46:38][ERROR]	Failed to run configure

This means the /usr/local directory has not been included in your system library paths; export LD_LIBRARY_PATH appending the /usr/local location:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
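
As an optional, more persistent alternative to exporting the variable in every shell (the configuration file name below is arbitrary), you can register /usr/local/lib with the dynamic linker instead:

echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/usr-local.conf
sudo ldconfig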

Known issues

  • If GstInference and R2Inference were built on Ubuntu 16.04 with both the TensorFlow and TensorFlow Lite backends enabled, the build may have issues or present segmentation faults when using one of these backends.




Previous: Getting started/Getting the code | Index | Next: Supported backends