R2Inference/Getting started/Building the library



Previous: Getting started/Getting the code Index Next: Supported backends




Dependencies

R2Inference has the following dependencies:

  • autoconf
  • automake
  • pkg-config
  • libtool
  • cpputest
  • doxygen

On Debian-based systems, you may install the dependencies with the following command:

sudo apt-get install -y autoconf automake pkg-config libtool libcpputest-dev doxygen
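If you want to double-check that the tools landed on your PATH (an optional sanity check, not part of the official steps), you can query their versions:

# Optional: confirm the autotools/pkg-config toolchain is available
autoconf --version | head -n 1
automake --version | head -n 1
libtoolize --version | head -n 1
pkg-config --version
# CppUTest registers a pkg-config module on Debian-based systems
pkg-config --modversion cpputest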

You need to install the API for at least one of our supported backends in order to build R2Inference. See the Supported backends page for instructions on how to install your preferred backend.
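For example, if you plan to build against the TensorFlow backend, a quick way to confirm that the TensorFlow C library is visible to the dynamic linker (assuming it was installed system-wide) is:

# Should list libtensorflow if the TensorFlow C API is installed and registered
ldconfig -p | grep libtensorflow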

Install library

Linux

Autotools

These instructions have been tested on:

  • x86
  • ARM64

To build and install r2inference you can run the following commands:

Configure Option      Description
--enable-ncsdk        Compile the library with NCSDK backend support
--enable-tensorflow   Compile the library with TensorFlow backend support
--enable-tflite       Compile the library with TensorFlow Lite backend support
Table 1. R2Inference configuration options


# NOTE:
# These exports are only needed if you are using the TFLite backend.
# They are NOT necessary when using any of the other backends.
export TENSORFLOW_PATH=/PATH/TENSORFLOW/SRC
export CPPFLAGS="-I${TENSORFLOW_PATH} -I${TENSORFLOW_PATH}/tensorflow/lite/tools/make/downloads/flatbuffers/include"
git clone https://github.com/RidgeRun/r2inference.git
cd r2inference
./autogen.sh $OPTIONS # CHOOSE THE APPROPRIATE CONFIGURATION FROM TABLE 1
make
make check
sudo make install
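As a concrete example (a sketch, assuming the TensorFlow C API is already installed and that you only want that backend), $OPTIONS reduces to --enable-tensorflow:

./autogen.sh --enable-tensorflow  # Configure with TensorFlow support only
make
make check
sudo make install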

Meson

These instructions have been tested on:

  • x86

To build and install r2inference you can run the following commands:

Configure Option           Description
-Denable-tensorflow=true   Compile the library with TensorFlow backend support
-Denable-tflite=true       Compile the library with TensorFlow Lite backend support
Table 2. R2Inference configuration options (Meson)


# NOTE:
# These exports are only needed if you are using the TFLite backend.
# They are NOT necessary when using any of the other backends.
export TENSORFLOW_PATH=/PATH/TENSORFLOW/SRC
export CPPFLAGS="-I${TENSORFLOW_PATH} -I${TENSORFLOW_PATH}/tensorflow/lite/tools/make/downloads/flatbuffers/include"
git clone https://github.com/RidgeRun/r2inference.git
cd r2inference
meson build $OPTIONS # CHOOSE THE APPROPRIATE CONFIGURATION FROM THE TABLE 2
ninja -C build # Compile project
ninja -C build test # Run tests
sudo ninja -C build install # Install the library
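As a concrete example (a sketch, assuming the TensorFlow backend is installed and that you prefer the library under /usr instead of Meson's default /usr/local prefix), the same steps would look like:

meson build -Denable-tensorflow=true --prefix /usr
ninja -C build
ninja -C build test
sudo ninja -C build install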

Yocto

R2Inference is available in RidgeRun's meta-layer; please check our recipes here. Currently, only i.MX8 platforms are supported with Yocto.

First, create a Yocto environment for i.MX8. Our dedicated i.MX8 wiki has more information on how to set up a Yocto environment: see the i.MX8 Yocto guide here.

In your Yocto sources folder, run the following command:

git clone https://github.com/RidgeRun/meta-ridgerun.git

Enable RidgeRun's meta-layer in your conf/bblayers.conf file by adding the following line.

  ${BSPDIR}/sources/meta-ridgerun \

Enable Prebuilt-TensorFlow and R2Inference in your conf/local.conf.

  IMAGE_INSTALL_append = "prebuilt-tensorflow r2inference"

Finally, build your desired image; the previous steps add R2Inference and its requirements to your Yocto image.
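For example, with the layer and packages enabled as above, the final step is a regular BitBake build (the image name below is only a placeholder; use whichever image your i.MX8 BSP provides):

bitbake core-image-base  # Placeholder image name; substitute your BSP's image target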

Verify

You can verify the library with a simple application:

r2i_verify.cc

#include <iostream>
#include <r2i/r2i.h>

void PrintFramework (r2i::FrameworkMeta &meta) {
  std::cout << "Name        : " << meta.name << std::endl;
  std::cout << "Description : " << meta.description << std::endl;
  std::cout << "Version     : " << meta.version << std::endl;
  std::cout << "---" << std::endl;
}

int main (int argc, char *argv[]) {
  r2i::RuntimeError error;

  std::cout << "Backends supported by your system:" << std::endl;
  std::cout << "==================================" << std::endl;

  for (auto &meta : r2i::IFrameworkFactory::List (error)) {
    PrintFramework (meta);
  }

  return 0;
}

You may build this example by running:

g++ r2i_verify.cc `pkg-config --cflags --libs r2inference-0.0` -std=c++11 -o r2i_verify
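If pkg-config cannot find r2inference-0.0, the library was probably installed under a prefix that pkg-config does not search by default (such as /usr/local); pointing PKG_CONFIG_PATH at the installed .pc file before building usually fixes that. Once the binary builds, run it to list the enabled backends:

# Only needed if the build step above fails to locate r2inference-0.0
# (the exact pkgconfig directory may vary per distribution)
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
./r2i_verify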

You can also check our examples page to get the examples included with the library up and running.

Troubleshooting

configure: *** checking feature: tensorflow ***
checking for TF_Version in -ltensorflow... no
configure: error: Couldn't find tensorflow
[AUTOGEN][11:46:38][ERROR]	Failed to run configure

This error means the /usr/local directory has not been included in your system library paths. Export LD_LIBRARY_PATH with the /usr/local/lib location appended:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
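The export above only lasts for the current shell session. A more permanent alternative on Debian-based systems (where the dynamic linker reads drop-in files from /etc/ld.so.conf.d/, and /usr/local/lib is often already listed) is to refresh the linker cache instead:

# Register /usr/local/lib (skip the first line if your distribution already lists it)
echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/local.conf
# Rebuild the dynamic linker cache
sudo ldconfig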




Previous: Getting started/Getting the code Index Next: Supported backends