Getting started with AI on NXP i.MX8M Plus - Development - Developing software for the board - Crosscompiling apps for GStreamer, TensorFlow Lite, and OpenCV

From RidgeRun Developer Connection
Previous: Development/Developing software for the board/Building the SDK Index Next: Development/Developing software for the board/Exploring TensorFlow Lite delegates for prototyping



Crosscompiling using the GStreamer library

To build a simple app that uses the GStreamer library, follow the steps below:

1. Save the following code as test_gstreamer.cpp in a folder called crosscompilation_test inside your home directory (the steps below assume your home directory is $HOME):

#include <gst/gst.h>
#include <glib.h>

int main(int argc, char *argv[]) {

  /* GStreamer initialization */
  gst_init(&argc, &argv);

  g_print("Hello\n");
  return 0;
}

2. Open up a terminal in the crosscompilation_test folder and initialize the toolchain:

cd $HOME/crosscompilation_test

. /opt/fsl-imx-xwayland/<your Yocto version>/environment-setup-cortexa53-crypto-poky-linux

3. Crosscompile the code:

$CC test_gstreamer.cpp -o test_gstreamer $(pkg-config --cflags --libs gstreamer-1.0)
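The $(pkg-config ...) expansion supplies the include paths and linker flags recorded in the SDK sysroot's .pc files. As a quick sanity check (assuming the toolchain environment has been sourced so PKG_CONFIG_PATH points at the target sysroot), the query can be run on its own to inspect what the compiler actually receives:

```shell
# Show the flags that $(pkg-config --cflags --libs gstreamer-1.0) expands to.
# If nothing sensible is printed, the toolchain environment is probably not
# sourced and pkg-config is falling back to the host's .pc files.
if command -v pkg-config >/dev/null 2>&1; then
    pkg-config --cflags --libs gstreamer-1.0 || \
        echo "gstreamer-1.0.pc not found in PKG_CONFIG_PATH"
fi
```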

4. Copy the binary to the board using scp:

IMX8M_PLUS_IP=192.168.x.y

DEFAULT_USER=root

BIN=test_gstreamer

scp $BIN $DEFAULT_USER@$IMX8M_PLUS_IP:/

5. Connect to the board over the serial console or SSH:

# Serial mode:
sudo picocom -b 115200 /dev/ttyUSB0

# SSH mode:
IMX8M_PLUS_IP=192.168.x.y

DEFAULT_USER=root

ssh $DEFAULT_USER@$IMX8M_PLUS_IP

6. Execute the newly compiled binary:

cd /

./test_gstreamer
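The build-and-deploy steps above lend themselves to a small helper script. The sketch below is a convenience wrapper, not part of the SDK; the script name, variable defaults, and board address are all placeholders to adjust for your setup:

```shell
#!/bin/sh
# deploy_gst_test.sh -- hypothetical wrapper for steps 2-4 above:
# source the Yocto toolchain, cross-compile, and copy the binary over.
set -eu

TOOLCHAIN_ENV="${TOOLCHAIN_ENV:-/opt/fsl-imx-xwayland/<your Yocto version>/environment-setup-cortexa53-crypto-poky-linux}"
BOARD_USER="${BOARD_USER:-root}"
BOARD_IP="${BOARD_IP:-192.168.x.y}"
SRC="${SRC:-test_gstreamer.cpp}"
BIN="${BIN:-test_gstreamer}"

build() {
    # Sourcing the environment file exports $CC and the sysroot flags.
    . "$TOOLCHAIN_ENV"
    $CC "$SRC" -o "$BIN" $(pkg-config --cflags --libs gstreamer-1.0)
}

deploy() {
    scp "$BIN" "$BOARD_USER@$BOARD_IP:/"
}

# Only act when explicitly asked, so the functions can also be sourced
# individually from another script.
if [ "${1:-}" = "run" ]; then
    build
    deploy
fi
```

Invoking the script as `./deploy_gst_test.sh run` performs both stages in order.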

Crosscompiling using TensorFlow Lite

To test the toolchain, let's compile the minimal.cc example from the TensorFlow Lite repository.

The code for minimal.cc is the following:

/* Copyright 2018 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include <cstdio>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/optional_debug_tools.h"

// This is an example that is minimal to read a model
// from disk and perform inference. There is no data being loaded
// that is up to you to add as a user.
//
// NOTE: Do not add any dependencies to this that cannot be built with
// the minimal makefile. This example must remain trivial to build with
// the minimal build tool.
//
// Usage: minimal <tflite model>

#define TFLITE_MINIMAL_CHECK(x)                              \
  if (!(x)) {                                                \
    fprintf(stderr, "Error at %s:%d\n", __FILE__, __LINE__); \
    exit(1);                                                 \
  }

int main(int argc, char* argv[]) {
  if (argc != 2) {
    fprintf(stderr, "minimal <tflite model>\n");
    return 1;
  }
  const char* filename = argv[1];

  // Load model
  std::unique_ptr<tflite::FlatBufferModel> model =
      tflite::FlatBufferModel::BuildFromFile(filename);
  TFLITE_MINIMAL_CHECK(model != nullptr);

  // Build the interpreter with the InterpreterBuilder.
  // Note: all Interpreters should be built with the InterpreterBuilder,
  // which allocates memory for the Interpreter and does various set up
  // tasks so that the Interpreter can read the provided model.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  tflite::InterpreterBuilder builder(*model, resolver);
  std::unique_ptr<tflite::Interpreter> interpreter;
  builder(&interpreter);
  TFLITE_MINIMAL_CHECK(interpreter != nullptr);

  // Allocate tensor buffers.
  TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);
  printf("=== Pre-invoke Interpreter State ===\n");
  tflite::PrintInterpreterState(interpreter.get());

  // Fill input buffers
  // TODO(user): Insert code to fill input tensors.
  // Note: The buffer of the input tensor with index `i` of type T can
  // be accessed with `T* input = interpreter->typed_input_tensor<T>(i);`

  // Run inference
  TFLITE_MINIMAL_CHECK(interpreter->Invoke() == kTfLiteOk);
  printf("\n\n=== Post-invoke Interpreter State ===\n");
  tflite::PrintInterpreterState(interpreter.get());

  // Read output buffers
  // TODO(user): Insert getting data out code.
  // Note: The buffer of the output tensor with index `i` of type T can
  // be accessed with `T* output = interpreter->typed_output_tensor<T>(i);`

  return 0;
}

1. Open a terminal in your home directory and clone the TensorFlow repository:

cd $HOME

git clone https://github.com/tensorflow/tensorflow.git tensorflow_src

2. Initialize the toolchain:

. /opt/fsl-imx-xwayland/<your Yocto version>/environment-setup-cortexa53-crypto-poky-linux

3. Create the build directory and configure the minimal example project:

mkdir build
cd build
cmake ../tensorflow_src/tensorflow/lite/examples/minimal

4. Compile the minimal example:

cmake --build . -j$(nproc)
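A quick way to confirm that the cross-compiler, and not the host compiler, produced the binary is the file utility. On a successful cross-build it should report an aarch64 ELF executable (the exact wording varies between file versions):

```shell
# Inspect the target architecture of the freshly built binary.
# (Guarded so the check is a no-op when the build has not run yet.)
if [ -f minimal ]; then
    file minimal
    # Expected: "minimal: ELF 64-bit LSB ... ARM aarch64 ..." -- a binary
    # described as x86-64 means the toolchain environment was not sourced.
fi
```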

5. Copy the binary to the board using scp:

IMX8M_PLUS_IP=192.168.x.y

DEFAULT_USER=root

BIN=minimal

scp $BIN $DEFAULT_USER@$IMX8M_PLUS_IP:/

6. Connect to the board over the serial console or SSH:

# Serial mode:
sudo picocom -b 115200 /dev/ttyUSB0

# SSH mode:
IMX8M_PLUS_IP=192.168.x.y

DEFAULT_USER=root

ssh $DEFAULT_USER@$IMX8M_PLUS_IP

7. Execute the newly compiled binary:

cd /

./minimal <your tflite model>

Crosscompiling using OpenCV

To build a simple app that uses the OpenCV library, follow the steps below:

1. Save the following code as test_opencv.cpp in the same crosscompilation_test folder:

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#include <iostream>

int main(int argc, char *argv[]) {

  cv::Mat inputImage;

  if (2 != argc) {
    std::cout << "Usage: ./test_opencv <image name>" << std::endl;
    return 1;
  }

  inputImage = cv::imread(argv[1]);
  if (inputImage.empty()) {
    std::cout << "Could not read image: " << argv[1] << std::endl;
    return 1;
  }

  std::cout << "Image height: " << inputImage.rows << std::endl;
  std::cout << "Image width: " << inputImage.cols << std::endl;
  std::cout << "Image channels: " << inputImage.channels() << std::endl;

  return 0;
}

2. Open up a terminal in the crosscompilation_test folder and initialize the toolchain:

cd $HOME/crosscompilation_test

. /opt/fsl-imx-xwayland/<your Yocto version>/environment-setup-cortexa53-crypto-poky-linux

3. Crosscompile the code:

$CXX test_opencv.cpp -o test_opencv $(pkg-config --cflags --libs opencv4)

4. Copy the binary to the board using scp:

IMX8M_PLUS_IP=192.168.x.y

DEFAULT_USER=root

BIN=test_opencv

scp $BIN $DEFAULT_USER@$IMX8M_PLUS_IP:/

5. Connect to the board over the serial console or SSH:

# Serial mode:
sudo picocom -b 115200 /dev/ttyUSB0

# SSH mode:
IMX8M_PLUS_IP=192.168.x.y

DEFAULT_USER=root

ssh $DEFAULT_USER@$IMX8M_PLUS_IP

6. Execute the newly compiled binary:

cd /

./test_opencv <your test image>
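If the binary starts on your host but fails on the board with an "error while loading shared libraries" message, ldd (run on the board) lists the shared objects it needs; OpenCV entries shown as "not found" indicate the image was built without the corresponding runtime packages:

```shell
# On the board: list the runtime library dependencies of test_opencv.
# Guarded so the command is skipped when the binary is not present.
if [ -f /test_opencv ]; then
    ldd /test_opencv
fi
```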


