__TOC__
 
== Introduction ==
On this page, you are going to find the steps to install ONNX and ONNXRuntime and run a simple C/C++ example on Linux. This wiki page describes the importance of ONNX models and how to use them. The goal is to provide you with some examples.
  
== Installing ONNX ==
You can install ONNX from PyPI with the following command:
  
 
<pre>
sudo pip install onnx
</pre>

You can also build and install ONNX locally from source code:

<pre>
git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
python setup.py install
</pre>
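
To sanity-check the installation, you can build and validate a tiny model with the onnx Python helper API. This is a minimal sketch, not part of the original guide; the graph, the tensor names, and the file name tiny.onnx are arbitrary choices for illustration:

<pre>
import onnx
from onnx import helper, TensorProto

# A one-node graph computing y = Relu(x) on tensors of shape [1, 4].
node = helper.make_node("Relu", inputs=["x"], outputs=["y"])
graph = helper.make_graph(
    [node],
    "tiny_graph",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 4])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 4])],
)
model = helper.make_model(graph)

onnx.checker.check_model(model)  # raises an exception if the model is malformed
onnx.save(model, "tiny.onnx")
print(helper.printable_graph(model.graph))
</pre>

If the script prints the graph without raising an exception, the onnx package is working.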
  
== Installing ONNXRuntime ==
This guide builds the baseline CPU version of ONNXRuntime from source. To build it, use the following commands:
  
 
<pre>
git clone --recursive https://github.com/Microsoft/onnxruntime -b v1.0.0
cd onnxruntime
</pre>

Before installing onnxruntime, you need to install CMake 3.13 or higher.

<pre>
sudo -H pip3 install cmake
</pre>
  
After installing CMake, run the following command to build onnxruntime:
  
 
<pre>
./build.sh --config RelWithDebInfo --build_shared_lib --parallel
</pre>

'''*''' To use a different backend, please refer to this [https://github.com/microsoft/onnxruntime/blob/master/BUILD.md site] to check how to build ONNXRuntime.<br>
  
 
Finally, install it:
 
<pre>
cd build/Linux/RelWithDebInfo
sudo make install
</pre>
  
Then copy the .so file to the general library path:
 
<pre>
cp libonnxruntime.so /usr/lib/x86_64-linux-gnu/
</pre>
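
Before moving on, you can smoke-test the runtime. The sketch below assumes you also installed the ONNX Runtime Python package (either with pip install onnxruntime, or a wheel built by adding --build_wheel to build.sh), and it reuses the tiny.onnx model from the ONNX section above:

<pre>
import numpy as np
import onnxruntime as ort

# Load the toy Relu model and run a single inference.
session = ort.InferenceSession("tiny.onnx")
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: np.ones((1, 4), dtype=np.float32)})
print(outputs[0])  # expected: [[1. 1. 1. 1.]], since Relu(1) == 1
</pre>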
  
=== Enabling other execution providers ===
ONNX Runtime supports multiple execution providers. For a full list, visit https://github.com/microsoft/onnxruntime/blob/master/BUILD.md
 
==== Intel DNNL ====
<pre>
./build.sh --config RelWithDebInfo --build_shared_lib --parallel --use_dnnl
cd build/Linux/RelWithDebInfo
sudo make install
sudo cp libonnxruntime.so.1.2.0 /usr/lib/x86_64-linux-gnu/libonnxruntime.so
sudo cp dnnl/install/lib/libmkldnn.so /usr/lib/x86_64-linux-gnu/
</pre>
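
After rebuilding with --use_dnnl, you can confirm which execution providers were compiled into your build. This is a hedged sketch that assumes the Python wheel from your build is installed; the provider names in the comment are the ones ONNX Runtime reports for a DNNL-enabled build:

<pre>
import onnxruntime as ort

# Prints the execution providers available in this build, e.g.
# ['DnnlExecutionProvider', 'CPUExecutionProvider'] when DNNL is enabled.
print(ort.get_available_providers())
</pre>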
 
== Example ==
This guide uses ONNXRuntime C/C++ code on Linux; for that reason, only the SqueezeNet examples are built.

=== Build ===
First, go to the path with the C/C++ code examples.
 
<pre>
cd onnxruntime/csharp/test/Microsoft.ML.OnnxRuntime.EndToEndTests.Capi/
</pre>
  
After that, build the code:
  
 
<pre>
# Set PATHTOONNXRUNTIMESESSION to the appropriate include path,
# e.g. onnxruntime/include/onnxruntime/core/session
g++ -o Capi_sample C_Api_Sample.cpp -I $PATHTOONNXRUNTIMESESSION -lonnxruntime -std=c++14
</pre>
  
=== Run ===
  
Finally, just run the code:
  
 
<pre>
./Capi_sample
</pre>
  
Running this example, you will get the following output:
  
 
<pre>
Using Onnxruntime C API
Number of inputs = 1
Input 0 : name=data_0
Input 0 : type=1
Input 0 : num_dims=4
Input 0 : dim 0=1
Input 0 : dim 1=3
Input 0 : dim 2=224
Input 0 : dim 3=224
Score for class [0] =  0.000045
Score for class [1] =  0.003846
Score for class [2] =  0.000125
Score for class [3] =  0.001180
Score for class [4] =  0.001317
Done!
</pre>

== Convert DNN models to ONNX ==

The objective of ONNX is to provide a common language for describing the graph of a neural network; for that reason, the ONNX community provides tools to convert models from different deep learning frameworks to the ONNX protocol buffer format.

=== TensorFlow to ONNX ===

This tool can convert TensorFlow models from saved_model, checkpoint, or frozen graph formats.<br>
[https://github.com/onnx/tensorflow-onnx/blob/master/tutorials/ConvertingSSDMobilenetToONNX.ipynb Here] you can find an example of how to convert a saved_model or frozen graph to ONNX.<br>
To obtain more information and download the tool, refer to this [https://github.com/onnx/tensorflow-onnx site].
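
For instance, a saved_model can be converted by invoking the converter as a Python module. The sketch below only wraps the tool's documented command line; ./my_saved_model and model.onnx are placeholder paths:

<pre>
import subprocess

# Convert a TensorFlow SavedModel directory to an ONNX file via the tf2onnx CLI.
subprocess.run(
    ["python", "-m", "tf2onnx.convert",
     "--saved-model", "./my_saved_model",  # placeholder input directory
     "--output", "model.onnx",             # placeholder output file
     "--opset", "11"],
    check=True,
)
</pre>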

=== Keras to ONNX ===

This tool can convert Keras models.<br>
To obtain more information and download the tool, refer to this [https://github.com/onnx/keras-onnx site].
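
As a rough sketch of the conversion call (the tiny untrained model below is only there to keep the example self-contained):

<pre>
import onnx
import keras2onnx
from tensorflow.keras import layers, models

# A small untrained Keras model, used only to illustrate convert_keras().
model = models.Sequential([
    layers.Dense(8, activation="relu", input_shape=(4,)),
    layers.Dense(2, activation="softmax"),
])

onnx_model = keras2onnx.convert_keras(model, model.name)
onnx.save_model(onnx_model, "keras_model.onnx")
</pre>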

=== Scikit-Learn to ONNX ===

This tool can convert Scikit-Learn models.<br>
To obtain more information and download the tool, refer to this [https://github.com/onnx/sklearn-onnx site].
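
For example, a scikit-learn classifier can be converted as follows; the iris logistic regression and the input name "input" are stand-ins for your own model:

<pre>
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Train a small model, then convert it to ONNX.
X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=500).fit(X, y)

onx = convert_sklearn(clf, initial_types=[("input", FloatTensorType([None, 4]))])
with open("logreg_iris.onnx", "wb") as f:
    f.write(onx.SerializeToString())
</pre>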

=== ONNXMLTools ===

ONNXMLTools provides support to convert models from CoreML, LightGBM, LibSVM, and XGBoost to ONNX.<br>
This project also works as a wrapper for the TensorFlow, Keras, and Scikit-Learn converters.<br>
To obtain more information and download the tool, refer to this [https://github.com/onnx/onnxmltools site].
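
As an illustration, a LightGBM classifier could be converted like this (a sketch; the random training data only makes the example self-contained):

<pre>
import numpy as np
import lightgbm as lgb
import onnxmltools
from onnxmltools.convert.common.data_types import FloatTensorType

# Fit a small LightGBM model on random data, then convert it to ONNX.
X = np.random.rand(200, 4).astype(np.float32)
y = (X.sum(axis=1) > 2.0).astype(int)
model = lgb.LGBMClassifier(n_estimators=10).fit(X, y)

onnx_model = onnxmltools.convert_lightgbm(
    model, initial_types=[("input", FloatTensorType([None, 4]))]
)
onnxmltools.utils.save_model(onnx_model, "lightgbm_model.onnx")
</pre>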
  
 
[[Category:ONNX]] [[Category:AI]]
 
