Getting started with TI Jacinto 7 Edge AI - Demos










About the AI Demo

This demo shows you how to create a multi-channel AI server on the Jacinto 7 platform that detects user-specified objects and triggers actions based on the inference results. The server receives multiple RTSP video streams, detects objects according to the user's configuration, and triggers actions such as video recording and event logging. This makes it a suitable base system for "Smart City" applications such as surveillance, traffic congestion control, smart parking, and more.

Please download the Smart City Demo from the following repository:

ti-edge-ai-demos

Platform

To get started with the Jacinto 7 platform setup, please visit: https://www.ti.com/lit/ml/spruis8/spruis8.pdf. This guide walks you through the EVM setup and the Edge AI SDK, which provides the OS image.

Demo Design

The different components were developed independently to achieve a modular, extensible design.

Every stage has a well-defined responsibility, so it can be independently tested, modified, and even replaced.

Server configuration

The first step is to configure the demo through the YAML configuration file, config.yaml. This file is used to pass the necessary parameters to the different modules.

The following sections describe each part of the YAML file in detail.
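At a glance, the file is organized into five top-level sections, sketched below. Note that the key name for the model section is an assumption; the remaining names follow the sections referenced throughout this page.

```yaml
# config.yaml skeleton; each section is described in detail below
model_params: {}   # key name assumed; neural network model configuration
streams: []        # cameras to capture
filters: []        # prediction filters
actions: []        # actions to execute
triggers: []       # bindings between filters and actions
```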

Model Params

The model parameters are used to load the neural network weights and related artifacts. The section must contain the following elements:

- **disp_width** (int): Used to scale the width of the post-processed image. As of now, it is recommended to keep it at 320.
- **disp_height** (int): Used to scale the height of the post-processed image. As of now, it is recommended to keep it at 240.
- **model** (object): Sub-object containing different configurations:
  - **detection** (str): The absolute path to the detection model in the file system.
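For reference, a minimal sketch of what this section could look like, assuming the top-level key is named model_params and using a hypothetical model path:

```yaml
model_params:                 # key name assumed; verify against your config.yaml
  disp_width: 320             # post-processed image width (recommended value)
  disp_height: 240            # post-processed image height (recommended value)
  model:
    detection: /opt/model_zoo/my-detection-model   # hypothetical absolute path
```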

Streams

The streams section consists of a list of individual stream descriptions. Each stream represents a camera to be captured and appended to the grid display. A maximum of 8 streams is supported. Each stream description contains the following fields:

- **id** (str): A unique human-readable description.
- **uri** (str): A valid URI to play. Only H264 is supported at the moment.
- **triggers** (list): A list of valid trigger names, as specified in the **triggers** section.
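As an illustration, a sketch of the streams section with hypothetical camera IDs, example RTSP addresses, and trigger names that would be defined later in the configuration:

```yaml
streams:
  - id: parking_lot_cam                    # unique human-readable description
    uri: rtsp://192.0.2.10:554/stream1     # H264 RTSP stream (example address)
    triggers:
      - person_recording                   # names from the triggers section
  - id: entrance_cam
    uri: rtsp://192.0.2.11:554/stream1
    triggers:
      - person_recording
```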

Filters

The filters section consists of a list of individual filter descriptions. The filter evaluates the prediction and, based on the configuration, decides whether the actions should be executed.

- **name** (str): A unique human-readable name for the filter.
- **labels** (list): A list of strings representing the valid classes that will trigger the filter. The valid labels depend on the model in use.
- **threshold** (double): The minimum score the predicted class must reach in order to trigger the filter.
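A sketch of a filters entry, assuming a model that provides a person class; the name person_filter is hypothetical:

```yaml
filters:
  - name: person_filter       # referenced by name from the triggers section
    labels: [person]          # valid classes depend on the model in use
    threshold: 0.7            # minimum score the predicted class must reach
```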

Actions

The actions section consists of a list of individual action descriptions. An action is executed if a filter evaluates positively on the prediction. Currently, two actions are supported:

Record Event

- **name** (str): A unique human-readable name for the action.
- **type** (str): Must be **record_event** for recordings.
- **length** (int): The length, in seconds, of the video recordings.
- **location** (str): The directory where video recordings should be stored. The path must exist.
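A sketch of a record_event entry in the actions list; the name record_person and the storage directory are hypothetical:

```yaml
actions:
  - name: record_person       # referenced by name from the triggers section
    type: record_event
    length: 10                # recording length in seconds
    location: /tmp/recordings # directory must already exist
```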

Log Event

- **name** (str): A unique human-readable name for the action.
- **type** (str): Must be **log_event** for event logging.
- **location** (str): The file where the events will be logged.
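Similarly, a sketch of a log_event entry; the name log_person and the log file path are hypothetical. In practice this would be another entry in the same actions list:

```yaml
actions:
  - name: log_person          # referenced by name from the triggers section
    type: log_event
    location: /tmp/events.log # file where the events are logged
```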

Triggers

The triggers section consists of a list of individual trigger descriptions. A trigger combines an action and a list of filters. The rationale behind this design is to allow users to reuse filters and actions in different configurations.

The triggers are assigned to each stream individually. When a prediction is made, it is forwarded to the filters. If any of the filters is activated, the specified action is executed.

- **name** (str): A unique human-readable name for the trigger.
- **action** (str): The name of a valid action, as specified in the **actions** section.
- **filters** (list): A list of filter names, as specified in the **filters** section.
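Putting it together, a sketch of a trigger that reuses the hypothetical names from the previous examples:

```yaml
triggers:
  - name: person_recording    # referenced from a stream's triggers list
    action: record_person     # must match a name in the actions section
    filters:
      - person_filter         # must match names in the filters section
```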

Performance

The table below shows the demo's performance results: frames per second, CPU usage, and GPU usage for 1 to 8 streams.


AI inference model | Number of streams | CPU % | GPU %          | FPS
MobileNet 300x300  | 1                 | 30 %  | 30 % @ 150 MHz | 30
MobileNet 300x300  | 4                 | 30 %  | 30 % @ 150 MHz | 30
MobileNet 300x300  | 8                 | 30 %  | 30 % @ 150 MHz | 30
YoloV3 416x416     | 1                 | 30 %  | 30 % @ 150 MHz | 30
YoloV3 416x416     | 4                 | 30 %  | 30 % @ 150 MHz | 30
YoloV3 416x416     | 8                 | 30 %  | 30 % @ 150 MHz | 30

