DeepStream Reference Designs/Getting Started/Evaluating the Project

== Requesting the Code ==

== Project Installation ==

=== TAO Pre-trained Models ===

The DeepStream inference process performed in this project relies on Car Detection and License Plate Detection models. The models included are part of the NVIDIA TAO Toolkit version 3.0. If you want more information about the DeepStream pipeline configuration and the models used, you can read the following Wiki. This project provides an automation script that handles downloading the respective models and installing the Custom Parser required by the License Plate Recognition model. All you have to do is execute the following command from the top level of the project directory:

<syntaxhighlight lang="bash">
$ ./download_models.sh
</syntaxhighlight>

Running the above script will create a directory called models/ at the root of the project, which should have the following structure:


<pre>
├── custom_lpr_parser
│   ├── Makefile
│   └── nvinfer_custom_lpr_parser.cpp
├── lpd
│   ├── usa_lpd_cal.bin
│   ├── usa_lpd_label.txt
│   └── usa_pruned.etlt
├── lpr
│   └── us_lprnet_baseline18_deployable.etlt
└── trafficcamnet
    ├── resnet18_trafficcamnet_pruned.etlt
    └── trafficnet_int8.txt
</pre>

The system provides model configuration files for each supported platform; the list of supported platforms can be verified in the APLVR Supported Platforms section. These configuration files reference the paths where the downloaded models are located, so it is recommended NOT to move these files. As can be seen in the generated directory structure, the download includes, for each of the models, calibration files (when applicable) and files in etlt format, which are the files exported from the NVIDIA Transfer Learning Toolkit. The first time the project is executed, each model's engine will be generated based on the parameters established in the configuration files, so depending on the platform used this operation may take a few minutes. On subsequent runs, the configuration files will reference the generated engines, so they are simply loaded without that delay.
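If you want to double-check which model and engine paths the configuration files point to before the first run, something along these lines can help (a minimal sketch; it assumes the configuration files are plain text and live under the config_files/ directory mentioned below):

<syntaxhighlight lang="bash">
# List the per-platform configuration files shipped with the project.
$ ls config_files/

# Show every line that references a TAO model (.etlt) or a generated engine (.engine).
$ grep -rn -e "\.etlt" -e "\.engine" config_files/
</syntaxhighlight>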

Each model's engine file will have an associated name that depends on the configuration parameters used. For example, when generating the TrafficCamNet model with INT8 calibration, a gpu-id of 0, and a batch size of 1, the file name will be:

<pre>
resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine
</pre>

The mentioned parameters (batch-size, gpu-id, INT8 calibration, etc.) can be found in the configuration files of each model within the config_files/ directory of the project. After the first run, you can check the names of the generated engines for each network inside the models/ directory.
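As a quick sanity check after that first run, you can list the engine files that were generated (a minimal example; it assumes the engines end up somewhere under the project directory, which depends on the paths set in the configuration files):

<syntaxhighlight lang="bash">
# List every TensorRT engine generated during the first run.
$ find . -name "*.engine"
</syntaxhighlight>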

=== Setup Tools ===

This project was developed using Python's Setuptools build system. To install the project, it is necessary to execute the following command from the top level of the project directory:

<syntaxhighlight lang="bash">
$ sudo python3 setup.py install
</syntaxhighlight>

Setuptools will verify that the dependencies required by the project are installed; otherwise, it will install them through pip.
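To confirm that the installation succeeded, you can try importing the package from Python (a hypothetical check; it assumes the package is installed under the name aplvr, which may differ in your version of the project):

<syntaxhighlight lang="bash">
# The import should complete without errors if the installation worked.
$ python3 -c "import aplvr; print('aplvr installed correctly')"
</syntaxhighlight>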

== Testing the Project ==

'''Important Note''': Before you test the project, we strongly recommend that you check the configuration parameters located in the <span style="color: blue"> aplvr.yaml </span> file, especially the config files path of the DeepStream section, from which the model configurations will be loaded, and the RabbitMQ parameters, where you can set the message broker server IP address. In the following Wiki, you can check more information about <span style="color: blue"> aplvr.yaml </span> and the rest of the files located in the config files directory.
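Before launching the system, it can also be useful to confirm that the message broker address you configured is reachable (a hedged sketch; BROKER_IP is a placeholder for the address you set in aplvr.yaml, and 5672 is RabbitMQ's default AMQP port, which your setup may override):

<syntaxhighlight lang="bash">
# Replace BROKER_IP with the RabbitMQ address configured in aplvr.yaml.
$ ping -c 3 BROKER_IP

# Check that the broker port accepts connections (5672 is the AMQP default).
$ nc -zv BROKER_IP 5672
</syntaxhighlight>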

=== Web Dashboard ===

By default, the project includes a Web Dashboard, where you can see the logs of the generated information in real time. This functionality is configured through the actions section of the <span style="color: blue"> aplvr.yaml </span> config file. For instance, through the "url" parameter you can indicate the IP address and the port where the API that hosts the web page will be launched:

<syntaxhighlight lang="yaml">
actions:
  - name: "dashboard"
    url:  "http://192.168.55.100:4200/dashboard"
</syntaxhighlight>

In the example above, the web dashboard will be listening on the local address 192.168.55.100 and port 4200. You can change those two values, as long as you respect the API endpoint called /dashboard. Make sure there are no firewall restrictions on the indicated IP address and port so that the communication can take place without any problem. If you want to know more about this functionality and the other actions included in this reference design, you can consult the [https://developer.ridgerun.com/wiki/index.php/DeepStream_Reference_Designs/Reference_Designs/Automatic_Parking_Lot_Vehicle_Registration APLVR Reference Design] Wiki.
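If you change the port, one simple precaution is to verify that it is not already taken on the dashboard host (a minimal check; it assumes the ss utility is available, as it is on most modern Linux distributions):

<syntaxhighlight lang="bash">
# An empty result means nothing is listening on port 4200 yet.
$ ss -ltn | grep 4200
</syntaxhighlight>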

To run the web dashboard, open another terminal and navigate within the project directory to the following path:

<syntaxhighlight lang="bash">
$ cd src/aplvr/dashboard/
</syntaxhighlight>

And finally, execute the following command:

<syntaxhighlight lang="bash">
$ python3 web_app.py
</syntaxhighlight>
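Once web_app.py is running, you can verify from any machine with access to that address that the dashboard responds (a hedged example; it assumes the endpoint answers plain HTTP GET requests):

<syntaxhighlight lang="bash">
# Should return the dashboard page if the service is up and reachable.
$ curl http://192.168.55.100:4200/dashboard
</syntaxhighlight>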

'''Important Note''': Dashboard execution does not need to be done on the Jetson board; you can use another computer as the dashboard host, as long as you set the correct IP address and port number to establish the respective communication. Also, in case you don't want to use this functionality, you can simply remove it from the action parameters of the <span style="color: blue"> aplvr.yaml </span> configuration file.

=== RTSP Protocol ===

== Code Execution ==

This project provides an automation script that handles the process initialization and termination tasks required by the application. For its operation, the APLVR Reference Design depends on the RabbitMQ server being up and the GStreamer Daemon running. To simplify the work of executing the application, the script does this for you; in addition, it is responsible for capturing any interruption detected during the execution of the system, in order to cleanly end the processes that were started. The script is named run_aplvr.sh, it is located at the top level of the project directory, and all you have to do is run it as shown below:

<syntaxhighlight lang="bash">
$ sudo ./run_aplvr.sh
</syntaxhighlight>

It is necessary to execute the script with privileged permissions since the command that starts the RabbitMQ server requires them.
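If you ever need to confirm by hand that the services the script relies on are up, checks along these lines can be used (a hedged sketch; it assumes RabbitMQ and the GStreamer Daemon were installed with their standard command-line tools available):

<syntaxhighlight lang="bash">
# Report the status of the RabbitMQ broker (usually requires sudo).
$ sudo rabbitmqctl status

# Confirm that the GStreamer Daemon (gstd) process is running.
$ pgrep -a gstd
</syntaxhighlight>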

== Troubleshooting ==

