Xilinx ZYNQ UltraScale+ MPSoC/Introduction/Getting started






Previous: Introduction/Developer kits Index Next: Development





Using prebuilt Ubuntu image

Flash the microSD card

You can download the prebuilt images from the following link: Install Ubuntu on Xilinx. In our case, we had better results with the Ubuntu Desktop 20.04.3 LTS image. With these images, you should get a working graphical interface automatically, using only a display, keyboard, and mouse.

After downloading the image, you need to write it to your microSD card. You can flash it with a tool like balenaEtcher (https://www.balena.io/etcher/) or by following the instructions below:

1. Insert the microSD card into your computer and identify the corresponding device:

dmesg | tail | grep sd

2. Flash the downloaded image, replacing /dev/sdX with your corresponding device (an optional variant with progress reporting is shown after these steps):

xzcat ~/Downloads/<image-file.xz> | sudo dd of=/dev/sdX bs=32M

3. Eject the microSD card:

sudo eject /dev/sdX
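
Optionally, you can double-check the device before writing and get feedback while the image is copied. The following is only a variation of steps 1 and 2 above, assuming lsblk (util-linux) and GNU coreutils dd are available on your host:

# Confirm the device name and size before writing
lsblk

# Same write as step 2, but reporting progress and flushing data to the card before exiting
xzcat ~/Downloads/<image-file.xz> | sudo dd of=/dev/sdX bs=32M status=progress conv=fsync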

Set up the board

1. Insert the prepared microSD card into the board.

2. Connect an HDMI display.

3. Connect a USB keyboard and mouse.

4. Connect your power supply. The KV260 Starter Kit will power on and boot automatically.

First Boot

Several green LEDs close to the micro-USB connector will light up as soon as the board gets power. Wait for the kernel to finish loading, and you will see the login screen with a single user named "ubuntu":

  • Log in to ubuntu with the password: ubuntu
  • It will immediately ask you to change the password. Go ahead and choose another one.

After these steps, you should see the Ubuntu 20.04 GUI Desktop.

Running the NLP-SmartVision app

Installing the Xilinx environment and app

Install the Xilinx Development and Demonstration environment:

sudo snap install xlnx-config --classic

For Ubuntu 20.04, you need to install an older version with:

sudo snap install xlnx-config --classic --channel=1.x

Then initialize the Xilinx environment. This installs all the required packages; just run the command and follow the on-screen instructions:

xlnx-config.sysinit
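
To confirm the snap was installed and see which channel it came from, you can query snapd directly (a standard snap command, not specific to this environment):

snap list xlnx-config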

Install the NLP-SmartVision app:

sudo xlnx-config --snap --install xlnx-nlp-smartvision

Loading the firmware

Once everything is installed and the firmware is in the /lib/firmware/xilinx folder, the accelerator bitstream must be loaded.
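
You can verify that the firmware files are in place by listing that folder (a simple sanity check, not required by the app):

ls /lib/firmware/xilinx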

You can list the available accelerators with:

sudo xlnx-config --xmutil listapps

If another bitstream is already loaded, unload it first with:

sudo xlnx-config --xmutil unloadapp

Now, load the nlp-smartvision accelerator:

sudo xlnx-config --xmutil loadapp nlp-smartvision
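
To confirm the accelerator was loaded, you can list the apps again; the loaded one should be reported as active (assuming xmutil reports status as described in the Xilinx Kria documentation):

sudo xlnx-config --xmutil listapps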

App usage

For the NLP-SmartVision app you will need either a USB webcam or a MIPI camera, plus a USB microphone. Then, set the default microphone as follows:

ubuntu@kria:~$ xlnx-nlp-smartvision.set-mic

Microphones available in the System:
card 1: Device [Usb Audio Device], device 0: USB Audio [USB Audio]

Which card number do you want to use? 1
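
If you are not sure which card number corresponds to your USB microphone, you can list the capture devices with ALSA's arecord (part of alsa-utils, not specific to this app):

arecord -l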

After that, you can start the application with the -u or -m option, depending on whether you want to use the USB or MIPI camera. Note that the application assumes you are running it from the GNOME desktop session; it won't work from a remote shell without further configuration (see the note after the commands below).

# USB Camera
xlnx-nlp-smartvision.nlp-smartvision -u
 
# MIPI Camera
xlnx-nlp-smartvision.nlp-smartvision -m
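
If you need to launch the application from an SSH session anyway, a common workaround is to point it at the local desktop's display. This is only a sketch and assumes the GNOME session runs as the same user on display :0; depending on the session type it may need additional configuration:

# From an SSH session, target the display of the local GNOME session (assumption: display :0)
export DISPLAY=:0
xlnx-nlp-smartvision.nlp-smartvision -u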

Once the application is running, you can talk to it to change the current task; this switches the machine learning model running on the DPU. The possible voice commands are:

Figure: State machine and voice commands accepted by the NLP-SmartVision application. Source: Xilinx.

Certificates issues

If you have trouble with certificates, check the system date and adjust it with:

timedatectl set-timezone 'America/Los_Angeles'
timedatectl set-time '2022-08-26 16:00:00'
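
If the board has network access, you can also let systemd keep the clock synchronized over NTP instead of setting it by hand (a standard timedatectl command, not specific to this setup):

sudo timedatectl set-ntp true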

Using prebuilt PetaLinux image

References

You can take a look at the official Xilinx documentation at the following links:

  • Getting Started with Kria KV260 Vision AI Starter Kit: https://www.xilinx.com/products/som/kria/kv260-vision-starter-kit/kv260-getting-started/getting-started.html
  • NLP-SmartVision App (xlnx-nlp-smartvision snap for Certified Ubuntu on Xilinx Devices): https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/2218918010/Snaps+-+xlnx-nlp-smartvision+Snap+for+Certified+Ubuntu+on+Xilinx+Devices

Previous: Introduction/Developer kits Index Next: Development