Xilinx in-booth technology showcase – booth 9:
- Semantic segmentation with custom Convolutional Neural Network – Presented by Xilinx
This demonstration highlights a Semantic Segmentation design example that separates the human body in the foreground, overlaid with a green colour mask, from the background scene. In addition to the Xilinx Zynq UltraScale+ MPSoC device and various I/O peripherals, the design combines image processing cores with a custom Convolutional Neural Network core, based on FCN-AlexNet, with 19 layers and 8-bit fixed-point weights. The design was developed with the reVISION stack, which includes the SDSoC development environment 2017.2 release together with the Xilinx deep learning (xfDNN) and hardware-accelerated OpenCV (xfOpenCV) libraries.
- Sensor Fusion with Xilinx All Programmable SoCs – Presented by Avnet
This demonstration implements sensor fusion and image warping with a Python 1300 camera module and a thermal camera module on the Avnet PicoZed Embedded Vision Kit. It demonstrates the advantages of running hardware-accelerated OpenCV functions in the programmable logic of Xilinx All Programmable SoC devices.
- MJPEG streaming with Avnet MiniZed board – Presented by Avnet
This demonstration features MJPEG streaming over WiFi with the Avnet MiniZed board combined with a camera PMOD adapter board. The MiniZed is part of a cost-optimised development environment that enables programming Xilinx Zynq-7000 All Programmable SoC devices in traditional languages such as C and C++.
Xilinx conference presentation:
- Accelerating Embedded Vision and Machine Learning applications at the Edge
- 13th October 2017, 15.30–16.00.
The presentation will first summarise the compelling responsiveness and flexibility benefits that All Programmable technology brings to embedded vision applications. It will then describe the new reVISION acceleration stack, which enables software-defined programming with popular frameworks and libraries, putting those benefits in the hands of all embedded vision designers without requiring access to hardware designers.