
Adaptive and intelligent computing at Embedded World 2020

21st January 2020
Alex Lynn

Experience adaptive and intelligent computing, from the Cloud to the Edge, with Xilinx at Embedded World 2020. At the event, Xilinx will showcase a collection of demos highlighting its ACAP (Adaptive Compute Acceleration Platform), Alveo, Industrial IoT (IIoT) and automotive solutions, plus much more.

Additionally, Xilinx is pleased to demonstrate Vitis, the unified software platform launched in October 2019, which enables a broad new range of developers, including software engineers and AI scientists, to take advantage of the power of hardware adaptability. Xilinx is exhibiting in hall 3A, booth 235.

The growth in the use of high definition (HD) and higher video resolution streams has outstripped the rate at which network infrastructure has been deployed. When streaming video over limited transmission bandwidth, intelligent encoding is needed, whereby regions of interest (ROI) are encoded at higher visual quality than the rest of the frame. This demonstration showcases ROI-based encoding using the Zynq UltraScale+ MPSoC Video Codec Unit (VCU).

Using Vitis AI, the Xilinx Deep Learning Processor Unit (DPU) is integrated into the pipeline and used to identify the ROI mask within each frame. Using this mask, the VCU allocates more bits to the ROIs than to the rest of the frame at a given bitrate, improving encoding efficiency. The key markets for this demonstration are video surveillance, video conferencing, medical and broadcast.
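As an illustration of the principle only (not the Xilinx VCU or Vitis AI API), the sketch below uses a hypothetical NumPy helper to turn an ROI mask into a per-block quantisation-parameter (QP) map: a lower QP inside the ROI means more bits and higher visual quality there at a fixed overall bitrate.

    import numpy as np

    def roi_qp_map(roi_mask, base_qp=35, roi_qp=25):
        """Per-block QP map: lower QP (more bits, better quality) where the mask is set."""
        qp_map = np.full(roi_mask.shape, base_qp, dtype=np.uint8)
        qp_map[roi_mask > 0] = roi_qp
        return qp_map

    # Roughly a 1080p frame divided into 16x16 blocks, with one detected region.
    mask = np.zeros((68, 120), dtype=np.uint8)
    mask[20:35, 50:70] = 1                 # hypothetical ROI reported by the DPU
    print(roi_qp_map(mask).mean())         # average QP falls as the ROI grows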

Xilinx is also introducing Versal, the industry's first Adaptive Compute Acceleration Platform (ACAP): a fully software-programmable, heterogeneous compute platform that combines Scalar Engines, Adaptable Engines, and Intelligent Engines to achieve dramatic performance improvements. This demonstration showcases an ML inference solution for edge use cases on Versal ACAP.

Using Xilinx tools, the system is realised on a heterogeneous compute platform in which the Adaptable Engines integrate the live video interfaces along with pre- and post-processing elements, the Intelligent Engines (AI Engines) implement the compute-intensive ML inference algorithms, and a Scalar Engine runs the operating system (OS) that controls the different elements within the pipeline.

Using AWS-certified boards from Xilinx, this demo highlights an easily accessible platform for an innovative collaboration between edge and cloud. The showcase features a neural network in the programmable system on chip (SoC), complemented by deterministic functions. After the AWS SageMaker Neo cloud service executes the training for an ML model and deposits the result in secure cloud storage, the Xilinx SoC-based edge device retrieves it without compromising real-time functionality.
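A minimal sketch of the cloud-to-edge hand-off, assuming the compiled model archive has been written to an S3 bucket by the SageMaker Neo job; the bucket and object key names below are hypothetical, and the on-device runtime that loads the archive is not shown.

    import boto3

    s3 = boto3.client("s3")
    s3.download_file(
        Bucket="my-neo-output-bucket",        # hypothetical bucket
        Key="compiled/model-zynq.tar.gz",     # hypothetical object key
        Filename="/tmp/model.tar.gz",
    )
    # The archive would then be unpacked and loaded by the edge inference runtime.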

The immense calculation power of programmable logic allows for real-time execution of Digital Twins. The Bullet Physics Engine runs a continuous 3D calculation of the effects of collisions using multiple elements in various shapes. Our demonstration shows the engine on a Xilinx Alveo accelerator board with smooth graphics output at high frame rate. Industrial applications benefit from digital real-time models of industrial devices by continuously adapting the operation parameters for the corresponding device.
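As a CPU-side illustration of the same Bullet engine API (via the open-source pybullet bindings, not the Alveo-accelerated implementation shown at the booth), the sketch below steps a simple collision simulation of a ball dropping onto a plane.

    import pybullet as p

    p.connect(p.DIRECT)                       # headless physics server
    p.setGravity(0, 0, -9.81)
    p.setTimeStep(1.0 / 240.0)

    plane = p.createCollisionShape(p.GEOM_PLANE)
    p.createMultiBody(baseMass=0, baseCollisionShapeIndex=plane)

    sphere = p.createCollisionShape(p.GEOM_SPHERE, radius=0.1)
    ball = p.createMultiBody(baseMass=1.0,
                             baseCollisionShapeIndex=sphere,
                             basePosition=[0, 0, 1.0])

    for _ in range(480):                      # two simulated seconds
        p.stepSimulation()                    # continuous collision update
    print(p.getBasePositionAndOrientation(ball)[0])   # ball has settled on the plane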

DRIVE-XA is an automated driving demonstration and path-finding platform that can be used to implement various aspects of automated driving functionality (such as perception, environmental characterisation, decision, and control). The platform offers modular and extensive connectivity for a dynamic set of sensors and sensor interfaces. Demonstrations of this at Embedded World include: 

  • Data Aggregation, Pre-Processing, and Distribution (DAPD) powered by a single Zynq UltraScale+ MPSoC and featuring a six-camera/one-lidar framework from Xylon.
  • ML compute acceleration on a second MPSoC device that implements a multi-channel neural network processor for automotive roadway scene segmentation and object recognition.

logiADAK7 is the latest ADAS development kit from Xylon, based on the Xilinx Zynq UltraScale+ MPSoC device. The logiADAK7 platform showcases Xylon’s latest developments in multi-camera central module processing, including the production-ready “ViewMore Natural Surround View” IP solution. ViewMore features include Virtual Flying Camera, Multi-camera Image Equalisation and Dynamic 3D Bowl Adaptation, and the solution enables HW and SW customisation options for differentiation.

In addition to these demos, Xilinx will be discussing several compelling topics at Embedded World, including:

  • ‘Architecture Apocalypse: dream architecture for deep learning inference and compute – Versal AI Core’ on Wednesday 26th February at 2pm.
  • ‘Low-bit CNN implementation and optimisation on FPGA’ on Wednesday 26th February at 3pm.
  • ‘Emerging SoC performance/power challenges and a dozen techniques’ on Wednesday 26th February at 4pm.
