
DSP/IP Core pursues grand SLAM

15th May 2019
Mick Elliott

Targeting the automotive, AR/VR, drone, mobile, robotics and surveillance markets, Cadence Design Systems has expanded the high end of its Tensilica Vision DSP product family with the introduction of the Vision Q7 DSP delivering up to 1.82 tera operations per second (TOPS).

“It provides numerous architecture and instruction set architecture (ISA) improvements for simultaneous localisation and mapping (SLAM),” says Pulin Desai, Director of Product Marketing, Tensilica Imaging and Vision IP Group. “It offers two times the performance for vision/AI applications compared to the Vision Q6 DSP.”

A quick explanation here. SLAM is a technique used in the robotics, drone, mobile and automotive markets to automatically construct or update a map of an unknown environment.

It is used in the AR/VR market for inside-out tracking. The process relies heavily on a variety of linear algebra and matrix operations and is computationally heavy.

These processing blocks are based on classical computer vision (CV) approaches and are typically implemented on CPUs or GPUs.
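
To make that concrete, the sketch below is generic C++ with made-up motion values, not Cadence code: it shows the kind of matrix arithmetic a SLAM front end performs constantly, composing an incremental motion estimate onto the current pose as a 3x3 homogeneous transform.

```cpp
// Minimal, generic illustration of the dense linear algebra at the heart of
// SLAM: composing an incremental motion estimate onto the current pose as a
// 3x3 homogeneous transform. Real pipelines repeat operations like this,
// plus much larger factorisations, many times per frame.
#include <array>
#include <cmath>
#include <cstdio>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Build a 2D rigid-body transform (rotation theta, translation tx/ty).
Mat3 pose(double theta, double tx, double ty) {
    Mat3 m{};
    m[0] = {std::cos(theta), -std::sin(theta), tx};
    m[1] = {std::sin(theta),  std::cos(theta), ty};
    m[2] = {0.0, 0.0, 1.0};
    return m;
}

// Plain 3x3 matrix multiply: new pose = old pose * incremental motion.
Mat3 compose(const Mat3& a, const Mat3& b) {
    Mat3 c{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

int main() {
    Mat3 current  = pose(0.0, 0.0, 0.0);        // start at the origin
    Mat3 odometry = pose(0.05, 0.10, 0.02);     // small measured motion step
    Mat3 updated  = compose(current, odometry); // accumulate the trajectory
    std::printf("x=%.3f y=%.3f\n", updated[0][2], updated[1][2]);
    return 0;
}
```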

Behind this activity is an expanding image sensor market, which in turn is driving the market for vision digital signal processors (DSPs).

This escalating demand for image sensors in edge applications is driving growth of the embedded vision market. Today’s vision use cases demand a mix of both vision and AI operations, and edge SoCs require highly flexible, high-performance vision and AI solutions operating at low power.

In addition, edge applications that include an imaging camera demand a vision DSP capable of performing pre- or post-processing before any AI task.

While performing SLAM, edge SoCs also require a computational offload engine to increase performance, reduce latency and further lower power for battery-operated devices.

Because SLAM utilises fixed- and floating-point arithmetic to achieve the necessary accuracy, any vision DSP employed for SLAM must provide higher performance for both data types.
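That distinction matters in practice. The toy example below, again generic C++ rather than Tensilica ISA code, applies the same multiplication in Q15 fixed point and in floating point; the quantisation error of the fixed-point result illustrates why SLAM's accuracy-sensitive stages lean on floating point while image-domain stages can stay fixed point.

```cpp
// Toy illustration (not Tensilica ISA code) of the fixed- vs floating-point
// trade-off: the same product computed in Q15 fixed point and in float.
#include <cstdint>
#include <cstdio>

// Q15: 16-bit signed value with 15 fractional bits, range roughly [-1, 1).
int16_t to_q15(double x)   { return static_cast<int16_t>(x * 32768.0); }
double  from_q15(int16_t q){ return q / 32768.0; }

// Fixed-point multiply: widen to 32 bits, then shift back down by 15.
int16_t q15_mul(int16_t a, int16_t b) {
    return static_cast<int16_t>((static_cast<int32_t>(a) * b) >> 15);
}

int main() {
    double a = 0.70710678;   // cos(45 deg)
    double b = 0.33333333;
    float   f = static_cast<float>(a) * static_cast<float>(b);
    int16_t q = q15_mul(to_q15(a), to_q15(b));
    std::printf("float: %.8f\n", f);
    std::printf("q15  : %.8f (quantisation error visible)\n", from_q15(q));
    return 0;
}
```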

With its low power and its architectural and instruction set enhancements, the Vision Q7 DSP is suited to demanding edge vision and AI processing requirements and boosts performance across a number of key metrics.

A very long instruction word (VLIW) SIMD architecture delivers up to 1.7X higher TOPS compared to the Vision Q6 DSP in the same area. An enhanced instruction set supporting 8/16/32-bit data types, with optional VFPU support for single and half precision, enables up to 2X faster performance on SLAM kernels compared to the Vision Q6 and Vision P6 DSPs.

The Vision Q7 delivers up to 2X improvement in floating-point operations per mm2 (FLOPS/mm2) for both half precision (FP16) and single precision (FP32) compared to the Vision Q6 and Vision P6 DSPs. Up to 2X greater AI performance in the same area also translates into up to 2X improvement in GMAC/mm2 compared to the Vision Q6 DSP.

For AI applications, the Vision Q7 DSP provides a flexible solution delivering 512 8-bit MACs, compared to 256 MACs for the Vision Q6 DSP. For greater AI performance, the Vision Q7 DSP can be paired with the Tensilica DNA 100 processor.
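For context, each of those MAC lanes performs the operation sketched below: multiply two signed 8-bit operands and accumulate into a wider register. The scalar C++ version is illustrative only; the DSP executes 512 such lanes per cycle in hardware.

```cpp
// Scalar sketch of an 8-bit multiply-accumulate (MAC): each lane multiplies
// two signed 8-bit operands and accumulates into a 32-bit register so the
// running sum cannot overflow for realistic vector lengths.
#include <cstdint>
#include <cstdio>
#include <vector>

int32_t dot_i8(const std::vector<int8_t>& a, const std::vector<int8_t>& b) {
    int32_t acc = 0;
    for (size_t i = 0; i < a.size() && i < b.size(); ++i)
        acc += static_cast<int32_t>(a[i]) * static_cast<int32_t>(b[i]);
    return acc;
}

int main() {
    // e.g. one output element of a quantised convolution: weights . activations
    std::vector<int8_t> weights     = {12, -7, 33, 5};
    std::vector<int8_t> activations = {90, 64, -3, 17};
    std::printf("acc = %d\n", dot_i8(weights, activations));
    return 0;
}
```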

A number of iDMA enhancements include 3D DMA, compression and a 256-bit AXI interface. The Vision Q7 DSP is a superset of the Vision Q6 DSP, which preserves customers’ existing software investment and enables an easy migration from the Vision Q6 or Vision P6 DSPs.

“For edge computing in our target markets, offloading vision applications on a high-performance, low-power, highly flexible DSP is a must,” noted Lazaar Louis, senior director of product management and marketing for Tensilica IP at Cadence. “Cadence has a long and successful track record spanning six generations of Vision DSPs, and the Vision Q7 DSP was designed to address the needs of our key customers deploying highly complex vision and AI algorithms, including SLAM for perception. The Vision Q7 DSP strengthens our very successful automotive portfolio, bringing leading-edge computation to the ‘computer in the car’ that can be compliant with safety requirements like ISO 26262.”

The Vision Q7 DSP supports AI applications developed in the Caffe, TensorFlow and TensorFlowLite frameworks through the Tensilica Xtensa Neural Network Compiler (XNNC), which maps neural networks into highly optimised, high-performance executable code for the Vision Q7 DSP. The DSP also supports the Android Neural Network (ANN) API for on-device AI acceleration in Android-powered devices, and the software environment features complete and optimised support for more than 1,700 OpenCV-based vision library functions, enabling fast, high-level migration of existing vision applications.
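
Cadence does not list the individual functions, so the snippet below is ordinary host-side OpenCV, standing in for the sort of pre-processing code that could migrate to the optimised library: greyscale conversion and smoothing ahead of feature detection or an AI stage.

```cpp
// Plain OpenCV (host-side) example of typical vision pre-processing an
// application might hand to an optimised DSP library. This is generic OpenCV
// code, not the Tensilica-optimised library itself.
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main() {
    cv::Mat frame = cv::imread("frame.png");   // hypothetical input image
    if (frame.empty()) return 1;

    cv::Mat grey, smoothed;
    cv::cvtColor(frame, grey, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(grey, smoothed, cv::Size(5, 5), 1.5);

    cv::imwrite("preprocessed.png", smoothed);
    return 0;
}
```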

In addition, development tools and libraries are all designed to enable SoC vendors to achieve ISO 26262 automotive safety integrity level D (ASIL D) certification.

The Vision Q7 DSP has been sampled to strategic customers and is expected to be available for general release in the second quarter of 2019.
