
Software framework enables real-time object recognition

8th October 2015
Jordan Mulcare

CEVA has introduced the CEVA Deep Neural Network (CDNN), a real-time neural network software framework, to streamline machine learning deployment in low-power embedded systems. Harnessing the processing power of the CEVA-XM4 imaging & vision DSP, the CDNN enables embedded systems to perform deep learning tasks three times faster than the leading GPU-based systems while consuming 30 times less power and requiring 15 times less memory bandwidth.

For example, running a Deep Neural Network (DNN) based pedestrian detection algorithm at 28nm requires less than 30mW for a 1080p, 30fps video stream.

Key to the performance, low power and low memory bandwidth capabilities of CDNN is the CEVA Network Generator, a proprietary automated technology that converts a customer’s network structure and weights into a slim, customised network model for real-time use. The resulting model runs faster and consumes significantly less power and memory bandwidth, with less than 1% degradation in accuracy compared with the original network. Once the customised, embedded-ready network is generated, it runs on the CEVA-XM4 imaging and vision DSP using fully optimised Convolutional Neural Network (CNN) layers, software libraries and APIs.
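CEVA has not published the internals of the Network Generator, but the Python sketch below illustrates the general idea behind this kind of offline conversion: trained floating-point weights are reduced to a compact fixed-point representation that an embedded runtime can load. The function names and the 16-bit format are illustrative assumptions only, not CEVA's actual tooling.

import numpy as np

def quantize_layer(weights, num_bits=16):
    # Generic post-training quantization: map float weights to signed
    # fixed-point values plus a single scale factor per layer. Shown only
    # to illustrate the kind of offline conversion a network generator
    # might perform; this is not CEVA's algorithm.
    max_abs = float(np.max(np.abs(weights)))
    scale = max_abs / (2 ** (num_bits - 1) - 1) if max_abs > 0 else 1.0
    fixed = np.round(weights / scale).astype(np.int16)
    return fixed, scale

def convert_network(layers):
    # Offline step: turn a dict of trained float weights into a compact,
    # fixed-point model that a DSP runtime could load.
    return {name: quantize_layer(w) for name, w in layers.items()}

# Example: one toy convolution layer (16 output channels, 3x3 kernels, 3 input channels).
trained = {"conv1": np.random.randn(16, 3, 3, 3).astype(np.float32)}
embedded_model = convert_network(trained)

In practice the conversion happens once, offline, so the cost of analysing and compressing the network is not paid at runtime; only the compact model is shipped to the device.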

Phi Algorithm Solutions, a member of CEVA’s CEVAnet partner program, has used CDNN to implement a CNN-based Universal Object Detector algorithm for the CEVA-XM4 DSP. This is now available for application developers and OEMs to run a variety of applications including pedestrian detection and face detection for security, ADAS and other embedded devices based around low-power camera-enabled systems.

“The CEVA Deep Neural Network framework provided a quick and smooth path from offline training to real-time detection for our convolutional neural network based algorithms,” said Steven Hanna, President and Co-Founder, Phi Algorithm Solutions. “In a matter of days we were able to get an optimised implementation of our unique object detection network, while significantly reducing power consumption compared to other platforms. The CEVA-XM4 imaging & vision DSP together with the CDNN framework is suitable for embedded vision devices and paves the way to advances in artificial intelligence devices in the coming years using deep learning techniques.”

“With more than 20 design wins to date, we continue to lead the industry in the embedded vision processor domain and are constantly enhancing our portfolio of vision IP offerings to help our customers get to market quicker with minimal risk,” said Eran Briman, Vice President of Marketing, CEVA. “Our new Deep Neural Network framework for the CEVA-XM4 is the first of its kind in the embedded industry, providing a significant step forward for developers looking to implement viable deep learning algorithms within power-constrained embedded systems.”

The CDNN software framework is supplied as source code, extending the CEVA-XM4’s existing application developer kit. It is flexible and modular, capable of supporting either the complete CNN implementation or specific layers. It works with various networks and structures, such as networks developed with Caffe, Torch or Theano training frameworks, or proprietary networks. CDNN includes real-time example models for image classification, localisation and object recognition. It is intended to be used for object and scene recognition, ADAS, AI, video analytics, augmented reality, VR and similar computer vision applications.
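The announcement does not detail the layer-level API itself. Purely as an illustration of what running a single converted convolutional layer looks like once weights are in fixed point, and not as the real CDNN interface, a minimal Python sketch follows; the function name, shapes and scale value are assumptions for demonstration.

import numpy as np

def fixed_point_conv2d(activations, fixed_weights, scale):
    # Run one converted convolution layer (valid padding, stride 1).
    # activations: float32 image of shape (H, W, C_in)
    # fixed_weights: int16 kernel of shape (C_out, kH, kW, C_in)
    # scale: per-layer scale factor recovered from the offline conversion
    c_out, kh, kw, c_in = fixed_weights.shape
    h, w, _ = activations.shape
    out = np.zeros((h - kh + 1, w - kw + 1, c_out), dtype=np.float32)
    # Restore float weights once for clarity; a real DSP kernel would keep
    # the arithmetic in fixed point and rescale the accumulator instead.
    weights = fixed_weights.astype(np.float32) * scale
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = activations[y:y + kh, x:x + kw, :]
            out[y, x, :] = np.tensordot(weights, patch, axes=([1, 2, 3], [0, 1, 2]))
    return out

# Example with made-up converted weights: 16 filters of 3x3x3 and a dummy scale factor.
fixed = np.random.randint(-32768, 32767, size=(16, 3, 3, 3)).astype(np.int16)
scale = 1e-3
feature_map = fixed_point_conv2d(np.random.rand(64, 64, 3).astype(np.float32), fixed, scale)

A framework such as CDNN would replace a loop like this with layer kernels optimised for the DSP's vector units, which is where the claimed speed, power and bandwidth advantages come from.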
