
Multi-core: Giving a car the power of sight

7th November 2011
Making sense of images captured by cameras in today’s Advanced Driver Assistance Systems requires sophisticated microprocessors with specific feature sets. But what makes a good image processor? Sally Ward-Foxton investigates.

Tomorrow’s cars will be kitted out with systems that can tell you if you’re about to drift out of your lane, break the speed limit or even hit a pedestrian. Sophisticated camera systems will photograph scenes inside and outside the vehicle, and the images will be processed by complex embedded electronics set up to recognise certain shapes. Lane markings, traffic signs and other vehicles will be recognised and tracked, with warnings sounded if, for example, the car crosses out of its lane without indicating. The aim is to reduce the number of accidents, either by warning drivers that they are about to do something dangerous or by stopping them from doing it altogether, usually by applying the brakes. These systems are commonly known as advanced driver assistance systems (ADAS).



Capturing images of what’s up ahead can be done successfully using a high dynamic range CCD camera. So far, so easy. But how does the microprocessor then recognise shapes, such as traffic signs or lane markings, in that stream of images so it can alert the driver?



“The main principle is that you first identify a region of interest,” says Martin Duncan, ADAS Marketing and System Manager at ST Microelectronics. “This is any part of the image that is ‘different’ from the surrounding scene. You then track its evolution through several frames and try to resolve what kind of shape it is, then compare it to shapes already known in order to identify what it is.”

Duncan explained that more than 100 of these regions would typically be tracked at any one time.
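
In software terms, that detect-then-track flow might look like the rough sketch below, which uses OpenCV’s generic corner detector and Lucas-Kanade optical flow as stand-ins for ST’s proprietary pipeline (the video file name is a placeholder):

```python
# Rough sketch of the detect-then-track flow: find candidate regions of
# interest in one frame, then follow them through subsequent frames.
# "road_scene.mp4" is a placeholder input.
import cv2

cap = cv2.VideoCapture("road_scene.mp4")
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Step 1: pick up to 100 'interesting' points, i.e. locations that
# differ strongly from their surroundings.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                 qualityLevel=0.01, minDistance=10)

while True:
    ok, frame = cap.read()
    if not ok or points is None or len(points) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Step 2: track each point's evolution into the new frame with
    # pyramidal Lucas-Kanade optical flow.
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                     points, None)
    points = new_points[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray
    # Step 3 (not shown): resolve each tracked region's shape over
    # several frames and compare it against known shapes to identify it.
```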



“Pedestrian detection is particularly challenging because a pedestrian is a non-rigid, highly variable shape, and they tend to move in a random fashion!” he said. “There can also be many more of them to track in the image at any one time, for example downtown in a city.”



Hardware acceleration

Primarily, what differentiates an image recognition processor from a generic processor is the use of on-chip hardware vision accelerators, which look for edges and points of interest in the images and classify them into different types. Several other features also suit a processor to image recognition.
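
For illustration, here is roughly what those accelerators compute, expressed in software with OpenCV; in silicon, these filter and interest-point stages run in fixed-function logic rather than on a CPU (the image path is a placeholder):

```python
# Software equivalents of two accelerator tasks: an edge map and a set
# of interest points. "frame.png" is a placeholder image.
import cv2

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Edge extraction: gradient filtering that fixed-function hardware
# performs without loading a general-purpose core.
edges = cv2.Canny(gray, threshold1=100, threshold2=200)

# Interest points: corner-like locations worth tracking and classifying.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=5)

n_corners = 0 if corners is None else len(corners)
print(f"{int((edges > 0).sum())} edge pixels, {n_corners} interest points")
```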



“The best overall architecture is multi-core to reduce latency and run the different types of tasks,” Duncan says. “It also needs sufficient on-chip memory, versatile hardware accelerators (so they can be used for multiple tasks) and low power consumption.”



ST recently announced the third generation of its EyeQ processor family for image recognition in ADAS, developed in conjunction with Mobileye. The EyeQ3 contains Mobileye’s Vector Microcode Processor (VMP) vision accelerators, designed in the SIMD VLIW style (single instruction, multiple data, and very long instruction word – both parallelising techniques). The processor is multi-core, containing four VMP cores as well as four multi-threaded MIPS32 cores.
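
As a loose software analogy for the SIMD idea, the NumPy sketch below applies one threshold operation to every pixel of a frame at once, where a plain scalar core would loop over pixels one at a time (the frame here is random dummy data):

```python
# SIMD analogy: one vectorised operation touches every pixel at once,
# versus a scalar loop handling one pixel per instruction. The frame is
# random dummy data.
import numpy as np

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# Scalar style: the pattern a core without vector units would follow.
out_scalar = np.empty_like(frame)
for y in range(frame.shape[0]):
    for x in range(frame.shape[1]):
        out_scalar[y, x] = 255 if frame[y, x] > 128 else 0

# SIMD style: the same threshold applied to the whole frame in one go.
out_simd = np.where(frame > 128, 255, 0).astype(np.uint8)

assert (out_scalar == out_simd).all()
```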



“The VMP design took into account all the most relevant vision algorithms known, and was optimised to run these about 4x faster on average than a DSP would,” Duncan says. “Multi-core also helps to run different applications in parallel without increasing the overall clock frequency, and therefore power consumption, too much.”



With all these cores, the EyeQ3 can simultaneously run vehicle detection, pedestrian detection, lane mark detection, traffic sign recognition, high/low beam, forward collision warning, adaptive cruise control and lane keeping. It will also accept multiple camera inputs from surround-view systems in order to create a safety ‘cocoon’ around the vehicle.
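
A hedged sketch of that task-level parallelism follows, using Python worker processes in place of dedicated cores; the four detector functions are hypothetical stand-ins, not ST’s actual applications:

```python
# Task-level parallelism: one worker process per ADAS function, standing
# in for one core per application. The detector bodies are hypothetical.
from multiprocessing import Pool

def detect_vehicles(frame):
    return "vehicles: ..."      # stand-in for the real detector

def detect_pedestrians(frame):
    return "pedestrians: ..."   # stand-in

def detect_lane_marks(frame):
    return "lane marks: ..."    # stand-in

def recognise_signs(frame):
    return "signs: ..."         # stand-in

TASKS = [detect_vehicles, detect_pedestrians, detect_lane_marks,
         recognise_signs]

def run_all(frame):
    # Dispatch every task on the same frame and collect the results;
    # on real silicon these run concurrently on separate cores.
    with Pool(processes=len(TASKS)) as pool:
        pending = [pool.apply_async(task, (frame,)) for task in TASKS]
        return [p.get() for p in pending]

if __name__ == "__main__":
    print(run_all(frame=None))  # a real system would pass camera frames
```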



Multi-core architecture

Toshiba Electronics Europe’s senior marketing engineer Klaus Neuenhueskes agrees that multi-core processors are best suited to this application. Toshiba’s Visconti2, the company’s image processing offering for ADAS, has four media processing engines inside its multi-core media processor.



“Visconti2 has multiple image recognition processing engines that work in parallel to expedite the ADAS recognition results,” he says. “Visconti2 implements special hardware filter units, an affine transformation engine, a histogram accelerator, and a heterogeneous multi-core architecture with image processing co-processors.”
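
To make concrete what two of those blocks offload, here is the software equivalent of an affine transformation and a histogram computation, written with OpenCV (the image path and rotation angle are placeholders):

```python
# Software versions of two Visconti2 accelerator functions: an affine
# warp and a grey-level histogram. "camera_frame.png" and the 5-degree
# angle are placeholders.
import cv2

img = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
h, w = img.shape

# Affine transformation: rotate/scale a frame, e.g. to correct for the
# camera's mounting angle before recognition runs.
M = cv2.getRotationMatrix2D(center=(w / 2, h / 2), angle=5.0, scale=1.0)
corrected = cv2.warpAffine(img, M, (w, h))

# Histogram: the grey-level distribution used for exposure control and
# gradient-based feature descriptors.
hist = cv2.calcHist([corrected], [0], None, [256], [0, 256])
print("darkest-bin pixel count:", int(hist[0]))
```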



Neuenhueskes emphasised that it’s the specialised hardware accelerators that ensure image recognition tasks are processed in a fast and timely manner.



“In principle, a high-end multiple GHz PC-class processor could do the job as well,” he concedes. “But it would be unable to meet the requirements of an embedded design in terms of power consumption, operating temperature, system size, and cost requirements.”



He cited the example of lane departure warning, noting that it is actually a relatively low-performance ADAS task. It requires line edge detection algorithms, which are supported by a special Visconti2 hardware filter unit, and plenty of performance remains to process additional ADAS tasks in parallel.
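
A minimal software sketch of that line edge detection stage, assuming OpenCV’s Canny filter and probabilistic Hough transform in place of the Visconti2 hardware unit (the frame path and thresholds are illustrative):

```python
# Lane boundary detection: edge filter, then a Hough transform to pull
# out straight line segments. "road_frame.png" and the thresholds are
# illustrative.
import cv2
import numpy as np

gray = cv2.imread("road_frame.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 50, 150)

# Probabilistic Hough transform: returns segments as (x1, y1, x2, y2).
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)

for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    slope = (y2 - y1) / (x2 - x1 + 1e-6)
    # A warning would fire when a boundary drifts toward the image
    # centre while no indicator is active (that logic is not shown).
    print(f"lane boundary candidate, slope {slope:.2f}")
```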



Conversely, the tasks requiring the highest processing performance are pedestrian detection in daylight and driver cabin monitoring for face direction or drowsiness detection, Neuenhueskes said. Visconti2 copes with these requirements by assigning the tasks to multiple embedded image recognition engines using HOG (Histogram of Oriented Gradients) features, enabling real-time pedestrian detection in both night-time and daytime conditions. The HOG technique is particularly suited to detecting humans because it is robust against variations such as lighting changes.
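
OpenCV ships a pre-trained HOG people detector, which gives a convenient software reference point for the technique Neuenhueskes describes (the image path is a placeholder; Toshiba’s own engines and training data will differ):

```python
# Pedestrian detection with OpenCV's built-in HOG descriptor and its
# pre-trained people detector. "street_scene.png" is a placeholder.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("street_scene.png")

# Gradient orientations are histogrammed per cell and normalised per
# block, which is what makes HOG robust to overall lighting changes.
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)

for (x, y, w, h), score in zip(boxes, weights):
    print(f"pedestrian at ({x}, {y}), size {w}x{h}, score {float(score):.2f}")
```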



Additionally, because of its multi-core architecture, Visconti2 can accept inputs from up to four cameras at once for 360° ‘bird’s-eye’ parking assistance systems (see box). This second-generation product supports colour cameras up to 1.3 million pixels – the first generation supported greyscale only – so the cameras can be used for applications requiring colour recognition, such as detection of traffic lights and signs.
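
A bird’s-eye view is typically derived from each camera with a perspective warp that maps the road plane to a top-down rectangle; the sketch below shows the idea with OpenCV, using illustrative corner coordinates that in practice come from camera calibration:

```python
# Bird's-eye view from one camera: warp the road-plane trapezoid in the
# camera image to a top-down rectangle. The corner coordinates are
# illustrative; real systems obtain them from camera calibration.
import cv2
import numpy as np

frame = cv2.imread("rear_camera.png")  # placeholder image

# Four road-plane points as seen by the camera (a trapezoid)...
src = np.float32([[200, 300], [440, 300], [620, 470], [20, 470]])
# ...and where they should land in the top-down view (a rectangle).
dst = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])

H = cv2.getPerspectiveTransform(src, dst)
top_down = cv2.warpPerspective(frame, H, (400, 600))
# A full system computes one such warp per camera and blends the four
# top-down images into the 360-degree composite.
```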



Replacing the driver?

The advent of multi-core processing is allowing the development of application specific processors like the EyeQ and Visconti families, which are built to perform many difficult but specific tasks simultaneously. Looking for hazardous situations by analysing captured images is not straightforward, but with the help of hardware accelerators, this can be done electronically. Whether these systems will ever be able to replace the driver completely remains to be seen, but in the short term, the promise of accident reduction or even prevention more than justifies the time and money spent developing these applications.



Bird’s-eye view

360° parking assistance is one area of ADAS that has not yet reached mass-market adoption, largely because of high costs. However, a recent system developed by Broadcom, Freescale and OmniVision, which uses Ethernet to distribute images around the system, may hold the key to low-cost implementations. The system is based on the Broadcom BroadR-Reach BCM89810 standalone physical layer transceiver (PHY), Freescale’s Qorivva MPC5604E 32-bit MCU, and OmniVision’s AEC-Q100 qualified OV10630 colour high dynamic range CMOS image sensor.



“In-car systems today use analogue or LVDS-based cabling as the link technology for point-to-point connections,” explains Bernd Rucha, AP Automotive Segment Director at Freescale, pointing out that while analogue cables offer low-cost cabling at limited video quality, LVDS transports uncompressed video data at very high rates but requires expensive shielded cables for EMC reasons.



“In contrast to existing cabling, Ethernet-based cabling offers high video quality at low cost. This combination will pave the way to mass production,” he said. “Going forward, multi-camera surround-view park assist systems are expected to be used in parallel with existing ultrasonic-based park assist systems, as they reduce blind spots and improve overall visibility.”
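
As a toy illustration of the Ethernet camera-link concept, the sketch below compresses a frame and sends it over a socket with simple length-prefixed framing; the host, port and image path are placeholders, and real automotive links use dedicated transport protocols rather than this scheme:

```python
# Toy Ethernet camera link: JPEG-compress a frame and send it over a
# socket with a length prefix so the receiver knows where a frame ends.
# Host, port and image path are placeholders.
import socket
import struct

import cv2

frame = cv2.imread("camera_frame.png")
ok, jpeg = cv2.imencode(".jpg", frame)   # compress to fit the link budget
payload = jpeg.tobytes()

with socket.create_connection(("10.0.0.2", 5005)) as sock:  # placeholder ECU
    sock.sendall(struct.pack("!I", len(payload)) + payload)
```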

