L2+ and HD Radar: A Golden Opportunity

26th April 2021
Lanna Deamer

We’re still waiting for full autonomy in cars, and it is now clear that it is much further out than originally thought. The automotive value chain was already ahead of us and is now well into plan B: a very active push from SAE Level 2, conventional ADAS, to something now being called L2+. L2+ falls a little short of L3 in that redundancy continues to depend on the human driver, which is why it will require strong driver monitoring systems.

Nevertheless, L2+ offers significant advances over current ADAS in urban driving, highway automation, lane changes and merges. And it can arrive much sooner, without the need for fundamental changes to regulation, infrastructure and social acceptance. High-Definition (HD) radar will play an important role in making this possible.

Why radar is important for L2+

Support for L2+ and beyond requires aggressive target failure rates, which in turn require multiple complementary sensors. In theory Lidar would be one of those options, but it is still expensive. That leaves camera and radar in the near term, both proven and in production use today. Radar has clear complementary advantages over cameras in low-light and bad-weather conditions. Further, L2+ will need 360° views, and therefore more sensors per car. All of which makes this an exciting time for radar sensing.

HD radar tracking and classification

The challenge for radar has been resolution. Standard products offer 12 channels, good enough for short range - to trigger emergency braking, for example - but not enough to deliver Lidar-like resolution. This is why HD radar, delivering hundreds to thousands of channels, is already coming into production from several vendors. Managing that many channels at the front end requires the same MIMO and beamforming technologies already familiar from advanced cellular communications, together with a standard component in radars, CFAR (constant false alarm rate) detection, which essentially declutters the signal of background internal and external noise.
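
To make the CFAR step concrete, here is a minimal sketch of one common variant, cell-averaging CFAR over a single range profile. It is purely illustrative: the function name, parameters and threshold scaling are assumptions for this example, not any vendor's production implementation.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=4.0):
    """Minimal 1D cell-averaging CFAR sketch.

    power : 1D array of received power per range bin
    guard : guard cells on each side of the cell under test
    train : training cells on each side used to estimate the noise floor
    scale : threshold multiplier controlling the false-alarm rate
    """
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        # Estimate local noise from training cells, skipping the guard cells
        leading = power[i - guard - train : i - guard]
        trailing = power[i + guard + 1 : i + guard + train + 1]
        noise = np.mean(np.concatenate([leading, trailing]))
        detections[i] = power[i] > scale * noise
    return detections

# Illustrative use: a noisy range profile with two synthetic targets
rng = np.random.default_rng(0)
profile = rng.exponential(scale=1.0, size=256)
profile[80] += 20.0
profile[150] += 15.0
print(np.nonzero(ca_cfar(profile))[0])
```

The point of the adaptive threshold is exactly the decluttering described above: the detection level tracks the local noise rather than a fixed value, keeping the false-alarm rate roughly constant as conditions change.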

The next stage of the front end, converting to a point cloud, requires intensive computation to create the 4D data cube (spatial and velocity coordinates) common in radar imaging. The post-processing back end is the recognition stage, which segments, tracks and classifies targets within this point cloud.
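
The data cube is typically built with a cascade of FFTs: over fast-time samples for range, over chirps for velocity, and across antenna channels for angle. The sketch below shows that pipeline for a single angle dimension, assuming FMCW ADC data shaped (channels, chirps, samples); with a 2D antenna array a second angle FFT over elevation completes the full 4D cube. The function name and shapes are illustrative assumptions.

```python
import numpy as np

def radar_cube(adc, n_angle_bins=64):
    """Minimal sketch of building a range-Doppler-angle cube from FMCW ADC data.

    adc : complex array of shape (channels, chirps, samples_per_chirp)
    Returns a power cube of shape (angle_bins, chirps, samples).
    """
    # Range FFT over the fast-time samples within each chirp
    rng_fft = np.fft.fft(adc, axis=2)
    # Doppler FFT over the slow-time (chirp) axis, centred so zero velocity is in the middle
    dop_fft = np.fft.fftshift(np.fft.fft(rng_fft, axis=1), axes=1)
    # Angle FFT across the (virtual) antenna channels, zero-padded for finer angle bins
    ang_fft = np.fft.fftshift(np.fft.fft(dop_fft, n=n_angle_bins, axis=0), axes=0)
    return np.abs(ang_fft) ** 2

# Illustrative use: random data standing in for 16 virtual channels,
# 128 chirps and 256 samples per chirp
adc = np.random.randn(16, 128, 256) + 1j * np.random.randn(16, 128, 256)
print(radar_cube(adc).shape)  # (64, 128, 256): angle x Doppler x range
```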

This back end involves multiple processing methods - azimuth/elevation analysis, target tracking using Kalman filters, and ultimately inferencing. These require intensive floating-point operations and, increasingly, AI processing methods. Together they greatly extend the value HD radar sensing can add to L2+. As this is a rapidly evolving market, builders need both the flexibility of software and high performance, for which any scalable platform has to be DSP-based.
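
As a reference for the tracking step, here is a minimal constant-velocity Kalman filter for a single target measured in 2D position. The class name, state layout and noise parameters are assumptions chosen for clarity; a production tracker would add data association, multiple motion models and track management.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter for one tracked target.

    State: [x, y, vx, vy]; measurement: radar-derived position [x, y].
    """

    def __init__(self, dt=0.05, meas_var=0.5, accel_var=1.0):
        self.x = np.zeros(4)                       # state estimate
        self.P = np.eye(4) * 10.0                  # state covariance
        self.F = np.eye(4)                         # state transition (constant velocity)
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))                  # measurement model (position only)
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * accel_var * dt        # process noise (simplified)
        self.R = np.eye(2) * meas_var              # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R    # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

# Illustrative use: track a target moving along x at roughly 10 m/s
kf = ConstantVelocityKF()
for step in range(1, 6):
    kf.predict()
    kf.update(np.array([10.0 * 0.05 * step, 0.0]) + np.random.randn(2) * 0.1)
print(kf.x)
```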

Pulling it all together for L2+

Now, back to those aggressive target failure rates. Good tracking and classification from independent sensors is a start, but it becomes much stronger through redundancy analysis between these inputs, which is what sensor fusion provides. This kind of capability is now available through technology such as the SensPro2 IP family from CEVA.
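
One simple way to picture that redundancy gain is covariance-weighted fusion of independent camera and radar position estimates: each sensor's estimate is weighted by how confident it is, so radar's strength in range and a camera's strength in bearing reinforce each other. The sketch below is a generic illustration under that assumption, not a description of CEVA's fusion pipeline.

```python
import numpy as np

def fuse_estimates(z_radar, R_radar, z_camera, R_camera):
    """Minimal covariance-weighted fusion of two independent position estimates.

    z_* : 2D position estimates from radar and camera
    R_* : their 2x2 measurement covariances
    Returns the fused estimate and its covariance.
    """
    # Information (inverse-covariance) form: the more certain sensor gets more weight
    info_radar = np.linalg.inv(R_radar)
    info_camera = np.linalg.inv(R_camera)
    P_fused = np.linalg.inv(info_radar + info_camera)
    z_fused = P_fused @ (info_radar @ z_radar + info_camera @ z_camera)
    return z_fused, P_fused

# Illustrative use: radar is accurate in range (y), camera in bearing (x)
z, P = fuse_estimates(
    np.array([2.1, 30.0]), np.diag([1.0, 0.1]),
    np.array([2.0, 31.5]), np.diag([0.1, 4.0]),
)
print(z, np.diag(P))
```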

SensPro2 provides hardware and software solutions from front end to back end and fusion: a platform of options with a dedicated radar ISA, SDK and libraries; support for multiple processing options, both fixed point and floating point; and native support for AI processing loads with a rich AI library and framework. All of this is needed to support L2+ products. SensPro2 offers a scalable range of solutions, from wearables all the way up to L2+.
