
What is next for connected and smart vehicles?

16th October 2019
Anna Flockett

As the automotive industry continues to grow, so do the companies and consumers within it. Having recently attended AutoSens in Belgium, the European automotive sensor conference, it was interesting to see what is happening in this space and all the innovation still to come in the automotive sector. Home to some of the most advanced minds in vehicle perception, ADAS and autonomous vehicles, AutoSens 2019 offered a vision of what the future potentially holds, in both the short and long term, for smart vehicles.

With the global ADAS market expected to reach $67.43bn by 2025, demand for driver safety and assistance systems is ever-growing, and has increased significantly over the past ten years. The need for driver safety in autonomous vehicles is now at an all-time high, mainly down to the increasing pace of technological innovation and growing initiatives toward vehicle automation and self-driving cars.

AutoSens has always been a community for professionals to come together to discuss science and technology in automotive electronics and to showcase the latest solutions. This year was no different, as a number of companies presented their latest work.

Outsight

Raul Bravo of Outsight, a new company launched earlier this year by Dibotics, explained at the show that the company aims to bring full situation awareness to smart machines; this includes cars, but is not restricted to vehicles.

One of the main problems with situation awareness is that you need the following four elements: localisation, perception, semantics and behaviour, all in one device, which until now was not possible. Most devices that perform situation awareness cover three of these elements, but not behaviour; Outsight has created a solution that handles all four within one device.

The idea was then applied to a real-life situation to prove how it works. Bravo explained: “Thousands of deaths occur every year from icy roads. Situation awareness means you can now understand the road, have an active understanding and perception to avoid accidents like this happening, all from within the vehicle. Through material detection we use hyperspectral sensing to detect the ice on the road.”

Outsight created a 3D semantic camera that emits light and measures the photons reflected back to the device. Bravo added: “So like colour reflects for humans to see, each material reflects in a wavelength to be identified on the road.”

3D semantic cameras minimise false perceptions. Bravo commented: “That’s what we are doing, pulling in a 3D vision of the world so we can see it in a deeper way.”

The Snow-aware ADAS with Active Hyperspectral Sensing provides full situation awareness and illustrates in a concrete way the value this can create for ADAS applications. “The assessment of road conditions and hazards in real-time is not a new subject, but we’re showing how this can be concretely achieved with our remote active detection of ice and snow based on hyperspectral imaging, in real-time,” Bravo added.

The product can see materials such as black ice, the metal of vehicles and the skin and textiles of cyclists, and differentiate between them all.
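Outsight has not detailed how its classifier works, but a common way to identify a material from a hyperspectral measurement is to compare the measured reflectance spectrum against a library of reference signatures and pick the closest match, an approach known as the spectral angle mapper. A minimal sketch of that idea, with the wavelength bands and signature values entirely invented for illustration:

```python
import numpy as np

# Hypothetical reference reflectance signatures, one value per
# sensed wavelength band (all values invented for illustration).
REFERENCE_SPECTRA = {
    "ice":     np.array([0.70, 0.55, 0.20, 0.10]),
    "asphalt": np.array([0.10, 0.12, 0.14, 0.15]),
    "metal":   np.array([0.60, 0.62, 0.64, 0.66]),
    "skin":    np.array([0.35, 0.45, 0.30, 0.25]),
}

def spectral_angle(a: np.ndarray, b: np.ndarray) -> float:
    """Angle between two spectra; smaller means more similar."""
    cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

def classify(measured: np.ndarray) -> str:
    """Return the reference material whose signature lies closest."""
    return min(REFERENCE_SPECTRA,
               key=lambda m: spectral_angle(measured, REFERENCE_SPECTRA[m]))

# A noisy measurement that should still come out as ice.
print(classify(np.array([0.68, 0.50, 0.22, 0.12])))  # -> ice
```

A production system would use many more wavelength bands and calibrate the signatures against the camera's own active illumination.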

ON Semiconductor

ON Semiconductor's Wade Appelman also took time at AutoSens to explain what the company, the lead sponsor of the event, has been working on and where it continues to innovate. Its Intelligent Sensing Group is made up of three areas: automotive, machine vision and Edge AI.

ADAS requires a fusion of sensors, not just in automotive but also in robotic transport and consumer applications. Appelman broke down the anatomy of LiDAR into the six major hardware function blocks of a LiDAR system (a simple ranging sketch follows the list):

  • Transmit
  • Receive
  • Beam Steering
  • Optics
  • Readout
  • Power and System   
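The transmit and receive blocks together perform the core time-of-flight measurement: fire a laser pulse, time how long its reflection takes to return, and convert the round trip into a distance. A minimal sketch of that arithmetic (an illustration, not something shown in the talk):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to target from a pulse's round-trip time.

    The pulse travels out and back, so the one-way distance
    is half the total path: d = c * t / 2.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A reflection arriving 667 nanoseconds after transmit puts
# the target roughly 100m away.
print(f"{tof_distance(667e-9):.1f} m")  # ~100.0 m
```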

SiPM and SPAD arrays are emerging as the detector of choice for LiDAR systems that need to:

  • Reach long distances
  • Stay compact and blend into the vehicle
  • Achieve the lowest cost

What is on the ON Semi detector horizon?

  • Higher PDE (Photon Detection Efficiency)
  • Lower noise
  • Faster recovery times
  • New single pixels

Radar sensing is an integral part of ON Semiconductor's strategy for ADAS and autonomous driving, targeting high-resolution, scalable, power-efficient and value-optimised MMICs. The company is executing on this plan and is receiving interest in its unique RF capabilities for system-level design optimisation.

Another new product for ON Semi is the AR0233AT, a 2.6MP, 1/2.5” digital image sensor for ADAS and viewing automotive camera systems, which is currently in production.

As part of the Hayabusa image sensor platform, which ranges in resolution from 1.3MP to 3.1MP, it provides a leading 120dB of high dynamic range with LED flicker mitigation. Image sensors in the Hayabusa family share the same pixel size, architecture and benefits, delivering high performance in all lighting conditions and allowing automakers to leverage the development of a core platform to create specific camera systems optimised for different vehicle lines.

ON Semi currently has the R-Series on the market: in 2018 it offered just 7% PDE and 30% crosstalk (XDT), the 2019 version delivers 10% PDE and 30% XDT, and the 2020 version is predicted to reach 15% PDE and 25% XDT. Appelman said: “Having the microlens will really help. Taking a first look into the future, our 3D ToF SPAD arrays reflect what is yet to come with improved angular resolution, better ambient light rejection and flexibility for low-cost system design.”
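Because the counts a SiPM registers scale linearly with PDE, those generational figures translate directly into signal strength. A toy photon-budget comparison, with the per-pulse photon arrival count invented purely for illustration:

```python
def detected_photons(arriving_photons: float, pde: float) -> float:
    """Counts registered by the detector; PDE is the probability
    that an arriving photon produces a count."""
    return arriving_photons * pde

ARRIVING = 1_000  # photons reaching the detector per pulse (invented)

for year, pde in [(2018, 0.07), (2019, 0.10), (2020, 0.15)]:
    print(year, detected_photons(ARRIVING, pde))
# 2018: 70, 2019: 100, 2020 (predicted): 150 counts per pulse,
# i.e. the predicted 2020 part returns 1.5x the 2019 signal.
```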

LeddarTech

LeddarTech also presented its Leddar Pixell cocoon LiDAR for autonomous vehicles at AutoSens. Vincent Racine and Esteban Velasquez explained that the transformation of urban environments has really taken off in recent years, with 55% of the world’s population now living in urban areas. The number of cars on the road worldwide is also set to double by 2040, which puts even more pressure on urban areas and creates challenges.

These challenges include:

  1. Traffic congestion
  2. Vehicle emissions
  3. Road accidents
  4. Wasted time

LeddarTech believes the solution to urban mobility is autonomous shuttles: they are easier to engineer, and more than two million ride-sharing shuttles are expected to be deployed by 2025.

An autonomous shuttle has the capacity to support and transport between four and 15 people per vehicle, which tackles a number of those challenges on its own. Its autonomous navigation runs at sub-50km/h speeds and is restricted to specific areas, using predetermined, learnt paths. Racine explained: “It will not replace cars, but will be used more for airports and theme parks, so there is still a need for domestic sensors.”

The pair explained that there is a technology gap between vulnerable road users (VRUs) and vehicles, with VRUs accounting for around half of road traffic fatalities. The anatomy of autonomous shuttles can help bring this figure down. Racine explained: “In urban environments, autonomous systems need to adjust to various obstacles such as small objects, hanging objects, parking gates and temporary roadblocks.”

Autonomous driving sensor technology includes cameras, infrared, radar, GPS and LiDAR. LiDAR needs a wide FOV to minimise the number of units required, so mechanical scanners are often used.

For geolocalisation, mechanical scanning LiDAR is the technology to consider, but it creates blind-spot zones of up to ten metres around the vehicle’s immediate surroundings, and its mechanical and motorised components lead to long-term reliability risks.

Detecting obstacles within the dead zone around the vehicle is critical to ensure safe and successful autonomous shuttles. LeddarTech’s Leddar Pixell, the cocoon LiDAR for autonomous vehicles, secures the immediate surroundings with 360-degree coverage and no dead zones or blind spots.

The Leddar Pixell is a 3D solid-state LiDAR for detection cocoon solutions in autonomous shuttles, commercial vehicles and robotaxis. Its main advantages include dependable object and VRU detection over 180 degrees, with 96 horizontal x 8 vertical segments giving 768 independent detection surfaces with simultaneous data acquisition. It has a road-ready design for superior durability and is complementary to mechanical scanning LiDAR for a complete sensing solution.

It works with 3D flash LiDARs placed at the front, back and sides of the vehicle; with each sensor covering 180 degrees, the overlapping fields of view cover the whole vehicle through 360 degrees.
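Those figures are straightforward to sanity-check: 96 x 8 segments gives the quoted 768 surfaces, 180 degrees across 96 horizontal segments works out to just under two degrees per segment, and four 180-degree sensors sweep 720 degrees in total, covering the full circle with overlap:

```python
H_SEGMENTS, V_SEGMENTS = 96, 8   # Leddar Pixell segment grid
H_FOV_DEG = 180.0                # horizontal field of view per sensor

print(H_SEGMENTS * V_SEGMENTS)           # 768 independent surfaces
print(round(H_FOV_DEG / H_SEGMENTS, 2))  # ~1.88 deg per horizontal segment

# Sensors at the front, back and both sides, each spanning 180
# degrees, sweep 4 * 180 = 720 degrees in total, so the vehicle
# gets 360-degree coverage with each direction seen twice on average.
print(4 * 180 / 360)                     # 2.0x average overlap
```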

Benefits of flash LiDAR include 3D flash illumination technology, which provides 100% scene coverage while using significantly less data than other methods.

Racine explained: “Built around LCA2 Leddar Engine technology the new Leddar Pixell is made up of the Leddar core SoC and Leddar SP Library.”

This new technology unlocks new potential (a toy illustration follows the list):

  • LiDAR data fed to the trained neural network
  • No video data input during classification
  • Neural network creating a segmentation map
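As a toy illustration of that last point (this is not LeddarTech's method, and the trained network is replaced here by simple range thresholds), a segmentation map in this setting is just a class label per detection cell, derived from LiDAR data alone with no video input:

```python
import numpy as np

# A fake 8 x 96 grid of range readings in metres, matching the
# Pixell's vertical x horizontal segment layout; values invented.
rng = np.random.default_rng(0)
ranges = rng.uniform(0.5, 40.0, size=(8, 96))

# Label each cell purely from LiDAR range, with no video data:
# 0 = clear, 1 = nearby obstacle, 2 = immediate-proximity hazard.
segmentation_map = np.zeros_like(ranges, dtype=np.int8)
segmentation_map[ranges < 10.0] = 1
segmentation_map[ranges < 2.0] = 2

print(segmentation_map.shape)                 # (8, 96): one label per segment
print(np.bincount(segmentation_map.ravel()))  # cell count per class
```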

Overall, Racine summed up the Leddar Pixell cocoon LiDAR’s key benefits: “It is optimised for ADAS and AD detection with cocoon applications, it is ideal for detection cocoons in stop-and-start situations, there is zero proximity dead zone, with no blind spots in the entire field of view, and it is road-ready technology that helps reduce maintenance, which enhances the robustness of the perception platform. Finally, it also overlaps with other sensing technology, leaving the future open for exciting things.”
