Filter unwanted noise with these AI-powered headphones

20th May 2024
Paige West

Noise-cancelling headphones are valuable for living and working in loud environments. They automatically identify background sounds and cancel them out, providing much-needed peace and quiet.

However, typical noise cancellation fails to distinguish between unwanted background sounds and crucial information, leaving headphone users unaware of their surroundings.

Shyam Gollakota, from the University of Washington, is an expert in using AI tools for real-time audio processing. His team has created a system for targeted speech hearing in noisy environments, developing AI-based headphones that selectively filter out specific sounds while preserving others.

“Imagine you are in a park, admiring the sounds of chirping birds, but then you have the loud chatter of a nearby group of people who just can’t stop talking,” said Gollakota. “Now imagine if your headphones could grant you the ability to focus on the sounds of the birds while the rest of the noise just goes away. That is exactly what we set out to achieve with our system.”

Gollakota and his team combined noise-cancelling technology with a smartphone-based neural network trained to identify 20 different environmental sound categories, including alarm clocks, crying babies, sirens, car horns, and birdsong. When a user selects one or more of these categories, the software identifies and plays those sounds through the headphones in real time while filtering out everything else.
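To make the idea concrete, here is a minimal sketch of what such class-conditioned filtering could look like, assuming a PyTorch-style separation network gated by a multi-hot vector of the user's selected sound classes. All names here (TargetSoundExtractor, SOUND_CLASSES) are hypothetical, and the toy model is a stand-in, not the team's published architecture.

```python
# Hypothetical sketch: a separation network takes a mixture waveform plus a
# multi-hot vector of user-selected sound classes and keeps only matching audio.
import torch
import torch.nn as nn

SOUND_CLASSES = ["alarm clock", "crying baby", "siren", "car horn", "birdsong"]  # 5 of the 20

class TargetSoundExtractor(nn.Module):
    """Toy stand-in for a real separation model, gated by selected classes."""
    def __init__(self, num_classes: int = len(SOUND_CLASSES), hidden: int = 64):
        super().__init__()
        self.encoder = nn.Conv1d(1, hidden, kernel_size=16, stride=8)
        self.gate = nn.Linear(num_classes, hidden)   # condition on selected classes
        self.decoder = nn.ConvTranspose1d(hidden, 1, kernel_size=16, stride=8)

    def forward(self, mix: torch.Tensor, selected: torch.Tensor) -> torch.Tensor:
        # mix: (batch, 1, samples); selected: (batch, num_classes) multi-hot
        feats = torch.relu(self.encoder(mix))
        gains = torch.sigmoid(self.gate(selected)).unsqueeze(-1)  # per-channel gains
        return self.decoder(feats * gains)  # suppress everything outside the gate

# Usage: keep birdsong, drop everything else.
model = TargetSoundExtractor()
mixture = torch.randn(1, 1, 16000)               # 1 s of audio at 16 kHz
selected = torch.zeros(1, len(SOUND_CLASSES))
selected[0, SOUND_CLASSES.index("birdsong")] = 1.0
clean = model(mixture, selected)                 # only bird-like content (once trained)
```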

Researchers augmented noise-cancelling headphones with a smartphone-based neural network to identify ambient sounds and preserve them while filtering out everything else. (Credit: Shyam Gollakota)

Making this system work seamlessly was not an easy task, however.

“To achieve what we want, we first needed a high-level intelligence to identify all the different sounds in an environment,” said Gollakota. “Then, we needed to separate the target sounds from all the interfering noises. If this is not hard enough, whatever sounds we extracted needed to sync with the user’s visual senses, since they cannot be hearing someone two seconds too late. This means the neural network algorithms must process sounds in real-time in under a hundredth of a second, which is what we achieved.”
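As a rough illustration of that latency budget, the sketch below streams audio through a model in short chunks and flags any chunk that takes longer than a hundredth of a second to process. The chunk size, sample rate, and function names are illustrative assumptions, not details from the team's system.

```python
# Illustrative sketch of the real-time constraint: each chunk of audio must
# clear the network in under 10 ms so playback stays in sync with what the
# wearer sees. All numbers and names here are assumptions.
import time
import torch

SAMPLE_RATE = 16000
CHUNK_MS = 8                                       # hypothetical hop size
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_MS // 1000     # 128 samples per chunk

@torch.no_grad()
def stream(model, audio: torch.Tensor):
    """Feed audio chunk-by-chunk and flag chunks that miss the latency budget."""
    for start in range(0, audio.shape[-1] - CHUNK_SAMPLES + 1, CHUNK_SAMPLES):
        chunk = audio[..., start:start + CHUNK_SAMPLES]
        t0 = time.perf_counter()
        out = model(chunk)
        elapsed_ms = (time.perf_counter() - t0) * 1000
        if elapsed_ms > 10:                        # the "hundredth of a second" ceiling
            print(f"chunk at sample {start} missed budget: {elapsed_ms:.2f} ms")
        yield out

# Usage with a trivial pass-through model standing in for the real network.
audio = torch.randn(1, SAMPLE_RATE)                # 1 s of audio
for processed in stream(torch.nn.Identity(), audio):
    pass                                           # would go straight to the headphone driver
```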

The team also employed this AI-powered approach to focus on human speech. Relying on similar content-aware techniques, their algorithm can identify a speaker and isolate their voice from ambient noise in real time for clearer conversations.
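A hedged sketch of how such target-speaker extraction might be structured follows: a short enrollment clip of the chosen speaker is reduced to a fixed embedding, and the separator is conditioned on that embedding to pull out the matching voice. The architecture and names are hypothetical, not the team's published model.

```python
# Hypothetical sketch: condition a separator on a speaker embedding computed
# from a short enrollment clip, mirroring the class-gated model above.
import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    """Maps an enrollment clip to a fixed-size speaker embedding."""
    def __init__(self, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Conv1d(1, emb_dim, 400, stride=160), nn.ReLU())

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        return self.net(clip).mean(dim=-1)         # average over time -> (batch, emb_dim)

class VoiceExtractor(nn.Module):
    """Separator gated by the speaker embedding instead of a class vector."""
    def __init__(self, emb_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Conv1d(1, hidden, 16, stride=8)
        self.cond = nn.Linear(emb_dim, hidden)
        self.decoder = nn.ConvTranspose1d(hidden, 1, 16, stride=8)

    def forward(self, mix: torch.Tensor, speaker_emb: torch.Tensor) -> torch.Tensor:
        feats = torch.relu(self.encoder(mix))
        gains = torch.sigmoid(self.cond(speaker_emb)).unsqueeze(-1)
        return self.decoder(feats * gains)

# Usage: enroll the target speaker, then extract their voice from a mixture.
enroll = torch.randn(1, 1, 32000)   # ~2 s clip of the target speaker
mix = torch.randn(1, 1, 16000)      # noisy conversation
emb = SpeakerEncoder()(enroll)
voice = VoiceExtractor()(mix, emb)  # isolated voice (after training)
```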

Gollakota is excited to be at the forefront of the next generation of audio devices.

“We have a very unique opportunity to create the future of intelligent hearables that can enhance human hearing capability and augment intelligence to make lives better,” said Gollakota.
