Brain-inspired algorithm could help with cocktail party problem
Boston University (BU) researchers believe they have a solution to the well-known ‘cocktail party problem’: the difficulty of following a single conversation amid overlapping voices and background noise, a challenge that is especially acute for people with hearing loss.
A brain-inspired algorithm developed at the university could help hearing aids tune out interference and isolate individual voices in a crowd. In testing, the researchers found it improved word recognition accuracy by 40 percentage points compared with current hearing aid algorithms.
“We were extremely surprised and excited by the magnitude of the improvement in performance—it’s pretty rare to find such big improvements,” said Kamal Sen, the algorithm’s developer and a BU College of Engineering associate professor of biomedical engineering. The findings were published in ‘Communications Engineering’, a Nature Portfolio journal.
An estimated 50 million Americans have some degree of hearing loss. By 2050, around 2.5 billion people globally are expected to have some form of hearing loss, according to the World Health Organization.
“The primary complaint of people with hearing loss is that they have trouble communicating in noisy environments,” added Virginia Best, a BU Sargent College of Health & Rehabilitation Sciences research associate professor of speech, language, and hearing sciences. “These environments are very common in daily life and they tend to be really important to people—think about dinner table conversations, social gatherings, workplace meetings. So, solutions that can enhance communication in noisy places have the potential for a huge impact.”
As part of the research, Best, Sen and BU biomedical engineering PhD candidate Alexander B. Boyd tested the ability of current hearing aid algorithms to cope with the noise of cocktail parties. Many hearing aids use noise reduction algorithms and directional microphones to emphasise sounds coming from the front.
“We decided to benchmark against the industry standard algorithm that’s currently in hearing aids,” said Sen. The existing algorithm “doesn’t improve performance at all; if anything, it makes it slightly worse. Now we have data showing what’s been known anecdotally from people with hearing aids.”
The new algorithm, known as BOSSA (biologically oriented sound segregation algorithm), has been patented by Sen, who hopes to connect with companies interested in licensing the technology. With Apple entering the hearing aid market, the breakthrough is timely.
“If hearing aid companies don’t start innovating fast, they’re going to get wiped out, because Apple and other start-ups are entering the market,” warned Sen.
Segregating sounds
For the last 20 years, Sen has been studying how the brain encodes and decodes sounds, looking for circuits involved in managing the cocktail party effect. He has plotted how sound waves are processed at different stages of the auditory pathway, tracking their journey from the ear to translation by the brain. Inhibitory neurons, brain cells that help to suppress certain unwanted sounds, are a core component.
“You can think of it as a form of internal noise cancellation,” he said. “If there’s a sound at a particular location, these inhibitory neurons get activated.” According to Sen, different neurons are tuned to different locations and frequencies.
This approach has been the inspiration for the new algorithm, which uses spatial cues such as the volume and timing of a sound to tune in or out.
“It’s basically a computational model that mimics what the brain does,” said Sen, who’s affiliated with BU’s centres for neurophotonics and for systems neuroscience, “and actually segregates sound sources based on sound input.”
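To make the idea of spatial-cue-based segregation concrete, here is a purely illustrative toy sketch, not the published BOSSA algorithm: it uses one timing cue, the delay between the two ear signals, to keep frequency components arriving from straight ahead and suppress the rest, loosely mimicking the inhibitory “internal noise cancellation” described above. All signals, thresholds, and parameters below are assumptions chosen for demonstration.

```python
# Toy spatial-cue mask (illustrative only; NOT the BOSSA implementation).
import numpy as np

FS = 16_000                     # sample rate (Hz)
N = FS                          # one second of audio

def delayed(x, samples):
    """Delay a signal by an integer number of samples (zero-padded)."""
    return np.concatenate([np.zeros(samples), x[:len(x) - samples]])

n = np.arange(N)
target = np.sin(2 * np.pi * 440 * n / FS)    # talker we want (straight ahead)
masker = np.sin(2 * np.pi * 1000 * n / FS)   # competing talker (off to one side)

# Simulate a timing cue: the target reaches both ears simultaneously,
# while the masker reaches the right ear 8 samples later.
itd = 8
left = target + masker
right = target + delayed(masker, itd)

# Per-frequency interaural phase difference -> implied arrival delay.
# Bins whose delay matches the target direction (near zero) are kept;
# the rest are suppressed, a crude stand-in for inhibitory tuning.
L, R = np.fft.rfft(left), np.fft.rfft(right)
freqs = np.fft.rfftfreq(N, 1 / FS)
phase_diff = np.angle(L * np.conj(R))
implied_delay = phase_diff / (2 * np.pi * np.maximum(freqs, 1e-9)) * FS
mask = (np.abs(implied_delay) < 2).astype(float)
cleaned = np.fft.irfft(mask * L, n=N)

# The cleaned signal should resemble the target far more than the masker.
corr_target = np.corrcoef(cleaned, target)[0, 1]
corr_masker = np.corrcoef(cleaned, masker)[0, 1]
```

In this sketch the “tuning” is a hard threshold on a single cue; the brain-inspired model described by Sen combines multiple cues (timing and level) across many frequency channels and locations, which is what allows it to segregate full speech rather than pure tones.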
“Ultimately, the only way to know if a benefit will translate to the listener is via behavioural studies,” added Best, “and that requires scientists and clinicians who understand the target population.”
Best helped to design a study using a group of young adults with sensorineural hearing loss, caused by genetic factors or childhood diseases. In the lab, participants wore headphones that simulated people talking from different locations. Their ability to pick out individual speakers was tested with the aid of the new algorithm, the standard algorithm, and no algorithm.
The researchers say the results provide “compelling support” for the technology’s potential to aid individuals with hearing loss in cocktail party situations. They are also testing an upgraded version that incorporates eye tracking technology to enable users to better direct their listening attention.
The algorithm may have implications beyond hearing loss.
“The [neural] circuits we are studying are much more general purpose and much more fundamental,” Sen concluded. “It ultimately has to do with attention, where you want to focus—that’s what the circuit was really built for. In the long term, we’re hoping to take this to other populations, like people with ADHD or autism, who also really struggle when there’s multiple things happening.”