01.03.2021 | 17 Adar 5781

A Hearing Aid that Isolates Background Noise

A BIU team is developing a novel hearing aid that will utilize input from the brain to allow users to focus on a specific sound source within a cacophony of stimuli


Modern living is full of stimuli competing for our attention. Focusing attention on a single speaker while ignoring various disruptions is not easy, particularly for people with hearing impairments. But what happens in a "cocktail party" scenario with several speakers in one room? For people with normal hearing, the brain helps direct attention to the right speaker. But what about people with hearing impairments who use technological aids?

This is the problem currently explored by a Bar-Ilan research team comprising Prof. Sharon Gannot, of the Kofkin Faculty of Engineering, who specializes in processing speech signals; Prof. Jacob Goldberger, also of the Kofkin Faculty, who specializes in deep learning; and Dr. Elana Zion-Golumbic, of the Gonda Multidisciplinary Brain Research Center, who specializes in the relationship between brainwaves and hearing. The goal is to develop innovative algorithms to significantly improve the ability to understand a specific speaker in complex acoustic environments.

The aim is to understand whom the person is listening to and direct the device accordingly. To do that, it is necessary to “look into their brain”. This can be done with electroencephalography (EEG), a non-invasive brain-imaging method that, in laboratory conditions, uses 64 electrodes to monitor electrical activity in the brain. “There are areas in the brain where hearing and attention are processed; if we can analyze those and understand which of the speakers the aid-wearer is listening to, we can focus the signal-processing algorithms on extracting that speaker,” explains Prof. Gannot.
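The article does not detail how attention is decoded from the EEG. A common baseline in the auditory-attention-decoding literature, shown here only as a minimal sketch (the team's actual method uses deep learning, and all function names and parameters below are hypothetical), correlates the EEG with the amplitude envelope of each candidate speaker and picks the best match:

```python
import numpy as np

def speech_envelope(audio: np.ndarray, fs: int, target_fs: int = 64) -> np.ndarray:
    """Crude amplitude envelope: rectify, then downsample to the EEG rate."""
    env = np.abs(audio)
    step = fs // target_fs
    n = (len(env) // step) * step
    # Average over non-overlapping windows as a simple low-pass + decimate.
    return env[:n].reshape(-1, step).mean(axis=1)

def decode_attention(eeg: np.ndarray, envelopes: list[np.ndarray]) -> int:
    """Return the index of the speaker whose envelope best matches the EEG.

    eeg: (n_samples, n_channels) EEG, resampled to the envelope rate.
    envelopes: one envelope per candidate speaker, each (n_samples,).
    """
    # Collapse channels to a single trace; real systems instead fit a
    # spatio-temporal decoder (ridge regression or a neural network).
    trace = eeg.mean(axis=1)
    trace = (trace - trace.mean()) / trace.std()
    scores = []
    for env in envelopes:
        env = (env - env.mean()) / env.std()
        n = min(len(trace), len(env))
        scores.append(float(np.corrcoef(trace[:n], env[:n])[0, 1]))
    return int(np.argmax(scores))
```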

It is a complex project that requires the development of algorithms that can simultaneously receive audio (from microphones in the hearing aid) and brain information (EEG), decide where the listener wishes to direct their attention, and feed the right information into the ear. “Modern hearing aids consist of a small, ear-mounted earpiece with 2-3 microphones. Usually, the wearer will have one device in each ear. All in all, we have 4-6 microphones that we can use to design a beamformer capable of extracting the desired speaker while maintaining the spatial information of the positions of all speakers in the acoustic scene,” explains Gannot, describing his part in the project. “The key question is which of all the speakers to listen to. To do that, we need an interface to the information extracted from brainwaves. We create this interface using deep learning.”
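To illustrate the beamforming idea only: the sketch below is a basic delay-and-sum beamformer, not the team's algorithm (hearing-aid beamformers are typically far more sophisticated), and the per-microphone delays are assumed to be known from the array geometry:

```python
import numpy as np

def delay_and_sum(mic_signals: np.ndarray, fs: int, delays: np.ndarray) -> np.ndarray:
    """Steer a microphone array toward one speaker by aligning and averaging.

    mic_signals: (n_mics, n_samples) time-domain recordings.
    delays: per-microphone propagation delays (seconds) from the chosen
            speaker, assumed here to be estimated from the array geometry.
    """
    n_mics, n_samples = mic_signals.shape
    spectra = np.fft.rfft(mic_signals, axis=1)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    # Phase shifts that undo each microphone's delay for the target speaker;
    # sound from that direction adds coherently, other directions do not.
    steering = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    aligned = spectra * steering
    return np.fft.irfft(aligned.mean(axis=0), n=n_samples)
```

Once the attention decoder selects a speaker, their estimated delays are fed to the beamformer, which emphasizes that speaker's voice in the signal delivered to the ear.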

The project received a NIS 1.8 million grant from the Israel Ministry of Science and Technology. The researchers hope to create a super-sophisticated hearing aid that, in addition to microphones, is also equipped with two EEG sensors. “At a later stage,” says Gannot, “the team would like to add another feature that monitors eye movements – the direction in which we are looking. At the end of the day,” he says, “our goal is to create a hearing aid that best imitates natural hearing.”