Final exam: Mobile Ecological Momentary Assessment based Hearing Aid Evaluations

April 24, 2017 - 10:00am
2390 UCC

PhD Candidate: Shabih Hasan


Hearing loss can significantly hinder an individual’s ability to engage socially and, when left untreated, can lead to anxiety, depression, and even dementia. The most common type of hearing loss is sensorineural hearing loss, which is treated using hearing aids (HAs). However, a significant fraction of individuals who might benefit from HAs do not use them, and the satisfaction rate of those who do is only 30-35%. Today, we have only a limited understanding of the factors that contribute to these low adoption and satisfaction rates. This stems in part from the limitations of existing laboratory-based assessment methods, which cannot accurately predict the performance of HAs in the real world because they do not fully reproduce the complexity of real-world environments.

In this talk I shall highlight four core contributions of my PhD thesis: 

  • The development of new mobile ecological momentary assessment (mEMA) methods for assessing HAs in the real world. Our approach is based on the insight that HA performance is intrinsically dependent on the context in which a HA is used. A context includes characteristics of the listening activity, social context, and acoustic environment. To evaluate this hypothesis, we have developed AudioSense, a system that uses mobile phones to jointly characterize the context of users and the performance of HAs.
  • We provide the first characterization of the auditory lifestyles of hearing aid users and of the relationships that exist between context and hearing aid outcomes.
  • We utilize the subjective data collected using AudioSense to build novel models that can predict the success of hearing aid prescriptions for new and experienced users. We also quantitatively demonstrate the importance of collecting contextual information for evaluating hearing aids.
  • We use the objective audio data collected with AudioSense to predict contextual information, such as acoustic activity and noise level, providing the groundwork for future context-sensitive mEMA.
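As a minimal illustration of the last contribution, and not a description of AudioSense's actual pipeline, a frame-level noise estimate can be derived from recorded audio as a root-mean-square (RMS) level in decibels; the function name and full-scale reference below are assumptions for the sketch:

```python
import math

def frame_rms_db(samples, ref=1.0):
    """RMS level of one audio frame, in dB relative to `ref`.

    `samples` is a sequence of floating-point amplitudes,
    e.g. normalized phone-microphone samples in [-1, 1].
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms / ref)

# A full-scale 440 Hz sine sampled at 16 kHz for one second spans an
# integer number of periods, so its RMS is exactly 1/sqrt(2),
# i.e. about -3.01 dB relative to full scale.
sine = [math.sin(2 * math.pi * 440 * n / 16000) for n in range(16000)]
print(round(frame_rms_db(sine), 2))  # → -3.01
```

Per-frame levels like this, aggregated over a recording, give a simple proxy for the noise level of a listening situation; a deployed system would additionally need microphone calibration to report absolute dB SPL.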

Advisor: Octav Chipara