PhonAID: Phoniatrics Artificial Intelligence Detection Tool

PhonAID integrates an AI model for voice disorder classification into a user-friendly application, aiming to facilitate early detection of voice disorders in primary care settings.

Objective
Through PhonAID, our objective is to transfer specialized knowledge from secondary care to primary care, enabling early detection of voice disorders. Achieving this goal involves implementing an application equipped with an AI model.
Methodology
Audio recordings of speech tasks are combined with clinical data and questionnaires as input to an AI model that classifies voice disorders into categories. A population of individuals with a healthy voice will constitute a separate category to confirm the absence of a voice disorder. The AI model will be implemented in a user-friendly application for patients and general practitioners. Patients will enter clinical data, answer questionnaires, and perform several speech tasks, on which the AI model will make a prediction.
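The pipeline described above (speech-task audio + clinical data + questionnaire answers combined into one input for a classifier) can be sketched as follows. This is a minimal illustrative sketch only: the feature names, example values, and the toy nearest-centroid rule are stand-ins, not the project's actual features or model.

```python
# Illustrative sketch of a multimodal voice-disorder classifier input.
# All features, values, and the classifier are hypothetical stand-ins.
import math

def acoustic_features(samples):
    """Two basic features of a mono waveform: RMS energy and zero-crossing rate."""
    n = len(samples)
    rms = math.sqrt(sum(s * s for s in samples) / n)
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / (n - 1)
    return [rms, zcr]

def feature_vector(samples, questionnaire_scores, clinical_values):
    # Concatenate the three input modalities described in the methodology
    # into a single feature vector for the classifier.
    return acoustic_features(samples) + list(questionnaire_scores) + list(clinical_values)

def nearest_centroid(x, centroids):
    """Label of the closest class centroid (a toy stand-in for the AI model)."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Toy example: a synthetic 100 Hz tone stands in for a speech recording,
# plus one (hypothetical) questionnaire score and one clinical value.
wave = [math.sin(2 * math.pi * 100 * t / 8000) for t in range(8000)]
x = feature_vector(wave, questionnaire_scores=[12], clinical_values=[54])
centroids = {
    "healthy voice": [0.7, 0.025, 10, 50],
    "voice disorder": [0.3, 0.10, 25, 60],
}
print(nearest_centroid(x, centroids))  # prints "healthy voice"
```

In practice such a system would use far richer acoustic features and a trained model; the point of the sketch is only the fusion of the three input types into one prediction.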
Impact and future directions
The voice serves as a crucial means of communication, and its impairment can lead to social challenges and hinder professional performance. Additionally, a voice disorder may indicate an underlying lung or neck tumor. Early detection is strongly advised to preserve quality of life and enhance patient outcomes.
General info and contact
Keywords: Voice disorder, AI, voice clinic, primary care, mobile health
RADar project research lead: M.Sc. Louise Berteloot
RADar project researchers: M.Sc. Louise Berteloot, M.Sc. Melanie Vandaele
Principal investigator: MD Lieve Delsupehe
Timeline: 2024-2025
Status: In progress / research ongoing
Publications / presentations: poster and oral presentation at ELS Congress Warsaw 2025
Partners: European Institute for Otorhinolaryngology (EIORL), Goomyx, ELG De Piramide
Funding: Vlaio O&O
