The sounds that permeate our daily lives - from speech and breathing patterns to heartbeat rhythms - contain a wealth of physiological information that could yield valuable insights into human health and well-being, for instance into cardiac and pulmonary conditions. What’s more, the audio signals that capture these sounds can now be acquired easily from consumer devices equipped with multiple microphones, such as smart speakers, smartphones and other Internet of Things (IoT) devices. This is the field in which Dr. Aaqib Saeed will combine his longstanding research interests in human-centric AI, self-supervised learning, federated learning and audio understanding into applications that improve personal health - a mission he will pursue as the winner of the AiNed Fellowship Grant.
“I am thrilled and honored to be selected as an AiNed Fellowship Grant recipient. This prestigious award will catalyze my mission to advance decentralized AI for revolutionizing audio-based health monitoring. By leveraging the power of federated learning, I aim to transform sounds into vital clinical insights while safeguarding user privacy. My team and I are eager to push the boundaries of health diagnostics and lead the charge towards a new era of collaborative and trustworthy AI,” says Dr. Saeed.
In the ‘Private Ears, Shared Insights: Scaling Clinical Audio Understanding with Federated Learning’ project, Dr. Saeed will develop fundamental decentralized AI techniques that can systematically analyze distributed audio data and build data-driven models capable of attaining clinical precision. Until now, the development of such techniques has been hindered by limited access to audio data owing to its private nature. The project will therefore focus on safe and fair methods that address regulatory requirements (such as the EU AI Act and the General Data Protection Regulation) and, most critically, ensure the privacy of users. Ultimately, this will advance collaborative AI in a real-world context to tackle health-related challenges at a national scale.
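To give a flavor of the underlying mechanics, the sketch below illustrates federated averaging (FedAvg), a standard aggregation scheme in federated learning: each device trains a model on its own private data and shares only the resulting parameters, which a server combines into a global model. This is a minimal illustrative example; the toy linear classifier, placeholder data and parameter choices are assumptions for demonstration, not the project’s actual models or methods.

```python
# Minimal federated averaging (FedAvg) sketch. Illustrative only:
# the "audio features" below are random placeholders standing in for
# features that, in practice, would stay on each user's device.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, features, labels, lr=0.1, epochs=5):
    """One client's training pass; raw data never leaves the device."""
    w = weights.copy()
    for _ in range(epochs):
        # Logistic-regression gradient step on the client's private data.
        preds = 1.0 / (1.0 + np.exp(-features @ w))
        grad = features.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def fedavg(client_weights, client_sizes):
    """Server-side aggregation: average models weighted by sample count."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Simulate three clients, each holding private placeholder features/labels.
dim = 16
global_w = np.zeros(dim)
clients = [(rng.normal(size=(n, dim)), rng.integers(0, 2, size=n).astype(float))
           for n in (40, 80, 120)]

for _round in range(10):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fedavg(updates, [len(y) for _, y in clients])
```

Because only model parameters cross the network, the server never observes the underlying recordings, which is the property that makes this family of techniques attractive for privacy-sensitive health audio.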