Staff shortages and the constant drive to provide high-quality medical care: these are just two of the most important reasons why the application of artificial intelligence (AI) in healthcare is expected to increase sharply in the coming years. By launching the first healthcare AI Ethics Lab, Erasmus MC and TU Delft are putting the focus on ethically responsible and clinically relevant AI that will positively impact both patient care and healthcare workers.
In the future, will doctors discontinue medical treatment based on information provided by a computational model? This may be one of the most difficult questions regarding the application of AI in healthcare. But there are many more, and less formidable, questions. For example, whether it will be safe for a patient recovering from surgery to be discharged a few days earlier - a decision that both benefits the patient and frees up hospital resources. Or whether an ICU nurse assisted by AI can provide high-quality care to more patients.
It is imperative that the underlying AI models, which support doctors in making such medical decisions, provide ethically responsible recommendations. "The World Health Organization has identified six core principles for AI in healthcare, such as a clear allocation of responsibilities and ensuring fairness and applicability for each individual patient," says Stefan Buijsman, Assistant Professor of Ethics at TU Delft. Jeroen van den Hoven, director of the TU Delft Digital Ethics Centre, contributed to the WHO AI principles. "The major challenge is that it is often not self-evident what it means for an AI model to be fair, or how you can guarantee such fairness."
Safe and demonstrably beneficial
The Responsible and Ethical AI in Healthcare Lab (REAiHL), a collaboration between Erasmus MC, TU Delft, and software company SAS, aims to answer these questions. "The clinical expertise of Erasmus MC is in the lead - they provide the use cases and will be the ones applying the AI models in clinical practice," Buijsman says. "For more than two decades now, TU Delft has been at the forefront of digital ethics - how to translate ethical values into design requirements for engineers." In addition to responsible design, TU Delft will also play an important role in demonstrating the clinical added value of developed AI models.
"On the one hand, this involves demonstrating the positive impact on patient care," says Jacobien Oosterhoff, Assistant Professor of Artificial Intelligence for Healthcare Systems at TU Delft. "Much is already known about how to safely test rockets to mars in a remote area. But there still are many open questions when it comes to safely testing AI for patient care. A second focus is the effective integration of AI-models into the clinical workflow, ensuring that doctors and nurses feel truly supported. Our new lab is dedicated to answering these open questions. Our approach, involving doctors, engineers, nurses, data scientists, and ethicists, provides a unique synergy."
A hospital-wide framework
The new AI Ethics Lab was initiated by internist-intensivist Michel van Genderen from Erasmus MC. Diederik Gommers, Professor of Intensive Care Medicine at Erasmus MC, is also closely involved. "Initially, the new AI Ethics Lab will focus on developing best practices for the Intensive Care Unit," Buijsman says. "But our ultimate goal is to develop a generalized framework for the safe and ethical application of AI throughout the entire hospital. We therefore expect to soon start addressing use cases from other clinical departments as well."
Impact for a Better Society
The collaboration in the AI Ethics Lab is in perfect alignment with the TU Delft research vision: Impact for a better society. Currently, less than two percent of all AI studies find their way into the clinic. The researchers expect that developing guidelines for technically sound, ethically responsible, clinically relevant, and practical AI models will considerably increase the impact of these models in healthcare.
REAiHL
The Responsible and Ethical AI in Healthcare Lab (REAiHL) is a collaboration between Erasmus MC, TU Delft and software company SAS, and is located at the DataHub of Erasmus MC. Its five PhD students, together with nurses, medical doctors, data scientists, data engineers and ethicists, will develop guidelines for the development and implementation of ethically responsible and clinically relevant AI for healthcare. REAiHL is an ICAI lab (Innovation Center for Artificial Intelligence): a research collaboration between industry, government or non-profit partners, and knowledge institutes. ICAI labs must meet requirements for data, expertise and capacity, and are expected to operationalize their outcomes for the real world. REAiHL is the ninth ICAI lab in which TU Delft collaborates with partners and other knowledge institutes.
TU Delft Digital Ethics Centre
In 2022, TU Delft opened the Digital Ethics Centre, where researchers, government agencies and companies collaboratively investigate the ethical implications of AI and digitalisation, such as fairness, safety and transparency, and where research yields practical solutions and applications.
More about AI & ethics at TU Delft
At TU Delft, we believe that AI technology is of great importance in ensuring a more sustainable, safer, and healthier future. We research, design, and develop AI technology and study its application in society. AI technology plays a key role in each of our eight faculties and is an integral part of the student curriculum. We create impact for a better society through education, research and innovation in AI. Read more about our education, research and innovation in AI, Data & Digitalisation at: www.tudelft.nl/ai
For questions and additional information
Fien Bosman, press officer Health & Care TU Delft: f.j.bosman@tudelft.nl / +31624953733