Artificial intelligence engineer Hilde Weerts. Photo: Bart van Overbeeke
AI engineer Hilde Weerts on fairness of algorithms

Ever since ChatGPT hit the scene, all eyes have been fixed on the meteoric development of Artificial Intelligence. Experts around the world are expressing concerns and speculating about what these large language models may lead to. In this series, Cursor and Wim Nuijten, scientific director of EAISI, talk to TU/e researchers about their perspectives on the future of AI. Today we present part three: Hilde Weerts, artificial intelligence engineer at Mathematics and Computer Science. Her research centers on Explainable AI and fairness. Is it enough to know that an AI system produces reliable results, or should you also be able to explain why that is the case and what the results are based on? There is some doubt as to whether the latter is possible for models such as ChatGPT.