On January 30th, Mälardalen University held a seminar on Trustworthy AI, giving speakers and participants the opportunity to discuss recent advancements and challenges in the field of Artificial Intelligence.
Among the topics covered, much of the discussion centered on the role and application of AI in industry and research environments, the technical aspects of ‘reasoning’ and the learning processes involved, and the ethical issues arising from the use of AI. The seminar delved into the foundational challenges of ensuring the trustworthiness of current AI systems. Prof. Mobyen Uddin Ahmed, coordinator of the TRUSTY project, moderated the event, which featured five speakers sharing insights from ongoing research on related topics.
Prof. Shahina Begum focused on the crucial aspect of ‘Explainability’ in Trustworthy AI. She opened her presentation by tracing the origins of Explainable AI (XAI) for the audience, then moved on to discuss current developments and her involvement in ongoing projects, including the SESAR projects ARTIMATION and TRUSTY.
Prof. Rafia Inam presented on Explainable AI in the Telecom industry, proposing an approach that combines explainability for data insights, feature analysis, machine learning, and machine reasoning.
Prof. Kerstin Bach explored ‘Trustworthy AI Applications’ through the NorwAI initiative, emphasizing the development of decision support systems for Trustworthy AI by integrating lawful, ethical, and robust AI principles.
Prof. Mark Dougherty discussed ethical principles for AI, covering aspects such as fairness, bias, trust, transparency, accountability, social benefit, privacy, and security.
Prof. Fredrik Heintz delved into the integration of learning and reasoning in Trustworthy AI through his ongoing projects TrustLLM and TAILOR. He also addressed key research challenges outlined in EU regulations and highlighted the importance of both human and computational thinking abilities.