This paper investigates decision-making in Air Traffic Control (ATC) and explores how Machine Learning (ML), Explainable AI (XAI), and visualization tools can enhance transparency and decision support. The study addresses challenges in AI adoption, focusing on user trust, acceptance, and alignment with human reasoning. By providing interpretable explanations and reliable AI outputs, the research demonstrates ways to support air traffic controllers in safety-critical scenarios, contributing to safer and more effective Air Traffic Management.
This work was presented at the 16th International Conference on Agents and Artificial Intelligence (ICAART 2024), held in Rome, Italy, from 24 to 26 February 2024, and published in its proceedings.
DOI: 10.5220/0012471900003636