How does an air traffic controller’s brain react when working with AI? In this episode, Prof. Giulia Cartocci from Sapienza University shares insights from TRUSTY’s test and validation exercises — showing why explainability is essential for meaningful human–AI collaboration.
In the third episode of our Inside TRUSTY series, we meet Prof. Giulia Cartocci, assistant professor at Sapienza Università di Roma.
Together with Prof. Pietro Aricò and Dr. Elizabeth Humm (Deep Blue), she led the test and validation exercises in TRUSTY. Their challenge was both fascinating and crucial:
👉 to explore how air traffic controllers’ brain activity shifts when they interact with AI systems designed with different levels of trust.
The results reveal a clear message. When AI performs with high accuracy but low transparency, controllers tend to disengage: “I don’t know how you’re doing this, but it works — so go ahead.” However, this kind of blind reliance carries risks. In contrast, explainability allows controllers to remain engaged, fostering cooperation rather than passive acceptance.
These insights are helping TRUSTY design AI solutions that support human expertise rather than replace it. By studying controllers' brain responses, the project is paving the way for more effective, human-centric approaches to AI in aviation.
▶️ Watch the full interview and discover how TRUSTY is contributing to the future of human–AI teamwork in air traffic management.