Workshop 1 - Dr. Ali Ghandour
Dr. Ali Ghandour, Head of GEOAI Research Group

Dr. Abdul Karim Gizzini, Associate Professor
Short Abstract / Description
Artificial intelligence (AI) has rapidly advanced across numerous domains, including Earth observation, where AI-driven methods deliver state-of-the-art performance in remote sensing tasks such as object detection, classification, and segmentation. Despite these successes, most deep learning models remain opaque: they provide little insight into the data representations, learned features, and decision-making processes that drive their predictions. This lack of transparency limits trust, hinders adoption in operational settings, and poses challenges for domain experts who must interpret or validate model outputs. Explainable Artificial Intelligence (XAI) has emerged as a critical field for addressing these limitations by developing techniques that make AI systems more interpretable, trustworthy, and accountable.

This workshop will introduce participants to the fundamental principles of XAI, covering both theoretical foundations and practical methodologies. Through interactive discussions and demonstrations, attendees will explore state-of-the-art XAI techniques and their relevance to remote sensing applications. The workshop concludes with a hands-on exercise in which participants apply XAI methods to a building footprint segmentation use case based on high-resolution satellite imagery. Participants will gain practical experience in interpreting model behavior and understanding how explainability can enhance the reliability, transparency, and operational deployment of AI systems. The session will also present methodologies for evaluating the efficacy and performance of various XAI techniques.
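To give a flavor of the kind of model-agnostic explainability probe the hands-on session deals with, the sketch below implements occlusion sensitivity on a toy stand-in for a footprint segmentation model. This is an illustrative assumption, not the workshop's actual exercise: the `toy_predict` function, the patch size, and the scoring rule are all hypothetical placeholders for a real segmentation network and imagery.

```python
import numpy as np

def occlusion_sensitivity(predict, image, patch=4, baseline=0.0):
    """Slide an occluding patch over the image and record how much the
    model's total predicted 'building' score drops. Large drops mark
    regions the model relies on -- a simple, model-agnostic XAI probe."""
    base_score = predict(image).sum()
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = base_score - predict(occluded).sum()
    return heat

# Hypothetical stand-in for a segmentation model: 'buildings' are bright pixels.
toy_predict = lambda img: (img > 0.5).astype(float)

img = np.zeros((8, 8))
img[1:4, 1:4] = 1.0  # one bright 'building' in the upper-left quadrant
heat = occlusion_sensitivity(toy_predict, img, patch=4)
# heat peaks in the cell covering the building and is zero elsewhere
```

In practice the same loop structure applies to a real network's output mask; gradient-based methods such as Grad-CAM are cheaper per explanation, while occlusion probes remain useful as a sanity check because they require no access to model internals.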

