Interpretable AI to identify biases and build trust with end users - Application to Sonar Seabed Segmentation

Tags: AI, deep learning, sonar, seabed segmentation

5 November 2024, 13:30-14:30

Yoann Arhant (CISS)

Deep learning models can be powerful tools, but they often fail unexpectedly on degenerate data, which is all the more critical in military applications. It is therefore crucial to check and understand model outputs through interpretability methods. In this webinar, we will explore methods such as saliency maps and uncertainty estimation to interpret the results of seabed semantic segmentation from high-resolution Synthetic Aperture Sonar (SAS) images, specifically for Mine Countermeasures (MCM). These interpretability methods not only help identify biases in both the data and the model, leading to better performance, but also help build trust with end users. Additionally, the information gained can feed improved data curation and model retraining strategies, such as active learning, making the whole process more effective.
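To give a flavour of the kind of analysis the webinar covers, below is a minimal sketch of two of the named methods, gradient-based saliency and Monte Carlo dropout uncertainty, applied to a toy segmentation network. PyTorch, the tiny model, the input shape, and the number of seabed classes are all illustrative assumptions, not the speaker's actual SAS pipeline.

```python
# Minimal sketch: gradient saliency + MC-dropout uncertainty for segmentation.
# The tiny ConvNet, input size, and 4-class setup are placeholders only.
import torch
import torch.nn as nn

# Hypothetical stand-in for a seabed segmentation network (e.g., a U-Net).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(0.2),                       # dropout enables MC sampling below
    nn.Conv2d(16, 4, 3, padding=1),          # 4 seabed classes, assumed
)

sas_image = torch.rand(1, 1, 64, 64)         # placeholder for one SAS tile

# --- Saliency map: gradient of the winning class scores w.r.t. the input ---
model.eval()
x = sas_image.clone().requires_grad_(True)
logits = model(x)                            # (1, 4, 64, 64)
pred = logits.argmax(dim=1)                  # per-pixel predicted class
score = logits.gather(1, pred.unsqueeze(1)).sum()  # sum of winning logits
score.backward()
saliency = x.grad.abs().squeeze()            # high values = influential pixels

# --- Uncertainty: Monte Carlo dropout (keep dropout active at test time) ---
model.train()                                # re-enables Dropout2d
with torch.no_grad():
    probs = torch.stack(
        [model(sas_image).softmax(dim=1) for _ in range(20)]
    )                                        # (20, 1, 4, 64, 64)
mean_probs = probs.mean(dim=0)
# Predictive entropy per pixel: high entropy flags unreliable regions.
entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=1).squeeze(0)

print(saliency.shape, entropy.shape)         # torch.Size([64, 64]) twice
```

In practice, overlaying the saliency and entropy maps on the sonar image is what surfaces biases (e.g., the model attending to acquisition artefacts rather than seabed texture) and flags low-confidence regions worth sending back for curation or active-learning queries.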
