IRT Saint Exupéry - Institut de Recherche Technologique
Preprint / Working paper, Year: 2023

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis

Abstract

A variety of methods have been proposed to explain how deep neural networks make their decisions. Key to these approaches is the need to sample the pixel space efficiently in order to derive importance maps. However, the sampling methods used to date have been shown to introduce biases and other artifacts, leading to inaccurate estimates of the importance of individual pixels and severely limiting the reliability of current explainability methods. Unfortunately, the alternative (exhaustively sampling the image space) is computationally prohibitive. In this paper, we introduce EVA (Explaining using Verified perturbation Analysis), the first explainability method guaranteed to explore a perturbation space exhaustively. Specifically, we leverage the beneficial properties of verified perturbation analysis (time efficiency, tractability, and guaranteed complete coverage of a manifold) to efficiently characterize the input variables that are most likely to drive the model's decision. We evaluate the approach systematically and demonstrate state-of-the-art results on multiple benchmarks.
Main file: Formal Explainability.pdf (12.31 MB)
Origin: files produced by the author(s)

Dates and versions

hal-03576300, version 1 (15-02-2022)
hal-03576300, version 2 (18-03-2023)

Identifiers

  • HAL Id: hal-03576300, version 2

Cite

Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, et al. Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis. 2023. ⟨hal-03576300v2⟩
