Cross-domain saliency maps: interpretable attributions for EEG/MEG/LFP models (PyTorch & TensorFlow library)

We are introducing a saliency map method that produces meaningful attributions for time-series ML models. Instead of restricting explanations to the time domain, our approach generates saliency maps in relevant domains such as frequency or ICA components, improving interpretability when applying deep models to EEG/MEG/LFP.

*Cross-Domain Saliency Maps* is an open-source toolkit that:

- Provides frequency- and ICA-domain attributions out of the box
- Extends to any invertible transform with a differentiable inverse
- Works plug-and-play with PyTorch and TensorFlow, no retraining required
- Demonstrates utility on EEG seizure detection and other datasets

*Get started:*

*What does your deep model see in your EEG? (Colab demo)*
https://colab.research.google.com/drive/1mJmLGzgJaGFJ50Q3BmNVeUECtZQzFLQP#sc...

*Get the code & run it on your data (GitHub repo)*
https://github.com/esl-epfl/cross-domain-saliency-maps

*Read the full story (arXiv preprint)*
https://arxiv.org/pdf/2505.13100

We would be happy to hear your thoughts and experiences applying this to neural data.
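To make the core idea concrete, here is a minimal, hypothetical sketch of cross-domain attribution in the frequency domain: the input is re-expressed through its transform coefficients, passed back to the time domain via a differentiable inverse (torch.fft.irfft), and gradients are taken with respect to the coefficients. This is a generic illustration of the technique, not the toolkit's actual API; the function name `frequency_saliency` and the gradient-times-input attribution rule are our own assumptions here.

```python
import torch

def frequency_saliency(model, x, target_class):
    """Hypothetical gradient-times-input saliency in the frequency domain.

    model: any differentiable PyTorch classifier taking (batch, channels, time) input.
    x:     a single input window, shape (channels, time).
    """
    # Forward transform: time -> frequency coefficients.
    X = torch.fft.rfft(x).clone().detach().requires_grad_(True)
    # Differentiable inverse transform back to the time domain.
    x_rec = torch.fft.irfft(X, n=x.shape[-1])
    # Score for the class of interest, backpropagated to the frequency coefficients.
    score = model(x_rec.unsqueeze(0))[0, target_class]
    score.backward()
    # Real-valued attribution per channel and frequency bin
    # (gradient-times-input analogue for complex coefficients).
    return (X.grad.conj() * X).real

# Usage sketch (assuming `model` is a trained seizure classifier and
# `eeg` is a (channels, time) tensor):
# saliency = frequency_saliency(model, eeg, target_class=1)
```

Swapping `torch.fft.rfft`/`irfft` for any other invertible transform with a differentiable inverse (e.g. an ICA unmixing/mixing pair) gives attributions in that domain instead; see the repository and preprint for the method the library actually implements.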