Improving the Interpretability of fMRI Decoding using Deep Neural Networks and Adversarial Robustness

23 April 2020
Patrick McClure, Dustin Moraczewski, K. Lam, Adam G. Thomas, Francisco Pereira
Communities: FAtt, AAML
arXiv: 2004.11114

Papers citing "Improving the Interpretability of fMRI Decoding using Deep Neural Networks and Adversarial Robustness"

  • Modular Training of Neural Networks aids Interpretability
    Satvik Golechha, Maheep Chaudhary, Joan Velja, Alessandro Abate, Nandi Schoots
    04 Feb 2025
  • Explainable Deep Learning: A Field Guide for the Uninitiated
    Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran
    Communities: AAML, XAI
    30 Apr 2020
  • Analyzing Neuroimaging Data Through Recurrent Deep Learning Models
    A. Thomas, H. Heekeren, K. Müller, Wojciech Samek
    23 Oct 2018
  • Intriguing properties of neural networks
    Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
    Communities: AAML
    21 Dec 2013