ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2109.01401 · Cited By

CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models

3 September 2021
Arjun Reddy Akula
Keze Wang
Changsong Liu
Sari Saba-Sadiya
Hongjing Lu
S. Todorovic
J. Chai
Song-Chun Zhu

Papers citing "CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models"

50 / 60 papers shown
Accurate Explanation Model for Image Classifiers using Class Association Embedding
Ruitao Xie
Jingbang Chen
Limai Jiang
Rui Xiao
Yi-Lun Pan
Yunpeng Cai
109
4
0
31 Dec 2024
Towards Unifying Evaluation of Counterfactual Explanations: Leveraging Large Language Models for Human-Centric Assessments
M. Domnich
Julius Valja
Rasmus Moorits Veski
Giacomo Magnifico
Kadi Tulver
Eduard Barbu
Raul Vicente
LRM
ELM
53
4
0
28 Oct 2024
MindCraft: Theory of Mind Modeling for Situated Dialogue in Collaborative Tasks
Cristian-Paul Bara
Sky CH-Wang
J. Chai
82
64
0
13 Sep 2021
Words aren't enough, their order matters: On the Robustness of Grounding Visual Referring Expressions
Arjun Reddy Akula
Spandana Gella
Yaser Al-Onaizan
Song-Chun Zhu
Siva Reddy
ObjD
35
52
0
04 May 2020
Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense
Yixin Zhu
Tao Gao
Lifeng Fan
Siyuan Huang
Mark Edmonds
...
Chi Zhang
Siyuan Qi
Ying Nian Wu
J. Tenenbaum
Song-Chun Zhu
68
129
0
20 Apr 2020
X-ToM: Explaining with Theory-of-Mind for Gaining Justified Human Trust
Arjun Reddy Akula
Changsong Liu
Sari Saba-Sadiya
Hongjing Lu
S. Todorovic
J. Chai
Song-Chun Zhu
31
18
0
15 Sep 2019
Interpretable Counterfactual Explanations Guided by Prototypes
A. V. Looveren
Janis Klaise
FAtt
38
380
0
03 Jul 2019
What Does BERT Look At? An Analysis of BERT's Attention
Kevin Clark
Urvashi Khandelwal
Omer Levy
Christopher D. Manning
MILM
170
1,586
0
11 Jun 2019
Counterfactual Visual Explanations
Yash Goyal
Ziyan Wu
Jan Ernst
Dhruv Batra
Devi Parikh
Stefan Lee
CML
49
510
0
16 Apr 2019
Pixel-Adaptive Convolutional Neural Networks
Hang Su
Varun Jampani
Deqing Sun
Orazio Gallo
Erik Learned-Miller
Jan Kautz
57
287
0
10 Apr 2019
Natural Language Interaction with Explainable AI Models
Arjun Reddy Akula
S. Todorovic
J. Chai
Song-Chun Zhu
42
23
0
13 Mar 2019
Attention is not Explanation
Sarthak Jain
Byron C. Wallace
FAtt
78
1,307
0
26 Feb 2019
Interpretable CNNs for Object Classification
Quanshi Zhang
Xin Eric Wang
Ying Nian Wu
Huilin Zhou
Song-Chun Zhu
32
54
0
08 Jan 2019
Mining Interpretable AOG Representations from Convolutional Networks via Active Question Answering
Quanshi Zhang
Ruiming Cao
Ying Nian Wu
Song-Chun Zhu
28
14
0
18 Dec 2018
Metrics for Explainable AI: Challenges and Prospects
R. Hoffman
Shane T. Mueller
Gary Klein
Jordan Litman
XAI
50
721
0
11 Dec 2018
Open the Black Box Data-Driven Explanation of Black Box Decision Systems
D. Pedreschi
F. Giannotti
Riccardo Guidotti
A. Monreale
Luca Pappalardo
Salvatore Ruggieri
Franco Turini
82
38
0
26 Jun 2018
Generating Counterfactual Explanations with Natural Language
Lisa Anne Hendricks
Ronghang Hu
Trevor Darrell
Zeynep Akata
FAtt
27
99
0
26 Jun 2018
On the Robustness of Interpretability Methods
David Alvarez-Melis
Tommi Jaakkola
43
524
0
21 Jun 2018
Network Transplanting
Quanshi Zhang
Yu Yang
Ying Nian Wu
Song-Chun Zhu
OOD
30
5
0
26 Apr 2018
Modeling Others using Oneself in Multi-Agent Reinforcement Learning
Roberta Raileanu
Emily L. Denton
Arthur Szlam
Rob Fergus
54
200
0
26 Feb 2018
Machine Theory of Mind
Neil C. Rabinowitz
Frank Perbet
H. F. Song
Chiyuan Zhang
S. M. Ali Eslami
M. Botvinick
AI4CE
97
470
0
21 Feb 2018
Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
Amit Dhurandhar
Pin-Yu Chen
Ronny Luss
Chun-Chen Tu
Pai-Shun Ting
Karthikeyan Shanmugam
Payel Das
FAtt
84
587
0
21 Feb 2018
Do deep nets really need weight decay and dropout?
Alex Hernández-García
Peter König
39
27
0
20 Feb 2018
Model compression via distillation and quantization
A. Polino
Razvan Pascanu
Dan Alistarh
MQ
57
722
0
15 Feb 2018
Visual Interpretability for Deep Learning: a Survey
Quanshi Zhang
Song-Chun Zhu
FaML
HAI
69
812
0
02 Feb 2018
Interpreting CNNs via Decision Trees
Quanshi Zhang
Yu Yang
Ying Nian Wu
Song-Chun Zhu
FAtt
32
323
0
01 Feb 2018
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim
Martin Wattenberg
Justin Gilmer
Carrie J. Cai
James Wexler
F. Viégas
Rory Sayres
FAtt
129
1,817
0
30 Nov 2017
Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
Sandra Wachter
Brent Mittelstadt
Chris Russell
MLAU
48
2,332
0
01 Nov 2017
Interpretable Convolutional Neural Networks
Quanshi Zhang
Ying Nian Wu
Song-Chun Zhu
FAtt
31
774
0
02 Oct 2017
Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
XAI
210
4,229
0
22 Jun 2017
Teaching Compositionality to CNNs
Austin Stone
Hua-Yan Wang
Michael Stark
Yi Liu
D. Phoenix
Dileep George
CoGe
37
54
0
14 Jun 2017
SmoothGrad: removing noise by adding noise
D. Smilkov
Nikhil Thorat
Been Kim
F. Viégas
Martin Wattenberg
FAtt
ODL
167
2,211
0
12 Jun 2017
A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg
Su-In Lee
FAtt
280
21,459
0
22 May 2017
Interpretable Explanations of Black Boxes by Meaningful Perturbation
Ruth C. Fong
Andrea Vedaldi
FAtt
AAML
26
1,514
0
11 Apr 2017
Mining Object Parts from CNNs via Active Question-Answering
Quanshi Zhang
Ruiming Cao
Ying Nian Wu
Song-Chun Zhu
26
25
0
11 Apr 2017
Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar
Peyton Greenside
A. Kundaje
FAtt
72
3,848
0
10 Apr 2017
Axiomatic Attribution for Deep Networks
Mukund Sundararajan
Ankur Taly
Qiqi Yan
OOD
FAtt
67
5,920
0
04 Mar 2017
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez
Been Kim
XAI
FaML
334
3,742
0
28 Feb 2017
Sample Efficient Actor-Critic with Experience Replay
Ziyun Wang
V. Bapst
N. Heess
Volodymyr Mnih
Rémi Munos
Koray Kavukcuoglu
Nando de Freitas
66
757
0
03 Nov 2016
Universal adversarial perturbations
Seyed-Mohsen Moosavi-Dezfooli
Alhussein Fawzi
Omar Fawzi
P. Frossard
AAML
102
2,520
0
26 Oct 2016
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju
Michael Cogswell
Abhishek Das
Ramakrishna Vedantam
Devi Parikh
Dhruv Batra
FAtt
172
19,796
0
07 Oct 2016
European Union regulations on algorithmic decision-making and a "right to explanation"
B. Goodman
Seth Flaxman
FaML
AILaw
50
1,888
0
28 Jun 2016
LSTMVis: A Tool for Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks
Hendrik Strobelt
Sebastian Gehrmann
Hanspeter Pfister
Alexander M. Rush
HAI
43
83
0
23 Jun 2016
Rationalizing Neural Predictions
Tao Lei
Regina Barzilay
Tommi Jaakkola
77
807
0
13 Jun 2016
The Mythos of Model Interpretability
Zachary Chase Lipton
FaML
78
3,672
0
10 Jun 2016
Attribute And-Or Grammar for Joint Parsing of Human Attributes, Part and Pose
Seyoung Park
Xiaohan Nie
Song-Chun Zhu
CVBM
35
18
0
06 May 2016
Generating Visual Explanations
Lisa Anne Hendricks
Zeynep Akata
Marcus Rohrbach
Jeff Donahue
Bernt Schiele
Trevor Darrell
VLM
FAtt
58
620
0
28 Mar 2016
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt
FaML
342
16,765
0
16 Feb 2016
Learning Deep Features for Discriminative Localization
Bolei Zhou
A. Khosla
Àgata Lapedriza
A. Oliva
Antonio Torralba
SSL
SSeg
FAtt
107
9,266
0
14 Dec 2015
Deep Residual Learning for Image Recognition
Kaiming He
Xiangyu Zhang
Shaoqing Ren
Jian Sun
MedIm
1.0K
192,638
0
10 Dec 2015