CounterNet: End-to-End Training of Prediction Aware Counterfactual Explanations (arXiv:2109.07557)

15 September 2021
Hangzhi Guo, T. Nguyen, A. Yadav
OffRL

Papers citing "CounterNet: End-to-End Training of Prediction Aware Counterfactual Explanations"

41 papers
VCNet: A self-explaining model for realistic counterfactual generation
  Victor Guyomard, Françoise Fessant, Thomas Guyet, Tassadit Bouadi, Alexandre Termier
  BDL, OOD, CML
  21 Dec 2022
Preserving Fine-Grain Feature Information in Classification via Entropic Regularization
  Raphael Baena, Lucas Drumetz, Vincent Gripon
  07 Aug 2022
Model-Based Counterfactual Synthesizer for Interpretation
  Fan Yang, Sahan Suresh Alva, Jiahao Chen, X. Hu
  16 Jun 2021
Amortized Generation of Sequential Algorithmic Recourses for Black-box Models
  Sahil Verma, Keegan E. Hines, John P. Dickerson
  07 Jun 2021
Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations
  Pau Rodríguez López, Massimo Caccia, Alexandre Lacoste, L. Zamparo, I. Laradji, Laurent Charlin, David Vazquez
  AAML
  18 Mar 2021
Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties
  Lisa Schut, Oscar Key, R. McGrath, Luca Costabello, Bogdan Sacaleanu, Medb Corcoran, Y. Gal
  CML
  16 Mar 2021
Towards Robust and Reliable Algorithmic Recourse
  Sohini Upadhyay, Shalmali Joshi, Himabindu Lakkaraju
  26 Feb 2021
Impact of Response Latency on User Behaviour in Mobile Web Search
  Ioannis Arapakis, Souneil Park, M. Pielot
  22 Jan 2021
Learning Models for Actionable Recourse
  Alexis Ross, Himabindu Lakkaraju, Osbert Bastani
  FaML
  12 Nov 2020
Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges
  Christoph Molnar, Giuseppe Casalicchio, B. Bischl
  AI4TS, AI4CE
  19 Oct 2020
On the Fairness of Causal Algorithmic Recourse
  Julius von Kügelgen, Amir-Hossein Karimi, Umang Bhatt, Isabel Valera, Adrian Weller, Bernhard Schölkopf
  FaML
  13 Oct 2020
A survey of algorithmic recourse: definitions, formulations, solutions, and prospects
  Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf, Isabel Valera
  FaML
  08 Oct 2020
Do Wider Neural Networks Really Help Adversarial Robustness?
  Boxi Wu, Jinghui Chen, Deng Cai, Xiaofei He, Quanquan Gu
  AAML
  03 Oct 2020
Evaluation of Neural Architectures Trained with Square Loss vs Cross-Entropy in Classification Tasks
  Like Hui, M. Belkin
  UQCV, AAML, VLM
  12 Jun 2020
Algorithmic Recourse: from Counterfactual Explanations to Interventions
  Amir-Hossein Karimi, Bernhard Schölkopf, Isabel Valera
  CML
  14 Feb 2020
Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers
  Divyat Mahajan, Chenhao Tan, Amit Sharma
  OOD, CML
  06 Dec 2019
Explanation by Progressive Exaggeration
  Sumedha Singla, Brian Pollack, Junxiang Chen, Kayhan Batmanghelich
  FAtt, MedIm
  01 Nov 2019
Learning Model-Agnostic Counterfactual Explanations for Tabular Data
  Martin Pawelczyk, Johannes Haug, Klaus Broelemann, Gjergji Kasneci
  OOD, CML
  21 Oct 2019
Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems
  Shalmali Joshi, Oluwasanmi Koyejo, Warut D. Vijitbenjaronk, Been Kim, Joydeep Ghosh
  FaML
  22 Jul 2019
Interpretable Counterfactual Explanations Guided by Prototypes
  A. V. Looveren, Janis Klaise
  FAtt
  03 Jul 2019
Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations
  R. Mothilal, Amit Sharma, Chenhao Tan
  CML
  19 May 2019
Actionable Recourse in Linear Classification
  Berk Ustun, Alexander Spangher, Yang Liu
  FaML
  18 Sep 2018
This Looks Like That: Deep Learning for Interpretable Image Recognition
  Chaofan Chen, Oscar Li, Chaofan Tao, A. Barnett, Jonathan Su, Cynthia Rudin
  27 Jun 2018
Manipulating and Measuring Model Interpretability
  Forough Poursabzi-Sangdeh, D. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna M. Wallach
  21 Feb 2018
Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives
  Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Pai-Shun Ting, Karthikeyan Shanmugam, Payel Das
  FAtt
  21 Feb 2018
A Survey Of Methods For Explaining Black Box Models
  Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
  XAI
  06 Feb 2018
'It's Reducing a Human Being to a Percentage'; Perceptions of Justice in Algorithmic Decisions
  Reuben Binns, Max Van Kleek, Michael Veale, Ulrik Lyngs, Jun Zhao, N. Shadbolt
  FaML
  31 Jan 2018
Robust Loss Functions under Label Noise for Deep Neural Networks
  Aritra Ghosh, Himanshu Kumar, P. Sastry
  NoLa, OOD
  27 Dec 2017
Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
  Sandra Wachter, Brent Mittelstadt, Chris Russell
  MLAU
  01 Nov 2017
Explanation in Artificial Intelligence: Insights from the Social Sciences
  Tim Miller
  XAI
  22 Jun 2017
Towards Deep Learning Models Resistant to Adversarial Attacks
  Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
  SILM, OOD
  19 Jun 2017
SmoothGrad: removing noise by adding noise
  D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg
  FAtt, ODL
  12 Jun 2017
Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation
  Matthias Hein, Maksym Andriushchenko
  AAML
  23 May 2017
A Unified Approach to Interpreting Model Predictions
  Scott M. Lundberg, Su-In Lee
  FAtt
  22 May 2017
Understanding Black-box Predictions via Influence Functions
  Pang Wei Koh, Percy Liang
  TDI
  14 Mar 2017
Axiomatic Attribution for Deep Networks
  Mukund Sundararajan, Ankur Taly, Qiqi Yan
  OOD, FAtt
  04 Mar 2017
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
  Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
  FAtt
  07 Oct 2016
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt
FaML
1.2K
16,990
0
16 Feb 2016
Deep Residual Learning for Image Recognition
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
  MedIm
  10 Dec 2015
Empirical Evaluation of Rectified Activations in Convolutional Network
  Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
  05 May 2015
Explaining and Harnessing Adversarial Examples
  Ian Goodfellow, Jonathon Shlens, Christian Szegedy
  AAML, GAN
  20 Dec 2014