Evaluating the Interpretability of Generative Models by Interactive Reconstruction (arXiv:2102.01264)

2 February 2021
A. Ross, Nina Chen, Elisa Zhao Hang, Elena L. Glassman, Finale Doshi-Velez

Papers citing "Evaluating the Interpretability of Generative Models by Interactive Reconstruction"

31 / 31 papers shown

On the Challenges and Opportunities in Generative AI
Laura Manduchi, Kushagra Pandey, Robert Bamler, Ryan Cotterell, Sina Däubener, ..., F. Wenzel, Frank Wood, Stephan Mandt, Vincent Fortuin
28 Feb 2024 · 227 / 21 / 0

A Loss Function for Generative Neural Networks Based on Watson's Perceptual Model
Steffen Czolbe, Oswin Krause, Ingemar Cox, Christian Igel
GAN · 26 Jun 2020 · 32 / 48 / 0

Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems
Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, Elena L. Glassman
ELM · 22 Jan 2020 · 73 / 284 / 0

The What-If Tool: Interactive Probing of Machine Learning Models
James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, F. Viégas, Jimbo Wilson
VLM · 09 Jul 2019 · 79 / 492 / 0

Assessing the Local Interpretability of Machine Learning Models
Dylan Slack, Sorelle A. Friedler, C. Scheidegger, Chitradeep Dutta Roy
FAtt · 09 Feb 2019 · 43 / 71 / 0

Human-Centered Tools for Coping with Imperfect Algorithms during Medical Decision-Making
Carrie J. Cai, Emily Reif, Narayan Hegde, J. Hipp, Been Kim, ..., Martin Wattenberg, F. Viégas, G. Corrado, Martin C. Stumpe, Michael Terry
08 Feb 2019 · 104 / 403 / 0

An Evaluation of the Human-Interpretability of Explanation
Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Sam Gershman, Finale Doshi-Velez
FAtt, XAI · 31 Jan 2019 · 106 / 156 / 0

TensorFlow.js: Machine Learning for the Web and Beyond
D. Smilkov, Nikhil Thorat, Yannick Assogba, Ann Yuan, Nick Kreeger, ..., D. Sculley, R. Monga, G. Corrado, F. Viégas, Martin Wattenberg
16 Jan 2019 · 80 / 174 / 0

Towards a Definition of Disentangled Representations
I. Higgins, David Amos, David Pfau, S. Racanière, Loic Matthey, Danilo Jimenez Rezende, Alexander Lerchner
OCL, DRL · 05 Dec 2018 · 103 / 480 / 0

Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
OOD · 29 Nov 2018 · 118 / 1,467 / 0

A Visual Interaction Framework for Dimensionality Reduction Based Data Exploration
M. Cavallo, Çağatay Demiralp
28 Nov 2018 · 36 / 55 / 0

What can AI do for me: Evaluating Machine Learning Interpretations in Cooperative Play
Shi Feng, Jordan L. Boyd-Graber
HAI · 23 Oct 2018 · 47 / 129 / 0

Recurrent World Models Facilitate Policy Evolution
David R Ha, Jürgen Schmidhuber
SyDa, TPM · 04 Sep 2018 · 117 / 944 / 0

Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis, Tommi Jaakkola
MILM, XAI · 20 Jun 2018 · 126 / 941 / 0

Explaining Explanations: An Overview of Interpretability of Machine Learning
Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael A. Specter, Lalana Kagal
XAI · 31 May 2018 · 95 / 1,857 / 0

Human-in-the-Loop Interpretability Prior
Isaac Lage, A. Ross, Been Kim, S. Gershman, Finale Doshi-Velez
29 May 2018 · 77 / 121 / 0

Manipulating and Measuring Model Interpretability
Forough Poursabzi-Sangdeh, D. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna M. Wallach
21 Feb 2018 · 88 / 698 / 0

Disentangling by Factorising
Hyunjik Kim, A. Mnih
CoGe, OOD · 16 Feb 2018 · 62 / 1,350 / 0

The Unreasonable Effectiveness of Deep Features as a Perceptual Metric
Richard Y. Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, Oliver Wang
EGVM · 11 Jan 2018 · 377 / 11,795 / 0

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
FAtt · 30 Nov 2017 · 214 / 1,842 / 0

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller
XAI · 22 Jun 2017 · 245 / 4,265 / 0

Network Dissection: Quantifying Interpretability of Deep Visual Representations
David Bau, Bolei Zhou, A. Khosla, A. Oliva, Antonio Torralba
MILM, FAtt · 19 Apr 2017 · 146 / 1,515 / 1

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML · 28 Feb 2017 · 399 / 3,798 / 0

A Survey of Inductive Biases for Factorial Representation-Learning
Karl Ridgeway
DRL, CML · 15 Dec 2016 · 66 / 76 / 0

Embedding Projector: Interactive Visualization and Interpretation of Embeddings
D. Smilkov, Nikhil Thorat, Charles Nicholson, Emily Reif, F. Viégas, Martin Wattenberg
16 Nov 2016 · 58 / 179 / 0

InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, Pieter Abbeel
GAN · 12 Jun 2016 · 159 / 4,235 / 0

The Mythos of Model Interpretability
Zachary Chase Lipton
FaML · 10 Jun 2016 · 180 / 3,701 / 0

Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
ODL · 22 Dec 2014 · 1.8K / 150,115 / 0

Auto-Encoding Variational Bayes
Diederik P. Kingma, Max Welling
BDL · 20 Dec 2013 · 452 / 16,929 / 0

Disentangling Factors of Variation via Generative Entangling
Guillaume Desjardins, Aaron Courville, Yoshua Bengio
CoGe, CML, DRL · 19 Oct 2012 · 90 / 104 / 0

Representation Learning: A Review and New Perspectives
Yoshua Bengio, Aaron Courville, Pascal Vincent
OOD, SSL · 24 Jun 2012 · 264 / 12,439 / 0