Identifying Interpretable Visual Features in Artificial and Biological Neural Systems

17 October 2023
David A. Klindt, Sophia Sanborn, Francisco Acosta, Frédéric Poitevin, Nina Miolane
Topics: MILM, FAtt
arXiv: 2310.11431

Papers citing "Identifying Interpretable Visual Features in Artificial and Biological Neural Systems"

13 / 13 papers shown
 1. Compute Optimal Inference and Provable Amortisation Gap in Sparse Autoencoders
    Charles O'Neill, David Klindt
    171 · 2 · 0 · 20 Nov 2024
 2. Towards Automated Circuit Discovery for Mechanistic Interpretability
    Arthur Conmy, Augustine N. Mavor-Parker, Aengus Lynch, Stefan Heimersheim, Adrià Garriga-Alonso
    66 · 319 · 0 · 28 Apr 2023
 3. Toy Models of Superposition
    Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, T. Henighan, ..., Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, C. Olah
    Topics: AAML, MILM
    198 · 380 · 0 · 21 Sep 2022
 4. How Well do Feature Visualizations Support Causal Understanding of CNN Activations?
    Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel
    Topics: FAtt
    63 · 33 · 0 · 23 Jun 2021
 5. Towards falsifiable interpretability research
    Matthew L. Leavitt, Ari S. Morcos
    Topics: AAML, AI4CE
    80 · 68 · 0 · 22 Oct 2020
 6. Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs
    Jonathan Frankle, D. Schwab, Ari S. Morcos
    96 · 143 · 0 · 29 Feb 2020
 7. Understanding Neural Networks via Feature Visualization: A survey
    Anh Nguyen, J. Yosinski, Jeff Clune
    Topics: FAtt
    76 · 162 · 0 · 18 Apr 2019
 8. Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
    Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
    Topics: OOD
    143 · 1,473 · 0 · 29 Nov 2018
 9. On the importance of single directions for generalization
    Ari S. Morcos, David Barrett, Neil C. Rabinowitz, M. Botvinick
    85 · 333 · 0 · 19 Mar 2018
10. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric
    Richard Y. Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, Oliver Wang
    Topics: EGVM
    384 · 11,938 · 0 · 11 Jan 2018
11. Neural system identification for large populations separating "what" and "where"
    David A. Klindt, Alexander S. Ecker, Thomas Euler, Matthias Bethge
    60 · 122 · 0 · 07 Nov 2017
12. Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review
    T. Poggio, H. Mhaskar, Lorenzo Rosasco, Brando Miranda, Q. Liao
    149 · 576 · 0 · 02 Nov 2016
13. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
    Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
    Topics: FAtt
    317 · 7,321 · 0 · 20 Dec 2013