Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization

arXiv:2306.06805 · 11 June 2023
Thomas Fel, Thibaut Boissin, Victor Boutin, Agustin Picard, Paul Novello, Julien Colin, Drew Linsley, Tom Rousseau, Rémi Cadène, Laurent Gardes, Thomas Serre
FAtt

Papers citing "Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization"

12 papers shown

Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment
Harrish Thasarathan, Julian Forsyth, Thomas Fel, M. Kowal, Konstantinos G. Derpanis
06 Feb 2025 · 7 citations

GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers
Éloi Zablocki, Valentin Gerard, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle
23 Nov 2024 · 0 citations

Flow AM: Generating Point Cloud Global Explanations by Latent Alignment
Hanxiao Tan
29 Apr 2024 · 1 citation

Labeling Neural Representations with Inverse Recognition
Kirill Bykov, Laura Kopf, Shinichi Nakajima, Marius Kloft, Marina M.-C. Höhne
BDL
22 Nov 2023 · 15 citations

Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations
Amin Ghiasi, Hamid Kazemi, Steven Reich, Chen Zhu, Micah Goldblum, Tom Goldstein
31 Jan 2022 · 15 citations

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
06 Dec 2021 · 114 citations

Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, Thomas Serre
MLAU, FAtt, AAML
07 Nov 2021 · 58 citations

Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg
15 Oct 2020 · 426 citations

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
3DH
17 Apr 2017 · 20,572 citations

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML
28 Feb 2017 · 3,690 citations

Xception: Deep Learning with Depthwise Separable Convolutions
François Chollet
MDE, BDL, PINN
07 Oct 2016 · 14,376 citations

Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
PINN, 3DV
25 Aug 2016 · 36,381 citations