CLIP-Dissect: Automatic Description of Neuron Representations in Deep Vision Networks
Tuomas P. Oikarinen, Tsui-Wei Weng
arXiv 2204.10965 · 23 April 2022 · VLM

Papers citing "CLIP-Dissect: Automatic Description of Neuron Representations in Deep Vision Networks"

21 papers shown

• Wasserstein Distances Made Explainable: Insights into Dataset Shifts and Transport Phenomena
  Philip Naumann, Jacob R. Kauffmann, G. Montavon · 09 May 2025
• Effective Skill Unlearning through Intervention and Abstention [MU]
  Yongce Li, Chung-En Sun, Tsui-Wei Weng · 27 Mar 2025
• Discovering Influential Neuron Path in Vision Transformers [ViT]
  Yifan Wang, Yifei Liu, Yingdong Shi, Chong Li, Anqi Pang, Sibei Yang, Jingyi Yu, Kan Ren · 12 Mar 2025
• Show and Tell: Visually Explainable Deep Neural Nets via Spatially-Aware Concept Bottleneck Models
  Itay Benou, Tammy Riklin-Raviv · 27 Feb 2025
• LaVCa: LLM-assisted Visual Cortex Captioning
  Takuya Matsuyama, Shinji Nishimoto, Yu Takagi · 20 Feb 2025
• VLG-CBM: Training Concept Bottleneck Models with Vision-Language Guidance [VLM]
  Divyansh Srivastava, Beatriz Cabrero-Daniel, Christian Berger · 17 Jan 2025
• Crafting Large Language Models for Enhanced Interpretability
  Chung-En Sun, Tuomas P. Oikarinen, Tsui-Wei Weng · 05 Jul 2024
• AND: Audio Network Dissection for Interpreting Deep Acoustic Models
  Tung-Yu Wu, Yu-Xiang Lin, Tsui-Wei Weng · 24 Jun 2024
• Interpreting the Second-Order Effects of Neurons in CLIP [MILM]
  Yossi Gandelsman, Alexei A. Efros, Jacob Steinhardt · 06 Jun 2024
• Learning Discrete Concepts in Latent Hierarchical Models
  Lingjing Kong, Guan-Hong Chen, Erdun Gao, Eric P. Xing, Yuejie Chi, Kun Zhang · 01 Jun 2024
• Linear Explanations for Individual Neurons [FAtt, MILM]
  Tuomas P. Oikarinen, Tsui-Wei Weng · 10 May 2024
• CoCoG: Controllable Visual Stimuli Generation based on Human Concept Representations
  Chen Wei, Jiachen Zou, Dietmar Heinke, Quanying Liu · 25 Apr 2024
• A Multimodal Automated Interpretability Agent
  Tamar Rott Shaham, Sarah Schwettmann, Franklin Wang, Achyuta Rajaram, Evan Hernandez, Jacob Andreas, Antonio Torralba · 22 Apr 2024
• On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis
  Abhilekha Dalal, R. Rayan, Adrita Barua, Eugene Y. Vasserman, Md Kamruzzaman Sarker, Pascal Hitzler · 21 Apr 2024
• CEIR: Concept-based Explainable Image Representation Learning [SSL, VLM]
  Yan Cui, Shuhong Liu, Liuzhuozheng Li, Zhiyuan Yuan · 17 Dec 2023
• Labeling Neural Representations with Inverse Recognition [BDL]
  Kirill Bykov, Laura Kopf, Shinichi Nakajima, Marius Kloft, Marina M.-C. Höhne · 22 Nov 2023
• The Importance of Prompt Tuning for Automated Neuron Explanations [LRM]
  Justin Lee, Tuomas P. Oikarinen, Arjun Chatha, Keng-Chi Chang, Yilan Chen, Tsui-Wei Weng · 09 Oct 2023
• Coarse-to-Fine Concept Bottleneck Models
  Konstantinos P. Panousis, Dino Ienco, Diego Marcos · 03 Oct 2023
• Variational Information Pursuit with Large Language and Multimodal Models for Interpretable Predictions
  Kwan Ho Ryan Chan, Aditya Chattopadhyay, B. Haeffele, René Vidal · 24 Aug 2023
• Identifying Interpretable Subspaces in Image Representations [FAtt]
  Neha Kalibhat, S. Bhardwaj, Bayan Bruss, Hamed Firooz, Maziar Sanjabi, S. Feizi · 20 Jul 2023
• Natural Language Descriptions of Deep Visual Features [MILM]
  Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, Jacob Andreas · 26 Jan 2022