ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Enhancing Pre-trained Representation Classifiability can Boost its Interpretability

International Conference on Learning Representations (ICLR), 2025
28 October 2025
Shufan Shen, Zhaobo Qi, Junshu Sun, Qingming Huang, Qi Tian, Shuhui Wang
    FAtt
ArXiv (abs) · PDF · HTML · GitHub (55842★)

Papers citing "Enhancing Pre-trained Representation Classifiability can Boost its Interpretability"

4 / 4 papers shown
Kernelized Sparse Fine-Tuning with Bi-level Parameter Competition for Vision Models
Shufan Shen, Junshu Sun, Shuhui Wang, Qingming Huang (28 Oct 2025)

Edit Less, Achieve More: Dynamic Sparse Neuron Masking for Lifelong Knowledge Editing in LLMs
Jinzhe Liu, Junshu Sun, Shufan Shen, Chenxue Yang, Shuhui Wang (25 Oct 2025)
KELMCLL

VL-SAE: Interpreting and Enhancing Vision-Language Alignment with a Unified Concept Set
Shufan Shen, Junshu Sun, Qingming Huang, Shuhui Wang (24 Oct 2025)

Relieving the Over-Aggregating Effect in Graph Transformers
Junshu Sun, Wanxing Chang, Chenxue Yang, Qingming Huang, Shuhui Wang (24 Oct 2025)