Feature Attribution (FAtt)

Feature attribution determines the importance of each input feature to a model's predictions. It is useful for model debugging and interpretability.
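As a concrete illustration of the kind of method this community covers, below is a minimal sketch of permutation importance, one simple model-agnostic feature-attribution technique. It is not taken from any of the papers listed here; the dataset (scikit-learn's breast-cancer data) and the random-forest model are assumptions chosen only to keep the example self-contained.

    # Minimal permutation-importance sketch (illustrative only; not from any listed paper).
    # Idea: permute one feature column at a time and measure how much held-out accuracy drops.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()
    X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    baseline = model.score(X_te, y_te)  # held-out accuracy with intact features

    rng = np.random.default_rng(0)
    importances = []
    for j in range(X_te.shape[1]):
        X_perm = X_te.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])  # break the link between feature j and the labels
        importances.append(baseline - model.score(X_perm, y_te))  # accuracy drop = attributed importance

    # Report the five features whose permutation hurts accuracy the most.
    for j in np.argsort(importances)[::-1][:5]:
        print(f"{data.feature_names[j]}: {importances[j]:.4f}")

Shapley-value methods such as SHAP, which many of the papers below build on, generalize this idea by averaging a feature's marginal contribution over subsets of features rather than permuting one column at a time.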

All papers

Showing 50 of 2,395 papers.
Enhancing Interpretability for Vision Models via Shapley Value Optimization
Kanglong Fan, Yunqiao Yang, Chen Ma · AAML, FAtt · 16 Dec 2025
AgentSHAP: Interpreting LLM Agent Tool Importance with Monte Carlo Shapley Value Estimation
Miriam Horovicz · LLMAG, FAtt · 14 Dec 2025
On the Accuracy of Newton Step and Influence Function Data Attributions
Ittai Rubinstein, Samuel B. Hopkins · TDI, FAtt · 14 Dec 2025
Back to the Baseline: Examining Baseline Effects on Explainability Metrics
Agustin Martin Picard, Thibaut Boissin, Varshini Subhash, Rémi Cadène, Thomas Fel · FAtt · 12 Dec 2025
Provable Recovery of Locally Important Signed Features and Interactions from Random Forest
Kata Vuk, Nicolas Alexander Ihlo, Merle Behr · FAtt · 11 Dec 2025
Explanation Bias is a Product: Revealing the Hidden Lexical and Position Preferences in Post-Hoc Feature Attribution
Jonathan Kamp, Roos Bakker, Dominique Blok · FAtt · 11 Dec 2025
Interpreto: An Explainability Library for Transformers
Antonin Poché, Thomas Mullor, Gabriele Sarti, Frédéric Boisnard, Corentin Friedrich, Charlotte Claye, François Hoofd, Raphael Bernas, Céline Hudelot, Fanny Jourdan · MILM, FAtt · 10 Dec 2025
STACHE: Local Black-Box Explanations for Reinforcement Learning Policies
Andrew Elashkin, Orna Grumberg · OffRL, FAtt, LRM · 10 Dec 2025
MelanomaNet: Explainable Deep Learning for Skin Lesion Classification
Sukhrobbek Ilyosbekov · FAtt, MedIm · 10 Dec 2025
Clinical Interpretability of Deep Learning Segmentation Through Shapley-Derived Agreement and Uncertainty Metrics
Tianyi Ren, Daniel Low, Pittra Jaengprajak, Juampablo Heras Rivera, Jacob Ruzevick, Mehmet Kurt · FAtt · 08 Dec 2025
Zero-Shot Textual Explanations via Translating Decision-Critical Features
Toshinori Yamauchi, Hiroshi Kera, Kazuhiko Kawamoto · FAtt · 08 Dec 2025
ϕ-test: Global Feature Selection and Inference for Shapley Additive Explanations
Dongseok Kim, Hyoungsun Choi, Mohamed Jismy Aashik Rasool, Gisung Oh · FAtt · 08 Dec 2025
SSplain: Sparse and Smooth Explainer for Retinopathy of Prematurity Classification
Elifnur Sunger, Tales Imbiriba, Peter Campbell, Deniz Erdogmus, Stratis Ioannidis, Jennifer Dy · FAtt · 08 Dec 2025
Interpretive Efficiency: Information-Geometric Foundations of Data Usefulness
Ronald Katende · FAtt · 06 Dec 2025
Improving Local Fidelity Through Sampling and Modeling Nonlinearity
Sanjeev Shrestha, Rahul Dubey, Hui Liu · FAtt · 05 Dec 2025
Measuring the Effect of Background on Classification and Feature Importance in Deep Learning for AV Perception
Anne Sielemann, Valentin Barner, Stefan Wolf, Masoud Roschani, Jens Ziehn, Juergen Beyerer · FAtt · 05 Dec 2025
MASE: Interpretable NLP Models via Model-Agnostic Saliency Estimation
Zhou Yang, Shunyan Luo, Jiazhen Zhu, Fang Jin · MILM, FAtt · 04 Dec 2025
Identifying attributions of causality in political text
Paulina Garcia-Corral · FAtt, CML · 02 Dec 2025
Beyond Additivity: Sparse Isotonic Shapley Regression toward Nonlinear Explainability
Jialai She · FAtt · 02 Dec 2025
The Effect of Enforcing Fairness on Reshaping Explanations in Machine Learning Models
Joshua Wolff Anderson, Shyam Visweswaran · FAtt · 01 Dec 2025
QGShap: Quantum Acceleration for Faithful GNN Explanations
Haribandhu Jena, Jyotirmaya Shivottam, Subhankar Mishra · FAtt · 01 Dec 2025
Dynamic Algorithm for Explainable k-medians Clustering under lp Norm
Konstantin Makarychev, Ilias Papanikolaou, Liren Shan · FAtt · 01 Dec 2025
Faster Verified Explanations for Neural Networks
Alessandro De Palma, Greta Dolcetti, Caterina Urban · FAtt · 28 Nov 2025
On Computing the Shapley Value in Bankruptcy Games -Illustrated by Rectified Linear Function Game-
Shunta Yamazaki, Tomomi Matsui · FAtt · 27 Nov 2025
ABLE: Using Adversarial Pairs to Construct Local Models for Explaining Model Predictions
Krishna Khadka, Sunny Shree, Pujan Budhathoki, Yu Lei, Raghu Kacker, D. Richard Kuhn · AAML, FAtt · 26 Nov 2025
The Directed Prediction Change - Efficient and Trustworthy Fidelity Assessment for Local Feature Attribution Methods
Kevin Iselborn, David Dembinsky, Adriano Lucieri, Andreas Dengel · AAML, FAtt · 26 Nov 2025
CID: Measuring Feature Importance Through Counterfactual Distributions
Eddie Conti, Álvaro Parafita, Axel Brando · FAtt, CML · 19 Nov 2025
Sufficient Explanations in Databases and their Connections to Necessary Explanations and Repairs
L. Bertossi, Nina Pardal · FAtt · 19 Nov 2025
Explaining Digital Pathology Models via Clustering Activations
Adam Bajger, Jan Obdržálek, Vojtěch Kůr, Rudolf Nenutil, Petr Holub, Vít Musil, Tomáš Brázdil · FAtt · 18 Nov 2025
Rethinking Saliency Maps: A Cognitive Human Aligned Taxonomy and Evaluation Framework for Explanations
Yehonatan Elisha, Seffi Cohen, Oren Barkan, Noam Koenigstein · FAtt · 17 Nov 2025
Accuracy is Not Enough: Poisoning Interpretability in Federated Learning via Color Skew
Farhin Farhad Riya, Shahinul Hoque, J. Sun, Olivera Kotevska · AAML, FedML, FAtt · 17 Nov 2025
From Black-Box to White-Box: Control-Theoretic Neural Network Interpretability
Jihoon Moon · FAtt, MILM · 17 Nov 2025
ScoresActivation: A New Activation Function for Model Agnostic Global Explainability by Design
Emanuel Covaci, Fabian Galis, Radu Balan, Daniela Zaharie, Darian Onchis · FAtt · 17 Nov 2025
LAYA: Layer-wise Attention Aggregation for Interpretable Depth-Aware Neural Networks
Gennaro Vessio · FAtt · 16 Nov 2025
FLEX: Feature Importance from Layered Counterfactual Explanations
Nawid Keshtmand, Roussel Desmond Nzoyem, J. N. Clark · FAtt · 14 Nov 2025
Efficiently Transforming Neural Networks into Decision Trees: A Path to Ground Truth Explanations with RENTT
Helena Monke, Benjamin Frész, Marco Bernreuther, Yilin Chen, Marco F. Huber · FAtt · 12 Nov 2025
Spatial Information Bottleneck for Interpretable Visual Recognition
Kaixiang Shu, Kai Meng, Junqin Luo · FAtt · 12 Nov 2025
Distribution-Based Feature Attribution for Explaining the Predictions of Any Classifier
Xinpeng Li, Kai Ming Ting · FAtt · 12 Nov 2025
From Decision Trees to Boolean Logic: A Fast and Unified SHAP Algorithm
Alexander Nadel, Ron Wettenstein · FAtt · 12 Nov 2025
Towards Fine-Grained Interpretability: Counterfactual Explanations for Misclassification with Saliency Partition
Lintong Zhang, Kang Yin, Seong-Whan Lee · FAtt · Computer Vision and Pattern Recognition (CVPR), 2025 · 11 Nov 2025
Approximating Shapley Explanations in Reinforcement Learning
Daniel Beechey, Özgür Simsek · FAtt, OffRL · 08 Nov 2025
Fair and Explainable Credit-Scoring under Concept Drift: Adaptive Explanation Frameworks for Evolving Populations
Shivogo John · FAtt · 05 Nov 2025
Balanced contributions, consistency, and value for games with externalities
André Casajus, Yukihiko Funaki, Frank Huettner · TDI, FAtt · 05 Nov 2025
llmSHAP: A Principled Approach to LLM Explainability
Filip Naudot, Tobias Sundqvist, Timotheus Kampik · FAtt · 03 Nov 2025
Interpretable Model-Aware Counterfactual Explanations for Random Forest
Joshua S. Harvey, Guanchao Feng, Sai Anusha Meesala, Tina Zhao, Dhagash Mehta · FAtt, CML · 31 Oct 2025
Community Detection on Model Explanation Graphs for Explainable AI
Ehsan Moradi · FAtt · 31 Oct 2025
Interpreting LLMs as Credit Risk Classifiers: Do Their Feature Explanations Align with Classical ML?
Saeed AlMarri, Kristof Juhasz, Mathieu Ravaut, Gautier Marti, Hamdan Al Ahbabi, Ibrahim Elfadel · FAtt · 29 Oct 2025
FaCT: Faithful Concept Traces for Explaining Neural Network Decisions
Amin Parchami-Araghi, Sukrut Rao, Jonas Fischer, Bernt Schiele · FAtt · 29 Oct 2025
Enhancing Pre-trained Representation Classifiability can Boost its Interpretability
Shufan Shen, Zhaobo Qi, Junshu Sun, Qingming Huang, Qi Tian, Shuhui Wang · FAtt · International Conference on Learning Representations (ICLR), 2025 · 28 Oct 2025
Fair Indivisible Payoffs through Shapley Value
Mikołaj Czarnecki, Michał Korniak, Oskar Skibski, Piotr Skowron · FAtt · 28 Oct 2025