ResearchTrend.AI
Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation
arXiv:1902.00407 · 1 February 2019
Sahil Singla, Eric Wallace, Shi Feng, S. Feizi
Communities: FAtt

Papers citing "Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation"

12 of 12 citing papers shown.

 1. Aerial Image Classification in Scarce and Unconstrained Environments via Conformal Prediction
    Farhad Pourkamali-Anaraki · 24 Apr 2025
 2. Unraveling the Hessian: A Key to Smooth Convergence in Loss Function Landscapes
    Nikita Kiselev, Andrey Grabovoy · 18 Sep 2024
 3. On Dissipativity of Cross-Entropy Loss in Training ResNets
    Jens Püttschneider, T. Faulwasser · 29 May 2024
 4. Data-Centric Debugging: mitigating model failures via targeted data collection
    Sahil Singla, Atoosa Malemir Chegini, Mazda Moayeri, Soheil Feizi · 17 Nov 2022
 5. A Survey of Neural Trees
    Haoling Li, Mingli Song, Mengqi Xue, Haofei Zhang, Jingwen Ye, Lechao Cheng · Communities: AI4CE · 07 Sep 2022
 6. ProtoPFormer: Concentrating on Prototypical Parts in Vision Transformers for Interpretable Image Recognition
    Mengqi Xue, Qihan Huang, Haofei Zhang, Lechao Cheng, Mingli Song, Ming-hui Wu · Communities: ViT · 22 Aug 2022
 7. Improving Deep Learning Interpretability by Saliency Guided Training
    Aya Abdelsalam Ismail, H. C. Bravo, S. Feizi · Communities: FAtt · 29 Nov 2021
 8. Improving Attribution Methods by Learning Submodular Functions
    Piyushi Manupriya, Tarun Ram Menta, S. Jagarlapudi, V. Balasubramanian · Communities: TDI · 19 Apr 2021
 9. Sketching Curvature for Efficient Out-of-Distribution Detection for Deep Neural Networks
    Apoorva Sharma, Navid Azizan, Marco Pavone · Communities: UQCV · 24 Feb 2021
10. Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection
    Michael Tsang, Dehua Cheng, Hanpeng Liu, Xuening Feng, Eric Zhou, Yan Liu · Communities: FAtt · 19 Jun 2020
11. Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
    Joseph D. Janizek, Pascal Sturmfels, Su-In Lee · Communities: FAtt · 10 Feb 2020
12. On Interpretability of Artificial Neural Networks: A Survey
    Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang · Communities: AAML, AI4CE · 08 Jan 2020