Evaluating neural network explanation methods using hybrid documents and morphological agreement

19 January 2018
Nina Pörner
Benjamin Roth
Hinrich Schütze

Papers citing "Evaluating neural network explanation methods using hybrid documents and morphological agreement"

2 / 2 papers shown
Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?
Alon Jacovi
Yoav Goldberg
XAI
07 Apr 2020

Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods
Oana-Maria Camburu
Eleonora Giunchiglia
Jakob N. Foerster
Thomas Lukasiewicz
Phil Blunsom
FAtt, AAML
04 Oct 2019