Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference

12 August 2018 · arXiv:1808.03894
Authors: Reza Ghaeini, Xiaoli Z. Fern, Prasad Tadepalli
Topics: MILM

Papers citing "Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference"

4 / 54 papers shown
1. Saliency Learning: Teaching the Model Where to Pay Attention
   Reza Ghaeini, Xiaoli Z. Fern, Hamed Shahbazi, Prasad Tadepalli
   Topics: FAtt, XAI
   Metrics: 32 / 30 / 0
   22 Feb 2019

2. Analysis Methods in Neural Language Processing: A Survey
   Yonatan Belinkov, James R. Glass
   Metrics: 39 / 547 / 0
   21 Dec 2018

3. Attentional Multi-Reading Sarcasm Detection
   Reza Ghaeini, Xiaoli Z. Fern, Prasad Tadepalli
   Metrics: 19 / 5 / 0
   09 Sep 2018

4. A Decomposable Attention Model for Natural Language Inference
   Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit
   Metrics: 216 / 1,367 / 0
   06 Jun 2016