Beware the Rationalization Trap! When Language Model Explainability Diverges from our Mental Models of Language
R. Sevastjanova, Mennatallah El-Assady
14 July 2022 · arXiv: 2207.06897 · LRM
Links: ArXiv | PDF | HTML
Papers citing "Beware the Rationalization Trap! When Language Model Explainability Diverges from our Mental Models of Language" (8 papers shown)
Beyond Words: On Large Language Models Actionability in Mission-Critical Risk Analysis
Matteo Esposito, Francesco Palagiano, Valentina Lenarduzzi, Davide Taibi
11 Jun 2024

Why is "Problems" Predictive of Positive Sentiment? A Case Study of Explaining Unintuitive Features in Sentiment Classification
Jiaming Qu, Jaime Arguello, Yue Wang
05 Jun 2024 · FAtt

generAItor: Tree-in-the-Loop Text Generation for Language Model Explainability and Adaptation
Thilo Spinner, Rebecca Kehlbeck, R. Sevastjanova, Tobias Stähle, Daniel A. Keim, Oliver Deussen, Mennatallah El-Assady
12 Mar 2024

SyntaxShap: Syntax-aware Explainability Method for Text Generation
Kenza Amara, R. Sevastjanova, Mennatallah El-Assady
14 Feb 2024

RELIC: Investigating Large Language Model Responses using Self-Consistency
Furui Cheng, Vilém Zouhar, Simran Arora, Mrinmaya Sachan, Hendrik Strobelt, Mennatallah El-Assady
28 Nov 2023 · HILM

Revealing the Unwritten: Visual Investigation of Beam Search Trees to Address Language Model Prompting Challenges
Thilo Spinner, Rebecca Kehlbeck, R. Sevastjanova, Tobias Stähle, Daniel A. Keim, Oliver Deussen, Andreas Spitz, Mennatallah El-Assady
17 Oct 2023

Characterizing Uncertainty in the Visual Text Analysis Pipeline
P. Haghighatkhah, Mennatallah El-Assady, Jean-Daniel Fekete, Narges Mahyar, C. Paradis, Vasiliki Simaki, Bettina Speckmann
22 Sep 2022

What you can cram into a single vector: Probing sentence embeddings for linguistic properties
Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni
03 May 2018