ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
Causal Interpretation of Self-Attention in Pre-Trained Transformers

31 October 2023
R. Y. Rohekar
Yaniv Gurwicz
Shami Nisimov

Papers citing "Causal Interpretation of Self-Attention in Pre-Trained Transformers"

2 papers:

1. "Understanding Multimodal LLMs: the Mechanistic Interpretability of Llava in Visual Question Answering" by Zeping Yu and Sophia Ananiadou (17 Nov 2024)
2. "Iterative Causal Discovery in the Possible Presence of Latent Confounders and Selection Bias" by R. Y. Rohekar, Shami Nisimov, Yaniv Gurwicz, and Gal Novik (07 Nov 2021)