ResearchTrend.AI

MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks

arXiv:2212.08158 · 15 December 2022
Letitia Parcalabescu, Anette Frank

Papers citing "MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks"

11 papers shown
What are You Looking at? Modality Contribution in Multimodal Medical Deep Learning Methods
Christian Gapp, Elias Tappeiner, M. Welk, Karl Fritscher, Elke Ruth Gizewski, R. Schubert
28 Feb 2025
SPEX: Scaling Feature Interaction Explanations for LLMs
J. S. Kang, Landon Butler, Abhineet Agarwal, Yigit Efe Erginbas, Ramtin Pedarsani, Kannan Ramchandran, Bin Yu
Tags: VLM, LRM
20 Feb 2025
Why context matters in VQA and Reasoning: Semantic interventions for VLM input modalities
Kenza Amara, Lukas Klein, Carsten T. Lüth, Paul Jäger, Hendrik Strobelt, Mennatallah El-Assady
02 Oct 2024
CV-Probes: Studying the interplay of lexical and world knowledge in visually grounded verb understanding
Ivana Beňová, Michal Gregor, Albert Gatt
02 Sep 2024
How and where does CLIP process negation?
Vincent Quantmeyer, Pablo Mosteiro, Albert Gatt
Tags: CoGe
15 Jul 2024
Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Explanations?
Letitia Parcalabescu, Anette Frank
Tags: MLLM, CoGe, VLM
29 Apr 2024
Quantifying and Mitigating Unimodal Biases in Multimodal Large Language Models: A Causal Perspective
Meiqi Chen, Yixin Cao, Yan Zhang, Chaochao Lu
27 Mar 2024
SurvBeNIM: The Beran-Based Neural Importance Model for Explaining the Survival Models
Lev V. Utkin, Danila Eremenko, A. Konstantinov
11 Dec 2023
The Scenario Refiner: Grounding subjects in images at the morphological level
Claudia Tagliaferri, Sofia Axioti, Albert Gatt, Denis Paperno
20 Sep 2023
SurvBeX: An explanation method of the machine learning survival models based on the Beran estimator
Lev V. Utkin, Danila Eremenko, A. Konstantinov
07 Aug 2023
Interpreting Vision and Language Generative Models with Semantic Visual Priors
Michele Cafagna, L. Rojas-Barahona, Kees van Deemter, Albert Gatt
Tags: FAtt, VLM
28 Apr 2023