Contextual Semantic Interpretability (arXiv:2009.08720)

18 September 2020
Diego Marcos, Ruth C. Fong, Sylvain Lobry, Rémi Flamary, Nicolas Courty, D. Tuia
Topics: SSL

Papers citing "Contextual Semantic Interpretability"

11 of 11 papers shown
1. Multi-Scale Grouped Prototypes for Interpretable Semantic Segmentation
   Hugo Porta, Emanuele Dalsasso, Diego Marcos, D. Tuia
   14 Sep 2024

2. Sparse Concept Bottleneck Models: Gumbel Tricks in Contrastive Learning
   Andrei Semenov, Vladimir Ivanov, Aleksandr Beznosikov, Alexander Gasnikov
   04 Apr 2024

3. Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable?
   Sonia Laguna, Ricards Marcinkevics, Moritz Vandenhirtz, Julia E. Vogt
   24 Jan 2024

4. Coarse-to-Fine Concept Bottleneck Models
   Konstantinos P. Panousis, Dino Ienco, Diego Marcos
   03 Oct 2023

5. UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs
   V. V. Ramaswamy, Sunnie S. Y. Kim, Ruth C. Fong, Olga Russakovsky
   27 Mar 2023

6. Concept Embedding Analysis: A Review
   Gesina Schwalbe
   25 Mar 2022

7. Towards a Collective Agenda on AI for Earth Science Data Analysis
   D. Tuia, R. Roscher, Jan Dirk Wegner, Nathan Jacobs, Xiaoxiang Zhu, Gustau Camps-Valls
   Topics: AI4CE
   11 Apr 2021

8. Revisiting the Importance of Individual Units in CNNs via Ablation
   Bolei Zhou, Yiyou Sun, David Bau, Antonio Torralba
   Topics: FAtt
   07 Jun 2018

9. Towards A Rigorous Science of Interpretable Machine Learning
   Finale Doshi-Velez, Been Kim
   Topics: XAI, FaML
   28 Feb 2017

10. Adversarial examples in the physical world
    Alexey Kurakin, Ian Goodfellow, Samy Bengio
    Topics: SILM, AAML
    08 Jul 2016

11. The Application of Two-level Attention Models in Deep Convolutional Neural Network for Fine-grained Image Classification
    Tianjun Xiao, Yichong Xu, Kuiyuan Yang, Jiaxing Zhang, Yuxin Peng, Zheng-Wei Zhang
    24 Nov 2014