CLIP-QDA: An Explainable Concept Bottleneck Model (arXiv:2312.00110)

30 November 2023
Rémi Kazmierczak, Eloise Berthier, Goran Frehse, Gianni Franchi

Papers citing "CLIP-QDA: An Explainable Concept Bottleneck Model"

9 / 9 papers shown
Benchmarking XAI Explanations with Human-Aligned Evaluations
Rémi Kazmierczak, Steve Azzolin, Eloise Berthier, Anna Hedström, Patricia Delhomme, ..., Goran Frehse, Massimiliano Mancini, Baptiste Caramiaux, Andrea Passerini, Gianni Franchi
04 Nov 2024

FI-CBL: A Probabilistic Method for Concept-Based Learning with Expert Rules
Lev V. Utkin, A. Konstantinov, Stanislav R. Kirpichenko
28 Jun 2024

Conceptual Learning via Embedding Approximations for Reinforcing Interpretability and Transparency
Maor Dikter, Tsachi Blau, Chaim Baskin
13 Jun 2024

Incorporating Expert Rules into Neural Networks in the Framework of Concept-Based Learning
A. Konstantinov, Lev V. Utkin
22 Feb 2024

Sparse Linear Concept Discovery Models
Konstantinos P. Panousis, Dino Ienco, Diego Marcos
21 Aug 2023

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM
04 Mar 2022

EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case
Natalia Díaz Rodríguez, Alberto Lamas, Jules Sanchez, Gianni Franchi, Ivan Donadello, Siham Tabik, David Filliat, P. Cruz, Rosana Montes, Francisco Herrera
24 Apr 2021

CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval
Huaishao Luo, Lei Ji, Ming Zhong, Yang Chen, Wen Lei, Nan Duan, Tianrui Li
CLIP, VLM
18 Apr 2021

Zero-Shot Text-to-Image Generation
Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
VLM
24 Feb 2021