ResearchTrend.AI

Learning to Intervene on Concept Bottlenecks
arXiv:2308.13453 · 25 August 2023
David Steinmann
Wolfgang Stammer
Felix Friedrich
Kristian Kersting
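The paper above concerns test-time interventions on concept bottleneck models (CBMs), where a model first predicts human-interpretable concepts and then a label from those concepts, and a human can overwrite mispredicted concepts. As a rough illustration of that mechanism only, here is a minimal sketch; the weights, dimensions, and intervention policy are invented for the example and are not the authors' method:

```python
import numpy as np

# Minimal concept bottleneck model (CBM) sketch with a test-time concept
# intervention. All weights, sizes, and the chosen concept index are
# illustrative assumptions, not the implementation from the paper.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W_g = rng.normal(size=(4, 8))   # input (8 features) -> 4 concept logits
W_f = rng.normal(size=(3, 4))   # 4 concepts -> 3 class logits

def predict(x, intervened=None):
    """Return class logits; `intervened` maps concept index -> true value."""
    c = sigmoid(W_g @ x)          # predicted concept activations in [0, 1]
    if intervened:
        for i, value in intervened.items():
            c[i] = value          # a human overwrites a predicted concept
    return W_f @ c

x = rng.normal(size=8)
plain = predict(x)                         # no human feedback
fixed = predict(x, intervened={2: 1.0})    # expert asserts concept 2 holds
```

Because the label head only sees the concept vector, correcting a single concept propagates directly to the class logits; learning *when and where* to request such corrections is the question the paper studies.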

Papers citing "Learning to Intervene on Concept Bottlenecks" (11 of 11 shown):
  1. If Concept Bottlenecks are the Question, are Foundation Models the Answer? — Nicola Debole, Pietro Barbiero, Francesco Giannini, Andrea Passerini, Stefano Teso, Emanuele Marconato (28 Apr 2025)
  2. Addressing Concept Mislabeling in Concept Bottleneck Models Through Preference Optimization — Emiliano Penaloza, Tianyue H. Zhan, Laurent Charlin, Mateo Espinosa Zarlenga (25 Apr 2025)
  3. Avoiding Leakage Poisoning: Concept Interventions Under Distribution Shifts — M. Zarlenga, Gabriele Dominici, Pietro Barbiero, Z. Shams, M. Jamnik (24 Apr 2025)
  4. Shortcuts and Identifiability in Concept-based Models from a Neuro-Symbolic Lens — Samuele Bortolotti, Emanuele Marconato, Paolo Morettin, Andrea Passerini, Stefano Teso (16 Feb 2025)
  5. Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning — F. Mumuni, A. Mumuni (11 Mar 2024)
  6. Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? — Sonia Laguna, Ricards Marcinkevics, Moritz Vandenhirtz, Julia E. Vogt (24 Jan 2024)
  7. Human Uncertainty in Concept-Based AI Systems — Katherine M. Collins, Matthew Barker, M. Zarlenga, Naveen Raman, Umang Bhatt, M. Jamnik, Ilia Sucholutsky, Adrian Weller, Krishnamurthy Dvijotham (22 Mar 2023)
  8. GlanceNets: Interpretable, Leak-proof Concept-based Models — Emanuele Marconato, Andrea Passerini, Stefano Teso (31 May 2022)
  9. Post-hoc Concept Bottleneck Models — Mert Yuksekgonul, Maggie Wang, James Zou (31 May 2022)
  10. Training language models to follow instructions with human feedback — Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe (04 Mar 2022)
  11. Interactive Disentanglement: Learning Concepts by Interacting with their Prototype Representations — Wolfgang Stammer, Marius Memmel, P. Schramowski, Kristian Kersting (04 Dec 2021)