Learning Unsupervised Hierarchies of Audio Concepts
Darius Afchar, Romain Hennequin, Vincent Guigue
arXiv:2207.11231, 21 July 2022

Papers citing "Learning Unsupervised Hierarchies of Audio Concepts"

9 / 9 citing papers shown:

1. Detecting music deepfakes is easy but actually hard
   Darius Afchar, Gabriel Meseguer-Brocal, Romain Hennequin
   07 May 2024

2. Of Spiky SVDs and Music Recommendation
   Darius Afchar, Romain Hennequin, Vincent Guigue
   30 Jun 2023

3. The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
   Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju
   03 Feb 2022

4. Explainability in Music Recommender Systems
   Darius Afchar, Alessandro B. Melchiorre, Markus Schedl, Romain Hennequin, Elena V. Epure, Manuel Moussallam
   25 Jan 2022

5. Leveraging Hierarchical Structures for Few-Shot Musical Instrument Recognition
   Hugo Flores Garcia, Aldo Aguilar, Ethan Manilow, Bryan Pardo
   14 Jul 2021

6. Towards Rigorous Interpretations: a Formalisation of Feature Attribution
   Darius Afchar, Romain Hennequin, Vincent Guigue
   26 Apr 2021 (FAtt)

7. On Interpretability of Deep Learning based Skin Lesion Classifiers using Concept Activation Vectors
   Adriano Lucieri, Muhammad Naseer Bajwa, S. Braun, M. I. Malik, Andreas Dengel, Sheraz Ahmed
   05 May 2020 (MedIm)

8. On Completeness-aware Concept-Based Explanations in Deep Neural Networks
   Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
   17 Oct 2019 (FAtt)

9. Towards A Rigorous Science of Interpretable Machine Learning
   Finale Doshi-Velez, Been Kim
   28 Feb 2017 (XAI, FaML)