Making Corgis Important for Honeycomb Classification: Adversarial Attacks on Concept-based Explainability Tools

14 October 2021
Davis Brown, Henry Kvinge
AAML

Papers citing "Making Corgis Important for Honeycomb Classification: Adversarial Attacks on Concept-based Explainability Tools"

3 papers shown:

On Interpretability of Deep Learning based Skin Lesion Classifiers using Concept Activation Vectors
Adriano Lucieri, Muhammad Naseer Bajwa, S. Braun, M. I. Malik, Andreas Dengel, Sheraz Ahmed
MedIm · 64 citations · 5 May 2020

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
FAtt · 297 citations · 17 Oct 2019

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
3DH · 20,567 citations · 17 Apr 2017