MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI
19 July 2021
T. Miura
Satoshi Hasegawa
Toshiki Shibahara
SILM
MIACV

Papers citing "MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI"

26 / 26 papers shown
Privacy Risks and Preservation Methods in Explainable Artificial Intelligence: A Scoping Review
Sonal Allana
Mohan Kankanhalli
Rozita Dara
05 May 2025
Attackers Can Do Better: Over- and Understated Factors of Model Stealing Attacks
Daryna Oliynyk
Rudolf Mayer
Andreas Rauber
AAML
08 Mar 2025
Models That Are Interpretable But Not Transparent
Chudi Zhong
Panyu Chen
Cynthia Rudin
AAML
26 Feb 2025
From Counterfactuals to Trees: Competitive Analysis of Model Extraction Attacks
Awa Khouna
Julien Ferry
Thibaut Vidal
AAML
07 Feb 2025
Neural Honeytrace: A Robust Plug-and-Play Watermarking Framework against Model Extraction Attacks
Yixiao Xu
Binxing Fang
Rui Wang
Yinghai Zhou
S. Ji
Yuan Liu
Mohan Li
Zhihong Tian
MIACV
AAML
20 Jan 2025
Cybersecurity in Industry 5.0: Open Challenges and Future Directions
Bruno Santos
Rogério Luís C. Costa
Leonel Santos
AI4CE
12 Oct 2024
XSub: Explanation-Driven Adversarial Attack against Blackbox Classifiers via Feature Substitution
Kiana Vu
Phung Lai
Truc D. T. Nguyen
AAML
13 Sep 2024
VidModEx: Interpretable and Efficient Black Box Model Extraction for High-Dimensional Spaces
Somnath Sendhil Kumar
Yuvaraj Govindarajulu
Pavan Kulkarni
Manojkumar Somabhai Parmar
FAtt
04 Aug 2024
Privacy Implications of Explainable AI in Data-Driven Systems
Fatima Ezzeddine
22 Jun 2024
Knowledge Distillation-Based Model Extraction Attack using Private Counterfactual Explanations
Fatima Ezzeddine
Omran Ayoub
Silvia Giordano
AAML
MIACV
04 Apr 2024
A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures
Thanh Tam Nguyen
T. T. Huynh
Zhao Ren
Thanh Toan Nguyen
Phi Le Nguyen
Hongzhi Yin
Quoc Viet Hung Nguyen
31 Mar 2024
Unraveling Attacks in Machine Learning-based IoT Ecosystems: A Survey and the Open Libraries Behind Them
Chao-Jung Liu
Boxi Chen
Wei Shao
Chris Zhang
Kelvin Wong
Yi Zhang
22 Jan 2024
SoK: Taming the Triangle -- On the Interplays between Fairness, Interpretability and Privacy in Machine Learning
Julien Ferry
Ulrich Aïvodji
Sébastien Gambs
Marie-José Huguet
Mohamed Siala
FaML
22 Dec 2023
Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity
Zhihao Zhu
Chenwang Wu
Rui Fan
Yi Yang
Defu Lian
Enhong Chen
AAML
18 Dec 2023
Continual Learning From a Stream of APIs
Enneng Yang
Zhenyi Wang
Li Shen
Nan Yin
Tongliang Liu
Guibing Guo
Xingwei Wang
Dacheng Tao
CLL
31 Aug 2023
Data-Free Model Extraction Attacks in the Context of Object Detection
Harshit Shah
G. Aravindhan
Pavan Kulkarni
Yuvaraj Govindarajulu
Manojkumar Somabhai Parmar
MIACV
AAML
09 Aug 2023
FDINet: Protecting against DNN Model Extraction via Feature Distortion Index
Hongwei Yao
Zheng Li
Haiqin Weng
Feng Xue
Kui Ren
Zhan Qin
20 Jun 2023
Extracting Cloud-based Model with Prior Knowledge
Songtao Zhao
Kangjie Chen
Meng Hao
Jian Zhang
Guowen Xu
Hongwei Li
Tianwei Zhang
AAML
MIACV
SILM
MLAU
SLR
07 Jun 2023
Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack using Public Data
Pratik Karmakar
D. Basu
MIACV
16 Feb 2023
AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models
Abdullah Çaglar Öksüz
Anisa Halimi
Erman Ayday
ELM
AAML
04 Feb 2023
XRand: Differentially Private Defense against Explanation-Guided Attacks
Truc D. T. Nguyen
Phung Lai
Nhathai Phan
My T. Thai
AAML
SILM
08 Dec 2022
PriMask: Cascadable and Collusion-Resilient Data Masking for Mobile Cloud Inference
Linshan Jiang
Qun Song
Rui Tan
Mo Li
12 Nov 2022
I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences
Daryna Oliynyk
Rudolf Mayer
Andreas Rauber
16 Jun 2022
Increasing the Cost of Model Extraction with Calibrated Proof of Work
Adam Dziedzic
Muhammad Ahmad Kaleem
Y. Lu
Nicolas Papernot
FedML
MIACV
AAML
MLAU
23 Jan 2022
Data-Free Knowledge Transfer: A Survey
Yuang Liu
Wei Zhang
Jun Wang
Jianyong Wang
31 Dec 2021
Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review
Sahil Verma
Varich Boonsanong
Minh Hoang
Keegan E. Hines
John P. Dickerson
Chirag Shah
CML
20 Oct 2020