ResearchTrend.AI

Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks
arXiv:2308.00958 · Cited By

2 August 2023
Jun Guo
Aishan Liu
Xingyu Zheng
Siyuan Liang
Yisong Xiao
Yichao Wu
Xianglong Liu
    AAML

Papers citing "Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks"

10 / 10 papers shown
  1. Model-Guardian: Protecting against Data-Free Model Stealing Using Gradient Representations and Deceptive Predictions
     Yunfei Yang, Xiaojun Chen, Yuexin Xuan, Zhendong Zhao
     AAML · 67 · 0 · 0 · 23 Mar 2025
  2. CopyrightShield: Spatial Similarity Guided Backdoor Defense against Copyright Infringement in Diffusion Models
     Zhixiang Guo, Siyuan Liang, Aishan Liu, Dacheng Tao
     AAML · 76 · 1 · 0 · 02 Dec 2024
  3. Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats
     Kuanrong Liu, Siyuan Liang, Jiawei Liang, Pengwen Dai, Xiaochun Cao
     MU, AAML · 36 · 1 · 0 · 29 Sep 2024
  4. Towards Robust Object Detection: Identifying and Removing Backdoors via Module Inconsistency Analysis
     Xianda Zhang, Siyuan Liang
     AAML · 28 · 2 · 0 · 24 Sep 2024
  5. Compromising Embodied Agents with Contextual Backdoor Attacks
     Aishan Liu, Yuguang Zhou, Xianglong Liu, Tianyuan Zhang, Siyuan Liang, ..., Tianlin Li, Junqi Zhang, Wenbo Zhou, Qing-Wu Guo, Dacheng Tao
     LLMAG, AAML · 39 · 8 · 0 · 06 Aug 2024
  6. Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning
     Xinwei Liu, Xiaojun Jia, Yuan Xun, Siyuan Liang, Xiaochun Cao
     42 · 7 · 0 · 23 Jul 2024
  7. Towards Robust Physical-world Backdoor Attacks on Lane Detection
     Xinwei Zhang, Aishan Liu, Tianyuan Zhang, Siyuan Liang, Xianglong Liu
     AAML · 47 · 10 · 0 · 09 May 2024
  8. Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning
     Siyuan Liang, Kuanrong Liu, Jiajun Gong, Jiawei Liang, Yuan Xun, Ee-Chien Chang, Xiaochun Cao
     AAML, MU · 34 · 13 · 0 · 24 Mar 2024
  9. Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World
     Jiakai Wang, Aishan Liu, Zixin Yin, Shunchang Liu, Shiyu Tang, Xianglong Liu
     AAML · 143 · 194 · 0 · 01 Mar 2021
  10. Transferable Adversarial Attacks for Image and Video Object Detection
     Xingxing Wei, Siyuan Liang, Ning Chen, Xiaochun Cao
     AAML · 77 · 221 · 0 · 30 Nov 2018