Neural Antidote: Class-Wise Prompt Tuning for Purifying Backdoors in Pre-trained Vision-Language Models
arXiv:2502.19269 · 26 February 2025
Jiawei Kong
Hao Fang
Sihang Guo
Chenxi Qing
Bin Chen
Bin Wang
Shu-Tao Xia
AAML, VLM

Papers citing "Neural Antidote: Class-Wise Prompt Tuning for Purifying Backdoors in Pre-trained Vision-Language Models"

17 / 17 papers shown
Reconstructive Neuron Pruning for Backdoor Defense
Yige Li
X. Lyu
Xingjun Ma
Nodens Koren
Lingjuan Lyu
Yue Liu
Yugang Jiang
AAML
80
44
0
24 May 2023
Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks
Wenhan Yang
Jingdong Gao
Baharan Mirzasoleiman
VLM
167
20
0
13 Mar 2023
ZegCLIP: Towards Adapting CLIP for Zero-shot Semantic Segmentation
Ziqi Zhou
Bowen Zhang
Yinjie Lei
Lingqiao Liu
Yifan Liu
VLM
80
175
0
07 Dec 2022
Conditional Prompt Learning for Vision-Language Models
Kaiyang Zhou
Jingkang Yang
Chen Change Loy
Ziwei Liu
VLM, CLIP, VPVLM
148
1,359
0
10 Mar 2022
Vision-Language Pre-Training with Triple Contrastive Learning
Jinyu Yang
Jiali Duan
Son N. Tran
Yi Xu
Sampath Chanda
Liqun Chen
Belinda Zeng
Trishul Chilimbi
Junzhou Huang
VLM
110
297
0
21 Feb 2022
Backdoor Defense via Decoupling the Training Process
Kunzhe Huang
Yiming Li
Baoyuan Wu
Zhan Qin
Kui Ren
AAML, FedML
56
194
0
05 Feb 2022
Adversarial Unlearning of Backdoors via Implicit Hypergradient
Yi Zeng
Si-An Chen
Won Park
Z. Morley Mao
Ming Jin
R. Jia
AAML
129
177
0
07 Oct 2021
Learning to Prompt for Vision-Language Models
Kaiyang Zhou
Jingkang Yang
Chen Change Loy
Ziwei Liu
VPVLM, CLIP, VLM
507
2,413
0
02 Sep 2021
Align before Fuse: Vision and Language Representation Learning with Momentum Distillation
Junnan Li
Ramprasaath R. Selvaraju
Akhilesh Deepak Gotmare
Shafiq Joty
Caiming Xiong
Guosheng Lin
FaML
221
1,975
0
16 Jul 2021
Learning Transferable Visual Models From Natural Language Supervision
Alec Radford
Jong Wook Kim
Chris Hallacy
Aditya A. Ramesh
Gabriel Goh
...
Amanda Askell
Pamela Mishkin
Jack Clark
Gretchen Krueger
Ilya Sutskever
CLIPVLM
993
29,871
0
26 Feb 2021
The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization
Dan Hendrycks
Steven Basart
Norman Mu
Saurav Kadavath
Frank Wang
...
Samyak Parajuli
Mike Guo
Basel Alomair
Jacob Steinhardt
Justin Gilmer
OOD
363
1,757
0
29 Jun 2020
Natural Adversarial Examples
Dan Hendrycks
Kevin Zhao
Steven Basart
Jacob Steinhardt
Basel Alomair
OODD
236
1,484
0
16 Jul 2019
Learning Robust Global Representations by Penalizing Local Predictive Power
Haohan Wang
Songwei Ge
Eric Xing
Zachary Chase Lipton
OOD
122
967
0
29 May 2019
A new Backdoor Attack in CNNs by training set corruption without label poisoning
Mauro Barni
Kassem Kallas
B. Tondi
AAML
116
358
0
12 Feb 2019
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen
Chang-rui Liu
Yue Liu
Kimberly Lu
Basel Alomair
AAML, SILM
146
1,854
0
15 Dec 2017
Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry
Aleksandar Makelov
Ludwig Schmidt
Dimitris Tsipras
Adrian Vladu
SILM, OOD
319
12,138
0
19 Jun 2017
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju
Michael Cogswell
Abhishek Das
Ramakrishna Vedantam
Devi Parikh
Dhruv Batra
FAtt
335
20,110
0
07 Oct 2016