Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models

22 February 2024
Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong
Tags: AAML

Papers citing "Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models"

5 of 5 papers shown
1. DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data
   Dorde Popovic, Amin Sadeghi, Ting Yu, Sanjay Chawla, Issa M. Khalil
   Tags: AAML · Citations: 0 · 27 Mar 2025

2. Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
   Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao
   Tags: AAML · Citations: 50 · 13 Oct 2021

3. Improved Baselines with Momentum Contrastive Learning
   Xinlei Chen, Haoqi Fan, Ross B. Girshick, Kaiming He
   Tags: SSL · Citations: 3,369 · 09 Mar 2020

4. SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
   Edward Chou, Florian Tramèr, Giancarlo Pellegrino
   Tags: AAML · Citations: 287 · 02 Dec 2018

5. ImageNet Large Scale Visual Recognition Challenge
   Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
   Tags: VLM, ObjD · Citations: 39,194 · 01 Sep 2014