Probe-Me-Not: Protecting Pre-trained Encoders from Malicious Probing

arXiv:2411.12508 · 19 November 2024
Ruyi Ding, Tong Zhou, Lili Su, A. A. Ding, Xiaolin Xu, Yunsi Fei
AAML

Papers citing "Probe-Me-Not: Protecting Pre-trained Encoders from Malicious Probing"

3 papers shown

Model Immunization from a Condition Number Perspective
Amber Yijia Zheng, Cedar Site Bai, Brian Bullins, Raymond A. Yeh
MedIm · 29 May 2025

ProDiF: Protecting Domain-Invariant Features to Secure Pre-Trained Models Against Extraction
Tong Zhou, Shijin Duan, Gaowen Liu, Charles Fleming, Ramana Rao Kompella, Shaolei Ren, Xiaolin Xu
AAML · 17 Mar 2025

On the Relationship between Self-Attention and Convolutional Layers
Jean-Baptiste Cordonnier, Andreas Loukas, Martin Jaggi
08 Nov 2019