Adversarial Attacks on Foundational Vision Models

28 August 2023
Nathan Inkawhich, Gwendolyn McDonald, R. Luley
VLM
Papers citing "Adversarial Attacks on Foundational Vision Models"

14 / 14 papers shown
 1. Robustness Tokens: Towards Adversarial Robustness of Transformers
    Brian Pulfer, Yury Belousov, S. Voloshynovskiy
    AAML · 13 Mar 2025

 2. Task-Agnostic Attacks Against Vision Foundation Models
    Brian Pulfer, Yury Belousov, Vitaliy Kinakh, Teddy Furon, S. Voloshynovskiy
    AAML · 05 Mar 2025

 3. Replace-then-Perturb: Targeted Adversarial Attacks With Visual Reasoning for Vision-Language Models
    Jonggyu Jang, Hyeonsu Lyu, Jungyeon Koh, H. Yang
    VLM, AAML · 01 Nov 2024

 4. Liveness Detection in Computer Vision: Transformer-based Self-Supervised Learning for Face Anti-Spoofing
    Arman Keresh, Pakizar Shamoi
    19 Jun 2024

 5. TIMA: Text-Image Mutual Awareness for Balancing Zero-Shot Adversarial Robustness and Generalization Ability
    Fengji Ma, Li Liu, Hei Victor Cheng
    VLM · 27 May 2024

 6. One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models
    Lin Li, Haoyan Guan, Jianing Qiu, Michael W. Spratling
    AAML, VLM, VPVLM · 04 Mar 2024

 7. Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness
    Sibo Wang, Jie Zhang, Zheng Yuan, Shiguang Shan
    VLM · 09 Jan 2024

 8. Adapting Contrastive Language-Image Pretrained (CLIP) Models for Out-of-Distribution Detection
    Nikolas Adaloglou, Félix D. P. Michels, Tim Kaiser, M. Kollmann
    VLM · 10 Mar 2023

 9. Pre-trained Adversarial Perturbations
    Y. Ban, Yinpeng Dong
    AAML · 07 Oct 2022

10. Towards Out-of-Distribution Adversarial Robustness
    Adam Ibrahim, Charles Guille-Escuret, Ioannis Mitliagkas, Irina Rish, David M. Krueger, P. Bashivan
    OOD · 06 Oct 2022

11. Fine-grain Inference on Out-of-Distribution Data with Hierarchical Classification
    Randolph Linderman, Jingyang Zhang, Nathan Inkawhich, H. Li, Yiran Chen
    OODD · 09 Sep 2022

12. Emerging Properties in Self-Supervised Vision Transformers
    Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
    29 Apr 2021

13. Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
    Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu H. Pham, Quoc V. Le, Yun-hsuan Sung, Zhen Li, Tom Duerig
    VLM, CLIP · 11 Feb 2021

14. Robust Out-of-distribution Detection for Neural Networks
    Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, S. Jha
    OODD · 21 Mar 2020