ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Generating Natural Adversarial Examples (arXiv 1710.11342)
31 October 2017
Zhengli Zhao, Dheeru Dua, Sameer Singh
Tags: GAN, AAML

Papers citing "Generating Natural Adversarial Examples" (50 of 324 papers shown)
  • Improving the Transferability of Adversarial Examples by Inverse Knowledge Distillation. Wenyuan Wu, Zheng Liu, Yong Chen, Chao Su, Dezhong Peng, Xu Wang. [AAML] 24 Feb 2025
  • Tougher Text, Smarter Models: Raising the Bar for Adversarial Defence Benchmarks. Yang Wang, Chenghua Lin. [ELM] 05 Jan 2025
  • BinarySelect to Improve Accessibility of Black-Box Attack Research. Shatarupa Ghosh, Jonathan Rusert. [AAML] 13 Dec 2024
  • Rethinking Visual Counterfactual Explanations Through Region Constraint. Bartlomiej Sobieski, Jakub Grzywaczewski, Bartlomiej Sadlej, Matthew Tivnan, P. Biecek. [CML] 16 Oct 2024
  • Natural Language Induced Adversarial Images. Xiaopei Zhu, Peiyang Xu, Guanning Zeng, Yinpeng Dong, Xiaolin Hu. [AAML] 11 Oct 2024
  • SCA: Highly Efficient Semantic-Consistent Unrestricted Adversarial Attack. Zihao Pan, Weibin Wu, Yuhang Cao, Zibin Zheng. [DiffM, AAML] 03 Oct 2024
  • Explaining an image classifier with a generative model conditioned by uncertainty. Adrien Le Coz, Stéphane Herbin, Faouzi Adjed. 02 Oct 2024
  • Legilimens: Practical and Unified Content Moderation for Large Language Model Services. Jialin Wu, Jiangyi Deng, Shengyuan Pang, Yanjiao Chen, Jiayang Xu, Xinfeng Li, Wenyuan Xu. 28 Aug 2024
  • Deep Learning with Data Privacy via Residual Perturbation. Wenqi Tao, Huaming Ling, Zuoqiang Shi, Bao Wang. 11 Aug 2024
  • E-Bench: Towards Evaluating the Ease-of-Use of Large Language Models. Zhenyu Zhang, Bingguang Hao, Jinpeng Li, Zekai Zhang, Dongyan Zhao. 16 Jun 2024
  • A Constraint-Enforcing Reward for Adversarial Attacks on Text Classifiers. Tom Roth, Inigo Jauregi Unanue, A. Abuadbba, Massimo Piccardi. [AAML, SILM] 20 May 2024
  • On Adversarial Examples for Text Classification by Perturbing Latent Representations. Korn Sooksatra, Bikram Khanal, Pablo Rivas. [SILM, AAML] 06 May 2024
  • Is ReLU Adversarially Robust? Korn Sooksatra, Greg Hamerly, Pablo Rivas. 06 May 2024
  • Global Counterfactual Directions. Bartlomiej Sobieski, P. Biecek. [DiffM] 18 Apr 2024
  • Cross-Lingual Transfer Robustness to Lower-Resource Languages on Adversarial Datasets. Shadi Manafi, Nikhil Krishnaswamy. [AAML] 29 Mar 2024
  • Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization. Yujia Liu, Chenxi Yang, Dingquan Li, Jianhao Ding, Tingting Jiang. 18 Mar 2024
  • Universal Prompt Optimizer for Safe Text-to-Image Generation. Zongyu Wu, Hongcheng Gao, Yueze Wang, Xiang Zhang, Suhang Wang. [EGVM] 16 Feb 2024
  • Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models! Shashank Kotyan, Poyuan Mao, Pin-Yu Chen, Danilo Vasconcellos Vargas. [AAML, DiffM] 07 Feb 2024
  • Transcending Adversarial Perturbations: Manifold-Aided Adversarial Examples with Legitimate Semantics. Shuai Li, Xiaoyu Jiang, Xiaoguang Ma. [AAML] 05 Feb 2024
  • Manipulating Predictions over Discrete Inputs in Machine Teaching. Xiaodong Wu, Yufei Han, H. Dahrouj, Jianbing Ni, Zhenwen Liang, Xiangliang Zhang. 31 Jan 2024
  • An Empirical Study of In-context Learning in LLMs for Machine Translation. Pranjal A. Chitale, Jay Gala, Raj Dabre. [LRM] 22 Jan 2024
  • Finding a Needle in the Adversarial Haystack: A Targeted Paraphrasing Approach For Uncovering Edge Cases with Minimal Distribution Distortion. Aly M. Kassem, Sherif Saad. [AAML] 21 Jan 2024
  • Where and How to Attack? A Causality-Inspired Recipe for Generating Counterfactual Adversarial Examples. Ruichu Cai, Yuxuan Zhu, Jie Qiao, Zefeng Liang, Furui Liu, Zhifeng Hao. [CML] 21 Dec 2023
  • SenTest: Evaluating Robustness of Sentence Encoders. Tanmay Chavan, Shantanu Patankar, Aditya Kane, Omkar Gokhale, Geetanjali Kale, Raviraj Joshi. 29 Nov 2023
  • Adversarial Doodles: Interpretable and Human-drawable Attacks Provide Describable Insights. Ryoya Nara, Yusuke Matsui. [AAML] 27 Nov 2023
  • Generating Valid and Natural Adversarial Examples with Large Language Models. Zimu Wang, Wei Wang, Qi Chen, Qiufeng Wang, Anh Nguyen. [AAML] 20 Nov 2023
  • DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Language Models. Yibo Wang, Xiangjue Dong, James Caverlee, Philip S. Yu. 14 Nov 2023
  • A Survey on Transferability of Adversarial Examples across Deep Neural Networks. Jindong Gu, Xiaojun Jia, Pau de Jorge, Wenqian Yu, Xinwei Liu, ..., Anjun Hu, Ashkan Khakzar, Zhijiang Li, Xiaochun Cao, Philip H. S. Torr. [AAML] 26 Oct 2023
  • Break it, Imitate it, Fix it: Robustness by Generating Human-Like Attacks. Aradhana Sinha, Ananth Balashankar, Ahmad Beirami, Thi Avrahami, Jilin Chen, Alex Beutel. [AAML] 25 Oct 2023
  • Finite-context Indexing of Restricted Output Space for NLP Models Facing Noisy Input. Minh Nguyen, Nancy F. Chen. 21 Oct 2023
  • Beyond Hard Samples: Robust and Effective Grammatical Error Correction with Cycle Self-Augmenting. Zecheng Tang, Kaiqi Feng, Juntao Li, Min Zhang. 20 Oct 2023
  • Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class Manipulation Using DeepFool Algorithm. S. M. Fazle, J. Mondal, Meem Arafat Manab, Xi Xiao, Sarfaraz Newaz. [AAML] 18 Oct 2023
  • Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks. Erfan Shayegani, Md Abdullah Al Mamun, Yu Fu, Pedram Zaree, Yue Dong, Nael B. Abu-Ghazaleh. [AAML] 16 Oct 2023
  • A Survey of Robustness and Safety of 2D and 3D Deep Learning Models Against Adversarial Attacks. Yanjie Li, Bin Xie, Songtao Guo, Yuanyuan Yang, Bin Xiao. [AAML] 01 Oct 2023
  • On the Trade-offs between Adversarial Robustness and Actionable Explanations. Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju. [AAML] 28 Sep 2023
  • Understanding Pose and Appearance Disentanglement in 3D Human Pose Estimation. K. K. Nakka, Mathieu Salzmann. [DRL, CoGe] 20 Sep 2023
  • Machine Translation Models Stand Strong in the Face of Adversarial Attacks. Pavel Burnyshev, Elizaveta Kostenok, Alexey Zaytsev. [SILM, AAML] 10 Sep 2023
  • Lost In Translation: Generating Adversarial Examples Robust to Round-Trip Translation. Neel Bhandari, Pin-Yu Chen. [AAML, SILM] 24 Jul 2023
  • Abusing Images and Sounds for Indirect Instruction Injection in Multi-Modal LLMs. Eugene Bagdasaryan, Tsung-Yin Hsieh, Ben Nassi, Vitaly Shmatikov. 19 Jul 2023
  • NatLogAttack: A Framework for Attacking Natural Language Inference Models with Natural Logic. Zi'ou Zheng, Xiao-Dan Zhu. [AAML, LRM] 06 Jul 2023
  • SCAT: Robust Self-supervised Contrastive Learning via Adversarial Training for Text Classification. J. Wu, Dit-Yan Yeung. [SILM] 04 Jul 2023
  • Evaluating Paraphrastic Robustness in Textual Entailment Models. Dhruv Verma, Yash Kumar Lal, Shreyashee Sinha, Benjamin Van Durme, Adam Poliak. 29 Jun 2023
  • A Survey on Out-of-Distribution Evaluation of Neural NLP Models. Xinzhe Li, Ming Liu, Shang Gao, Wray L. Buntine. 27 Jun 2023
  • Visual Adversarial Examples Jailbreak Aligned Large Language Models. Xiangyu Qi, Kaixuan Huang, Ashwinee Panda, Peter Henderson, Mengdi Wang, Prateek Mittal. [AAML] 22 Jun 2023
  • Anticipatory Thinking Challenges in Open Worlds: Risk Management. Adam Amos-Binks, Dustin Dannenhauer, Leilani H. Gilpin. 22 Jun 2023
  • A Multilingual Evaluation of NER Robustness to Adversarial Inputs. A. Srinivasan, Sowmya Vajjala. [AAML] 30 May 2023
  • Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability. Haotian Xue, Alexandre Araujo, Bin Hu, Yongxin Chen. [DiffM] 25 May 2023
  • Improving Classifier Robustness through Active Generation of Pairwise Counterfactuals. Ananth Balashankar, Xuezhi Wang, Yao Qin, Ben Packer, Nithum Thain, Jilin Chen, Ed H. Chi, Alex Beutel. 22 May 2023
  • Latent Magic: An Investigation into Adversarial Examples Crafted in the Semantic Latent Space. Bo Zheng. [DiffM] 22 May 2023
  • A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation. Xiaowei Huang, Wenjie Ruan, Wei Huang, Gao Jin, Yizhen Dong, ..., Sihao Wu, Peipei Xu, Dengyu Wu, André Freitas, Mustafa A. Mustafa. [ALM] 19 May 2023