ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Asymmetric Bias in Text-to-Image Generation with Adversarial Attacks


22 December 2023
Haz Sameen Shahgir, Xianghao Kong, Greg Ver Steeg, Yue Dong

Papers citing "Asymmetric Bias in Text-to-Image Generation with Adversarial Attacks"

19 / 19 papers shown
One Prompt to Verify Your Models: Black-Box Text-to-Image Models Verification via Non-Transferable Adversarial Attacks
Ji Guo, Wenbo Jiang, Rui Zhang, Guoming Lu, Hongwei Li
AAML · 82 · 0 · 0 · 30 Oct 2024

Understanding the Effectiveness of Large Language Models in Detecting Security Vulnerabilities
Avishree Khare, Saikat Dutta, Ziyang Li, Alaia Solko-Breslin, Rajeev Alur, Mayur Naik
ELM · 75 · 46 · 0 · 16 Nov 2023

The Generative AI Paradox: "What It Can Create, It May Not Understand"
Peter West, Ximing Lu, Nouha Dziri, Faeze Brahman, Linjie Li, ..., Khyathi Chandu, Benjamin Newman, Pang Wei Koh, Allyson Ettinger, Yejin Choi
AIMat · 81 · 78 · 0 · 31 Oct 2023

Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks
Erfan Shayegani, Md Abdullah Al Mamun, Yu Fu, Pedram Zaree, Yue Dong, Nael B. Abu-Ghazaleh
AAML · 203 · 161 · 0 · 16 Oct 2023

Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?
Yu-Lin Tsai, Chia-Yi Hsu, Chulin Xie, Chih-Hsun Lin, Jia-You Chen, Yue Liu, Pin-Yu Chen, Chia-Mu Yu, Chun-Ying Huang
DiffM · 73 · 86 · 0 · 16 Oct 2023

VLATTACK: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models
Ziyi Yin, Muchao Ye, Tianrong Zhang, Tianyu Du, Jinguo Zhu, Han Liu, Jinghui Chen, Ting Wang, Fenglong Ma
AAML, VLM, CoGe · 71 · 42 · 0 · 07 Oct 2023

Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM
Bochuan Cao, Yu Cao, Lu Lin, Jinghui Chen
AAML · 55 · 148 · 0 · 18 Sep 2023

Universal and Transferable Adversarial Attacks on Aligned Language Models
Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J. Zico Kolter, Matt Fredrikson
287 · 1,449 · 0 · 27 Jul 2023

Red-Teaming the Stable Diffusion Safety Filter
Javier Rando, Daniel Paleka, David Lindner, Lennard Heim, Florian Tramèr
DiffM · 161 · 198 · 0 · 03 Oct 2022

Hierarchical Text-Conditional Image Generation with CLIP Latents
Aditya A. Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen
VLM, DiffM · 358 · 6,854 · 0 · 13 Apr 2022

High-Resolution Image Synthesis with Latent Diffusion Models
Robin Rombach, A. Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer
3DV · 383 · 15,454 · 0 · 20 Dec 2021

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
AIMat · 399 · 20,114 · 0 · 23 Oct 2019

Adversarial Examples Are Not Bugs, They Are Features
Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry
SILM · 89 · 1,837 · 0 · 06 May 2019

Are adversarial examples inevitable?
Ali Shafahi, Wenjie Huang, Christoph Studer, Soheil Feizi, Tom Goldstein
SILM · 53 · 282 · 0 · 06 Sep 2018

Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations
Dan Hendrycks, Thomas G. Dietterich
OOD · 77 · 199 · 0 · 04 Jul 2018

Adversarial Patch
Tom B. Brown, Dandelion Mané, Aurko Roy, Martín Abadi, Justin Gilmer
AAML · 70 · 1,095 · 0 · 27 Dec 2017

Microsoft COCO: Common Objects in Context
Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, C. L. Zitnick, Piotr Dollár
ObjD · 396 · 43,619 · 0 · 01 May 2014

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
AAML · 259 · 14,912 · 1 · 21 Dec 2013

Auto-Encoding Variational Bayes
Diederik P. Kingma, Max Welling
BDL · 437 · 16,940 · 0 · 20 Dec 2013