Unveiling the Safety of GPT-4o: An Empirical Study using Jailbreak Attacks

10 June 2024
Zonghao Ying, Aishan Liu, Xianglong Liu, Dacheng Tao

Papers citing "Unveiling the Safety of GPT-4o: An Empirical Study using Jailbreak Attacks"

12 of 12 papers shown
POISONCRAFT: Practical Poisoning of Retrieval-Augmented Generation for Large Language Models
Yangguang Shao, Xinjie Lin, Haozheng Luo, Chengshang Hou, G. Xiong, J. Yu, Junzheng Shi
SILM · 52 · 0 · 0 · 10 May 2025

REVEAL: Multi-turn Evaluation of Image-Input Harms for Vision LLM
Madhur Jindal, Saurabh Deshpande
AAML · 45 · 0 · 0 · 07 May 2025

Manipulating Multimodal Agents via Cross-Modal Prompt Injection
Le Wang, Zonghao Ying, Tianyuan Zhang, Siyuan Liang, Shengshan Hu, Mingchuan Zhang, A. Liu, Xianglong Liu
AAML · 33 · 1 · 0 · 19 Apr 2025

Towards Understanding the Safety Boundaries of DeepSeek Models: Evaluation and Findings
Zonghao Ying, Guangyi Zheng, Yongxin Huang, Deyue Zhang, Wenxin Zhang, Quanchen Zou, Aishan Liu, X. Liu, Dacheng Tao
ELM · 74 · 6 · 0 · 19 Mar 2025

CUE-M: Contextual Understanding and Enhanced Search with Multimodal Large Language Model
Dongyoung Go, Taesun Whang, Chanhee Lee, Hwayeon Kim, Sunghoon Park, Seunghwan Ji, Dongchan Kim, Young-Bum Kim
LRM · 172 · 1 · 0 · 19 Nov 2024

JailBreakV-28K: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks
Weidi Luo, Siyuan Ma, Xiaogeng Liu, Xiaoyu Guo, Chaowei Xiao
AAML · 77 · 69 · 0 · 03 Apr 2024

Does Few-shot Learning Suffer from Backdoor Attacks?
Xinwei Liu, Xiaojun Jia, Jindong Gu, Yuan Xun, Siyuan Liang, Xiaochun Cao
86 · 18 · 0 · 31 Dec 2023

Pre-trained Trojan Attacks for Visual Recognition
Aishan Liu, Xinwei Zhang, Yisong Xiao, Yuguang Zhou, Siyuan Liang, Jiakai Wang, Xianglong Liu, Xiaochun Cao, Dacheng Tao
AAML · 65 · 25 · 0 · 23 Dec 2023

FigStep: Jailbreaking Large Vision-Language Models via Typographic Visual Prompts
Yichen Gong, Delong Ran, Jinyuan Liu, Conglei Wang, Tianshuo Cong, Anyu Wang, Sisi Duan, Xiaoyun Wang
MLLM · 129 · 118 · 0 · 09 Nov 2023

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM · ALM · 313 · 11,953 · 0 · 04 Mar 2022

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
LM&Ro · LRM · AI4CE · ReLM · 364 · 8,495 · 0 · 28 Jan 2022

Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World
Jiakai Wang, Aishan Liu, Zixin Yin, Shunchang Liu, Shiyu Tang, Xianglong Liu
AAML · 140 · 194 · 0 · 01 Mar 2021