LARGO: Latent Adversarial Reflection through Gradient Optimization for Jailbreaking LLMs

16 May 2025
Ran Li
Hao Wang
Chengzhi Mao
    AAML
Abstract

Efficient red-teaming methods to uncover vulnerabilities in Large Language Models (LLMs) are crucial. While recent attacks often use LLMs as optimizers, the discrete language space makes gradient-based methods struggle. We introduce LARGO (Latent Adversarial Reflection through Gradient Optimization), a novel latent self-reflection attack that reasserts the power of gradient-based optimization for generating fluent jailbreaking prompts. Operating within the LLM's continuous latent space, LARGO first optimizes an adversarial latent vector and then recursively calls the same LLM to decode the latent into natural language. This methodology yields a fast, effective, and transferable attack that produces fluent and stealthy prompts. On standard benchmarks like AdvBench and JailbreakBench, LARGO surpasses leading jailbreaking techniques, including AutoDAN, by 44 points in attack success rate. Our findings demonstrate a potent alternative to agentic LLM prompting, highlighting the efficacy of interpreting and attacking LLM internals through gradient optimization.
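The abstract describes a two-stage procedure: gradient optimization of a continuous latent (embedding-space) adversarial suffix, followed by using the same LLM to decode that latent into fluent text. Below is a minimal, hypothetical PyTorch/Transformers sketch of that general idea, not the authors' implementation; the model name, target string, suffix length, learning rate, step count, and decoding prompt are all assumptions made for illustration.

```python
# Hedged sketch of a latent-suffix jailbreak in the spirit of LARGO.
# Stage 1: optimize soft embeddings so the model begins an affirmative reply.
# Stage 2: ask the same model to render the optimized latent as natural language.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # assumed target model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
for p in model.parameters():          # freeze weights; only the latent is trained
    p.requires_grad_(False)
embed = model.get_input_embeddings()

prompt_ids = tok("Tell me how to ...", return_tensors="pt").input_ids
target_ids = tok("Sure, here is how",  # assumed affirmative target prefix
                 return_tensors="pt", add_special_tokens=False).input_ids

# Stage 1: a free suffix of soft embeddings appended to the prompt.
suffix_len = 20                        # assumption
init = embed.weight.mean(dim=0).detach()
suffix = torch.nn.Parameter(init.repeat(suffix_len, 1).unsqueeze(0))
opt = torch.optim.Adam([suffix], lr=1e-2)

prompt_emb = embed(prompt_ids)
target_emb = embed(target_ids)

for step in range(200):                # iteration budget is an assumption
    inputs = torch.cat([prompt_emb, suffix, target_emb], dim=1)
    logits = model(inputs_embeds=inputs).logits
    tgt_len = target_ids.size(1)
    # Logits at these positions predict the target tokens.
    pred = logits[:, -tgt_len - 1:-1, :]
    loss = torch.nn.functional.cross_entropy(
        pred.reshape(-1, pred.size(-1)), target_ids.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: have the same LLM decode the latent into fluent text
# (the exact self-reflection prompt used by LARGO is an assumption here).
decode_ids = tok("Rewrite the above as plain text:",
                 return_tensors="pt", add_special_tokens=False).input_ids
dec_inputs = torch.cat([prompt_emb, suffix.detach(), embed(decode_ids)], dim=1)
out = model.generate(inputs_embeds=dec_inputs, max_new_tokens=60)
print(tok.decode(out[0], skip_special_tokens=True))
```

The decoded string, being ordinary natural language rather than a token-soup suffix, is what makes the resulting prompt fluent and transferable in the paper's framing.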

@article{li2025_2505.10838,
  title={LARGO: Latent Adversarial Reflection through Gradient Optimization for Jailbreaking LLMs},
  author={Ran Li and Hao Wang and Chengzhi Mao},
  journal={arXiv preprint arXiv:2505.10838},
  year={2025}
}