TeleAI-Safety: A comprehensive LLM jailbreaking benchmark towards attacks, defenses, and evaluations
v2 (latest)

5 December 2025
Xiuyuan Chen, Jian Zhao, Yuxiang He, Yuan Xun, Xinwei Liu, Yanshu Li, Huilin Zhou, Wei Cai, Ziyan Shi, Yuchen Yuan, Tianle Zhang, Chi Zhang, Xuelong Li
ArXiv (abs) · PDF · HTML · GitHub (6★)

Papers citing "TeleAI-Safety: A comprehensive LLM jailbreaking benchmark towards attacks, defenses, and evaluations"

No citing papers found (0 papers shown).