ResearchTrend.AI
Hard Adversarial Example Mining for Improving Robust Fairness

3 August 2023
Chenhao Lin, Xiang Ji, Yulong Yang, Q. Li, Chao Shen, Run Wang, Liming Fang
    AAML
Papers citing "Hard Adversarial Example Mining for Improving Robust Fairness"

2 / 2 papers shown
Refining Positive and Toxic Samples for Dual Safety Self-Alignment of LLMs with Minimal Human Interventions
Jingxin Xu, Guoshun Nan, Sheng Guan, Sicong Leng, Yong-Jin Liu, Zixiao Wang, Yuyang Ma, Zhili Zhou, Yanzhao Hou, Xiaofeng Tao
LM&MA
08 Feb 2025
Recent Advances in Adversarial Training for Adversarial Robustness
Tao Bai, Jinqi Luo, Jun Zhao, B. Wen, Qian Wang
AAML
02 Feb 2021