Optimizing Safe and Aligned Language Generation: A Multi-Objective GRPO Approach

26 March 2025
Xuying Li, Zhuo Li, Yuji Kosuga, Victor Bian

Papers citing "Optimizing Safe and Aligned Language Generation: A Multi-Objective GRPO Approach"

3 papers

GuardReasoner-VL: Safeguarding VLMs via Reinforced Reasoning
Yong-Jin Liu, Shengfang Zhai, Mingzhe Du, Yulin Chen, Tri Cao, ..., Xuzhao Li, Kun Wang, Junfeng Fang, Jiaheng Zhang, Bryan Hooi
OffRL, LRM · 16 May 2025

Safety in Large Reasoning Models: A Survey
Cheng Wang, Yong-Jin Liu, Yangqiu Song, Duzhen Zhang, Zechao Li, Junfeng Fang, Bryan Hooi
LRM · 24 Apr 2025

FlexLLM: A System for Co-Serving Large Language Model Inference and Parameter-Efficient Finetuning
Xupeng Miao, Gabriele Oliaro, Xinhao Cheng, Vineeth Kada, Ruohan Gao, ..., April Yang, Yingcheng Wang, Mengdi Wu, Colin Unger, Zhihao Jia
MoE · 29 Feb 2024