Optimizing Safe and Aligned Language Generation: A Multi-Objective GRPO Approach
arXiv: 2503.21819
26 March 2025
Xuying Li, Zhuo Li, Yuji Kosuga, Victor Bian

Papers citing "Optimizing Safe and Aligned Language Generation: A Multi-Objective GRPO Approach" (3 papers shown)

GuardReasoner-VL: Safeguarding VLMs via Reinforced Reasoning
Yong-Jin Liu, Shengfang Zhai, Mingzhe Du, Yulin Chen, Tri Cao, ..., Xuzhao Li, Kun Wang, Junfeng Fang, Jiaheng Zhang, Bryan Hooi
OffRL, LRM
16 May 2025
Safety in Large Reasoning Models: A Survey
Cheng Wang, Yong-Jin Liu, Yangqiu Song, Duzhen Zhang, Zechao Li, Junfeng Fang, Bryan Hooi
LRM
24 Apr 2025
FlexLLM: A System for Co-Serving Large Language Model Inference and Parameter-Efficient Finetuning
Xupeng Miao, Gabriele Oliaro, Xinhao Cheng, Vineeth Kada, Ruohan Gao, ..., April Yang, Yingcheng Wang, Mengdi Wu, Colin Unger, Zhihao Jia
MoE
29 Feb 2024