ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

$H^3$Fusion: Helpful, Harmless, Honest Fusion of Aligned LLMs


26 November 2024
Selim Furkan Tekin, Fatih Ilhan, Tiansheng Huang, Sihao Hu, Zachary Yahn, Ling Liu
    MoMe

Papers citing "$H^3$Fusion: Helpful, Harmless, Honest Fusion of Aligned LLMs"

2 of 2 papers shown
Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable
Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Zachary Yahn, Yichang Xu, Ling Liu
01 Mar 2025
Panacea: Mitigating Harmful Fine-tuning for Large Language Models via Post-fine-tuning Perturbation
Yishuo Wang, Tiansheng Huang, Li Shen, H. Yao, Haotian Luo, Rui Liu, Naiqiang Tan, Jiaxing Huang, Dacheng Tao
AAML, MoMe, CLL
30 Jan 2025