Alignment with Preference Optimization Is All You Need for LLM Safety

12 September 2024
Réda Alami
Ali Khalifa Almansoori
Ahmed Alzubaidi
M. Seddik
Mugariya Farooq
Hakim Hacid

Papers citing "Alignment with Preference Optimization Is All You Need for LLM Safety"

2 / 2 papers shown

Maximizing the Potential of Synthetic Data: Insights from Random Matrix Theory
Aymane El Firdoussi, M. Seddik, Soufiane Hayou, Réda Alami, Ahmed Alzubaidi, Hakim Hacid
11 Oct 2024

Noise Contrastive Alignment of Language Models with Explicit Rewards
Huayu Chen, Guande He, Lifan Yuan, Ganqu Cui, Hang Su, Jun Zhu
08 Feb 2024