Beyond Fine-tuning: Unleashing the Potential of Continuous Pretraining for Clinical LLMs (arXiv:2409.14988)

23 September 2024
Clément Christophe, Tathagata Raha, Svetlana Maslenkova, Muhammad Umar Salman, Praveen K Kanithi, Marco AF Pimentel, Shadab Khan
LM&MA

Papers citing "Beyond Fine-tuning: Unleashing the Potential of Continuous Pretraining for Clinical LLMs"

2 papers shown

A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment
Jean-Philippe Corbeil, Amin Dada, Jean-Michel Attendu, Asma Ben Abacha, Alessandro Sordoni, Lucas Caccia, François Beaulieu, Thomas Lin, Jens Kleesiek, Paul Vozila
LM&MA · 15 May 2025

Stabilizing Reasoning in Medical LLMs with Continued Pretraining and Reasoning Preference Optimization
Wataru Kawakami, Keita Suzuki, Junichiro Iwasawa
LRM · 25 Apr 2025