Training Overhead Ratio: A Practical Reliability Metric for Large Language Model Training Systems

14 August 2024
Ning Lu, Qian Xie, Hao Zhang, Wenyi Fang, Yang Zheng, Zheng Hu, Jiantao Ma

Papers citing "Training Overhead Ratio: A Practical Reliability Metric for Large Language Model Training Systems"

2 of 2 citing papers shown
Safe Delta: Consistently Preserving Safety when Fine-Tuning LLMs on Diverse Datasets
Ning Lu, Shengcai Liu, Jiahao Wu, Weiyu Chen, Zhirui Zhang, Yew-Soon Ong, Qi Wang, Ke Tang
17 May 2025
Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020