ResearchTrend.AI
Home › Papers › 2501.15296 › Cited By
You Only Prune Once: Designing Calibration-Free Model Compression With Policy Learning

25 January 2025
Ayan Sengupta, Siddhant Chaudhary, Tanmoy Chakraborty

Papers citing "You Only Prune Once: Designing Calibration-Free Model Compression With Policy Learning"

3 / 3 papers shown
Position: Enough of Scaling LLMs! Lets Focus on Downscaling
Ayan Sengupta, Yash Goel, Tanmoy Chakraborty
02 May 2025
Efficient LLMs with AMP: Attention Heads and MLP Pruning
Leandro Giusti Mugnaini, Bruno Yamamoto, Lucas Lauton de Alcantara, Victor Zacarias, Edson Bollis, Lucas Pellicer, A. H. R. Costa, Artur Jordao
29 Apr 2025
Compression Laws for Large Language Models
Ayan Sengupta, Siddhant Chaudhary, Tanmoy Chakraborty
06 Apr 2025