MINI-LLM: Memory-Efficient Structured Pruning for Large Language Models
Hongrong Cheng, Miao Zhang, J. Q. Shi
16 July 2024 · arXiv:2407.11681

Papers citing "MINI-LLM: Memory-Efficient Structured Pruning for Large Language Models"

3 / 3 papers shown
1. Lightweight Safety Classification Using Pruned Language Models
   Mason Sawtell, Tula Masterman, Sandi Besen, Jim Brown
   18 Dec 2024

2. The Power of Scale for Parameter-Efficient Prompt Tuning
   Brian Lester, Rami Al-Rfou, Noah Constant · VPVLM
   18 Apr 2021

3. Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression
   Yawei Li, Shuhang Gu, Christoph Mayer, Luc Van Gool, Radu Timofte
   19 Mar 2020