NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks

28 October 2024
Yongchang Hao
Yanshuai Cao
Lili Mou

Papers citing "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks"

70% Size, 100% Accuracy: Lossless LLM Compression for Efficient GPU Inference via Dynamic-Length Float
Tianyi Zhang
Yang Sui
Shaochen Zhong
V. Chaudhary
Xia Hu
Anshumali Shrivastava
15 Apr 2025