Compressing the Backward Pass of Large-Scale Neural Architectures by Structured Activation Pruning

Daniel Barley, Holger Fröning
arXiv:2311.16883 · 28 November 2023 · AI4CE

Papers citing "Compressing the Backward Pass of Large-Scale Neural Architectures by Structured Activation Pruning" (2 of 2 shown):
Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020

Improving neural networks by preventing co-adaptation of feature detectors
Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
VLM
03 Jul 2012