Energy Considerations for Large Pretrained Neural Networks

Leo Mei, Mark Stamp
2 June 2025

Papers citing "Energy Considerations for Large Pretrained Neural Networks"

15 papers shown

From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference
S. Samsi, Dan Zhao, Joseph McDonald, Baolin Li, Adam Michaleas, Michael Jones, William Bergeron, J. Kepner, Devesh Tiwari, V. Gadepally
04 Oct 2023 (142 citations)

On the Steganographic Capacity of Selected Learning Models
Rishit Agrawal, Kelvin Jou, Tanush Obili, Daksh Parikh, Samarth Prajapati, Yash Seth, Charan Sridhar, Nathan Zhang, Mark Stamp
29 Aug 2023 (1 citation)

Rotational Equilibrium: How Weight Decay Balances Learning Across Neural Networks
Atli Kosson, Bettina Messmer, Martin Jaggi
26 May 2023 (17 citations)

A ConvNet for the 2020s
Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie
10 Jan 2022 (5,171 citations)

Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
28 May 2020 (42,055 citations)

Quantization Networks
Jiwei Yang, Xu Shen, Jun Xing, Xinmei Tian, Houqiang Li, Bing Deng, Jianqiang Huang, Xiansheng Hua
21 Nov 2019 (346 citations)

To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference
Qing Qin, Jie Ren, Jia-Le Yu, Ling Gao, Hai Wang, Jie Zheng, Yansong Feng, Jianbin Fang, Zheng Wang
21 Oct 2018 (22 citations)

Benchmark Analysis of Representative Deep Neural Network Architectures
Simone Bianco, Rémi Cadène, Luigi Celona, Paolo Napoletano
01 Oct 2018 (677 citations)

The Marginal Value of Adaptive Gradient Methods in Machine Learning
Ashia Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, Benjamin Recht
23 May 2017 (1,032 citations)

Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
25 Aug 2016 (36,813 citations)

Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
10 Dec 2015 (194,020 citations)

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
Song Han, Huizi Mao, W. Dally
01 Oct 2015 (8,842 citations)

Going Deeper with Convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, Scott E. Reed, Dragomir Anguelov, D. Erhan, Vincent Vanhoucke, Andrew Rabinovich
17 Sep 2014 (43,658 citations)

Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
04 Sep 2014 (100,386 citations)

Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation
Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, Rob Fergus
02 Apr 2014 (1,689 citations)