Measuring the Algorithmic Efficiency of Neural Networks
Danny Hernandez, Tom B. Brown
arXiv:2005.04305 · 8 May 2020

Papers citing "Measuring the Algorithmic Efficiency of Neural Networks"

16 / 16 papers shown
The Graph's Apprentice: Teaching an LLM Low Level Knowledge for Circuit Quality Estimation
Reza Moravej, Saurabh Bodhe, Zhanguang Zhang, Didier Chetelat, Dimitrios Tsaras, Yingxue Zhang, Hui-Ling Zhen, Jianye Hao, M. Yuan
57 · 1 · 0 · 17 Feb 2025

Life-Cycle Emissions of AI Hardware: A Cradle-To-Grave Approach and Generational Trends
Ian Schneider, Hui Xu, Stephan Benecke, David Patterson, Keguo Huang, Parthasarathy Ranganathan, Cooper Elsworth
70 · 2 · 0 · 01 Feb 2025

AI capabilities can be significantly improved without expensive retraining
Tom Davidson, Jean-Stanislas Denain, Pablo Villalobos, Guillem Bas
OffRL, VLM · 24 · 26 · 0 · 12 Dec 2023

Efficiency is Not Enough: A Critical Perspective of Environmentally Sustainable AI
Dustin Wright, Christian Igel, Gabrielle Samuel, Raghavendra Selvan
32 · 15 · 0 · 05 Sep 2023

Criticality versus uniformity in deep neural networks
A. Bukva, Jurriaan de Gier, Kevin T. Grosvenor, R. Jefferson, K. Schalm, Eliot Schwander
31 · 3 · 0 · 10 Apr 2023

Bifrost: End-to-End Evaluation and Optimization of Reconfigurable DNN Accelerators
Axel Stjerngren, Perry Gibson, José Cano
28 · 4 · 0 · 26 Apr 2022

SlimFL: Federated Learning with Superposition Coding over Slimmable Neural Networks
Won Joon Yun, Yunseok Kwak, Hankyul Baek, Soyi Jung, Mingyue Ji, M. Bennis, Jihong Park, Joongheon Kim
18 · 16 · 0 · 26 Mar 2022

Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks
Hankyul Baek, Won Joon Yun, Yunseok Kwak, Soyi Jung, Mingyue Ji, M. Bennis, Jihong Park, Joongheon Kim
FedML · 74 · 21 · 0 · 05 Dec 2021

Automated Essay Scoring Using Transformer Models
Sabrina Ludwig, Christian W. F. Mayer, Christopher Hansen, Kerstin Eilers, Steffen Brandt
19 · 38 · 0 · 13 Oct 2021

SECDA: Efficient Hardware/Software Co-Design of FPGA-based DNN Accelerators for Edge Inference
Jude Haris, Perry Gibson, José Cano, Nicolas Bohm Agostini, David Kaeli
41 · 19 · 0 · 01 Oct 2021

Compute and Energy Consumption Trends in Deep Learning Inference
Radosvet Desislavov, Fernando Martínez-Plumed, José Hernández-Orallo
35 · 113 · 0 · 12 Sep 2021

Greenformers: Improving Computation and Memory Efficiency in Transformer Models via Low-Rank Approximation
Samuel Cahyawijaya
26 · 12 · 0 · 24 Aug 2021

Accelerating Federated Learning with a Global Biased Optimiser
Jed Mills, Jia Hu, Geyong Min, Rui Jin, Siwei Zheng, Jin Wang
FedML, AI4CE · 34 · 9 · 0 · 20 Aug 2021

Efficient and Generic 1D Dilated Convolution Layer for Deep Learning
Narendra Chaudhary, Sanchit Misra, Dhiraj D. Kalamkar, A. Heinecke, E. Georganas, Barukh Ziv, Menachem Adelman, Bharat Kaul
26 · 9 · 0 · 16 Apr 2021

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM · 297 · 6,959 · 0 · 20 Apr 2018

Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He
297 · 10,220 · 0 · 16 Nov 2016