Predicting Training Time Without Training

28 August 2020
L. Zancato, Alessandro Achille, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto

Papers citing "Predicting Training Time Without Training"

21 papers

PreNeT: Leveraging Computational Features to Predict Deep Neural Network Training Time
Alireza Pourali, Arian Boukani, Hamzeh Khazaei
20 Dec 2024

Predicting the Encoding Error of SIRENs
Jeremy Vonderfecht, Feng Liu
29 Oct 2024 · AI4CE

UnifiedNN: Efficient Neural Network Training on the Cloud
Xingyu Lou, Arthi Padmanabhan, Spyridon Mastorakis
02 Aug 2024 · FedML

Neural Lineage
Runpeng Yu, Xinchao Wang
17 Jun 2024

Diffusion Soup: Model Merging for Text-to-Image Diffusion Models
Benjamin Biggs, Arjun Seshadri, Yang Zou, Achin Jain, Aditya Golatkar, Yusheng Xie, Alessandro Achille, Ashwin Swaminathan, Stefano Soatto
12 Jun 2024 · MoMe, DiffM

Off-the-Shelf Neural Network Architectures for Forex Time Series Prediction come at a Cost
Theodoros Zafeiriou, Dimitris Kalles
17 May 2024 · AI4TS

The fine print on tempered posteriors
Konstantinos Pitas, Julyan Arbel
11 Sep 2023

Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models
Guillermo Ortiz-Jiménez, Alessandro Favero, P. Frossard
22 May 2023 · MoMe

Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation
Noel Loo, Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus
02 Feb 2023 · DD

The Underlying Correlated Dynamics in Neural Training
Rotem Turjeman, Tom Berkov, I. Cohen, Guy Gilboa
18 Dec 2022

PROFET: Profiling-based CNN Training Latency Prophet for GPU Cloud Instances
Sungjae Lee, Y. Hur, Subin Park, Kyungyong Lee
10 Aug 2022

TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels
Yaodong Yu, Alexander Wei, Sai Praneeth Karimireddy, Yi Ma, Michael I. Jordan
13 Jul 2022 · FedML

Cold Posteriors through PAC-Bayes
Konstantinos Pitas, Julyan Arbel
22 Jun 2022

Demystifying the Neural Tangent Kernel from a Practical Perspective: Can it be trusted for Neural Architecture Search without training?
J. Mok, Byunggook Na, Ji-Hoon Kim, Dongyoon Han, Sungroh Yoon
28 Mar 2022 · AAML

Benchmarking Resource Usage for Efficient Distributed Deep Learning
Nathan C. Frey, Baolin Li, Joseph McDonald, Dan Zhao, Michael Jones, David Bestor, Devesh Tiwari, V. Gadepally, S. Samsi
28 Jan 2022

Scaling Neural Tangent Kernels via Sketching and Random Features
A. Zandieh, Insu Han, H. Avron, N. Shoham, Chaewon Kim, Jinwoo Shin
15 Jun 2021

What can linearized neural networks actually say about generalization?
Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard
12 Jun 2021

Random Features for the Neural Tangent Kernel
Insu Han, H. Avron, N. Shoham, Chaewon Kim, Jinwoo Shin
03 Apr 2021

A linearized framework and a new benchmark for model selection for fine-tuning
Aditya Deshpande, Alessandro Achille, Avinash Ravichandran, Hao Li, L. Zancato, Charless C. Fowlkes, Rahul Bhotika, Stefano Soatto, Pietro Perona
29 Jan 2021 · ALM

Estimating informativeness of samples with Smooth Unique Information
Hrayr Harutyunyan, Alessandro Achille, Giovanni Paolini, Orchid Majumder, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto
17 Jan 2021

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016 · ODL